Compare commits


46 Commits

Author SHA1 Message Date
Michal
705df06996 feat: gated project experience & prompt intelligence
Some checks failed
CI / lint (pull_request) Has been cancelled
CI / typecheck (pull_request) Has been cancelled
CI / test (pull_request) Has been cancelled
CI / build (pull_request) Has been cancelled
CI / package (pull_request) Has been cancelled
Implements the full gated session flow and prompt intelligence system:

- Prisma schema: add gated, priority, summary, chapters, linkTarget fields
- Session gate: state machine (gated → begin_session → ungated) with LLM-powered
  tool selection based on prompt index
- Tag matcher: intelligent prompt-to-tool matching with project/server/action tags
- LLM selector: tiered provider selection (fast for gating, heavy for complex tasks)
- Link resolver: cross-project MCP resource references (project/server:uri format)
- Prompt summary service: LLM-generated summaries and chapter extraction
- System project bootstrap: ensures default project exists on startup
- Structural link health checks: enrichWithLinkStatus on prompt GET endpoints
- CLI: create prompt --priority/--link, create project --gated/--no-gated,
  describe project shows prompts section, get prompts shows PRI/LINK/STATUS
- Apply/edit: priority, linkTarget, gated fields supported
- Shell completions: fish updated with new flags
- 1,253 tests passing across all packages

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 23:22:42 +00:00
62647a7f90 Merge pull request 'fix: per-provider health checks in status display' (#44) from fix/per-provider-health-check into main
2026-02-25 02:25:28 +00:00
Michal
39ca134201 fix: per-provider health checks in /llm/providers and status display
The /llm/providers endpoint now runs isAvailable() on each provider in
parallel and returns health status per provider. The status command shows
✓/✗ per provider based on actual availability, not just the fast tier.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 02:25:06 +00:00
78a1dc9c8a Merge pull request 'feat: tiered LLM providers (fast/heavy)' (#43) from feat/tiered-llm-providers into main
2026-02-25 02:16:29 +00:00
Michal
9ce705608b feat: tiered LLM providers (fast/heavy) with multi-provider config
Adds tier-based LLM routing so fast local models (vLLM, Ollama) handle
structured tasks while cloud models (Gemini, Anthropic) are reserved for
heavy reasoning. Single-provider configs continue to work via fallback.

- Tier type + ProviderRegistry with assignTier/getProvider/fallback chain
- Multi-provider config format: { providers: [{ name, type, tier, ... }] }
- NamedProvider wrapper for multiple instances of same provider type
- Setup wizard: Simple (legacy) / Advanced (fast+heavy tiers) modes
- Status display: tiered view with /llm/providers endpoint
- Call sites use getProvider('fast') instead of getActive()
- Full backward compatibility with existing single-provider configs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 02:16:08 +00:00
Michal
0824f8e635 fix: cache LLM health check result for 10 minutes
Avoids burning tokens on every `mcpctl status` call. The /llm/health
endpoint now caches successful results for 10min, errors for 1min.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 01:39:15 +00:00
Michal
9bd3127519 fix: warmup ACP subprocess eagerly to avoid 30s cold-start on status
The pool refactor made ACP client creation lazy, causing the first
/llm/health call to spawn + initialize + prompt Gemini in one request
(30s+). Now warmup() eagerly starts the subprocess on mcplocal boot.
Also fetch models in parallel with LLM health check.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 01:37:30 +00:00
e8ac500ae9 Merge pull request 'feat: per-project LLM models, ACP session pool, smart pagination tests' (#42) from feat/per-project-llm-pagination-tests into main
2026-02-25 01:29:56 +00:00
Michal
bed725b387 feat: per-project LLM models, ACP session pool, smart pagination tests
- ACP session pool with per-model subprocesses and 8h idle eviction
- Per-project LLM config: local override → mcpd recommendation → global default
- Model override support in ResponsePaginator
- /llm/models endpoint + available models in mcpctl status
- Remove --llm-provider/--llm-model from create project (use edit/apply)
- 8 new smart pagination integration tests (e2e flow)
- 260 mcplocal tests, 330 CLI tests passing

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 01:29:38 +00:00
17a456d835 Merge pull request 'feat: completions update, create promptrequest, LLM flag rename, ACP content fix' (#41) from feat/completions-llm-flags-promptrequest into main
2026-02-25 00:21:51 +00:00
Michal
9481d394a1 feat: completions update, create promptrequest, LLM flag rename, ACP content fix
- Add prompts/promptrequests to shell completions (fish + bash)
- Add approve, setup, prompt, promptrequest commands to completions
- Add `create promptrequest` CLI command (POST /projects/:name/promptrequests)
- Rename --proxy-mode-llm-provider/model to --llm-provider/model
- Fix ACP client: handle single-object content format from real Gemini
- Add tests for single-object content and agent_thought_chunk filtering

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 00:21:31 +00:00
Michal
bc769c4eeb fix: LLM health check via mcplocal instead of spawning gemini directly
Status command now queries mcplocal's /llm/health endpoint instead of
spawning the gemini binary. This uses the persistent ACP connection
(fast) and works for any configured provider, not just gemini-cli.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 00:03:25 +00:00
6f534c8ba9 Merge pull request 'feat: persistent Gemini ACP provider + status spinner' (#40) from feat/gemini-acp-provider into main
2026-02-24 23:52:31 +00:00
Michal
11da8b1fbf feat: persistent Gemini ACP provider + status spinner
Replace per-call gemini CLI spawning (~10s cold start each time) with
persistent ACP (Agent Client Protocol) subprocess. First call absorbs
the cold start, subsequent calls are near-instant over JSON-RPC stdio.

- Add AcpClient: manages persistent gemini --experimental-acp subprocess
  with lazy init, auto-restart on crash/timeout, NDJSON framing
- Add GeminiAcpProvider: LlmProvider wrapper with serial queue for
  concurrent calls, same interface as GeminiCliProvider
- Add dispose() to LlmProvider interface + disposeAll() to registry
- Wire provider disposal into mcplocal shutdown handler
- Add status command spinner with progressive output and color-coded
  LLM health check results (green checkmark/red cross)
- 25 new tests (17 ACP client + 8 provider)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:52:04 +00:00
Michal
848868d45f feat: auto-detect gemini binary path, LLM health check in status
- Setup wizard auto-detects gemini binary via `which`, saves full path
  so systemd service can find it without user PATH
- `mcpctl status` tests LLM provider health (gemini: quick prompt test,
  ollama: health check, API providers: key stored confirmation)
- Shows error details inline: "gemini-cli / gemini-2.5-flash (not authenticated)"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:24:31 +00:00
Michal
869217a07a fix: exactOptionalPropertyTypes and ResponsePaginator type errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:15:15 +00:00
04d115933b Merge pull request 'feat: LLM provider configuration, secret store, and setup wizard' (#39) from feat/llm-config-and-secrets into main
2026-02-24 22:48:39 +00:00
Michal
7c23da10c6 feat: LLM provider configuration, secret store, and setup wizard
Add secure credential storage (GNOME Keyring + file fallback),
LLM provider config in ~/.mcpctl/config.json, interactive setup
wizard (mcpctl config setup), and wire configured provider into
mcplocal for smart pagination summaries.

- Secret store: SecretStore interface, GnomeKeyringStore, FileSecretStore
- Config schema: LlmConfigSchema with provider/model/url/binaryPath
- Setup wizard: arrow-key provider/model selection, dynamic model fetch
- Provider factory: creates ProviderRegistry from config + secrets
- Status: shows LLM line with hint when not configured
- 572 tests passing across all packages

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 22:48:17 +00:00
32b4de4343 Merge pull request 'feat: smart response pagination for large MCP tool results' (#38) from feat/response-pagination into main
2026-02-24 21:40:53 +00:00
Michal
e06db9afba feat: smart response pagination for large MCP tool results
Intercepts oversized tool responses (>80K chars), caches them, and returns
a page index. LLM can fetch specific pages via _resultId/_page params.
Supports LLM-generated smart summaries with simple fallback.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 21:40:33 +00:00
Michal
a25809b84a fix: auto-read user credentials for mcpd auth
mcplocal now reads ~/.mcpctl/credentials automatically when
MCPLOCAL_MCPD_TOKEN env var is not set, matching CLI behavior.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 19:14:56 +00:00
f5a902d3e0 Merge pull request 'fix: STDIO transport stdout flush and MCP notification handling' (#37) from fix/stdio-flush-and-notifications into main
2026-02-24 19:10:03 +00:00
Michal
9cb0c5ce24 fix: STDIO transport stdout flush and MCP notification handling
- Wait for stdout.write callback before process.exit in STDIO transport
  to prevent truncation of large responses (e.g. grafana tools/list)
- Handle MCP notification methods (notifications/initialized, etc.) in
  router instead of returning "Method not found" error
- Use -p shorthand in config claude output

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 19:09:47 +00:00
06230ec034 Merge pull request 'feat: prompt resources, proxy transport fix, enriched descriptions' (#36) from feat/prompt-resources-and-proxy-transport into main
2026-02-24 14:53:24 +00:00
Michal
079c7b3dfa feat: add prompt resources, fix MCP proxy transport, enrich tool descriptions
- Fix MCP proxy to support SSE and STDIO transports (not just HTTP POST)
- Enrich tool descriptions with server context for LLM clarity
- Add Prompt and PromptRequest resources with two-resource RBAC model
- Add propose_prompt MCP tool for LLM to create pending prompt requests
- Add prompt resources visible in MCP resources/list (approved + session's pending)
- Add project-level prompt/instructions in MCP initialize response
- Add ServiceAccount subject type for RBAC (SA identity from X-Service-Account header)
- Add CLI commands: create prompt, get prompts/promptrequests, approve promptrequest
- Add prompts to apply config schema
- 956 tests passing across all packages

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 14:53:00 +00:00
Michal
7829f4fb92 fix: handle SSE responses in MCP bridge and add Commander-level tests
The bridge now parses SSE text/event-stream responses (extracting data:
lines) in addition to plain JSON. Also sends correct Accept header
per MCP streamable HTTP spec. Added tests for SSE handling and
command option parsing (-p/--project).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 10:17:45 +00:00
Michal
fa6240107f fix: mcp command accepts --project directly for Claude spawned processes
The mcp subcommand now has its own -p/--project option with
passThroughOptions(), so `mcpctl mcp --project NAME` works when Claude
spawns the process. Updated config claude to generate
args: ['mcp', '--project', project] and added Commander-level tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 10:14:16 +00:00
b34ea63d3d Merge pull request 'feat: add mcpctl mcp STDIO bridge, rework config claude' (#35) from feat/mcp-stdio-bridge into main
2026-02-24 00:52:21 +00:00
Michal
e17a2282e8 feat: add mcpctl mcp STDIO bridge, rework config claude
- New `mcpctl mcp -p PROJECT` command: STDIO-to-StreamableHTTP bridge
  that reads JSON-RPC from stdin and forwards to mcplocal project endpoint
- Rework `config claude` to write mcpctl mcp entry instead of fetching
  server configs from API (no secrets in .mcp.json)
- Keep `config claude-generate` as backward-compat alias
- Fix discovery.ts auth token not being forwarded to mcpd (RBAC bypass)
- Update fish/bash completions for new commands
- 10 new MCP bridge tests, updated claude tests, fixed project-discovery test

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 00:52:05 +00:00
01d3c4e02d Merge pull request 'fix: don't send Content-Type on bodyless DELETE, include full server data in project queries' (#34) from fix/delete-content-type-and-project-servers into main
2026-02-23 19:55:35 +00:00
Michal
e4affe5962 fix: don't send Content-Type on bodyless DELETE, include full server data in project queries
- Only set Content-Type: application/json when request body is present (fixes
  Fastify rejecting empty DELETE with "Body cannot be empty" 400 error)
- Changed PROJECT_INCLUDE to return full server objects instead of just {id, name}
  so project server listings show transport, package, image columns

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:54:34 +00:00
c75e7cdf4d Merge pull request 'fix: prevent attach/detach-server from repeating server arg on tab' (#33) from fix/completion-no-repeat-server-arg into main
2026-02-23 19:36:53 +00:00
Michal
65c340a03c fix: prevent attach/detach-server from repeating server arg on tab
Added __mcpctl_needs_server_arg guard in fish and position check in
bash so completions stop after one server name is selected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:36:45 +00:00
677d34b868 Merge pull request 'fix: instance completions use server.name, smart attach/detach' (#32) from fix/completion-instances-attach-detach into main
2026-02-23 19:32:34 +00:00
Michal
c5b8cb60b7 fix: instance completions use server.name, smart attach/detach
- Instances have no name field — use server.name for completions
- attach-server: show only servers NOT in the project
- detach-server: show only servers IN the project
- Add helper functions for project-aware server completion
- 5 new tests covering all three fixes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:32:18 +00:00
9a5deffb8f Merge pull request 'fix: use .[][].name in jq for wrapped JSON response' (#31) from fix/completion-jq-wrapped-json into main
2026-02-23 19:27:02 +00:00
Michal
ec7ada5383 fix: use .[][].name in jq for wrapped JSON response
API returns { "resources": [...] } not bare arrays, so .[].name
produced no output. Use .[][].name to unwrap the outer object first.
Also auto-load .env in pr.sh.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:26:47 +00:00
b81d3be2d5 Merge pull request 'fix: use jq for completion name extraction to avoid nested matches' (#30) from fix/completion-nested-names into main
2026-02-23 19:23:48 +00:00
Michal
e2c54bfc5c fix: use jq for completion name extraction to avoid nested matches
The regex "name":\s*"..." on JSON matched nested server names inside
project objects, mixing resource types in completions. Switch to
jq -r '.[].name' for proper top-level extraction. Add jq as RPM
dependency. Add pr.sh for PR creation via Gitea API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:23:21 +00:00
7b7854b007 Merge pull request 'feat: erase stale fish completions and add completion tests' (#29) from feat/completions-stale-erase-and-tests into main
2026-02-23 19:17:00 +00:00
Michal
f23dd99662 feat: erase stale fish completions and add completion tests
Fish completions are additive — sourcing a new file doesn't remove old
rules. Add `complete -c mcpctl -e` at the top to clear stale entries.
Also add 12 structural tests to prevent completion regressions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:16:36 +00:00
43af85cb58 Merge pull request 'feat: context-aware completions with dynamic resource names' (#28) from feat/completions-project-scope-dynamic into main
2026-02-23 19:08:45 +00:00
Michal
6d2e3c2eb3 feat: context-aware completions with dynamic resource names
- Hide attach-server/detach-server from --help (only relevant with --project)
- --project shows only project-scoped commands in tab completion
- Tab after resource type fetches live resource names from API
- --project value auto-completes from existing project names
- Stop offering resource types after one is already selected

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:08:29 +00:00
ce21db3853 Merge pull request 'feat: --project scopes get servers/instances' (#27) from feat/project-scoped-get into main
2026-02-23 19:03:23 +00:00
Michal
767725023e feat: --project flag scopes get servers/instances to project
mcpctl --project NAME get servers — shows only servers attached to the project
mcpctl --project NAME get instances — shows only instances of project servers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:03:07 +00:00
2bd1b55fe8 Merge pull request 'feat: add tests.sh runner and project routes tests' (#26) from feat/tests-sh-and-project-routes-tests into main
2026-02-23 18:58:06 +00:00
103 changed files with 12957 additions and 317 deletions


@@ -0,0 +1,392 @@
# PRD: Gated Project Experience & Prompt Intelligence
## Overview
When 300 developers connect their LLM clients (Claude Code, Cursor, etc.) to mcpctl projects, they need relevant context — security policies, architecture decisions, operational runbooks — without flooding the context window. This feature introduces a gated session flow where the client LLM drives its own context retrieval through keyword-based matching, with the proxy providing a prompt index and encouraging ongoing discovery.
## Problem
- Injecting all prompts into instructions doesn't scale (hundreds of pages of policies)
- Exposing prompts only as MCP resources means LLMs never read them
- An index-only approach works for small numbers but breaks down at scale
- No mechanism to link external knowledge (Notion, Docmost) as prompts
- LLMs tend to work with whatever they have rather than proactively seek more context
## Core Concepts
### Gated Experience
A project-level flag (`gated: boolean`, default: `true`) that controls whether sessions go through a keyword-driven prompt retrieval flow before accessing project tools and resources.
**Flow (A + C):**
1. On `initialize`, instructions include the **prompt index** (names + summaries for all prompts, up to a reasonable cap) and tell client LLM: "Call `begin_session` with 5 keywords describing your task"
2. **If client obeys**: `begin_session({ tags: ["zigbee", "lights", "mqtt", "pairing", "automation"] })` → prompt selection (see below) → returns matched prompt content + full prompt index + encouragement to retrieve more → session ungated
3. **If client ignores**: First `tools/call` is intercepted → keywords extracted from tool name + arguments → same prompt selection → briefing injected alongside tool result → session ungated
4. **Ongoing retrieval**: Client can call `read_prompts({ tags: ["security", "vpn"] })` at any point to retrieve more prompts. The prompt index is always visible so the client LLM can see what's available.
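The gate portion of the flow above can be sketched in a few lines. This is an illustrative sketch only — the names (`GateSession`, `beginSession`, `interceptFirstToolCall`) are assumptions, not the actual implementation:

```typescript
// Illustrative sketch of the session gate; both paths converge on the
// same outcome: keywords go to prompt selection and the session ungates.
type GateState = "gated" | "ungated";

interface GateSession {
  state: GateState;
}

// Step 2: the client obeys and calls begin_session with task keywords.
function beginSession(session: GateSession, tags: string[]): string[] {
  session.state = "ungated";
  return tags; // handed to prompt selection
}

// Step 3: the client ignores the gate, so keywords are extracted from
// the first tool call and the briefing rides along with the tool result.
function interceptFirstToolCall(
  session: GateSession,
  toolName: string,
  args: Record<string, unknown>,
): string[] {
  const tags = [toolName, ...Object.values(args).map(String)];
  session.state = "ungated";
  return tags;
}
```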
**Prompt selection — tiered approach:**
- **Primary (heavy LLM available)**: Tags + full prompt index (names, priorities, summaries, chapters) are sent to the heavy LLM (e.g. Gemini). The LLM understands synonyms, context, and intent — it knows "zigbee" relates to "Z2M" and "Zigbee2MQTT", and that someone working on "lights" probably needs the "common-mistakes" prompt about pairing. The LLM returns a ranked list of relevant prompt names with brief explanations of why each is relevant. The heavy LLM may use the fast LLM for preprocessing if needed (e.g. generating missing summaries on the fly).
- **Fallback (no LLM, or `llmProvider=none`)**: Deterministic keyword-based tag matching against summaries/chapters with byte-budget allocation (see "Tag Matching Algorithm" below). Same approach as ResponsePaginator's byte-based fallback. Triggered when: no LLM providers configured, project has `llmProvider: "none"`, or local override sets `provider: "none"`.
- **Hybrid (both paths always available)**: Even when heavy LLM does the initial selection, the `read_prompts({ tags: [...] })` tool always uses keyword matching. This way the client LLM can retrieve specific prompts by keyword that the heavy LLM may have missed. The LLM is smart about context, keywords are precise about names — together they cover both fuzzy and exact retrieval.
**LLM availability resolution** (same chain as existing LLM features):
- Project `llmProvider: "none"` → no LLM, keyword fallback only
- Project `llmProvider: null` → inherit from global config
- Local override `provider: "none"` → no LLM, keyword fallback only
- No providers configured → keyword fallback only
- Otherwise → use heavy LLM for `begin_session`, fast LLM for summary generation
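As a hedged sketch, the chain above reduces to a short resolution function. Field names here are assumptions for illustration; `null` means "inherit":

```typescript
// Sketch of the LLM availability chain: "heavy" is used for
// begin_session selection; the keyword fallback is "none".
type Tier = "none" | "heavy";

interface LlmResolutionInput {
  projectProvider: string | null; // "none", a provider name, or null = inherit
  localOverride: string | null;   // e.g. "none" to force the keyword fallback
  providersConfigured: boolean;   // anything in the global config?
}

function resolveGateTier(input: LlmResolutionInput): Tier {
  if (input.projectProvider === "none") return "none"; // keyword fallback only
  if (input.localOverride === "none") return "none";
  if (!input.providersConfigured) return "none";
  return "heavy"; // projectProvider null inherits from global config
}
```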
### Encouraging Retrieval
LLMs tend to proceed with incomplete information rather than seek more context. The system must actively counter this at multiple points:
**In `initialize` instructions:**
```
You have access to project knowledge containing policies, architecture decisions,
and guidelines. Some may contain critical rules about what you're doing. After your
initial briefing, if you're unsure about conventions, security requirements, or
best practices — request more context using read_prompts. It's always better to
check than to guess wrong. The project may have specific rules you don't know about yet.
```
**In `begin_session` response (after matched prompts):**
```
Other prompts available that may become relevant as your work progresses:
- security-policies: Network segmentation, firewall rules, VPN access
- naming-conventions: Service and resource naming standards
- ...
If any of these seem related to what you're doing now or later, request them
with read_prompts({ tags: [...] }) or resources/read. Don't assume you have
all the context — check when in doubt.
```
**In `read_prompts` response:**
```
Remember: you can request more prompts at any time with read_prompts({ tags: [...] }).
The project may have additional guidelines relevant to your current approach.
```
The tone is not "here's optional reading" but "there are rules you might not know about, and violating them costs more than reading them."
### Prompt Priority (1-10)
Every prompt has a priority level that influences selection order and byte-budget allocation:
| Range | Meaning | Behavior |
|-------|---------|----------|
| 1-3 | Reference | Low priority, included only on strong keyword match |
| 4-6 | Standard | Default priority, included on moderate keyword match |
| 7-9 | Important | High priority, lower match threshold |
| 10 | Critical | Always included in full, regardless of keyword match (guardrails, common mistakes) |
Default priority for new prompts: `5`.
### Prompt Summaries & Chapters (Auto-generated)
Each prompt gets auto-generated metadata used for the prompt index and tag matching:
- `summary` (string, ~20 words) — one-line description of what the prompt covers
- `chapters` (string[]) — key sections/topics extracted from content
Generation pipeline:
- **Fast LLM available**: Summarize content, extract key topics
- **No fast LLM**: First sentence of content + markdown headings via regex
- Regenerated on prompt create/update
- Cached on the prompt record
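The no-LLM branch of the pipeline can be sketched as follows (a minimal sketch; `summarizeWithRegex` is an assumed name, and the spec only fixes the behavior — first sentence plus markdown headings):

```typescript
// No-LLM fallback sketch: first sentence as summary, markdown headings
// as chapters. The function name is illustrative.
function summarizeWithRegex(content: string): { summary: string; chapters: string[] } {
  // Summary: first sentence, capped at roughly 20 words.
  const firstSentence = content.split(/(?<=[.!?])\s/)[0] ?? "";
  const summary = firstSentence.split(/\s+/).slice(0, 20).join(" ");
  // Chapters: every markdown heading, stripped of the leading hashes.
  const chapters = [...content.matchAll(/^#+\s+(.+)$/gm)].map((m) => m[1].trim());
  return { summary, chapters };
}
```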
### Tag Matching Algorithm (No-LLM Fallback)
When no local LLM is available, the system falls back to a deterministic retrieval algorithm:
1. Client provides tags (5 keywords from `begin_session`, or extracted from tool call)
2. For each prompt, compute a match score:
- Check tags against prompt `summary` and `chapters` (case-insensitive substring match)
- Score = `number_of_matching_tags * base_priority`
- Priority 10 prompts: score = infinity (always included)
3. Sort by score descending
4. Fill a byte budget (configurable, default ~8KB) from top down:
- Include full content until budget exhausted
- Remaining matched prompts: include as index entries (name + summary)
- Non-matched prompts: listed as names only in the "other prompts available" section
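Steps 1–4 can be sketched like this. It is a sketch under the spec's rules (substring match, score = matches × priority, priority 10 = infinity, default ~8KB budget); the type and function names are assumptions:

```typescript
// Sketch of the deterministic fallback; names are illustrative.
interface PromptEntry {
  name: string;
  priority: number; // 1-10
  summary: string;
  chapters: string[];
  content: string;
}

function score(tags: string[], p: PromptEntry): number {
  if (p.priority === 10) return Infinity; // critical prompts always included
  const haystack = (p.summary + " " + p.chapters.join(" ")).toLowerCase();
  const hits = tags.filter((t) => haystack.includes(t.toLowerCase())).length;
  return hits * p.priority;
}

function selectPrompts(tags: string[], index: PromptEntry[], budget = 8192) {
  const ranked = index
    .map((p) => ({ p, s: score(tags, p) }))
    .sort((a, b) => b.s - a.s);
  const full: string[] = [];  // full content, within budget
  const brief: string[] = []; // matched but over budget: index entries
  const rest: string[] = [];  // unmatched: names only
  let used = 0;
  for (const { p, s } of ranked) {
    if (s === Infinity || (s > 0 && used + p.content.length <= budget)) {
      full.push(p.name);
      used += p.content.length;
    } else if (s > 0) {
      brief.push(p.name);
    } else {
      rest.push(p.name);
    }
  }
  return { full, brief, rest };
}
```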
**When `begin_session` is skipped (intercept path):**
- Extract keywords from tool name + arguments (e.g., `home-assistant/get_entities({ domain: "light" })` → tags: `["home-assistant", "entities", "light"]`)
- Run same matching algorithm
- Inject briefing alongside the real tool result
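The keyword extraction for the intercept path might look like the sketch below. The stopword list and length limits are assumptions; only the example mapping (`home-assistant/get_entities({ domain: "light" })` → `["home-assistant", "entities", "light"]`) comes from the spec:

```typescript
// Sketch of intercept-path keyword extraction; limits are illustrative.
const STOP = new Set(["get", "set", "list", "read", "call"]);

function extractTags(toolName: string, args: Record<string, unknown>): string[] {
  // "home-assistant/get_entities" -> ["home-assistant", "entities"]
  const fromName = toolName.split(/[\/_]+/).filter((w) => w && !STOP.has(w));
  // Keep short string argument values as candidate tags.
  const fromArgs = Object.values(args).filter(
    (v): v is string => typeof v === "string" && v.length <= 32
  );
  return [...new Set([...fromName, ...fromArgs].map((t) => t.toLowerCase()))];
}
```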
### `read_prompts` Tool (Ongoing Retrieval)
Available after session is ungated. Allows the client LLM to request more context at any point:
```json
{
"name": "read_prompts",
"description": "Request additional project context by keywords. Use this whenever you need guidelines, policies, or conventions related to your current work. It's better to check than to guess.",
"inputSchema": {
"type": "object",
"properties": {
"tags": {
"type": "array",
"items": { "type": "string" },
"description": "Keywords describing what context you need (e.g. [\"security\", \"vpn\", \"firewall\"])"
}
},
"required": ["tags"]
}
}
```
Returns matched prompt content + the prompt index reminder.
### Prompt Links
A prompt can be a **link** to an MCP resource in another project's server. The linked content is fetched server-side (by the proxy, not the client), enforcing RBAC.
Format: `project/server:resource-uri`
Example: `system-public/docmost-mcp:docmost://pages/architecture-overview`
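The format can be parsed by splitting at the first `:` after the `project/server` pair, since the resource URI itself may contain colons. A minimal sketch (the function name is an assumption):

```typescript
// Sketch of link-target parsing for "project/server:resource-uri".
interface LinkTarget { project: string; server: string; uri: string }

function parseLinkTarget(raw: string): LinkTarget | null {
  // project and server are lowercase slugs; the URI is everything after
  // the first ":" that follows them (it may contain further colons).
  const m = /^([a-z0-9-]+)\/([a-z0-9-]+):(\S+)$/.exec(raw);
  return m ? { project: m[1], server: m[2], uri: m[3] } : null;
}
```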
Properties:
- The proxy fetches linked content using the source project's service account
- Client LLM never gets direct access to the source MCP server
- Dead links are detected and marked (health check on link resolution)
- Dead links generate error log entries
RBAC for links:
- Creating a link requires `edit` permission on RBAC in the target project
- A service account permission is created on the source project for the linked resource
- Default: admin group members can manage links
## Schema Changes
### Project
Add field:
- `gated: boolean` (default: `true`)
### Prompt
Add fields:
- `priority: integer` (1-10, default: 5)
- `summary: string | null` (auto-generated)
- `chapters: string[] | null` (auto-generated, stored as JSON)
- `linkTarget: string | null` (format: `project/server:resource-uri`, null for regular prompts)
### PromptRequest
Add field:
- `priority: integer` (1-10, default: 5)
## API Changes
### Modified Endpoints
- `POST /api/v1/prompts` — accept `priority`, `linkTarget`
- `PUT /api/v1/prompts/:id` — accept `priority` (not `linkTarget` — links are immutable, delete and recreate)
- `POST /api/v1/promptrequests` — accept `priority`
- `GET /api/v1/prompts` — return `priority`, `summary`, `linkTarget`, `linkStatus` (alive/dead/unknown)
- `GET /api/v1/projects/:name/prompts/visible` — return `priority`, `summary`, `chapters`
### New Endpoints
- `POST /api/v1/prompts/:id/regenerate-summary` — force re-generation of summary/chapters
- `GET /api/v1/projects/:name/prompt-index` — returns compact index (name, priority, summary, chapters)
## MCP Protocol Changes (mcplocal router)
### Session State
Router tracks per-session state:
- `gated: boolean` — starts `true` if project is gated
- `tags: string[]` — accumulated tags from begin_session + read_prompts calls
- `retrievedPrompts: Set<string>` — prompts already sent to client (avoid re-sending)
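The three fields above can be held in a small per-session store. The field names come from the spec; the `SessionStore` class shape is an illustrative assumption:

```typescript
// Sketch of per-session gating state; class shape is illustrative.
interface SessionState {
  gated: boolean;                // starts true if project is gated
  tags: string[];                // accumulated from begin_session + read_prompts
  retrievedPrompts: Set<string>; // prompts already sent (avoid re-sending)
}

class SessionStore {
  private states = new Map<string, SessionState>();

  constructor(private projectGated: boolean) {}

  get(sessionId: string): SessionState {
    let s = this.states.get(sessionId);
    if (!s) {
      s = { gated: this.projectGated, tags: [], retrievedPrompts: new Set() };
      this.states.set(sessionId, s);
    }
    return s;
  }

  ungate(sessionId: string): void {
    this.get(sessionId).gated = false;
  }
}
```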
### Gated Session Flow
1. On `initialize`: instructions include prompt index + gate message + retrieval encouragement
2. `tools/list` while gated: only `begin_session` visible (progressive tool exposure)
3. `begin_session({ tags })`: match tags → return briefing + prompt index + encouragement → ungate → send `notifications/tools/list_changed`
4. On first `tools/call` while still gated: extract keywords → match → inject briefing alongside result → ungate
5. After ungating: all tools work normally, `read_prompts` available for ongoing retrieval
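Steps 3–5 amount to a small dispatch on the gate flag. The sketch below follows the intercept behavior described in step 4; helpers like `buildBriefing` and the overall shape are assumptions:

```typescript
// Sketch of the gate dispatch for tools/call; names are illustrative.
type ToolResult = { content: string };

function handleToolCall(
  state: { gated: boolean },
  toolName: string,
  callTool: () => ToolResult,     // invoke the real upstream tool
  buildBriefing: () => string     // tag-matched briefing text
): ToolResult {
  if (!state.gated) return callTool(); // normal path after ungating
  if (toolName === "begin_session") {
    state.gated = false; // ungate, then emit notifications/tools/list_changed
    return { content: buildBriefing() };
  }
  // Intercept path: run the real tool, inject the briefing alongside.
  const result = callTool();
  state.gated = false;
  return { content: buildBriefing() + "\n\n" + result.content };
}
```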
### `begin_session` Tool
```json
{
"name": "begin_session",
"description": "Start your session by providing 5 keywords that describe your current task. You'll receive relevant project context, policies, and guidelines. Required before using other tools.",
"inputSchema": {
"type": "object",
"properties": {
"tags": {
"type": "array",
"items": { "type": "string" },
"maxItems": 10,
"description": "5 keywords describing your current task (e.g. [\"zigbee\", \"automation\", \"lights\", \"mqtt\", \"pairing\"])"
}
},
"required": ["tags"]
}
}
```
Response structure:
```
[Priority 10 prompts — always, full content]
[Tag-matched prompts — full content, byte-budget-capped, priority-ordered]
Other prompts available that may become relevant as your work progresses:
- <name>: <summary>
- <name>: <summary>
- ...
If any of these seem related to what you're doing, request them with
read_prompts({ tags: [...] }). Don't assume you have all the context — check.
```
### Prompt Index in Instructions
The `initialize` instructions include a compact prompt index so the client LLM can see what knowledge exists. Format per prompt: `- <name>: <summary>` (~100 chars max per entry).
Cap: if more than 50 prompts, include only priority 7+ in instructions index. Full index always available via `resources/list`.
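Building the capped index is straightforward; a sketch under the rules above (entry format, ~100-char cap, 50-prompt threshold from the spec; the function name is an assumption):

```typescript
// Sketch of the instructions prompt index with the 50-prompt cap.
interface IndexEntry { name: string; priority: number; summary: string }

function buildInstructionsIndex(prompts: IndexEntry[]): string {
  // Over 50 prompts: only priority 7+ appear in the instructions index.
  const visible = prompts.length > 50
    ? prompts.filter((p) => p.priority >= 7)
    : prompts;
  // One "- <name>: <summary>" line per prompt, ~100 chars max.
  return visible
    .map((p) => `- ${p.name}: ${p.summary}`.slice(0, 100))
    .join("\n");
}
```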
## CLI Changes
### New/Modified Commands
- `mcpctl create prompt <name> --priority <1-10>` — create with priority
- `mcpctl create prompt <name> --link <project/server:uri>` — create linked prompt
- `mcpctl get prompt -A` — show all prompts across all projects, with link targets
- `mcpctl describe project <name>` — show gated status, session greeting, prompt table
- `mcpctl edit project <name>` — `gated` field editable
### Prompt Link Display
```
$ mcpctl get prompt -A
PROJECT NAME PRIORITY LINK STATUS
homeautomation security-policies 8 - -
homeautomation architecture-adr 6 system-public/docmost-mcp:docmost://pages/a1 alive
homeautomation common-mistakes 10 - -
system-public onboarding 4 - -
```
## Describe Project Output
```
$ mcpctl describe project homeautomation
Name: homeautomation
Gated: true
LLM Provider: gemini-cli
...
Session greeting:
You have access to project knowledge containing policies, architecture decisions,
and guidelines. Call begin_session with 5 keywords describing your task to receive
relevant context. Some prompts contain critical rules — it's better to check than guess.
Prompts:
NAME PRIORITY TYPE LINK
common-mistakes 10 local -
security-policies 8 local -
architecture-adr 6 link system-public/docmost-mcp:docmost://pages/a1
stack 5 local -
```
## Testing Strategy
**Full test coverage is required.** Every new module, service, route, and algorithm must have comprehensive tests. No feature ships without tests.
### Unit Tests (mcpd)
- Prompt priority CRUD: create/update/get with priority field, default value, validation (1-10 range)
- Prompt link CRUD: create with linkTarget, immutability (can't update linkTarget), delete
- Prompt summary generation: auto-generation on create/update, regex fallback when no LLM
- `GET /api/v1/prompts` with priority, linkTarget, linkStatus fields
- `GET /api/v1/projects/:name/prompt-index` returns compact index
- `POST /api/v1/prompts/:id/regenerate-summary` triggers re-generation
- Project `gated` field: CRUD, default value
### Unit Tests (mcplocal — gating flow)
- State machine: gated → `begin_session` → ungated (happy path)
- State machine: gated → `tools/call` intercepted → ungated (fallback path)
- State machine: non-gated project skips gate entirely
- LLM selection path: tags + prompt index sent to heavy LLM, ranked results returned, priority 10 always included
- LLM selection path: heavy LLM uses fast LLM for missing summary generation
- No-LLM fallback: tag matching score calculation, priority weighting, substring matching
- No-LLM fallback: byte-budget exhaustion, priority ordering, index fallback, edge cases
- Keyword extraction from tool calls: tool name parsing, argument extraction
- `begin_session` response: matched content + index + encouragement text (both LLM and fallback paths)
- `read_prompts` response: additional matches, deduplication against already-sent prompts (both paths)
- Tools blocked while gated: return error directing to `begin_session`
- `tools/list` while gated: only `begin_session` visible
- `tools/list` after ungating: `begin_session` replaced by `read_prompts` + all upstream tools
- Priority 10 always included regardless of tag match or budget
- Prompt index in instructions: cap at 50, priority 7+ when over cap
- Notifications: `tools/list_changed` sent after ungating
### Unit Tests (mcplocal — prompt links)
- Link resolution: fetch content from source project's MCP server via service account
- Dead link detection: source server unavailable, resource not found, permission denied
- Dead link marking: status field updated, error logged
- RBAC enforcement: link creation requires edit permission on target project RBAC
- Service account permission: auto-created on source project for linked resource
- Content isolation: client LLM cannot access source server directly
### Unit Tests (CLI)
- `create prompt` with `--priority` flag, validation
- `create prompt` with `--link` flag, format validation
- `get prompt -A` output: all projects, link targets, status columns
- `describe project` output: gated status, session greeting, prompt table
- `edit project` with gated field
- Shell completions for new flags and resources
### Integration Tests
- End-to-end gated session: connect → begin_session with tags → tools available → correct prompts returned
- End-to-end intercept: connect → skip begin_session → call tool → keywords extracted → briefing injected
- End-to-end read_prompts: after ungating → request more context → additional prompts returned → no duplicates
- Prompt link resolution: create link → fetch content → verify content matches source
- Dead link lifecycle: create link → kill source → verify dead detection → restore → verify recovery
- Priority ordering: create prompts at various priorities → verify selection order and budget allocation
- Encouragement text: verify retrieval encouragement present in begin_session, read_prompts, and instructions
## System Prompts (mcpctl-system project)
All gate messages, encouragement text, and briefing templates are stored as prompts in a special `mcpctl-system` project. This makes them editable at runtime via `mcpctl edit prompt` without code changes or redeployment.
### Required System Prompts
| Name | Priority | Purpose |
|------|----------|---------|
| `gate-instructions` | 10 | Text injected into `initialize` instructions for gated projects. Tells client to call `begin_session` with 5 keywords. |
| `gate-encouragement` | 10 | Appended after `begin_session` response. Lists remaining prompts and encourages further retrieval. |
| `read-prompts-reminder` | 10 | Appended after `read_prompts` response. Reminds client that more context is available. |
| `gate-intercept-preamble` | 10 | Prepended to briefing when injected via tool call intercept (Option C fallback). |
| `session-greeting` | 10 | Shown in `mcpctl describe project` as the "hello prompt" — what client LLMs see on connect. |
### Bootstrap
The `mcpctl-system` project and its system prompts are created automatically on first startup (seed migration). They can be edited afterward but not deleted — delete attempts return an error.
### How mcplocal Uses Them
On router initialization, mcplocal fetches system prompts from mcpd via:
```
GET /api/v1/projects/mcpctl-system/prompts/visible
```
These are cached with the same 60s TTL as project routers. The prompt content supports template variables:
- `{{prompt_index}}` — replaced with the current project's prompt index
- `{{project_name}}` — replaced with the current project name
- `{{matched_prompts}}` — replaced with tag-matched prompt content
- `{{remaining_prompts}}` — replaced with the list of non-matched prompts
This way the encouragement text, tone, and structure can be tuned by editing prompts — no code changes needed.
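The substitution itself can be a one-liner over the `{{variable}}` syntax. The variable names come from the spec; the renderer below is a minimal sketch:

```typescript
// Sketch of {{variable}} substitution for system prompt templates.
function renderTemplate(template: string, vars: Record<string, string>): string {
  // Replace known variables; leave unknown placeholders untouched so a
  // typo in a template is visible rather than silently dropped.
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? vars[key] : match
  );
}
```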
## Security Considerations
- Prompt links: content fetched server-side, client never gets direct access to source MCP server
- RBAC: link creation requires edit permission on target project's RBAC
- Service account: source project grants read access to linked resource only
- Dead links: logged as errors, marked in listings, never expose source server errors to client
- Tag extraction: sanitize tool call arguments before using as keywords (prevent injection)
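The last point can be sketched as a whitelist-based sanitizer: only short alphanumeric/hyphen tokens survive, so tool-call arguments cannot smuggle instructions into the briefing. The length limits and cap are illustrative assumptions:

```typescript
// Sketch of tag sanitization before keyword matching; limits illustrative.
function sanitizeTags(raw: unknown[]): string[] {
  return raw
    .filter((t): t is string => typeof t === "string")
    .map((t) => t.toLowerCase().replace(/[^a-z0-9-]/g, "")) // whitelist chars
    .filter((t) => t.length >= 2 && t.length <= 32)         // drop noise
    .slice(0, 10);                                          // cap count
}
```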


@@ -1408,13 +1408,497 @@
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:52:29.084Z"
},
{
"id": "37",
"title": "Add priority, summary, chapters, and linkTarget fields to Prompt schema",
"description": "Extend the Prisma schema for the Prompt model to include priority (integer 1-10, default 5), summary (nullable string), chapters (nullable JSON array), and linkTarget (nullable string for prompt links).",
"details": "1. Update `/src/db/prisma/schema.prisma` to add fields to the Prompt model:\n - `priority Int @default(5)` with check constraint 1-10\n - `summary String? @db.Text`\n - `chapters Json?` (stored as JSON array of strings)\n - `linkTarget String?` (format: `project/server:resource-uri`)\n\n2. Create Prisma migration:\n ```bash\n pnpm --filter db exec prisma migrate dev --name add-prompt-priority-summary-chapters-link\n ```\n\n3. Update TypeScript types in shared package to reflect new fields\n\n4. Add validation for priority range (1-10) at the database level if possible, otherwise enforce in application layer",
"testStrategy": "- Unit test: Verify migration creates columns with correct types and defaults\n- Unit test: Verify priority default is 5\n- Unit test: Verify nullable fields accept null\n- Unit test: Verify chapters stores/retrieves JSON arrays correctly\n- Integration test: Create prompt with all new fields, retrieve and verify values",
"priority": "high",
"dependencies": [],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:35:08.154Z"
},
{
"id": "38",
"title": "Add priority field to PromptRequest schema",
"description": "Extend the Prisma schema for the PromptRequest model to include the priority field (integer 1-10, default 5) to match the Prompt model.",
"details": "1. Update `/src/db/prisma/schema.prisma` to add to PromptRequest:\n - `priority Int @default(5)`\n\n2. Create Prisma migration:\n ```bash\n pnpm --filter db exec prisma migrate dev --name add-promptrequest-priority\n ```\n\n3. Update the `CreatePromptRequestSchema` in `/src/mcpd/src/validation/prompt.schema.ts` to include priority validation:\n ```typescript\n priority: z.number().int().min(1).max(10).default(5).optional(),\n ```\n\n4. Update TypeScript types in shared package",
"testStrategy": "- Unit test: Migration creates priority column with default 5\n- Unit test: PromptRequest creation with explicit priority\n- Unit test: PromptRequest creation uses default priority when not specified\n- Unit test: Validation rejects priority outside 1-10 range",
"priority": "high",
"dependencies": [
"37"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:35:08.160Z"
},
{
"id": "39",
"title": "Add gated field to Project schema",
"description": "Extend the Prisma schema for the Project model to include the gated boolean field (default true) that controls whether sessions go through the keyword-driven prompt retrieval flow.",
"details": "1. Update `/src/db/prisma/schema.prisma` to add to Project:\n - `gated Boolean @default(true)`\n\n2. Create Prisma migration:\n ```bash\n pnpm --filter db exec prisma migrate dev --name add-project-gated\n ```\n\n3. Update project-related TypeScript types\n\n4. Update project validation schemas to include gated field:\n ```typescript\n gated: z.boolean().default(true).optional(),\n ```\n\n5. Update project API routes to accept and return the gated field",
"testStrategy": "- Unit test: Migration creates gated column with default true\n- Unit test: Project creation with gated=false\n- Unit test: Project creation uses default gated=true when not specified\n- Unit test: Project update can toggle gated field\n- Integration test: GET /api/v1/projects/:name returns gated field",
"priority": "high",
"dependencies": [],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:35:08.165Z"
},
{
"id": "40",
"title": "Update Prompt CRUD API to handle priority and linkTarget",
"description": "Modify prompt API endpoints to accept, validate, and return the priority and linkTarget fields. LinkTarget should be immutable after creation.",
"details": "1. Update `/src/mcpd/src/validation/prompt.schema.ts`:\n ```typescript\n export const CreatePromptSchema = z.object({\n name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/),\n content: z.string().min(1).max(50000),\n projectId: z.string().optional(),\n priority: z.number().int().min(1).max(10).default(5).optional(),\n linkTarget: z.string().regex(/^[a-z0-9-]+\\/[a-z0-9-]+:[\\S]+$/).optional(),\n });\n \n export const UpdatePromptSchema = z.object({\n content: z.string().min(1).max(50000).optional(),\n priority: z.number().int().min(1).max(10).optional(),\n // Note: linkTarget is NOT included - links are immutable\n });\n ```\n\n2. Update `/src/mcpd/src/routes/prompts.ts`:\n - POST /api/v1/prompts: Accept priority, linkTarget\n - PUT /api/v1/prompts/:id: Accept priority only (not linkTarget)\n - GET endpoints: Return priority, linkTarget in response\n\n3. Update repository layer to handle new fields\n\n4. Add linkTarget format validation: `project/server:resource-uri`",
"testStrategy": "- Unit test: POST /api/v1/prompts with priority creates prompt with correct priority\n- Unit test: POST /api/v1/prompts with linkTarget creates linked prompt\n- Unit test: PUT /api/v1/prompts/:id with priority updates priority\n- Unit test: PUT /api/v1/prompts/:id rejects linkTarget (immutable)\n- Unit test: GET /api/v1/prompts returns priority and linkTarget fields\n- Unit test: Invalid linkTarget format rejected (validation error)\n- Unit test: Priority outside 1-10 range rejected",
"priority": "high",
"dependencies": [
"37"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:37:17.506Z"
},
{
"id": "41",
"title": "Update PromptRequest API to handle priority",
"description": "Modify prompt request API endpoints to accept, validate, and return the priority field for proposed prompts.",
"details": "1. Update validation in `/src/mcpd/src/validation/prompt.schema.ts`:\n ```typescript\n export const CreatePromptRequestSchema = z.object({\n name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/),\n content: z.string().min(1).max(50000),\n projectId: z.string().optional(),\n createdBySession: z.string().optional(),\n createdByUserId: z.string().optional(),\n priority: z.number().int().min(1).max(10).default(5).optional(),\n });\n ```\n\n2. Update `/src/mcpd/src/routes/prompts.ts` for PromptRequest endpoints:\n - POST /api/v1/promptrequests: Accept priority\n - GET /api/v1/promptrequests: Return priority\n - POST /api/v1/promptrequests/:id/approve: Preserve priority when creating Prompt\n\n3. Update PromptService.approve() to copy priority from request to prompt\n\n4. Update repository layer",
"testStrategy": "- Unit test: POST /api/v1/promptrequests with priority creates request with correct priority\n- Unit test: POST /api/v1/promptrequests uses default priority 5 when not specified\n- Unit test: GET /api/v1/promptrequests returns priority field\n- Unit test: Approve preserves priority from request to created prompt\n- Unit test: Priority validation (1-10 range)",
"priority": "high",
"dependencies": [
"38"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:37:17.511Z"
},
{
"id": "42",
"title": "Implement prompt summary generation service",
"description": "Create a service that auto-generates summary (20 words) and chapters (key sections) for prompts, using fast LLM when available or regex fallback.",
"details": "1. Create `/src/mcpd/src/services/prompt-summary.service.ts`:\n ```typescript\n export class PromptSummaryService {\n constructor(\n private llmClient: LlmClient | null,\n private promptRepo: IPromptRepository\n ) {}\n \n async generateSummary(content: string): Promise<{ summary: string; chapters: string[] }> {\n if (this.llmClient) {\n return this.generateWithLlm(content);\n }\n return this.generateWithRegex(content);\n }\n \n private async generateWithLlm(content: string): Promise<...> {\n // Send content to fast LLM with prompt:\n // \"Generate a 20-word summary and extract key section topics...\"\n }\n \n private generateWithRegex(content: string): { summary: string; chapters: string[] } {\n // summary: first sentence of content (truncated to ~20 words)\n // chapters: extract markdown headings via regex /^#+\\s+(.+)$/gm\n }\n }\n ```\n\n2. Integrate with PromptService:\n - Call generateSummary on prompt create\n - Call generateSummary on prompt update (when content changes)\n - Cache results on the prompt record\n\n3. Handle LLM availability check via existing LlmConfig patterns",
"testStrategy": "- Unit test: generateWithRegex extracts first sentence as summary\n- Unit test: generateWithRegex extracts markdown headings as chapters\n- Unit test: generateWithLlm calls LLM with correct prompt (mock LLM)\n- Unit test: generateSummary uses LLM when available\n- Unit test: generateSummary falls back to regex when no LLM\n- Unit test: Empty content handled gracefully\n- Unit test: Content without headings returns empty chapters array\n- Integration test: Creating prompt triggers summary generation",
"priority": "high",
"dependencies": [
"37"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:39:28.196Z"
},
{
"id": "43",
"title": "Add regenerate-summary API endpoint",
"description": "Create POST /api/v1/prompts/:id/regenerate-summary endpoint to force re-generation of summary and chapters for a prompt.",
"details": "1. Add route in `/src/mcpd/src/routes/prompts.ts`:\n ```typescript\n fastify.post('/api/v1/prompts/:id/regenerate-summary', async (request, reply) => {\n const { id } = request.params as { id: string };\n const prompt = await promptService.findById(id);\n if (!prompt) {\n return reply.status(404).send({ error: 'Prompt not found' });\n }\n \n const { summary, chapters } = await summaryService.generateSummary(prompt.content);\n const updated = await promptService.updateSummary(id, summary, chapters);\n \n return reply.send(updated);\n });\n ```\n\n2. Add `updateSummary(id, summary, chapters)` method to PromptRepository and PromptService\n\n3. Return the updated prompt with new summary/chapters in response",
"testStrategy": "- Unit test: POST to valid prompt ID regenerates summary\n- Unit test: Returns updated prompt with new summary/chapters\n- Unit test: 404 for non-existent prompt ID\n- Unit test: Uses LLM when available, regex fallback otherwise\n- Integration test: End-to-end regeneration updates database",
"priority": "medium",
"dependencies": [
"42"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:39:28.201Z"
},
{
"id": "44",
"title": "Create prompt-index API endpoint",
"description": "Create GET /api/v1/projects/:name/prompt-index endpoint that returns a compact index of prompts (name, priority, summary, chapters) for a project.",
"details": "1. Add route in `/src/mcpd/src/routes/prompts.ts`:\n ```typescript\n fastify.get('/api/v1/projects/:name/prompt-index', async (request, reply) => {\n const { name } = request.params as { name: string };\n const project = await projectService.findByName(name);\n if (!project) {\n return reply.status(404).send({ error: 'Project not found' });\n }\n \n const prompts = await promptService.findByProject(project.id);\n const index = prompts.map(p => ({\n name: p.name,\n priority: p.priority,\n summary: p.summary,\n chapters: p.chapters,\n linkTarget: p.linkTarget,\n }));\n \n return reply.send({ prompts: index });\n });\n ```\n\n2. Consider adding global prompts to the index (inherited by all projects)\n\n3. Sort by priority descending in response",
"testStrategy": "- Unit test: Returns compact index for valid project\n- Unit test: Index contains name, priority, summary, chapters, linkTarget\n- Unit test: 404 for non-existent project\n- Unit test: Empty array for project with no prompts\n- Unit test: Results sorted by priority descending\n- Integration test: End-to-end retrieval matches database state",
"priority": "medium",
"dependencies": [
"42"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:39:28.208Z"
},
{
"id": "45",
"title": "Implement tag-matching algorithm for prompt selection",
"description": "Create a deterministic keyword-based tag matching algorithm as the no-LLM fallback for prompt selection, with byte-budget allocation and priority weighting.",
"details": "1. Create `/src/mcplocal/src/services/tag-matcher.service.ts`:\n ```typescript\n interface MatchedPrompt {\n prompt: PromptIndex;\n score: number;\n matchedTags: string[];\n }\n \n export class TagMatcherService {\n constructor(private byteBudget: number = 8192) {}\n \n matchPrompts(tags: string[], promptIndex: PromptIndex[]): {\n fullContent: PromptIndex[]; // Prompts to include in full\n indexOnly: PromptIndex[]; // Prompts to include as index entries\n remaining: PromptIndex[]; // Non-matched prompts (names only)\n } {\n // 1. Priority 10 prompts: always included (score = Infinity)\n // 2. For each prompt, compute score:\n // - Check tags against summary + chapters (case-insensitive substring)\n // - score = matching_tags_count * priority\n // 3. Sort by score descending\n // 4. Fill byte budget from top:\n // - Include full content until budget exhausted\n // - Remaining matched: include as index entries\n // - Non-matched: names only\n }\n \n private computeScore(tags: string[], prompt: PromptIndex): number {\n if (prompt.priority === 10) return Infinity;\n const matchingTags = tags.filter(tag => \n this.matchesPrompt(tag.toLowerCase(), prompt)\n );\n return matchingTags.length * prompt.priority;\n }\n \n private matchesPrompt(tag: string, prompt: PromptIndex): boolean {\n const searchText = [\n prompt.summary || '',\n ...(prompt.chapters || [])\n ].join(' ').toLowerCase();\n return searchText.includes(tag);\n }\n }\n ```\n\n2. Handle edge cases: empty tags, no prompts, all priority 10, etc.",
"testStrategy": "- Unit test: Priority 10 prompts always included regardless of tags\n- Unit test: Score calculation: matching_tags * priority\n- Unit test: Case-insensitive matching\n- Unit test: Substring matching in summary and chapters\n- Unit test: Byte budget exhaustion stops full content inclusion\n- Unit test: Matched prompts beyond budget become index entries\n- Unit test: Non-matched prompts listed as names only\n- Unit test: Sorting by score descending\n- Unit test: Empty tags returns priority 10 only\n- Unit test: No prompts returns empty result",
"priority": "high",
"dependencies": [
"44"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:40:47.570Z"
},
{
"id": "46",
"title": "Implement LLM-based prompt selection service",
"description": "Create a service that uses the heavy LLM to intelligently select relevant prompts based on tags and the full prompt index, understanding synonyms and context.",
"details": "1. Create `/src/mcplocal/src/services/llm-prompt-selector.service.ts`:\n ```typescript\n export class LlmPromptSelectorService {\n constructor(\n private llmClient: LlmClient,\n private fastLlmClient: LlmClient | null,\n private tagMatcher: TagMatcherService // fallback\n ) {}\n \n async selectPrompts(tags: string[], promptIndex: PromptIndex[]): Promise<{\n selected: Array<{ name: string; reason: string }>;\n priority10: PromptIndex[]; // Always included\n }> {\n // 1. Extract priority 10 prompts (always included)\n // 2. Generate missing summaries using fast LLM if needed\n // 3. Send to heavy LLM:\n const prompt = `\n Given these keywords: ${tags.join(', ')}\n And this prompt index:\n ${promptIndex.map(p => `- ${p.name}: ${p.summary}`).join('\\n')}\n \n Select the most relevant prompts for someone working on tasks\n related to these keywords. Consider synonyms and related concepts.\n Return a ranked JSON array: [{name: string, reason: string}]\n `;\n // 4. Parse LLM response\n // 5. On LLM error, fall back to tag matcher\n }\n }\n ```\n\n2. Handle LLM timeouts and errors gracefully with fallback\n\n3. Validate LLM response format",
"testStrategy": "- Unit test: Priority 10 prompts always returned regardless of LLM selection\n- Unit test: LLM called with correct prompt format (mock)\n- Unit test: LLM response parsed correctly\n- Unit test: Invalid LLM response falls back to tag matcher\n- Unit test: LLM timeout falls back to tag matcher\n- Unit test: Missing summaries trigger fast LLM generation\n- Unit test: No LLM available uses tag matcher directly\n- Integration test: End-to-end selection with mock LLM",
"priority": "high",
"dependencies": [
"45"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:45:57.158Z"
},
{
"id": "47",
"title": "Implement session state management for gating",
"description": "Extend the McpRouter to track per-session gating state including gated status, accumulated tags, and retrieved prompts set.",
"details": "1. Update `/src/mcplocal/src/router.ts` to add session state:\n ```typescript\n interface SessionState {\n gated: boolean; // starts true if project is gated\n tags: string[]; // accumulated from begin_session + read_prompts\n retrievedPrompts: Set<string>; // prompts already sent (avoid duplicates)\n }\n \n export class McpRouter {\n private sessionStates: Map<string, SessionState> = new Map();\n \n getSessionState(sessionId: string): SessionState {\n if (!this.sessionStates.has(sessionId)) {\n this.sessionStates.set(sessionId, {\n gated: this.projectConfig?.gated ?? true,\n tags: [],\n retrievedPrompts: new Set(),\n });\n }\n return this.sessionStates.get(sessionId)!;\n }\n \n ungateSession(sessionId: string): void {\n const state = this.getSessionState(sessionId);\n state.gated = false;\n }\n \n addRetrievedPrompts(sessionId: string, names: string[]): void {\n const state = this.getSessionState(sessionId);\n names.forEach(n => state.retrievedPrompts.add(n));\n }\n }\n ```\n\n2. Clean up session state when session closes\n\n3. Handle session state for non-gated projects (gated=false from start)",
"testStrategy": "- Unit test: New session starts with gated=true for gated project\n- Unit test: New session starts with gated=false for non-gated project\n- Unit test: ungateSession changes gated to false\n- Unit test: addRetrievedPrompts adds to set\n- Unit test: retrievedPrompts prevents duplicates\n- Unit test: Session state isolated per sessionId\n- Unit test: Session cleanup removes state",
"priority": "high",
"dependencies": [
"39"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:45:57.164Z"
},
{
"id": "48",
"title": "Implement begin_session tool for gated sessions",
"description": "Create the begin_session MCP tool that accepts 5 keywords, triggers prompt selection, returns matched content with encouragement, and ungates the session.",
"details": "1. Add begin_session tool definition in `/src/mcplocal/src/router.ts`:\n ```typescript\n private getBeginSessionTool(): Tool {\n return {\n name: 'begin_session',\n description: 'Start your session by providing 5 keywords that describe your current task. You\\'ll receive relevant project context, policies, and guidelines. Required before using other tools.',\n inputSchema: {\n type: 'object',\n properties: {\n tags: {\n type: 'array',\n items: { type: 'string' },\n maxItems: 10,\n description: '5 keywords describing your current task'\n }\n },\n required: ['tags']\n }\n };\n }\n ```\n\n2. Implement begin_session handler:\n - Validate tags array (1-10 items)\n - Call LlmPromptSelector or TagMatcher based on LLM availability\n - Fetch full content for selected prompts\n - Build response with matched content + index + encouragement\n - Ungate session\n - Send `notifications/tools/list_changed`\n\n3. Response format:\n ```\n [Priority 10 prompts - full content]\n \n [Tag-matched prompts - full content, priority-ordered]\n \n Other prompts available that may become relevant...\n - name: summary\n ...\n If any seem related, request them with read_prompts({ tags: [...] }).\n ```",
"testStrategy": "- Unit test: begin_session with valid tags returns matched prompts\n- Unit test: begin_session includes priority 10 prompts always\n- Unit test: begin_session response includes encouragement text\n- Unit test: begin_session response includes prompt index\n- Unit test: Session ungated after successful begin_session\n- Unit test: notifications/tools/list_changed sent after ungating\n- Unit test: Empty tags handled (returns priority 10 only)\n- Unit test: Invalid tags rejected with error\n- Unit test: begin_session while already ungated returns error",
"priority": "high",
"dependencies": [
"46",
"47"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:50:39.111Z"
},
{
"id": "49",
"title": "Implement read_prompts tool for ongoing retrieval",
"description": "Create the read_prompts MCP tool that allows clients to request additional context by keywords after the session is ungated.",
"details": "1. Add read_prompts tool definition:\n ```typescript\n private getReadPromptsTool(): Tool {\n return {\n name: 'read_prompts',\n description: 'Request additional project context by keywords. Use this whenever you need guidelines, policies, or conventions related to your current work.',\n inputSchema: {\n type: 'object',\n properties: {\n tags: {\n type: 'array',\n items: { type: 'string' },\n description: 'Keywords describing what context you need'\n }\n },\n required: ['tags']\n }\n };\n }\n ```\n\n2. Implement read_prompts handler:\n - Always use keyword matching (not LLM) for precision\n - Exclude already-retrieved prompts from response\n - Add newly retrieved prompts to session state\n - Include reminder about more prompts available\n\n3. Response format:\n ```\n [Matched prompt content - deduplicated]\n \n Remember: you can request more prompts at any time with read_prompts({ tags: [...] }).\n The project may have additional guidelines relevant to your current approach.\n ```",
"testStrategy": "- Unit test: read_prompts returns matched prompts by keyword\n- Unit test: Already retrieved prompts excluded from response\n- Unit test: Newly retrieved prompts added to session state\n- Unit test: Response includes reminder text\n- Unit test: read_prompts while gated returns error\n- Unit test: Empty tags returns empty response\n- Unit test: Uses keyword matching not LLM",
"priority": "high",
"dependencies": [
"48"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:50:39.115Z"
},
{
"id": "50",
"title": "Implement progressive tool exposure for gated sessions",
"description": "Modify tools/list behavior to only expose begin_session while gated, and expose all tools plus read_prompts after ungating.",
"details": "1. Update tools/list handling in `/src/mcplocal/src/router.ts`:\n ```typescript\n async handleToolsList(sessionId: string): Promise<Tool[]> {\n const state = this.getSessionState(sessionId);\n \n if (state.gated) {\n // Only show begin_session while gated\n return [this.getBeginSessionTool()];\n }\n \n // After ungating: all upstream tools + read_prompts\n const upstreamTools = await this.discoverTools();\n return [...upstreamTools, this.getReadPromptsTool()];\n }\n ```\n\n2. Block direct tool calls while gated:\n ```typescript\n async handleToolCall(sessionId: string, toolName: string, args: any): Promise<any> {\n const state = this.getSessionState(sessionId);\n \n if (state.gated && toolName !== 'begin_session') {\n // Intercept: extract keywords, match prompts, inject briefing\n return this.handleInterceptedCall(sessionId, toolName, args);\n }\n \n // Normal routing\n return this.routeToolCall(toolName, args);\n }\n ```\n\n3. Ensure notifications/tools/list_changed is sent after ungating",
"testStrategy": "- Unit test: tools/list while gated returns only begin_session\n- Unit test: tools/list after ungating returns all tools + read_prompts\n- Unit test: begin_session not visible after ungating\n- Unit test: Tool call while gated (not begin_session) triggers intercept\n- Unit test: Tool call after ungating routes normally\n- Unit test: notifications/tools/list_changed sent on ungate",
"priority": "high",
"dependencies": [
"48",
"49"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:50:39.120Z"
},
{
"id": "51",
"title": "Implement keyword extraction from tool calls",
"description": "Create a service that extracts keywords from tool names and arguments for the intercept fallback path when clients skip begin_session.",
"details": "1. Create `/src/mcplocal/src/services/keyword-extractor.service.ts`:\n ```typescript\n export class KeywordExtractorService {\n extractKeywords(toolName: string, args: Record<string, any>): string[] {\n const keywords: string[] = [];\n \n // Extract from tool name (split on / and -)\n // e.g., \"home-assistant/get_entities\" -> [\"home\", \"assistant\", \"get\", \"entities\"]\n keywords.push(...this.extractFromName(toolName));\n \n // Extract from argument values\n // e.g., { domain: \"light\", entity_id: \"light.kitchen\" } -> [\"light\", \"kitchen\"]\n keywords.push(...this.extractFromArgs(args));\n \n // Deduplicate and sanitize\n return [...new Set(keywords.map(k => this.sanitize(k)))];\n }\n \n private sanitize(keyword: string): string {\n // Remove special characters, lowercase, limit length\n return keyword.toLowerCase().replace(/[^a-z0-9]/g, '').slice(0, 50);\n }\n }\n ```\n\n2. Handle various argument types: strings, arrays, nested objects\n\n3. Prevent injection by sanitizing extracted keywords",
"testStrategy": "- Unit test: Extracts keywords from tool name with /\n- Unit test: Extracts keywords from tool name with -\n- Unit test: Extracts keywords from string argument values\n- Unit test: Extracts keywords from array argument values\n- Unit test: Handles nested object arguments\n- Unit test: Sanitizes special characters\n- Unit test: Deduplicates keywords\n- Unit test: Handles empty arguments\n- Unit test: Limits keyword length to prevent abuse",
"priority": "medium",
"dependencies": [],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:40:47.575Z"
},
{
"id": "52",
"title": "Implement tool call intercept with briefing injection",
"description": "When a gated session calls a tool without first calling begin_session, intercept the call, extract keywords, match prompts, and inject the briefing alongside the real tool result.",
"details": "1. Implement handleInterceptedCall in `/src/mcplocal/src/router.ts`:\n ```typescript\n async handleInterceptedCall(\n sessionId: string,\n toolName: string,\n args: any\n ): Promise<ToolResult> {\n // 1. Extract keywords from tool call\n const keywords = this.keywordExtractor.extractKeywords(toolName, args);\n \n // 2. Match prompts using keywords\n const { fullContent, indexOnly, remaining } = \n await this.promptSelector.selectPrompts(keywords, this.promptIndex);\n \n // 3. Execute the actual tool call\n const actualResult = await this.routeToolCall(toolName, args);\n \n // 4. Build briefing with intercept preamble\n const briefing = this.buildBriefing(fullContent, indexOnly, remaining, 'intercept');\n \n // 5. Ungate session\n this.ungateSession(sessionId);\n \n // 6. Send notifications/tools/list_changed\n await this.sendToolsListChanged();\n \n // 7. Return combined result\n return {\n content: [{\n type: 'text',\n text: `${briefing}\\n\\n---\\n\\n${actualResult.content[0].text}`\n }]\n };\n }\n ```\n\n2. Use gate-intercept-preamble system prompt for the briefing prefix",
"testStrategy": "- Unit test: Tool call while gated triggers intercept\n- Unit test: Keywords extracted from tool name and args\n- Unit test: Prompts matched using extracted keywords\n- Unit test: Actual tool still executes and returns result\n- Unit test: Briefing prepended to tool result\n- Unit test: Session ungated after intercept\n- Unit test: notifications/tools/list_changed sent\n- Unit test: Intercept preamble included in briefing\n- Integration test: End-to-end intercept flow",
"priority": "high",
"dependencies": [
"50",
"51"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:51:03.822Z"
},
{
"id": "53",
"title": "Add prompt index to initialize instructions",
"description": "Modify the initialize handler to include the compact prompt index and gate message in instructions for gated projects.",
"details": "1. Update initialize handling in `/src/mcplocal/src/router.ts`:\n ```typescript\n async handleInitialize(sessionId: string): Promise<InitializeResult> {\n const state = this.getSessionState(sessionId);\n \n let instructions = this.projectConfig.prompt || '';\n \n if (state.gated) {\n // Add gate instructions\n const gateInstructions = await this.getSystemPrompt('gate-instructions');\n \n // Build prompt index (cap at 50, priority 7+ if over)\n const index = this.buildPromptIndex();\n \n instructions += `\\n\\n${gateInstructions.replace('{{prompt_index}}', index)}`;\n }\n \n return {\n protocolVersion: '2024-11-05',\n capabilities: { ... },\n serverInfo: { ... },\n instructions,\n };\n }\n ```\n\n2. Build prompt index with cap:\n - If <= 50 prompts: include all\n - If > 50 prompts: include only priority 7+\n - Format: `- <name>: <summary>` (~100 chars per entry)",
"testStrategy": "- Unit test: Gated project includes gate instructions in initialize\n- Unit test: Prompt index included in instructions\n- Unit test: Index capped at 50 entries\n- Unit test: Over 50 prompts shows priority 7+ only\n- Unit test: Non-gated project skips gate instructions\n- Unit test: {{prompt_index}} template replaced\n- Integration test: End-to-end initialize with gated project",
"priority": "high",
"dependencies": [
"47",
"44"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:52:13.697Z"
},
{
"id": "54",
"title": "Create mcpctl-system project with system prompts",
"description": "Implement bootstrap logic to create the mcpctl-system project and its required system prompts on first startup, with protection against deletion.",
"details": "1. Create seed migration or startup hook:\n ```typescript\n async function bootstrapSystemProject() {\n const systemProject = await projectRepo.findByName('mcpctl-system');\n if (systemProject) return; // Already exists\n \n // Create mcpctl-system project\n const project = await projectRepo.create({\n name: 'mcpctl-system',\n description: 'System prompts for mcpctl gating and encouragement',\n gated: false, // System project is not gated\n ownerId: SYSTEM_USER_ID,\n });\n \n // Create required system prompts\n const systemPrompts = [\n { name: 'gate-instructions', priority: 10, content: GATE_INSTRUCTIONS },\n { name: 'gate-encouragement', priority: 10, content: GATE_ENCOURAGEMENT },\n { name: 'read-prompts-reminder', priority: 10, content: READ_PROMPTS_REMINDER },\n { name: 'gate-intercept-preamble', priority: 10, content: GATE_INTERCEPT_PREAMBLE },\n { name: 'session-greeting', priority: 10, content: SESSION_GREETING },\n ];\n \n for (const p of systemPrompts) {\n await promptRepo.create({ ...p, projectId: project.id });\n }\n }\n ```\n\n2. Add delete protection in prompt delete endpoint:\n - Check if prompt belongs to mcpctl-system\n - Return 403 error if attempting to delete system prompt\n\n3. Define default content for each system prompt per PRD",
"testStrategy": "- Unit test: System project created on first startup\n- Unit test: All 5 system prompts created\n- Unit test: Subsequent startups don't duplicate\n- Unit test: Delete system prompt returns 403\n- Unit test: System prompts have priority 10\n- Unit test: mcpctl-system project has gated=false\n- Integration test: End-to-end bootstrap flow",
"priority": "high",
"dependencies": [
"40",
"39"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:56:12.064Z"
},
{
"id": "55",
"title": "Implement system prompt fetching and caching in mcplocal",
"description": "Add functionality to mcplocal router to fetch system prompts from mcpd and cache them with 60s TTL, supporting template variable replacement.",
"details": "1. Add system prompt fetching in `/src/mcplocal/src/router.ts`:\n ```typescript\n private systemPromptCache: Map<string, { content: string; expiresAt: number }> = new Map();\n \n async getSystemPrompt(name: string): Promise<string> {\n const cached = this.systemPromptCache.get(name);\n if (cached && cached.expiresAt > Date.now()) {\n return cached.content;\n }\n \n const prompts = await this.mcpdClient.fetch(\n '/api/v1/projects/mcpctl-system/prompts/visible'\n );\n const prompt = prompts.find(p => p.name === name);\n if (!prompt) {\n throw new Error(`System prompt not found: ${name}`);\n }\n \n this.systemPromptCache.set(name, {\n content: prompt.content,\n expiresAt: Date.now() + 60000, // 60s TTL\n });\n \n return prompt.content;\n }\n ```\n\n2. Add template variable replacement:\n ```typescript\n replaceTemplateVariables(content: string, vars: Record<string, string>): string {\n return content\n .replace(/\\{\\{prompt_index\\}\\}/g, vars.prompt_index || '')\n .replace(/\\{\\{project_name\\}\\}/g, vars.project_name || '')\n .replace(/\\{\\{matched_prompts\\}\\}/g, vars.matched_prompts || '')\n .replace(/\\{\\{remaining_prompts\\}\\}/g, vars.remaining_prompts || '');\n }\n ```",
"testStrategy": "- Unit test: System prompt fetched from mcpd\n- Unit test: Cached prompt returned within TTL\n- Unit test: Cache miss triggers fresh fetch\n- Unit test: Missing system prompt throws error\n- Unit test: Template variables replaced correctly\n- Unit test: Unknown template variables left as-is\n- Integration test: End-to-end fetch and cache",
"priority": "high",
"dependencies": [
"54"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:57:28.917Z"
},
{
"id": "56",
"title": "Implement prompt link resolution service",
"description": "Create a service that fetches linked prompt content from source MCP servers using the project's service account, with dead link detection.",
"details": "1. Create `/src/mcplocal/src/services/link-resolver.service.ts`:\n ```typescript\n export class LinkResolverService {\n constructor(private mcpdClient: McpdClient) {}\n \n async resolveLink(linkTarget: string): Promise<{\n content: string | null;\n status: 'alive' | 'dead' | 'unknown';\n error?: string;\n }> {\n // Parse linkTarget: project/server:resource-uri\n const { project, server, uri } = this.parseLink(linkTarget);\n \n try {\n // Use service account for source project\n const content = await this.fetchResource(project, server, uri);\n return { content, status: 'alive' };\n } catch (error) {\n this.logDeadLink(linkTarget, error);\n return { \n content: null, \n status: 'dead',\n error: error.message \n };\n }\n }\n \n private parseLink(linkTarget: string): { project: string; server: string; uri: string } {\n const match = linkTarget.match(/^([^/]+)\\/([^:]+):(.+)$/);\n if (!match) throw new Error('Invalid link format');\n return { project: match[1], server: match[2], uri: match[3] };\n }\n \n private async fetchResource(project: string, server: string, uri: string): Promise<string> {\n // Call mcpd to fetch resource via service account\n // mcpd routes to the source project's MCP server\n }\n }\n ```\n\n2. Log dead links as errors\n\n3. Cache resolution results",
"testStrategy": "- Unit test: Valid link parsed correctly\n- Unit test: Invalid link format throws error\n- Unit test: Successful resolution returns content and status='alive'\n- Unit test: Failed resolution returns status='dead' with error\n- Unit test: Dead link logged as error\n- Unit test: Service account header included in request\n- Integration test: End-to-end link resolution",
"priority": "medium",
"dependencies": [
"40"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:07:29.026Z"
},
{
"id": "57",
"title": "Add linkStatus to prompt GET responses",
"description": "Modify the GET /api/v1/prompts endpoint to include linkStatus (alive/dead/unknown) for linked prompts by checking link health.",
"details": "1. Update `/src/mcpd/src/routes/prompts.ts` GET endpoint:\n ```typescript\n fastify.get('/api/v1/prompts', async (request, reply) => {\n const prompts = await promptService.findAll(filter);\n \n // Check link status for linked prompts\n const promptsWithStatus = await Promise.all(\n prompts.map(async (p) => {\n if (!p.linkTarget) {\n return { ...p, linkStatus: null };\n }\n const status = await linkResolver.checkLinkHealth(p.linkTarget);\n return { ...p, linkStatus: status };\n })\n );\n \n return reply.send(promptsWithStatus);\n });\n ```\n\n2. Consider caching link health to avoid repeated checks\n\n3. Add `linkStatus` field to prompt response schema:\n - `null` for non-linked prompts\n - `'alive'` for working links\n - `'dead'` for broken links\n - `'unknown'` for unchecked links",
"testStrategy": "- Unit test: Non-linked prompt has linkStatus=null\n- Unit test: Linked prompt with working link has linkStatus='alive'\n- Unit test: Linked prompt with broken link has linkStatus='dead'\n- Unit test: Link health cached to avoid repeated checks\n- Unit test: All prompts in response have linkStatus field\n- Integration test: End-to-end GET with linked prompts",
"priority": "medium",
"dependencies": [
"56"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:09:07.078Z"
},
{
"id": "58",
"title": "Add RBAC for prompt link creation",
"description": "Implement RBAC checks requiring edit permission on the target project to create prompt links, and auto-create service account permission on the source project.",
"details": "1. Update prompt creation in `/src/mcpd/src/services/prompt.service.ts`:\n ```typescript\n async createPrompt(data: CreatePromptInput, userId: string): Promise<Prompt> {\n if (data.linkTarget) {\n // Verify user has edit permission on target project RBAC\n const hasPermission = await this.rbacService.checkPermission(\n userId, data.projectId, 'edit'\n );\n if (!hasPermission) {\n throw new ForbiddenError('Edit permission required to create prompt links');\n }\n \n // Parse link target\n const { project: sourceProject, server, uri } = this.parseLink(data.linkTarget);\n \n // Create service account permission on source project\n await this.rbacService.createServiceAccountPermission(\n data.projectId, // target project\n sourceProject, // source project\n server,\n uri,\n 'read'\n );\n }\n \n return this.promptRepo.create(data);\n }\n ```\n\n2. Clean up service account permission when link is deleted\n\n3. Handle permission denied from source project",
"testStrategy": "- Unit test: Link creation requires edit permission\n- Unit test: Link creation without permission throws 403\n- Unit test: Service account permission created on source project\n- Unit test: Service account permission deleted when link deleted\n- Unit test: Non-link prompts skip RBAC checks\n- Integration test: End-to-end link creation with RBAC",
"priority": "medium",
"dependencies": [
"56"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:09:07.081Z"
},
{
"id": "59",
"title": "Update CLI create prompt command for priority and link",
"description": "Extend the mcpctl create prompt command to accept --priority (1-10) and --link (project/server:uri) flags.",
"details": "1. Update `/src/cli/src/commands/create.ts` for prompt:\n ```typescript\n .command('prompt <name>')\n .description('Create a new prompt')\n .option('-p, --project <name>', 'Project to create prompt in')\n .option('--priority <number>', 'Priority level (1-10, default: 5)', '5')\n .option('--link <target>', 'Link to MCP resource (project/server:uri)')\n .option('-f, --file <path>', 'Read content from file')\n .action(async (name, options) => {\n const priority = parseInt(options.priority, 10);\n if (priority < 1 || priority > 10) {\n console.error('Priority must be between 1 and 10');\n process.exit(1);\n }\n \n let content = '';\n if (options.link) {\n // Linked prompts don't need content (fetched from source)\n content = `[Link: ${options.link}]`;\n } else if (options.file) {\n content = await fs.readFile(options.file, 'utf-8');\n } else {\n content = await promptForContent();\n }\n \n const body = {\n name,\n content,\n projectId: options.project,\n priority,\n linkTarget: options.link,\n };\n \n await api.post('/api/v1/prompts', body);\n });\n ```\n\n2. Validate link format: `project/server:resource-uri`\n\n3. Add shell completions for new flags",
"testStrategy": "- Unit test: --priority flag sets prompt priority\n- Unit test: --priority validation (1-10 range)\n- Unit test: --link flag sets linkTarget\n- Unit test: --link validation (format check)\n- Unit test: Linked prompt skips content prompt\n- Unit test: Default priority is 5\n- Integration test: End-to-end create with flags",
"priority": "medium",
"dependencies": [
"40"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:03:45.972Z"
},
{
"id": "60",
"title": "Update CLI get prompt command for -A flag and link columns",
"description": "Extend the mcpctl get prompt command with -A (all projects) flag and add link target and status columns to output.",
"details": "1. Update `/src/cli/src/commands/get.ts` for prompt:\n ```typescript\n .command('prompt [name]')\n .option('-A, --all-projects', 'Show prompts from all projects')\n .option('-p, --project <name>', 'Filter by project')\n .action(async (name, options) => {\n let url = '/api/v1/prompts';\n if (options.allProjects) {\n url += '?all=true';\n } else if (options.project) {\n url += `?project=${options.project}`;\n }\n \n const prompts = await api.get(url);\n \n // Format table with new columns\n formatPromptsTable(prompts, {\n columns: ['PROJECT', 'NAME', 'PRIORITY', 'LINK', 'STATUS']\n });\n });\n ```\n\n2. Update table formatter to handle link columns:\n ```\n PROJECT NAME PRIORITY LINK STATUS\n homeautomation security-policies 8 - -\n homeautomation architecture-adr 6 system-public/docmost-mcp:docmost://pages/a1 alive\n ```\n\n3. Add shell completions for -A flag",
"testStrategy": "- Unit test: -A flag shows all projects\n- Unit test: --project flag filters by project\n- Unit test: PRIORITY column displayed\n- Unit test: LINK column shows linkTarget or -\n- Unit test: STATUS column shows linkStatus or -\n- Unit test: Table formatted correctly\n- Integration test: End-to-end get with flags",
"priority": "medium",
"dependencies": [
"57",
"59"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:09:31.501Z"
},
{
"id": "61",
"title": "Update CLI describe project command for gated status",
"description": "Extend mcpctl describe project to show gated status, session greeting, and prompt table with priority and link information.",
"details": "1. Update `/src/cli/src/commands/get.ts` describe project:\n ```typescript\n async function describeProject(name: string) {\n const project = await api.get(`/api/v1/projects/${name}`);\n const prompts = await api.get(`/api/v1/projects/${name}/prompt-index`);\n const greeting = await getSessionGreeting(name);\n \n console.log(`Name: ${project.name}`);\n console.log(`Gated: ${project.gated}`);\n console.log(`LLM Provider: ${project.llmProvider || '-'}`);\n console.log(`...`);\n console.log();\n console.log(`Session greeting:`);\n console.log(` ${greeting}`);\n console.log();\n console.log(`Prompts:`);\n console.log(` NAME PRIORITY TYPE LINK`);\n for (const p of prompts) {\n const type = p.linkTarget ? 'link' : 'local';\n const link = p.linkTarget || '-';\n console.log(` ${p.name.padEnd(20)} ${p.priority.toString().padEnd(9)} ${type.padEnd(7)} ${link}`);\n }\n }\n ```\n\n2. Fetch session greeting from system prompts or project config",
"testStrategy": "- Unit test: Gated status displayed\n- Unit test: Session greeting displayed\n- Unit test: Prompt table with PRIORITY, TYPE, LINK columns\n- Unit test: TYPE shows 'local' or 'link'\n- Unit test: LINK shows target or -\n- Integration test: End-to-end describe project",
"priority": "medium",
"dependencies": [
"44",
"54"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:04:56.320Z"
},
{
"id": "62",
"title": "Update CLI edit project command for gated field",
"description": "Extend mcpctl edit project to allow editing the gated boolean field.",
"details": "1. Update `/src/cli/src/commands/edit.ts` for project:\n ```typescript\n async function editProject(name: string) {\n const project = await api.get(`/api/v1/projects/${name}`);\n \n // Add gated to editable fields\n const yaml = `\n name: ${project.name}\n description: ${project.description}\n gated: ${project.gated}\n llmProvider: ${project.llmProvider || ''}\n ...`;\n \n const edited = await openEditor(yaml);\n const parsed = YAML.parse(edited);\n \n // Validate gated is boolean\n if (typeof parsed.gated !== 'boolean') {\n console.error('gated must be true or false');\n process.exit(1);\n }\n \n await api.put(`/api/v1/projects/${name}`, parsed);\n }\n ```\n\n2. Update project validation schema to accept gated\n\n3. Handle conversion from string 'true'/'false' to boolean",
"testStrategy": "- Unit test: Gated field appears in editor YAML\n- Unit test: Gated field saved on edit\n- Unit test: Boolean validation (true/false only)\n- Unit test: String 'true'/'false' converted to boolean\n- Integration test: End-to-end edit project gated",
"priority": "medium",
"dependencies": [
"39"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:03:46.657Z"
},
{
"id": "63",
"title": "Add unit tests for prompt priority and link CRUD",
"description": "Create comprehensive unit tests for all prompt CRUD operations with the new priority and linkTarget fields.",
"details": "1. Add tests in `/src/mcpd/tests/services/prompt-service.test.ts`:\n ```typescript\n describe('Prompt Priority', () => {\n it('creates prompt with explicit priority', async () => {\n const prompt = await service.createPrompt({ ...data, priority: 8 });\n expect(prompt.priority).toBe(8);\n });\n \n it('uses default priority 5 when not specified', async () => {\n const prompt = await service.createPrompt(data);\n expect(prompt.priority).toBe(5);\n });\n \n it('validates priority range 1-10', async () => {\n await expect(service.createPrompt({ ...data, priority: 11 }))\n .rejects.toThrow();\n });\n \n it('updates priority', async () => {\n const updated = await service.updatePrompt(id, { priority: 3 });\n expect(updated.priority).toBe(3);\n });\n });\n \n describe('Prompt Links', () => {\n it('creates linked prompt', async () => {\n const prompt = await service.createPrompt({\n ...data,\n linkTarget: 'project/server:uri'\n });\n expect(prompt.linkTarget).toBe('project/server:uri');\n });\n \n it('rejects invalid link format', async () => {\n await expect(service.createPrompt({\n ...data,\n linkTarget: 'invalid'\n })).rejects.toThrow();\n });\n \n it('linkTarget is immutable on update', async () => {\n // linkTarget not accepted in update schema\n });\n });\n ```",
"testStrategy": "This task IS the test implementation. Verify:\n- All priority CRUD tests pass\n- All link CRUD tests pass\n- Validation tests cover edge cases\n- Tests use proper mocking patterns\n- Coverage meets project standards",
"priority": "high",
"dependencies": [
"40",
"41"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:52:53.091Z"
},
{
"id": "64",
"title": "Add unit tests for tag matching algorithm",
"description": "Create comprehensive unit tests for the deterministic tag matching algorithm covering score calculation, byte budget, and priority handling.",
"details": "1. Add tests in `/src/mcplocal/tests/services/tag-matcher.test.ts`:\n ```typescript\n describe('TagMatcherService', () => {\n describe('score calculation', () => {\n it('priority 10 prompts have infinite score', () => {\n const score = matcher.computeScore(['any'], { priority: 10, ... });\n expect(score).toBe(Infinity);\n });\n \n it('score = matching_tags * priority', () => {\n const score = matcher.computeScore(\n ['tag1', 'tag2'],\n { priority: 5, summary: 'tag1 tag2', chapters: [] }\n );\n expect(score).toBe(10); // 2 tags * 5 priority\n });\n });\n \n describe('matching', () => {\n it('matches case-insensitively', () => {\n const matches = matcher.matchesPrompt('ZIGBEE', { summary: 'zigbee setup' });\n expect(matches).toBe(true);\n });\n \n it('matches substring in summary', () => { ... });\n it('matches substring in chapters', () => { ... });\n });\n \n describe('byte budget', () => {\n it('includes full content until budget exhausted', () => { ... });\n it('matched prompts beyond budget become index entries', () => { ... });\n it('non-matched prompts listed as names only', () => { ... });\n });\n });\n ```",
"testStrategy": "This task IS the test implementation. Verify:\n- Score calculation tests pass\n- Matching tests cover all cases\n- Byte budget tests verify allocation\n- Edge cases handled (empty tags, no prompts, etc.)\n- Tests are deterministic",
"priority": "high",
"dependencies": [
"45"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:51:03.827Z"
},
{
"id": "65",
"title": "Add unit tests for gating state machine",
"description": "Create comprehensive unit tests for the session gating state machine covering all transitions and edge cases.",
"details": "1. Add tests in `/src/mcplocal/tests/router-gating.test.ts`:\n ```typescript\n describe('Gating State Machine', () => {\n describe('initial state', () => {\n it('starts gated for gated project', () => {\n const router = createRouter({ gated: true });\n const state = router.getSessionState('session1');\n expect(state.gated).toBe(true);\n });\n \n it('starts ungated for non-gated project', () => {\n const router = createRouter({ gated: false });\n const state = router.getSessionState('session1');\n expect(state.gated).toBe(false);\n });\n });\n \n describe('begin_session transition', () => {\n it('ungates session on successful begin_session', async () => {\n const router = createGatedRouter();\n await router.handleBeginSession('session1', { tags: ['test'] });\n expect(router.getSessionState('session1').gated).toBe(false);\n });\n \n it('returns matched prompts', async () => { ... });\n it('sends notifications/tools/list_changed', async () => { ... });\n });\n \n describe('intercept transition', () => {\n it('ungates session on tool call intercept', async () => { ... });\n it('extracts keywords from tool call', async () => { ... });\n it('injects briefing with tool result', async () => { ... });\n });\n \n describe('tools/list behavior', () => {\n it('returns only begin_session while gated', async () => { ... });\n it('returns all tools + read_prompts after ungating', async () => { ... });\n });\n });\n ```",
"testStrategy": "This task IS the test implementation. Verify:\n- Initial state tests pass\n- Transition tests cover happy paths\n- Edge case tests (already ungated, etc.)\n- Notification tests verify signals sent\n- Tests use proper mocking",
"priority": "high",
"dependencies": [
"50",
"52"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:51:03.832Z"
},
{
"id": "66",
"title": "Add unit tests for LLM prompt selection",
"description": "Create unit tests for the LLM-based prompt selection service covering LLM interactions, fallback behavior, and priority 10 handling.",
"details": "1. Add tests in `/src/mcplocal/tests/services/llm-prompt-selector.test.ts`:\n ```typescript\n describe('LlmPromptSelectorService', () => {\n describe('priority 10 handling', () => {\n it('always includes priority 10 prompts', async () => {\n const result = await selector.selectPrompts(['unrelated'], promptIndex);\n expect(result.priority10).toContain(priority10Prompt);\n });\n });\n \n describe('LLM selection', () => {\n it('sends tags and index to heavy LLM', async () => {\n await selector.selectPrompts(['zigbee', 'mqtt'], promptIndex);\n expect(mockLlm.complete).toHaveBeenCalledWith(\n expect.stringContaining('zigbee')\n );\n });\n \n it('parses LLM response correctly', async () => {\n mockLlm.complete.mockResolvedValue(\n '[{\"name\": \"prompt1\", \"reason\": \"relevant\"}]'\n );\n const result = await selector.selectPrompts(['test'], promptIndex);\n expect(result.selected[0].name).toBe('prompt1');\n });\n });\n \n describe('fallback behavior', () => {\n it('falls back to tag matcher on LLM error', async () => { ... });\n it('falls back on LLM timeout', async () => { ... });\n it('falls back when no LLM available', async () => { ... });\n });\n \n describe('summary generation', () => {\n it('generates missing summaries with fast LLM', async () => { ... });\n });\n });\n ```",
"testStrategy": "This task IS the test implementation. Verify:\n- Priority 10 tests pass\n- LLM interaction tests use proper mocks\n- Fallback tests cover all error scenarios\n- Summary generation tests pass\n- Response parsing handles edge cases",
"priority": "high",
"dependencies": [
"46"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:51:03.836Z"
},
{
"id": "67",
"title": "Add integration tests for gated session flow",
"description": "Create end-to-end integration tests for the complete gated session flow including connect, begin_session, tool calls, and read_prompts.",
"details": "1. Add tests in `/src/mcplocal/tests/integration/gated-flow.test.ts`:\n ```typescript\n describe('Gated Session Flow Integration', () => {\n let app: FastifyInstance;\n let mcpClient: McpClient;\n \n beforeAll(async () => {\n app = await createTestApp();\n // Seed test project with gated=true and test prompts\n });\n \n describe('end-to-end gated flow', () => {\n it('connect → begin_session with tags → tools available → correct prompts', async () => {\n // 1. Connect to MCP endpoint\n const session = await mcpClient.connect(app, 'test-project');\n \n // 2. Verify only begin_session available\n const toolsBefore = await session.listTools();\n expect(toolsBefore.map(t => t.name)).toEqual(['begin_session']);\n \n // 3. Call begin_session\n const briefing = await session.callTool('begin_session', {\n tags: ['test', 'integration']\n });\n expect(briefing).toContain('matched prompt content');\n \n // 4. Verify all tools now available\n const toolsAfter = await session.listTools();\n expect(toolsAfter.map(t => t.name)).toContain('read_prompts');\n });\n });\n \n describe('end-to-end intercept flow', () => {\n it('connect → skip begin_session → call tool → keywords extracted → briefing injected', async () => { ... });\n });\n \n describe('end-to-end read_prompts', () => {\n it('after ungating → request more context → additional prompts → no duplicates', async () => { ... });\n });\n });\n ```",
"testStrategy": "This task IS the test implementation. Verify:\n- Happy path tests pass\n- Intercept path tests pass\n- read_prompts deduplication works\n- Tests use realistic data\n- Tests clean up properly",
"priority": "high",
"dependencies": [
"65"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T22:51:03.840Z"
},
{
"id": "68",
"title": "Add integration tests for prompt links",
"description": "Create end-to-end integration tests for prompt link creation, resolution, and dead link detection.",
"details": "1. Add tests in `/src/mcplocal/tests/integration/prompt-links.test.ts`:\n ```typescript\n describe('Prompt Links Integration', () => {\n describe('link creation', () => {\n it('creates link with RBAC permission', async () => {\n // Setup: user with edit permission on target project\n const prompt = await api.post('/api/v1/prompts', {\n name: 'linked-prompt',\n content: '[Link]',\n projectId: targetProject.id,\n linkTarget: 'source-project/server:uri'\n });\n expect(prompt.linkTarget).toBe('source-project/server:uri');\n });\n \n it('rejects link creation without RBAC permission', async () => { ... });\n });\n \n describe('link resolution', () => {\n it('fetches content from source server', async () => { ... });\n it('uses service account for RBAC', async () => { ... });\n });\n \n describe('dead link lifecycle', () => {\n it('detects dead link when source unavailable', async () => {\n // Kill source server\n const prompts = await api.get('/api/v1/prompts');\n const linked = prompts.find(p => p.linkTarget);\n expect(linked.linkStatus).toBe('dead');\n });\n \n it('recovers when source restored', async () => { ... });\n });\n });\n ```",
"testStrategy": "This task IS the test implementation. Verify:\n- RBAC tests cover permission scenarios\n- Resolution tests verify content fetched\n- Dead link tests cover full lifecycle\n- Tests properly mock/control source servers\n- Tests clean up resources",
"priority": "medium",
"dependencies": [
"57",
"58"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:12:22.348Z"
},
{
"id": "69",
"title": "Add CLI unit tests for new prompt and project flags",
"description": "Create unit tests for the new CLI flags: --priority, --link for prompts, -A for get, and gated field for projects.",
"details": "1. Add tests in `/src/cli/tests/commands/prompt.test.ts`:\n ```typescript\n describe('create prompt command', () => {\n it('--priority sets prompt priority', async () => {\n await cli('create prompt test --priority 8');\n expect(mockApi.post).toHaveBeenCalledWith(\n '/api/v1/prompts',\n expect.objectContaining({ priority: 8 })\n );\n });\n \n it('--priority validates range 1-10', async () => {\n await expect(cli('create prompt test --priority 15'))\n .rejects.toThrow('Priority must be between 1 and 10');\n });\n \n it('--link sets linkTarget', async () => {\n await cli('create prompt test --link proj/srv:uri');\n expect(mockApi.post).toHaveBeenCalledWith(\n '/api/v1/prompts',\n expect.objectContaining({ linkTarget: 'proj/srv:uri' })\n );\n });\n });\n \n describe('get prompt command', () => {\n it('-A shows all projects', async () => {\n await cli('get prompt -A');\n expect(mockApi.get).toHaveBeenCalledWith('/api/v1/prompts?all=true');\n });\n });\n ```\n\n2. Add tests for project gated field editing\n\n3. Add tests for describe project output",
"testStrategy": "This task IS the test implementation. Verify:\n- Flag parsing tests pass\n- Validation tests cover edge cases\n- API call tests verify correct parameters\n- Output formatting tests verify columns\n- Tests mock API properly",
"priority": "medium",
"dependencies": [
"59",
"60",
"61",
"62"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:12:22.352Z"
},
{
"id": "70",
"title": "Add shell completions for new CLI flags",
"description": "Update shell completion scripts (bash, zsh, fish) to include completions for new flags: --priority, --link, -A, and gated values.",
"details": "1. Update `/completions/mcpctl.fish`:\n ```fish\n # create prompt completions\n complete -c mcpctl -n '__fish_seen_subcommand_from create; and __fish_seen_subcommand_from prompt' -l priority -d 'Priority level (1-10)' -a '(seq 1 10)'\n complete -c mcpctl -n '__fish_seen_subcommand_from create; and __fish_seen_subcommand_from prompt' -l link -d 'Link to MCP resource (project/server:uri)'\n \n # get prompt completions \n complete -c mcpctl -n '__fish_seen_subcommand_from get; and __fish_seen_subcommand_from prompt' -s A -l all-projects -d 'Show prompts from all projects'\n ```\n\n2. Update bash completions similarly\n\n3. Update zsh completions similarly\n\n4. Add dynamic completion for priority values (1-10)",
"testStrategy": "- Manual test: Fish completions suggest --priority with values 1-10\n- Manual test: Fish completions suggest --link flag\n- Manual test: Fish completions suggest -A/--all-projects\n- Manual test: Bash completions work similarly\n- Manual test: Zsh completions work similarly",
"priority": "low",
"dependencies": [
"59",
"60"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-25T23:12:22.363Z"
}
],
"metadata": {
"version": "1.0.0",
"lastModified": "2026-02-21T18:52:29.084Z",
"taskCount": 36,
"completedCount": 33,
"lastModified": "2026-02-25T23:12:22.364Z",
"taskCount": 70,
"completedCount": 67,
"tags": [
"master"
]


@@ -2,14 +2,83 @@ _mcpctl() {
local cur prev words cword
_init_completion || return
local commands="status login logout config get describe delete logs create edit apply backup restore help"
local global_opts="-v --version --daemon-url --direct -h --help"
local resources="servers instances secrets templates projects users groups rbac"
local commands="status login logout config get describe delete logs create edit apply backup restore mcp approve help"
local project_commands="attach-server detach-server get describe delete logs create edit help"
local global_opts="-v --version --daemon-url --direct --project -h --help"
local resources="servers instances secrets templates projects users groups rbac prompts promptrequests"
case "${words[1]}" in
# Check if --project was given
local has_project=false
local i
for ((i=1; i < cword; i++)); do
if [[ "${words[i]}" == "--project" ]]; then
has_project=true
break
fi
done
# Find the first subcommand (skip --project and its argument, skip flags)
local subcmd=""
local subcmd_pos=0
for ((i=1; i < cword; i++)); do
if [[ "${words[i]}" == "--project" || "${words[i]}" == "--daemon-url" ]]; then
((i++)) # skip the argument
continue
fi
if [[ "${words[i]}" != -* ]]; then
subcmd="${words[i]}"
subcmd_pos=$i
break
fi
done
# Find the resource type after get/describe/delete/edit
local resource_type=""
if [[ -n "$subcmd_pos" ]] && [[ $subcmd_pos -gt 0 ]]; then
for ((i=subcmd_pos+1; i < cword; i++)); do
if [[ "${words[i]}" != -* ]] && [[ " $resources " == *" ${words[i]} "* ]]; then
resource_type="${words[i]}"
break
fi
done
fi
# If completing the --project value
if [[ "$prev" == "--project" ]]; then
local names
names=$(mcpctl get projects -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
COMPREPLY=($(compgen -W "$names" -- "$cur"))
return
fi
# Fetch resource names dynamically (jq extracts only top-level names)
_mcpctl_resource_names() {
local rt="$1"
if [[ -n "$rt" ]]; then
# Instances don't have a name field — use server.name instead
if [[ "$rt" == "instances" ]]; then
mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
else
mcpctl get "$rt" -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
fi
fi
}
# Get the --project value from the command line
_mcpctl_get_project_value() {
local i
for ((i=1; i < cword; i++)); do
if [[ "${words[i]}" == "--project" ]] && (( i+1 < cword )); then
echo "${words[i+1]}"
return
fi
done
}
case "$subcmd" in
config)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "view set path reset claude-generate impersonate help" -- "$cur"))
if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
COMPREPLY=($(compgen -W "view set path reset claude claude-generate setup impersonate help" -- "$cur"))
fi
return ;;
status)
@@ -20,36 +89,32 @@ _mcpctl() {
return ;;
logout)
return ;;
get)
if [[ $cword -eq 2 ]]; then
mcp)
return ;;
get|describe|delete)
if [[ -z "$resource_type" ]]; then
COMPREPLY=($(compgen -W "$resources" -- "$cur"))
else
COMPREPLY=($(compgen -W "-o --output -h --help" -- "$cur"))
fi
return ;;
describe)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "$resources" -- "$cur"))
else
COMPREPLY=($(compgen -W "-o --output --show-values -h --help" -- "$cur"))
fi
return ;;
delete)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "$resources" -- "$cur"))
local names
names=$(_mcpctl_resource_names "$resource_type")
COMPREPLY=($(compgen -W "$names -o --output -h --help" -- "$cur"))
fi
return ;;
edit)
if [[ $cword -eq 2 ]]; then
if [[ -z "$resource_type" ]]; then
COMPREPLY=($(compgen -W "servers projects" -- "$cur"))
else
local names
names=$(_mcpctl_resource_names "$resource_type")
COMPREPLY=($(compgen -W "$names -h --help" -- "$cur"))
fi
return ;;
logs)
COMPREPLY=($(compgen -W "--tail --since -f --follow -h --help" -- "$cur"))
return ;;
create)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "server secret project user group rbac help" -- "$cur"))
if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
COMPREPLY=($(compgen -W "server secret project user group rbac prompt promptrequest help" -- "$cur"))
fi
return ;;
apply)
@@ -61,13 +126,51 @@ _mcpctl() {
restore)
COMPREPLY=($(compgen -W "-i --input -p --password -c --conflict -h --help" -- "$cur"))
return ;;
attach-server)
# Only complete if no server arg given yet (first arg after subcmd)
if [[ $((cword - subcmd_pos)) -ne 1 ]]; then return; fi
local proj names all_servers proj_servers
proj=$(_mcpctl_get_project_value)
if [[ -n "$proj" ]]; then
all_servers=$(mcpctl get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
proj_servers=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
names=$(comm -23 <(echo "$all_servers" | sort) <(echo "$proj_servers" | sort))
else
names=$(_mcpctl_resource_names "servers")
fi
COMPREPLY=($(compgen -W "$names" -- "$cur"))
return ;;
detach-server)
# Only complete if no server arg given yet (first arg after subcmd)
if [[ $((cword - subcmd_pos)) -ne 1 ]]; then return; fi
local proj names
proj=$(_mcpctl_get_project_value)
if [[ -n "$proj" ]]; then
names=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
fi
COMPREPLY=($(compgen -W "$names" -- "$cur"))
return ;;
approve)
if [[ -z "$resource_type" ]]; then
COMPREPLY=($(compgen -W "promptrequest" -- "$cur"))
else
local names
names=$(_mcpctl_resource_names "$resource_type")
COMPREPLY=($(compgen -W "$names" -- "$cur"))
fi
return ;;
help)
COMPREPLY=($(compgen -W "$commands" -- "$cur"))
return ;;
esac
if [[ $cword -eq 1 ]]; then
COMPREPLY=($(compgen -W "$commands $global_opts" -- "$cur"))
# No subcommand yet — offer commands based on context
if [[ -z "$subcmd" ]]; then
if $has_project; then
COMPREPLY=($(compgen -W "$project_commands $global_opts" -- "$cur"))
else
COMPREPLY=($(compgen -W "$commands $global_opts" -- "$cur"))
fi
fi
}
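The subcommand-scan in the function above can be exercised outside an interactive shell. Below is a standalone sketch of the same loop (the `find_subcmd` helper name is invented for illustration): it skips `--project`/`--daemon-url` together with their arguments, ignores other flags, and returns the first remaining word.

```shell
# Standalone sketch of the completion's subcommand detection: given the
# word list, skip --project/--daemon-url (and the value that follows each),
# skip any other flags, and print the first non-flag word.
find_subcmd() {
  local -a words=("$@")
  local i
  for ((i = 1; i < ${#words[@]}; i++)); do
    if [[ "${words[i]}" == "--project" || "${words[i]}" == "--daemon-url" ]]; then
      ((i++))  # skip the flag's argument as well
      continue
    fi
    if [[ "${words[i]}" != -* ]]; then
      echo "${words[i]}"
      return
    fi
  done
}
```

Passing `mcpctl --project myproj get servers` through this helper yields `get`, mirroring how the completion derives `$subcmd`.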


@@ -1,6 +1,10 @@
# mcpctl fish completions
set -l commands status login logout config get describe delete logs create edit apply backup restore help
# Erase any stale completions from previous versions
complete -c mcpctl -e
set -l commands status login logout config get describe delete logs create edit apply patch backup restore mcp approve help
set -l project_commands attach-server detach-server get describe delete logs create edit help
# Disable file completions by default
complete -c mcpctl -f
@@ -9,31 +13,206 @@ complete -c mcpctl -f
complete -c mcpctl -s v -l version -d 'Show version'
complete -c mcpctl -l daemon-url -d 'mcplocal daemon URL' -x
complete -c mcpctl -l direct -d 'Bypass mcplocal, connect directly to mcpd'
complete -c mcpctl -l project -d 'Target project context' -x
complete -c mcpctl -s h -l help -d 'Show help'
# Top-level commands
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a status -d 'Show status and connectivity'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a login -d 'Authenticate with mcpd'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a logout -d 'Log out'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a config -d 'Manage configuration'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a get -d 'List resources'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a describe -d 'Show resource details'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a delete -d 'Delete a resource'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a logs -d 'Get instance logs'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a create -d 'Create a resource'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a edit -d 'Edit a resource'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a apply -d 'Apply configuration from file'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a backup -d 'Backup configuration'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a restore -d 'Restore from backup'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a help -d 'Show help'
# Helper: check if --project was given
function __mcpctl_has_project
set -l tokens (commandline -opc)
for i in (seq (count $tokens))
if test "$tokens[$i]" = "--project"
return 0
end
end
return 1
end
# Resource types for get/describe/delete/edit
set -l resources servers instances secrets templates projects users groups rbac
complete -c mcpctl -n "__fish_seen_subcommand_from get describe delete" -a "$resources" -d 'Resource type'
complete -c mcpctl -n "__fish_seen_subcommand_from edit" -a 'servers projects' -d 'Resource type'
# Helper: check if a resource type has been selected after get/describe/delete/edit
set -l resources servers instances secrets templates projects users groups rbac prompts promptrequests
# All accepted resource aliases (plural + singular + short forms)
set -l resource_aliases servers server srv instances instance inst secrets secret sec templates template tpl projects project proj users user groups group rbac rbac-definition rbac-binding prompts prompt promptrequests promptrequest pr
# get/describe/delete options
function __mcpctl_needs_resource_type
set -l tokens (commandline -opc)
set -l found_cmd false
for tok in $tokens
if $found_cmd
# Check if next token after get/describe/delete/edit is a resource type or alias
if contains -- $tok $resource_aliases
return 1 # resource type already present
end
end
if contains -- $tok get describe delete edit patch
set found_cmd true
end
end
if $found_cmd
return 0 # command found but no resource type yet
end
return 1
end
# Map any resource alias to the canonical plural form for API calls
function __mcpctl_resolve_resource
switch $argv[1]
case server srv servers; echo servers
case instance inst instances; echo instances
case secret sec secrets; echo secrets
case template tpl templates; echo templates
case project proj projects; echo projects
case user users; echo users
case group groups; echo groups
case rbac rbac-definition rbac-binding; echo rbac
case prompt prompts; echo prompts
case promptrequest promptrequests pr; echo promptrequests
case '*'; echo $argv[1]
end
end
function __mcpctl_get_resource_type
set -l tokens (commandline -opc)
set -l found_cmd false
for tok in $tokens
if $found_cmd
if contains -- $tok $resource_aliases
__mcpctl_resolve_resource $tok
return
end
end
if contains -- $tok get describe delete edit patch
set found_cmd true
end
end
end
# Fetch resource names dynamically from the API (jq extracts only top-level names)
function __mcpctl_resource_names
set -l resource (__mcpctl_get_resource_type)
if test -z "$resource"
return
end
# Instances don't have a name field — use server.name instead
if test "$resource" = "instances"
mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
else if test "$resource" = "prompts" -o "$resource" = "promptrequests"
# Use -A to include all projects, not just global
mcpctl get $resource -A -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
else
mcpctl get $resource -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
end
end
# Fetch project names for --project value
function __mcpctl_project_names
mcpctl get projects -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
end
# Helper: get the --project value from the command line
function __mcpctl_get_project_value
set -l tokens (commandline -opc)
for i in (seq (count $tokens))
if test "$tokens[$i]" = "--project"; and test $i -lt (count $tokens)
echo $tokens[(math $i + 1)]
return
end
end
end
# Servers currently attached to the project (for detach-server)
function __mcpctl_project_servers
set -l proj (__mcpctl_get_project_value)
if test -z "$proj"
return
end
mcpctl --project $proj get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
end
# Servers NOT attached to the project (for attach-server)
function __mcpctl_available_servers
set -l proj (__mcpctl_get_project_value)
if test -z "$proj"
# No project — show all servers
mcpctl get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
return
end
set -l all (mcpctl get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
set -l attached (mcpctl --project $proj get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
for s in $all
if not contains -- $s $attached
echo $s
end
end
end
# --project value completion
complete -c mcpctl -l project -xa '(__mcpctl_project_names)'
# Top-level commands (without --project)
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a status -d 'Show status and connectivity'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a login -d 'Authenticate with mcpd'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a logout -d 'Log out'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a config -d 'Manage configuration'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a get -d 'List resources'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a describe -d 'Show resource details'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a delete -d 'Delete a resource'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a logs -d 'Get instance logs'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a create -d 'Create a resource'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a edit -d 'Edit a resource'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a apply -d 'Apply configuration from file'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a backup -d 'Backup configuration'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a restore -d 'Restore from backup'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a patch -d 'Patch a resource field'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a approve -d 'Approve a prompt request'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a help -d 'Show help'
# Project-scoped commands (with --project)
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a attach-server -d 'Attach a server to the project'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a detach-server -d 'Detach a server from the project'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a get -d 'List resources (scoped to project)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a describe -d 'Show resource details'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a delete -d 'Delete a resource'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a logs -d 'Get instance logs'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a create -d 'Create a resource'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a edit -d 'Edit a resource'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a help -d 'Show help'
# Resource types — only when resource type not yet selected
complete -c mcpctl -n "__fish_seen_subcommand_from get describe delete patch; and __mcpctl_needs_resource_type" -a "$resources" -d 'Resource type'
complete -c mcpctl -n "__fish_seen_subcommand_from edit; and __mcpctl_needs_resource_type" -a 'servers secrets projects groups rbac prompts promptrequests' -d 'Resource type'
# Resource names — after resource type is selected
complete -c mcpctl -n "__fish_seen_subcommand_from get describe delete edit patch; and not __mcpctl_needs_resource_type" -a '(__mcpctl_resource_names)' -d 'Resource name'
# Helper: check if attach-server/detach-server already has a server argument
function __mcpctl_needs_server_arg
set -l tokens (commandline -opc)
set -l found_cmd false
for tok in $tokens
if $found_cmd
if not string match -q -- '-*' $tok
return 1 # server arg already present
end
end
if contains -- $tok attach-server detach-server
set found_cmd true
end
end
if $found_cmd
return 0 # command found but no server arg yet
end
return 1
end
# attach-server: show servers NOT in the project (only if no server arg yet)
complete -c mcpctl -n "__fish_seen_subcommand_from attach-server; and __mcpctl_needs_server_arg" -a '(__mcpctl_available_servers)' -d 'Server'
# detach-server: show servers IN the project (only if no server arg yet)
complete -c mcpctl -n "__fish_seen_subcommand_from detach-server; and __mcpctl_needs_server_arg" -a '(__mcpctl_project_servers)' -d 'Server'
# get/describe options
complete -c mcpctl -n "__fish_seen_subcommand_from get" -s o -l output -d 'Output format' -xa 'table json yaml'
complete -c mcpctl -n "__fish_seen_subcommand_from get" -l project -d 'Filter by project' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__fish_seen_subcommand_from get" -s A -l all -d 'Show all resources across projects'
complete -c mcpctl -n "__fish_seen_subcommand_from describe" -s o -l output -d 'Output format' -xa 'detail json yaml'
complete -c mcpctl -n "__fish_seen_subcommand_from describe" -l show-values -d 'Show secret values'
@@ -43,24 +222,42 @@ complete -c mcpctl -n "__fish_seen_subcommand_from login" -l email -d 'Email add
complete -c mcpctl -n "__fish_seen_subcommand_from login" -l password -d 'Password' -x
# config subcommands
set -l config_cmds view set path reset claude-generate impersonate
set -l config_cmds view set path reset claude claude-generate setup impersonate
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a view -d 'Show configuration'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a set -d 'Set a config value'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a path -d 'Show config file path'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a reset -d 'Reset to defaults'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a claude-generate -d 'Generate .mcp.json'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a claude -d 'Generate .mcp.json for project'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a setup -d 'Configure LLM provider'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a impersonate -d 'Impersonate a user'
# create subcommands
set -l create_cmds server secret project user group rbac
set -l create_cmds server secret project user group rbac prompt promptrequest
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a server -d 'Create a server'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a secret -d 'Create a secret'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a project -d 'Create a project'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a user -d 'Create a user'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a group -d 'Create a group'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a rbac -d 'Create an RBAC binding'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a prompt -d 'Create an approved prompt'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a promptrequest -d 'Create a prompt request'
# logs options
# create prompt/promptrequest options
complete -c mcpctl -n "__fish_seen_subcommand_from create; and __fish_seen_subcommand_from prompt promptrequest" -l project -d 'Project name' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and __fish_seen_subcommand_from prompt promptrequest" -l content -d 'Prompt content text' -x
complete -c mcpctl -n "__fish_seen_subcommand_from create; and __fish_seen_subcommand_from prompt promptrequest" -l content-file -d 'Read content from file' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from create; and __fish_seen_subcommand_from prompt promptrequest" -l priority -d 'Priority 1-10' -xa '(seq 1 10)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and __fish_seen_subcommand_from prompt" -l link -d 'Link to MCP resource (project/server:uri)' -x
# create project --gated/--no-gated
complete -c mcpctl -n "__fish_seen_subcommand_from create; and __fish_seen_subcommand_from project" -l gated -d 'Enable gated sessions'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and __fish_seen_subcommand_from project" -l no-gated -d 'Disable gated sessions'
# logs: takes a server/instance name, then options
function __mcpctl_instance_names
mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
end
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -a '(__mcpctl_instance_names)' -d 'Server name'
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -l tail -d 'Number of lines' -x
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -l since -d 'Since timestamp' -x
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -s f -l follow -d 'Follow log output'
@@ -74,6 +271,53 @@ complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s i -l input -d 'In
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s p -l password -d 'Decryption password' -x
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s c -l conflict -d 'Conflict strategy' -xa 'skip overwrite fail'
# approve: first arg is resource type, second is name
function __mcpctl_approve_needs_type
set -l tokens (commandline -opc)
set -l found false
for tok in $tokens
if $found
if contains -- $tok promptrequest promptrequests
return 1 # type already given
end
end
if test "$tok" = "approve"
set found true
end
end
if $found
return 0 # approve found but no type yet
end
return 1
end
function __mcpctl_approve_needs_name
set -l tokens (commandline -opc)
set -l found_type false
for tok in $tokens
if $found_type
# next non-flag token after type is the name
if not string match -q -- '-*' $tok
return 1 # name already given
end
end
if contains -- $tok promptrequest promptrequests
set found_type true
end
end
if $found_type
return 0 # type given but no name yet
end
return 1
end
function __mcpctl_promptrequest_names
mcpctl get promptrequests -A -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
end
complete -c mcpctl -n "__fish_seen_subcommand_from approve; and __mcpctl_approve_needs_type" -a 'promptrequest' -d 'Resource type'
complete -c mcpctl -n "__fish_seen_subcommand_from approve; and __mcpctl_approve_needs_name" -a '(__mcpctl_promptrequest_names)' -d 'Prompt request name'
# apply takes a file
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -s f -l file -d 'Configuration file' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -F
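For readers less familiar with fish, the token-scanning logic in the two `__mcpctl_approve_needs_*` helpers above can be sketched in TypeScript (illustrative only, not shipped code; `approveStage` is a hypothetical name): scan the tokens typed so far and decide which completion stage applies.

```typescript
// Mirrors the fish helpers: after `approve`, first offer resource types;
// once a type token is present, offer names until a non-flag name appears.
function approveStage(tokens: string[]): 'type' | 'name' | 'done' {
  const i = tokens.indexOf('approve');
  if (i === -1) return 'done';
  const rest = tokens.slice(i + 1);
  const typeIdx = rest.findIndex((t) => t === 'promptrequest' || t === 'promptrequests');
  if (typeIdx === -1) return 'type'; // approve seen, no type yet
  const named = rest.slice(typeIdx + 1).some((t) => !t.startsWith('-'));
  return named ? 'done' : 'name'; // type seen; next non-flag token is the name
}
```

Collapsing the two boolean helpers into one three-state function is just for readability here; fish completions need separate predicates because each `complete -n` condition is a single command.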

View File

@@ -5,6 +5,8 @@ release: "1"
maintainer: michal
description: kubectl-like CLI for managing MCP servers
license: MIT
depends:
- jq
contents:
- src: ./dist/mcpctl
dst: /usr/bin/mcpctl

55
pr.sh Executable file
View File

@@ -0,0 +1,55 @@
#!/usr/bin/env bash
# Usage: bash pr.sh "PR title" "PR body"
# Loads GITEA_TOKEN from .env automatically
set -euo pipefail
# Load .env if GITEA_TOKEN not already exported
if [ -z "${GITEA_TOKEN:-}" ] && [ -f .env ]; then
set -a
source .env
set +a
fi
GITEA_URL="${GITEA_URL:-http://10.0.0.194:3012}"
REPO="${GITEA_OWNER:-michal}/mcpctl"
TITLE="${1:?Usage: pr.sh <title> [body]}"
BODY="${2:-}"
BASE="${3:-main}"
HEAD=$(git rev-parse --abbrev-ref HEAD)
if [ "$HEAD" = "$BASE" ]; then
echo "Error: already on $BASE, switch to a feature branch first" >&2
exit 1
fi
if [ -z "${GITEA_TOKEN:-}" ]; then
echo "Error: GITEA_TOKEN not set and .env not found" >&2
exit 1
fi
# Push if needed
if ! git rev-parse --verify "origin/$HEAD" &>/dev/null; then
git push -u origin "$HEAD"
else
git push
fi
# Create PR
RESPONSE=$(curl -s -X POST "$GITEA_URL/api/v1/repos/$REPO/pulls" \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d "$(jq -n --arg t "$TITLE" --arg b "$BODY" --arg h "$HEAD" --arg base "$BASE" \
'{title: $t, body: $b, head: $h, base: $base}')")
PR_NUM=$(echo "$RESPONSE" | jq -r '.number // empty')
PR_URL=$(echo "$RESPONSE" | jq -r '.html_url // empty')
if [ -z "$PR_NUM" ]; then
echo "Error creating PR:" >&2
{ echo "$RESPONSE" | jq . 2>/dev/null || echo "$RESPONSE"; } >&2
exit 1
fi
echo "PR #$PR_NUM: ${PR_URL:-$GITEA_URL/$REPO/pulls/$PR_NUM}"

View File

@@ -24,7 +24,10 @@ export class ApiError extends Error {
function request<T>(method: string, url: string, timeout: number, body?: unknown, token?: string): Promise<ApiResponse<T>> {
return new Promise((resolve, reject) => {
const parsed = new URL(url);
const headers: Record<string, string> = { 'Content-Type': 'application/json' };
const headers: Record<string, string> = {};
if (body !== undefined) {
headers['Content-Type'] = 'application/json';
}
if (token) {
headers['Authorization'] = `Bearer ${token}`;
}

View File

@@ -1,5 +1,5 @@
import { Command } from 'commander';
import { readFileSync } from 'node:fs';
import { readFileSync, readSync } from 'node:fs';
import yaml from 'js-yaml';
import { z } from 'zod';
import type { ApiClient } from '../api-client.js';
@@ -76,13 +76,14 @@ const GroupSpecSchema = z.object({
});
const RbacSubjectSchema = z.object({
kind: z.enum(['User', 'Group']),
kind: z.enum(['User', 'Group', 'ServiceAccount']),
name: z.string().min(1),
});
const RESOURCE_ALIASES: Record<string, string> = {
server: 'servers', instance: 'instances', secret: 'secrets',
project: 'projects', template: 'templates', user: 'users', group: 'groups',
prompt: 'prompts', promptrequest: 'promptrequests',
};
const RbacRoleBindingSchema = z.union([
@@ -103,10 +104,20 @@ const RbacBindingSpecSchema = z.object({
roleBindings: z.array(RbacRoleBindingSchema).default([]),
});
const PromptSpecSchema = z.object({
name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/),
content: z.string().min(1).max(50000),
projectId: z.string().optional(),
priority: z.number().int().min(1).max(10).optional(),
linkTarget: z.string().optional(),
});
const ProjectSpecSchema = z.object({
name: z.string().min(1),
description: z.string().default(''),
prompt: z.string().max(10000).default(''),
proxyMode: z.enum(['direct', 'filtered']).default('direct'),
gated: z.boolean().default(true),
llmProvider: z.string().optional(),
llmModel: z.string().optional(),
servers: z.array(z.string()).default([]),
@@ -121,6 +132,7 @@ const ApplyConfigSchema = z.object({
templates: z.array(TemplateSpecSchema).default([]),
rbacBindings: z.array(RbacBindingSpecSchema).default([]),
rbac: z.array(RbacBindingSpecSchema).default([]),
prompts: z.array(PromptSpecSchema).default([]),
}).transform((data) => ({
...data,
// Merge rbac into rbacBindings so both keys work
@@ -158,6 +170,7 @@ export function createApplyCommand(deps: ApplyCommandDeps): Command {
if (config.projects.length > 0) log(` ${config.projects.length} project(s)`);
if (config.templates.length > 0) log(` ${config.templates.length} template(s)`);
if (config.rbacBindings.length > 0) log(` ${config.rbacBindings.length} rbacBinding(s)`);
if (config.prompts.length > 0) log(` ${config.prompts.length} prompt(s)`);
return;
}
@@ -165,11 +178,27 @@ export function createApplyCommand(deps: ApplyCommandDeps): Command {
});
}
function readStdin(): string {
const chunks: Buffer[] = [];
const buf = Buffer.alloc(4096);
try {
// eslint-disable-next-line no-constant-condition
while (true) {
const bytesRead = readSync(0, buf, 0, buf.length, null);
if (bytesRead === 0) break;
chunks.push(Buffer.from(buf.subarray(0, bytesRead))); // copy: buf is reused next iteration
}
} catch {
// EOF or closed pipe
}
return Buffer.concat(chunks).toString('utf-8');
}
function loadConfigFile(path: string): ApplyConfig {
const raw = readFileSync(path, 'utf-8');
const raw = path === '-' ? readStdin() : readFileSync(path, 'utf-8');
let parsed: unknown;
if (path.endsWith('.json')) {
if (path === '-' ? raw.trimStart().startsWith('{') : path.endsWith('.json')) {
parsed = JSON.parse(raw);
} else {
parsed = yaml.load(raw);
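The stdin branch above sniffs the format from content instead of the file extension. A self-contained sketch of that rule (assumed behavior; `looksLikeJson` is a hypothetical name, not the shipped helper):

```typescript
// Stdin ("-") has no file extension, so the first non-whitespace
// character decides JSON vs YAML; named files go by extension.
function looksLikeJson(path: string, raw: string): boolean {
  return path === '-' ? raw.trimStart().startsWith('{') : path.endsWith('.json');
}

looksLikeJson('-', '  {"servers": []}'); // true  -> JSON.parse
looksLikeJson('config.yaml', 'servers: []'); // false -> yaml.load
```

One edge case worth noting in review: a YAML flow mapping piped via stdin (`{a: 1}`) also starts with `{` and would be routed to `JSON.parse`, which rejects it.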
@@ -292,6 +321,24 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
log(`Error applying rbacBinding '${rbacBinding.name}': ${err instanceof Error ? err.message : err}`);
}
}
// Apply prompts
for (const prompt of config.prompts) {
try {
const existing = await findByName(client, 'prompts', prompt.name);
if (existing) {
const updateData: Record<string, unknown> = { content: prompt.content };
if (prompt.priority !== undefined) updateData.priority = prompt.priority;
await client.put(`/api/v1/prompts/${(existing as { id: string }).id}`, updateData);
log(`Updated prompt: ${prompt.name}`);
} else {
await client.post('/api/v1/prompts', prompt);
log(`Created prompt: ${prompt.name}`);
}
} catch (err) {
log(`Error applying prompt '${prompt.name}': ${err instanceof Error ? err.message : err}`);
}
}
}
async function findByName(client: ApiClient, resource: string, name: string): Promise<unknown | null> {

View File

@@ -0,0 +1,464 @@
import { Command } from 'commander';
import http from 'node:http';
import https from 'node:https';
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';
import { loadConfig, saveConfig } from '../config/index.js';
import type { ConfigLoaderDeps, McpctlConfig, LlmConfig, LlmProviderName, LlmProviderEntry, LlmTier } from '../config/index.js';
import type { SecretStore } from '@mcpctl/shared';
import { createSecretStore } from '@mcpctl/shared';
const execFileAsync = promisify(execFile);
export interface ConfigSetupPrompt {
select<T>(message: string, choices: Array<{ name: string; value: T; description?: string }>): Promise<T>;
input(message: string, defaultValue?: string): Promise<string>;
password(message: string): Promise<string>;
confirm(message: string, defaultValue?: boolean): Promise<boolean>;
}
export interface ConfigSetupDeps {
configDeps: Partial<ConfigLoaderDeps>;
secretStore: SecretStore;
log: (...args: string[]) => void;
prompt: ConfigSetupPrompt;
fetchModels: (url: string, path: string) => Promise<string[]>;
whichBinary: (name: string) => Promise<string | null>;
}
interface ProviderChoice {
name: string;
value: LlmProviderName;
description: string;
}
/** Provider config fields returned by per-provider setup functions. */
interface ProviderFields {
model?: string;
url?: string;
binaryPath?: string;
}
const FAST_PROVIDER_CHOICES: ProviderChoice[] = [
{ name: 'vLLM', value: 'vllm', description: 'Self-hosted vLLM (OpenAI-compatible)' },
{ name: 'Ollama', value: 'ollama', description: 'Local models via Ollama' },
];
const HEAVY_PROVIDER_CHOICES: ProviderChoice[] = [
{ name: 'Gemini CLI', value: 'gemini-cli', description: 'Google Gemini via local CLI (free, no API key)' },
{ name: 'Anthropic (Claude)', value: 'anthropic', description: 'Claude API (requires API key)' },
{ name: 'OpenAI', value: 'openai', description: 'OpenAI API (requires API key)' },
{ name: 'DeepSeek', value: 'deepseek', description: 'DeepSeek API (requires API key)' },
];
const ALL_PROVIDER_CHOICES: ProviderChoice[] = [
...FAST_PROVIDER_CHOICES,
...HEAVY_PROVIDER_CHOICES,
{ name: 'None (disable)', value: 'none', description: 'Disable LLM features' },
];
const GEMINI_MODELS = ['gemini-2.5-flash', 'gemini-2.5-pro', 'gemini-2.0-flash'];
const ANTHROPIC_MODELS = ['claude-3-5-haiku-20241022', 'claude-sonnet-4-20250514', 'claude-opus-4-20250514'];
const DEEPSEEK_MODELS = ['deepseek-chat', 'deepseek-reasoner'];
function defaultFetchModels(baseUrl: string, path: string): Promise<string[]> {
return new Promise((resolve) => {
const url = new URL(path, baseUrl);
const isHttps = url.protocol === 'https:';
const transport = isHttps ? https : http;
const req = transport.get({
hostname: url.hostname,
port: url.port || (isHttps ? 443 : 80),
path: url.pathname,
timeout: 5000,
}, (res) => {
const chunks: Buffer[] = [];
res.on('data', (chunk: Buffer) => chunks.push(chunk));
res.on('end', () => {
try {
const raw = Buffer.concat(chunks).toString('utf-8');
const data = JSON.parse(raw) as { models?: Array<{ name: string }>; data?: Array<{ id: string }> };
// Ollama format: { models: [{ name }] }
if (data.models) {
resolve(data.models.map((m) => m.name));
return;
}
// OpenAI/vLLM format: { data: [{ id }] }
if (data.data) {
resolve(data.data.map((m) => m.id));
return;
}
resolve([]);
} catch {
resolve([]);
}
});
});
req.on('error', () => resolve([]));
req.on('timeout', () => { req.destroy(); resolve([]); });
});
}
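The two model-list payload shapes that the `end` callback above distinguishes can be isolated as follows (a minimal sketch assuming the same field names as the Ollama `/api/tags` and OpenAI-style `/v1/models` responses; `extractModels` is a hypothetical name):

```typescript
type ModelList = { models?: Array<{ name: string }>; data?: Array<{ id: string }> };

// Ollama returns { models: [{ name }] }; vLLM/OpenAI return { data: [{ id }] }.
// Anything else yields an empty list, which the wizard treats as "type it manually".
function extractModels(payload: ModelList): string[] {
  if (payload.models) return payload.models.map((m) => m.name);
  if (payload.data) return payload.data.map((m) => m.id);
  return [];
}
```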
async function defaultSelect<T>(message: string, choices: Array<{ name: string; value: T; description?: string }>): Promise<T> {
const { default: inquirer } = await import('inquirer');
const { answer } = await inquirer.prompt([{
type: 'list',
name: 'answer',
message,
choices: choices.map((c) => ({
name: c.description ? `${c.name} (${c.description})` : c.name,
value: c.value,
short: c.name,
})),
}]);
return answer as T;
}
async function defaultInput(message: string, defaultValue?: string): Promise<string> {
const { default: inquirer } = await import('inquirer');
const { answer } = await inquirer.prompt([{
type: 'input',
name: 'answer',
message,
default: defaultValue,
}]);
return answer as string;
}
async function defaultPassword(message: string): Promise<string> {
const { default: inquirer } = await import('inquirer');
const { answer } = await inquirer.prompt([{ type: 'password', name: 'answer', message }]);
return answer as string;
}
async function defaultConfirm(message: string, defaultValue?: boolean): Promise<boolean> {
const { default: inquirer } = await import('inquirer');
const { answer } = await inquirer.prompt([{
type: 'confirm',
name: 'answer',
message,
default: defaultValue ?? true,
}]);
return answer as boolean;
}
const defaultPrompt: ConfigSetupPrompt = {
select: defaultSelect,
input: defaultInput,
password: defaultPassword,
confirm: defaultConfirm,
};
async function defaultWhichBinary(name: string): Promise<string | null> {
try {
const { stdout } = await execFileAsync('which', [name], { timeout: 3000 });
const path = stdout.trim();
return path || null;
} catch {
return null;
}
}
// --- Per-provider setup functions (return ProviderFields for reuse in both modes) ---
async function setupGeminiCliFields(
prompt: ConfigSetupPrompt,
log: (...args: string[]) => void,
whichBinary: (name: string) => Promise<string | null>,
currentModel?: string,
): Promise<ProviderFields> {
const model = await prompt.select<string>('Select model:', [
...GEMINI_MODELS.map((m) => ({
name: m === currentModel ? `${m} (current)` : m,
value: m,
})),
{ name: 'Custom...', value: '__custom__' },
]);
const finalModel = model === '__custom__'
? await prompt.input('Model name:', currentModel)
: model;
let binaryPath: string | undefined;
const detected = await whichBinary('gemini');
if (detected) {
log(`Found gemini at: ${detected}`);
binaryPath = detected;
} else {
log('Warning: gemini binary not found in PATH');
const manualPath = await prompt.input('Binary path (or install with: npm i -g @google/gemini-cli):');
if (manualPath) binaryPath = manualPath;
}
const result: ProviderFields = { model: finalModel };
if (binaryPath) result.binaryPath = binaryPath;
return result;
}
async function setupOllamaFields(
prompt: ConfigSetupPrompt,
fetchModels: ConfigSetupDeps['fetchModels'],
currentUrl?: string,
currentModel?: string,
): Promise<ProviderFields> {
const url = await prompt.input('Ollama URL:', currentUrl ?? 'http://localhost:11434');
const models = await fetchModels(url, '/api/tags');
let model: string;
if (models.length > 0) {
const choices = models.map((m) => ({
name: m === currentModel ? `${m} (current)` : m,
value: m,
}));
choices.push({ name: 'Custom...', value: '__custom__' });
model = await prompt.select<string>('Select model:', choices);
if (model === '__custom__') {
model = await prompt.input('Model name:', currentModel);
}
} else {
model = await prompt.input('Model name (could not fetch models):', currentModel ?? 'llama3.2');
}
const result: ProviderFields = { model };
if (url) result.url = url;
return result;
}
async function setupVllmFields(
prompt: ConfigSetupPrompt,
fetchModels: ConfigSetupDeps['fetchModels'],
currentUrl?: string,
currentModel?: string,
): Promise<ProviderFields> {
const url = await prompt.input('vLLM URL:', currentUrl ?? 'http://localhost:8000');
const models = await fetchModels(url, '/v1/models');
let model: string;
if (models.length > 0) {
const choices = models.map((m) => ({
name: m === currentModel ? `${m} (current)` : m,
value: m,
}));
choices.push({ name: 'Custom...', value: '__custom__' });
model = await prompt.select<string>('Select model:', choices);
if (model === '__custom__') {
model = await prompt.input('Model name:', currentModel);
}
} else {
model = await prompt.input('Model name (could not fetch models):', currentModel ?? 'default');
}
const result: ProviderFields = { model };
if (url) result.url = url;
return result;
}
async function setupApiKeyFields(
prompt: ConfigSetupPrompt,
secretStore: SecretStore,
provider: LlmProviderName,
secretKey: string,
hardcodedModels: string[],
currentModel?: string,
currentUrl?: string,
): Promise<ProviderFields> {
const existingKey = await secretStore.get(secretKey);
let apiKey: string;
if (existingKey) {
const masked = `****${existingKey.slice(-4)}`;
const changeKey = await prompt.confirm(`API key stored (${masked}). Change it?`, false);
apiKey = changeKey ? await prompt.password('API key:') : existingKey;
} else {
apiKey = await prompt.password('API key:');
}
if (apiKey !== existingKey) {
await secretStore.set(secretKey, apiKey);
}
let model: string;
if (hardcodedModels.length > 0) {
const choices = hardcodedModels.map((m) => ({
name: m === currentModel ? `${m} (current)` : m,
value: m,
}));
choices.push({ name: 'Custom...', value: '__custom__' });
model = await prompt.select<string>('Select model:', choices);
if (model === '__custom__') {
model = await prompt.input('Model name:', currentModel);
}
} else {
model = await prompt.input('Model name:', currentModel ?? 'gpt-4o');
}
let url: string | undefined;
if (provider === 'openai') {
const customUrl = await prompt.confirm('Use custom API endpoint?', false);
if (customUrl) {
url = await prompt.input('API URL:', currentUrl ?? 'https://api.openai.com');
}
}
const result: ProviderFields = { model };
if (url) result.url = url;
return result;
}
/** Configure a single provider type and return its fields. */
async function setupProviderFields(
providerType: LlmProviderName,
prompt: ConfigSetupPrompt,
log: (...args: string[]) => void,
fetchModels: ConfigSetupDeps['fetchModels'],
whichBinary: (name: string) => Promise<string | null>,
secretStore: SecretStore,
): Promise<ProviderFields> {
switch (providerType) {
case 'gemini-cli':
return setupGeminiCliFields(prompt, log, whichBinary);
case 'ollama':
return setupOllamaFields(prompt, fetchModels);
case 'vllm':
return setupVllmFields(prompt, fetchModels);
case 'anthropic':
return setupApiKeyFields(prompt, secretStore, 'anthropic', 'anthropic-api-key', ANTHROPIC_MODELS);
case 'openai':
return setupApiKeyFields(prompt, secretStore, 'openai', 'openai-api-key', []);
case 'deepseek':
return setupApiKeyFields(prompt, secretStore, 'deepseek', 'deepseek-api-key', DEEPSEEK_MODELS);
default:
return {};
}
}
/** Build a LlmProviderEntry from type, name, and fields. */
function buildEntry(providerType: LlmProviderName, name: string, fields: ProviderFields, tier?: LlmTier): LlmProviderEntry {
const entry: LlmProviderEntry = { name, type: providerType };
if (fields.model) entry.model = fields.model;
if (fields.url) entry.url = fields.url;
if (fields.binaryPath) entry.binaryPath = fields.binaryPath;
if (tier) entry.tier = tier;
return entry;
}
/** Simple mode: single provider (legacy format). */
async function simpleSetup(
config: McpctlConfig,
configDeps: Partial<ConfigLoaderDeps>,
prompt: ConfigSetupPrompt,
log: (...args: string[]) => void,
fetchModels: ConfigSetupDeps['fetchModels'],
whichBinary: (name: string) => Promise<string | null>,
secretStore: SecretStore,
): Promise<void> {
const currentLlm = config.llm && 'provider' in config.llm ? config.llm : undefined;
const choices = ALL_PROVIDER_CHOICES.map((c) => {
if (currentLlm?.provider === c.value) {
return { ...c, name: `${c.name} (current)` };
}
return c;
});
const provider = await prompt.select<LlmProviderName>('Select LLM provider:', choices);
if (provider === 'none') {
const updated: McpctlConfig = { ...config, llm: { provider: 'none' } };
saveConfig(updated, configDeps);
log('LLM disabled. Restart mcplocal: systemctl --user restart mcplocal');
return;
}
const fields = await setupProviderFields(provider, prompt, log, fetchModels, whichBinary, secretStore);
const llmConfig: LlmConfig = { provider, ...fields };
const updated: McpctlConfig = { ...config, llm: llmConfig };
saveConfig(updated, configDeps);
log(`\nLLM configured: ${llmConfig.provider}${llmConfig.model ? ` / ${llmConfig.model}` : ''}`);
log('Restart mcplocal: systemctl --user restart mcplocal');
}
/** Advanced mode: multiple providers with tier assignments. */
async function advancedSetup(
config: McpctlConfig,
configDeps: Partial<ConfigLoaderDeps>,
prompt: ConfigSetupPrompt,
log: (...args: string[]) => void,
fetchModels: ConfigSetupDeps['fetchModels'],
whichBinary: (name: string) => Promise<string | null>,
secretStore: SecretStore,
): Promise<void> {
const entries: LlmProviderEntry[] = [];
// Fast providers
const addFast = await prompt.confirm('Add a FAST provider? (vLLM, Ollama — local, cheap, fast)', true);
if (addFast) {
let addMore = true;
while (addMore) {
const providerType = await prompt.select<LlmProviderName>('Fast provider type:', FAST_PROVIDER_CHOICES);
const defaultName = providerType === 'vllm' ? 'vllm-local' : providerType;
const name = await prompt.input('Provider name:', defaultName);
const fields = await setupProviderFields(providerType, prompt, log, fetchModels, whichBinary, secretStore);
entries.push(buildEntry(providerType, name, fields, 'fast'));
log(` Added: ${name} (${providerType}) → fast tier`);
addMore = await prompt.confirm('Add another fast provider?', false);
}
}
// Heavy providers
const addHeavy = await prompt.confirm('Add a HEAVY provider? (Gemini, Anthropic, OpenAI — cloud, smart)', true);
if (addHeavy) {
let addMore = true;
while (addMore) {
const providerType = await prompt.select<LlmProviderName>('Heavy provider type:', HEAVY_PROVIDER_CHOICES);
const defaultName = providerType;
const name = await prompt.input('Provider name:', defaultName);
const fields = await setupProviderFields(providerType, prompt, log, fetchModels, whichBinary, secretStore);
entries.push(buildEntry(providerType, name, fields, 'heavy'));
log(` Added: ${name} (${providerType}) → heavy tier`);
addMore = await prompt.confirm('Add another heavy provider?', false);
}
}
if (entries.length === 0) {
log('No providers configured.');
return;
}
// Summary
log('\nProvider configuration:');
for (const e of entries) {
log(` ${e.tier ?? 'unassigned'}: ${e.name} (${e.type})${e.model ? ` / ${e.model}` : ''}`);
}
const updated: McpctlConfig = { ...config, llm: { providers: entries } };
saveConfig(updated, configDeps);
log('\nRestart mcplocal: systemctl --user restart mcplocal');
}
export function createConfigSetupCommand(deps?: Partial<ConfigSetupDeps>): Command {
return new Command('setup')
.description('Interactive LLM provider setup wizard')
.action(async () => {
const configDeps = deps?.configDeps ?? {};
const log = deps?.log ?? ((...args: string[]) => console.log(...args));
const prompt = deps?.prompt ?? defaultPrompt;
const fetchModels = deps?.fetchModels ?? defaultFetchModels;
const whichBinary = deps?.whichBinary ?? defaultWhichBinary;
const secretStore = deps?.secretStore ?? await createSecretStore();
const config = loadConfig(configDeps);
const mode = await prompt.select<'simple' | 'advanced'>('Setup mode:', [
{ name: 'Simple', value: 'simple', description: 'One provider for everything' },
{ name: 'Advanced', value: 'advanced', description: 'Multiple providers with fast/heavy tiers' },
]);
if (mode === 'simple') {
await simpleSetup(config, configDeps, prompt, log, fetchModels, whichBinary, secretStore);
} else {
await advancedSetup(config, configDeps, prompt, log, fetchModels, whichBinary, secretStore);
}
});
}
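For reference, the two shapes the wizard can persist under the `llm` key, based on the `saveConfig` calls above (values are examples only, not defaults):

```typescript
// Simple mode saves a single-provider object (legacy format);
// advanced mode saves a providers array with per-entry tiers.
const simpleLlm = { provider: 'ollama', model: 'llama3.2', url: 'http://localhost:11434' };
const advancedLlm = {
  providers: [
    { name: 'vllm-local', type: 'vllm', model: 'default', tier: 'fast' },
    { name: 'gemini-cli', type: 'gemini-cli', model: 'gemini-2.5-flash', tier: 'heavy' },
  ],
};
// simpleSetup distinguishes the legacy shape the same way: a `provider` key.
const isLegacy = 'provider' in simpleLlm && !('providers' in simpleLlm);
```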

View File

@@ -6,11 +6,12 @@ import { loadConfig, saveConfig, mergeConfig, getConfigPath, DEFAULT_CONFIG } fr
import type { McpctlConfig, ConfigLoaderDeps } from '../config/index.js';
import { formatJson, formatYaml } from '../formatters/index.js';
import { saveCredentials, loadCredentials } from '../auth/index.js';
import { createConfigSetupCommand } from './config-setup.js';
import type { CredentialsDeps, StoredCredentials } from '../auth/index.js';
import type { ApiClient } from '../api-client.js';
interface McpConfig {
mcpServers: Record<string, { command: string; args: string[]; env?: Record<string, string> }>;
mcpServers: Record<string, { command?: string; args?: string[]; url?: string; env?: Record<string, string> }>;
}
export interface ConfigCommandDeps {
@@ -84,21 +85,27 @@ export function createConfigCommand(deps?: Partial<ConfigCommandDeps>, apiDeps?:
log('Configuration reset to defaults');
});
if (apiDeps) {
const { client, credentialsDeps, log: apiLog } = apiDeps;
config
.command('claude-generate')
.description('Generate .mcp.json from a project configuration')
// claude/claude-generate: generate .mcp.json pointing at mcpctl mcp bridge
function registerClaudeCommand(name: string, hidden: boolean): void {
const cmd = config
.command(name)
.description(hidden ? '' : 'Generate .mcp.json that connects a project via mcpctl mcp bridge')
.requiredOption('--project <name>', 'Project name')
.option('-o, --output <path>', 'Output file path', '.mcp.json')
.option('--merge', 'Merge with existing .mcp.json instead of overwriting')
.option('--stdout', 'Print to stdout instead of writing a file')
.action(async (opts: { project: string; output: string; merge?: boolean; stdout?: boolean }) => {
const mcpConfig = await client.get<McpConfig>(`/api/v1/projects/${opts.project}/mcp-config`);
.action((opts: { project: string; output: string; merge?: boolean; stdout?: boolean }) => {
const mcpConfig: McpConfig = {
mcpServers: {
[opts.project]: {
command: 'mcpctl',
args: ['mcp', '-p', opts.project],
},
},
};
if (opts.stdout) {
apiLog(JSON.stringify(mcpConfig, null, 2));
log(JSON.stringify(mcpConfig, null, 2));
return;
}
@@ -121,8 +128,21 @@ export function createConfigCommand(deps?: Partial<ConfigCommandDeps>, apiDeps?:
writeFileSync(outputPath, JSON.stringify(finalConfig, null, 2) + '\n');
const serverCount = Object.keys(finalConfig.mcpServers).length;
apiLog(`Wrote ${outputPath} (${serverCount} server(s))`);
log(`Wrote ${outputPath} (${serverCount} server(s))`);
});
if (hidden) {
// Backward-compat alias: the empty description keeps it from adding text to help output
void cmd; // suppress unused-variable lint
}
}
registerClaudeCommand('claude', false);
registerClaudeCommand('claude-generate', true); // backward compat
config.addCommand(createConfigSetupCommand({ configDeps }));
if (apiDeps) {
const { client, credentialsDeps, log: apiLog } = apiDeps;
config
.command('impersonate')

View File

@@ -196,8 +196,9 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
.argument('<name>', 'Project name')
.option('-d, --description <text>', 'Project description', '')
.option('--proxy-mode <mode>', 'Proxy mode (direct, filtered)')
.option('--proxy-mode-llm-provider <name>', 'LLM provider name (for filtered proxy mode)')
.option('--proxy-mode-llm-model <name>', 'LLM model name (for filtered proxy mode)')
.option('--prompt <text>', 'Project-level prompt / instructions for the LLM')
.option('--gated', 'Enable gated sessions (default: true)')
.option('--no-gated', 'Disable gated sessions')
.option('--server <name>', 'Server name (repeat for multiple)', collect, [])
.option('--force', 'Update if already exists')
.action(async (name: string, opts) => {
@@ -206,8 +207,8 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
description: opts.description,
proxyMode: opts.proxyMode ?? 'direct',
};
if (opts.proxyModeLlmProvider) body.llmProvider = opts.proxyModeLlmProvider;
if (opts.proxyModeLlmModel) body.llmModel = opts.proxyModeLlmModel;
if (opts.prompt) body.prompt = opts.prompt;
if (opts.gated !== undefined) body.gated = opts.gated as boolean;
if (opts.server.length > 0) body.servers = opts.server;
try {
@@ -347,5 +348,85 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
}
});
// --- create prompt ---
cmd.command('prompt')
.description('Create an approved prompt')
.argument('<name>', 'Prompt name (lowercase alphanumeric with hyphens)')
.option('--project <name>', 'Project name to scope the prompt to')
.option('--content <text>', 'Prompt content text')
.option('--content-file <path>', 'Read prompt content from file')
.option('--priority <number>', 'Priority 1-10 (default: 5, higher = more important)')
.option('--link <target>', 'Link to MCP resource (format: project/server:uri)')
.action(async (name: string, opts) => {
let content = opts.content as string | undefined;
if (opts.contentFile) {
const fs = await import('node:fs/promises');
content = await fs.readFile(opts.contentFile as string, 'utf-8');
}
if (!content) {
throw new Error('--content or --content-file is required');
}
const body: Record<string, unknown> = { name, content };
if (opts.project) {
// Resolve project name to ID
const projects = await client.get<Array<{ id: string; name: string }>>('/api/v1/projects');
const project = projects.find((p) => p.name === opts.project);
if (!project) throw new Error(`Project '${opts.project as string}' not found`);
body.projectId = project.id;
}
if (opts.priority) {
const priority = Number(opts.priority);
if (isNaN(priority) || priority < 1 || priority > 10) {
throw new Error('--priority must be a number between 1 and 10');
}
body.priority = priority;
}
if (opts.link) {
body.linkTarget = opts.link;
}
const prompt = await client.post<{ id: string; name: string }>('/api/v1/prompts', body);
log(`prompt '${prompt.name}' created (id: ${prompt.id})`);
});
// --- create promptrequest ---
cmd.command('promptrequest')
.description('Create a prompt request (pending proposal that needs approval)')
.argument('<name>', 'Prompt request name (lowercase alphanumeric with hyphens)')
.option('--project <name>', 'Project name to scope the prompt request to')
.option('--content <text>', 'Prompt content text')
.option('--content-file <path>', 'Read prompt content from file')
.option('--priority <number>', 'Priority 1-10 (default: 5, higher = more important)')
.action(async (name: string, opts) => {
let content = opts.content as string | undefined;
if (opts.contentFile) {
const fs = await import('node:fs/promises');
content = await fs.readFile(opts.contentFile as string, 'utf-8');
}
if (!content) {
throw new Error('--content or --content-file is required');
}
const body: Record<string, unknown> = { name, content };
if (opts.project) {
body.project = opts.project;
}
if (opts.priority) {
const priority = Number(opts.priority);
if (isNaN(priority) || priority < 1 || priority > 10) {
throw new Error('--priority must be a number between 1 and 10');
}
body.priority = priority;
}
const pr = await client.post<{ id: string; name: string }>(
'/api/v1/promptrequests',
body,
);
log(`prompt request '${pr.name}' created (id: ${pr.id})`);
log(` approve with: mcpctl approve promptrequest ${pr.name}`);
});
return cmd;
}

View File

@@ -133,11 +133,15 @@ function formatInstanceDetail(instance: Record<string, unknown>, inspect?: Recor
return lines.join('\n');
}
function formatProjectDetail(project: Record<string, unknown>): string {
function formatProjectDetail(
project: Record<string, unknown>,
prompts: Array<{ name: string; priority: number; linkTarget: string | null }> = [],
): string {
const lines: string[] = [];
lines.push(`=== Project: ${project.name} ===`);
lines.push(`${pad('Name:')}${project.name}`);
if (project.description) lines.push(`${pad('Description:')}${project.description}`);
lines.push(`${pad('Gated:')}${project.gated ? 'yes' : 'no'}`);
// Proxy config section
const proxyMode = project.proxyMode as string | undefined;
@@ -162,6 +166,18 @@ function formatProjectDetail(project: Record<string, unknown>): string {
}
}
// Prompts section
if (prompts.length > 0) {
lines.push('');
lines.push('Prompts:');
const nameW = Math.max(4, ...prompts.map((p) => p.name.length)) + 2;
lines.push(` ${'NAME'.padEnd(nameW)}${'PRI'.padEnd(6)}TYPE`);
for (const p of prompts) {
const type = p.linkTarget ? 'link' : 'local';
lines.push(` ${p.name.padEnd(nameW)}${String(p.priority).padEnd(6)}${type}`);
}
}
lines.push('');
lines.push('Metadata:');
lines.push(` ${pad('ID:', 12)}${project.id}`);
@@ -586,9 +602,13 @@ export function createDescribeCommand(deps: DescribeCommandDeps): Command {
case 'templates':
deps.log(formatTemplateDetail(item));
break;
case 'projects':
deps.log(formatProjectDetail(item));
case 'projects': {
const projectPrompts = await deps.client
.get<Array<{ name: string; priority: number; linkTarget: string | null }>>(`/api/v1/prompts?projectId=${item.id as string}`)
.catch(() => []);
deps.log(formatProjectDetail(item, projectPrompts));
break;
}
case 'users': {
// Fetch RBAC definitions and groups to show permissions
const [rbacDefsForUser, allGroupsForUser] = await Promise.all([

View File

@@ -6,6 +6,7 @@ import { execSync } from 'node:child_process';
import yaml from 'js-yaml';
import type { ApiClient } from '../api-client.js';
import { resolveResource, resolveNameOrId, stripInternalFields } from './shared.js';
import { reorderKeys } from '../formatters/output.js';
export interface EditCommandDeps {
client: ApiClient;
@@ -47,7 +48,7 @@ export function createEditCommand(deps: EditCommandDeps): Command {
return;
}
const validResources = ['servers', 'secrets', 'projects', 'groups', 'rbac'];
const validResources = ['servers', 'secrets', 'projects', 'groups', 'rbac', 'prompts', 'promptrequests'];
if (!validResources.includes(resource)) {
log(`Error: unknown resource type '${resourceArg}'`);
process.exitCode = 1;
@@ -61,7 +62,7 @@ export function createEditCommand(deps: EditCommandDeps): Command {
const current = await client.get<Record<string, unknown>>(`/api/v1/${resource}/${id}`);
// Strip read-only fields for editor
const editable = stripInternalFields(current);
const editable = reorderKeys(stripInternalFields(current)) as Record<string, unknown>;
// Serialize to YAML
const singular = resource.replace(/s$/, '');

View File

@@ -5,7 +5,7 @@ import type { Column } from '../formatters/table.js';
import { resolveResource, stripInternalFields } from './shared.js';
export interface GetCommandDeps {
fetchResource: (resource: string, id?: string) => Promise<unknown[]>;
fetchResource: (resource: string, id?: string, opts?: { project?: string; all?: boolean }) => Promise<unknown[]>;
log: (...args: string[]) => void;
}
@@ -22,6 +22,7 @@ interface ProjectRow {
name: string;
description: string;
proxyMode: string;
gated: boolean;
ownerId: string;
servers?: Array<{ server: { name: string } }>;
}
@@ -83,6 +84,7 @@ interface RbacRow {
const projectColumns: Column<ProjectRow>[] = [
{ header: 'NAME', key: 'name' },
{ header: 'MODE', key: (r) => r.proxyMode ?? 'direct', width: 10 },
{ header: 'GATED', key: (r) => r.gated ? 'yes' : 'no', width: 6 },
{ header: 'SERVERS', key: (r) => r.servers ? String(r.servers.length) : '0', width: 8 },
{ header: 'DESCRIPTION', key: 'description', width: 30 },
{ header: 'ID', key: 'id' },
@@ -130,6 +132,44 @@ const templateColumns: Column<TemplateRow>[] = [
{ header: 'DESCRIPTION', key: 'description', width: 50 },
];
interface PromptRow {
id: string;
name: string;
projectId: string | null;
project?: { name: string } | null;
priority: number;
linkTarget: string | null;
linkStatus: 'alive' | 'dead' | null;
createdAt: string;
}
interface PromptRequestRow {
id: string;
name: string;
projectId: string | null;
project?: { name: string } | null;
createdBySession: string | null;
createdAt: string;
}
const promptColumns: Column<PromptRow>[] = [
{ header: 'NAME', key: 'name' },
{ header: 'PROJECT', key: (r) => r.project?.name ?? (r.projectId ? r.projectId : '(global)'), width: 20 },
{ header: 'PRI', key: (r) => String(r.priority), width: 4 },
{ header: 'LINK', key: (r) => r.linkTarget ? r.linkTarget.split(':')[0]! : '-', width: 20 },
{ header: 'STATUS', key: (r) => r.linkStatus ?? '-', width: 6 },
{ header: 'CREATED', key: (r) => new Date(r.createdAt).toLocaleString(), width: 20 },
{ header: 'ID', key: 'id' },
];
const promptRequestColumns: Column<PromptRequestRow>[] = [
{ header: 'NAME', key: 'name' },
{ header: 'PROJECT', key: (r) => r.project?.name ?? (r.projectId ? r.projectId : '(global)'), width: 20 },
{ header: 'SESSION', key: (r) => r.createdBySession ? r.createdBySession.slice(0, 12) : '-', width: 14 },
{ header: 'CREATED', key: (r) => new Date(r.createdAt).toLocaleString(), width: 20 },
{ header: 'ID', key: 'id' },
];
const instanceColumns: Column<InstanceRow>[] = [
{ header: 'NAME', key: (r) => r.server?.name ?? '-', width: 20 },
{ header: 'STATUS', key: 'status', width: 10 },
@@ -157,6 +197,10 @@ function getColumnsForResource(resource: string): Column<Record<string, unknown>
return groupColumns as unknown as Column<Record<string, unknown>>[];
case 'rbac':
return rbacColumns as unknown as Column<Record<string, unknown>>[];
case 'prompts':
return promptColumns as unknown as Column<Record<string, unknown>>[];
case 'promptrequests':
return promptRequestColumns as unknown as Column<Record<string, unknown>>[];
default:
return [
{ header: 'ID', key: 'id' as keyof Record<string, unknown> },
@@ -182,9 +226,14 @@ export function createGetCommand(deps: GetCommandDeps): Command {
.argument('<resource>', 'resource type (servers, projects, instances)')
.argument('[id]', 'specific resource ID or name')
.option('-o, --output <format>', 'output format (table, json, yaml)', 'table')
.action(async (resourceArg: string, id: string | undefined, opts: { output: string }) => {
.option('--project <name>', 'Filter by project')
.option('-A, --all', 'Show all (including project-scoped) resources')
.action(async (resourceArg: string, id: string | undefined, opts: { output: string; project?: string; all?: true }) => {
const resource = resolveResource(resourceArg);
const items = await deps.fetchResource(resource, id);
const fetchOpts: { project?: string; all?: boolean } = {};
if (opts.project) fetchOpts.project = opts.project;
if (opts.all) fetchOpts.all = true;
const items = await deps.fetchResource(resource, id, Object.keys(fetchOpts).length > 0 ? fetchOpts : undefined);
if (opts.output === 'json') {
// Apply-compatible JSON wrapped in resource key

src/cli/src/commands/mcp.ts (new file, 224 lines)
View File

@@ -0,0 +1,224 @@
import { Command } from 'commander';
import http from 'node:http';
import { createInterface } from 'node:readline';
export interface McpBridgeOptions {
projectName: string;
mcplocalUrl: string;
token?: string | undefined;
stdin: NodeJS.ReadableStream;
stdout: NodeJS.WritableStream;
stderr: NodeJS.WritableStream;
}
function postJsonRpc(
url: string,
body: string,
sessionId: string | undefined,
token: string | undefined,
): Promise<{ status: number; headers: http.IncomingHttpHeaders; body: string }> {
return new Promise((resolve, reject) => {
const parsed = new URL(url);
const headers: Record<string, string> = {
'Content-Type': 'application/json',
'Accept': 'application/json, text/event-stream',
};
if (sessionId) {
headers['mcp-session-id'] = sessionId;
}
if (token) {
headers['Authorization'] = `Bearer ${token}`;
}
const req = http.request(
{
hostname: parsed.hostname,
port: parsed.port,
path: parsed.pathname,
method: 'POST',
headers,
timeout: 30_000,
},
(res) => {
const chunks: Buffer[] = [];
res.on('data', (chunk: Buffer) => chunks.push(chunk));
res.on('end', () => {
resolve({
status: res.statusCode ?? 0,
headers: res.headers,
body: Buffer.concat(chunks).toString('utf-8'),
});
});
},
);
req.on('error', reject);
req.on('timeout', () => {
req.destroy();
reject(new Error('Request timed out'));
});
req.write(body);
req.end();
});
}
function sendDelete(
url: string,
sessionId: string,
token: string | undefined,
): Promise<void> {
return new Promise((resolve) => {
const parsed = new URL(url);
const headers: Record<string, string> = {
'mcp-session-id': sessionId,
};
if (token) {
headers['Authorization'] = `Bearer ${token}`;
}
const req = http.request(
{
hostname: parsed.hostname,
port: parsed.port,
path: parsed.pathname,
method: 'DELETE',
headers,
timeout: 5_000,
},
() => resolve(),
);
req.on('error', () => resolve()); // Best effort cleanup
req.on('timeout', () => {
req.destroy();
resolve();
});
req.end();
});
}
/**
* Extract JSON-RPC messages from an HTTP response body.
* Handles both plain JSON and SSE (text/event-stream) formats.
*/
function extractJsonRpcMessages(contentType: string | undefined, body: string): string[] {
if (contentType?.includes('text/event-stream')) {
// Parse SSE: extract data: lines
const messages: string[] = [];
for (const line of body.split('\n')) {
if (line.startsWith('data: ')) {
messages.push(line.slice(6));
}
}
return messages;
}
// Plain JSON response
return [body];
}
/**
* STDIO-to-Streamable-HTTP MCP bridge.
*
* Reads JSON-RPC messages line-by-line from stdin, POSTs them to
* mcplocal's project endpoint, and writes responses to stdout.
*/
export async function runMcpBridge(opts: McpBridgeOptions): Promise<void> {
const { projectName, mcplocalUrl, token, stdin, stdout, stderr } = opts;
const endpointUrl = `${mcplocalUrl.replace(/\/$/, '')}/projects/${encodeURIComponent(projectName)}/mcp`;
let sessionId: string | undefined;
const rl = createInterface({ input: stdin, crlfDelay: Infinity });
for await (const line of rl) {
const trimmed = line.trim();
if (!trimmed) continue;
try {
const result = await postJsonRpc(endpointUrl, trimmed, sessionId, token);
// Capture session ID from first response
if (!sessionId) {
const sid = result.headers['mcp-session-id'];
if (typeof sid === 'string') {
sessionId = sid;
}
}
if (result.status >= 400) {
stderr.write(`MCP bridge error: HTTP ${result.status}: ${result.body}\n`);
}
// Handle both plain JSON and SSE responses
const messages = extractJsonRpcMessages(result.headers['content-type'], result.body);
for (const msg of messages) {
const trimmedMsg = msg.trim();
if (trimmedMsg) {
stdout.write(trimmedMsg + '\n');
}
}
} catch (err) {
stderr.write(`MCP bridge error: ${err instanceof Error ? err.message : String(err)}\n`);
}
}
// stdin closed — cleanup session
if (sessionId) {
await sendDelete(endpointUrl, sessionId, token);
}
}
export interface McpCommandDeps {
getProject: () => string | undefined;
configLoader?: () => { mcplocalUrl: string };
credentialsLoader?: () => { token: string } | null;
}
export function createMcpCommand(deps: McpCommandDeps): Command {
const cmd = new Command('mcp')
.description('MCP STDIO transport bridge — connects stdin/stdout to a project MCP endpoint')
.passThroughOptions()
.option('-p, --project <name>', 'Project name')
.action(async (opts: { project?: string }) => {
// Accept -p/--project on the command itself, or fall back to global --project
const projectName = opts.project ?? deps.getProject();
if (!projectName) {
process.stderr.write('Error: --project is required for the mcp command\n');
process.exitCode = 1;
return;
}
let mcplocalUrl = 'http://localhost:3200';
if (deps.configLoader) {
mcplocalUrl = deps.configLoader().mcplocalUrl;
} else {
try {
const { loadConfig } = await import('../config/index.js');
mcplocalUrl = loadConfig().mcplocalUrl;
} catch {
// Use default
}
}
let token: string | undefined;
if (deps.credentialsLoader) {
token = deps.credentialsLoader()?.token;
} else {
try {
const { loadCredentials } = await import('../auth/index.js');
token = loadCredentials()?.token;
} catch {
// No credentials
}
}
await runMcpBridge({
projectName,
mcplocalUrl,
token,
stdin: process.stdin,
stdout: process.stdout,
stderr: process.stderr,
});
});
return cmd;
}
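A client launches this bridge as a stdio MCP server. The `.mcp.json` entry that `mcpctl config claude` generates (its shape appears in the config test later in this diff; the project name here is illustrative) can be sketched as:

```typescript
// Builds the per-project .mcp.json entry pointing at the stdio bridge.
// Shape taken from the claude config test in this diff; project name is illustrative.
function bridgeEntry(projectName: string): { mcpServers: Record<string, { command: string; args: string[] }> } {
  return {
    mcpServers: {
      [projectName]: { command: 'mcpctl', args: ['mcp', '-p', projectName] },
    },
  };
}

const entry = bridgeEntry('homeautomation');
console.log(JSON.stringify(entry.mcpServers['homeautomation']));
```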

View File

@@ -1,6 +1,6 @@
import { Command } from 'commander';
import type { ApiClient } from '../api-client.js';
import { resolveNameOrId } from './shared.js';
import { resolveNameOrId, resolveResource } from './shared.js';
export interface ProjectOpsDeps {
client: ApiClient;
@@ -45,3 +45,21 @@ export function createDetachServerCommand(deps: ProjectOpsDeps): Command {
log(`server '${serverName}' detached from project '${projectName}'`);
});
}
export function createApproveCommand(deps: ProjectOpsDeps): Command {
const { client, log } = deps;
return new Command('approve')
.description('Approve a pending prompt request (atomic: delete request, create prompt)')
.argument('<resource>', 'Resource type (promptrequest)')
.argument('<name>', 'Resource name or ID')
.action(async (resourceArg: string, nameOrId: string) => {
const resource = resolveResource(resourceArg);
if (resource !== 'promptrequests') {
throw new Error(`approve is only supported for 'promptrequest', got '${resourceArg}'`);
}
const id = await resolveNameOrId(client, 'promptrequests', nameOrId);
const prompt = await client.post<{ id: string; name: string }>(`/api/v1/promptrequests/${id}/approve`, {});
log(`prompt request approved → prompt '${prompt.name}' created (id: ${prompt.id})`);
});
}

View File

@@ -16,6 +16,11 @@ export const RESOURCE_ALIASES: Record<string, string> = {
rbac: 'rbac',
'rbac-definition': 'rbac',
'rbac-binding': 'rbac',
prompt: 'prompts',
prompts: 'prompts',
promptrequest: 'promptrequests',
promptrequests: 'promptrequests',
pr: 'promptrequests',
};
export function resolveResource(name: string): string {
@@ -56,8 +61,21 @@ export async function resolveNameOrId(
/** Strip internal/read-only fields from an API response to make it apply-compatible. */
export function stripInternalFields(obj: Record<string, unknown>): Record<string, unknown> {
const result = { ...obj };
for (const key of ['id', 'createdAt', 'updatedAt', 'version', 'ownerId']) {
for (const key of ['id', 'createdAt', 'updatedAt', 'version', 'ownerId', 'summary', 'chapters']) {
delete result[key];
}
// Strip relationship joins that aren't part of the resource spec (just as a k8s Namespace spec doesn't list its Deployments)
if ('servers' in result && Array.isArray(result.servers)) {
delete result.servers;
}
if ('owner' in result && typeof result.owner === 'object') {
delete result.owner;
}
if ('members' in result && Array.isArray(result.members)) {
delete result.members;
}
if ('project' in result && typeof result.project === 'object' && result.project !== null) {
delete result.project;
}
return result;
}
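The stripping above can be demonstrated in isolation. This sketch folds the read-only-field and relationship-join passes together (arrays are objects, so one check covers both); the sample project is illustrative:

```typescript
// Mirrors stripInternalFields above: read-only fields and relationship
// joins are dropped so the output round-trips through `apply`.
function stripInternalFields(obj: Record<string, unknown>): Record<string, unknown> {
  const result = { ...obj };
  for (const key of ['id', 'createdAt', 'updatedAt', 'version', 'ownerId', 'summary', 'chapters']) {
    delete result[key];
  }
  // Relationship joins (servers, owner, members, project) are objects/arrays.
  for (const key of ['servers', 'owner', 'members', 'project']) {
    if (key in result && typeof result[key] === 'object' && result[key] !== null) delete result[key];
  }
  return result;
}

const cleaned = stripInternalFields({
  id: 'proj-1', name: 'homeautomation', gated: true,
  createdAt: '2026-02-25', servers: [], owner: { id: 'u1' },
});
console.log(Object.keys(cleaned)); // [ 'name', 'gated' ]
```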

View File

@@ -7,11 +7,32 @@ import type { CredentialsDeps } from '../auth/index.js';
import { formatJson, formatYaml } from '../formatters/index.js';
import { APP_VERSION } from '@mcpctl/shared';
// ANSI helpers
const GREEN = '\x1b[32m';
const RED = '\x1b[31m';
const DIM = '\x1b[2m';
const RESET = '\x1b[0m';
const CLEAR_LINE = '\x1b[2K\r';
interface ProvidersInfo {
providers: string[];
tiers: { fast: string[]; heavy: string[] };
health: Record<string, boolean>;
}
export interface StatusCommandDeps {
configDeps: Partial<ConfigLoaderDeps>;
credentialsDeps: Partial<CredentialsDeps>;
log: (...args: string[]) => void;
write: (text: string) => void;
checkHealth: (url: string) => Promise<boolean>;
/** Check LLM health via mcplocal's /llm/health endpoint */
checkLlm: (mcplocalUrl: string) => Promise<string>;
/** Fetch available models from mcplocal's /llm/models endpoint */
fetchModels: (mcplocalUrl: string) => Promise<string[]>;
/** Fetch provider tier info from mcplocal's /llm/providers endpoint */
fetchProviders: (mcplocalUrl: string) => Promise<ProvidersInfo | null>;
isTTY: boolean;
}
function defaultCheckHealth(url: string): Promise<boolean> {
@@ -28,15 +49,114 @@ function defaultCheckHealth(url: string): Promise<boolean> {
});
}
/**
* Check LLM health by querying mcplocal's /llm/health endpoint.
* This tests the actual provider running inside the daemon (uses persistent ACP for gemini, etc.)
*/
function defaultCheckLlm(mcplocalUrl: string): Promise<string> {
return new Promise((resolve) => {
const req = http.get(`${mcplocalUrl}/llm/health`, { timeout: 45000 }, (res) => {
const chunks: Buffer[] = [];
res.on('data', (chunk: Buffer) => chunks.push(chunk));
res.on('end', () => {
try {
const body = JSON.parse(Buffer.concat(chunks).toString('utf-8')) as { status: string; error?: string };
if (body.status === 'ok') {
resolve('ok');
} else if (body.status === 'not configured') {
resolve('not configured');
} else if (body.error) {
resolve(body.error.slice(0, 80));
} else {
resolve(body.status);
}
} catch {
resolve('invalid response');
}
});
});
req.on('error', () => resolve('mcplocal unreachable'));
req.on('timeout', () => { req.destroy(); resolve('timeout'); });
});
}
function defaultFetchModels(mcplocalUrl: string): Promise<string[]> {
return new Promise((resolve) => {
const req = http.get(`${mcplocalUrl}/llm/models`, { timeout: 5000 }, (res) => {
const chunks: Buffer[] = [];
res.on('data', (chunk: Buffer) => chunks.push(chunk));
res.on('end', () => {
try {
const body = JSON.parse(Buffer.concat(chunks).toString('utf-8')) as { models?: string[] };
resolve(body.models ?? []);
} catch {
resolve([]);
}
});
});
req.on('error', () => resolve([]));
req.on('timeout', () => { req.destroy(); resolve([]); });
});
}
function defaultFetchProviders(mcplocalUrl: string): Promise<ProvidersInfo | null> {
return new Promise((resolve) => {
const req = http.get(`${mcplocalUrl}/llm/providers`, { timeout: 5000 }, (res) => {
const chunks: Buffer[] = [];
res.on('data', (chunk: Buffer) => chunks.push(chunk));
res.on('end', () => {
try {
const body = JSON.parse(Buffer.concat(chunks).toString('utf-8')) as ProvidersInfo;
resolve(body);
} catch {
resolve(null);
}
});
});
req.on('error', () => resolve(null));
req.on('timeout', () => { req.destroy(); resolve(null); });
});
}
const SPINNER_FRAMES = ['⠋', '⠙', '⠹', '⠸', '⠼', '⠴', '⠦', '⠧', '⠇', '⠏'];
const defaultDeps: StatusCommandDeps = {
configDeps: {},
credentialsDeps: {},
log: (...args) => console.log(...args),
write: (text) => process.stdout.write(text),
checkHealth: defaultCheckHealth,
checkLlm: defaultCheckLlm,
fetchModels: defaultFetchModels,
fetchProviders: defaultFetchProviders,
isTTY: process.stdout.isTTY ?? false,
};
/** Determine LLM label from config (handles both legacy and multi-provider formats). */
function getLlmLabel(llm: unknown): string | null {
if (!llm || typeof llm !== 'object') return null;
// Legacy format: { provider, model }
if ('provider' in llm) {
const legacy = llm as { provider: string; model?: string };
if (legacy.provider === 'none') return null;
return `${legacy.provider}${legacy.model ? ` / ${legacy.model}` : ''}`;
}
// Multi-provider format: { providers: [...] }
if ('providers' in llm) {
const multi = llm as { providers: Array<{ name: string; type: string; tier?: string }> };
if (multi.providers.length === 0) return null;
return multi.providers.map((p) => `${p.name}${p.tier ? ` (${p.tier})` : ''}`).join(', ');
}
return null;
}
/** Check if config uses multi-provider format. */
function isMultiProvider(llm: unknown): boolean {
return !!llm && typeof llm === 'object' && 'providers' in llm;
}
export function createStatusCommand(deps?: Partial<StatusCommandDeps>): Command {
const { configDeps, credentialsDeps, log, checkHealth } = { ...defaultDeps, ...deps };
const { configDeps, credentialsDeps, log, write, checkHealth, checkLlm, fetchModels, fetchProviders, isTTY } = { ...defaultDeps, ...deps };
return new Command('status')
.description('Show mcpctl status and connectivity')
@@ -45,33 +165,124 @@ export function createStatusCommand(deps?: Partial<StatusCommandDeps>): Command
const config = loadConfig(configDeps);
const creds = loadCredentials(credentialsDeps);
const llmLabel = getLlmLabel(config.llm);
const multiProvider = isMultiProvider(config.llm);
if (opts.output !== 'table') {
// JSON/YAML: run everything in parallel, wait, output at once
const [mcplocalReachable, mcpdReachable, llmStatus, providersInfo] = await Promise.all([
checkHealth(config.mcplocalUrl),
checkHealth(config.mcpdUrl),
llmLabel ? checkLlm(config.mcplocalUrl) : Promise.resolve(null),
multiProvider ? fetchProviders(config.mcplocalUrl) : Promise.resolve(null),
]);
const llm = llmLabel
? llmStatus === 'ok' ? llmLabel : `${llmLabel} (${llmStatus})`
: null;
const status = {
version: APP_VERSION,
mcplocalUrl: config.mcplocalUrl,
mcplocalReachable,
mcpdUrl: config.mcpdUrl,
mcpdReachable,
auth: creds ? { user: creds.user } : null,
registries: config.registries,
outputFormat: config.outputFormat,
llm,
llmStatus,
...(providersInfo ? { providers: providersInfo } : {}),
};
log(opts.output === 'json' ? formatJson(status) : formatYaml(status));
return;
}
// Table format: print lines progressively, LLM last with spinner
// Fast health checks first
const [mcplocalReachable, mcpdReachable] = await Promise.all([
checkHealth(config.mcplocalUrl),
checkHealth(config.mcpdUrl),
]);
const status = {
version: APP_VERSION,
mcplocalUrl: config.mcplocalUrl,
mcplocalReachable,
mcpdUrl: config.mcpdUrl,
mcpdReachable,
auth: creds ? { user: creds.user } : null,
registries: config.registries,
outputFormat: config.outputFormat,
};
log(`mcpctl v${APP_VERSION}`);
log(`mcplocal: ${config.mcplocalUrl} (${mcplocalReachable ? 'connected' : 'unreachable'})`);
log(`mcpd: ${config.mcpdUrl} (${mcpdReachable ? 'connected' : 'unreachable'})`);
log(`Auth: ${creds ? `logged in as ${creds.user}` : 'not logged in'}`);
log(`Registries: ${config.registries.join(', ')}`);
log(`Output: ${config.outputFormat}`);
if (opts.output === 'json') {
log(formatJson(status));
} else if (opts.output === 'yaml') {
log(formatYaml(status));
if (!llmLabel) {
log(`LLM: not configured (run 'mcpctl config setup')`);
return;
}
// LLM check + models + providers fetch in parallel
const llmPromise = checkLlm(config.mcplocalUrl);
const modelsPromise = fetchModels(config.mcplocalUrl);
const providersPromise = multiProvider ? fetchProviders(config.mcplocalUrl) : Promise.resolve(null);
if (isTTY) {
let frame = 0;
const interval = setInterval(() => {
write(`${CLEAR_LINE}LLM: ${DIM}${SPINNER_FRAMES[frame % SPINNER_FRAMES.length]} checking...${RESET}`);
frame++;
}, 80);
const [llmStatus, models, providersInfo] = await Promise.all([llmPromise, modelsPromise, providersPromise]);
clearInterval(interval);
if (providersInfo && (providersInfo.tiers.fast.length > 0 || providersInfo.tiers.heavy.length > 0)) {
// Tiered display with per-provider health
write(`${CLEAR_LINE}`);
for (const tier of ['fast', 'heavy'] as const) {
const names = providersInfo.tiers[tier];
if (names.length === 0) continue;
const label = tier === 'fast' ? 'LLM (fast): ' : 'LLM (heavy):';
const parts = names.map((n) => {
const ok = providersInfo.health[n];
return ok ? `${n} ${GREEN}✓${RESET}` : `${n} ${RED}✗${RESET}`;
});
log(`${label} ${parts.join(', ')}`);
}
} else {
// Legacy single provider display
if (llmStatus === 'ok' || llmStatus === 'ok (key stored)') {
write(`${CLEAR_LINE}LLM: ${llmLabel} ${GREEN}${llmStatus}${RESET}\n`);
} else {
write(`${CLEAR_LINE}LLM: ${llmLabel} ${RED}${llmStatus}${RESET}\n`);
}
}
if (models.length > 0) {
log(`${DIM} Available: ${models.join(', ')}${RESET}`);
}
} else {
log(`mcpctl v${status.version}`);
log(`mcplocal: ${status.mcplocalUrl} (${mcplocalReachable ? 'connected' : 'unreachable'})`);
log(`mcpd: ${status.mcpdUrl} (${mcpdReachable ? 'connected' : 'unreachable'})`);
log(`Auth: ${creds ? `logged in as ${creds.user}` : 'not logged in'}`);
log(`Registries: ${status.registries.join(', ')}`);
log(`Output: ${status.outputFormat}`);
// Non-TTY: no spinner, just wait and print
const [llmStatus, models, providersInfo] = await Promise.all([llmPromise, modelsPromise, providersPromise]);
if (providersInfo && (providersInfo.tiers.fast.length > 0 || providersInfo.tiers.heavy.length > 0)) {
for (const tier of ['fast', 'heavy'] as const) {
const names = providersInfo.tiers[tier];
if (names.length === 0) continue;
const label = tier === 'fast' ? 'LLM (fast): ' : 'LLM (heavy):';
const parts = names.map((n) => {
const ok = providersInfo.health[n];
return ok ? `${n} ✓` : `${n} ✗`;
});
log(`${label} ${parts.join(', ')}`);
}
} else {
if (llmStatus === 'ok' || llmStatus === 'ok (key stored)') {
log(`LLM: ${llmLabel} ${llmStatus}`);
} else {
log(`LLM: ${llmLabel} ${llmStatus}`);
}
}
if (models.length > 0) {
log(`${DIM} Available: ${models.join(', ')}${RESET}`);
}
}
});
}

View File

@@ -1,4 +1,4 @@
export { McpctlConfigSchema, DEFAULT_CONFIG } from './schema.js';
export type { McpctlConfig } from './schema.js';
export { McpctlConfigSchema, LlmConfigSchema, LlmProviderEntrySchema, LlmMultiConfigSchema, LLM_PROVIDERS, LLM_TIERS, DEFAULT_CONFIG } from './schema.js';
export type { McpctlConfig, LlmConfig, LlmProviderEntry, LlmMultiConfig, LlmProviderName, LlmTier } from './schema.js';
export { loadConfig, saveConfig, mergeConfig, getConfigPath } from './loader.js';
export type { ConfigLoaderDeps } from './loader.js';

View File

@@ -1,5 +1,50 @@
import { z } from 'zod';
export const LLM_PROVIDERS = ['gemini-cli', 'ollama', 'anthropic', 'openai', 'deepseek', 'vllm', 'none'] as const;
export type LlmProviderName = typeof LLM_PROVIDERS[number];
export const LLM_TIERS = ['fast', 'heavy'] as const;
export type LlmTier = typeof LLM_TIERS[number];
/** Legacy single-provider format. */
export const LlmConfigSchema = z.object({
/** LLM provider name */
provider: z.enum(LLM_PROVIDERS),
/** Model name */
model: z.string().optional(),
/** Provider URL (for ollama, vllm, openai with custom endpoint) */
url: z.string().optional(),
/** Binary path override (for gemini-cli) */
binaryPath: z.string().optional(),
}).strict();
export type LlmConfig = z.infer<typeof LlmConfigSchema>;
/** Multi-provider entry (advanced mode). */
export const LlmProviderEntrySchema = z.object({
/** User-chosen name for this provider instance (e.g. "vllm-local") */
name: z.string(),
/** Provider type */
type: z.enum(LLM_PROVIDERS),
/** Model name */
model: z.string().optional(),
/** Provider URL (for ollama, vllm, openai with custom endpoint) */
url: z.string().optional(),
/** Binary path override (for gemini-cli) */
binaryPath: z.string().optional(),
/** Tier assignment */
tier: z.enum(LLM_TIERS).optional(),
}).strict();
export type LlmProviderEntry = z.infer<typeof LlmProviderEntrySchema>;
/** Multi-provider format with providers array. */
export const LlmMultiConfigSchema = z.object({
providers: z.array(LlmProviderEntrySchema).min(1),
}).strict();
export type LlmMultiConfig = z.infer<typeof LlmMultiConfigSchema>;
export const McpctlConfigSchema = z.object({
/** mcplocal daemon endpoint (local LLM pre-processing proxy) */
mcplocalUrl: z.string().default('http://localhost:3200'),
@@ -19,6 +64,8 @@ export const McpctlConfigSchema = z.object({
outputFormat: z.enum(['table', 'json', 'yaml']).default('table'),
/** Smithery API key */
smitheryApiKey: z.string().optional(),
/** LLM provider configuration — accepts legacy single-provider or multi-provider format */
llm: z.union([LlmConfigSchema, LlmMultiConfigSchema]).optional(),
}).transform((cfg) => {
// Backward compatibility: if old daemonUrl is set but mcplocalUrl wasn't explicitly changed,
// use daemonUrl as mcplocalUrl
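The `llm` union accepts either shape. A sketch of the two forms, and of the same discriminator the status command's `isMultiProvider` helper (earlier in this diff) uses — the provider names and models are illustrative:

```typescript
// Two shapes the llm union accepts. Field names come from
// LlmConfigSchema / LlmMultiConfigSchema; the values are illustrative.
const legacy = { provider: 'ollama', model: 'llama3' };
const multi = {
  providers: [
    { name: 'ollama-local', type: 'ollama', model: 'llama3', tier: 'fast' },
    { name: 'anthropic-api', type: 'anthropic', tier: 'heavy' },
  ],
};

// Same discriminator the CLI uses: presence of a `providers` key.
const isMultiProvider = (llm: unknown): boolean =>
  !!llm && typeof llm === 'object' && 'providers' in llm;

console.log(isMultiProvider(legacy), isMultiProvider(multi)); // false true
```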

View File

@@ -6,6 +6,29 @@ export function formatJson(data: unknown): string {
return JSON.stringify(data, null, 2);
}
export function formatYaml(data: unknown): string {
return yaml.dump(data, { lineWidth: 120, noRefs: true }).trimEnd();
/**
* Reorder object keys so that long text fields (like `content`, `prompt`)
* come last. This makes YAML output more readable when content spans
* multiple lines.
*/
export function reorderKeys(obj: unknown): unknown {
if (Array.isArray(obj)) return obj.map(reorderKeys);
if (obj !== null && typeof obj === 'object') {
const rec = obj as Record<string, unknown>;
const lastKeys = ['content', 'prompt'];
const ordered: Record<string, unknown> = {};
for (const key of Object.keys(rec)) {
if (!lastKeys.includes(key)) ordered[key] = reorderKeys(rec[key]);
}
for (const key of lastKeys) {
if (key in rec) ordered[key] = rec[key];
}
return ordered;
}
return obj;
}
export function formatYaml(data: unknown): string {
const reordered = reorderKeys(data);
return yaml.dump(reordered, { lineWidth: 120, noRefs: true }).trimEnd();
}
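The reordering above is easy to see on a small object. This sketch copies `reorderKeys` and shows long text fields sinking to the end (the sample prompt is illustrative):

```typescript
// Mirrors reorderKeys above: `content` and `prompt` sink to the end so
// multi-line text renders last in YAML output.
function reorderKeys(obj: unknown): unknown {
  if (Array.isArray(obj)) return obj.map(reorderKeys);
  if (obj !== null && typeof obj === 'object') {
    const rec = obj as Record<string, unknown>;
    const lastKeys = ['content', 'prompt'];
    const ordered: Record<string, unknown> = {};
    for (const key of Object.keys(rec)) {
      if (!lastKeys.includes(key)) ordered[key] = reorderKeys(rec[key]);
    }
    for (const key of lastKeys) {
      if (key in rec) ordered[key] = rec[key];
    }
    return ordered;
  }
  return obj;
}

const out = reorderKeys({ content: 'a very long body', name: 'greeting', priority: 5 });
console.log(Object.keys(out as object)); // [ 'name', 'priority', 'content' ]
```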

View File

@@ -12,7 +12,9 @@ import { createCreateCommand } from './commands/create.js';
import { createEditCommand } from './commands/edit.js';
import { createBackupCommand, createRestoreCommand } from './commands/backup.js';
import { createLoginCommand, createLogoutCommand } from './commands/auth.js';
import { createAttachServerCommand, createDetachServerCommand } from './commands/project-ops.js';
import { createAttachServerCommand, createDetachServerCommand, createApproveCommand } from './commands/project-ops.js';
import { createMcpCommand } from './commands/mcp.js';
import { createPatchCommand } from './commands/patch.js';
import { ApiClient, ApiError } from './api-client.js';
import { loadConfig } from './config/index.js';
import { loadCredentials } from './auth/index.js';
@@ -53,7 +55,33 @@ export function createProgram(): Command {
log: (...args) => console.log(...args),
}));
const fetchResource = async (resource: string, nameOrId?: string): Promise<unknown[]> => {
const fetchResource = async (resource: string, nameOrId?: string, opts?: { project?: string; all?: boolean }): Promise<unknown[]> => {
const projectName = opts?.project ?? program.opts().project as string | undefined;
// --project scoping for servers and instances
if (projectName && !nameOrId && (resource === 'servers' || resource === 'instances')) {
const projectId = await resolveNameOrId(client, 'projects', projectName);
if (resource === 'servers') {
return client.get<unknown[]>(`/api/v1/projects/${projectId}/servers`);
}
// instances: fetch project servers, then filter instances by serverId
const projectServers = await client.get<Array<{ id: string }>>(`/api/v1/projects/${projectId}/servers`);
const serverIds = new Set(projectServers.map((s) => s.id));
const allInstances = await client.get<Array<{ serverId: string }>>(`/api/v1/instances`);
return allInstances.filter((inst) => serverIds.has(inst.serverId));
}
// --project scoping for prompts and promptrequests
if (!nameOrId && (resource === 'prompts' || resource === 'promptrequests')) {
if (projectName) {
return client.get<unknown[]>(`/api/v1/${resource}?project=${encodeURIComponent(projectName)}`);
}
// Default: global-only. --all (-A) shows everything.
if (!opts?.all) {
return client.get<unknown[]>(`/api/v1/${resource}?scope=global`);
}
}
if (nameOrId) {
// Glob pattern — use query param filtering
if (nameOrId.includes('*')) {
@@ -118,6 +146,11 @@ export function createProgram(): Command {
log: (...args) => console.log(...args),
}));
program.addCommand(createPatchCommand({
client,
log: (...args) => console.log(...args),
}));
program.addCommand(createBackupCommand({
client,
log: (...args) => console.log(...args),
@@ -133,8 +166,12 @@ export function createProgram(): Command {
log: (...args: string[]) => console.log(...args),
getProject: () => program.opts().project as string | undefined,
};
program.addCommand(createAttachServerCommand(projectOpsDeps));
program.addCommand(createDetachServerCommand(projectOpsDeps));
program.addCommand(createAttachServerCommand(projectOpsDeps), { hidden: true });
program.addCommand(createDetachServerCommand(projectOpsDeps), { hidden: true });
program.addCommand(createApproveCommand(projectOpsDeps));
program.addCommand(createMcpCommand({
getProject: () => program.opts().project as string | undefined,
}), { hidden: true });
return program;
}

View File

@@ -21,6 +21,16 @@ beforeAll(async () => {
res.writeHead(201, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ id: 'srv-new', ...body }));
});
} else if (req.url === '/api/v1/servers/srv-1' && req.method === 'DELETE') {
// Fastify rejects empty body with Content-Type: application/json
const ct = req.headers['content-type'] ?? '';
if (ct.includes('application/json')) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: "Body cannot be empty when content-type is set to 'application/json'" }));
} else {
res.writeHead(204);
res.end();
}
} else if (req.url === '/api/v1/missing' && req.method === 'GET') {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Not found' }));
@@ -75,6 +85,12 @@ describe('ApiClient', () => {
await expect(client.get('/anything')).rejects.toThrow();
});
it('performs DELETE without Content-Type header', async () => {
const client = new ApiClient({ baseUrl: `http://localhost:${port}` });
// Should succeed (204) because no Content-Type is sent on bodyless DELETE
await expect(client.delete('/api/v1/servers/srv-1')).resolves.toBeUndefined();
});
it('sends Authorization header when token provided', async () => {
// We need a separate server to check the header
let receivedAuth = '';

View File

@@ -8,19 +8,14 @@ import { saveCredentials, loadCredentials } from '../../src/auth/index.js';
function mockClient(): ApiClient {
return {
get: vi.fn(async () => ({
mcpServers: {
'slack--default': { command: 'npx', args: ['-y', '@anthropic/slack-mcp'], env: { WORKSPACE: 'test' } },
'github--default': { command: 'npx', args: ['-y', '@anthropic/github-mcp'] },
},
})),
get: vi.fn(async () => ({})),
post: vi.fn(async () => ({ token: 'impersonated-tok', user: { email: 'other@test.com' } })),
put: vi.fn(async () => ({})),
delete: vi.fn(async () => {}),
} as unknown as ApiClient;
}
describe('config claude-generate', () => {
describe('config claude', () => {
let client: ReturnType<typeof mockClient>;
let output: string[];
let tmpDir: string;
@@ -36,18 +31,23 @@ describe('config claude-generate', () => {
rmSync(tmpDir, { recursive: true, force: true });
});
it('generates .mcp.json from project config', async () => {
it('generates .mcp.json with mcpctl mcp bridge entry', async () => {
const outPath = join(tmpDir, '.mcp.json');
const cmd = createConfigCommand(
{ configDeps: { configDir: tmpDir }, log },
{ client, credentialsDeps: { configDir: tmpDir }, log },
);
await cmd.parseAsync(['claude-generate', '--project', 'proj-1', '-o', outPath], { from: 'user' });
await cmd.parseAsync(['claude', '--project', 'homeautomation', '-o', outPath], { from: 'user' });
// No API call should be made
expect(client.get).not.toHaveBeenCalled();
expect(client.get).toHaveBeenCalledWith('/api/v1/projects/proj-1/mcp-config');
const written = JSON.parse(readFileSync(outPath, 'utf-8'));
expect(written.mcpServers['slack--default']).toBeDefined();
expect(output.join('\n')).toContain('2 server(s)');
expect(written.mcpServers['homeautomation']).toEqual({
command: 'mcpctl',
args: ['mcp', '-p', 'homeautomation'],
});
expect(output.join('\n')).toContain('1 server(s)');
});
it('prints to stdout with --stdout', async () => {
@@ -55,9 +55,13 @@ describe('config claude-generate', () => {
{ configDeps: { configDir: tmpDir }, log },
{ client, credentialsDeps: { configDir: tmpDir }, log },
);
await cmd.parseAsync(['claude-generate', '--project', 'proj-1', '--stdout'], { from: 'user' });
await cmd.parseAsync(['claude', '--project', 'myproj', '--stdout'], { from: 'user' });
expect(output[0]).toContain('mcpServers');
const parsed = JSON.parse(output[0]);
expect(parsed.mcpServers['myproj']).toEqual({
command: 'mcpctl',
args: ['mcp', '-p', 'myproj'],
});
});
it('merges with existing .mcp.json', async () => {
@@ -70,12 +74,41 @@ describe('config claude-generate', () => {
{ configDeps: { configDir: tmpDir }, log },
{ client, credentialsDeps: { configDir: tmpDir }, log },
);
await cmd.parseAsync(['claude-generate', '--project', 'proj-1', '-o', outPath, '--merge'], { from: 'user' });
await cmd.parseAsync(['claude', '--project', 'proj-1', '-o', outPath, '--merge'], { from: 'user' });
const written = JSON.parse(readFileSync(outPath, 'utf-8'));
expect(written.mcpServers['existing--server']).toBeDefined();
expect(written.mcpServers['slack--default']).toBeDefined();
expect(output.join('\n')).toContain('3 server(s)');
expect(written.mcpServers['proj-1']).toEqual({
command: 'mcpctl',
args: ['mcp', '-p', 'proj-1'],
});
expect(output.join('\n')).toContain('2 server(s)');
});
it('backward compat: claude-generate still works', async () => {
const outPath = join(tmpDir, '.mcp.json');
const cmd = createConfigCommand(
{ configDeps: { configDir: tmpDir }, log },
{ client, credentialsDeps: { configDir: tmpDir }, log },
);
await cmd.parseAsync(['claude-generate', '--project', 'proj-1', '-o', outPath], { from: 'user' });
const written = JSON.parse(readFileSync(outPath, 'utf-8'));
expect(written.mcpServers['proj-1']).toEqual({
command: 'mcpctl',
args: ['mcp', '-p', 'proj-1'],
});
});
it('uses project name as the server key', async () => {
const outPath = join(tmpDir, '.mcp.json');
const cmd = createConfigCommand(
{ configDeps: { configDir: tmpDir }, log },
);
await cmd.parseAsync(['claude', '--project', 'my-fancy-project', '-o', outPath], { from: 'user' });
const written = JSON.parse(readFileSync(outPath, 'utf-8'));
expect(Object.keys(written.mcpServers)).toEqual(['my-fancy-project']);
});
});


@@ -0,0 +1,293 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { createConfigSetupCommand } from '../../src/commands/config-setup.js';
import type { ConfigSetupDeps, ConfigSetupPrompt } from '../../src/commands/config-setup.js';
import type { SecretStore } from '@mcpctl/shared';
import { mkdtempSync, rmSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
let tempDir: string;
let logs: string[];
beforeEach(() => {
tempDir = mkdtempSync(join(tmpdir(), 'mcpctl-config-setup-test-'));
logs = [];
});
function cleanup() {
rmSync(tempDir, { recursive: true, force: true });
}
function mockSecretStore(secrets: Record<string, string> = {}): SecretStore {
const store: Record<string, string> = { ...secrets };
return {
get: vi.fn(async (key: string) => store[key] ?? null),
set: vi.fn(async (key: string, value: string) => { store[key] = value; }),
delete: vi.fn(async () => true),
backend: () => 'mock',
};
}
function mockPrompt(answers: unknown[]): ConfigSetupPrompt {
let callIndex = 0;
return {
select: vi.fn(async () => answers[callIndex++]),
input: vi.fn(async () => answers[callIndex++] as string),
password: vi.fn(async () => answers[callIndex++] as string),
confirm: vi.fn(async () => answers[callIndex++] as boolean),
};
}
function buildDeps(overrides: {
secrets?: Record<string, string>;
answers?: unknown[];
fetchModels?: ConfigSetupDeps['fetchModels'];
whichBinary?: ConfigSetupDeps['whichBinary'];
} = {}): ConfigSetupDeps {
return {
configDeps: { configDir: tempDir },
secretStore: mockSecretStore(overrides.secrets),
log: (...args: string[]) => logs.push(args.join(' ')),
prompt: mockPrompt(overrides.answers ?? []),
fetchModels: overrides.fetchModels ?? vi.fn(async () => []),
whichBinary: overrides.whichBinary ?? vi.fn(async () => '/usr/bin/gemini'),
};
}
function readConfig(): Record<string, unknown> {
const raw = readFileSync(join(tempDir, 'config.json'), 'utf-8');
return JSON.parse(raw) as Record<string, unknown>;
}
async function runSetup(deps: ConfigSetupDeps): Promise<void> {
const cmd = createConfigSetupCommand(deps);
await cmd.parseAsync([], { from: 'user' });
}
describe('config setup wizard', () => {
describe('provider: none', () => {
it('disables LLM and saves config', async () => {
const deps = buildDeps({ answers: ['simple', 'none'] });
await runSetup(deps);
const config = readConfig();
expect(config.llm).toEqual({ provider: 'none' });
expect(logs.some((l) => l.includes('LLM disabled'))).toBe(true);
cleanup();
});
});
describe('provider: gemini-cli', () => {
it('auto-detects binary path and saves config', async () => {
// Answers: select provider, select model (no binary prompt — auto-detected)
const deps = buildDeps({
answers: ['simple', 'gemini-cli', 'gemini-2.5-flash'],
whichBinary: vi.fn(async () => '/home/user/.npm-global/bin/gemini'),
});
await runSetup(deps);
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.provider).toBe('gemini-cli');
expect(llm.model).toBe('gemini-2.5-flash');
expect(llm.binaryPath).toBe('/home/user/.npm-global/bin/gemini');
expect(logs.some((l) => l.includes('Found gemini at'))).toBe(true);
cleanup();
});
it('prompts for manual path when binary not found', async () => {
// Answers: select provider, select model, enter manual path
const deps = buildDeps({
answers: ['simple', 'gemini-cli', 'gemini-2.5-flash', '/opt/gemini'],
whichBinary: vi.fn(async () => null),
});
await runSetup(deps);
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.binaryPath).toBe('/opt/gemini');
expect(logs.some((l) => l.includes('not found'))).toBe(true);
cleanup();
});
it('saves gemini-cli with custom model', async () => {
// Answers: select provider, select custom, enter model name
const deps = buildDeps({
answers: ['simple', 'gemini-cli', '__custom__', 'gemini-3.0-flash'],
whichBinary: vi.fn(async () => '/usr/bin/gemini'),
});
await runSetup(deps);
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.model).toBe('gemini-3.0-flash');
cleanup();
});
});
describe('provider: ollama', () => {
it('fetches models and allows selection', async () => {
const fetchModels = vi.fn(async () => ['llama3.2', 'codellama', 'mistral']);
// Answers: select provider, enter URL, select model
const deps = buildDeps({
answers: ['simple', 'ollama', 'http://localhost:11434', 'codellama'],
fetchModels,
});
await runSetup(deps);
expect(fetchModels).toHaveBeenCalledWith('http://localhost:11434', '/api/tags');
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.provider).toBe('ollama');
expect(llm.model).toBe('codellama');
expect(llm.url).toBe('http://localhost:11434');
cleanup();
});
it('falls back to manual input when fetch fails', async () => {
const fetchModels = vi.fn(async () => []);
// Answers: select provider, enter URL, enter model manually
const deps = buildDeps({
answers: ['simple', 'ollama', 'http://localhost:11434', 'llama3.2'],
fetchModels,
});
await runSetup(deps);
const config = readConfig();
expect((config.llm as Record<string, unknown>).model).toBe('llama3.2');
cleanup();
});
});
describe('provider: anthropic', () => {
it('prompts for API key and saves to secret store', async () => {
// Answers: select provider, enter API key, select model
const deps = buildDeps({
answers: ['simple', 'anthropic', 'sk-ant-new-key', 'claude-haiku-3-5-20241022'],
});
await runSetup(deps);
expect(deps.secretStore.set).toHaveBeenCalledWith('anthropic-api-key', 'sk-ant-new-key');
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.provider).toBe('anthropic');
expect(llm.model).toBe('claude-haiku-3-5-20241022');
// API key should NOT be in config file
expect(llm).not.toHaveProperty('apiKey');
cleanup();
});
it('shows existing key masked and allows keeping it', async () => {
// Answers: select provider, confirm change=false, select model
const deps = buildDeps({
secrets: { 'anthropic-api-key': 'sk-ant-existing-key-1234' },
answers: ['simple', 'anthropic', false, 'claude-sonnet-4-20250514'],
});
await runSetup(deps);
// Should NOT have called set (kept existing key)
expect(deps.secretStore.set).not.toHaveBeenCalled();
const config = readConfig();
expect((config.llm as Record<string, unknown>).model).toBe('claude-sonnet-4-20250514');
cleanup();
});
it('allows replacing existing key', async () => {
// Answers: select provider, confirm change=true, enter new key, select model
const deps = buildDeps({
secrets: { 'anthropic-api-key': 'sk-ant-old' },
answers: ['simple', 'anthropic', true, 'sk-ant-new', 'claude-haiku-3-5-20241022'],
});
await runSetup(deps);
expect(deps.secretStore.set).toHaveBeenCalledWith('anthropic-api-key', 'sk-ant-new');
cleanup();
});
});
describe('provider: vllm', () => {
it('fetches models from vLLM and allows selection', async () => {
const fetchModels = vi.fn(async () => ['my-model', 'llama-70b']);
// Answers: select provider, enter URL, select model
const deps = buildDeps({
answers: ['simple', 'vllm', 'http://gpu:8000', 'llama-70b'],
fetchModels,
});
await runSetup(deps);
expect(fetchModels).toHaveBeenCalledWith('http://gpu:8000', '/v1/models');
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.provider).toBe('vllm');
expect(llm.url).toBe('http://gpu:8000');
expect(llm.model).toBe('llama-70b');
cleanup();
});
});
describe('provider: openai', () => {
it('prompts for key, model, and optional custom endpoint', async () => {
// Answers: select provider, enter key, enter model, confirm custom URL=true, enter URL
const deps = buildDeps({
answers: ['simple', 'openai', 'sk-openai-key', 'gpt-4o', true, 'https://custom.api.com'],
});
await runSetup(deps);
expect(deps.secretStore.set).toHaveBeenCalledWith('openai-api-key', 'sk-openai-key');
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.provider).toBe('openai');
expect(llm.model).toBe('gpt-4o');
expect(llm.url).toBe('https://custom.api.com');
cleanup();
});
it('skips custom URL when not requested', async () => {
// Answers: select provider, enter key, enter model, confirm custom URL=false
const deps = buildDeps({
answers: ['simple', 'openai', 'sk-openai-key', 'gpt-4o-mini', false],
});
await runSetup(deps);
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.url).toBeUndefined();
cleanup();
});
});
describe('provider: deepseek', () => {
it('prompts for key and model', async () => {
// Answers: select provider, enter key, select model
const deps = buildDeps({
answers: ['simple', 'deepseek', 'sk-ds-key', 'deepseek-chat'],
});
await runSetup(deps);
expect(deps.secretStore.set).toHaveBeenCalledWith('deepseek-api-key', 'sk-ds-key');
const config = readConfig();
const llm = config.llm as Record<string, unknown>;
expect(llm.provider).toBe('deepseek');
expect(llm.model).toBe('deepseek-chat');
cleanup();
});
});
describe('output messages', () => {
it('shows restart instruction', async () => {
const deps = buildDeps({ answers: ['simple', 'gemini-cli', 'gemini-2.5-flash'] });
await runSetup(deps);
expect(logs.some((l) => l.includes('systemctl --user restart mcplocal'))).toBe(true);
cleanup();
});
it('shows configured provider and model', async () => {
const deps = buildDeps({ answers: ['simple', 'gemini-cli', 'gemini-2.5-flash'] });
await runSetup(deps);
expect(logs.some((l) => l.includes('gemini-cli') && l.includes('gemini-2.5-flash'))).toBe(true);
cleanup();
});
});
});
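A recurring assertion in the anthropic/openai/deepseek cases above is the separation of concerns: API keys go to the secret store under `<provider>-api-key`, while `config.json` carries only provider, model, and URL. A minimal sketch of that rule (the `saveProviderConfig` helper is hypothetical; `SecretStoreLike` mirrors the mock above, not the shipped service):

```typescript
// Secrets go to the store, everything else to the persisted config;
// the returned config object never carries an apiKey field.
interface SecretStoreLike { set(key: string, value: string): Promise<void> }

async function saveProviderConfig(
  store: SecretStoreLike,
  provider: string,
  model: string,
  apiKey?: string,
): Promise<{ provider: string; model: string }> {
  if (apiKey !== undefined) {
    await store.set(`${provider}-api-key`, apiKey); // key never reaches config.json
  }
  return { provider, model };
}
```

Passing `apiKey: undefined` models the "keep existing key" path: the store is untouched and only the model selection changes.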


@@ -447,4 +447,114 @@ describe('create command', () => {
});
});
});
describe('create prompt', () => {
it('creates a prompt with content', async () => {
vi.mocked(client.post).mockResolvedValueOnce({ id: 'p-1', name: 'test-prompt' });
const cmd = createCreateCommand({ client, log });
await cmd.parseAsync(['prompt', 'test-prompt', '--content', 'Hello world'], { from: 'user' });
expect(client.post).toHaveBeenCalledWith('/api/v1/prompts', {
name: 'test-prompt',
content: 'Hello world',
});
expect(output.join('\n')).toContain("prompt 'test-prompt' created");
});
it('requires content or content-file', async () => {
const cmd = createCreateCommand({ client, log });
await expect(
cmd.parseAsync(['prompt', 'no-content'], { from: 'user' }),
).rejects.toThrow('--content or --content-file is required');
});
it('--priority sets prompt priority', async () => {
vi.mocked(client.post).mockResolvedValueOnce({ id: 'p-1', name: 'pri-prompt' });
const cmd = createCreateCommand({ client, log });
await cmd.parseAsync(['prompt', 'pri-prompt', '--content', 'x', '--priority', '8'], { from: 'user' });
expect(client.post).toHaveBeenCalledWith('/api/v1/prompts', expect.objectContaining({
priority: 8,
}));
});
it('--priority validates range 1-10', async () => {
const cmd = createCreateCommand({ client, log });
await expect(
cmd.parseAsync(['prompt', 'bad', '--content', 'x', '--priority', '15'], { from: 'user' }),
).rejects.toThrow('--priority must be a number between 1 and 10');
});
it('--priority rejects zero', async () => {
const cmd = createCreateCommand({ client, log });
await expect(
cmd.parseAsync(['prompt', 'bad', '--content', 'x', '--priority', '0'], { from: 'user' }),
).rejects.toThrow('--priority must be a number between 1 and 10');
});
it('--link sets linkTarget', async () => {
vi.mocked(client.post).mockResolvedValueOnce({ id: 'p-1', name: 'linked' });
const cmd = createCreateCommand({ client, log });
await cmd.parseAsync(['prompt', 'linked', '--content', 'x', '--link', 'proj/srv:docmost://pages/abc'], { from: 'user' });
expect(client.post).toHaveBeenCalledWith('/api/v1/prompts', expect.objectContaining({
linkTarget: 'proj/srv:docmost://pages/abc',
}));
});
it('--project resolves project name to ID', async () => {
vi.mocked(client.get).mockResolvedValueOnce([{ id: 'proj-1', name: 'my-project' }] as never);
vi.mocked(client.post).mockResolvedValueOnce({ id: 'p-1', name: 'scoped' });
const cmd = createCreateCommand({ client, log });
await cmd.parseAsync(['prompt', 'scoped', '--content', 'x', '--project', 'my-project'], { from: 'user' });
expect(client.post).toHaveBeenCalledWith('/api/v1/prompts', expect.objectContaining({
projectId: 'proj-1',
}));
});
it('--project throws when project not found', async () => {
vi.mocked(client.get).mockResolvedValueOnce([] as never);
const cmd = createCreateCommand({ client, log });
await expect(
cmd.parseAsync(['prompt', 'bad', '--content', 'x', '--project', 'nope'], { from: 'user' }),
).rejects.toThrow("Project 'nope' not found");
});
});
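The `--priority` tests above fix both the accepted range (integers 1 through 10, rejecting 0 and 15) and the exact error message. A parser consistent with those assertions might look like this (a sketch inferred from the tests, not the CLI's actual validation code):

```typescript
// Accept only integers in 1..10, with the error message the tests assert.
function parsePriority(raw: string): number {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 10) {
    throw new Error('--priority must be a number between 1 and 10');
  }
  return n;
}

console.log(parsePriority('8')); // 8
```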
describe('create promptrequest', () => {
it('creates a prompt request with priority', async () => {
vi.mocked(client.post).mockResolvedValueOnce({ id: 'r-1', name: 'req' });
const cmd = createCreateCommand({ client, log });
await cmd.parseAsync(['promptrequest', 'req', '--content', 'proposal', '--priority', '7'], { from: 'user' });
expect(client.post).toHaveBeenCalledWith('/api/v1/promptrequests', expect.objectContaining({
name: 'req',
content: 'proposal',
priority: 7,
}));
});
});
describe('create project', () => {
it('creates a project with --gated', async () => {
vi.mocked(client.post).mockResolvedValueOnce({ id: 'proj-1', name: 'gated-proj' });
const cmd = createCreateCommand({ client, log });
await cmd.parseAsync(['project', 'gated-proj', '--gated'], { from: 'user' });
expect(client.post).toHaveBeenCalledWith('/api/v1/projects', expect.objectContaining({
gated: true,
}));
});
it('creates a project with --no-gated', async () => {
vi.mocked(client.post).mockResolvedValueOnce({ id: 'proj-1', name: 'open-proj' });
const cmd = createCreateCommand({ client, log });
await cmd.parseAsync(['project', 'open-proj', '--no-gated'], { from: 'user' });
expect(client.post).toHaveBeenCalledWith('/api/v1/projects', expect.objectContaining({
gated: false,
}));
});
});
});


@@ -20,7 +20,7 @@ describe('get command', () => {
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'servers']);
expect(deps.fetchResource).toHaveBeenCalledWith('servers', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('servers', undefined, undefined);
expect(deps.output[0]).toContain('NAME');
expect(deps.output[0]).toContain('TRANSPORT');
expect(deps.output.join('\n')).toContain('slack');
@@ -31,14 +31,14 @@ describe('get command', () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'srv']);
expect(deps.fetchResource).toHaveBeenCalledWith('servers', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('servers', undefined, undefined);
});
it('passes ID when provided', async () => {
const deps = makeDeps([{ id: 'srv-1', name: 'slack' }]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'servers', 'srv-1']);
expect(deps.fetchResource).toHaveBeenCalledWith('servers', 'srv-1');
expect(deps.fetchResource).toHaveBeenCalledWith('servers', 'srv-1', undefined);
});
it('outputs apply-compatible JSON format', async () => {
@@ -94,7 +94,7 @@ describe('get command', () => {
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'users']);
expect(deps.fetchResource).toHaveBeenCalledWith('users', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('users', undefined, undefined);
const text = deps.output.join('\n');
expect(text).toContain('EMAIL');
expect(text).toContain('NAME');
@@ -110,7 +110,7 @@ describe('get command', () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'user']);
expect(deps.fetchResource).toHaveBeenCalledWith('users', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('users', undefined, undefined);
});
it('lists groups with correct columns', async () => {
@@ -126,7 +126,7 @@ describe('get command', () => {
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'groups']);
expect(deps.fetchResource).toHaveBeenCalledWith('groups', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('groups', undefined, undefined);
const text = deps.output.join('\n');
expect(text).toContain('NAME');
expect(text).toContain('MEMBERS');
@@ -141,7 +141,7 @@ describe('get command', () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'group']);
expect(deps.fetchResource).toHaveBeenCalledWith('groups', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('groups', undefined, undefined);
});
it('lists rbac definitions with correct columns', async () => {
@@ -156,7 +156,7 @@ describe('get command', () => {
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'rbac']);
expect(deps.fetchResource).toHaveBeenCalledWith('rbac', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('rbac', undefined, undefined);
const text = deps.output.join('\n');
expect(text).toContain('NAME');
expect(text).toContain('SUBJECTS');
@@ -170,7 +170,7 @@ describe('get command', () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'rbac-definition']);
expect(deps.fetchResource).toHaveBeenCalledWith('rbac', undefined);
expect(deps.fetchResource).toHaveBeenCalledWith('rbac', undefined, undefined);
});
it('lists projects with new columns', async () => {
@@ -251,4 +251,87 @@ describe('get command', () => {
await cmd.parseAsync(['node', 'test', 'rbac']);
expect(deps.output[0]).toContain('No rbac found');
});
it('lists prompts with project name column', async () => {
const deps = makeDeps([
{ id: 'p-1', name: 'debug-guide', projectId: 'proj-1', project: { name: 'smart-home' }, createdAt: '2025-01-01T00:00:00Z' },
{ id: 'p-2', name: 'global-rules', projectId: null, project: null, createdAt: '2025-01-01T00:00:00Z' },
]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'prompts']);
const text = deps.output.join('\n');
expect(text).toContain('NAME');
expect(text).toContain('PROJECT');
expect(text).toContain('debug-guide');
expect(text).toContain('smart-home');
expect(text).toContain('global-rules');
expect(text).toContain('(global)');
});
it('lists promptrequests with project name column', async () => {
const deps = makeDeps([
{ id: 'pr-1', name: 'new-rule', projectId: 'proj-1', project: { name: 'my-project' }, createdBySession: 'sess-abc123def456', createdAt: '2025-01-01T00:00:00Z' },
]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'promptrequests']);
const text = deps.output.join('\n');
expect(text).toContain('new-rule');
expect(text).toContain('my-project');
expect(text).toContain('sess-abc123d');
});
it('passes --project option to fetchResource', async () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'prompts', '--project', 'smart-home']);
expect(deps.fetchResource).toHaveBeenCalledWith('prompts', undefined, { project: 'smart-home' });
});
it('does not pass project when --project is not specified', async () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'prompts']);
expect(deps.fetchResource).toHaveBeenCalledWith('prompts', undefined, undefined);
});
it('passes --all flag to fetchResource', async () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'prompts', '-A']);
expect(deps.fetchResource).toHaveBeenCalledWith('prompts', undefined, { all: true });
});
it('passes both --project and --all when both given', async () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'prompts', '--project', 'my-proj', '-A']);
expect(deps.fetchResource).toHaveBeenCalledWith('prompts', undefined, { project: 'my-proj', all: true });
});
it('resolves prompt alias', async () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'prompt']);
expect(deps.fetchResource).toHaveBeenCalledWith('prompts', undefined, undefined);
});
it('resolves pr alias to promptrequests', async () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'pr']);
expect(deps.fetchResource).toHaveBeenCalledWith('promptrequests', undefined, undefined);
});
it('shows no results message for empty prompts list', async () => {
const deps = makeDeps([]);
const cmd = createGetCommand(deps);
await cmd.parseAsync(['node', 'test', 'prompts']);
expect(deps.output[0]).toContain('No prompts found');
});
});
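The updated expectations above show `fetchResource` gaining a third options argument that is `undefined` when neither `--project` nor `-A`/`--all` is given, and otherwise contains only the flags actually set. A sketch of that flag-to-options mapping (the helper name is illustrative; the real command may build the object inline):

```typescript
// Map CLI flags to the third fetchResource argument: undefined when no
// flags are set, otherwise only the keys that were actually provided.
interface GetOptions { project?: string; all?: boolean }

function buildGetOptions(project?: string, all?: boolean): GetOptions | undefined {
  if (project === undefined && !all) return undefined;
  const opts: GetOptions = {};
  if (project !== undefined) opts.project = project;
  if (all) opts.all = true;
  return opts;
}

console.log(buildGetOptions());                // undefined
console.log(buildGetOptions('smart-home'));    // { project: 'smart-home' }
console.log(buildGetOptions('my-proj', true)); // { project: 'my-proj', all: true }
```

Keeping the no-flag case as `undefined` rather than `{}` is what lets the existing `toHaveBeenCalledWith('servers', undefined, undefined)` assertions pass unchanged for non-prompt resources.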


@@ -0,0 +1,481 @@
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import http from 'node:http';
import { Readable, Writable } from 'node:stream';
import { runMcpBridge, createMcpCommand } from '../../src/commands/mcp.js';
// ---- Mock MCP server (simulates mcplocal project endpoint) ----
interface RecordedRequest {
method: string;
url: string;
headers: http.IncomingHttpHeaders;
body: string;
}
let mockServer: http.Server;
let mockPort: number;
const recorded: RecordedRequest[] = [];
let sessionCounter = 0;
function makeInitializeResponse(id: number | string) {
return JSON.stringify({
jsonrpc: '2.0',
id,
result: {
protocolVersion: '2024-11-05',
capabilities: { tools: {} },
serverInfo: { name: 'test-server', version: '1.0.0' },
},
});
}
function makeToolsListResponse(id: number | string) {
return JSON.stringify({
jsonrpc: '2.0',
id,
result: {
tools: [
{ name: 'grafana/query', description: 'Query Grafana', inputSchema: { type: 'object', properties: {} } },
],
},
});
}
function makeToolCallResponse(id: number | string) {
return JSON.stringify({
jsonrpc: '2.0',
id,
result: {
content: [{ type: 'text', text: 'tool result' }],
},
});
}
beforeAll(async () => {
mockServer = http.createServer((req, res) => {
const chunks: Buffer[] = [];
req.on('data', (c: Buffer) => chunks.push(c));
req.on('end', () => {
const body = Buffer.concat(chunks).toString('utf-8');
recorded.push({ method: req.method ?? '', url: req.url ?? '', headers: req.headers, body });
if (req.method === 'DELETE') {
res.writeHead(200);
res.end();
return;
}
if (req.method === 'POST' && req.url?.startsWith('/projects/')) {
let sessionId = req.headers['mcp-session-id'] as string | undefined;
// Assign session ID on first request
if (!sessionId) {
sessionCounter++;
sessionId = `session-${sessionCounter}`;
}
res.setHeader('mcp-session-id', sessionId);
// Parse JSON-RPC and respond based on method
try {
const rpc = JSON.parse(body) as { id: number | string; method: string };
let responseBody: string;
switch (rpc.method) {
case 'initialize':
responseBody = makeInitializeResponse(rpc.id);
break;
case 'tools/list':
responseBody = makeToolsListResponse(rpc.id);
break;
case 'tools/call':
responseBody = makeToolCallResponse(rpc.id);
break;
default:
responseBody = JSON.stringify({ jsonrpc: '2.0', id: rpc.id, error: { code: -32601, message: 'Method not found' } });
}
// Respond in SSE format for /projects/sse-project/mcp
if (req.url?.includes('sse-project')) {
res.writeHead(200, { 'Content-Type': 'text/event-stream' });
res.end(`event: message\ndata: ${responseBody}\n\n`);
} else {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(responseBody);
}
} catch {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Invalid JSON' }));
}
return;
}
res.writeHead(404);
res.end();
});
});
await new Promise<void>((resolve) => {
mockServer.listen(0, () => {
const addr = mockServer.address();
if (addr && typeof addr === 'object') {
mockPort = addr.port;
}
resolve();
});
});
});
afterAll(() => {
mockServer.close();
});
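For the `sse-project` path the mock answers in SSE framing (`event: message` plus a `data:` line) instead of plain JSON, so the bridge must extract the payload from the event body. A minimal extractor for a single-event response like the mock's (real SSE streams can split a payload across multiple `data:` lines, which this sketch does not handle):

```typescript
// Pull the JSON payload out of a single SSE "message" event of the form
// "event: message\ndata: {...}\n\n", as emitted by the mock server above.
function parseSseData(body: string): string | undefined {
  for (const line of body.split('\n')) {
    if (line.startsWith('data: ')) return line.slice('data: '.length);
  }
  return undefined;
}

console.log(parseSseData('event: message\ndata: {"ok":true}\n\n')); // {"ok":true}
```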
// ---- Helper to run bridge with mock streams ----
function createMockStreams() {
const stdoutChunks: string[] = [];
const stderrChunks: string[] = [];
const stdout = new Writable({
write(chunk: Buffer, _encoding, callback) {
stdoutChunks.push(chunk.toString());
callback();
},
});
const stderr = new Writable({
write(chunk: Buffer, _encoding, callback) {
stderrChunks.push(chunk.toString());
callback();
},
});
return { stdout, stderr, stdoutChunks, stderrChunks };
}
function pushAndEnd(stdin: Readable, lines: string[]) {
for (const line of lines) {
stdin.push(line + '\n');
}
stdin.push(null); // EOF
}
// ---- Tests ----
describe('MCP STDIO Bridge', () => {
beforeAll(() => {
recorded.length = 0;
sessionCounter = 0;
});
it('forwards initialize request and returns response', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout, stdoutChunks } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
pushAndEnd(stdin, [initMsg]);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
// Verify request was made to correct URL
expect(recorded.some((r) => r.url === '/projects/test-project/mcp' && r.method === 'POST')).toBe(true);
// Verify response on stdout
const output = stdoutChunks.join('');
const parsed = JSON.parse(output.trim());
expect(parsed.result.serverInfo.name).toBe('test-server');
expect(parsed.result.protocolVersion).toBe('2024-11-05');
});
it('sends session ID on subsequent requests', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout, stdoutChunks } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
const toolsListMsg = JSON.stringify({ jsonrpc: '2.0', id: 2, method: 'tools/list', params: {} });
pushAndEnd(stdin, [initMsg, toolsListMsg]);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
// First POST should NOT have mcp-session-id header
const firstPost = recorded.find((r) => r.method === 'POST' && r.body.includes('initialize'));
expect(firstPost).toBeDefined();
expect(firstPost!.headers['mcp-session-id']).toBeUndefined();
// Second POST SHOULD have mcp-session-id header
const secondPost = recorded.find((r) => r.method === 'POST' && r.body.includes('tools/list'));
expect(secondPost).toBeDefined();
expect(secondPost!.headers['mcp-session-id']).toMatch(/^session-/);
// Verify tools/list response
const lines = stdoutChunks.join('').trim().split('\n');
expect(lines.length).toBe(2);
const toolsResponse = JSON.parse(lines[1]);
expect(toolsResponse.result.tools[0].name).toBe('grafana/query');
});
it('forwards tools/call and returns result', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout, stdoutChunks } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
const callMsg = JSON.stringify({
jsonrpc: '2.0', id: 2, method: 'tools/call',
params: { name: 'grafana/query', arguments: { query: 'test' } },
});
pushAndEnd(stdin, [initMsg, callMsg]);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
const lines = stdoutChunks.join('').trim().split('\n');
expect(lines.length).toBe(2);
const callResponse = JSON.parse(lines[1]);
expect(callResponse.result.content[0].text).toBe('tool result');
});
it('forwards Authorization header when token provided', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
pushAndEnd(stdin, [initMsg]);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
token: 'my-secret-token',
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
const post = recorded.find((r) => r.method === 'POST');
expect(post).toBeDefined();
expect(post!.headers['authorization']).toBe('Bearer my-secret-token');
});
it('does not send Authorization header when no token', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
pushAndEnd(stdin, [initMsg]);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
const post = recorded.find((r) => r.method === 'POST');
expect(post).toBeDefined();
expect(post!.headers['authorization']).toBeUndefined();
});
it('sends DELETE to clean up session on stdin EOF', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
pushAndEnd(stdin, [initMsg]);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
// Should have a DELETE request for session cleanup
const deleteReq = recorded.find((r) => r.method === 'DELETE');
expect(deleteReq).toBeDefined();
expect(deleteReq!.headers['mcp-session-id']).toMatch(/^session-/);
});
it('does not send DELETE if no session was established', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout } = createMockStreams();
// Push EOF immediately with no messages
stdin.push(null);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
expect(recorded.filter((r) => r.method === 'DELETE')).toHaveLength(0);
});
it('writes errors to stderr, not stdout', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout, stdoutChunks, stderr, stderrChunks } = createMockStreams();
// Send to a non-existent port to trigger connection error
const badMsg = JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'initialize', params: {} });
pushAndEnd(stdin, [badMsg]);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: 'http://localhost:1', // will fail to connect
stdin, stdout, stderr,
});
// Error should be on stderr
expect(stderrChunks.join('')).toContain('MCP bridge error');
// stdout should be empty (no corrupted output)
expect(stdoutChunks.join('')).toBe('');
});
it('skips blank lines in stdin', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout, stdoutChunks } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
pushAndEnd(stdin, ['', ' ', initMsg, '']);
await runMcpBridge({
projectName: 'test-project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
// Only one POST (for the actual message)
const posts = recorded.filter((r) => r.method === 'POST');
expect(posts).toHaveLength(1);
// One response line
const lines = stdoutChunks.join('').trim().split('\n');
expect(lines).toHaveLength(1);
});
it('handles SSE (text/event-stream) responses', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout, stdoutChunks } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
pushAndEnd(stdin, [initMsg]);
await runMcpBridge({
projectName: 'sse-project', // triggers SSE response from mock server
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr: new Writable({ write(_, __, cb) { cb(); } }),
});
// Should extract JSON from SSE data: lines
const output = stdoutChunks.join('').trim();
const parsed = JSON.parse(output);
expect(parsed.result.serverInfo.name).toBe('test-server');
});
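// The SSE handling exercised above can be sketched in isolation. `parseSseData`
// is a hypothetical helper for illustration only; the bridge's real parsing
// may differ in buffering and event handling.

```typescript
// Hypothetical sketch: extract JSON payloads from a text/event-stream body
// by keeping only the `data:` lines and parsing each one.
function parseSseData(body: string): unknown[] {
  return body
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => JSON.parse(line.slice('data:'.length).trim()));
}

const sseBody =
  'event: message\ndata: {"jsonrpc":"2.0","id":1,"result":{"serverInfo":{"name":"test-server"}}}\n\n';
console.log(parseSseData(sseBody));
```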
it('URL-encodes project name', async () => {
recorded.length = 0;
const stdin = new Readable({ read() {} });
const { stdout, stderr } = createMockStreams();
const initMsg = JSON.stringify({
jsonrpc: '2.0', id: 1, method: 'initialize',
params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test', version: '1.0' } },
});
pushAndEnd(stdin, [initMsg]);
await runMcpBridge({
projectName: 'my project',
mcplocalUrl: `http://localhost:${mockPort}`,
stdin, stdout, stderr,
});
const post = recorded.find((r) => r.method === 'POST');
expect(post?.url).toBe('/projects/my%20project/mcp');
});
});
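// The URL-encoding asserted in the last test above can be shown in isolation.
// `buildMcpUrl` is an illustrative helper, not part of the codebase; the real
// path construction lives inside runMcpBridge.

```typescript
// Hypothetical sketch of deriving the per-project MCP path.
// encodeURIComponent replaces spaces and reserved characters with %XX escapes.
function buildMcpUrl(baseUrl: string, projectName: string): string {
  return `${baseUrl}/projects/${encodeURIComponent(projectName)}/mcp`;
}

console.log(buildMcpUrl('http://localhost:3200', 'my project'));
// → http://localhost:3200/projects/my%20project/mcp
```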
describe('createMcpCommand', () => {
it('accepts --project option directly', () => {
const cmd = createMcpCommand({
getProject: () => undefined,
configLoader: () => ({ mcplocalUrl: 'http://localhost:3200' }),
credentialsLoader: () => null,
});
const opt = cmd.options.find((o) => o.long === '--project');
expect(opt).toBeDefined();
expect(opt!.short).toBe('-p');
});
it('parses --project from command args', () => {
const cmd = createMcpCommand({
getProject: () => undefined,
configLoader: () => ({ mcplocalUrl: `http://localhost:${mockPort}` }),
credentialsLoader: () => null,
});
// We verify that option parsing works, without running the full bridge
const parsed = cmd.parse(['--project', 'test-proj'], { from: 'user' });
expect(parsed.opts().project).toBe('test-proj');
});
it('parses -p shorthand from command args', () => {
const cmd = createMcpCommand({
getProject: () => undefined,
configLoader: () => ({ mcplocalUrl: `http://localhost:${mockPort}` }),
credentialsLoader: () => null,
});
const parsed = cmd.parse(['-p', 'my-project'], { from: 'user' });
expect(parsed.opts().project).toBe('my-project');
});
});


@@ -30,8 +30,6 @@ describe('project with new fields', () => {
'project', 'smart-home',
'-d', 'Smart home project',
'--proxy-mode', 'filtered',
'--proxy-mode-llm-provider', 'gemini-cli',
'--proxy-mode-llm-model', 'gemini-2.0-flash',
'--server', 'my-grafana',
'--server', 'my-ha',
], { from: 'user' });
@@ -40,8 +38,6 @@ describe('project with new fields', () => {
name: 'smart-home',
description: 'Smart home project',
proxyMode: 'filtered',
llmProvider: 'gemini-cli',
llmModel: 'gemini-2.0-flash',
servers: ['my-grafana', 'my-ha'],
}));
});


@@ -3,19 +3,39 @@ import { mkdtempSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { createStatusCommand } from '../../src/commands/status.js';
import type { StatusCommandDeps } from '../../src/commands/status.js';
import { saveConfig, DEFAULT_CONFIG } from '../../src/config/index.js';
import { saveCredentials } from '../../src/auth/index.js';
let tempDir: string;
let output: string[];
let written: string[];
function log(...args: string[]) {
output.push(args.join(' '));
}
function write(text: string) {
written.push(text);
}
function baseDeps(overrides?: Partial<StatusCommandDeps>): Partial<StatusCommandDeps> {
return {
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
write,
checkHealth: async () => true,
fetchProviders: async () => null,
isTTY: false,
...overrides,
};
}
beforeEach(() => {
tempDir = mkdtempSync(join(tmpdir(), 'mcpctl-status-test-'));
output = [];
written = [];
});
afterEach(() => {
@@ -24,12 +44,7 @@ afterEach(() => {
describe('status command', () => {
it('shows status in table format', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync([], { from: 'user' });
const out = output.join('\n');
expect(out).toContain('mcpctl v');
@@ -39,46 +54,26 @@ describe('status command', () => {
});
it('shows unreachable when daemons are down', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => false,
});
const cmd = createStatusCommand(baseDeps({ checkHealth: async () => false }));
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('unreachable');
});
it('shows not logged in when no credentials', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('not logged in');
});
it('shows logged in user when credentials exist', async () => {
saveCredentials({ token: 'tok', mcpdUrl: 'http://x:3100', user: 'alice@example.com' }, { configDir: tempDir });
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('logged in as alice@example.com');
});
it('shows status in JSON format', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync(['-o', 'json'], { from: 'user' });
const parsed = JSON.parse(output[0]) as Record<string, unknown>;
expect(parsed['version']).toBe('0.1.0');
@@ -87,12 +82,7 @@ describe('status command', () => {
});
it('shows status in YAML format', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => false,
});
const cmd = createStatusCommand(baseDeps({ checkHealth: async () => false }));
await cmd.parseAsync(['-o', 'yaml'], { from: 'user' });
expect(output[0]).toContain('mcplocalReachable: false');
});
@@ -100,15 +90,12 @@ describe('status command', () => {
it('checks correct URLs from config', async () => {
saveConfig({ ...DEFAULT_CONFIG, mcplocalUrl: 'http://local:3200', mcpdUrl: 'http://remote:3100' }, { configDir: tempDir });
const checkedUrls: string[] = [];
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
const cmd = createStatusCommand(baseDeps({
checkHealth: async (url) => {
checkedUrls.push(url);
return false;
},
});
}));
await cmd.parseAsync([], { from: 'user' });
expect(checkedUrls).toContain('http://local:3200');
expect(checkedUrls).toContain('http://remote:3100');
@@ -116,14 +103,100 @@ describe('status command', () => {
it('shows registries from config', async () => {
saveConfig({ ...DEFAULT_CONFIG, registries: ['official'] }, { configDir: tempDir });
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('official');
expect(output.join('\n')).not.toContain('glama');
});
it('shows LLM not configured hint when no LLM is set', async () => {
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync([], { from: 'user' });
const out = output.join('\n');
expect(out).toContain('LLM:');
expect(out).toContain('not configured');
expect(out).toContain('mcpctl config setup');
});
it('shows green check when LLM is healthy (non-TTY)', async () => {
saveConfig({ ...DEFAULT_CONFIG, llm: { provider: 'anthropic', model: 'claude-haiku-3-5-20241022' } }, { configDir: tempDir });
const cmd = createStatusCommand(baseDeps({ checkLlm: async () => 'ok' }));
await cmd.parseAsync([], { from: 'user' });
const out = output.join('\n');
expect(out).toContain('anthropic / claude-haiku-3-5-20241022');
expect(out).toContain('✓ ok');
});
it('shows red cross when LLM check fails (non-TTY)', async () => {
saveConfig({ ...DEFAULT_CONFIG, llm: { provider: 'gemini-cli', model: 'gemini-2.5-flash' } }, { configDir: tempDir });
const cmd = createStatusCommand(baseDeps({ checkLlm: async () => 'not authenticated' }));
await cmd.parseAsync([], { from: 'user' });
const out = output.join('\n');
expect(out).toContain('✗ not authenticated');
});
it('shows error message from mcplocal', async () => {
saveConfig({ ...DEFAULT_CONFIG, llm: { provider: 'gemini-cli', model: 'gemini-2.5-flash' } }, { configDir: tempDir });
const cmd = createStatusCommand(baseDeps({ checkLlm: async () => 'binary not found' }));
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('✗ binary not found');
});
it('queries mcplocal URL for LLM health', async () => {
saveConfig({ ...DEFAULT_CONFIG, mcplocalUrl: 'http://custom:9999', llm: { provider: 'gemini-cli', model: 'gemini-2.5-flash' } }, { configDir: tempDir });
let queriedUrl = '';
const cmd = createStatusCommand(baseDeps({
checkLlm: async (url) => { queriedUrl = url; return 'ok'; },
}));
await cmd.parseAsync([], { from: 'user' });
expect(queriedUrl).toBe('http://custom:9999');
});
it('uses spinner on TTY and writes final result', async () => {
saveConfig({ ...DEFAULT_CONFIG, llm: { provider: 'gemini-cli', model: 'gemini-2.5-flash' } }, { configDir: tempDir });
const cmd = createStatusCommand(baseDeps({
isTTY: true,
checkLlm: async () => 'ok',
}));
await cmd.parseAsync([], { from: 'user' });
// On TTY, the final LLM line goes through write(), not log()
const finalWrite = written[written.length - 1];
expect(finalWrite).toContain('gemini-cli / gemini-2.5-flash');
expect(finalWrite).toContain('✓ ok');
});
it('uses spinner on TTY and shows failure', async () => {
saveConfig({ ...DEFAULT_CONFIG, llm: { provider: 'gemini-cli', model: 'gemini-2.5-flash' } }, { configDir: tempDir });
const cmd = createStatusCommand(baseDeps({
isTTY: true,
checkLlm: async () => 'not authenticated',
}));
await cmd.parseAsync([], { from: 'user' });
const finalWrite = written[written.length - 1];
expect(finalWrite).toContain('✗ not authenticated');
});
it('shows not configured when LLM provider is none', async () => {
saveConfig({ ...DEFAULT_CONFIG, llm: { provider: 'none' } }, { configDir: tempDir });
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('not configured');
});
it('includes llm and llmStatus in JSON output', async () => {
saveConfig({ ...DEFAULT_CONFIG, llm: { provider: 'gemini-cli', model: 'gemini-2.5-flash' } }, { configDir: tempDir });
const cmd = createStatusCommand(baseDeps({ checkLlm: async () => 'ok' }));
await cmd.parseAsync(['-o', 'json'], { from: 'user' });
const parsed = JSON.parse(output[0]) as Record<string, unknown>;
expect(parsed['llm']).toBe('gemini-cli / gemini-2.5-flash');
expect(parsed['llmStatus']).toBe('ok');
});
it('includes null llm in JSON output when not configured', async () => {
const cmd = createStatusCommand(baseDeps());
await cmd.parseAsync(['-o', 'json'], { from: 'user' });
const parsed = JSON.parse(output[0]) as Record<string, unknown>;
expect(parsed['llm']).toBeNull();
expect(parsed['llmStatus']).toBeNull();
});
});


@@ -0,0 +1,176 @@
import { describe, it, expect } from 'vitest';
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
const root = join(dirname(fileURLToPath(import.meta.url)), '..', '..', '..');
const fishFile = readFileSync(join(root, 'completions', 'mcpctl.fish'), 'utf-8');
const bashFile = readFileSync(join(root, 'completions', 'mcpctl.bash'), 'utf-8');
describe('fish completions', () => {
it('erases stale completions at the top', () => {
const lines = fishFile.split('\n');
const firstComplete = lines.findIndex((l) => l.startsWith('complete '));
expect(lines[firstComplete]).toContain('-e');
});
it('does not offer resource types without __mcpctl_needs_resource_type guard', () => {
const resourceTypes = ['servers', 'instances', 'secrets', 'templates', 'projects', 'users', 'groups', 'rbac', 'prompts', 'promptrequests'];
const lines = fishFile.split('\n').filter((l) => l.startsWith('complete '));
for (const line of lines) {
// Find lines that offer resource types as positional args
const offersResourceType = resourceTypes.some((r) => {
// Match `-a "...servers..."` or `-a 'servers projects'`
const aMatch = line.match(/-a\s+['"]([^'"]+)['"]/);
if (!aMatch) return false;
return aMatch[1].split(/\s+/).includes(r);
});
if (!offersResourceType) continue;
// Skip the help completions line and the -e line
if (line.includes('__fish_seen_subcommand_from help')) continue;
// Skip project-scoped command offerings (those offer commands, not resource types)
if (line.includes('attach-server') || line.includes('detach-server')) continue;
// Skip lines that offer commands (not resource types)
if (line.includes("-d 'Show") || line.includes("-d 'Manage") || line.includes("-d 'Authenticate") ||
line.includes("-d 'Log out'") || line.includes("-d 'Get instance") || line.includes("-d 'Create a resource'") ||
line.includes("-d 'Edit a resource'") || line.includes("-d 'Apply") || line.includes("-d 'Backup") ||
line.includes("-d 'Restore") || line.includes("-d 'List resources") || line.includes("-d 'Delete a resource'")) continue;
// Lines offering resource types MUST have __mcpctl_needs_resource_type in their condition
expect(line, `Resource type completion missing guard: ${line}`).toContain('__mcpctl_needs_resource_type');
}
});
it('resource name completions require resource type to be selected', () => {
const lines = fishFile.split('\n').filter((l) => l.startsWith('complete') && l.includes('__mcpctl_resource_names'));
expect(lines.length).toBeGreaterThan(0);
for (const line of lines) {
expect(line).toContain('not __mcpctl_needs_resource_type');
}
});
it('defines --project option', () => {
expect(fishFile).toContain("complete -c mcpctl -l project");
});
it('attach-server command only shows with --project', () => {
// Only check lines that OFFER attach-server as a command (via -a attach-server), not argument completions
const lines = fishFile.split('\n').filter((l) =>
l.startsWith('complete') && l.includes("-a attach-server"));
expect(lines.length).toBeGreaterThan(0);
for (const line of lines) {
expect(line).toContain('__mcpctl_has_project');
}
});
it('detach-server command only shows with --project', () => {
const lines = fishFile.split('\n').filter((l) =>
l.startsWith('complete') && l.includes("-a detach-server"));
expect(lines.length).toBeGreaterThan(0);
for (const line of lines) {
expect(line).toContain('__mcpctl_has_project');
}
});
it('resource name functions use jq .[][].name to unwrap wrapped JSON and avoid nested matches', () => {
// API returns { "resources": [...] } not [...], so .[].name fails silently.
// Must use .[][].name to unwrap the outer object then iterate the array.
// Also must not use string match regex which matches nested name fields.
const resourceNamesFn = fishFile.match(/function __mcpctl_resource_names[\s\S]*?^end/m)?.[0] ?? '';
const projectNamesFn = fishFile.match(/function __mcpctl_project_names[\s\S]*?^end/m)?.[0] ?? '';
expect(resourceNamesFn, '__mcpctl_resource_names must use jq .[][].name').toContain("jq -r '.[][].name'");
expect(resourceNamesFn, '__mcpctl_resource_names must not use string match on name').not.toMatch(/string match.*"name"/);
expect(projectNamesFn, '__mcpctl_project_names must use jq .[][].name').toContain("jq -r '.[][].name'");
expect(projectNamesFn, '__mcpctl_project_names must not use string match on name').not.toMatch(/string match.*"name"/);
});
it('instances use server.name instead of name', () => {
const resourceNamesFn = fishFile.match(/function __mcpctl_resource_names[\s\S]*?^end/m)?.[0] ?? '';
expect(resourceNamesFn, 'must handle instances via server.name').toContain('.server.name');
});
it('attach-server completes with available (unattached) servers and guards against repeat', () => {
const attachLine = fishFile.split('\n').find((l) =>
l.startsWith('complete') && l.includes('__fish_seen_subcommand_from attach-server'));
expect(attachLine, 'attach-server argument completion must exist').toBeDefined();
expect(attachLine, 'attach-server must use __mcpctl_available_servers').toContain('__mcpctl_available_servers');
expect(attachLine, 'attach-server must guard with __mcpctl_needs_server_arg').toContain('__mcpctl_needs_server_arg');
});
it('detach-server completes with project servers and guards against repeat', () => {
const detachLine = fishFile.split('\n').find((l) =>
l.startsWith('complete') && l.includes('__fish_seen_subcommand_from detach-server'));
expect(detachLine, 'detach-server argument completion must exist').toBeDefined();
expect(detachLine, 'detach-server must use __mcpctl_project_servers').toContain('__mcpctl_project_servers');
expect(detachLine, 'detach-server must guard with __mcpctl_needs_server_arg').toContain('__mcpctl_needs_server_arg');
});
it('non-project commands do not show with --project', () => {
const nonProjectCmds = ['status', 'login', 'logout', 'config', 'apply', 'backup', 'restore'];
const lines = fishFile.split('\n').filter((l) => l.startsWith('complete') && l.includes('-a '));
for (const cmd of nonProjectCmds) {
const cmdLines = lines.filter((l) => {
const aMatch = l.match(/-a\s+(\S+)/);
return aMatch && aMatch[1].replace(/['"]/g, '') === cmd;
});
for (const line of cmdLines) {
expect(line, `${cmd} should require 'not __mcpctl_has_project'`).toContain('not __mcpctl_has_project');
}
}
});
});
describe('bash completions', () => {
it('separates project commands from regular commands', () => {
expect(bashFile).toContain('project_commands=');
expect(bashFile).toContain('attach-server detach-server');
});
it('checks has_project before offering project commands', () => {
expect(bashFile).toContain('if $has_project');
expect(bashFile).toContain('$project_commands');
});
it('fetches resource names dynamically after resource type', () => {
expect(bashFile).toContain('_mcpctl_resource_names');
// get/describe/delete should use resource_names when resource_type is set
expect(bashFile).toMatch(/get\|describe\|delete\)[\s\S]*?_mcpctl_resource_names/);
});
it('attach-server filters out already-attached servers and guards against repeat', () => {
const attachBlock = bashFile.match(/attach-server\)[\s\S]*?return ;;/)?.[0] ?? '';
expect(attachBlock, 'attach-server must use _mcpctl_get_project_value').toContain('_mcpctl_get_project_value');
expect(attachBlock, 'attach-server must query project servers to exclude').toContain('--project');
expect(attachBlock, 'attach-server must check position to prevent repeat').toContain('cword - subcmd_pos');
});
it('detach-server shows only project servers and guards against repeat', () => {
const detachBlock = bashFile.match(/detach-server\)[\s\S]*?return ;;/)?.[0] ?? '';
expect(detachBlock, 'detach-server must use _mcpctl_get_project_value').toContain('_mcpctl_get_project_value');
expect(detachBlock, 'detach-server must query project servers').toContain('--project');
expect(detachBlock, 'detach-server must check position to prevent repeat').toContain('cword - subcmd_pos');
});
it('instances use server.name instead of name', () => {
const fnMatch = bashFile.match(/_mcpctl_resource_names\(\)[\s\S]*?\n\s*\}/)?.[0] ?? '';
expect(fnMatch, 'must handle instances via .server.name').toContain('.server.name');
});
it('defines --project option', () => {
expect(bashFile).toContain('--project');
});
it('resource name function uses jq .[][].name to unwrap wrapped JSON and avoid nested matches', () => {
const fnMatch = bashFile.match(/_mcpctl_resource_names\(\)[\s\S]*?\n\s*\}/)?.[0] ?? '';
expect(fnMatch, '_mcpctl_resource_names must use jq .[][].name').toContain("jq -r '.[][].name'");
expect(fnMatch, '_mcpctl_resource_names must not use grep on name').not.toMatch(/grep.*"name"/);
// Guard against .[].name (single bracket) which fails on wrapped JSON
expect(fnMatch, '_mcpctl_resource_names must not use .[].name (needs .[][].name)').not.toMatch(/jq.*'\.\[\]\.name'/);
});
});
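// The `.[][].name` requirement these tests enforce can be illustrated outside
// jq. This TypeScript sketch mirrors what the two filters do on the wrapped
// API shape described in the test comments; the payload values are made up.

```typescript
// The API wraps the list: { "resources": [...] }, not a bare array.
const payload = { resources: [{ name: 'grafana' }, { name: 'home-assistant' }] };

// jq '.[].name' iterates the OUTER object's values, yielding the array itself,
// so `.name` on it is undefined and the filter fails silently.
const singleBracket = Object.values(payload).map((v) => (v as { name?: string }).name);
// → [undefined]

// jq '.[][].name' iterates the outer object, then each array, reaching items.
const doubleBracket = Object.values(payload)
  .flat()
  .map((v) => (v as { name: string }).name);
// → ['grafana', 'home-assistant']
console.log(singleBracket, doubleBracket);
```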


@@ -47,7 +47,7 @@ describe('CLI command registration (e2e)', () => {
expect(subcommands).toContain('reset');
});
it('create command has user, group, rbac subcommands', () => {
it('create command has user, group, rbac, prompt, promptrequest subcommands', () => {
const program = createProgram();
const create = program.commands.find((c) => c.name() === 'create');
expect(create).toBeDefined();
@@ -59,6 +59,24 @@ describe('CLI command registration (e2e)', () => {
expect(subcommands).toContain('user');
expect(subcommands).toContain('group');
expect(subcommands).toContain('rbac');
expect(subcommands).toContain('prompt');
expect(subcommands).toContain('promptrequest');
});
it('get command accepts --project option', () => {
const program = createProgram();
const get = program.commands.find((c) => c.name() === 'get');
expect(get).toBeDefined();
const projectOpt = get!.options.find((o) => o.long === '--project');
expect(projectOpt).toBeDefined();
expect(projectOpt!.description).toContain('project');
});
it('program-level --project option is defined', () => {
const program = createProgram();
const projectOpt = program.options.find((o) => o.long === '--project');
expect(projectOpt).toBeDefined();
});
it('displays version', () => {


@@ -0,0 +1,11 @@
-- AlterTable: Add gated flag to Project
ALTER TABLE "Project" ADD COLUMN "gated" BOOLEAN NOT NULL DEFAULT true;
-- AlterTable: Add priority, summary, chapters, linkTarget to Prompt
ALTER TABLE "Prompt" ADD COLUMN "priority" INTEGER NOT NULL DEFAULT 5;
ALTER TABLE "Prompt" ADD COLUMN "summary" TEXT;
ALTER TABLE "Prompt" ADD COLUMN "chapters" JSONB;
ALTER TABLE "Prompt" ADD COLUMN "linkTarget" TEXT;
-- AlterTable: Add priority to PromptRequest
ALTER TABLE "PromptRequest" ADD COLUMN "priority" INTEGER NOT NULL DEFAULT 5;


@@ -170,7 +170,9 @@ model Project {
id String @id @default(cuid())
name String @unique
description String @default("")
prompt String @default("")
proxyMode String @default("direct")
gated Boolean @default(true)
llmProvider String?
llmModel String?
ownerId String
@@ -178,8 +180,10 @@ model Project {
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
owner User @relation(fields: [ownerId], references: [id], onDelete: Cascade)
servers ProjectServer[]
owner User @relation(fields: [ownerId], references: [id], onDelete: Cascade)
servers ProjectServer[]
prompts Prompt[]
promptRequests PromptRequest[]
@@index([name])
@@index([ownerId])
@@ -227,6 +231,46 @@ enum InstanceStatus {
ERROR
}
// ── Prompts (approved content resources) ──
model Prompt {
id String @id @default(cuid())
name String
content String @db.Text
projectId String?
priority Int @default(5)
summary String? @db.Text
chapters Json?
linkTarget String?
version Int @default(1)
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
project Project? @relation(fields: [projectId], references: [id], onDelete: Cascade)
@@unique([name, projectId])
@@index([projectId])
}
// ── Prompt Requests (pending proposals from LLM sessions) ──
model PromptRequest {
id String @id @default(cuid())
name String
content String @db.Text
projectId String?
priority Int @default(5)
createdBySession String?
createdByUserId String?
createdAt DateTime @default(now())
project Project? @relation(fields: [projectId], references: [id], onDelete: Cascade)
@@unique([name, projectId])
@@index([projectId])
@@index([createdBySession])
}
// ── Audit Logs ──
model AuditLog {


@@ -0,0 +1,102 @@
/**
* Bootstrap the mcpctl-system project and its system prompts.
*
* This runs on every mcpd startup and uses upserts to be idempotent.
* System prompts are editable by users but will be re-created if deleted.
*/
import type { PrismaClient } from '@prisma/client';
/** Well-known owner ID for system-managed resources. */
export const SYSTEM_OWNER_ID = 'system';
/** Well-known project name for system prompts. */
export const SYSTEM_PROJECT_NAME = 'mcpctl-system';
interface SystemPromptDef {
name: string;
priority: number;
content: string;
}
const SYSTEM_PROMPTS: SystemPromptDef[] = [
{
name: 'gate-instructions',
priority: 10,
content: `This project uses a gated session. Before you can access tools, you must describe your current task by calling begin_session with 3-7 keywords.
After calling begin_session, you will receive:
1. Relevant project prompts matched to your keywords
2. A list of other available prompts
3. Full access to all project tools
Choose your keywords carefully — they determine which context you receive.`,
},
{
name: 'gate-encouragement',
priority: 10,
content: `If any of the listed prompts seem relevant to your work, or if you encounter unfamiliar patterns, conventions, or constraints during implementation, use read_prompts({ tags: [...] }) to retrieve them.
It is better to check and not need it than to proceed without important context. The project maintainers have documented common pitfalls, architecture decisions, and required patterns — taking 10 seconds to retrieve a prompt can save hours of rework.`,
},
{
name: 'gate-intercept-preamble',
priority: 10,
content: `The following project context was automatically retrieved based on your tool call. You bypassed the begin_session step, so this context was matched using keywords extracted from your tool invocation.
Review this context carefully — it may contain important guidelines, constraints, or patterns relevant to your work. If you need more context, use read_prompts({ tags: [...] }) at any time.`,
},
{
name: 'session-greeting',
priority: 10,
content: `Welcome to this project. To get started, call begin_session with keywords describing your task.
Example: begin_session({ tags: ["zigbee", "pairing", "mqtt"] })
This will load relevant project context, policies, and guidelines tailored to your work.`,
},
];
/**
* Ensure the mcpctl-system project and its system prompts exist.
* Uses upserts so this is safe to call on every startup.
*/
export async function bootstrapSystemProject(prisma: PrismaClient): Promise<void> {
// Upsert the system project
const project = await prisma.project.upsert({
where: { name: SYSTEM_PROJECT_NAME },
create: {
name: SYSTEM_PROJECT_NAME,
description: 'System prompts for mcpctl gating and session management',
prompt: '',
proxyMode: 'direct',
gated: false,
ownerId: SYSTEM_OWNER_ID,
},
update: {}, // Don't overwrite user edits to the project itself
});
// Upsert each system prompt (re-create if deleted, don't overwrite content if edited)
for (const def of SYSTEM_PROMPTS) {
const existing = await prisma.prompt.findFirst({
where: { name: def.name, projectId: project.id },
});
if (!existing) {
await prisma.prompt.create({
data: {
name: def.name,
content: def.content,
priority: def.priority,
projectId: project.id,
},
});
}
// If the prompt exists, don't overwrite — user may have edited it
}
}
/** Get the names of all system prompts (for delete protection). */
export function getSystemPromptNames(): string[] {
return SYSTEM_PROMPTS.map((p) => p.name);
}
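// The "re-create if deleted, never overwrite" contract of bootstrapSystemProject
// can be sketched with an in-memory store. The Map-based store and
// `ensurePrompts` helper are illustrative stand-ins; the real code goes through
// PrismaClient upserts.

```typescript
interface PromptRow { name: string; content: string; priority: number }

// Idempotent bootstrap pattern: create only when missing, so user edits survive.
function ensurePrompts(store: Map<string, PromptRow>, defs: PromptRow[]): void {
  for (const def of defs) {
    if (!store.has(def.name)) store.set(def.name, { ...def });
  }
}

const store = new Map<string, PromptRow>();
const defs = [{ name: 'gate-instructions', content: 'original', priority: 10 }];

ensurePrompts(store, defs); // first startup: created
store.get('gate-instructions')!.content = 'edited by user';
ensurePrompts(store, defs); // later startup: left untouched
console.log(store.get('gate-instructions')!.content); // → 'edited by user'
```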


@@ -18,6 +18,9 @@ import {
UserRepository,
GroupRepository,
} from './repositories/index.js';
import { PromptRepository } from './repositories/prompt.repository.js';
import { PromptRequestRepository } from './repositories/prompt-request.repository.js';
import { bootstrapSystemProject } from './bootstrap/system-project.js';
import {
McpServerService,
SecretService,
@@ -56,6 +59,8 @@ import {
registerUserRoutes,
registerGroupRoutes,
} from './routes/index.js';
import { registerPromptRoutes } from './routes/prompts.js';
import { PromptService } from './services/prompt.service.js';
type PermissionCheck =
| { kind: 'resource'; resource: string; action: RbacAction; resourceName?: string }
@@ -88,11 +93,38 @@ function mapUrlToPermission(method: string, url: string): PermissionCheck {
'rbac': 'rbac',
'audit-logs': 'rbac',
'mcp': 'servers',
'prompts': 'prompts',
'promptrequests': 'promptrequests',
};
const resource = resourceMap[segment];
if (resource === undefined) return { kind: 'skip' };
// Special case: /api/v1/promptrequests/:id/approve → needs both delete+promptrequests and create+prompts
// We check delete on promptrequests (the harder permission); create on prompts is checked in the service layer
const approveMatch = url.match(/^\/api\/v1\/promptrequests\/([^/?]+)\/approve/);
if (approveMatch?.[1]) {
return { kind: 'resource', resource: 'promptrequests', action: 'delete', resourceName: approveMatch[1] };
}
// Special case: /api/v1/projects/:name/prompts/visible → view prompts
const visiblePromptsMatch = url.match(/^\/api\/v1\/projects\/([^/?]+)\/prompts\/visible/);
if (visiblePromptsMatch?.[1]) {
return { kind: 'resource', resource: 'prompts', action: 'view' };
}
// Special case: /api/v1/projects/:name/promptrequests → create promptrequests
const projectPromptrequestsMatch = url.match(/^\/api\/v1\/projects\/([^/?]+)\/promptrequests/);
if (projectPromptrequestsMatch?.[1] && method === 'POST') {
return { kind: 'resource', resource: 'promptrequests', action: 'create' };
}
// Special case: /api/v1/projects/:id/instructions → view projects
const instructionsMatch = url.match(/^\/api\/v1\/projects\/([^/?]+)\/instructions/);
if (instructionsMatch?.[1]) {
return { kind: 'resource', resource: 'projects', action: 'view', resourceName: instructionsMatch[1] };
}
// Special case: /api/v1/projects/:id/mcp-config → requires 'expose' permission
const mcpConfigMatch = url.match(/^\/api\/v1\/projects\/([^/?]+)\/mcp-config/);
if (mcpConfigMatch?.[1]) {
@@ -204,6 +236,9 @@ async function main(): Promise<void> {
});
await seedTemplates(prisma, templates);
// Bootstrap system project and prompts
await bootstrapSystemProject(prisma);
// Repositories
const serverRepo = new McpServerRepository(prisma);
const secretRepo = new SecretRepository(prisma);
@@ -243,11 +278,14 @@ async function main(): Promise<void> {
const restoreService = new RestoreService(serverRepo, projectRepo, secretRepo, userRepo, groupRepo, rbacDefinitionRepo);
const authService = new AuthService(prisma);
const templateService = new TemplateService(templateRepo);
const mcpProxyService = new McpProxyService(instanceRepo, serverRepo);
const mcpProxyService = new McpProxyService(instanceRepo, serverRepo, orchestrator);
const rbacDefinitionService = new RbacDefinitionService(rbacDefinitionRepo);
const rbacService = new RbacService(rbacDefinitionRepo, prisma);
const userService = new UserService(userRepo);
const groupService = new GroupService(groupRepo, userRepo);
const promptRepo = new PromptRepository(prisma);
const promptRequestRepo = new PromptRequestRepository(prisma);
const promptService = new PromptService(promptRepo, promptRequestRepo, projectRepo);
// Auth middleware for global hooks
const authMiddleware = createAuthMiddleware({
@@ -294,9 +332,13 @@ async function main(): Promise<void> {
const check = mapUrlToPermission(request.method, url);
if (check.kind === 'skip') return;
// Extract service account identity from header (sent by mcplocal)
const saHeader = request.headers['x-service-account'];
const serviceAccountName = typeof saHeader === 'string' ? saHeader : undefined;
let allowed: boolean;
if (check.kind === 'operation') {
allowed = await rbacService.canRunOperation(request.userId, check.operation);
allowed = await rbacService.canRunOperation(request.userId, check.operation, serviceAccountName);
} else {
// Resolve CUID → human name for name-scoped RBAC bindings
if (check.resourceName !== undefined && CUID_RE.test(check.resourceName)) {
@@ -306,10 +348,10 @@ async function main(): Promise<void> {
if (entity) check.resourceName = entity.name;
}
}
allowed = await rbacService.canAccess(request.userId, check.action, check.resource, check.resourceName);
allowed = await rbacService.canAccess(request.userId, check.action, check.resource, check.resourceName, serviceAccountName);
// Compute scope for list filtering (used by preSerialization hook)
if (allowed && check.resourceName === undefined) {
request.rbacScope = await rbacService.getAllowedScope(request.userId, check.action, check.resource);
request.rbacScope = await rbacService.getAllowedScope(request.userId, check.action, check.resource, serviceAccountName);
}
}
if (!allowed) {
@@ -335,6 +377,7 @@ async function main(): Promise<void> {
registerRbacRoutes(app, rbacDefinitionService);
registerUserRoutes(app, userService);
registerGroupRoutes(app, groupService);
registerPromptRoutes(app, promptService, projectRepo);
// ── RBAC list filtering hook ──
// Filters array responses to only include resources the user is allowed to see.

View File

@@ -1,18 +1,18 @@
import type { PrismaClient, Project } from '@prisma/client';
export interface ProjectWithRelations extends Project {
servers: Array<{ id: string; server: { id: string; name: string } }>;
servers: Array<{ id: string; projectId: string; serverId: string; server: Record<string, unknown> & { id: string; name: string } }>;
}
const PROJECT_INCLUDE = {
servers: { include: { server: { select: { id: true, name: true } } } },
servers: { include: { server: true } },
} as const;
export interface IProjectRepository {
findAll(ownerId?: string): Promise<ProjectWithRelations[]>;
findById(id: string): Promise<ProjectWithRelations | null>;
findByName(name: string): Promise<ProjectWithRelations | null>;
create(data: { name: string; description: string; ownerId: string; proxyMode: string; llmProvider?: string; llmModel?: string }): Promise<ProjectWithRelations>;
create(data: { name: string; description: string; prompt?: string; ownerId: string; proxyMode: string; llmProvider?: string; llmModel?: string }): Promise<ProjectWithRelations>;
update(id: string, data: Record<string, unknown>): Promise<ProjectWithRelations>;
delete(id: string): Promise<void>;
setServers(projectId: string, serverIds: string[]): Promise<void>;
@@ -36,13 +36,14 @@ export class ProjectRepository implements IProjectRepository {
return this.prisma.project.findUnique({ where: { name }, include: PROJECT_INCLUDE }) as unknown as Promise<ProjectWithRelations | null>;
}
async create(data: { name: string; description: string; ownerId: string; proxyMode: string; llmProvider?: string; llmModel?: string }): Promise<ProjectWithRelations> {
async create(data: { name: string; description: string; prompt?: string; ownerId: string; proxyMode: string; llmProvider?: string; llmModel?: string }): Promise<ProjectWithRelations> {
const createData: Record<string, unknown> = {
name: data.name,
description: data.description,
ownerId: data.ownerId,
proxyMode: data.proxyMode,
};
if (data.prompt !== undefined) createData['prompt'] = data.prompt;
if (data.llmProvider !== undefined) createData['llmProvider'] = data.llmProvider;
if (data.llmModel !== undefined) createData['llmModel'] = data.llmModel;

View File

@@ -0,0 +1,69 @@
import type { PrismaClient, PromptRequest } from '@prisma/client';
export interface IPromptRequestRepository {
findAll(projectId?: string): Promise<PromptRequest[]>;
findGlobal(): Promise<PromptRequest[]>;
findById(id: string): Promise<PromptRequest | null>;
findByNameAndProject(name: string, projectId: string | null): Promise<PromptRequest | null>;
findBySession(sessionId: string, projectId?: string): Promise<PromptRequest[]>;
create(data: { name: string; content: string; projectId?: string; priority?: number; createdBySession?: string; createdByUserId?: string }): Promise<PromptRequest>;
update(id: string, data: { content?: string; priority?: number }): Promise<PromptRequest>;
delete(id: string): Promise<void>;
}
export class PromptRequestRepository implements IPromptRequestRepository {
constructor(private readonly prisma: PrismaClient) {}
async findAll(projectId?: string): Promise<PromptRequest[]> {
const include = { project: { select: { name: true } } };
if (projectId !== undefined) {
return this.prisma.promptRequest.findMany({
where: { OR: [{ projectId }, { projectId: null }] },
include,
orderBy: { createdAt: 'desc' },
});
}
return this.prisma.promptRequest.findMany({ include, orderBy: { createdAt: 'desc' } });
}
async findGlobal(): Promise<PromptRequest[]> {
return this.prisma.promptRequest.findMany({
where: { projectId: null },
include: { project: { select: { name: true } } },
orderBy: { createdAt: 'desc' },
});
}
async findById(id: string): Promise<PromptRequest | null> {
return this.prisma.promptRequest.findUnique({ where: { id } });
}
async findByNameAndProject(name: string, projectId: string | null): Promise<PromptRequest | null> {
// Compound-unique lookups never match NULL columns, so global requests need findFirst
if (projectId === null) {
return this.prisma.promptRequest.findFirst({ where: { name, projectId: null } });
}
return this.prisma.promptRequest.findUnique({
where: { name_projectId: { name, projectId } },
});
}
async findBySession(sessionId: string, projectId?: string): Promise<PromptRequest[]> {
const where: Record<string, unknown> = { createdBySession: sessionId };
if (projectId !== undefined) {
where['OR'] = [{ projectId }, { projectId: null }];
}
return this.prisma.promptRequest.findMany({
where,
orderBy: { createdAt: 'desc' },
});
}
async create(data: { name: string; content: string; projectId?: string; priority?: number; createdBySession?: string; createdByUserId?: string }): Promise<PromptRequest> {
return this.prisma.promptRequest.create({ data });
}
async update(id: string, data: { content?: string; priority?: number }): Promise<PromptRequest> {
return this.prisma.promptRequest.update({ where: { id }, data });
}
async delete(id: string): Promise<void> {
await this.prisma.promptRequest.delete({ where: { id } });
}
}

View File

@@ -0,0 +1,58 @@
import type { PrismaClient, Prompt } from '@prisma/client';
export interface IPromptRepository {
findAll(projectId?: string): Promise<Prompt[]>;
findGlobal(): Promise<Prompt[]>;
findById(id: string): Promise<Prompt | null>;
findByNameAndProject(name: string, projectId: string | null): Promise<Prompt | null>;
create(data: { name: string; content: string; projectId?: string; priority?: number; linkTarget?: string }): Promise<Prompt>;
update(id: string, data: { content?: string; priority?: number; summary?: string; chapters?: string[] }): Promise<Prompt>;
delete(id: string): Promise<void>;
}
export class PromptRepository implements IPromptRepository {
constructor(private readonly prisma: PrismaClient) {}
async findAll(projectId?: string): Promise<Prompt[]> {
const include = { project: { select: { name: true } } };
if (projectId !== undefined) {
// Project-scoped + global prompts
return this.prisma.prompt.findMany({
where: { OR: [{ projectId }, { projectId: null }] },
include,
orderBy: { name: 'asc' },
});
}
return this.prisma.prompt.findMany({ include, orderBy: { name: 'asc' } });
}
async findGlobal(): Promise<Prompt[]> {
return this.prisma.prompt.findMany({
where: { projectId: null },
include: { project: { select: { name: true } } },
orderBy: { name: 'asc' },
});
}
async findById(id: string): Promise<Prompt | null> {
return this.prisma.prompt.findUnique({ where: { id } });
}
async findByNameAndProject(name: string, projectId: string | null): Promise<Prompt | null> {
// Compound-unique lookups never match NULL columns, so global prompts need findFirst
if (projectId === null) {
return this.prisma.prompt.findFirst({ where: { name, projectId: null } });
}
return this.prisma.prompt.findUnique({
where: { name_projectId: { name, projectId } },
});
}
async create(data: { name: string; content: string; projectId?: string; priority?: number; linkTarget?: string }): Promise<Prompt> {
return this.prisma.prompt.create({ data });
}
async update(id: string, data: { content?: string; priority?: number; summary?: string; chapters?: string[] }): Promise<Prompt> {
return this.prisma.prompt.update({ where: { id }, data });
}
async delete(id: string): Promise<void> {
await this.prisma.prompt.delete({ where: { id } });
}
}

View File

@@ -54,4 +54,16 @@ export function registerProjectRoutes(app: FastifyInstance, service: ProjectServ
const project = await service.resolveAndGet(request.params.id);
return project.servers.map((ps) => ps.server);
});
// Get project instructions for LLM (prompt + server list)
app.get<{ Params: { id: string } }>('/api/v1/projects/:id/instructions', async (request) => {
const project = await service.resolveAndGet(request.params.id);
return {
prompt: project.prompt,
servers: project.servers.map((ps) => ({
name: (ps.server as Record<string, unknown>).name as string,
description: (ps.server as Record<string, unknown>).description as string,
})),
};
});
}

View File

@@ -0,0 +1,207 @@
import type { FastifyInstance } from 'fastify';
import type { Prompt } from '@prisma/client';
import type { PromptService } from '../services/prompt.service.js';
import type { IProjectRepository, ProjectWithRelations } from '../repositories/project.repository.js';
type PromptWithLinkStatus = Prompt & { linkStatus: 'alive' | 'dead' | null };
/**
* Enrich prompts with linkStatus by checking if the target project/server exists.
* This is a structural check (does the target exist?) — not a runtime probe.
*/
async function enrichWithLinkStatus(
prompts: Prompt[],
projectRepo: IProjectRepository,
): Promise<PromptWithLinkStatus[]> {
// Cache project lookups to avoid repeated DB queries
const projectCache = new Map<string, ProjectWithRelations | null>();
const results: PromptWithLinkStatus[] = [];
for (const p of prompts) {
if (!p.linkTarget) {
results.push({ ...p, linkStatus: null } as PromptWithLinkStatus);
continue;
}
try {
// Parse: project/server:uri
const slashIdx = p.linkTarget.indexOf('/');
if (slashIdx < 1) { results.push({ ...p, linkStatus: 'dead' as const }); continue; }
const projectName = p.linkTarget.slice(0, slashIdx);
const rest = p.linkTarget.slice(slashIdx + 1);
const colonIdx = rest.indexOf(':');
if (colonIdx < 1) { results.push({ ...p, linkStatus: 'dead' as const }); continue; }
const serverName = rest.slice(0, colonIdx);
// Check if project exists (cached)
if (!projectCache.has(projectName)) {
projectCache.set(projectName, await projectRepo.findByName(projectName));
}
const project = projectCache.get(projectName);
if (!project) { results.push({ ...p, linkStatus: 'dead' as const }); continue; }
// Check if server is linked to that project
const hasServer = project.servers.some((s) => s.server.name === serverName);
results.push({ ...p, linkStatus: hasServer ? 'alive' as const : 'dead' as const });
} catch {
results.push({ ...p, linkStatus: 'dead' as const });
}
}
return results;
}
export function registerPromptRoutes(
app: FastifyInstance,
service: PromptService,
projectRepo: IProjectRepository,
): void {
// ── Prompts (approved) ──
app.get<{ Querystring: { project?: string; scope?: string; projectId?: string } }>('/api/v1/prompts', async (request) => {
let prompts: Prompt[];
const projectName = request.query.project;
if (projectName) {
const project = await projectRepo.findByName(projectName);
if (!project) {
throw Object.assign(new Error(`Project not found: ${projectName}`), { statusCode: 404 });
}
prompts = await service.listPrompts(project.id);
} else if (request.query.projectId) {
prompts = await service.listPrompts(request.query.projectId);
} else if (request.query.scope === 'global') {
prompts = await service.listGlobalPrompts();
} else {
prompts = await service.listPrompts();
}
return enrichWithLinkStatus(prompts, projectRepo);
});
app.get<{ Params: { id: string } }>('/api/v1/prompts/:id', async (request) => {
const prompt = await service.getPrompt(request.params.id);
const [enriched] = await enrichWithLinkStatus([prompt], projectRepo);
return enriched;
});
app.post('/api/v1/prompts', async (request, reply) => {
const prompt = await service.createPrompt(request.body);
reply.code(201);
return prompt;
});
app.put<{ Params: { id: string } }>('/api/v1/prompts/:id', async (request) => {
return service.updatePrompt(request.params.id, request.body);
});
app.delete<{ Params: { id: string } }>('/api/v1/prompts/:id', async (request, reply) => {
await service.deletePrompt(request.params.id);
reply.code(204);
});
// ── Prompt Requests (pending proposals) ──
app.get<{ Querystring: { project?: string; scope?: string } }>('/api/v1/promptrequests', async (request) => {
const projectName = request.query.project;
if (projectName) {
const project = await projectRepo.findByName(projectName);
if (!project) {
throw Object.assign(new Error(`Project not found: ${projectName}`), { statusCode: 404 });
}
return service.listPromptRequests(project.id);
}
if (request.query.scope === 'global') {
return service.listGlobalPromptRequests();
}
return service.listPromptRequests();
});
app.get<{ Params: { id: string } }>('/api/v1/promptrequests/:id', async (request) => {
return service.getPromptRequest(request.params.id);
});
app.put<{ Params: { id: string } }>('/api/v1/promptrequests/:id', async (request) => {
return service.updatePromptRequest(request.params.id, request.body);
});
app.delete<{ Params: { id: string } }>('/api/v1/promptrequests/:id', async (request, reply) => {
await service.deletePromptRequest(request.params.id);
reply.code(204);
});
app.post('/api/v1/promptrequests', async (request, reply) => {
const body = request.body as Record<string, unknown>;
// Resolve project name → ID if provided
if (body.project && typeof body.project === 'string') {
const project = await projectRepo.findByName(body.project);
if (!project) {
throw Object.assign(new Error(`Project not found: ${body.project}`), { statusCode: 404 });
}
const { project: _, ...rest } = body;
const req = await service.propose({ ...rest, projectId: project.id });
reply.code(201);
return req;
}
const req = await service.propose(body);
reply.code(201);
return req;
});
// Approve: atomic delete request → create prompt
app.post<{ Params: { id: string } }>('/api/v1/promptrequests/:id/approve', async (request) => {
return service.approve(request.params.id);
});
// Regenerate summary/chapters for a prompt
app.post<{ Params: { id: string } }>('/api/v1/prompts/:id/regenerate-summary', async (request) => {
return service.regenerateSummary(request.params.id);
});
// Compact prompt index for gating LLM (name, priority, summary, chapters)
app.get<{ Params: { name: string } }>('/api/v1/projects/:name/prompt-index', async (request) => {
const project = await projectRepo.findByName(request.params.name);
if (!project) {
throw Object.assign(new Error(`Project not found: ${request.params.name}`), { statusCode: 404 });
}
const prompts = await service.listPrompts(project.id);
return prompts.map((p) => ({
name: p.name,
priority: p.priority,
summary: p.summary,
chapters: p.chapters,
linkTarget: p.linkTarget,
}));
});
// ── Project-scoped endpoints (for mcplocal) ──
// Visible prompts: approved + session's pending requests
app.get<{ Params: { name: string }; Querystring: { session?: string } }>(
'/api/v1/projects/:name/prompts/visible',
async (request) => {
const project = await projectRepo.findByName(request.params.name);
if (!project) {
throw Object.assign(new Error(`Project not found: ${request.params.name}`), { statusCode: 404 });
}
return service.getVisiblePrompts(project.id, request.query.session);
},
);
// LLM propose: create a PromptRequest for a project
app.post<{ Params: { name: string } }>(
'/api/v1/projects/:name/promptrequests',
async (request, reply) => {
const project = await projectRepo.findByName(request.params.name);
if (!project) {
throw Object.assign(new Error(`Project not found: ${request.params.name}`), { statusCode: 404 });
}
const body = request.body as Record<string, unknown>;
const req = await service.propose({
...body,
projectId: project.id,
});
reply.code(201);
return req;
},
);
}
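The `project/server:uri` parsing embedded in enrichWithLinkStatus could be factored into a standalone helper; a sketch mirroring the same index arithmetic (`parseLinkTarget` is a hypothetical name, not part of this changeset):

```typescript
// Parse a linkTarget of the shape "project/server:uri".
// Returns null for anything malformed (empty project or server segment).
function parseLinkTarget(target: string): { project: string; server: string; uri: string } | null {
  const slashIdx = target.indexOf('/');
  if (slashIdx < 1) return null; // no project segment
  const project = target.slice(0, slashIdx);
  const rest = target.slice(slashIdx + 1);
  const colonIdx = rest.indexOf(':');
  if (colonIdx < 1) return null; // no server segment
  return { project, server: rest.slice(0, colonIdx), uri: rest.slice(colonIdx + 1) };
}
```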

View File

@@ -1,7 +1,10 @@
import type { McpInstance } from '@prisma/client';
import type { McpInstance, McpServer } from '@prisma/client';
import type { IMcpInstanceRepository, IMcpServerRepository } from '../repositories/interfaces.js';
import type { McpOrchestrator } from './orchestrator.js';
import { NotFoundError } from './mcp-server.service.js';
import { InvalidStateError } from './instance.service.js';
import { sendViaSse } from './transport/sse-client.js';
import { sendViaStdio } from './transport/stdio-client.js';
export interface McpProxyRequest {
serverId: string;
@@ -38,17 +41,21 @@ export class McpProxyService {
constructor(
private readonly instanceRepo: IMcpInstanceRepository,
private readonly serverRepo: IMcpServerRepository,
private readonly orchestrator?: McpOrchestrator,
) {}
async execute(request: McpProxyRequest): Promise<McpProxyResponse> {
const server = await this.serverRepo.findById(request.serverId);
// External server: proxy directly to externalUrl
if (server?.externalUrl) {
return this.sendToExternal(server.id, server.externalUrl, request.method, request.params);
if (!server) {
throw new NotFoundError(`Server '${request.serverId}' not found`);
}
// Managed server: find running instance
// External server: proxy directly to externalUrl
if (server.externalUrl) {
return this.sendToExternal(server, request.method, request.params);
}
// Managed server: find running instance and dispatch by transport
const instances = await this.instanceRepo.findAll(request.serverId);
const running = instances.find((i) => i.status === 'RUNNING');
@@ -56,20 +63,95 @@ export class McpProxyService {
throw new NotFoundError(`No running instance found for server '${request.serverId}'`);
}
if (running.port === null || running.port === undefined) {
throw new InvalidStateError(
`Running instance '${running.id}' for server '${request.serverId}' has no port assigned`,
);
}
return this.sendJsonRpc(running, request.method, request.params);
return this.sendToManaged(server, running, request.method, request.params);
}
/**
* Send a JSON-RPC request to an external MCP server.
* Handles streamable-http protocol (session management + SSE response parsing).
* Send to an external MCP server. Dispatches based on transport type.
*/
private async sendToExternal(
server: McpServer,
method: string,
params?: Record<string, unknown>,
): Promise<McpProxyResponse> {
const url = server.externalUrl as string;
if (server.transport === 'SSE') {
return sendViaSse(url, method, params);
}
// STREAMABLE_HTTP (default for external)
return this.sendStreamableHttp(server.id, url, method, params);
}
/**
* Send to a managed (containerized) MCP server. Dispatches based on transport type.
*/
private async sendToManaged(
server: McpServer,
instance: McpInstance,
method: string,
params?: Record<string, unknown>,
): Promise<McpProxyResponse> {
const transport = server.transport as string;
// STDIO: use docker exec
if (transport === 'STDIO') {
if (!this.orchestrator) {
throw new InvalidStateError('Orchestrator required for STDIO transport');
}
if (!instance.containerId) {
throw new InvalidStateError(`Instance '${instance.id}' has no container ID`);
}
const packageName = server.packageName as string | null;
if (!packageName) {
throw new InvalidStateError(`Server '${server.id}' has no package name for STDIO transport`);
}
return sendViaStdio(this.orchestrator, instance.containerId, packageName, method, params);
}
// SSE or STREAMABLE_HTTP: need a base URL
const baseUrl = await this.resolveBaseUrl(instance, server);
if (transport === 'SSE') {
return sendViaSse(baseUrl, method, params);
}
// STREAMABLE_HTTP (default)
return this.sendStreamableHttp(server.id, baseUrl, method, params);
}
/**
* Resolve the base URL for an HTTP-based managed server.
* Prefers container internal IP on Docker network, falls back to localhost:port.
*/
private async resolveBaseUrl(instance: McpInstance, server: McpServer): Promise<string> {
const containerPort = (server.containerPort as number | null) ?? 3000;
if (this.orchestrator && instance.containerId) {
try {
const containerInfo = await this.orchestrator.inspectContainer(instance.containerId);
if (containerInfo.ip) {
return `http://${containerInfo.ip}:${containerPort}`;
}
} catch {
// Fall through to localhost
}
}
if (instance.port !== null && instance.port !== undefined) {
return `http://localhost:${instance.port}`;
}
throw new InvalidStateError(
`Cannot resolve URL for instance '${instance.id}': no container IP or host port`,
);
}
/**
* Send via streamable-http protocol with session management.
*/
private async sendStreamableHttp(
serverId: string,
url: string,
method: string,
@@ -109,14 +191,14 @@ export class McpProxyService {
// Session expired? Clear and retry once
if (response.status === 400 || response.status === 404) {
this.sessions.delete(serverId);
return this.sendToExternal(serverId, url, method, params);
return this.sendStreamableHttp(serverId, url, method, params);
}
return {
jsonrpc: '2.0',
id: 1,
error: {
code: -32000,
message: `External MCP server returned HTTP ${response.status}: ${response.statusText}`,
message: `MCP server returned HTTP ${response.status}: ${response.statusText}`,
},
};
}
@@ -126,8 +208,7 @@ export class McpProxyService {
}
/**
* Initialize a streamable-http session with an external server.
* Sends `initialize` and `notifications/initialized`, caches the session ID.
* Initialize a streamable-http session with a server.
*/
private async initSession(serverId: string, url: string): Promise<void> {
const initBody = {
@@ -174,41 +255,4 @@ export class McpProxyService {
body: JSON.stringify({ jsonrpc: '2.0', method: 'notifications/initialized' }),
});
}
private async sendJsonRpc(
instance: McpInstance,
method: string,
params?: Record<string, unknown>,
): Promise<McpProxyResponse> {
const url = `http://localhost:${instance.port}`;
const body: Record<string, unknown> = {
jsonrpc: '2.0',
id: 1,
method,
};
if (params !== undefined) {
body.params = params;
}
const response = await fetch(url, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body),
});
if (!response.ok) {
return {
jsonrpc: '2.0',
id: 1,
error: {
code: -32000,
message: `MCP server returned HTTP ${response.status}: ${response.statusText}`,
},
};
}
const result = (await response.json()) as McpProxyResponse;
return result;
}
}

View File

@@ -53,8 +53,10 @@ export class ProjectService {
const project = await this.projectRepo.create({
name: data.name,
description: data.description,
prompt: data.prompt,
ownerId,
proxyMode: data.proxyMode,
gated: data.gated,
...(data.llmProvider !== undefined ? { llmProvider: data.llmProvider } : {}),
...(data.llmModel !== undefined ? { llmModel: data.llmModel } : {}),
});
@@ -75,9 +77,11 @@ export class ProjectService {
// Build update data for scalar fields
const updateData: Record<string, unknown> = {};
if (data.description !== undefined) updateData['description'] = data.description;
if (data.prompt !== undefined) updateData['prompt'] = data.prompt;
if (data.proxyMode !== undefined) updateData['proxyMode'] = data.proxyMode;
if (data.llmProvider !== undefined) updateData['llmProvider'] = data.llmProvider;
if (data.llmModel !== undefined) updateData['llmModel'] = data.llmModel;
if (data.gated !== undefined) updateData['gated'] = data.gated;
// Update scalar fields if any changed
if (Object.keys(updateData).length > 0) {

View File

@@ -0,0 +1,96 @@
/**
* Generates summary and chapters for prompt content.
*
* Uses regex-based extraction by default (first sentence + markdown headings).
* An optional LLM generator can be injected for higher-quality summaries.
*/
const MAX_SUMMARY_WORDS = 20;
const HEADING_RE = /^#{1,6}\s+(.+)$/gm;
export interface LlmSummaryGenerator {
generate(content: string): Promise<{ summary: string; chapters: string[] }>;
}
export class PromptSummaryService {
constructor(private readonly llmGenerator: LlmSummaryGenerator | null = null) {}
async generateSummary(content: string): Promise<{ summary: string; chapters: string[] }> {
if (this.llmGenerator) {
try {
return await this.llmGenerator.generate(content);
} catch {
// Fall back to regex on LLM failure
}
}
return this.generateWithRegex(content);
}
generateWithRegex(content: string): { summary: string; chapters: string[] } {
return {
summary: extractFirstSentence(content, MAX_SUMMARY_WORDS),
chapters: extractHeadings(content),
};
}
}
/**
* Extract the first sentence, truncated to maxWords.
* Strips markdown formatting.
*/
export function extractFirstSentence(content: string, maxWords: number): string {
if (!content.trim()) return '';
// Skip leading headings and blank lines to find first content line
const lines = content.split('\n');
let firstContent = '';
for (const line of lines) {
const trimmed = line.trim();
if (!trimmed) continue;
if (trimmed.startsWith('#')) continue;
firstContent = trimmed;
break;
}
if (!firstContent) {
// All lines are headings or empty — use first heading text
for (const line of lines) {
const trimmed = line.trim();
if (trimmed.startsWith('#')) {
firstContent = trimmed.replace(/^#+\s*/, '');
break;
}
}
}
if (!firstContent) return '';
// Strip basic markdown formatting
firstContent = firstContent
.replace(/\*\*(.+?)\*\*/g, '$1')
.replace(/\*(.+?)\*/g, '$1')
.replace(/`(.+?)`/g, '$1')
.replace(/\[(.+?)\]\(.+?\)/g, '$1');
// Split on sentence boundaries
const sentenceEnd = firstContent.search(/[.!?]\s|[.!?]$/);
const sentence = sentenceEnd >= 0 ? firstContent.slice(0, sentenceEnd + 1) : firstContent;
// Truncate to maxWords
const words = sentence.split(/\s+/);
if (words.length <= maxWords) return sentence;
return words.slice(0, maxWords).join(' ') + '...';
}
/**
* Extract markdown headings as chapter titles.
*/
export function extractHeadings(content: string): string[] {
const headings: string[] = [];
let match: RegExpExecArray | null;
while ((match = HEADING_RE.exec(content)) !== null) {
const heading = match[1]!.trim();
if (heading) headings.push(heading);
}
return headings;
}
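The regex fallback path can be condensed into a self-contained sketch (simplified: only bold and inline-code stripping shown; `quickSummary` is a hypothetical name, not the service's API):

```typescript
// First non-heading line, markdown stripped, cut at the first sentence boundary,
// truncated to maxWords — the same shape as the regex fallback above.
function quickSummary(content: string, maxWords = 20): string {
  const line = content
    .split('\n')
    .map((l) => l.trim())
    .find((l) => l !== '' && !l.startsWith('#')) ?? '';
  const plain = line
    .replace(/\*\*(.+?)\*\*/g, '$1')
    .replace(/`(.+?)`/g, '$1');
  const end = plain.search(/[.!?]\s|[.!?]$/);
  const sentence = end >= 0 ? plain.slice(0, end + 1) : plain;
  const words = sentence.split(/\s+/);
  return words.length <= maxWords ? sentence : words.slice(0, maxWords).join(' ') + '...';
}

quickSummary('# Setup\nPair the **Zigbee** dongle first. Then flash it.');
// → 'Pair the Zigbee dongle first.'
```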

View File

@@ -0,0 +1,198 @@
import type { Prompt, PromptRequest } from '@prisma/client';
import type { IPromptRepository } from '../repositories/prompt.repository.js';
import type { IPromptRequestRepository } from '../repositories/prompt-request.repository.js';
import type { IProjectRepository } from '../repositories/project.repository.js';
import { CreatePromptSchema, UpdatePromptSchema, CreatePromptRequestSchema, UpdatePromptRequestSchema } from '../validation/prompt.schema.js';
import { NotFoundError } from './mcp-server.service.js';
import type { PromptSummaryService } from './prompt-summary.service.js';
import { SYSTEM_PROJECT_NAME } from '../bootstrap/system-project.js';
export class PromptService {
private summaryService: PromptSummaryService | null = null;
constructor(
private readonly promptRepo: IPromptRepository,
private readonly promptRequestRepo: IPromptRequestRepository,
private readonly projectRepo: IProjectRepository,
) {}
setSummaryService(service: PromptSummaryService): void {
this.summaryService = service;
}
// ── Prompt CRUD ──
async listPrompts(projectId?: string): Promise<Prompt[]> {
return this.promptRepo.findAll(projectId);
}
async listGlobalPrompts(): Promise<Prompt[]> {
return this.promptRepo.findGlobal();
}
async getPrompt(id: string): Promise<Prompt> {
const prompt = await this.promptRepo.findById(id);
if (prompt === null) throw new NotFoundError(`Prompt not found: ${id}`);
return prompt;
}
async createPrompt(input: unknown): Promise<Prompt> {
const data = CreatePromptSchema.parse(input);
if (data.projectId) {
const project = await this.projectRepo.findById(data.projectId);
if (project === null) throw new NotFoundError(`Project not found: ${data.projectId}`);
}
const createData: { name: string; content: string; projectId?: string; priority?: number; linkTarget?: string } = {
name: data.name,
content: data.content,
};
if (data.projectId !== undefined) createData.projectId = data.projectId;
if (data.priority !== undefined) createData.priority = data.priority;
if (data.linkTarget !== undefined) createData.linkTarget = data.linkTarget;
const prompt = await this.promptRepo.create(createData);
// Auto-generate summary/chapters (non-blocking — don't fail create if summary fails)
if (this.summaryService && !data.linkTarget) {
this.generateAndStoreSummary(prompt.id, data.content).catch(() => {});
}
return prompt;
}
async updatePrompt(id: string, input: unknown): Promise<Prompt> {
const data = UpdatePromptSchema.parse(input);
await this.getPrompt(id);
const updateData: { content?: string; priority?: number } = {};
if (data.content !== undefined) updateData.content = data.content;
if (data.priority !== undefined) updateData.priority = data.priority;
const prompt = await this.promptRepo.update(id, updateData);
// Regenerate summary when content changes
if (this.summaryService && data.content !== undefined && !prompt.linkTarget) {
this.generateAndStoreSummary(prompt.id, data.content).catch(() => {});
}
return prompt;
}
async regenerateSummary(id: string): Promise<Prompt> {
const prompt = await this.getPrompt(id);
if (!this.summaryService) {
throw new Error('Summary generation not available');
}
return this.generateAndStoreSummary(prompt.id, prompt.content);
}
private async generateAndStoreSummary(id: string, content: string): Promise<Prompt> {
if (!this.summaryService) throw new Error('No summary service');
const { summary, chapters } = await this.summaryService.generateSummary(content);
return this.promptRepo.update(id, { summary, chapters });
}
async deletePrompt(id: string): Promise<void> {
const prompt = await this.getPrompt(id);
// Protect system prompts from deletion
if (prompt.projectId) {
const project = await this.projectRepo.findById(prompt.projectId);
if (project?.name === SYSTEM_PROJECT_NAME) {
throw Object.assign(new Error('Cannot delete system prompts'), { statusCode: 403 });
}
}
await this.promptRepo.delete(id);
}
// ── PromptRequest CRUD ──
async listPromptRequests(projectId?: string): Promise<PromptRequest[]> {
return this.promptRequestRepo.findAll(projectId);
}
async listGlobalPromptRequests(): Promise<PromptRequest[]> {
return this.promptRequestRepo.findGlobal();
}
async getPromptRequest(id: string): Promise<PromptRequest> {
const req = await this.promptRequestRepo.findById(id);
if (req === null) throw new NotFoundError(`PromptRequest not found: ${id}`);
return req;
}
async updatePromptRequest(id: string, input: unknown): Promise<PromptRequest> {
await this.getPromptRequest(id);
const data = UpdatePromptRequestSchema.parse(input);
const updateData: { content?: string; priority?: number } = {};
if (data.content !== undefined) updateData.content = data.content;
if (data.priority !== undefined) updateData.priority = data.priority;
return this.promptRequestRepo.update(id, updateData);
}
async deletePromptRequest(id: string): Promise<void> {
await this.getPromptRequest(id);
await this.promptRequestRepo.delete(id);
}
// ── Propose (LLM creates a PromptRequest) ──
async propose(input: unknown): Promise<PromptRequest> {
const data = CreatePromptRequestSchema.parse(input);
if (data.projectId) {
const project = await this.projectRepo.findById(data.projectId);
if (project === null) throw new NotFoundError(`Project not found: ${data.projectId}`);
}
const createData: { name: string; content: string; projectId?: string; priority?: number; createdBySession?: string; createdByUserId?: string } = {
name: data.name,
content: data.content,
};
if (data.projectId !== undefined) createData.projectId = data.projectId;
if (data.priority !== undefined) createData.priority = data.priority;
if (data.createdBySession !== undefined) createData.createdBySession = data.createdBySession;
if (data.createdByUserId !== undefined) createData.createdByUserId = data.createdByUserId;
return this.promptRequestRepo.create(createData);
}
// ── Approve (delete PromptRequest → create Prompt) ──
async approve(requestId: string): Promise<Prompt> {
const req = await this.getPromptRequest(requestId);
// Create the approved prompt (carry priority from request)
const createData: { name: string; content: string; projectId?: string; priority?: number } = {
name: req.name,
content: req.content,
};
if (req.projectId !== null) createData.projectId = req.projectId;
if (req.priority !== 5) createData.priority = req.priority;
const prompt = await this.promptRepo.create(createData);
// Delete the request
await this.promptRequestRepo.delete(requestId);
return prompt;
}
// ── Visibility for MCP (approved prompts + session's pending requests) ──
async getVisiblePrompts(
projectId?: string,
sessionId?: string,
): Promise<Array<{ name: string; content: string; type: 'prompt' | 'promptrequest' }>> {
const results: Array<{ name: string; content: string; type: 'prompt' | 'promptrequest' }> = [];
// Approved prompts (project-scoped + global)
const prompts = await this.promptRepo.findAll(projectId);
for (const p of prompts) {
results.push({ name: p.name, content: p.content, type: 'prompt' });
}
// Session's own pending requests
if (sessionId) {
const requests = await this.promptRequestRepo.findBySession(sessionId, projectId);
for (const r of requests) {
results.push({ name: r.name, content: r.content, type: 'promptrequest' });
}
}
return results;
}
}
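The `approve()` method above copies the request's priority onto the new prompt only when it differs from the default of 5, letting the repository apply the default otherwise. A minimal standalone sketch of that create-data rule (the `ReqLike` shape here is illustrative, not the Prisma type):

```typescript
// Illustrative sketch of approve()'s create-data rule: optional fields are
// copied only when they carry non-default information.
interface ReqLike {
  name: string;
  content: string;
  projectId: string | null;
  priority: number;
}

function buildApproveData(req: ReqLike): {
  name: string;
  content: string;
  projectId?: string;
  priority?: number;
} {
  const data: { name: string; content: string; projectId?: string; priority?: number } = {
    name: req.name,
    content: req.content,
  };
  if (req.projectId !== null) data.projectId = req.projectId;
  if (req.priority !== 5) data.priority = req.priority; // 5 is the schema default
  return data;
}
```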


@@ -50,8 +50,8 @@ export class RbacService {
* If provided, name-scoped bindings only match when their name equals this.
* If omitted (listing), name-scoped bindings still grant access.
*/
async canAccess(userId: string, action: RbacAction, resource: string, resourceName?: string): Promise<boolean> {
const permissions = await this.getPermissions(userId);
async canAccess(userId: string, action: RbacAction, resource: string, resourceName?: string, serviceAccountName?: string): Promise<boolean> {
const permissions = await this.getPermissions(userId, serviceAccountName);
const normalized = normalizeResource(resource);
for (const perm of permissions) {
@@ -73,8 +73,8 @@ export class RbacService {
* Check whether a user is allowed to perform a named operation.
* Operations require an explicit 'run' role binding with a matching action.
*/
async canRunOperation(userId: string, operation: string): Promise<boolean> {
const permissions = await this.getPermissions(userId);
async canRunOperation(userId: string, operation: string, serviceAccountName?: string): Promise<boolean> {
const permissions = await this.getPermissions(userId, serviceAccountName);
for (const perm of permissions) {
if ('action' in perm && perm.role === 'run' && perm.action === operation) {
@@ -90,8 +90,8 @@ export class RbacService {
* Returns wildcard:true if any matching binding is unscoped (no name constraint).
* Returns wildcard:false with a set of allowed names if all bindings are name-scoped.
*/
async getAllowedScope(userId: string, action: RbacAction, resource: string): Promise<AllowedScope> {
const permissions = await this.getPermissions(userId);
async getAllowedScope(userId: string, action: RbacAction, resource: string, serviceAccountName?: string): Promise<AllowedScope> {
const permissions = await this.getPermissions(userId, serviceAccountName);
const normalized = normalizeResource(resource);
const names = new Set<string>();
@@ -113,31 +113,35 @@ export class RbacService {
/**
* Collect all permissions for a user across all matching RbacDefinitions.
*/
async getPermissions(userId: string): Promise<Permission[]> {
async getPermissions(userId: string, serviceAccountName?: string): Promise<Permission[]> {
// 1. Resolve user email
const user = await this.prisma.user.findUnique({
where: { id: userId },
select: { email: true },
});
if (user === null) return [];
if (user === null && serviceAccountName === undefined) return [];
// 2. Resolve group names the user belongs to
const memberships = await this.prisma.groupMember.findMany({
where: { userId },
select: { group: { select: { name: true } } },
});
const groupNames = memberships.map((m) => m.group.name);
let groupNames: string[] = [];
if (user !== null) {
const memberships = await this.prisma.groupMember.findMany({
where: { userId },
select: { group: { select: { name: true } } },
});
groupNames = memberships.map((m) => m.group.name);
}
// 3. Load all RbacDefinitions
const definitions = await this.rbacRepo.findAll();
// 4. Find definitions where user is a subject
// 4. Find definitions where user or service account is a subject
const permissions: Permission[] = [];
for (const def of definitions) {
const subjects = def.subjects as RbacSubject[];
const matched = subjects.some((s) => {
if (s.kind === 'User') return s.name === user.email;
if (s.kind === 'User') return user !== null && s.name === user.email;
if (s.kind === 'Group') return groupNames.includes(s.name);
if (s.kind === 'ServiceAccount') return serviceAccountName !== undefined && s.name === serviceAccountName;
return false;
});
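The subject-matching rule in the hunk above can be sketched as a pure function (the `Subject` shape and the names below are illustrative, not the project's actual types):

```typescript
// Sketch of RBAC subject matching: a User subject needs a resolved email,
// a Group subject needs membership, and a ServiceAccount subject needs the
// caller's service-account name to match.
type Subject = { kind: 'User' | 'Group' | 'ServiceAccount'; name: string };

function subjectMatches(
  s: Subject,
  userEmail: string | null,
  groupNames: string[],
  serviceAccountName?: string,
): boolean {
  if (s.kind === 'User') return userEmail !== null && s.name === userEmail;
  if (s.kind === 'Group') return groupNames.includes(s.name);
  if (s.kind === 'ServiceAccount') {
    return serviceAccountName !== undefined && s.name === serviceAccountName;
  }
  return false;
}
```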


@@ -0,0 +1,2 @@
export { sendViaSse } from './sse-client.js';
export { sendViaStdio } from './stdio-client.js';


@@ -0,0 +1,150 @@
import type { McpProxyResponse } from '../mcp-proxy-service.js';
/**
* SSE transport client for MCP servers using the legacy SSE protocol.
*
* Protocol: GET /sse → endpoint event with messages URL → POST to messages URL.
* Responses come back on the SSE stream, matched by JSON-RPC request ID.
*
* Each call opens a fresh SSE connection, initializes, sends the request,
* reads the response, and closes. Session caching may be added later.
*/
export async function sendViaSse(
baseUrl: string,
method: string,
params?: Record<string, unknown>,
timeoutMs = 30_000,
): Promise<McpProxyResponse> {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), timeoutMs);
try {
// 1. GET /sse → SSE stream
const sseResp = await fetch(`${baseUrl}/sse`, {
method: 'GET',
headers: { 'Accept': 'text/event-stream' },
signal: controller.signal,
});
if (!sseResp.ok) {
return errorResponse(`SSE connect failed: HTTP ${sseResp.status}`);
}
const reader = sseResp.body?.getReader();
if (!reader) {
return errorResponse('No SSE stream body');
}
// 2. Read until we get the endpoint event with messages URL
const decoder = new TextDecoder();
let buffer = '';
let messagesUrl = '';
let currentEvent = '';
while (!messagesUrl) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split('\n');
buffer = lines.pop() ?? '';
for (const line of lines) {
if (line.startsWith('event: ')) {
currentEvent = line.slice(7).trim();
} else if (line.startsWith('data: ') && currentEvent === 'endpoint') {
const endpoint = line.slice(6).trim();
messagesUrl = endpoint.startsWith('http') ? endpoint : `${baseUrl}${endpoint}`;
}
}
}
if (!messagesUrl) {
reader.cancel();
return errorResponse('No endpoint event from SSE stream');
}
const postHeaders = { 'Content-Type': 'application/json' };
// 3. Initialize
const initResp = await fetch(messagesUrl, {
method: 'POST',
headers: postHeaders,
body: JSON.stringify({
jsonrpc: '2.0',
id: 1,
method: 'initialize',
params: {
protocolVersion: '2024-11-05',
capabilities: {},
clientInfo: { name: 'mcpctl-proxy', version: '0.1.0' },
},
}),
signal: controller.signal,
});
if (!initResp.ok) {
reader.cancel();
return errorResponse(`SSE initialize failed: HTTP ${initResp.status}`);
}
// 4. Send notifications/initialized
await fetch(messagesUrl, {
method: 'POST',
headers: postHeaders,
body: JSON.stringify({ jsonrpc: '2.0', method: 'notifications/initialized' }),
signal: controller.signal,
});
// 5. Send the actual request
const requestId = 2;
await fetch(messagesUrl, {
method: 'POST',
headers: postHeaders,
body: JSON.stringify({
jsonrpc: '2.0',
id: requestId,
method,
...(params !== undefined ? { params } : {}),
}),
signal: controller.signal,
});
// 6. Read response from SSE stream (matched by request ID)
let responseBuffer = '';
const readTimeout = setTimeout(() => reader.cancel(), 5000);
while (true) {
const { done, value } = await reader.read();
if (done) break;
responseBuffer += decoder.decode(value, { stream: true });
for (const line of responseBuffer.split('\n')) {
if (line.startsWith('data: ')) {
try {
const parsed = JSON.parse(line.slice(6)) as McpProxyResponse;
if (parsed.id === requestId) {
clearTimeout(readTimeout);
reader.cancel();
return parsed;
}
} catch {
// Not valid JSON, skip
}
}
}
const respLines = responseBuffer.split('\n');
responseBuffer = respLines[respLines.length - 1] ?? '';
}
clearTimeout(readTimeout);
reader.cancel();
return errorResponse('No response received from SSE stream');
} finally {
clearTimeout(timer);
}
}
function errorResponse(message: string): McpProxyResponse {
return {
jsonrpc: '2.0',
id: 1,
error: { code: -32000, message },
};
}
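The endpoint-discovery step can be isolated as a pure function. A hedged sketch, assuming the server emits an `event: endpoint` line followed by a `data:` line carrying a (possibly relative) messages URL:

```typescript
// Sketch: extract the messages URL from raw SSE text, associating each
// data: line with the most recent event: name.
function extractMessagesUrl(sseText: string, baseUrl: string): string | null {
  let currentEvent = '';
  for (const line of sseText.split('\n')) {
    if (line.startsWith('event: ')) {
      currentEvent = line.slice(7).trim();
    } else if (line.startsWith('data: ') && currentEvent === 'endpoint') {
      const endpoint = line.slice(6).trim();
      return endpoint.startsWith('http') ? endpoint : `${baseUrl}${endpoint}`;
    }
  }
  return null;
}
```

Tracking the current event name per line (rather than scanning the whole buffer) keeps a `data:` payload from an unrelated event from being mistaken for the endpoint.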


@@ -0,0 +1,119 @@
import type { McpOrchestrator } from '../orchestrator.js';
import type { McpProxyResponse } from '../mcp-proxy-service.js';
/**
* STDIO transport client for MCP servers running as Docker containers.
*
* Runs `docker exec` with an inline Node.js script that spawns the MCP server
* binary, pipes JSON-RPC messages via stdin/stdout, and returns the response.
*
* Each call is self-contained: initialize → notifications/initialized → request → response.
*/
export async function sendViaStdio(
orchestrator: McpOrchestrator,
containerId: string,
packageName: string,
method: string,
params?: Record<string, unknown>,
timeoutMs = 30_000,
): Promise<McpProxyResponse> {
const initMsg = JSON.stringify({
jsonrpc: '2.0',
id: 1,
method: 'initialize',
params: {
protocolVersion: '2024-11-05',
capabilities: {},
clientInfo: { name: 'mcpctl-proxy', version: '0.1.0' },
},
});
const initializedMsg = JSON.stringify({
jsonrpc: '2.0',
method: 'notifications/initialized',
});
const requestBody: Record<string, unknown> = {
jsonrpc: '2.0',
id: 2,
method,
};
if (params !== undefined) {
requestBody.params = params;
}
const requestMsg = JSON.stringify(requestBody);
// Inline Node.js script that:
// 1. Spawns the MCP server binary via npx
// 2. Sends initialize → initialized → actual request via stdin
// 3. Reads stdout for JSON-RPC response with id: 2
// 4. Outputs the full JSON-RPC response to stdout
const probeScript = `
const { spawn } = require('child_process');
const proc = spawn('npx', ['--prefer-offline', '-y', ${JSON.stringify(packageName)}], { stdio: ['pipe', 'pipe', 'pipe'] });
let output = '';
let responded = false;
proc.stdout.on('data', d => {
output += d;
const lines = output.split('\\n');
for (const line of lines) {
if (!line.trim()) continue;
try {
const msg = JSON.parse(line);
if (msg.id === 2) {
responded = true;
process.stdout.write(JSON.stringify(msg), () => {
proc.kill();
process.exit(0);
});
}
} catch {}
}
output = lines[lines.length - 1] || '';
});
proc.stderr.on('data', () => {});
proc.on('error', e => { process.stdout.write(JSON.stringify({jsonrpc:'2.0',id:2,error:{code:-32000,message:e.message}})); process.exit(1); });
proc.on('exit', (code) => { if (!responded) { process.stdout.write(JSON.stringify({jsonrpc:'2.0',id:2,error:{code:-32000,message:'process exited '+code}})); process.exit(1); } });
setTimeout(() => { if (!responded) { process.stdout.write(JSON.stringify({jsonrpc:'2.0',id:2,error:{code:-32000,message:'timeout'}})); proc.kill(); process.exit(1); } }, ${timeoutMs - 2000});
proc.stdin.write(${JSON.stringify(initMsg)} + '\\n');
setTimeout(() => {
proc.stdin.write(${JSON.stringify(initializedMsg)} + '\\n');
setTimeout(() => {
proc.stdin.write(${JSON.stringify(requestMsg)} + '\\n');
}, 500);
}, 500);
`.trim();
try {
const result = await orchestrator.execInContainer(
containerId,
['node', '-e', probeScript],
{ timeoutMs },
);
if (result.exitCode === 0 && result.stdout.trim()) {
try {
return JSON.parse(result.stdout.trim()) as McpProxyResponse;
} catch {
return errorResponse(`Failed to parse STDIO response: ${result.stdout.slice(0, 200)}`);
}
}
// Try to parse error response from stdout
try {
return JSON.parse(result.stdout.trim()) as McpProxyResponse;
} catch {
const errorMsg = result.stderr.trim() || `docker exec exit code ${result.exitCode}`;
return errorResponse(errorMsg);
}
} catch (err) {
return errorResponse(err instanceof Error ? err.message : String(err));
}
}
function errorResponse(message: string): McpProxyResponse {
return {
jsonrpc: '2.0',
id: 2,
error: { code: -32000, message },
};
}
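The stdout-scanning rule the inline probe script relies on can be sketched standalone: parse each newline-delimited JSON-RPC message and return the one whose id matches the request (function name here is illustrative):

```typescript
// Sketch: scan newline-delimited JSON-RPC output for the response with the
// given request id; non-JSON lines (logs, partial frames) are skipped.
function matchStdioResponse(stdout: string, requestId: number): { id?: number } | null {
  for (const line of stdout.split('\n')) {
    if (!line.trim()) continue;
    try {
      const msg = JSON.parse(line) as { id?: number };
      if (msg.id === requestId) return msg;
    } catch {
      // log noise or a partial line; skip it
    }
  }
  return null;
}
```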


@@ -3,7 +3,9 @@ import { z } from 'zod';
export const CreateProjectSchema = z.object({
name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/, 'Name must be lowercase alphanumeric with hyphens'),
description: z.string().max(1000).default(''),
prompt: z.string().max(10000).default(''),
proxyMode: z.enum(['direct', 'filtered']).default('direct'),
gated: z.boolean().default(true),
llmProvider: z.string().max(100).optional(),
llmModel: z.string().max(100).optional(),
servers: z.array(z.string().min(1)).default([]),
@@ -14,7 +16,9 @@ export const CreateProjectSchema = z.object({
export const UpdateProjectSchema = z.object({
description: z.string().max(1000).optional(),
prompt: z.string().max(10000).optional(),
proxyMode: z.enum(['direct', 'filtered']).optional(),
gated: z.boolean().optional(),
llmProvider: z.string().max(100).nullable().optional(),
llmModel: z.string().max(100).nullable().optional(),
servers: z.array(z.string().min(1)).optional(),


@@ -0,0 +1,36 @@
import { z } from 'zod';
const LINK_TARGET_RE = /^[a-z0-9-]+\/[a-z0-9-]+:\S+$/;
export const CreatePromptSchema = z.object({
name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/, 'Name must be lowercase alphanumeric with hyphens'),
content: z.string().min(1).max(50000),
projectId: z.string().optional(),
priority: z.number().int().min(1).max(10).default(5).optional(),
linkTarget: z.string().regex(LINK_TARGET_RE, 'Link target must be project/server:resource-uri').optional(),
});
export const UpdatePromptSchema = z.object({
content: z.string().min(1).max(50000).optional(),
priority: z.number().int().min(1).max(10).optional(),
// linkTarget intentionally excluded — links are immutable
});
export const CreatePromptRequestSchema = z.object({
name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/, 'Name must be lowercase alphanumeric with hyphens'),
content: z.string().min(1).max(50000),
projectId: z.string().optional(),
priority: z.number().int().min(1).max(10).default(5).optional(),
createdBySession: z.string().optional(),
createdByUserId: z.string().optional(),
});
export const UpdatePromptRequestSchema = z.object({
content: z.string().min(1).max(50000).optional(),
priority: z.number().int().min(1).max(10).optional(),
});
export type CreatePromptInput = z.infer<typeof CreatePromptSchema>;
export type UpdatePromptInput = z.infer<typeof UpdatePromptSchema>;
export type CreatePromptRequestInput = z.infer<typeof CreatePromptRequestSchema>;
export type UpdatePromptRequestInput = z.infer<typeof UpdatePromptRequestSchema>;
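The `LINK_TARGET_RE` format above (`project/server:resource-uri`) can be exercised on its own; the example values below are illustrative:

```typescript
// Mirrors LINK_TARGET_RE from the schema above: lowercase project and server
// segments separated by a slash, then a colon and any non-whitespace URI.
const LINK_TARGET_RE = /^[a-z0-9-]+\/[a-z0-9-]+:\S+$/;

const isValidLinkTarget = (target: string): boolean => LINK_TARGET_RE.test(target);
```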


@@ -1,7 +1,7 @@
import { z } from 'zod';
export const RBAC_ROLES = ['edit', 'view', 'create', 'delete', 'run', 'expose'] as const;
export const RBAC_RESOURCES = ['*', 'servers', 'instances', 'secrets', 'projects', 'templates', 'users', 'groups', 'rbac'] as const;
export const RBAC_RESOURCES = ['*', 'servers', 'instances', 'secrets', 'projects', 'templates', 'users', 'groups', 'rbac', 'prompts', 'promptrequests'] as const;
/** Singular→plural map for resource names. */
const RESOURCE_ALIASES: Record<string, string> = {
@@ -12,6 +12,8 @@ const RESOURCE_ALIASES: Record<string, string> = {
template: 'templates',
user: 'users',
group: 'groups',
prompt: 'prompts',
promptrequest: 'promptrequests',
};
/** Normalize a resource name to its canonical plural form. */
@@ -20,7 +22,7 @@ export function normalizeResource(resource: string): string {
}
export const RbacSubjectSchema = z.object({
kind: z.enum(['User', 'Group']),
kind: z.enum(['User', 'Group', 'ServiceAccount']),
name: z.string().min(1),
});


@@ -0,0 +1,124 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { bootstrapSystemProject, SYSTEM_PROJECT_NAME, SYSTEM_OWNER_ID, getSystemPromptNames } from '../src/bootstrap/system-project.js';
import type { PrismaClient } from '@prisma/client';
function mockPrisma(): PrismaClient {
const prompts = new Map<string, { id: string; name: string; projectId: string }>();
let promptIdCounter = 1;
return {
project: {
upsert: vi.fn(async (args: { where: { name: string }; create: Record<string, unknown>; update: Record<string, unknown> }) => ({
id: 'sys-proj-id',
name: args.where.name,
...args.create,
})),
},
prompt: {
findFirst: vi.fn(async (args: { where: { name: string; projectId: string } }) => {
return prompts.get(`${args.where.projectId}:${args.where.name}`) ?? null;
}),
create: vi.fn(async (args: { data: { name: string; content: string; priority: number; projectId: string } }) => {
const id = `prompt-${promptIdCounter++}`;
const prompt = { id, ...args.data };
prompts.set(`${args.data.projectId}:${args.data.name}`, prompt);
return prompt;
}),
},
} as unknown as PrismaClient;
}
describe('bootstrapSystemProject', () => {
let prisma: PrismaClient;
beforeEach(() => {
prisma = mockPrisma();
});
it('creates the mcpctl-system project via upsert', async () => {
await bootstrapSystemProject(prisma);
expect(prisma.project.upsert).toHaveBeenCalledWith(
expect.objectContaining({
where: { name: SYSTEM_PROJECT_NAME },
create: expect.objectContaining({
name: SYSTEM_PROJECT_NAME,
ownerId: SYSTEM_OWNER_ID,
gated: false,
}),
update: {},
}),
);
});
it('creates all system prompts', async () => {
await bootstrapSystemProject(prisma);
const expectedNames = getSystemPromptNames();
expect(expectedNames.length).toBeGreaterThanOrEqual(4);
for (const name of expectedNames) {
expect(prisma.prompt.findFirst).toHaveBeenCalledWith(
expect.objectContaining({
where: { name, projectId: 'sys-proj-id' },
}),
);
}
expect(prisma.prompt.create).toHaveBeenCalledTimes(expectedNames.length);
});
it('creates system prompts with priority 10', async () => {
await bootstrapSystemProject(prisma);
const createCalls = vi.mocked(prisma.prompt.create).mock.calls;
for (const call of createCalls) {
const data = (call[0] as { data: { priority: number } }).data;
expect(data.priority).toBe(10);
}
});
it('does not re-create existing prompts (idempotent)', async () => {
// First call creates everything
await bootstrapSystemProject(prisma);
const firstCallCount = vi.mocked(prisma.prompt.create).mock.calls.length;
// Second call — prompts already exist in mock, should not create again
await bootstrapSystemProject(prisma);
// create should not have been called additional times
expect(vi.mocked(prisma.prompt.create).mock.calls.length).toBe(firstCallCount);
});
it('re-creates deleted prompts on subsequent startup', async () => {
// First run creates everything
await bootstrapSystemProject(prisma);
// Simulate deletion: clear the map so findFirst returns null
vi.mocked(prisma.prompt.findFirst).mockResolvedValue(null);
vi.mocked(prisma.prompt.create).mockClear();
// Second run should recreate
await bootstrapSystemProject(prisma);
const expectedNames = getSystemPromptNames();
expect(vi.mocked(prisma.prompt.create).mock.calls.length).toBe(expectedNames.length);
});
it('system project has gated=false', async () => {
await bootstrapSystemProject(prisma);
const upsertCall = vi.mocked(prisma.project.upsert).mock.calls[0]![0];
expect((upsertCall as { create: { gated: boolean } }).create.gated).toBe(false);
});
});
describe('getSystemPromptNames', () => {
it('returns all system prompt names', () => {
const names = getSystemPromptNames();
expect(names).toContain('gate-instructions');
expect(names).toContain('gate-encouragement');
expect(names).toContain('gate-intercept-preamble');
expect(names).toContain('session-greeting');
});
});


@@ -16,6 +16,7 @@ function makeProject(overrides: Partial<ProjectWithRelations> = {}): ProjectWith
description: '',
ownerId: 'user-1',
proxyMode: 'direct',
gated: true,
llmProvider: null,
llmModel: null,
version: 1,


@@ -12,6 +12,7 @@ function makeProject(overrides: Partial<ProjectWithRelations> = {}): ProjectWith
description: '',
ownerId: 'user-1',
proxyMode: 'direct',
gated: true,
llmProvider: null,
llmModel: null,
version: 1,


@@ -0,0 +1,508 @@
import { describe, it, expect, vi, afterEach } from 'vitest';
import Fastify from 'fastify';
import type { FastifyInstance } from 'fastify';
import { registerPromptRoutes } from '../src/routes/prompts.js';
import { PromptService } from '../src/services/prompt.service.js';
import { errorHandler } from '../src/middleware/error-handler.js';
import type { IPromptRepository } from '../src/repositories/prompt.repository.js';
import type { IPromptRequestRepository } from '../src/repositories/prompt-request.repository.js';
import type { IProjectRepository } from '../src/repositories/project.repository.js';
import type { Prompt, PromptRequest, Project } from '@prisma/client';
let app: FastifyInstance;
function makePrompt(overrides: Partial<Prompt> = {}): Prompt {
return {
id: 'prompt-1',
name: 'test-prompt',
content: 'Hello world',
projectId: null,
priority: 5,
summary: null,
chapters: null,
linkTarget: null,
version: 1,
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
};
}
function makePromptRequest(overrides: Partial<PromptRequest> = {}): PromptRequest {
return {
id: 'req-1',
name: 'test-request',
content: 'Proposed content',
projectId: null,
priority: 5,
createdBySession: 'session-abc',
createdByUserId: null,
createdAt: new Date(),
...overrides,
};
}
function makeProject(overrides: Partial<Project> = {}): Project {
return {
id: 'proj-1',
name: 'homeautomation',
description: '',
prompt: '',
proxyMode: 'direct',
gated: true,
llmProvider: null,
llmModel: null,
ownerId: 'user-1',
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
} as Project;
}
function mockPromptRepo(): IPromptRepository {
return {
findAll: vi.fn(async () => []),
findGlobal: vi.fn(async () => []),
findById: vi.fn(async () => null),
findByNameAndProject: vi.fn(async () => null),
create: vi.fn(async (data) => makePrompt(data)),
update: vi.fn(async (id, data) => makePrompt({ id, ...data })),
delete: vi.fn(async () => {}),
};
}
function mockPromptRequestRepo(): IPromptRequestRepository {
return {
findAll: vi.fn(async () => []),
findGlobal: vi.fn(async () => []),
findById: vi.fn(async () => null),
findByNameAndProject: vi.fn(async () => null),
findBySession: vi.fn(async () => []),
create: vi.fn(async (data) => makePromptRequest(data)),
update: vi.fn(async (id, data) => makePromptRequest({ id, ...data })),
delete: vi.fn(async () => {}),
};
}
function makeProjectWithServers(
overrides: Partial<Project> = {},
serverNames: string[] = [],
) {
return {
...makeProject(overrides),
servers: serverNames.map((name, i) => ({
id: `ps-${i}`,
projectId: overrides.id ?? 'proj-1',
serverId: `srv-${i}`,
server: { id: `srv-${i}`, name },
})),
};
}
function mockProjectRepo(): IProjectRepository {
return {
findAll: vi.fn(async () => []),
findById: vi.fn(async () => null),
findByName: vi.fn(async () => null),
create: vi.fn(async (data) => makeProject(data)),
update: vi.fn(async (_id, data) => makeProject({ ...data as Partial<Project> })),
delete: vi.fn(async () => {}),
};
}
afterEach(async () => {
if (app) await app.close();
});
function buildApp(opts?: {
promptRepo?: IPromptRepository;
promptRequestRepo?: IPromptRequestRepository;
projectRepo?: IProjectRepository;
}) {
const promptRepo = opts?.promptRepo ?? mockPromptRepo();
const promptRequestRepo = opts?.promptRequestRepo ?? mockPromptRequestRepo();
const projectRepo = opts?.projectRepo ?? mockProjectRepo();
const service = new PromptService(promptRepo, promptRequestRepo, projectRepo);
app = Fastify();
app.setErrorHandler(errorHandler);
registerPromptRoutes(app, service, projectRepo);
return { app, promptRepo, promptRequestRepo, projectRepo, service };
}
describe('Prompt routes', () => {
describe('GET /api/v1/prompts', () => {
it('returns all prompts without project filter', async () => {
const promptRepo = mockPromptRepo();
const globalPrompt = makePrompt({ id: 'p-1', name: 'global-rule', projectId: null });
const scopedPrompt = makePrompt({ id: 'p-2', name: 'scoped-rule', projectId: 'proj-1' });
vi.mocked(promptRepo.findAll).mockResolvedValue([globalPrompt, scopedPrompt]);
const { app: a } = buildApp({ promptRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts' });
expect(res.statusCode).toBe(200);
const body = res.json() as Prompt[];
expect(body).toHaveLength(2);
expect(promptRepo.findAll).toHaveBeenCalledWith(undefined);
});
it('filters by project name when ?project= is given', async () => {
const promptRepo = mockPromptRepo();
const projectRepo = mockProjectRepo();
vi.mocked(projectRepo.findByName).mockResolvedValue(makeProject({ id: 'proj-1', name: 'homeautomation' }));
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ id: 'p-1', name: 'ha-rule', projectId: 'proj-1' }),
makePrompt({ id: 'p-2', name: 'global-rule', projectId: null }),
]);
const { app: a } = buildApp({ promptRepo, projectRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts?project=homeautomation' });
expect(res.statusCode).toBe(200);
expect(projectRepo.findByName).toHaveBeenCalledWith('homeautomation');
expect(promptRepo.findAll).toHaveBeenCalledWith('proj-1');
});
it('returns only global prompts when ?scope=global', async () => {
const promptRepo = mockPromptRepo();
const globalOnly = [makePrompt({ id: 'p-g', name: 'global-rule', projectId: null })];
vi.mocked(promptRepo.findGlobal).mockResolvedValue(globalOnly);
const { app: a } = buildApp({ promptRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts?scope=global' });
expect(res.statusCode).toBe(200);
const body = res.json() as Prompt[];
expect(body).toHaveLength(1);
expect(promptRepo.findGlobal).toHaveBeenCalled();
expect(promptRepo.findAll).not.toHaveBeenCalled();
});
it('returns 404 when ?project= references unknown project', async () => {
const { app: a } = buildApp();
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts?project=nonexistent' });
expect(res.statusCode).toBe(404);
const body = res.json() as { error: string };
expect(body.error).toContain('Project not found');
});
});
describe('GET /api/v1/promptrequests', () => {
it('returns all prompt requests without project filter', async () => {
const promptRequestRepo = mockPromptRequestRepo();
vi.mocked(promptRequestRepo.findAll).mockResolvedValue([
makePromptRequest({ id: 'r-1', name: 'req-a' }),
]);
const { app: a } = buildApp({ promptRequestRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/promptrequests' });
expect(res.statusCode).toBe(200);
expect(promptRequestRepo.findAll).toHaveBeenCalledWith(undefined);
});
it('returns only global prompt requests when ?scope=global', async () => {
const promptRequestRepo = mockPromptRequestRepo();
vi.mocked(promptRequestRepo.findGlobal).mockResolvedValue([]);
const { app: a } = buildApp({ promptRequestRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/promptrequests?scope=global' });
expect(res.statusCode).toBe(200);
expect(promptRequestRepo.findGlobal).toHaveBeenCalled();
expect(promptRequestRepo.findAll).not.toHaveBeenCalled();
});
it('filters by project name when ?project= is given', async () => {
const promptRequestRepo = mockPromptRequestRepo();
const projectRepo = mockProjectRepo();
vi.mocked(projectRepo.findByName).mockResolvedValue(makeProject({ id: 'proj-1' }));
const { app: a } = buildApp({ promptRequestRepo, projectRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/promptrequests?project=homeautomation' });
expect(res.statusCode).toBe(200);
expect(promptRequestRepo.findAll).toHaveBeenCalledWith('proj-1');
});
it('returns 404 for unknown project on promptrequests', async () => {
const { app: a } = buildApp();
const res = await a.inject({ method: 'GET', url: '/api/v1/promptrequests?project=nope' });
expect(res.statusCode).toBe(404);
});
});
describe('POST /api/v1/promptrequests', () => {
it('creates a global prompt request (no project)', async () => {
const promptRequestRepo = mockPromptRequestRepo();
const { app: a } = buildApp({ promptRequestRepo });
const res = await a.inject({
method: 'POST',
url: '/api/v1/promptrequests',
payload: { name: 'global-req', content: 'some content' },
});
expect(res.statusCode).toBe(201);
expect(promptRequestRepo.create).toHaveBeenCalledWith(
expect.objectContaining({ name: 'global-req', content: 'some content' }),
);
});
it('resolves project name to ID when project given', async () => {
const promptRequestRepo = mockPromptRequestRepo();
const projectRepo = mockProjectRepo();
const proj = makeProject({ id: 'proj-1', name: 'myproj' });
vi.mocked(projectRepo.findByName).mockResolvedValue(proj);
vi.mocked(projectRepo.findById).mockResolvedValue(proj);
const { app: a } = buildApp({ promptRequestRepo, projectRepo });
const res = await a.inject({
method: 'POST',
url: '/api/v1/promptrequests',
payload: { name: 'scoped-req', content: 'text', project: 'myproj' },
});
expect(res.statusCode).toBe(201);
expect(projectRepo.findByName).toHaveBeenCalledWith('myproj');
expect(promptRequestRepo.create).toHaveBeenCalledWith(
expect.objectContaining({ name: 'scoped-req', projectId: 'proj-1' }),
);
});
it('returns 404 for unknown project name', async () => {
const { app: a } = buildApp();
const res = await a.inject({
method: 'POST',
url: '/api/v1/promptrequests',
payload: { name: 'bad-req', content: 'x', project: 'nope' },
});
expect(res.statusCode).toBe(404);
});
});
describe('POST /api/v1/promptrequests/:id/approve', () => {
it('atomically approves a prompt request', async () => {
const promptRequestRepo = mockPromptRequestRepo();
const promptRepo = mockPromptRepo();
const req = makePromptRequest({ id: 'req-1', name: 'my-rule', projectId: 'proj-1' });
vi.mocked(promptRequestRepo.findById).mockResolvedValue(req);
const { app: a } = buildApp({ promptRepo, promptRequestRepo });
const res = await a.inject({ method: 'POST', url: '/api/v1/promptrequests/req-1/approve' });
expect(res.statusCode).toBe(200);
expect(promptRepo.create).toHaveBeenCalledWith({
name: 'my-rule',
content: 'Proposed content',
projectId: 'proj-1',
});
expect(promptRequestRepo.delete).toHaveBeenCalledWith('req-1');
});
});
describe('Security: projectId tampering', () => {
it('rejects projectId in prompt update payload', async () => {
const promptRepo = mockPromptRepo();
vi.mocked(promptRepo.findById).mockResolvedValue(makePrompt({ id: 'p-1', projectId: 'proj-a' }));
const { app: a } = buildApp({ promptRepo });
const res = await a.inject({
method: 'PUT',
url: '/api/v1/prompts/p-1',
payload: { content: 'new content', projectId: 'proj-evil' },
});
// Should succeed but ignore projectId — UpdatePromptSchema strips it
expect(res.statusCode).toBe(200);
expect(promptRepo.update).toHaveBeenCalledWith('p-1', { content: 'new content' });
// projectId must NOT be in the update call
const updateArg = vi.mocked(promptRepo.update).mock.calls[0]![1];
expect(updateArg).not.toHaveProperty('projectId');
});
it('rejects projectId in promptrequest update payload', async () => {
const promptRequestRepo = mockPromptRequestRepo();
vi.mocked(promptRequestRepo.findById).mockResolvedValue(makePromptRequest({ id: 'r-1', projectId: 'proj-a' }));
const { app: a } = buildApp({ promptRequestRepo });
const res = await a.inject({
method: 'PUT',
url: '/api/v1/promptrequests/r-1',
payload: { content: 'new content', projectId: 'proj-evil' },
});
expect(res.statusCode).toBe(200);
expect(promptRequestRepo.update).toHaveBeenCalledWith('r-1', { content: 'new content' });
const updateArg = vi.mocked(promptRequestRepo.update).mock.calls[0]![1];
expect(updateArg).not.toHaveProperty('projectId');
});
});
describe('linkStatus enrichment', () => {
it('returns linkStatus=null for non-linked prompts', async () => {
const promptRepo = mockPromptRepo();
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ id: 'p-1', name: 'plain', linkTarget: null }),
]);
const { app: a } = buildApp({ promptRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts' });
expect(res.statusCode).toBe(200);
const body = res.json() as Array<{ linkStatus: string | null }>;
expect(body[0]!.linkStatus).toBeNull();
});
it('returns linkStatus=alive when project and server exist', async () => {
const promptRepo = mockPromptRepo();
const projectRepo = mockProjectRepo();
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ id: 'p-1', name: 'linked', linkTarget: 'source-proj/docmost-mcp:docmost://pages/abc' }),
]);
vi.mocked(projectRepo.findByName).mockImplementation(async (name) => {
if (name === 'source-proj') {
return makeProjectWithServers({ id: 'sp-1', name: 'source-proj' }, ['docmost-mcp']) as never;
}
return null;
});
const { app: a } = buildApp({ promptRepo, projectRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts' });
expect(res.statusCode).toBe(200);
const body = res.json() as Array<{ linkStatus: string }>;
expect(body[0]!.linkStatus).toBe('alive');
});
it('returns linkStatus=dead when source project not found', async () => {
const promptRepo = mockPromptRepo();
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ id: 'p-1', name: 'broken', linkTarget: 'missing-proj/srv:some://uri' }),
]);
const { app: a } = buildApp({ promptRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts' });
expect(res.statusCode).toBe(200);
const body = res.json() as Array<{ linkStatus: string }>;
expect(body[0]!.linkStatus).toBe('dead');
});
it('returns linkStatus=dead when server not in project', async () => {
const promptRepo = mockPromptRepo();
const projectRepo = mockProjectRepo();
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ id: 'p-1', name: 'wrong-srv', linkTarget: 'proj/wrong-server:some://uri' }),
]);
vi.mocked(projectRepo.findByName).mockResolvedValue(
makeProjectWithServers({ id: 'sp-1', name: 'proj' }, ['other-server']) as never,
);
const { app: a } = buildApp({ promptRepo, projectRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts' });
expect(res.statusCode).toBe(200);
const body = res.json() as Array<{ linkStatus: string }>;
expect(body[0]!.linkStatus).toBe('dead');
});
it('enriches single prompt GET with linkStatus', async () => {
const promptRepo = mockPromptRepo();
const projectRepo = mockProjectRepo();
vi.mocked(promptRepo.findById).mockResolvedValue(
makePrompt({ id: 'p-1', name: 'linked', linkTarget: 'proj/srv:some://uri' }),
);
vi.mocked(projectRepo.findByName).mockResolvedValue(
makeProjectWithServers({ id: 'sp-1', name: 'proj' }, ['srv']) as never,
);
const { app: a } = buildApp({ promptRepo, projectRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts/p-1' });
expect(res.statusCode).toBe(200);
const body = res.json() as { linkStatus: string };
expect(body.linkStatus).toBe('alive');
});
it('caches project lookup for multiple linked prompts', async () => {
const promptRepo = mockPromptRepo();
const projectRepo = mockProjectRepo();
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ id: 'p-1', name: 'link-a', linkTarget: 'proj/srv:uri-a' }),
makePrompt({ id: 'p-2', name: 'link-b', linkTarget: 'proj/srv:uri-b' }),
]);
vi.mocked(projectRepo.findByName).mockResolvedValue(
makeProjectWithServers({ id: 'sp-1', name: 'proj' }, ['srv']) as never,
);
const { app: a } = buildApp({ promptRepo, projectRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts' });
expect(res.statusCode).toBe(200);
const body = res.json() as Array<{ linkStatus: string }>;
expect(body).toHaveLength(2);
expect(body[0]!.linkStatus).toBe('alive');
expect(body[1]!.linkStatus).toBe('alive');
// Should only call findByName once (cached)
expect(projectRepo.findByName).toHaveBeenCalledTimes(1);
});
it('supports ?projectId= query parameter', async () => {
const promptRepo = mockPromptRepo();
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ id: 'p-1', name: 'scoped', projectId: 'proj-1' }),
]);
const { app: a } = buildApp({ promptRepo });
const res = await a.inject({ method: 'GET', url: '/api/v1/prompts?projectId=proj-1' });
expect(res.statusCode).toBe(200);
expect(promptRepo.findAll).toHaveBeenCalledWith('proj-1');
});
});
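The linkStatus tests above all exercise `linkTarget` values of the form `project/server:uri` (e.g. `source-proj/docmost-mcp:docmost://pages/abc`). A minimal standalone sketch of how such a reference splits into its three parts; the helper name `parseLinkTarget` is hypothetical and not taken from the codebase:

```typescript
// Hypothetical parser for the "project/server:uri" cross-project link format.
// Splits on the first "/" (project) and on the first ":" after it (server vs. URI).
interface ParsedLink {
  project: string;
  server: string;
  uri: string;
}

function parseLinkTarget(target: string): ParsedLink | null {
  const slash = target.indexOf('/');
  if (slash <= 0) return null; // no project part, e.g. "invalid-format" or "project:uri"
  const colon = target.indexOf(':', slash + 1);
  if (colon <= slash + 1) return null; // no server part between "/" and ":"
  return {
    project: target.slice(0, slash),
    server: target.slice(slash + 1, colon),
    uri: target.slice(colon + 1),
  };
}
```

Note the URI keeps its own scheme colon and slashes intact because only the first `:` after the project/server separator is consumed.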
describe('GET /api/v1/projects/:name/prompts/visible', () => {
it('returns approved prompts + session pending requests', async () => {
const promptRepo = mockPromptRepo();
const promptRequestRepo = mockPromptRequestRepo();
const projectRepo = mockProjectRepo();
vi.mocked(projectRepo.findByName).mockResolvedValue(makeProject({ id: 'proj-1' }));
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ name: 'approved-one', projectId: 'proj-1' }),
makePrompt({ name: 'global-one', projectId: null }),
]);
vi.mocked(promptRequestRepo.findBySession).mockResolvedValue([
makePromptRequest({ name: 'pending-one', projectId: 'proj-1' }),
]);
const { app: a } = buildApp({ promptRepo, promptRequestRepo, projectRepo });
const res = await a.inject({
method: 'GET',
url: '/api/v1/projects/homeautomation/prompts/visible?session=sess-123',
});
expect(res.statusCode).toBe(200);
const body = res.json() as Array<{ name: string; type: string }>;
expect(body).toHaveLength(3);
expect(body.map((b) => b.name)).toContain('approved-one');
expect(body.map((b) => b.name)).toContain('global-one');
expect(body.map((b) => b.name)).toContain('pending-one');
const pending = body.find((b) => b.name === 'pending-one');
expect(pending?.type).toBe('promptrequest');
});
it('returns 404 for unknown project', async () => {
const { app: a } = buildApp();
const res = await a.inject({
method: 'GET',
url: '/api/v1/projects/nonexistent/prompts/visible',
});
expect(res.statusCode).toBe(404);
});
});
});
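The visibility contract pinned down by these route tests: approved prompts (project-scoped plus global) are returned tagged `type: 'prompt'`, followed by the session's own pending requests tagged `type: 'promptrequest'`. A minimal sketch of that merge, with shapes reduced to the fields the tests assert on (a reading of the tests, not the actual route handler):

```typescript
// Minimal merge implied by the visibility tests: approved prompts first,
// then the session's pending requests, each tagged with its type.
interface Entry {
  name: string;
  content: string;
}
type Visible = Entry & { type: 'prompt' | 'promptrequest' };

function mergeVisible(prompts: Entry[], pendingRequests: Entry[]): Visible[] {
  return [
    ...prompts.map((p) => ({ ...p, type: 'prompt' as const })),
    ...pendingRequests.map((r) => ({ ...r, type: 'promptrequest' as const })),
  ];
}
```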

View File

@@ -0,0 +1,421 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { PromptService } from '../../src/services/prompt.service.js';
import type { IPromptRepository } from '../../src/repositories/prompt.repository.js';
import type { IPromptRequestRepository } from '../../src/repositories/prompt-request.repository.js';
import type { IProjectRepository } from '../../src/repositories/project.repository.js';
import type { Prompt, PromptRequest, Project } from '@prisma/client';
function makePrompt(overrides: Partial<Prompt> = {}): Prompt {
return {
id: 'prompt-1',
name: 'test-prompt',
content: 'Hello world',
projectId: null,
priority: 5,
summary: null,
chapters: null,
linkTarget: null,
version: 1,
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
};
}
function makePromptRequest(overrides: Partial<PromptRequest> = {}): PromptRequest {
return {
id: 'req-1',
name: 'test-request',
content: 'Proposed content',
projectId: null,
priority: 5,
createdBySession: 'session-abc',
createdByUserId: null,
createdAt: new Date(),
...overrides,
};
}
function makeProject(overrides: Partial<Project> = {}): Project {
return {
id: 'proj-1',
name: 'test-project',
description: '',
prompt: '',
proxyMode: 'direct',
gated: true,
llmProvider: null,
llmModel: null,
ownerId: 'user-1',
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
} as Project;
}
function mockPromptRepo(): IPromptRepository {
return {
findAll: vi.fn(async () => []),
findGlobal: vi.fn(async () => []),
findById: vi.fn(async () => null),
findByNameAndProject: vi.fn(async () => null),
create: vi.fn(async (data) => makePrompt(data)),
update: vi.fn(async (id, data) => makePrompt({ id, ...data })),
delete: vi.fn(async () => {}),
};
}
function mockPromptRequestRepo(): IPromptRequestRepository {
return {
findAll: vi.fn(async () => []),
findGlobal: vi.fn(async () => []),
findById: vi.fn(async () => null),
findByNameAndProject: vi.fn(async () => null),
findBySession: vi.fn(async () => []),
create: vi.fn(async (data) => makePromptRequest(data)),
update: vi.fn(async (id, data) => makePromptRequest({ id, ...data })),
delete: vi.fn(async () => {}),
};
}
function mockProjectRepo(): IProjectRepository {
return {
findAll: vi.fn(async () => []),
findById: vi.fn(async () => null),
findByName: vi.fn(async () => null),
create: vi.fn(async (data) => makeProject(data)),
update: vi.fn(async (id, data) => makeProject({ id, ...data })),
delete: vi.fn(async () => {}),
};
}
describe('PromptService', () => {
let promptRepo: IPromptRepository;
let promptRequestRepo: IPromptRequestRepository;
let projectRepo: IProjectRepository;
let service: PromptService;
beforeEach(() => {
promptRepo = mockPromptRepo();
promptRequestRepo = mockPromptRequestRepo();
projectRepo = mockProjectRepo();
service = new PromptService(promptRepo, promptRequestRepo, projectRepo);
});
// ── Prompt CRUD ──
describe('listPrompts', () => {
it('should return all prompts', async () => {
const prompts = [makePrompt(), makePrompt({ id: 'prompt-2', name: 'other' })];
vi.mocked(promptRepo.findAll).mockResolvedValue(prompts);
const result = await service.listPrompts();
expect(result).toEqual(prompts);
expect(promptRepo.findAll).toHaveBeenCalledWith(undefined);
});
it('should filter by projectId', async () => {
await service.listPrompts('proj-1');
expect(promptRepo.findAll).toHaveBeenCalledWith('proj-1');
});
});
describe('listGlobalPrompts', () => {
it('should return only global prompts', async () => {
const globalPrompts = [makePrompt({ name: 'global-rule', projectId: null })];
vi.mocked(promptRepo.findGlobal).mockResolvedValue(globalPrompts);
const result = await service.listGlobalPrompts();
expect(result).toEqual(globalPrompts);
expect(promptRepo.findGlobal).toHaveBeenCalled();
});
});
describe('getPrompt', () => {
it('should return a prompt by id', async () => {
const prompt = makePrompt();
vi.mocked(promptRepo.findById).mockResolvedValue(prompt);
const result = await service.getPrompt('prompt-1');
expect(result).toEqual(prompt);
});
it('should throw NotFoundError for missing prompt', async () => {
await expect(service.getPrompt('nope')).rejects.toThrow('Prompt not found: nope');
});
});
describe('createPrompt', () => {
it('should create a prompt', async () => {
const result = await service.createPrompt({ name: 'new-prompt', content: 'stuff' });
expect(promptRepo.create).toHaveBeenCalledWith({ name: 'new-prompt', content: 'stuff' });
expect(result.name).toBe('new-prompt');
});
it('should validate project exists when projectId given', async () => {
vi.mocked(projectRepo.findById).mockResolvedValue(makeProject());
await service.createPrompt({ name: 'scoped', content: 'x', projectId: 'proj-1' });
expect(projectRepo.findById).toHaveBeenCalledWith('proj-1');
});
it('should throw when project not found', async () => {
await expect(
service.createPrompt({ name: 'bad', content: 'x', projectId: 'nope' }),
).rejects.toThrow('Project not found: nope');
});
it('should reject invalid name format', async () => {
await expect(
service.createPrompt({ name: 'INVALID_NAME', content: 'x' }),
).rejects.toThrow();
});
});
describe('updatePrompt', () => {
it('should update prompt content', async () => {
vi.mocked(promptRepo.findById).mockResolvedValue(makePrompt());
await service.updatePrompt('prompt-1', { content: 'updated' });
expect(promptRepo.update).toHaveBeenCalledWith('prompt-1', { content: 'updated' });
});
it('should throw for missing prompt', async () => {
await expect(service.updatePrompt('nope', { content: 'x' })).rejects.toThrow('Prompt not found');
});
});
describe('deletePrompt', () => {
it('should delete an existing prompt', async () => {
vi.mocked(promptRepo.findById).mockResolvedValue(makePrompt());
await service.deletePrompt('prompt-1');
expect(promptRepo.delete).toHaveBeenCalledWith('prompt-1');
});
it('should throw for missing prompt', async () => {
await expect(service.deletePrompt('nope')).rejects.toThrow('Prompt not found');
});
it('should reject deletion of system prompts', async () => {
vi.mocked(promptRepo.findById).mockResolvedValue(makePrompt({ projectId: 'sys-proj' }));
vi.mocked(projectRepo.findById).mockResolvedValue(makeProject({ id: 'sys-proj', name: 'mcpctl-system' }));
await expect(service.deletePrompt('prompt-1')).rejects.toThrow('Cannot delete system prompts');
});
it('should allow deletion of non-system project prompts', async () => {
vi.mocked(promptRepo.findById).mockResolvedValue(makePrompt({ projectId: 'proj-1' }));
vi.mocked(projectRepo.findById).mockResolvedValue(makeProject({ id: 'proj-1', name: 'my-project' }));
await service.deletePrompt('prompt-1');
expect(promptRepo.delete).toHaveBeenCalledWith('prompt-1');
});
});
// ── PromptRequest CRUD ──
describe('listPromptRequests', () => {
it('should return all prompt requests', async () => {
const reqs = [makePromptRequest()];
vi.mocked(promptRequestRepo.findAll).mockResolvedValue(reqs);
const result = await service.listPromptRequests();
expect(result).toEqual(reqs);
});
});
describe('getPromptRequest', () => {
it('should return a prompt request by id', async () => {
const req = makePromptRequest();
vi.mocked(promptRequestRepo.findById).mockResolvedValue(req);
const result = await service.getPromptRequest('req-1');
expect(result).toEqual(req);
});
it('should throw for missing request', async () => {
await expect(service.getPromptRequest('nope')).rejects.toThrow('PromptRequest not found');
});
});
describe('deletePromptRequest', () => {
it('should delete an existing request', async () => {
vi.mocked(promptRequestRepo.findById).mockResolvedValue(makePromptRequest());
await service.deletePromptRequest('req-1');
expect(promptRequestRepo.delete).toHaveBeenCalledWith('req-1');
});
});
// ── Propose ──
describe('propose', () => {
it('should create a prompt request', async () => {
const result = await service.propose({
name: 'my-prompt',
content: 'proposal',
createdBySession: 'sess-1',
});
expect(promptRequestRepo.create).toHaveBeenCalledWith(
expect.objectContaining({ name: 'my-prompt', content: 'proposal', createdBySession: 'sess-1' }),
);
expect(result.name).toBe('my-prompt');
});
it('should validate project exists when projectId given', async () => {
vi.mocked(projectRepo.findById).mockResolvedValue(makeProject());
await service.propose({
name: 'scoped',
content: 'x',
projectId: 'proj-1',
});
expect(projectRepo.findById).toHaveBeenCalledWith('proj-1');
});
});
// ── Approve ──
describe('approve', () => {
it('should delete request and create prompt (atomic)', async () => {
const req = makePromptRequest({ id: 'req-1', name: 'approved', content: 'good stuff', projectId: 'proj-1' });
vi.mocked(promptRequestRepo.findById).mockResolvedValue(req);
const result = await service.approve('req-1');
expect(promptRepo.create).toHaveBeenCalledWith(
expect.objectContaining({ name: 'approved', content: 'good stuff', projectId: 'proj-1' }),
);
expect(promptRequestRepo.delete).toHaveBeenCalledWith('req-1');
expect(result.name).toBe('approved');
});
it('should throw for missing request', async () => {
await expect(service.approve('nope')).rejects.toThrow('PromptRequest not found');
});
it('should handle global prompt (no projectId)', async () => {
const req = makePromptRequest({ id: 'req-2', name: 'global', content: 'stuff', projectId: null });
vi.mocked(promptRequestRepo.findById).mockResolvedValue(req);
await service.approve('req-2');
// Should NOT include projectId in the create call
const createArg = vi.mocked(promptRepo.create).mock.calls[0]![0];
expect(createArg).not.toHaveProperty('projectId');
});
});
// ── Priority ──
describe('prompt priority', () => {
it('creates prompt with explicit priority', async () => {
const result = await service.createPrompt({ name: 'high-pri', content: 'x', priority: 8 });
expect(promptRepo.create).toHaveBeenCalledWith(expect.objectContaining({ priority: 8 }));
expect(result.priority).toBe(8);
});
it('uses default priority 5 when not specified', async () => {
await service.createPrompt({ name: 'default-pri', content: 'x' });
// Default in schema is 5 — create is called without priority
const createArg = vi.mocked(promptRepo.create).mock.calls[0]![0];
expect(createArg.priority).toBeUndefined();
});
it('rejects priority below 1', async () => {
await expect(
service.createPrompt({ name: 'bad-pri', content: 'x', priority: 0 }),
).rejects.toThrow();
});
it('rejects priority above 10', async () => {
await expect(
service.createPrompt({ name: 'bad-pri', content: 'x', priority: 11 }),
).rejects.toThrow();
});
it('updates prompt priority', async () => {
vi.mocked(promptRepo.findById).mockResolvedValue(makePrompt());
await service.updatePrompt('prompt-1', { priority: 3 });
expect(promptRepo.update).toHaveBeenCalledWith('prompt-1', expect.objectContaining({ priority: 3 }));
});
});
// ── Link Target ──
describe('prompt links', () => {
it('creates linked prompt with valid linkTarget', async () => {
await service.createPrompt({
name: 'linked',
content: 'link content',
linkTarget: 'other-project/docmost-mcp:docmost://pages/abc',
});
expect(promptRepo.create).toHaveBeenCalledWith(
expect.objectContaining({ linkTarget: 'other-project/docmost-mcp:docmost://pages/abc' }),
);
});
it('rejects invalid link format', async () => {
await expect(
service.createPrompt({ name: 'bad-link', content: 'x', linkTarget: 'invalid-format' }),
).rejects.toThrow();
});
it('rejects link without server part', async () => {
await expect(
service.createPrompt({ name: 'bad-link', content: 'x', linkTarget: 'project:uri' }),
).rejects.toThrow();
});
it('approve carries priority from request to prompt', async () => {
const req = makePromptRequest({ id: 'req-1', name: 'high-pri', content: 'x', projectId: 'proj-1', priority: 9 });
vi.mocked(promptRequestRepo.findById).mockResolvedValue(req);
await service.approve('req-1');
expect(promptRepo.create).toHaveBeenCalledWith(
expect.objectContaining({ priority: 9 }),
);
});
it('propose passes priority through', async () => {
await service.propose({
name: 'pri-req',
content: 'x',
priority: 7,
});
expect(promptRequestRepo.create).toHaveBeenCalledWith(
expect.objectContaining({ priority: 7 }),
);
});
});
// ── Visibility ──
describe('getVisiblePrompts', () => {
it('should return approved prompts and session requests', async () => {
vi.mocked(promptRepo.findAll).mockResolvedValue([
makePrompt({ name: 'approved-1', content: 'A' }),
]);
vi.mocked(promptRequestRepo.findBySession).mockResolvedValue([
makePromptRequest({ name: 'pending-1', content: 'B' }),
]);
const result = await service.getVisiblePrompts('proj-1', 'sess-1');
expect(result).toHaveLength(2);
expect(result[0]).toEqual({ name: 'approved-1', content: 'A', type: 'prompt' });
expect(result[1]).toEqual({ name: 'pending-1', content: 'B', type: 'promptrequest' });
});
it('should not include pending requests without sessionId', async () => {
vi.mocked(promptRepo.findAll).mockResolvedValue([makePrompt()]);
const result = await service.getVisiblePrompts('proj-1');
expect(result).toHaveLength(1);
expect(promptRequestRepo.findBySession).not.toHaveBeenCalled();
});
it('should return empty when no prompts or requests', async () => {
const result = await service.getVisiblePrompts();
expect(result).toEqual([]);
});
});
});
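The approve tests pin down the atomic swap precisely: the new prompt inherits name, content, priority, and (when present) projectId from the request, and a global request must not carry a `projectId` key at all. A condensed sketch of the create-payload construction implied by those tests; this is a reading of the test assertions, not the actual service source:

```typescript
// Sketch of the approve() payload contract implied by the tests: copy fields
// from the request into the new prompt, omitting projectId entirely for
// global requests (projectId === null).
interface PromptRequestLike {
  name: string;
  content: string;
  priority: number;
  projectId: string | null;
}

function buildPromptCreateData(req: PromptRequestLike) {
  const data: { name: string; content: string; priority: number; projectId?: string } = {
    name: req.name,
    content: req.content,
    priority: req.priority,
  };
  // Global requests must NOT include a projectId key (the tests assert
  // `not.toHaveProperty('projectId')`, not `projectId: undefined`).
  if (req.projectId !== null) data.projectId = req.projectId;
  return data;
}
```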

View File

@@ -0,0 +1,110 @@
import { describe, it, expect, vi } from 'vitest';
import {
PromptSummaryService,
extractFirstSentence,
extractHeadings,
type LlmSummaryGenerator,
} from '../../src/services/prompt-summary.service.js';
describe('extractFirstSentence', () => {
it('extracts first sentence from plain text', () => {
const result = extractFirstSentence('This is the first sentence. And this is the second.', 20);
expect(result).toBe('This is the first sentence.');
});
it('truncates to maxWords', () => {
const long = 'word '.repeat(30).trim();
const result = extractFirstSentence(long, 5);
expect(result).toBe('word word word word word...');
});
it('skips markdown headings to find content', () => {
const content = '# Title\n\n## Subtitle\n\nActual content here. More text.';
expect(extractFirstSentence(content, 20)).toBe('Actual content here.');
});
it('falls back to first heading if no content lines', () => {
const content = '# Only Headings\n## Nothing Else';
expect(extractFirstSentence(content, 20)).toBe('Only Headings');
});
it('strips markdown formatting', () => {
const content = 'This has **bold** and *italic* and `code` and [link](http://example.com).';
expect(extractFirstSentence(content, 20)).toBe('This has bold and italic and code and link.');
});
it('handles empty content', () => {
expect(extractFirstSentence('', 20)).toBe('');
expect(extractFirstSentence(' ', 20)).toBe('');
});
it('handles content with no sentence boundary', () => {
const content = 'No period at the end';
expect(extractFirstSentence(content, 20)).toBe('No period at the end');
});
it('handles exclamation and question marks', () => {
expect(extractFirstSentence('Is this a question? Yes it is.', 20)).toBe('Is this a question?');
expect(extractFirstSentence('Watch out! Be careful.', 20)).toBe('Watch out!');
});
});
describe('extractHeadings', () => {
it('extracts all levels of markdown headings', () => {
const content = '# H1\n## H2\n### H3\nSome text\n#### H4';
expect(extractHeadings(content)).toEqual(['H1', 'H2', 'H3', 'H4']);
});
it('returns empty array for content without headings', () => {
expect(extractHeadings('Just plain text\nMore text')).toEqual([]);
});
it('handles empty content', () => {
expect(extractHeadings('')).toEqual([]);
});
it('trims heading text', () => {
const content = '# Spaced Heading \n## Another ';
expect(extractHeadings(content)).toEqual(['Spaced Heading', 'Another']);
});
});
describe('PromptSummaryService', () => {
it('uses regex fallback when no LLM', async () => {
const service = new PromptSummaryService(null);
const result = await service.generateSummary('# Overview\n\nThis is a test document. It has content.\n\n## Section One\n\n## Section Two');
expect(result.summary).toBe('This is a test document.');
expect(result.chapters).toEqual(['Overview', 'Section One', 'Section Two']);
});
it('uses LLM when available', async () => {
const mockLlm: LlmSummaryGenerator = {
generate: vi.fn(async () => ({
summary: 'LLM-generated summary',
chapters: ['LLM Chapter 1'],
})),
};
const service = new PromptSummaryService(mockLlm);
const result = await service.generateSummary('Some content');
expect(result.summary).toBe('LLM-generated summary');
expect(result.chapters).toEqual(['LLM Chapter 1']);
expect(mockLlm.generate).toHaveBeenCalledWith('Some content');
});
it('falls back to regex on LLM failure', async () => {
const mockLlm: LlmSummaryGenerator = {
generate: vi.fn(async () => { throw new Error('LLM unavailable'); }),
};
const service = new PromptSummaryService(mockLlm);
const result = await service.generateSummary('Fallback content here. Second sentence.');
expect(result.summary).toBe('Fallback content here.');
expect(mockLlm.generate).toHaveBeenCalled();
});
it('generateWithRegex works directly', () => {
const service = new PromptSummaryService(null);
const result = service.generateWithRegex('# Title\n\nContent line. More.\n\n## Chapter A\n\n## Chapter B');
expect(result.summary).toBe('Content line.');
expect(result.chapters).toEqual(['Title', 'Chapter A', 'Chapter B']);
});
});
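The `extractHeadings` tests fully specify the behavior: collect the text of every `#`-style markdown heading at any level, trimmed, in document order, skipping non-heading lines. An illustrative implementation consistent with those tests (the exported name matches, but this body is a sketch, not the shipped source):

```typescript
// Illustrative extractHeadings consistent with the tests above: match "#" to
// "######" headings line by line, keep the trimmed heading text, ignore
// everything else.
function extractHeadings(content: string): string[] {
  const headings: string[] = [];
  for (const line of content.split('\n')) {
    const m = /^#{1,6}\s+(.*)$/.exec(line.trim());
    if (m) headings.push(m[1]!.trim());
  }
  return headings;
}
```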

View File

@@ -5,6 +5,7 @@ import { McpdUpstream } from './upstream/mcpd.js';
interface McpdServer {
id: string;
name: string;
description?: string;
transport: string;
status?: string;
}
@@ -35,7 +36,7 @@ export async function refreshProjectUpstreams(
let servers: McpdServer[];
if (authToken) {
// Forward the client's auth token to mcpd so RBAC applies
const result = await mcpdClient.forward('GET', path, '', undefined);
const result = await mcpdClient.forward('GET', path, '', undefined, authToken);
if (result.status >= 400) {
throw new Error(`Failed to fetch project servers: ${result.status}`);
}
@@ -47,6 +48,36 @@ export async function refreshProjectUpstreams(
return syncUpstreams(router, mcpdClient, servers);
}
/**
* Fetch a project's LLM config (llmProvider, llmModel) from mcpd.
* These are the project-level "recommendations" — local overrides take priority.
*/
export interface ProjectLlmConfig {
llmProvider?: string;
llmModel?: string;
gated?: boolean;
}
export async function fetchProjectLlmConfig(
mcpdClient: McpdClient,
projectName: string,
): Promise<ProjectLlmConfig> {
try {
const project = await mcpdClient.get<{
llmProvider?: string;
llmModel?: string;
gated?: boolean;
}>(`/api/v1/projects/${encodeURIComponent(projectName)}`);
const config: ProjectLlmConfig = {};
if (project.llmProvider) config.llmProvider = project.llmProvider;
if (project.llmModel) config.llmModel = project.llmModel;
if (project.gated !== undefined) config.gated = project.gated;
return config;
} catch {
return {};
}
}
/** Shared sync logic: reconcile a router's upstreams with a server list. */
function syncUpstreams(router: McpRouter, mcpdClient: McpdClient, servers: McpdServer[]): string[] {
const registered: string[] = [];
@@ -63,7 +94,7 @@ function syncUpstreams(router: McpRouter, mcpdClient: McpdClient, servers: McpdS
// Add/update upstreams for each server
for (const server of servers) {
if (!currentNames.has(server.name)) {
const upstream = new McpdUpstream(server.id, server.name, mcpdClient);
const upstream = new McpdUpstream(server.id, server.name, mcpdClient, server.description);
router.addUpstream(upstream);
}
registered.push(server.name);

View File

@@ -0,0 +1,81 @@
/**
* LLM-based prompt selection for the gating flow.
*
* Sends tags + prompt index to the heavy LLM, which returns
* a ranked list of relevant prompt names.
*/
import type { ProviderRegistry } from '../providers/registry.js';
export interface PromptIndexForLlm {
name: string;
priority: number;
summary: string | null;
chapters: string[] | null;
}
export interface LlmSelectionResult {
selectedNames: string[];
reasoning: string;
}
export class LlmPromptSelector {
constructor(
private readonly providerRegistry: ProviderRegistry,
private readonly modelOverride?: string,
) {}
async selectPrompts(
tags: string[],
promptIndex: PromptIndexForLlm[],
): Promise<LlmSelectionResult> {
const systemPrompt = `You are a context selection assistant. Given a developer's task keywords and a list of available project prompts, select which prompts are relevant to their work. Return a JSON object with "selectedNames" (array of prompt names) and "reasoning" (brief explanation). Priority 10 prompts must always be included.`;
const userPrompt = `Task keywords: ${tags.join(', ')}
Available prompts:
${promptIndex.map((p) => `- ${p.name} (priority: ${p.priority}): ${p.summary ?? 'No summary'}${p.chapters?.length ? `\n Chapters: ${p.chapters.join(', ')}` : ''}`).join('\n')}
Select the relevant prompts. Return JSON: { "selectedNames": [...], "reasoning": "..." }`;
const provider = this.providerRegistry.getProvider('heavy');
if (!provider) {
throw new Error('No heavy LLM provider available');
}
const completionOptions: import('../providers/types.js').CompletionOptions = {
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userPrompt },
],
temperature: 0,
maxTokens: 1024,
};
if (this.modelOverride) {
completionOptions.model = this.modelOverride;
}
const result = await provider.complete(completionOptions);
const response = result.content;
// Parse JSON from response (may be wrapped in markdown code blocks)
const jsonMatch = response.match(/\{[\s\S]*"selectedNames"[\s\S]*\}/);
if (!jsonMatch) {
throw new Error('LLM response did not contain valid selection JSON');
}
const parsed = JSON.parse(jsonMatch[0]) as { selectedNames?: string[]; reasoning?: string };
const selectedNames = parsed.selectedNames ?? [];
const reasoning = parsed.reasoning ?? '';
// Always include priority 10 prompts
for (const p of promptIndex) {
if (p.priority === 10 && !selectedNames.includes(p.name)) {
selectedNames.push(p.name);
}
}
return { selectedNames, reasoning };
}
}
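The response-parsing step above tolerates replies wrapped in markdown code fences by regex-matching the `{ ... "selectedNames" ... }` span before `JSON.parse`. A standalone sketch of just that extraction, using the same regex; the helper name `parseSelection` is hypothetical:

```typescript
// Extract the selection JSON from a possibly fence-wrapped LLM reply,
// mirroring the regex used in LlmPromptSelector. The pattern is greedy, so it
// assumes a single JSON object in the reply.
function parseSelection(response: string): { selectedNames: string[]; reasoning: string } {
  const jsonMatch = response.match(/\{[\s\S]*"selectedNames"[\s\S]*\}/);
  if (!jsonMatch) {
    throw new Error('LLM response did not contain valid selection JSON');
  }
  const parsed = JSON.parse(jsonMatch[0]) as { selectedNames?: string[]; reasoning?: string };
  return { selectedNames: parsed.selectedNames ?? [], reasoning: parsed.reasoning ?? '' };
}
```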

View File

@@ -0,0 +1,76 @@
/**
* Per-session gating state machine.
*
* Tracks whether a session has gone through the prompt selection flow.
* When gated, only begin_session is accessible. After ungating, all tools work.
*/
import type { PromptIndexEntry, TagMatchResult } from './tag-matcher.js';
export interface SessionState {
gated: boolean;
tags: string[];
retrievedPrompts: Set<string>;
briefing: string | null;
}
export class SessionGate {
private sessions = new Map<string, SessionState>();
/** Create a new session. Starts gated if the project is gated. */
createSession(sessionId: string, projectGated: boolean): void {
this.sessions.set(sessionId, {
gated: projectGated,
tags: [],
retrievedPrompts: new Set(),
briefing: null,
});
}
/** Get session state. Returns null if session doesn't exist. */
getSession(sessionId: string): SessionState | null {
return this.sessions.get(sessionId) ?? null;
}
/** Check if a session is currently gated. Unknown sessions are treated as ungated. */
isGated(sessionId: string): boolean {
return this.sessions.get(sessionId)?.gated ?? false;
}
/** Ungate a session after prompt selection is complete. */
ungate(sessionId: string, tags: string[], matchResult: TagMatchResult): void {
const session = this.sessions.get(sessionId);
if (!session) return;
session.gated = false;
session.tags = [...session.tags, ...tags];
// Track which prompts have been sent
for (const p of matchResult.fullContent) {
session.retrievedPrompts.add(p.name);
}
}
/** Record additional prompts retrieved via read_prompts. */
addRetrievedPrompts(sessionId: string, tags: string[], promptNames: string[]): void {
const session = this.sessions.get(sessionId);
if (!session) return;
session.tags = [...session.tags, ...tags];
for (const name of promptNames) {
session.retrievedPrompts.add(name);
}
}
/** Filter out prompts already sent to avoid duplicates. */
filterAlreadySent(sessionId: string, prompts: PromptIndexEntry[]): PromptIndexEntry[] {
const session = this.sessions.get(sessionId);
if (!session) return prompts;
return prompts.filter((p) => !session.retrievedPrompts.has(p.name));
}
/** Remove a session (cleanup on disconnect). */
removeSession(sessionId: string): void {
this.sessions.delete(sessionId);
}
}
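The lifecycle above can be sketched as a minimal standalone state map (illustrative only — the real implementation is the SessionGate class above, and the session/prompt IDs here are hypothetical):

```typescript
// Minimal sketch of the gated → begin_session → ungated flow.
type GateState = { gated: boolean; retrievedPrompts: Set<string> };
const sessions = new Map<string, GateState>();

function createSession(id: string, projectGated: boolean): void {
  sessions.set(id, { gated: projectGated, retrievedPrompts: new Set() });
}

function ungate(id: string, promptNames: string[]): void {
  const s = sessions.get(id);
  if (!s) return;
  s.gated = false; // all tools become accessible from here on
  for (const name of promptNames) s.retrievedPrompts.add(name);
}

createSession('sess-1', true); // project is gated → session starts gated
ungate('sess-1', ['project-overview']); // begin_session completed
console.log(sessions.get('sess-1')?.gated); // false
```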

View File

@@ -0,0 +1,109 @@
/**
* Deterministic keyword-based tag matching for prompt selection.
*
* Used as the no-LLM fallback (and for read_prompts in hybrid mode).
* Scores prompts by tag overlap * priority, then fills a byte budget.
*/
export interface PromptIndexEntry {
name: string;
priority: number;
summary: string | null;
chapters: string[] | null;
content: string;
}
export interface TagMatchResult {
/** Prompts with full content included (within byte budget) */
fullContent: PromptIndexEntry[];
/** Matched prompts beyond byte budget — name + summary only */
indexOnly: PromptIndexEntry[];
/** Non-matched prompts — listed for awareness */
remaining: PromptIndexEntry[];
}
const DEFAULT_BYTE_BUDGET = 8192;
export class TagMatcher {
constructor(private readonly byteBudget: number = DEFAULT_BYTE_BUDGET) {}
match(tags: string[], prompts: PromptIndexEntry[]): TagMatchResult {
const lowerTags = tags.map((t) => t.toLowerCase());
// Score each prompt once; matched = score > 0
const scored = prompts.map((p) => {
const score = this.computeScore(lowerTags, p);
return { prompt: p, score, matched: score > 0 };
});
// Partition: matched (score > 0) vs non-matched
const matched = scored.filter((s) => s.matched).sort((a, b) => b.score - a.score);
const nonMatched = scored.filter((s) => !s.matched).map((s) => s.prompt);
// Fill byte budget from matched prompts
let budgetRemaining = this.byteBudget;
const fullContent: PromptIndexEntry[] = [];
const indexOnly: PromptIndexEntry[] = [];
for (const { prompt } of matched) {
const contentBytes = Buffer.byteLength(prompt.content, 'utf-8');
if (budgetRemaining >= contentBytes) {
fullContent.push(prompt);
budgetRemaining -= contentBytes;
} else {
indexOnly.push(prompt);
}
}
return { fullContent, indexOnly, remaining: nonMatched };
}
private computeScore(lowerTags: string[], prompt: PromptIndexEntry): number {
// Priority 10 always included
if (prompt.priority === 10) return Infinity;
if (lowerTags.length === 0) return 0;
const searchText = [
prompt.name,
prompt.summary ?? '',
...(prompt.chapters ?? []),
].join(' ').toLowerCase();
let matchCount = 0;
for (const tag of lowerTags) {
if (searchText.includes(tag)) matchCount++;
}
return matchCount * prompt.priority;
}
}
/**
* Extract keywords from a tool call for the intercept fallback path.
* Pulls words from the tool name and string argument values.
*/
export function extractKeywordsFromToolCall(
toolName: string,
args: Record<string, unknown>,
): string[] {
const keywords = new Set<string>();
// Tool name parts (split on / and -)
for (const part of toolName.split(/[/-]/)) {
if (part.length > 2) keywords.add(part.toLowerCase());
}
// String argument values — extract words
for (const value of Object.values(args)) {
if (typeof value === 'string' && value.length < 200) {
for (const word of value.split(/\s+/)) {
const clean = word.replace(/[^a-zA-Z0-9-]/g, '').toLowerCase();
if (clean.length > 2) keywords.add(clean);
}
}
}
return [...keywords].slice(0, 10); // Cap at 10 keywords
}

View File

@@ -1,3 +1,7 @@
import { existsSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';
/** Configuration for the mcplocal HTTP server. */
export interface HttpConfig {
/** Port for the HTTP server (default: 3200) */
@@ -15,9 +19,137 @@ export interface HttpConfig {
const DEFAULT_HTTP_PORT = 3200;
const DEFAULT_HTTP_HOST = '127.0.0.1';
const DEFAULT_MCPD_URL = 'http://localhost:3100';
const DEFAULT_MCPD_TOKEN = '';
const DEFAULT_LOG_LEVEL = 'info';
/**
* Read the user's mcpctl credentials from ~/.mcpctl/credentials.
* Returns the token if found, empty string otherwise.
*/
function loadUserToken(): string {
try {
const credPath = join(homedir(), '.mcpctl', 'credentials');
if (!existsSync(credPath)) return '';
const raw = readFileSync(credPath, 'utf-8');
const parsed = JSON.parse(raw) as { token?: string };
return parsed.token ?? '';
} catch {
return '';
}
}
export interface LlmFileConfig {
provider: string;
model?: string;
url?: string;
binaryPath?: string;
}
/** Multi-provider entry from config file. */
export interface LlmProviderFileEntry {
name: string;
type: string;
model?: string;
url?: string;
binaryPath?: string;
tier?: 'fast' | 'heavy';
}
export interface ProjectLlmOverride {
model?: string;
provider?: string;
}
interface LlmMultiFileConfig {
providers: LlmProviderFileEntry[];
}
interface McpctlConfig {
llm?: LlmFileConfig | LlmMultiFileConfig;
projects?: Record<string, { llm?: ProjectLlmOverride }>;
}
/** Cached config for the process lifetime (reloaded on SIGHUP if needed). */
let cachedConfig: McpctlConfig | null = null;
function loadFullConfig(): McpctlConfig {
if (cachedConfig) return cachedConfig;
try {
const configPath = join(homedir(), '.mcpctl', 'config.json');
if (!existsSync(configPath)) return {};
const raw = readFileSync(configPath, 'utf-8');
cachedConfig = JSON.parse(raw) as McpctlConfig;
return cachedConfig;
} catch {
return {};
}
}
/** Type guard: is config the multi-provider format? */
function isMultiConfig(llm: LlmFileConfig | LlmMultiFileConfig): llm is LlmMultiFileConfig {
return 'providers' in llm && Array.isArray((llm as LlmMultiFileConfig).providers);
}
/**
* Load LLM configuration from ~/.mcpctl/config.json.
* Returns undefined if no LLM section is configured.
* @deprecated Use loadLlmProviders() for multi-provider support.
*/
export function loadLlmConfig(): LlmFileConfig | undefined {
const config = loadFullConfig();
if (!config.llm) return undefined;
if (isMultiConfig(config.llm)) {
// Multi-provider format — return first provider as legacy compat
const first = config.llm.providers[0];
if (!first) return undefined;
const legacy: LlmFileConfig = { provider: first.type };
if (first.model) legacy.model = first.model;
if (first.url) legacy.url = first.url;
if (first.binaryPath) legacy.binaryPath = first.binaryPath;
return legacy;
}
if (!config.llm.provider || config.llm.provider === 'none') return undefined;
return config.llm;
}
/**
* Load LLM providers from ~/.mcpctl/config.json.
* Normalizes both legacy single-provider and multi-provider formats.
* Returns empty array if no LLM is configured.
*/
export function loadLlmProviders(): LlmProviderFileEntry[] {
const config = loadFullConfig();
if (!config.llm) return [];
if (isMultiConfig(config.llm)) {
return config.llm.providers.filter((p) => p.type !== 'none');
}
// Legacy single-provider format → normalize to one entry
if (!config.llm.provider || config.llm.provider === 'none') return [];
const entry: LlmProviderFileEntry = {
name: config.llm.provider,
type: config.llm.provider,
};
if (config.llm.model) entry.model = config.llm.model;
if (config.llm.url) entry.url = config.llm.url;
if (config.llm.binaryPath) entry.binaryPath = config.llm.binaryPath;
return [entry];
}
/**
* Load per-project LLM override from ~/.mcpctl/config.json.
* Returns the project-specific model/provider override, or undefined.
*/
export function loadProjectLlmOverride(projectName: string): ProjectLlmOverride | undefined {
const config = loadFullConfig();
return config.projects?.[projectName]?.llm;
}
/** Reset cached config (for testing). */
export function resetConfigCache(): void {
cachedConfig = null;
}
export function loadHttpConfig(env: Record<string, string | undefined> = process.env): HttpConfig {
const portStr = env['MCPLOCAL_HTTP_PORT'];
const port = portStr !== undefined ? parseInt(portStr, 10) : DEFAULT_HTTP_PORT;
@@ -26,7 +158,7 @@ export function loadHttpConfig(env: Record<string, string | undefined> = process
httpPort: Number.isFinite(port) ? port : DEFAULT_HTTP_PORT,
httpHost: env['MCPLOCAL_HTTP_HOST'] ?? DEFAULT_HTTP_HOST,
mcpdUrl: env['MCPLOCAL_MCPD_URL'] ?? DEFAULT_MCPD_URL,
mcpdToken: env['MCPLOCAL_MCPD_TOKEN'] ?? DEFAULT_MCPD_TOKEN,
mcpdToken: env['MCPLOCAL_MCPD_TOKEN'] ?? loadUserToken(),
logLevel: (env['MCPLOCAL_LOG_LEVEL'] as HttpConfig['logLevel'] | undefined) ?? DEFAULT_LOG_LEVEL,
};
}
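The legacy → multi-provider normalization performed by loadLlmProviders() can be exercised standalone (a sketch; the provider and model names are hypothetical):

```typescript
interface ProviderEntry { name: string; type: string; model?: string; url?: string }
type LlmSection = { provider: string; model?: string; url?: string } | { providers: ProviderEntry[] };

function normalizeLlm(llm: LlmSection): ProviderEntry[] {
  // Multi-provider format: keep everything except disabled entries.
  if ('providers' in llm) return llm.providers.filter((p) => p.type !== 'none');
  // Legacy single-provider format → normalize to one entry.
  if (!llm.provider || llm.provider === 'none') return [];
  const entry: ProviderEntry = { name: llm.provider, type: llm.provider };
  if (llm.model) entry.model = llm.model;
  if (llm.url) entry.url = llm.url;
  return [entry];
}

console.log(normalizeLlm({ provider: 'ollama', model: 'llama3' }));
// → [{ name: 'ollama', type: 'ollama', model: 'llama3' }]
console.log(normalizeLlm({ provider: 'none' })); // → []
```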

View File

@@ -23,11 +23,21 @@ export class ConnectionError extends Error {
export class McpdClient {
private readonly baseUrl: string;
private readonly token: string;
private readonly extraHeaders: Record<string, string>;
constructor(baseUrl: string, token: string) {
constructor(baseUrl: string, token: string, extraHeaders?: Record<string, string>) {
// Strip trailing slash for consistent URL joining
this.baseUrl = baseUrl.replace(/\/+$/, '');
this.token = token;
this.extraHeaders = extraHeaders ?? {};
}
/**
* Create a new client with additional default headers.
* Inherits base URL and token from the current client.
*/
withHeaders(headers: Record<string, string>): McpdClient {
return new McpdClient(this.baseUrl, this.token, { ...this.extraHeaders, ...headers });
}
async get<T>(path: string): Promise<T> {
@@ -62,6 +72,7 @@ export class McpdClient {
): Promise<{ status: number; body: unknown }> {
const url = `${this.baseUrl}${path}${query ? `?${query}` : ''}`;
const headers: Record<string, string> = {
...this.extraHeaders,
'Authorization': `Bearer ${authOverride ?? this.token}`,
'Accept': 'application/json',
};
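Spread order makes the precedence explicit: per-client extra headers are applied first, so the fixed Authorization/Accept pair always wins on a key collision (illustrative values):

```typescript
// Header layering as used by the request path above.
const extraHeaders = { 'X-Service-Account': 'project:demo', Accept: 'text/plain' };
const headers: Record<string, string> = {
  ...extraHeaders, // client-level defaults first
  Authorization: 'Bearer token123',
  Accept: 'application/json', // overrides the extra-header value on collision
};
console.log(headers['X-Service-Account']); // 'project:demo' — extra header survives
console.log(headers.Accept); // 'application/json' — fixed header wins
```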

View File

@@ -12,8 +12,11 @@ import type { FastifyInstance } from 'fastify';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import type { JSONRPCMessage } from '@modelcontextprotocol/sdk/types.js';
import { McpRouter } from '../router.js';
import { refreshProjectUpstreams } from '../discovery.js';
import { ResponsePaginator } from '../llm/pagination.js';
import { refreshProjectUpstreams, fetchProjectLlmConfig } from '../discovery.js';
import { loadProjectLlmOverride } from './config.js';
import type { McpdClient } from './mcpd-client.js';
import type { ProviderRegistry } from '../providers/registry.js';
import type { JsonRpcRequest } from '../types.js';
interface ProjectCacheEntry {
@@ -28,7 +31,7 @@ interface SessionEntry {
const CACHE_TTL_MS = 60_000; // 60 seconds
export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: McpdClient): void {
export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: McpdClient, providerRegistry?: ProviderRegistry | null): void {
const projectCache = new Map<string, ProjectCacheEntry>();
const sessions = new Map<string, SessionEntry>();
@@ -44,6 +47,55 @@ export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: Mcp
const router = existing?.router ?? new McpRouter();
await refreshProjectUpstreams(router, mcpdClient, projectName, authToken);
// Resolve project LLM model: local override → mcpd recommendation → global default
const localOverride = loadProjectLlmOverride(projectName);
const mcpdConfig = await fetchProjectLlmConfig(mcpdClient, projectName);
const resolvedModel = localOverride?.model ?? mcpdConfig.llmModel ?? undefined;
// If project llmProvider is "none", disable LLM for this project
const llmDisabled = mcpdConfig.llmProvider === 'none' || localOverride?.provider === 'none';
const effectiveRegistry = llmDisabled ? null : (providerRegistry ?? null);
// Wire pagination support with LLM provider and project model override
router.setPaginator(new ResponsePaginator(effectiveRegistry, {}, resolvedModel));
// Configure prompt resources with SA-scoped client for RBAC
const saClient = mcpdClient.withHeaders({ 'X-Service-Account': `project:${projectName}` });
router.setPromptConfig(saClient, projectName);
// Configure gating if project has it enabled (default: true)
const isGated = mcpdConfig.gated !== false;
const gateConfig: import('../router.js').GateConfig = {
gated: isGated,
providerRegistry: effectiveRegistry,
};
if (resolvedModel) {
gateConfig.modelOverride = resolvedModel;
}
router.setGateConfig(gateConfig);
// Fetch project instructions and set on router
try {
const instructions = await mcpdClient.get<{ prompt: string; servers: Array<{ name: string; description: string }> }>(
`/api/v1/projects/${encodeURIComponent(projectName)}/instructions`,
);
const parts: string[] = [];
if (instructions.prompt) {
parts.push(instructions.prompt);
}
if (instructions.servers.length > 0) {
parts.push('Available MCP servers:');
for (const s of instructions.servers) {
parts.push(`- ${s.name}${s.description ? `: ${s.description}` : ''}`);
}
}
if (parts.length > 0) {
router.setInstructions(parts.join('\n'));
}
} catch {
// Instructions are optional — don't fail if endpoint is unavailable
}
projectCache.set(projectName, { router, lastRefresh: now });
return router;
}
@@ -84,7 +136,8 @@ export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: Mcp
transport.onmessage = async (message: JSONRPCMessage) => {
if ('method' in message && 'id' in message) {
const response = await router.route(message as unknown as JsonRpcRequest);
const ctx = transport.sessionId ? { sessionId: transport.sessionId } : undefined;
const response = await router.route(message as unknown as JsonRpcRequest, ctx);
await transport.send(response as unknown as JSONRPCMessage);
}
};
@@ -93,6 +146,7 @@ export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: Mcp
const id = transport.sessionId;
if (id) {
sessions.delete(id);
router.cleanupSession(id);
}
};

View File

@@ -10,11 +10,13 @@ import { registerProjectMcpEndpoint } from './project-mcp-endpoint.js';
import type { McpRouter } from '../router.js';
import type { HealthMonitor } from '../health.js';
import type { TieredHealthMonitor } from '../health/tiered.js';
import type { ProviderRegistry } from '../providers/registry.js';
export interface HttpServerDeps {
router: McpRouter;
healthMonitor?: HealthMonitor | undefined;
tieredHealthMonitor?: TieredHealthMonitor | undefined;
providerRegistry?: ProviderRegistry | null | undefined;
}
export async function createHttpServer(
@@ -79,6 +81,102 @@ export async function createHttpServer(
reply.code(200).send({ status: 'ok' });
});
// LLM health check — cached to avoid burning tokens on every call.
// Does a real inference call at most once per 10 minutes.
let llmHealthCache: { result: Record<string, unknown>; expiresAt: number } | null = null;
const LLM_HEALTH_CACHE_MS = 10 * 60 * 1000; // 10 minutes
app.get('/llm/health', async (_request, reply) => {
const provider = deps.providerRegistry?.getProvider('fast') ?? null;
if (!provider) {
reply.code(200).send({ status: 'not configured' });
return;
}
// Return cached result if fresh
if (llmHealthCache && Date.now() < llmHealthCache.expiresAt) {
reply.code(200).send(llmHealthCache.result);
return;
}
try {
const result = await provider.complete({
messages: [{ role: 'user', content: 'Respond with exactly: ok' }],
maxTokens: 10,
});
const ok = result.content.trim().toLowerCase().includes('ok');
const response = {
status: ok ? 'ok' : 'unexpected response',
provider: provider.name,
response: result.content.trim().slice(0, 100),
};
llmHealthCache = { result: response, expiresAt: Date.now() + LLM_HEALTH_CACHE_MS };
reply.code(200).send(response);
} catch (err) {
const msg = (err as Error).message ?? String(err);
const response = {
status: 'error',
provider: provider.name,
error: msg.slice(0, 200),
};
// Cache errors for 1 minute only (retry sooner)
llmHealthCache = { result: response, expiresAt: Date.now() + 60_000 };
reply.code(200).send(response);
}
});
// LLM models — list available models from the active provider
app.get('/llm/models', async (_request, reply) => {
const provider = deps.providerRegistry?.getProvider('fast') ?? null;
if (!provider) {
reply.code(200).send({ models: [], provider: null });
return;
}
try {
const models = await provider.listModels();
reply.code(200).send({ models, provider: provider.name });
} catch {
reply.code(200).send({ models: [], provider: provider.name });
}
});
// LLM providers — list all registered providers with tier assignments and health
app.get('/llm/providers', async (_request, reply) => {
const registry = deps.providerRegistry;
if (!registry) {
reply.code(200).send({ providers: [], tiers: { fast: [], heavy: [] }, health: {} });
return;
}
// Run isAvailable() on all providers in parallel (lightweight, no tokens burned)
const names = registry.list();
const healthChecks = await Promise.all(
names.map(async (name) => {
const provider = registry.get(name);
if (!provider) return { name, available: false };
try {
const available = await provider.isAvailable();
return { name, available };
} catch {
return { name, available: false };
}
}),
);
const health: Record<string, boolean> = {};
for (const check of healthChecks) {
health[check.name] = check.available;
}
reply.code(200).send({
providers: names,
tiers: {
fast: registry.getTierProviders('fast'),
heavy: registry.getTierProviders('heavy'),
},
health,
});
});
// Proxy management routes to mcpd
const mcpdClient = new McpdClient(config.mcpdUrl, config.mcpdToken);
registerProxyRoutes(app, mcpdClient);
@@ -87,7 +185,7 @@ export async function createHttpServer(
registerMcpEndpoint(app, deps.router);
// Project-scoped MCP endpoint at /projects/:projectName/mcp
registerProjectMcpEndpoint(app, mcpdClient);
registerProjectMcpEndpoint(app, mcpdClient, deps.providerRegistry);
return app;
}
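The /llm/health caching strategy above (long TTL for successes, short TTL for errors so recovery is noticed sooner) generalizes; a synchronous sketch of the same pattern, with hypothetical names:

```typescript
let cache: { value: string; expiresAt: number } | null = null;
const OK_TTL_MS = 10 * 60 * 1000; // serve successes for 10 minutes
const ERR_TTL_MS = 60_000; // retry errors after 1 minute

function cachedProbe(probe: () => string): string {
  if (cache && Date.now() < cache.expiresAt) return cache.value;
  try {
    const value = probe();
    cache = { value, expiresAt: Date.now() + OK_TTL_MS };
    return value;
  } catch {
    cache = { value: 'error', expiresAt: Date.now() + ERR_TTL_MS };
    return 'error';
  }
}

let calls = 0;
cachedProbe(() => { calls++; return 'ok'; });
cachedProbe(() => { calls++; return 'ok'; });
console.log(calls); // 1 — the second call was served from cache
```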

View File

@@ -0,0 +1,171 @@
import type { SecretStore } from '@mcpctl/shared';
import type { LlmFileConfig, LlmProviderFileEntry } from './http/config.js';
import { ProviderRegistry } from './providers/registry.js';
import { GeminiAcpProvider } from './providers/gemini-acp.js';
import { OllamaProvider } from './providers/ollama.js';
import { AnthropicProvider } from './providers/anthropic.js';
import { OpenAiProvider } from './providers/openai.js';
import { DeepSeekProvider } from './providers/deepseek.js';
import type { LlmProvider } from './providers/types.js';
import type { GeminiAcpConfig } from './providers/gemini-acp.js';
import type { OllamaConfig } from './providers/ollama.js';
import type { AnthropicConfig } from './providers/anthropic.js';
import type { OpenAiConfig } from './providers/openai.js';
import type { DeepSeekConfig } from './providers/deepseek.js';
/**
* Thin wrapper that delegates all LlmProvider methods but overrides `name`.
* Used when the user's chosen name (e.g. "vllm-local") differs from the
* underlying provider's name (e.g. "openai").
*/
class NamedProvider implements LlmProvider {
readonly name: string;
private inner: LlmProvider;
constructor(name: string, inner: LlmProvider) {
this.name = name;
this.inner = inner;
}
complete(...args: Parameters<LlmProvider['complete']>) {
return this.inner.complete(...args);
}
listModels() {
return this.inner.listModels();
}
isAvailable() {
return this.inner.isAvailable();
}
dispose() {
this.inner.dispose?.();
}
}
/**
* Create a single LlmProvider from a provider entry config.
* Returns null if required config is missing (logs warning).
*/
async function createSingleProvider(
entry: LlmProviderFileEntry,
secretStore: SecretStore,
): Promise<LlmProvider | null> {
switch (entry.type) {
case 'gemini-cli': {
const cfg: GeminiAcpConfig = {};
if (entry.binaryPath) cfg.binaryPath = entry.binaryPath;
if (entry.model) cfg.defaultModel = entry.model;
const provider = new GeminiAcpProvider(cfg);
provider.warmup();
return provider;
}
case 'ollama': {
const cfg: OllamaConfig = {};
if (entry.url) cfg.baseUrl = entry.url;
if (entry.model) cfg.defaultModel = entry.model;
return new OllamaProvider(cfg);
}
case 'anthropic': {
const apiKey = await secretStore.get('anthropic-api-key');
if (!apiKey) {
process.stderr.write(`Warning: Anthropic API key not found for provider "${entry.name}". Run "mcpctl config setup" to configure.\n`);
return null;
}
const cfg: AnthropicConfig = { apiKey };
if (entry.model) cfg.defaultModel = entry.model;
return new AnthropicProvider(cfg);
}
case 'openai': {
const apiKey = await secretStore.get('openai-api-key');
if (!apiKey) {
process.stderr.write(`Warning: OpenAI API key not found for provider "${entry.name}". Run "mcpctl config setup" to configure.\n`);
return null;
}
const cfg: OpenAiConfig = { apiKey };
if (entry.url) cfg.baseUrl = entry.url;
if (entry.model) cfg.defaultModel = entry.model;
return new OpenAiProvider(cfg);
}
case 'deepseek': {
const apiKey = await secretStore.get('deepseek-api-key');
if (!apiKey) {
process.stderr.write(`Warning: DeepSeek API key not found for provider "${entry.name}". Run "mcpctl config setup" to configure.\n`);
return null;
}
const cfg: DeepSeekConfig = { apiKey };
if (entry.url) cfg.baseUrl = entry.url;
if (entry.model) cfg.defaultModel = entry.model;
return new DeepSeekProvider(cfg);
}
case 'vllm': {
if (!entry.url) {
process.stderr.write(`Warning: vLLM URL not configured for provider "${entry.name}". Run "mcpctl config setup" to configure.\n`);
return null;
}
return new OpenAiProvider({
apiKey: 'unused',
baseUrl: entry.url,
defaultModel: entry.model ?? 'default',
});
}
default:
return null;
}
}
/**
* Create a ProviderRegistry from multi-provider config entries + secret store.
* Registers each provider, wraps with NamedProvider if needed, assigns tiers.
*/
export async function createProvidersFromConfig(
entries: LlmProviderFileEntry[],
secretStore: SecretStore,
): Promise<ProviderRegistry> {
const registry = new ProviderRegistry();
for (const entry of entries) {
const rawProvider = await createSingleProvider(entry, secretStore);
if (!rawProvider) continue;
// Wrap with NamedProvider if user name differs from provider's built-in name
const provider = rawProvider.name !== entry.name
? new NamedProvider(entry.name, rawProvider)
: rawProvider;
registry.register(provider);
if (entry.tier) {
registry.assignTier(provider.name, entry.tier);
}
}
return registry;
}
/**
* Create a ProviderRegistry from legacy single-provider config + secret store.
* @deprecated Use createProvidersFromConfig() with loadLlmProviders() instead.
*/
export async function createProviderFromConfig(
config: LlmFileConfig | undefined,
secretStore: SecretStore,
): Promise<ProviderRegistry> {
if (!config?.provider || config.provider === 'none') {
return new ProviderRegistry();
}
const entry: LlmProviderFileEntry = {
name: config.provider,
type: config.provider,
};
if (config.model) entry.model = config.model;
if (config.url) entry.url = config.url;
if (config.binaryPath) entry.binaryPath = config.binaryPath;
return createProvidersFromConfig([entry], secretStore);
}
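A multi-provider config section that createProvidersFromConfig() would consume (via loadLlmProviders()) might look like this — a sketch with hypothetical provider names, model, and URL; field names come from LlmProviderFileEntry:

```json
{
  "llm": {
    "providers": [
      { "name": "local-ollama", "type": "ollama", "model": "llama3", "tier": "fast" },
      { "name": "vllm-local", "type": "vllm", "url": "http://localhost:8000/v1", "tier": "heavy" }
    ]
  }
}
```

Here "vllm-local" differs from the underlying provider's built-in name, so it would be wrapped in NamedProvider before registration and tier assignment.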

View File

@@ -6,3 +6,5 @@ export { FilterCache, DEFAULT_FILTER_CACHE_CONFIG } from './filter-cache.js';
export type { FilterCacheConfig } from './filter-cache.js';
export { FilterMetrics } from './metrics.js';
export type { FilterMetricsSnapshot } from './metrics.js';
export { ResponsePaginator, DEFAULT_PAGINATION_CONFIG, PAGINATION_INDEX_SYSTEM_PROMPT } from './pagination.js';
export type { PaginationConfig, PaginationIndex, PageSummary, PaginatedToolResponse } from './pagination.js';

View File

@@ -0,0 +1,359 @@
import { randomUUID } from 'node:crypto';
import type { ProviderRegistry } from '../providers/registry.js';
import { estimateTokens } from './token-counter.js';
// --- Configuration ---
export interface PaginationConfig {
/** Character threshold above which responses get paginated (default 80_000) */
sizeThreshold: number;
/** Characters per page (default 40_000) */
pageSize: number;
/** Max cached results; oldest entries evicted first when full (default 64) */
maxCachedResults: number;
/** TTL for cached results in ms (default 300_000 = 5 min) */
ttlMs: number;
/** Max tokens for the LLM index generation call (default 2048) */
indexMaxTokens: number;
}
export const DEFAULT_PAGINATION_CONFIG: PaginationConfig = {
sizeThreshold: 80_000,
pageSize: 40_000,
maxCachedResults: 64,
ttlMs: 300_000,
indexMaxTokens: 2048,
};
// --- Cache Entry ---
interface PageInfo {
/** 0-based page index */
index: number;
/** Start character offset in the raw string */
startChar: number;
/** End character offset (exclusive) */
endChar: number;
/** Approximate token count */
estimatedTokens: number;
}
interface CachedResult {
resultId: string;
toolName: string;
raw: string;
pages: PageInfo[];
index: PaginationIndex;
createdAt: number;
}
// --- Index Types ---
export interface PageSummary {
page: number;
startChar: number;
endChar: number;
estimatedTokens: number;
summary: string;
}
export interface PaginationIndex {
resultId: string;
toolName: string;
totalSize: number;
totalTokens: number;
totalPages: number;
pageSummaries: PageSummary[];
indexType: 'smart' | 'simple';
}
// --- The MCP response format ---
export interface PaginatedToolResponse {
content: Array<{
type: 'text';
text: string;
}>;
}
// --- LLM Prompt ---
export const PAGINATION_INDEX_SYSTEM_PROMPT = `You are a document indexing assistant. Given a large tool response split into pages, generate a concise summary for each page describing what data it contains.
Rules:
- For each page, write 1-2 sentences describing the key content
- Be specific: mention entity names, IDs, counts, or key fields visible on that page
- If it's JSON, describe the structure and notable entries
- If it's text, describe the topics covered
- Output valid JSON only: an array of objects with "page" (1-based number) and "summary" (string)
- Example output: [{"page": 1, "summary": "Configuration nodes and global settings (inject, debug, function nodes 1-15)"}, {"page": 2, "summary": "HTTP request nodes and API integrations (nodes 16-40)"}]`;
/**
* Handles transparent pagination of large MCP tool responses.
*
* When a tool response exceeds the size threshold, it is cached and an
* index is returned instead. The LLM can then request specific pages
* via _page/_resultId parameters on subsequent tool calls.
*
* If an LLM provider is available, the index includes AI-generated
* per-page summaries. Otherwise, simple byte-range descriptions are used.
*/
export class ResponsePaginator {
private cache = new Map<string, CachedResult>();
private readonly config: PaginationConfig;
constructor(
private providers: ProviderRegistry | null,
config: Partial<PaginationConfig> = {},
private modelOverride?: string,
) {
this.config = { ...DEFAULT_PAGINATION_CONFIG, ...config };
}
/**
* Check if a raw response string should be paginated.
*/
shouldPaginate(raw: string): boolean {
return raw.length >= this.config.sizeThreshold;
}
/**
* Paginate a large response: cache it and return the index.
* Returns null if the response is below threshold.
*/
async paginate(toolName: string, raw: string): Promise<PaginatedToolResponse | null> {
if (!this.shouldPaginate(raw)) return null;
const resultId = randomUUID();
const pages = this.splitPages(raw);
let index: PaginationIndex;
try {
index = await this.generateSmartIndex(resultId, toolName, raw, pages);
} catch (err) {
console.error(`[pagination] Smart index failed for ${toolName}, falling back to simple:`, err instanceof Error ? err.message : String(err));
index = this.generateSimpleIndex(resultId, toolName, raw, pages);
}
// Store in cache
this.evictExpired();
this.evictLRU();
this.cache.set(resultId, {
resultId,
toolName,
raw,
pages,
index,
createdAt: Date.now(),
});
return this.formatIndexResponse(index);
}
/**
* Serve a specific page from cache.
* Returns null if the resultId is not found (cache miss / expired).
*/
getPage(resultId: string, page: number | 'all'): PaginatedToolResponse | null {
this.evictExpired();
const entry = this.cache.get(resultId);
if (!entry) return null;
if (page === 'all') {
return {
content: [{ type: 'text', text: entry.raw }],
};
}
// Pages are 1-based in the API
const pageInfo = entry.pages[page - 1];
if (!pageInfo) {
return {
content: [{
type: 'text',
text: `Error: page ${String(page)} is out of range. This result has ${String(entry.pages.length)} pages (1-${String(entry.pages.length)}).`,
}],
};
}
const pageContent = entry.raw.slice(pageInfo.startChar, pageInfo.endChar);
return {
content: [{
type: 'text',
text: `[Page ${String(page)}/${String(entry.pages.length)} of result ${resultId}]\n\n${pageContent}`,
}],
};
}
/**
* Check if a tool call has pagination parameters (_page / _resultId).
* Returns the parsed pagination request, or null if not a pagination request.
*/
static extractPaginationParams(
args: Record<string, unknown>,
): { resultId: string; page: number | 'all' } | null {
const resultId = args['_resultId'];
const pageParam = args['_page'];
if (typeof resultId !== 'string' || pageParam === undefined) return null;
if (pageParam === 'all') return { resultId, page: 'all' };
const page = Number(pageParam);
if (!Number.isInteger(page) || page < 1) return null;
return { resultId, page };
}
// --- Private methods ---
private splitPages(raw: string): PageInfo[] {
const pages: PageInfo[] = [];
let offset = 0;
let pageIndex = 0;
while (offset < raw.length) {
const end = Math.min(offset + this.config.pageSize, raw.length);
// Try to break at a newline boundary if we're not at the end
let breakAt = end;
if (end < raw.length) {
const lastNewline = raw.lastIndexOf('\n', end);
if (lastNewline > offset) {
breakAt = lastNewline + 1;
}
}
pages.push({
index: pageIndex,
startChar: offset,
endChar: breakAt,
estimatedTokens: estimateTokens(raw.slice(offset, breakAt)),
});
offset = breakAt;
pageIndex++;
}
return pages;
}
private async generateSmartIndex(
resultId: string,
toolName: string,
raw: string,
pages: PageInfo[],
): Promise<PaginationIndex> {
const provider = this.providers?.getProvider('fast');
if (!provider) {
return this.generateSimpleIndex(resultId, toolName, raw, pages);
}
// Build a prompt with page previews (first ~500 chars of each page)
const previews = pages.map((p, i) => {
const preview = raw.slice(p.startChar, Math.min(p.startChar + 500, p.endChar));
const truncated = p.endChar - p.startChar > 500 ? '\n[...]' : '';
return `--- Page ${String(i + 1)} (chars ${String(p.startChar)}-${String(p.endChar)}, ~${String(p.estimatedTokens)} tokens) ---\n${preview}${truncated}`;
}).join('\n\n');
const result = await provider.complete({
messages: [
{ role: 'system', content: PAGINATION_INDEX_SYSTEM_PROMPT },
{ role: 'user', content: `Tool: ${toolName}\nTotal size: ${String(raw.length)} chars, ${String(pages.length)} pages\n\n${previews}` },
],
maxTokens: this.config.indexMaxTokens,
temperature: 0,
...(this.modelOverride ? { model: this.modelOverride } : {}),
});
// LLMs often wrap JSON in ```json ... ``` fences — strip them
const cleaned = result.content.replace(/^```(?:json)?\s*\n?/i, '').replace(/\n?```\s*$/i, '').trim();
const summaries = JSON.parse(cleaned) as Array<{ page: number; summary: string }>;
return {
resultId,
toolName,
totalSize: raw.length,
totalTokens: estimateTokens(raw),
totalPages: pages.length,
indexType: 'smart',
pageSummaries: pages.map((p, i) => ({
page: i + 1,
startChar: p.startChar,
endChar: p.endChar,
estimatedTokens: p.estimatedTokens,
summary: summaries.find((s) => s.page === i + 1)?.summary ?? `Page ${String(i + 1)}`,
})),
};
}
private generateSimpleIndex(
resultId: string,
toolName: string,
raw: string,
pages: PageInfo[],
): PaginationIndex {
return {
resultId,
toolName,
totalSize: raw.length,
totalTokens: estimateTokens(raw),
totalPages: pages.length,
indexType: 'simple',
pageSummaries: pages.map((p, i) => ({
page: i + 1,
startChar: p.startChar,
endChar: p.endChar,
estimatedTokens: p.estimatedTokens,
summary: `Page ${String(i + 1)}: characters ${String(p.startChar)}-${String(p.endChar)} (~${String(p.estimatedTokens)} tokens)`,
})),
};
}
private formatIndexResponse(index: PaginationIndex): PaginatedToolResponse {
const lines = [
`This response is too large to return directly (${String(index.totalSize)} chars, ~${String(index.totalTokens)} tokens).`,
`It has been split into ${String(index.totalPages)} pages.`,
'',
'To retrieve a specific page, call this same tool again with additional arguments:',
` "_resultId": "${index.resultId}"`,
` "_page": <page_number> (1-${String(index.totalPages)})`,
' or "_page": "all" (returns the full response)',
'',
`--- Page Index${index.indexType === 'smart' ? ' (AI-generated summaries)' : ''} ---`,
];
for (const page of index.pageSummaries) {
lines.push(` Page ${String(page.page)}: ${page.summary}`);
}
return {
content: [{ type: 'text', text: lines.join('\n') }],
};
}
private evictExpired(): void {
const now = Date.now();
for (const [id, entry] of this.cache) {
if (now - entry.createdAt > this.config.ttlMs) {
this.cache.delete(id);
}
}
}
private evictLRU(): void {
while (this.cache.size >= this.config.maxCachedResults) {
const oldest = this.cache.keys().next();
if (oldest.done) break;
this.cache.delete(oldest.value);
}
}
/** Exposed for testing. */
get cacheSize(): number {
return this.cache.size;
}
/** Clear all cached results. */
clearCache(): void {
this.cache.clear();
}
}
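The newline-aware chunking loop above drives the whole pagination flow. It can be exercised standalone; a minimal extraction (the `splitIntoPages` name and the explicit `pageSize` parameter are illustrative, not part of the paginator's API):

```typescript
interface Page {
  startChar: number;
  endChar: number; // exclusive
}

// Split `raw` into pages of at most `pageSize` characters, preferring to
// break just after a newline so no page ends mid-line.
function splitIntoPages(raw: string, pageSize: number): Page[] {
  const pages: Page[] = [];
  let offset = 0;
  while (offset < raw.length) {
    const end = Math.min(offset + pageSize, raw.length);
    let breakAt = end;
    if (end < raw.length) {
      const lastNewline = raw.lastIndexOf('\n', end);
      if (lastNewline > offset) breakAt = lastNewline + 1;
    }
    pages.push({ startChar: offset, endChar: breakAt });
    offset = breakAt;
  }
  return pages;
}
```

Because `breakAt` never falls at or before `offset`, the pages always cover the input contiguously with no gaps or overlaps.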

View File

@@ -106,7 +106,7 @@ export class LlmProcessor {
return { optimized: false, params };
}
-const provider = this.providers.getActive();
+const provider = this.providers.getProvider('fast');
if (!provider) {
return { optimized: false, params };
}
@@ -142,7 +142,7 @@ export class LlmProcessor {
return { filtered: false, result: response.result, originalSize: raw.length, filteredSize: raw.length };
}
-const provider = this.providers.getActive();
+const provider = this.providers.getProvider('fast');
if (!provider) {
const raw = JSON.stringify(response.result);
return { filtered: false, result: response.result, originalSize: raw.length, filteredSize: raw.length };

View File

@@ -7,8 +7,11 @@ import { StdioProxyServer } from './server.js';
import { StdioUpstream } from './upstream/stdio.js';
import { HttpUpstream } from './upstream/http.js';
import { createHttpServer } from './http/server.js';
-import { loadHttpConfig } from './http/config.js';
+import { loadHttpConfig, loadLlmProviders } from './http/config.js';
import type { HttpConfig } from './http/config.js';
import { createProvidersFromConfig } from './llm-config.js';
import { createSecretStore } from '@mcpctl/shared';
import type { ProviderRegistry } from './providers/registry.js';
interface ParsedArgs {
configPath: string | undefined;
@@ -55,12 +58,28 @@ export interface MainResult {
server: StdioProxyServer;
httpServer: FastifyInstance | undefined;
httpConfig: HttpConfig;
providerRegistry: ProviderRegistry;
}
export async function main(argv: string[] = process.argv): Promise<MainResult> {
const args = parseArgs(argv);
const httpConfig = loadHttpConfig();
// Load LLM providers from user config + secret store
const llmEntries = loadLlmProviders();
const secretStore = await createSecretStore();
const providerRegistry = await createProvidersFromConfig(llmEntries, secretStore);
if (providerRegistry.hasTierConfig()) {
const fast = providerRegistry.getTierProviders('fast');
const heavy = providerRegistry.getTierProviders('heavy');
process.stderr.write(`LLM providers: fast=[${fast.join(',')}] heavy=[${heavy.join(',')}]\n`);
} else {
const activeLlm = providerRegistry.getActive();
if (activeLlm) {
process.stderr.write(`LLM provider: ${activeLlm.name}\n`);
}
}
let upstreamConfigs: UpstreamConfig[] = [];
if (args.configPath) {
@@ -115,7 +134,7 @@ export async function main(argv: string[] = process.argv): Promise<MainResult> {
// Start HTTP server unless disabled
let httpServer: FastifyInstance | undefined;
if (!args.noHttp) {
-httpServer = await createHttpServer(httpConfig, { router });
+httpServer = await createHttpServer(httpConfig, { router, providerRegistry });
await httpServer.listen({ port: httpConfig.httpPort, host: httpConfig.httpHost });
process.stderr.write(`mcpctl-proxy HTTP server listening on ${httpConfig.httpHost}:${httpConfig.httpPort}\n`);
}
@@ -126,6 +145,7 @@ export async function main(argv: string[] = process.argv): Promise<MainResult> {
if (shuttingDown) return;
shuttingDown = true;
providerRegistry.disposeAll();
server.stop();
if (httpServer) {
await httpServer.close();
@@ -137,7 +157,7 @@ export async function main(argv: string[] = process.argv): Promise<MainResult> {
process.on('SIGTERM', () => void shutdown());
process.on('SIGINT', () => void shutdown());
-return { router, server, httpServer, httpConfig };
+return { router, server, httpServer, httpConfig, providerRegistry };
}
// Run when executed directly

View File

@@ -0,0 +1,291 @@
import { spawn, type ChildProcess } from 'node:child_process';
import { createInterface, type Interface as ReadlineInterface } from 'node:readline';
export interface AcpClientConfig {
binaryPath: string;
model: string;
/** Timeout for individual RPC requests in ms (default: 60000) */
requestTimeoutMs: number;
/** Timeout for process initialization in ms (default: 30000) */
initTimeoutMs: number;
/** Override spawn for testing */
spawn?: typeof spawn;
}
interface PendingRequest {
resolve: (result: unknown) => void;
reject: (err: Error) => void;
timer: ReturnType<typeof setTimeout>;
}
/**
* Low-level ACP (Agent Client Protocol) client.
* Manages a persistent `gemini --experimental-acp` subprocess and communicates
* via JSON-RPC 2.0 over NDJSON stdio.
*
* Pattern follows StdioUpstream: readline for parsing, pending request map with timeouts.
*/
export class AcpClient {
private process: ChildProcess | null = null;
private readline: ReadlineInterface | null = null;
private pendingRequests = new Map<number, PendingRequest>();
private nextId = 1;
private sessionId: string | null = null;
private ready = false;
private initPromise: Promise<void> | null = null;
private readonly config: AcpClientConfig;
/** Accumulates text chunks from session/update agent_message_chunk during a prompt. */
private activePromptChunks: string[] = [];
constructor(config: AcpClientConfig) {
this.config = config;
}
/** Ensure the subprocess is spawned and initialized. Idempotent and lazy. */
async ensureReady(): Promise<void> {
if (this.ready && this.process && !this.process.killed) return;
// If already initializing, wait for it
if (this.initPromise) return this.initPromise;
this.initPromise = this.doInit();
try {
await this.initPromise;
} catch (err) {
this.initPromise = null;
throw err;
}
}
/** Send a prompt and collect the streamed text response. */
async prompt(text: string): Promise<string> {
await this.ensureReady();
// Set up chunk accumulator
this.activePromptChunks = [];
const result = await this.sendRequest('session/prompt', {
sessionId: this.sessionId,
prompt: [{ type: 'text', text }],
}, this.config.requestTimeoutMs) as { stopReason: string };
const collected = this.activePromptChunks.join('');
this.activePromptChunks = [];
if (result.stopReason === 'refusal') {
throw new Error('Gemini refused to process the prompt');
}
return collected;
}
/** Kill the subprocess and clean up. */
dispose(): void {
this.cleanup();
}
/** Check if the subprocess is alive and initialized. */
get isAlive(): boolean {
return this.ready && this.process !== null && !this.process.killed;
}
// --- Private ---
private async doInit(): Promise<void> {
// Clean up any previous state
this.cleanup();
this.spawnProcess();
this.setupReadline();
// ACP handshake: initialize
await this.sendRequest('initialize', {
protocolVersion: 1,
clientCapabilities: {},
clientInfo: { name: 'mcpctl', version: '1.0.0' },
}, this.config.initTimeoutMs);
// ACP handshake: session/new
const sessionResult = await this.sendRequest('session/new', {
cwd: '/tmp',
mcpServers: [],
}, this.config.initTimeoutMs) as { sessionId: string };
this.sessionId = sessionResult.sessionId;
this.ready = true;
}
private spawnProcess(): void {
const spawnFn = this.config.spawn ?? spawn;
this.process = spawnFn(this.config.binaryPath, ['--experimental-acp'], {
stdio: ['pipe', 'pipe', 'pipe'],
env: process.env,
});
this.process.on('exit', () => {
this.ready = false;
this.initPromise = null;
this.sessionId = null;
// Reject all pending requests
for (const [id, pending] of this.pendingRequests) {
clearTimeout(pending.timer);
pending.reject(new Error('Gemini ACP process exited'));
this.pendingRequests.delete(id);
}
});
this.process.on('error', (err) => {
this.ready = false;
this.initPromise = null;
for (const [id, pending] of this.pendingRequests) {
clearTimeout(pending.timer);
pending.reject(err);
this.pendingRequests.delete(id);
}
});
}
private setupReadline(): void {
if (!this.process?.stdout) return;
this.readline = createInterface({ input: this.process.stdout });
this.readline.on('line', (line) => this.handleLine(line));
}
private handleLine(line: string): void {
let msg: Record<string, unknown>;
try {
msg = JSON.parse(line) as Record<string, unknown>;
} catch {
// Skip non-JSON lines (e.g., debug output on stdout)
return;
}
// Response to a pending request (has 'id')
if ('id' in msg && msg.id !== undefined && ('result' in msg || 'error' in msg)) {
this.handleResponse(msg as { id: number; result?: unknown; error?: { code: number; message: string } });
return;
}
// Notification (has 'method', no 'id')
if ('method' in msg && !('id' in msg)) {
this.handleNotification(msg as { method: string; params?: Record<string, unknown> });
return;
}
// Request from agent (has 'method' AND 'id') — agent asking us for something
if ('method' in msg && 'id' in msg) {
this.handleAgentRequest(msg as { id: number; method: string; params?: Record<string, unknown> });
return;
}
}
private handleResponse(msg: { id: number; result?: unknown; error?: { code: number; message: string } }): void {
const pending = this.pendingRequests.get(msg.id);
if (!pending) return;
clearTimeout(pending.timer);
this.pendingRequests.delete(msg.id);
if (msg.error) {
pending.reject(new Error(`ACP error ${msg.error.code}: ${msg.error.message}`));
} else {
pending.resolve(msg.result);
}
}
private handleNotification(msg: { method: string; params?: Record<string, unknown> }): void {
if (msg.method !== 'session/update' || !msg.params) return;
const update = msg.params.update as Record<string, unknown> | undefined;
if (!update) return;
// Collect text from agent_message_chunk
if (update.sessionUpdate === 'agent_message_chunk') {
const content = update.content;
// Gemini ACP sends content as a single object {type, text} or an array [{type, text}]
const blocks: Array<{ type: string; text?: string }> = Array.isArray(content)
? content as Array<{ type: string; text?: string }>
: content && typeof content === 'object'
? [content as { type: string; text?: string }]
: [];
for (const block of blocks) {
if (block.type === 'text' && block.text) {
this.activePromptChunks.push(block.text);
}
}
}
}
/** Handle requests from the agent (e.g., session/request_permission). Reject them all. */
private handleAgentRequest(msg: { id: number; method: string; params?: Record<string, unknown> }): void {
if (!this.process?.stdin) return;
if (msg.method === 'session/request_permission') {
// Reject permission requests — we don't want tool use
const response = JSON.stringify({
jsonrpc: '2.0',
id: msg.id,
result: { outcome: { outcome: 'cancelled' } },
});
this.process.stdin.write(response + '\n');
} else {
// Unknown method — return error
const response = JSON.stringify({
jsonrpc: '2.0',
id: msg.id,
error: { code: -32601, message: 'Method not supported' },
});
this.process.stdin.write(response + '\n');
}
}
private sendRequest(method: string, params: Record<string, unknown>, timeoutMs: number): Promise<unknown> {
if (!this.process?.stdin) {
return Promise.reject(new Error('ACP process not started'));
}
const id = this.nextId++;
return new Promise((resolve, reject) => {
const timer = setTimeout(() => {
this.pendingRequests.delete(id);
// Kill the process on timeout — it's hung
this.cleanup();
reject(new Error(`ACP request '${method}' timed out after ${timeoutMs}ms`));
}, timeoutMs);
this.pendingRequests.set(id, { resolve, reject, timer });
const msg = JSON.stringify({ jsonrpc: '2.0', id, method, params });
this.process!.stdin!.write(msg + '\n');
});
}
private cleanup(): void {
this.ready = false;
this.initPromise = null;
this.sessionId = null;
this.activePromptChunks = [];
// Reject all pending requests
for (const [id, pending] of this.pendingRequests) {
clearTimeout(pending.timer);
pending.reject(new Error('ACP client disposed'));
this.pendingRequests.delete(id);
}
if (this.readline) {
this.readline.close();
this.readline = null;
}
if (this.process) {
this.process.kill('SIGTERM');
this.process = null;
}
}
}
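The NDJSON framing the client speaks can be shown in isolation. A minimal sketch (the helper names are illustrative; only the wire shape matches what `sendRequest` and `handleLine` above produce and consume):

```typescript
// One JSON-RPC 2.0 message per line: serialize without embedded newlines,
// terminate with '\n', and drop any non-JSON lines on the way back in.
function frameRequest(id: number, method: string, params: Record<string, unknown>): string {
  return JSON.stringify({ jsonrpc: '2.0', id, method, params }) + '\n';
}

type AcpMessage = Record<string, unknown>;

function parseLine(line: string): AcpMessage | null {
  try {
    return JSON.parse(line) as AcpMessage;
  } catch {
    return null; // non-JSON debug output on stdout is silently skipped
  }
}
```

Returning `null` for unparseable lines mirrors `handleLine`'s bare `catch { return; }`: stray debug output on the subprocess's stdout must never kill the session.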

View File

@@ -0,0 +1,165 @@
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';
import type { LlmProvider, CompletionOptions, CompletionResult } from './types.js';
import { AcpClient } from './acp-client.js';
import type { AcpClientConfig } from './acp-client.js';
const execFileAsync = promisify(execFile);
export interface GeminiAcpConfig {
binaryPath?: string;
defaultModel?: string;
requestTimeoutMs?: number;
initTimeoutMs?: number;
/** Idle TTL for pooled sessions in ms (default: 8 hours) */
idleTtlMs?: number;
/** Override for testing — passed through to AcpClient */
spawn?: AcpClientConfig['spawn'];
}
interface PoolEntry {
client: AcpClient;
lastUsed: number;
queue: Promise<void>;
}
/**
* Gemini CLI provider using ACP (Agent Client Protocol) mode.
*
* Maintains a pool of persistent subprocesses keyed by model name.
* Each model gets its own `gemini --experimental-acp` subprocess with
* a serial request queue. Idle sessions are evicted after idleTtlMs (default: 8 hours).
*
* NOTE: Gemini ACP currently doesn't support per-session model selection,
* so all sessions use the same model. The pool infrastructure is ready for
* when vLLM/OpenAI providers are added (they support per-request model).
*/
export class GeminiAcpProvider implements LlmProvider {
readonly name = 'gemini-cli';
private pool = new Map<string, PoolEntry>();
private binaryPath: string;
private defaultModel: string;
private readonly requestTimeoutMs: number;
private readonly initTimeoutMs: number;
private readonly idleTtlMs: number;
private readonly spawnOverride?: AcpClientConfig['spawn'];
constructor(config?: GeminiAcpConfig) {
this.binaryPath = config?.binaryPath ?? 'gemini';
this.defaultModel = config?.defaultModel ?? 'gemini-2.5-flash';
this.requestTimeoutMs = config?.requestTimeoutMs ?? 60_000;
this.initTimeoutMs = config?.initTimeoutMs ?? 30_000;
this.idleTtlMs = config?.idleTtlMs ?? 8 * 60 * 60 * 1000; // 8 hours
if (config?.spawn) this.spawnOverride = config.spawn;
}
async complete(options: CompletionOptions): Promise<CompletionResult> {
const model = options.model ?? this.defaultModel;
const entry = this.getOrCreateEntry(model);
entry.lastUsed = Date.now();
this.evictIdle();
return this.enqueue(entry, () => this.doComplete(entry.client, options));
}
async listModels(): Promise<string[]> {
return ['gemini-2.5-flash', 'gemini-2.5-pro', 'gemini-2.0-flash'];
}
async isAvailable(): Promise<boolean> {
try {
await execFileAsync(this.binaryPath, ['--version'], { timeout: 5000 });
return true;
} catch {
return false;
}
}
dispose(): void {
for (const entry of this.pool.values()) {
entry.client.dispose();
}
this.pool.clear();
}
/**
* Eagerly spawn the default model's ACP subprocess so it's ready
* for the first request (avoids 30s cold-start on health checks).
*/
warmup(): void {
const entry = this.getOrCreateEntry(this.defaultModel);
// Fire-and-forget: start the subprocess initialization in the background
entry.client.ensureReady().catch(() => {
// Ignore errors — next request will retry
});
}
/** Number of active pool entries (for testing). */
get poolSize(): number {
return this.pool.size;
}
// --- Private ---
private getOrCreateEntry(model: string): PoolEntry {
const existing = this.pool.get(model);
if (existing) return existing;
const acpConfig: AcpClientConfig = {
binaryPath: this.binaryPath,
model,
requestTimeoutMs: this.requestTimeoutMs,
initTimeoutMs: this.initTimeoutMs,
};
if (this.spawnOverride) acpConfig.spawn = this.spawnOverride;
const entry: PoolEntry = {
client: new AcpClient(acpConfig),
lastUsed: Date.now(),
queue: Promise.resolve(),
};
this.pool.set(model, entry);
return entry;
}
private evictIdle(): void {
const now = Date.now();
for (const [model, entry] of this.pool) {
if (now - entry.lastUsed > this.idleTtlMs) {
entry.client.dispose();
this.pool.delete(model);
}
}
}
private async doComplete(client: AcpClient, options: CompletionOptions): Promise<CompletionResult> {
const prompt = options.messages
.map((m) => {
if (m.role === 'system') return `System: ${m.content}`;
if (m.role === 'user') return m.content;
if (m.role === 'assistant') return `Assistant: ${m.content}`;
return m.content;
})
.join('\n\n');
const content = await client.prompt(prompt);
return {
content: content.trim(),
toolCalls: [],
usage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
finishReason: 'stop',
};
}
private enqueue<T>(entry: PoolEntry, fn: () => Promise<T>): Promise<T> {
const result = new Promise<T>((resolve, reject) => {
entry.queue = entry.queue.then(
() => fn().then(resolve, reject),
() => fn().then(resolve, reject),
);
});
return result;
}
}
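The `enqueue` helper above is the crux of per-model serialization: every task chains onto the previous one and runs whether its predecessor fulfilled or rejected, so one failed request never stalls the queue. The same pattern as a standalone sketch (the `makeQueue` wrapper is illustrative):

```typescript
// Serial task queue: tasks start in submission order, each only after the
// previous task settles (fulfilled OR rejected), mirroring GeminiAcpProvider.enqueue.
function makeQueue(): <T>(fn: () => Promise<T>) => Promise<T> {
  let queue: Promise<void> = Promise.resolve();
  return function enqueue<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      queue = queue.then(
        () => fn().then(resolve, reject),
        () => fn().then(resolve, reject),
      );
    });
  };
}
```

Passing the same handler to both `.then` callbacks is the key design choice: the internal `queue` promise never rejects, so a thrown error surfaces only on the caller's own promise.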

View File

@@ -9,4 +9,8 @@ export { GeminiCliProvider } from './gemini-cli.js';
export type { GeminiCliConfig } from './gemini-cli.js';
export { DeepSeekProvider } from './deepseek.js';
export type { DeepSeekConfig } from './deepseek.js';
export { GeminiAcpProvider } from './gemini-acp.js';
export type { GeminiAcpConfig } from './gemini-acp.js';
export { AcpClient } from './acp-client.js';
export type { AcpClientConfig } from './acp-client.js';
export { ProviderRegistry } from './registry.js';

View File

@@ -1,11 +1,13 @@
-import type { LlmProvider } from './types.js';
+import type { LlmProvider, Tier } from './types.js';
/**
- * Registry for LLM providers. Supports switching the active provider at runtime.
+ * Registry for LLM providers. Supports tier-based routing (fast/heavy)
+ * with cross-tier fallback, and legacy single-provider mode.
*/
export class ProviderRegistry {
private providers = new Map<string, LlmProvider>();
private activeProvider: string | null = null;
private tierProviders = new Map<Tier, string[]>();
register(provider: LlmProvider): void {
this.providers.set(provider.name, provider);
@@ -20,6 +22,15 @@ export class ProviderRegistry {
const first = this.providers.keys().next();
this.activeProvider = first.done ? null : first.value;
}
// Remove from tier assignments
for (const [tier, names] of this.tierProviders) {
const filtered = names.filter((n) => n !== name);
if (filtered.length === 0) {
this.tierProviders.delete(tier);
} else {
this.tierProviders.set(tier, filtered);
}
}
}
setActive(name: string): void {
@@ -34,6 +45,42 @@ export class ProviderRegistry {
return this.providers.get(this.activeProvider) ?? null;
}
/** Assign a provider to a tier. Call order = priority within the tier. */
assignTier(providerName: string, tier: Tier): void {
if (!this.providers.has(providerName)) {
throw new Error(`Provider '${providerName}' is not registered`);
}
const existing = this.tierProviders.get(tier) ?? [];
if (!existing.includes(providerName)) {
this.tierProviders.set(tier, [...existing, providerName]);
}
}
/**
* Get provider for a specific tier with fallback.
* Resolution: requested tier → other tier → getActive() (legacy).
*/
getProvider(tier: Tier): LlmProvider | null {
const primary = this.firstInTier(tier);
if (primary) return primary;
const otherTier: Tier = tier === 'fast' ? 'heavy' : 'fast';
const fallback = this.firstInTier(otherTier);
if (fallback) return fallback;
return this.getActive();
}
/** Get provider names assigned to a tier. */
getTierProviders(tier: Tier): string[] {
return this.tierProviders.get(tier) ?? [];
}
/** Whether any tier assignments exist (vs legacy single-provider mode). */
hasTierConfig(): boolean {
return this.tierProviders.size > 0;
}
get(name: string): LlmProvider | undefined {
return this.providers.get(name);
}
@@ -45,4 +92,32 @@ export class ProviderRegistry {
getActiveName(): string | null {
return this.activeProvider;
}
/** Provider info for status display. */
listProviders(): Array<{ name: string; tiers: Tier[] }> {
return this.list().map((name) => {
const tiers: Tier[] = [];
for (const [tier, names] of this.tierProviders) {
if (names.includes(name)) tiers.push(tier);
}
return { name, tiers };
});
}
/** Dispose all registered providers that have a dispose method. */
disposeAll(): void {
for (const provider of this.providers.values()) {
provider.dispose?.();
}
}
private firstInTier(tier: Tier): LlmProvider | null {
const names = this.tierProviders.get(tier);
if (!names) return null;
for (const name of names) {
const provider = this.providers.get(name);
if (provider) return provider;
}
return null;
}
}
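The resolution order in `getProvider` (requested tier, then the other tier, then the legacy active provider) condenses into a small table for illustration; `TierTable` and its method names are hypothetical, not part of the registry API:

```typescript
type Tier = 'fast' | 'heavy';

// Condensed model of getProvider's fallback chain, tracking names only.
class TierTable {
  private tiers = new Map<Tier, string[]>();
  private active: string | null = null;

  // Call order = priority within the tier, as in ProviderRegistry.assignTier.
  assign(name: string, tier: Tier): void {
    const names = this.tiers.get(tier) ?? [];
    if (!names.includes(name)) this.tiers.set(tier, [...names, name]);
  }

  setActive(name: string): void {
    this.active = name;
  }

  // Resolution: requested tier, then the other tier, then legacy active.
  resolve(tier: Tier): string | null {
    const other: Tier = tier === 'fast' ? 'heavy' : 'fast';
    return this.tiers.get(tier)?.[0] ?? this.tiers.get(other)?.[0] ?? this.active;
  }
}
```

The cross-tier fallback means a deployment with only a heavy provider configured still serves fast-tier callers, just more expensively.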

View File

@@ -44,6 +44,9 @@ export interface CompletionOptions {
model?: string;
}
/** LLM provider tier. 'fast' = local inference, 'heavy' = cloud reasoning. */
export type Tier = 'fast' | 'heavy';
export interface LlmProvider {
/** Provider identifier (e.g., 'openai', 'anthropic', 'ollama') */
readonly name: string;
@@ -53,4 +56,6 @@ export interface LlmProvider {
listModels(): Promise<string[]>;
/** Check if the provider is configured and reachable */
isAvailable(): Promise<boolean>;
/** Optional cleanup for providers with persistent resources (e.g., subprocesses). */
dispose?(): void;
}
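Because `dispose` is optional on the interface, callers must guard the call; `ProviderRegistry.disposeAll` does this with optional chaining. A sketch of that calling convention (the `ProviderLike` stub and `shutdownAll` helper are hypothetical):

```typescript
interface ProviderLike {
  readonly name: string;
  dispose?(): void;
}

// provider.dispose?.() is a no-op when the provider didn't declare dispose,
// so stateless providers need no stub method.
function shutdownAll(providers: ProviderLike[]): string[] {
  const disposedNames: string[] = [];
  for (const p of providers) {
    if (typeof p.dispose === 'function') disposedNames.push(p.name);
    p.dispose?.();
  }
  return disposedNames;
}
```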

View File

@@ -1,5 +1,23 @@
import type { UpstreamConnection, JsonRpcRequest, JsonRpcResponse, JsonRpcNotification } from './types.js';
import type { LlmProcessor } from './llm/processor.js';
import { ResponsePaginator } from './llm/pagination.js';
import type { McpdClient } from './http/mcpd-client.js';
import { SessionGate } from './gate/session-gate.js';
import { TagMatcher, extractKeywordsFromToolCall } from './gate/tag-matcher.js';
import type { PromptIndexEntry, TagMatchResult } from './gate/tag-matcher.js';
import { LlmPromptSelector } from './gate/llm-selector.js';
import type { ProviderRegistry } from './providers/registry.js';
export interface RouteContext {
sessionId?: string;
}
export interface GateConfig {
gated: boolean;
providerRegistry: ProviderRegistry | null;
modelOverride?: string;
byteBudget?: number;
}
/**
* Routes MCP requests to the appropriate upstream server.
@@ -17,11 +35,46 @@ export class McpRouter {
private promptToServer = new Map<string, string>();
private notificationHandler: ((notification: JsonRpcNotification) => void) | null = null;
private llmProcessor: LlmProcessor | null = null;
private instructions: string | null = null;
private mcpdClient: McpdClient | null = null;
private projectName: string | null = null;
private mcpctlResourceContents = new Map<string, string>();
private paginator: ResponsePaginator | null = null;
private sessionGate = new SessionGate();
private gateConfig: GateConfig | null = null;
private tagMatcher: TagMatcher | null = null;
private llmSelector: LlmPromptSelector | null = null;
private cachedPromptIndex: PromptIndexEntry[] | null = null;
private promptIndexFetchedAt = 0;
private readonly PROMPT_INDEX_TTL_MS = 60_000;
private systemPromptCache = new Map<string, { content: string; fetchedAt: number }>();
private readonly SYSTEM_PROMPT_TTL_MS = 300_000; // 5 minutes
setPaginator(paginator: ResponsePaginator): void {
this.paginator = paginator;
}
setGateConfig(config: GateConfig): void {
this.gateConfig = config;
this.tagMatcher = new TagMatcher(config.byteBudget);
if (config.providerRegistry) {
this.llmSelector = new LlmPromptSelector(config.providerRegistry, config.modelOverride);
}
}
setLlmProcessor(processor: LlmProcessor): void {
this.llmProcessor = processor;
}
setInstructions(instructions: string): void {
this.instructions = instructions;
}
setPromptConfig(mcpdClient: McpdClient, projectName: string): void {
this.mcpdClient = mcpdClient;
this.projectName = projectName;
}
addUpstream(connection: UpstreamConnection): void {
this.upstreams.set(connection.name, connection);
if (this.notificationHandler && connection.onNotification) {
@@ -87,10 +140,18 @@ export class McpRouter {
for (const tool of tools) {
const namespacedName = `${serverName}/${tool.name}`;
this.toolToServer.set(namespacedName, serverName);
-allTools.push({
+// Enrich description with server context if available
+const entry: { name: string; description?: string; inputSchema?: unknown } = {
...tool,
name: namespacedName,
-});
+};
if (upstream.description && tool.description) {
entry.description = `[${upstream.description}] ${tool.description}`;
} else if (upstream.description) {
entry.description = `[${upstream.description}]`;
}
// Otherwise leave tool.description as-is: the spread above already copied it, and it may legitimately be undefined
allTools.push(entry);
}
}
} catch {
@@ -223,28 +284,70 @@ export class McpRouter {
* Route a generic request. Handles protocol-level methods locally,
* delegates tool/resource/prompt calls to upstreams.
*/
-async route(request: JsonRpcRequest): Promise<JsonRpcResponse> {
+async route(request: JsonRpcRequest, context?: RouteContext): Promise<JsonRpcResponse> {
switch (request.method) {
case 'initialize':
return {
jsonrpc: '2.0',
id: request.id,
result: {
protocolVersion: '2024-11-05',
serverInfo: {
name: 'mcpctl-proxy',
version: '0.1.0',
},
capabilities: {
tools: {},
resources: {},
prompts: {},
},
case 'initialize': {
// Create gated session if project is gated
const isGated = this.gateConfig?.gated ?? false;
if (context?.sessionId && this.gateConfig) {
this.sessionGate.createSession(context.sessionId, isGated);
}
// Build instructions: base project instructions + gate message with prompt index
let instructions = this.instructions ?? '';
if (isGated) {
instructions = await this.buildGatedInstructions(instructions);
}
const result: Record<string, unknown> = {
protocolVersion: '2024-11-05',
serverInfo: {
name: 'mcpctl-proxy',
version: '0.1.0',
},
capabilities: {
tools: {},
resources: {},
prompts: {},
},
};
if (instructions) {
result['instructions'] = instructions;
}
return { jsonrpc: '2.0', id: request.id, result };
}
case 'tools/list': {
// When gated, only show begin_session
if (context?.sessionId && this.sessionGate.isGated(context.sessionId)) {
return {
jsonrpc: '2.0',
id: request.id,
result: { tools: [this.getBeginSessionTool()] },
};
}
const tools = await this.discoverTools();
// Append built-in tools if prompt config is set
if (this.mcpdClient && this.projectName) {
tools.push({
name: 'propose_prompt',
description: 'Propose a new prompt for this project. Creates a pending request that must be approved by a user before becoming active.',
inputSchema: {
type: 'object',
properties: {
name: { type: 'string', description: 'Prompt name (lowercase alphanumeric with hyphens, e.g. "debug-guide")' },
content: { type: 'string', description: 'Prompt content text' },
},
required: ['name', 'content'],
},
});
}
// Always offer read_prompts when gating is configured (even for ungated sessions)
if (this.gateConfig && this.mcpdClient && this.projectName) {
tools.push(this.getReadPromptsTool());
}
return {
jsonrpc: '2.0',
id: request.id,
@@ -253,10 +356,32 @@ export class McpRouter {
}
case 'tools/call':
-return this.routeToolCall(request);
+return this.routeToolCall(request, context);
case 'resources/list': {
const resources = await this.discoverResources();
// Append mcpctl prompt resources
if (this.mcpdClient && this.projectName) {
try {
const sessionParam = context?.sessionId ? `?session=${encodeURIComponent(context.sessionId)}` : '';
const visible = await this.mcpdClient.get<Array<{ name: string; content: string; type: string }>>(
`/api/v1/projects/${encodeURIComponent(this.projectName)}/prompts/visible${sessionParam}`,
);
this.mcpctlResourceContents.clear();
for (const p of visible) {
const uri = `mcpctl://prompts/${p.name}`;
resources.push({
uri,
name: p.name,
description: p.type === 'promptrequest' ? `[Pending proposal] ${p.name}` : `[Approved prompt] ${p.name}`,
mimeType: 'text/plain',
});
this.mcpctlResourceContents.set(uri, p.content);
}
} catch {
// Prompt resources are optional — don't fail discovery
}
}
return {
jsonrpc: '2.0',
id: request.id,
@@ -264,8 +389,66 @@ export class McpRouter {
};
}
-case 'resources/read':
+case 'resources/read': {
const params = request.params as Record<string, unknown> | undefined;
const uri = params?.['uri'] as string | undefined;
if (uri?.startsWith('mcpctl://prompts/') && this.mcpdClient && this.projectName) {
const promptName = uri.slice('mcpctl://prompts/'.length);
try {
const sessionParam = context?.sessionId ? `?session=${encodeURIComponent(context.sessionId)}` : '';
const visible = await this.mcpdClient.get<Array<{ name: string; content: string; type: string }>>(
`/api/v1/projects/${encodeURIComponent(this.projectName)}/prompts/visible${sessionParam}`,
);
const found = visible.find((p) => p.name === promptName);
if (found) {
this.mcpctlResourceContents.set(uri, found.content);
return {
jsonrpc: '2.0',
id: request.id,
result: {
contents: [{ uri, mimeType: 'text/plain', text: found.content }],
},
};
}
} catch {
// Fall through to cache
}
// Fallback to cache if mcpd is unreachable
const cached = this.mcpctlResourceContents.get(uri);
if (cached !== undefined) {
return {
jsonrpc: '2.0',
id: request.id,
result: {
contents: [{ uri, mimeType: 'text/plain', text: cached }],
},
};
}
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32602, message: `Resource not found: ${uri}` },
};
}
if (uri?.startsWith('mcpctl://')) {
const content = this.mcpctlResourceContents.get(uri);
if (content !== undefined) {
return {
jsonrpc: '2.0',
id: request.id,
result: {
contents: [{ uri, mimeType: 'text/plain', text: content }],
},
};
}
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32602, message: `Resource not found: ${uri}` },
};
}
return this.routeNamespacedCall(request, 'uri', this.resourceToServer);
}
case 'resources/subscribe':
case 'resources/unsubscribe':
@@ -283,6 +466,17 @@ export class McpRouter {
case 'prompts/get':
return this.routeNamespacedCall(request, 'name', this.promptToServer);
// Handle MCP notifications (no response expected, but return empty result if called as request)
case 'notifications/initialized':
case 'notifications/cancelled':
case 'notifications/progress':
case 'notifications/roots/list_changed':
return {
jsonrpc: '2.0',
id: request.id,
result: {},
};
default:
return {
jsonrpc: '2.0',
@@ -295,18 +489,58 @@ export class McpRouter {
/**
* Route a tools/call request, optionally applying LLM pre/post-processing.
*/
-private async routeToolCall(request: JsonRpcRequest): Promise<JsonRpcResponse> {
+private async routeToolCall(request: JsonRpcRequest, context?: RouteContext): Promise<JsonRpcResponse> {
const params = request.params as Record<string, unknown> | undefined;
const toolName = params?.['name'] as string | undefined;
// Handle built-in tools
if (toolName === 'propose_prompt') {
return this.handleProposePrompt(request, context);
}
if (toolName === 'begin_session') {
return this.handleBeginSession(request, context);
}
if (toolName === 'read_prompts') {
return this.handleReadPrompts(request, context);
}
// Extract tool arguments early (needed for both gated intercept and pagination)
const toolArgs = (params?.['arguments'] ?? {}) as Record<string, unknown>;
// Intercept: if session is gated and trying to call a real tool, auto-ungate with keyword extraction
if (context?.sessionId && this.sessionGate.isGated(context.sessionId)) {
return this.handleGatedIntercept(request, context, toolName ?? '', toolArgs);
}
// Intercept pagination page requests before routing to upstream
if (this.paginator) {
const paginationReq = ResponsePaginator.extractPaginationParams(toolArgs);
if (paginationReq) {
const pageResult = this.paginator.getPage(paginationReq.resultId, paginationReq.page);
if (pageResult) {
return { jsonrpc: '2.0', id: request.id, result: pageResult };
}
return {
jsonrpc: '2.0',
id: request.id,
result: {
content: [{
type: 'text',
text: 'Cached result not found (expired or invalid _resultId). Please re-call the tool without _resultId/_page to get a fresh result.',
}],
},
};
}
}
// If no processor or tool shouldn't be processed, route directly
if (!this.llmProcessor || !toolName || !this.llmProcessor.shouldProcess('tools/call', toolName)) {
const response = await this.routeNamespacedCall(request, 'name', this.toolToServer);
return this.maybePaginate(toolName, response);
}
// Preprocess request params
const processed = await this.llmProcessor.preprocessRequest(toolName, toolArgs);
const processedRequest: JsonRpcRequest = processed.optimized
? { ...request, params: { ...params, arguments: processed.params } }
: request;
@@ -314,6 +548,10 @@ export class McpRouter {
// Route to upstream
const response = await this.routeNamespacedCall(processedRequest, 'name', this.toolToServer);
// Paginate if response is large (skip LLM filtering for paginated responses)
const paginated = await this.maybePaginate(toolName, response);
if (paginated !== response) return paginated;
// Filter response
if (response.error) return response;
const filtered = await this.llmProcessor.filterResponse(toolName, response);
@@ -323,6 +561,487 @@ export class McpRouter {
return response;
}
/**
* If the response is large enough, paginate it and return the index instead.
*/
private async maybePaginate(toolName: string | undefined, response: JsonRpcResponse): Promise<JsonRpcResponse> {
if (!this.paginator || !toolName || response.error) return response;
const raw = JSON.stringify(response.result);
if (!this.paginator.shouldPaginate(raw)) return response;
const paginated = await this.paginator.paginate(toolName, raw);
if (!paginated) return response;
return { jsonrpc: '2.0', id: response.id, result: paginated };
}
private async handleProposePrompt(request: JsonRpcRequest, context?: RouteContext): Promise<JsonRpcResponse> {
if (!this.mcpdClient || !this.projectName) {
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32603, message: 'Prompt config not set — propose_prompt unavailable' },
};
}
const params = request.params as Record<string, unknown> | undefined;
const args = (params?.['arguments'] ?? {}) as Record<string, unknown>;
const name = args['name'] as string | undefined;
const content = args['content'] as string | undefined;
if (!name || !content) {
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32602, message: 'Missing required arguments: name and content' },
};
}
try {
const body: Record<string, unknown> = { name, content };
if (context?.sessionId) {
body['createdBySession'] = context.sessionId;
}
await this.mcpdClient.post(
`/api/v1/projects/${encodeURIComponent(this.projectName)}/promptrequests`,
body,
);
return {
jsonrpc: '2.0',
id: request.id,
result: {
content: [
{
type: 'text',
text: `Prompt request "${name}" created successfully. It will be visible to you as a resource at mcpctl://prompts/${name}. A user must approve it before it becomes permanent.`,
},
],
},
};
} catch (err) {
return {
jsonrpc: '2.0',
id: request.id,
error: {
code: -32603,
message: `Failed to propose prompt: ${err instanceof Error ? err.message : String(err)}`,
},
};
}
}
// ── Gate tool definitions ──
private getBeginSessionTool(): { name: string; description: string; inputSchema: unknown } {
return {
name: 'begin_session',
description: 'Start your session by providing keywords that describe your current task. You will receive relevant project context, policies, and guidelines. This is required before using other tools.',
inputSchema: {
type: 'object',
properties: {
tags: {
type: 'array',
items: { type: 'string' },
maxItems: 10,
description: '3-7 keywords describing your current task (e.g. ["zigbee", "pairing", "mqtt"])',
},
},
required: ['tags'],
},
};
}
private getReadPromptsTool(): { name: string; description: string; inputSchema: unknown } {
return {
name: 'read_prompts',
description: 'Retrieve additional project prompts by keywords. Use this if you need more context about specific topics. Returns matched prompts and a list of other available prompts.',
inputSchema: {
type: 'object',
properties: {
tags: {
type: 'array',
items: { type: 'string' },
maxItems: 10,
description: 'Keywords to match against available prompts',
},
},
required: ['tags'],
},
};
}
// ── Gate handlers ──
private async handleBeginSession(request: JsonRpcRequest, context?: RouteContext): Promise<JsonRpcResponse> {
if (!this.gateConfig || !this.mcpdClient || !this.projectName) {
return { jsonrpc: '2.0', id: request.id, error: { code: -32603, message: 'Gating not configured' } };
}
const params = request.params as Record<string, unknown> | undefined;
const args = (params?.['arguments'] ?? {}) as Record<string, unknown>;
const tags = args['tags'] as string[] | undefined;
if (!tags || !Array.isArray(tags) || tags.length === 0) {
return { jsonrpc: '2.0', id: request.id, error: { code: -32602, message: 'Missing or empty tags array' } };
}
const sessionId = context?.sessionId;
if (sessionId && !this.sessionGate.isGated(sessionId)) {
return {
jsonrpc: '2.0',
id: request.id,
result: {
content: [{ type: 'text', text: 'Session already started. Use read_prompts to retrieve additional context.' }],
},
};
}
try {
const promptIndex = await this.fetchPromptIndex();
// Primary: LLM selection. Fallback: deterministic tag matching.
let matchResult: TagMatchResult;
let reasoning = '';
if (this.llmSelector) {
try {
const llmIndex = promptIndex.map((p) => ({
name: p.name,
priority: p.priority,
summary: p.summary,
chapters: p.chapters,
}));
const llmResult = await this.llmSelector.selectPrompts(tags, llmIndex);
reasoning = llmResult.reasoning;
// Convert LLM names back to full PromptIndexEntry results via TagMatcher for byte-budget
const selectedSet = new Set(llmResult.selectedNames);
const selected = promptIndex.filter((p) => selectedSet.has(p.name));
const remaining = promptIndex.filter((p) => !selectedSet.has(p.name));
// Apply byte budget to the LLM-selected prompts
matchResult = this.tagMatcher!.match(
// Use all tags + selected names as keywords so everything scores > 0
[...tags, ...llmResult.selectedNames],
selected,
);
// Put LLM-unselected in remaining
matchResult.remaining = [...matchResult.remaining, ...remaining];
} catch {
// LLM failed — fall back to keyword matching
matchResult = this.tagMatcher!.match(tags, promptIndex);
}
} else {
matchResult = this.tagMatcher!.match(tags, promptIndex);
}
// Ungate the session
if (sessionId) {
this.sessionGate.ungate(sessionId, tags, matchResult);
}
// Build response
const responseParts: string[] = [];
if (reasoning) {
responseParts.push(`Selection reasoning: ${reasoning}\n`);
}
// Full content prompts
for (const p of matchResult.fullContent) {
responseParts.push(`--- ${p.name} (priority: ${p.priority}) ---\n${p.content}\n`);
}
// Index-only (over budget)
if (matchResult.indexOnly.length > 0) {
responseParts.push('Additional matched prompts (use read_prompts to retrieve full content):');
for (const p of matchResult.indexOnly) {
responseParts.push(` - ${p.name}: ${p.summary ?? 'No description'}`);
}
responseParts.push('');
}
// Remaining prompts for awareness
if (matchResult.remaining.length > 0) {
responseParts.push('Other available prompts:');
for (const p of matchResult.remaining) {
responseParts.push(` - ${p.name}: ${p.summary ?? 'No description'}`);
}
responseParts.push('');
}
// Encouragement (from system prompt or fallback)
const encouragement = await this.getSystemPrompt(
'gate-encouragement',
'If any of the listed prompts seem relevant to your work, or if you encounter unfamiliar patterns, conventions, or constraints during implementation, use read_prompts({ tags: [...] }) to retrieve them. It is better to check and not need it than to proceed without important context.',
);
responseParts.push(encouragement);
return {
jsonrpc: '2.0',
id: request.id,
result: {
content: [{ type: 'text', text: responseParts.join('\n') }],
},
};
} catch (err) {
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32603, message: `begin_session failed: ${err instanceof Error ? err.message : String(err)}` },
};
}
}
private async handleReadPrompts(request: JsonRpcRequest, context?: RouteContext): Promise<JsonRpcResponse> {
if (!this.tagMatcher || !this.mcpdClient || !this.projectName) {
return { jsonrpc: '2.0', id: request.id, error: { code: -32603, message: 'Prompt retrieval not configured' } };
}
const params = request.params as Record<string, unknown> | undefined;
const args = (params?.['arguments'] ?? {}) as Record<string, unknown>;
const tags = args['tags'] as string[] | undefined;
if (!tags || !Array.isArray(tags) || tags.length === 0) {
return { jsonrpc: '2.0', id: request.id, error: { code: -32602, message: 'Missing or empty tags array' } };
}
try {
const promptIndex = await this.fetchPromptIndex();
const sessionId = context?.sessionId;
// Filter out already-sent prompts
const available = sessionId ? this.sessionGate.filterAlreadySent(sessionId, promptIndex) : promptIndex;
// Always use deterministic tag matching for read_prompts (hybrid mode)
const matchResult = this.tagMatcher.match(tags, available);
// Record retrieved prompts
if (sessionId) {
this.sessionGate.addRetrievedPrompts(
sessionId,
tags,
matchResult.fullContent.map((p) => p.name),
);
}
if (matchResult.fullContent.length === 0 && matchResult.indexOnly.length === 0) {
return {
jsonrpc: '2.0',
id: request.id,
result: {
content: [{ type: 'text', text: 'No new matching prompts found for the given keywords.' }],
},
};
}
const responseParts: string[] = [];
for (const p of matchResult.fullContent) {
responseParts.push(`--- ${p.name} (priority: ${p.priority}) ---\n${p.content}\n`);
}
if (matchResult.indexOnly.length > 0) {
responseParts.push('Additional matched prompts (too large to include, try more specific keywords):');
for (const p of matchResult.indexOnly) {
responseParts.push(` - ${p.name}: ${p.summary ?? 'No description'}`);
}
}
return {
jsonrpc: '2.0',
id: request.id,
result: {
content: [{ type: 'text', text: responseParts.join('\n') }],
},
};
} catch (err) {
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32603, message: `read_prompts failed: ${err instanceof Error ? err.message : String(err)}` },
};
}
}
/**
* Intercept handler: when a gated session tries to call a real tool,
* extract keywords from the tool call, auto-ungate, and prepend a briefing.
*/
private async handleGatedIntercept(
request: JsonRpcRequest,
context: RouteContext,
toolName: string,
toolArgs: Record<string, unknown>,
): Promise<JsonRpcResponse> {
const sessionId = context.sessionId!;
// Extract keywords from the tool call as a fallback
const tags = extractKeywordsFromToolCall(toolName, toolArgs);
try {
const promptIndex = await this.fetchPromptIndex();
const matchResult = this.tagMatcher!.match(tags, promptIndex);
// Ungate the session
this.sessionGate.ungate(sessionId, tags, matchResult);
// Build briefing from matched content
const briefingParts: string[] = [];
if (matchResult.fullContent.length > 0) {
const preamble = await this.getSystemPrompt(
'gate-intercept-preamble',
'The following project context was automatically retrieved based on your tool call.',
);
briefingParts.push(`--- ${preamble} ---\n`);
for (const p of matchResult.fullContent) {
briefingParts.push(`--- ${p.name} (priority: ${p.priority}) ---\n${p.content}\n`);
}
briefingParts.push('--- End of project context ---\n');
}
if (matchResult.remaining.length > 0 || matchResult.indexOnly.length > 0) {
briefingParts.push('Other prompts available (use read_prompts to retrieve):');
for (const p of [...matchResult.indexOnly, ...matchResult.remaining]) {
briefingParts.push(` - ${p.name}: ${p.summary ?? 'No description'}`);
}
briefingParts.push('');
}
// Now route the actual tool call
const response = await this.routeNamespacedCall(request, 'name', this.toolToServer);
const paginatedResponse = await this.maybePaginate(toolName, response);
// Prepend briefing to the response
if (briefingParts.length > 0 && paginatedResponse.result && !paginatedResponse.error) {
const result = paginatedResponse.result as { content?: Array<{ type: string; text: string }> };
const briefing = briefingParts.join('\n');
if (result.content && Array.isArray(result.content)) {
result.content.unshift({ type: 'text', text: briefing });
} else {
(paginatedResponse.result as Record<string, unknown>)['_briefing'] = briefing;
}
}
return paginatedResponse;
} catch {
// If prompt retrieval fails, just ungate and route normally
this.sessionGate.ungate(sessionId, tags, { fullContent: [], indexOnly: [], remaining: [] });
return this.routeNamespacedCall(request, 'name', this.toolToServer);
}
}
/**
* Fetch prompt index from mcpd with caching.
*/
private async fetchPromptIndex(): Promise<PromptIndexEntry[]> {
const now = Date.now();
if (this.cachedPromptIndex && (now - this.promptIndexFetchedAt) < this.PROMPT_INDEX_TTL_MS) {
return this.cachedPromptIndex;
}
if (!this.mcpdClient || !this.projectName) {
return [];
}
const index = await this.mcpdClient.get<Array<{
name: string;
priority: number;
summary: string | null;
chapters: string[] | null;
content?: string;
}>>(
`/api/v1/projects/${encodeURIComponent(this.projectName)}/prompts/visible`,
);
this.cachedPromptIndex = index.map((p) => ({
name: p.name,
priority: p.priority,
summary: p.summary,
chapters: p.chapters,
content: p.content ?? '',
}));
this.promptIndexFetchedAt = now;
return this.cachedPromptIndex;
}
/**
* Build instructions for gated projects: base instructions + gate message + prompt index.
*/
private async buildGatedInstructions(baseInstructions: string): Promise<string> {
const parts: string[] = [];
if (baseInstructions) {
parts.push(baseInstructions);
}
const gateInstructions = await this.getSystemPrompt(
'gate-instructions',
'IMPORTANT: This project uses a gated session. You must call begin_session with keywords describing your task before using any other tools. This will provide you with relevant project context, policies, and guidelines.',
);
parts.push(`\n${gateInstructions}`);
// Append compact prompt index so the LLM knows what's available
try {
const promptIndex = await this.fetchPromptIndex();
if (promptIndex.length > 0) {
// Cap at 50 entries; if over 50, show priority 7+ only
let displayIndex = promptIndex;
if (displayIndex.length > 50) {
displayIndex = displayIndex.filter((p) => p.priority >= 7);
}
// Sort by priority descending
displayIndex.sort((a, b) => b.priority - a.priority);
parts.push('\nAvailable project prompts:');
for (const p of displayIndex) {
const summary = p.summary ? `: ${p.summary}` : '';
parts.push(`- ${p.name} (priority ${p.priority})${summary}`);
}
parts.push(
'\nChoose your begin_session keywords based on which of these prompts seem relevant to your task.',
);
}
} catch {
// Prompt index is optional — don't fail initialization
}
return parts.join('\n');
}
/**
* Fetch a system prompt from mcpctl-system project, with caching and fallback.
*/
private async getSystemPrompt(name: string, fallback: string): Promise<string> {
const now = Date.now();
const cached = this.systemPromptCache.get(name);
if (cached && (now - cached.fetchedAt) < this.SYSTEM_PROMPT_TTL_MS) {
return cached.content;
}
if (!this.mcpdClient) return fallback;
try {
const visible = await this.mcpdClient.get<Array<{ name: string; content: string }>>(
'/api/v1/projects/mcpctl-system/prompts/visible',
);
// Cache all system prompts from the response
for (const p of visible) {
this.systemPromptCache.set(p.name, { content: p.content, fetchedAt: now });
}
const found = visible.find((p) => p.name === name);
return found?.content ?? fallback;
} catch {
return fallback;
}
}
// ── Session cleanup ──
cleanupSession(sessionId: string): void {
this.sessionGate.removeSession(sessionId);
}
getUpstreamNames(): string[] {
return [...this.upstreams.keys()];
}

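For context on the gate handlers above, the exchange they accept is a plain JSON-RPC 2.0 `tools/call`. This is a sketch with illustrative field values; the shapes follow the `inputSchema` declared in `getBeginSessionTool()` and the `-32602` error path in `handleBeginSession`:

```typescript
// Sketch of the begin_session exchange handled by McpRouter.
// The tag values are hypothetical examples, not fixed vocabulary.
const beginSessionRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    name: 'begin_session',
    // 3-7 keywords describing the current task (schema caps at 10 items)
    arguments: { tags: ['zigbee', 'pairing', 'mqtt'] },
  },
};

// On success the router replies with matched prompt content as one text
// block; a missing or empty tags array instead yields this error:
const emptyTagsError = {
  jsonrpc: '2.0' as const,
  id: 1,
  error: { code: -32602, message: 'Missing or empty tags array' },
};
```

A client only needs to issue this once per session; afterwards `read_prompts` takes the same `tags` argument shape.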
@@ -0,0 +1,133 @@
import type { McpdClient } from '../http/mcpd-client.js';
export interface LinkResolution {
content: string | null;
status: 'alive' | 'dead' | 'unknown';
error?: string;
}
interface CacheEntry {
resolution: LinkResolution;
expiresAt: number;
}
interface ParsedLink {
project: string;
server: string;
uri: string;
}
/**
* Resolves prompt links by fetching MCP resources from source projects via mcpd.
* Link format: project/server:resource-uri
*/
export class LinkResolver {
private cache = new Map<string, CacheEntry>();
constructor(
private readonly mcpdClient: McpdClient,
private readonly cacheTtlMs = 5 * 60 * 1000, // 5 minutes
) {}
/**
* Parse a link target string into its components.
* Format: project/server:resource-uri
*/
parseLink(linkTarget: string): ParsedLink {
const slashIdx = linkTarget.indexOf('/');
if (slashIdx < 1) throw new Error(`Invalid link format (missing project): ${linkTarget}`);
const project = linkTarget.slice(0, slashIdx);
const rest = linkTarget.slice(slashIdx + 1);
const colonIdx = rest.indexOf(':');
if (colonIdx < 1) throw new Error(`Invalid link format (missing server:uri): ${linkTarget}`);
const server = rest.slice(0, colonIdx);
const uri = rest.slice(colonIdx + 1);
if (!uri) throw new Error(`Invalid link format (empty uri): ${linkTarget}`);
return { project, server, uri };
}
/**
* Resolve a link target and return the fetched content + status.
* Results are cached with a configurable TTL.
*/
async resolve(linkTarget: string): Promise<LinkResolution> {
// Check cache first
const cached = this.cache.get(linkTarget);
if (cached && cached.expiresAt > Date.now()) {
return cached.resolution;
}
let resolution: LinkResolution;
try {
const { project, server, uri } = this.parseLink(linkTarget);
const content = await this.fetchResource(project, server, uri);
resolution = { content, status: 'alive' };
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
console.error(`[link-resolver] Dead link: ${linkTarget} (${message})`);
resolution = { content: null, status: 'dead', error: message };
}
// Cache the result
this.cache.set(linkTarget, {
resolution,
expiresAt: Date.now() + this.cacheTtlMs,
});
return resolution;
}
/**
* Check link health without returning full content (uses cache if available).
*/
async checkHealth(linkTarget: string): Promise<'alive' | 'dead' | 'unknown'> {
const cached = this.cache.get(linkTarget);
if (cached && cached.expiresAt > Date.now()) {
return cached.resolution.status;
}
// Don't do a full resolve just for health — return unknown
return 'unknown';
}
/** Clear all cached resolutions. */
clearCache(): void {
this.cache.clear();
}
private async fetchResource(project: string, server: string, uri: string): Promise<string> {
// Step 1: Resolve server name → server ID from the project's servers
const servers = await this.mcpdClient.get<Array<{ id: string; name: string }>>(
`/api/v1/projects/${encodeURIComponent(project)}/servers`,
);
const target = servers.find((s) => s.name === server);
if (!target) {
throw new Error(`Server '${server}' not found in project '${project}'`);
}
// Step 2: Call resources/read via the MCP proxy
const proxyResponse = await this.mcpdClient.post<{
result?: { contents?: Array<{ text?: string; uri?: string }> };
error?: { code: number; message: string };
}>('/api/v1/mcp/proxy', {
serverId: target.id,
method: 'resources/read',
params: { uri },
});
if (proxyResponse.error) {
throw new Error(`MCP error: ${proxyResponse.error.message}`);
}
const contents = proxyResponse.result?.contents;
if (!contents || contents.length === 0) {
throw new Error(`No content returned for resource: ${uri}`);
}
// Concatenate all text contents
return contents.map((c) => c.text ?? '').join('\n');
}
}
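The `project/server:resource-uri` grammar splits on the first `/` and the first `:` after it, so the URI itself may freely contain further `:` and `/` characters. A standalone sketch of that rule (a hypothetical helper mirroring `parseLink`, not the class itself):

```typescript
// Illustrative re-implementation of the link-target grammar used by
// LinkResolver. Only the FIRST '/' and the FIRST ':' after it delimit.
function parseLinkTarget(linkTarget: string): { project: string; server: string; uri: string } {
  const slashIdx = linkTarget.indexOf('/');
  if (slashIdx < 1) throw new Error(`Invalid link format (missing project): ${linkTarget}`);
  const project = linkTarget.slice(0, slashIdx);
  const rest = linkTarget.slice(slashIdx + 1);
  const colonIdx = rest.indexOf(':');
  if (colonIdx < 1) throw new Error(`Invalid link format (missing server:uri): ${linkTarget}`);
  const uri = rest.slice(colonIdx + 1);
  if (!uri) throw new Error(`Invalid link format (empty uri): ${linkTarget}`);
  return { project, server: rest.slice(0, colonIdx), uri };
}

// A file:// URI survives intact because only the first ':' is significant:
const parsed = parseLinkTarget('home/docs-server:file:///notes/zigbee.md');
// → { project: 'home', server: 'docs-server', uri: 'file:///notes/zigbee.md' }
```

This is why `checkHealth` can key its cache on the raw `linkTarget` string: the grammar is unambiguous without normalization.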


@@ -63,6 +63,8 @@ export interface ProxyConfig {
export interface UpstreamConnection {
/** Server name */
name: string;
/** Human-readable description of the server's purpose */
description?: string;
/** Send a JSON-RPC request and get a response */
send(request: JsonRpcRequest): Promise<JsonRpcResponse>;
/** Disconnect from the upstream */


@@ -18,14 +18,17 @@ interface McpdProxyResponse {
*/
export class McpdUpstream implements UpstreamConnection {
readonly name: string;
readonly description?: string;
private alive = true;
constructor(
private serverId: string,
serverName: string,
private mcpdClient: McpdClient,
serverDescription?: string,
) {
this.name = serverName;
if (serverDescription !== undefined) this.description = serverDescription;
}
async send(request: JsonRpcRequest): Promise<JsonRpcResponse> {


@@ -0,0 +1,486 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { EventEmitter, Readable } from 'node:stream';
import { AcpClient } from '../src/providers/acp-client.js';
import type { AcpClientConfig } from '../src/providers/acp-client.js';
/**
* Creates a mock child process that speaks ACP protocol.
* Returns the mock process and helpers to send responses.
*/
function createMockProcess() {
const stdin = {
write: vi.fn(),
writable: true,
};
const stdoutEmitter = new EventEmitter();
const stdout = Object.assign(stdoutEmitter, {
readable: true,
// readline needs these
[Symbol.asyncIterator]: undefined,
pause: vi.fn(),
resume: vi.fn(),
isPaused: () => false,
setEncoding: vi.fn(),
read: vi.fn(),
destroy: vi.fn(),
pipe: vi.fn(),
unpipe: vi.fn(),
unshift: vi.fn(),
wrap: vi.fn(),
}) as unknown as Readable;
const proc = Object.assign(new EventEmitter(), {
stdin,
stdout,
stderr: new EventEmitter(),
pid: 12345,
killed: false,
kill: vi.fn(function (this: { killed: boolean }) {
this.killed = true;
}),
});
/** Send a line of JSON from the "agent" to our client */
function sendLine(data: unknown) {
stdoutEmitter.emit('data', Buffer.from(JSON.stringify(data) + '\n'));
}
/** Send a JSON-RPC response */
function sendResponse(id: number, result: unknown) {
sendLine({ jsonrpc: '2.0', id, result });
}
/** Send a JSON-RPC error */
function sendError(id: number, code: number, message: string) {
sendLine({ jsonrpc: '2.0', id, error: { code, message } });
}
/** Send a session/update notification with agent_message_chunk */
function sendChunk(sessionId: string, text: string) {
sendLine({
jsonrpc: '2.0',
method: 'session/update',
params: {
sessionId,
update: {
sessionUpdate: 'agent_message_chunk',
content: [{ type: 'text', text }],
},
},
});
}
/** Send a session/request_permission request */
function sendPermissionRequest(id: number, sessionId: string) {
sendLine({
jsonrpc: '2.0',
id,
method: 'session/request_permission',
params: { sessionId },
});
}
return { proc, stdin, stdout: stdoutEmitter, sendLine, sendResponse, sendError, sendChunk, sendPermissionRequest };
}
function createConfig(overrides?: Partial<AcpClientConfig>): AcpClientConfig {
return {
binaryPath: '/usr/bin/gemini',
model: 'gemini-2.5-flash',
requestTimeoutMs: 5000,
initTimeoutMs: 5000,
...overrides,
};
}
describe('AcpClient', () => {
let client: AcpClient;
let mock: ReturnType<typeof createMockProcess>;
beforeEach(() => {
mock = createMockProcess();
});
afterEach(() => {
client?.dispose();
});
function createClient(configOverrides?: Partial<AcpClientConfig>) {
const config = createConfig({
spawn: (() => mock.proc) as unknown as AcpClientConfig['spawn'],
...configOverrides,
});
client = new AcpClient(config);
return client;
}
/** Helper: auto-respond to the initialize + session/new handshake */
function autoHandshake(sessionId = 'test-session') {
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
if (msg.method === 'initialize') {
// Respond async to simulate real behavior
setImmediate(() => mock.sendResponse(msg.id, {
protocolVersion: 1,
agentInfo: { name: 'gemini-cli', version: '1.0.0' },
}));
} else if (msg.method === 'session/new') {
setImmediate(() => mock.sendResponse(msg.id, { sessionId }));
}
});
}
describe('ensureReady', () => {
it('spawns process and completes ACP handshake', async () => {
createClient();
autoHandshake();
await client.ensureReady();
expect(client.isAlive).toBe(true);
// Verify initialize was sent
const calls = mock.stdin.write.mock.calls.map((c) => JSON.parse(c[0] as string));
expect(calls[0].method).toBe('initialize');
expect(calls[0].params.protocolVersion).toBe(1);
expect(calls[0].params.clientInfo.name).toBe('mcpctl');
// Verify session/new was sent
expect(calls[1].method).toBe('session/new');
expect(calls[1].params.cwd).toBe('/tmp');
expect(calls[1].params.mcpServers).toEqual([]);
});
it('is idempotent when already ready', async () => {
createClient();
autoHandshake();
await client.ensureReady();
await client.ensureReady();
// Should only have sent initialize + session/new once
const calls = mock.stdin.write.mock.calls;
expect(calls.length).toBe(2);
});
it('shares init promise for concurrent calls', async () => {
createClient();
autoHandshake();
const p1 = client.ensureReady();
const p2 = client.ensureReady();
await Promise.all([p1, p2]);
expect(mock.stdin.write.mock.calls.length).toBe(2);
});
});
describe('prompt', () => {
it('sends session/prompt and collects agent_message_chunk text', async () => {
createClient();
const sessionId = 'sess-123';
autoHandshake(sessionId);
await client.ensureReady();
// Now set up the prompt response handler
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
if (msg.method === 'session/prompt') {
setImmediate(() => {
mock.sendChunk(sessionId, 'Hello ');
mock.sendChunk(sessionId, 'world!');
mock.sendResponse(msg.id, { stopReason: 'end_turn' });
});
}
});
const result = await client.prompt('Say hello');
expect(result).toBe('Hello world!');
});
it('handles multi-block content in a single chunk', async () => {
createClient();
autoHandshake('sess-1');
await client.ensureReady();
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
if (msg.method === 'session/prompt') {
setImmediate(() => {
mock.sendLine({
jsonrpc: '2.0',
method: 'session/update',
params: {
sessionId: 'sess-1',
update: {
sessionUpdate: 'agent_message_chunk',
content: [
{ type: 'text', text: 'Part A' },
{ type: 'text', text: ' Part B' },
],
},
},
});
mock.sendResponse(msg.id, { stopReason: 'end_turn' });
});
}
});
const result = await client.prompt('test');
expect(result).toBe('Part A Part B');
});
it('handles single-object content (real Gemini ACP format)', async () => {
createClient();
autoHandshake('sess-1');
await client.ensureReady();
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
if (msg.method === 'session/prompt') {
setImmediate(() => {
// Real Gemini ACP sends content as a single object, not an array
mock.sendLine({
jsonrpc: '2.0',
method: 'session/update',
params: {
sessionId: 'sess-1',
update: {
sessionUpdate: 'agent_message_chunk',
content: { type: 'text', text: 'ok' },
},
},
});
mock.sendResponse(msg.id, { stopReason: 'end_turn' });
});
}
});
const result = await client.prompt('test');
expect(result).toBe('ok');
});
it('ignores agent_thought_chunk notifications', async () => {
createClient();
autoHandshake('sess-1');
await client.ensureReady();
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
if (msg.method === 'session/prompt') {
setImmediate(() => {
// Gemini sends thought chunks before message chunks
mock.sendLine({
jsonrpc: '2.0',
method: 'session/update',
params: {
sessionId: 'sess-1',
update: {
sessionUpdate: 'agent_thought_chunk',
content: { type: 'text', text: 'Thinking about it...' },
},
},
});
mock.sendLine({
jsonrpc: '2.0',
method: 'session/update',
params: {
sessionId: 'sess-1',
update: {
sessionUpdate: 'agent_message_chunk',
content: { type: 'text', text: 'ok' },
},
},
});
mock.sendResponse(msg.id, { stopReason: 'end_turn' });
});
}
});
const result = await client.prompt('test');
expect(result).toBe('ok');
});
it('calls ensureReady automatically (lazy init)', async () => {
createClient();
autoHandshake('sess-auto');
// Handle the handshake and the prompt in a single mock implementation
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
if (msg.method === 'initialize') {
setImmediate(() => mock.sendResponse(msg.id, { protocolVersion: 1 }));
} else if (msg.method === 'session/new') {
setImmediate(() => mock.sendResponse(msg.id, { sessionId: 'sess-auto' }));
} else if (msg.method === 'session/prompt') {
setImmediate(() => {
mock.sendChunk('sess-auto', 'ok');
mock.sendResponse(msg.id, { stopReason: 'end_turn' });
});
}
});
// Call prompt directly without ensureReady
const result = await client.prompt('test');
expect(result).toBe('ok');
});
});
describe('auto-restart', () => {
it('restarts after process exit', async () => {
createClient();
autoHandshake('sess-1');
await client.ensureReady();
expect(client.isAlive).toBe(true);
// Simulate process exit
mock.proc.killed = true;
mock.proc.emit('exit', 1);
expect(client.isAlive).toBe(false);
// Create a new mock for the respawned process
mock = createMockProcess();
// Update the spawn function to return new mock
(client as unknown as { config: { spawn: unknown } }).config.spawn = () => mock.proc;
autoHandshake('sess-2');
await client.ensureReady();
expect(client.isAlive).toBe(true);
});
});
describe('timeout', () => {
it('kills process and rejects on request timeout', async () => {
createClient({ requestTimeoutMs: 50 });
autoHandshake('sess-1');
await client.ensureReady();
// Don't respond to the prompt — let it timeout
mock.stdin.write.mockImplementation(() => {});
await expect(client.prompt('test')).rejects.toThrow('timed out');
expect(client.isAlive).toBe(false);
});
it('rejects on init timeout', async () => {
createClient({ initTimeoutMs: 50 });
// Don't respond to initialize
mock.stdin.write.mockImplementation(() => {});
await expect(client.ensureReady()).rejects.toThrow('timed out');
});
});
describe('error handling', () => {
it('rejects on ACP error response', async () => {
createClient();
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
setImmediate(() => mock.sendError(msg.id, -32603, 'Internal error'));
});
await expect(client.ensureReady()).rejects.toThrow('ACP error -32603: Internal error');
});
it('rejects pending requests on process crash', async () => {
createClient();
autoHandshake('sess-1');
await client.ensureReady();
// Override write so prompt sends but gets no response; then crash the process
mock.stdin.write.mockImplementation(() => {
// After the prompt is sent, simulate a process crash
setImmediate(() => {
mock.proc.killed = true;
mock.proc.emit('exit', 1);
});
});
const promptPromise = client.prompt('test');
await expect(promptPromise).rejects.toThrow('process exited');
});
});
describe('permission requests', () => {
it('rejects session/request_permission from agent', async () => {
createClient();
autoHandshake('sess-1');
await client.ensureReady();
mock.stdin.write.mockImplementation((data: string) => {
const msg = JSON.parse(data.trim()) as { id: number; method: string };
if (msg.method === 'session/prompt') {
setImmediate(() => {
// Agent asks for permission first
mock.sendPermissionRequest(100, 'sess-1');
// Then provides the actual response
mock.sendChunk('sess-1', 'done');
mock.sendResponse(msg.id, { stopReason: 'end_turn' });
});
}
});
const result = await client.prompt('test');
expect(result).toBe('done');
// Verify we sent a rejection for the permission request
const writes = mock.stdin.write.mock.calls.map((c) => {
try { return JSON.parse(c[0] as string); } catch { return null; }
}).filter(Boolean);
const rejection = writes.find((w: Record<string, unknown>) => w.id === 100);
expect(rejection).toBeTruthy();
expect((rejection as { result: { outcome: { outcome: string } } }).result.outcome.outcome).toBe('cancelled');
});
});
describe('dispose', () => {
it('kills process and rejects pending', async () => {
createClient();
autoHandshake('sess-1');
await client.ensureReady();
// Override write so prompt is sent but gets no response; then dispose
mock.stdin.write.mockImplementation(() => {
setImmediate(() => client.dispose());
});
const promptPromise = client.prompt('test');
await expect(promptPromise).rejects.toThrow('disposed');
expect(mock.proc.kill).toHaveBeenCalledWith('SIGTERM');
});
it('is safe to call multiple times', () => {
createClient();
client.dispose();
client.dispose();
// No error thrown
});
});
describe('isAlive', () => {
it('returns false before init', () => {
createClient();
expect(client.isAlive).toBe(false);
});
it('returns true after successful init', async () => {
createClient();
autoHandshake();
await client.ensureReady();
expect(client.isAlive).toBe(true);
});
it('returns false after dispose', async () => {
createClient();
autoHandshake();
await client.ensureReady();
client.dispose();
expect(client.isAlive).toBe(false);
});
});
});
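The timeout and crash tests above pin down a race between a pending request and a kill timer. A minimal sketch of that pattern, assuming nothing about the real `AcpClient` internals (the `withTimeout` helper and its signature are hypothetical):

```typescript
// Hypothetical sketch: race a pending request against a timer. On timeout,
// invoke a cleanup callback (e.g. kill the child process) and reject with a
// "timed out" error, matching what the tests above assert.
function withTimeout<T>(p: Promise<T>, ms: number, onTimeout: () => void): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => {
      onTimeout(); // e.g. proc.kill('SIGTERM')
      reject(new Error(`request timed out after ${ms}ms`));
    }, ms);
    p.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err: unknown) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

Clearing the timer on both settlement paths keeps a stray timeout from firing after the request has already settled.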


@@ -0,0 +1,200 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
const mockEnsureReady = vi.fn(async () => {});
const mockPrompt = vi.fn(async () => 'mock response');
const mockDispose = vi.fn();
vi.mock('../src/providers/acp-client.js', () => ({
AcpClient: vi.fn(function (this: Record<string, unknown>) {
this.ensureReady = mockEnsureReady;
this.prompt = mockPrompt;
this.dispose = mockDispose;
}),
}));
// Must import after mock setup
const { GeminiAcpProvider } = await import('../src/providers/gemini-acp.js');
describe('GeminiAcpProvider', () => {
let provider: InstanceType<typeof GeminiAcpProvider>;
beforeEach(() => {
vi.clearAllMocks();
mockPrompt.mockResolvedValue('mock response');
provider = new GeminiAcpProvider({ binaryPath: '/usr/bin/gemini', defaultModel: 'gemini-2.5-flash' });
});
describe('complete', () => {
it('builds prompt from messages and returns CompletionResult', async () => {
mockPrompt.mockResolvedValueOnce('The answer is 42.');
const result = await provider.complete({
messages: [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'What is the answer?' },
],
});
expect(result.content).toBe('The answer is 42.');
expect(result.toolCalls).toEqual([]);
expect(result.finishReason).toBe('stop');
const promptText = mockPrompt.mock.calls[0][0] as string;
expect(promptText).toContain('System: You are helpful.');
expect(promptText).toContain('What is the answer?');
});
it('formats assistant messages with prefix', async () => {
mockPrompt.mockResolvedValueOnce('ok');
await provider.complete({
messages: [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: 'How are you?' },
],
});
const promptText = mockPrompt.mock.calls[0][0] as string;
expect(promptText).toContain('Assistant: Hi there');
});
it('trims response content', async () => {
mockPrompt.mockResolvedValueOnce(' padded response \n');
const result = await provider.complete({
messages: [{ role: 'user', content: 'test' }],
});
expect(result.content).toBe('padded response');
});
it('serializes concurrent calls to same model', async () => {
const callOrder: number[] = [];
let callCount = 0;
mockPrompt.mockImplementation(async () => {
const myCall = ++callCount;
callOrder.push(myCall);
await new Promise((r) => setTimeout(r, 10));
return `response-${myCall}`;
});
const [r1, r2, r3] = await Promise.all([
provider.complete({ messages: [{ role: 'user', content: 'a' }] }),
provider.complete({ messages: [{ role: 'user', content: 'b' }] }),
provider.complete({ messages: [{ role: 'user', content: 'c' }] }),
]);
expect(r1.content).toBe('response-1');
expect(r2.content).toBe('response-2');
expect(r3.content).toBe('response-3');
expect(callOrder).toEqual([1, 2, 3]);
});
it('continues queue after error', async () => {
mockPrompt
.mockRejectedValueOnce(new Error('first fails'))
.mockResolvedValueOnce('second works');
const results = await Promise.allSettled([
provider.complete({ messages: [{ role: 'user', content: 'a' }] }),
provider.complete({ messages: [{ role: 'user', content: 'b' }] }),
]);
expect(results[0].status).toBe('rejected');
expect(results[1].status).toBe('fulfilled');
if (results[1].status === 'fulfilled') {
expect(results[1].value.content).toBe('second works');
}
});
});
describe('session pool', () => {
it('creates separate pool entries for different models', async () => {
mockPrompt.mockResolvedValue('ok');
await provider.complete({ messages: [{ role: 'user', content: 'a' }], model: 'gemini-2.5-flash' });
await provider.complete({ messages: [{ role: 'user', content: 'b' }], model: 'gemini-2.5-pro' });
expect(provider.poolSize).toBe(2);
});
it('reuses existing pool entry for same model', async () => {
mockPrompt.mockResolvedValue('ok');
await provider.complete({ messages: [{ role: 'user', content: 'a' }], model: 'gemini-2.5-flash' });
await provider.complete({ messages: [{ role: 'user', content: 'b' }], model: 'gemini-2.5-flash' });
expect(provider.poolSize).toBe(1);
});
it('uses defaultModel when no model specified', async () => {
mockPrompt.mockResolvedValue('ok');
await provider.complete({ messages: [{ role: 'user', content: 'a' }] });
expect(provider.poolSize).toBe(1);
});
it('evicts idle sessions', async () => {
// Use a very short TTL for testing
const shortTtl = new GeminiAcpProvider({
binaryPath: '/usr/bin/gemini',
defaultModel: 'gemini-2.5-flash',
idleTtlMs: 1, // 1ms TTL
});
mockPrompt.mockResolvedValue('ok');
await shortTtl.complete({ messages: [{ role: 'user', content: 'a' }], model: 'model-a' });
expect(shortTtl.poolSize).toBe(1);
// Wait for TTL to expire
await new Promise((r) => setTimeout(r, 10));
// Next complete call triggers eviction of old entry and creates new one
await shortTtl.complete({ messages: [{ role: 'user', content: 'b' }], model: 'model-b' });
// model-a should have been evicted, only model-b remains
expect(shortTtl.poolSize).toBe(1);
expect(mockDispose).toHaveBeenCalled();
shortTtl.dispose();
});
it('dispose kills all pooled clients', async () => {
mockPrompt.mockResolvedValue('ok');
await provider.complete({ messages: [{ role: 'user', content: 'a' }], model: 'model-a' });
await provider.complete({ messages: [{ role: 'user', content: 'b' }], model: 'model-b' });
expect(provider.poolSize).toBe(2);
provider.dispose();
expect(provider.poolSize).toBe(0);
expect(mockDispose).toHaveBeenCalledTimes(2);
});
});
describe('listModels', () => {
it('returns static model list', async () => {
const models = await provider.listModels();
expect(models).toContain('gemini-2.5-flash');
expect(models).toContain('gemini-2.5-pro');
expect(models).toContain('gemini-2.0-flash');
});
});
describe('dispose', () => {
it('delegates to all pooled AcpClients', async () => {
mockPrompt.mockResolvedValue('ok');
await provider.complete({ messages: [{ role: 'user', content: 'test' }] });
provider.dispose();
expect(mockDispose).toHaveBeenCalled();
});
});
describe('name', () => {
it('is gemini-cli for config compatibility', () => {
expect(provider.name).toBe('gemini-cli');
});
});
});
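The 'serializes concurrent calls' and 'continues queue after error' tests imply a per-model promise chain. A sketch of that queueing idea, under the assumption that the provider serializes roughly like this (the `SerialQueue` class and its shape are hypothetical):

```typescript
// Hypothetical sketch: each model key would own one of these queues. Tasks
// chain onto the tail so they run strictly one at a time; the tail swallows
// rejections so one failing task does not wedge the queue.
class SerialQueue {
  private tail: Promise<unknown> = Promise.resolve();

  run<T>(task: () => Promise<T>): Promise<T> {
    // Wait for the previous task to settle (ignoring its outcome), then run.
    const result = this.tail.catch(() => undefined).then(task);
    // Keep the chain alive even if this task rejects.
    this.tail = result.catch(() => undefined);
    return result;
  }
}
```

The caller still sees each task's own rejection; only the internal chain ignores it, which is exactly the split the `Promise.allSettled` test checks.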


@@ -0,0 +1,69 @@
import { describe, it, expect, vi, afterEach, beforeEach } from 'vitest';
import { loadLlmConfig, resetConfigCache } from '../../src/http/config.js';
import { existsSync, readFileSync } from 'node:fs';
vi.mock('node:fs', async () => {
const actual = await vi.importActual<typeof import('node:fs')>('node:fs');
return {
...actual,
existsSync: vi.fn(),
readFileSync: vi.fn(),
};
});
beforeEach(() => {
resetConfigCache();
});
afterEach(() => {
vi.restoreAllMocks();
});
describe('loadLlmConfig', () => {
it('returns undefined when config file does not exist', () => {
vi.mocked(existsSync).mockReturnValue(false);
expect(loadLlmConfig()).toBeUndefined();
});
it('returns undefined when config has no llm section', () => {
vi.mocked(existsSync).mockReturnValue(true);
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({ mcplocalUrl: 'http://localhost:3200' }));
expect(loadLlmConfig()).toBeUndefined();
});
it('returns undefined when provider is none', () => {
vi.mocked(existsSync).mockReturnValue(true);
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({ llm: { provider: 'none' } }));
expect(loadLlmConfig()).toBeUndefined();
});
it('returns LLM config when provider is configured', () => {
vi.mocked(existsSync).mockReturnValue(true);
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
llm: { provider: 'anthropic', model: 'claude-haiku-3-5-20241022' },
}));
const result = loadLlmConfig();
expect(result).toEqual({ provider: 'anthropic', model: 'claude-haiku-3-5-20241022' });
});
it('returns full LLM config with all fields', () => {
vi.mocked(existsSync).mockReturnValue(true);
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
llm: { provider: 'vllm', model: 'my-model', url: 'http://gpu:8000' },
}));
const result = loadLlmConfig();
expect(result).toEqual({ provider: 'vllm', model: 'my-model', url: 'http://gpu:8000' });
});
it('returns undefined on malformed JSON', () => {
vi.mocked(existsSync).mockReturnValue(true);
vi.mocked(readFileSync).mockReturnValue('NOT JSON!!!');
expect(loadLlmConfig()).toBeUndefined();
});
it('returns undefined on read error', () => {
vi.mocked(existsSync).mockReturnValue(true);
vi.mocked(readFileSync).mockImplementation(() => { throw new Error('EACCES'); });
expect(loadLlmConfig()).toBeUndefined();
});
});
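The config tests above describe a consistent "never throw, return undefined" contract. A sketch of the parsing half of that contract (the `parseLlmSection` helper is hypothetical; the real `loadLlmConfig` also handles file existence, reading, and caching):

```typescript
// Hypothetical sketch: turn raw config-file text into an LLM section, or
// undefined for every failure mode the tests enumerate (missing/unreadable
// file passed in as null, no llm section, provider "none", malformed JSON).
interface LlmSection { provider: string; model?: string; url?: string }

function parseLlmSection(raw: string | null): LlmSection | undefined {
  if (raw === null) return undefined; // file missing or unreadable
  try {
    const parsed = JSON.parse(raw) as { llm?: Partial<LlmSection> };
    const llm = parsed.llm;
    if (!llm?.provider || llm.provider === 'none') return undefined;
    return llm as LlmSection;
  } catch {
    return undefined; // malformed JSON
  }
}
```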


@@ -6,13 +6,14 @@
* (node:http) and a mock LLM provider. No Docker or external services needed.
*/
import { describe, it, expect, vi, beforeEach, afterEach, afterAll } from 'vitest';
import { createServer, type Server, type IncomingMessage, type ServerResponse } from 'node:http';
import { McpRouter } from '../../src/router.js';
import { McpdUpstream } from '../../src/upstream/mcpd.js';
import { McpdClient } from '../../src/http/mcpd-client.js';
import { LlmProcessor, DEFAULT_PROCESSOR_CONFIG } from '../../src/llm/processor.js';
import { ResponsePaginator } from '../../src/llm/pagination.js';
import { ProviderRegistry } from '../../src/providers/registry.js';
import { TieredHealthMonitor } from '../../src/health/tiered.js';
import { refreshUpstreams } from '../../src/discovery.js';
@@ -1096,4 +1097,429 @@ describe('End-to-end integration: 3-tier architecture', () => {
}
});
});
// -----------------------------------------------------------------------
// 8. Smart pagination through the full pipeline
// -----------------------------------------------------------------------
describe('Smart pagination', () => {
// Helper: generate a large JSON response (~100KB)
function makeLargeToolResult(): { flows: Array<{ id: string; type: string; label: string; wires: string[] }> } {
return {
flows: Array.from({ length: 200 }, (_, i) => ({
id: `flow-${String(i).padStart(4, '0')}`,
type: i % 3 === 0 ? 'function' : i % 3 === 1 ? 'http request' : 'inject',
label: `Node ${String(i)}: ${i % 3 === 0 ? 'Data transform' : i % 3 === 1 ? 'API call' : 'Timer trigger'}`,
wires: [`flow-${String(i + 1).padStart(4, '0')}`],
})),
};
}
it('paginates large tool response with smart AI summaries through router', async () => {
const largeResult = makeLargeToolResult();
mockMcpd = await startMockMcpd({
servers: [{ id: 'srv-nodered', name: 'node-red', transport: 'stdio' }],
proxyResponses: new Map([
['srv-nodered:tools/list', {
result: { tools: [{ name: 'get_flows', description: 'Get all flows' }] },
}],
['srv-nodered:tools/call', {
result: largeResult,
}],
]),
});
const client = new McpdClient(mockMcpd.baseUrl, mockMcpd.config.expectedToken);
router = new McpRouter();
await refreshUpstreams(router, client);
await router.discoverTools();
// Set up paginator with LLM provider for smart summaries
const registry = new ProviderRegistry();
const completeFn = vi.fn().mockImplementation(() => ({
content: JSON.stringify([
{ page: 1, summary: 'Function nodes and data transforms (flow-0000 through flow-0050)' },
{ page: 2, summary: 'HTTP request nodes and API integrations (flow-0051 through flow-0100)' },
{ page: 3, summary: 'Inject/timer nodes and triggers (flow-0101 through flow-0150)' },
{ page: 4, summary: 'Remaining nodes and wire connections (flow-0151 through flow-0199)' },
]),
}));
const mockProvider: LlmProvider = {
name: 'test-paginator',
isAvailable: () => true,
complete: completeFn,
};
registry.register(mockProvider);
// Low threshold so our response triggers pagination
const paginator = new ResponsePaginator(registry, {
sizeThreshold: 1000,
pageSize: 8000,
});
router.setPaginator(paginator);
// Call the tool — should get pagination index, not raw data
const response = await router.route({
jsonrpc: '2.0',
id: 'paginate-1',
method: 'tools/call',
params: { name: 'node-red/get_flows', arguments: {} },
});
expect(response.error).toBeUndefined();
const result = response.result as { content: Array<{ type: string; text: string }> };
expect(result.content).toHaveLength(1);
const indexText = result.content[0]!.text;
// Verify smart index with AI summaries
expect(indexText).toContain('AI-generated summaries');
expect(indexText).toContain('Function nodes and data transforms');
expect(indexText).toContain('HTTP request nodes');
expect(indexText).toContain('_resultId');
expect(indexText).toContain('_page');
// LLM was called to generate summaries
expect(completeFn).toHaveBeenCalledOnce();
const llmCall = completeFn.mock.calls[0]![0]!;
expect(llmCall.messages[0].role).toBe('system');
expect(llmCall.messages[1].content).toContain('node-red/get_flows');
});
it('retrieves specific pages after pagination via _resultId/_page', async () => {
const largeResult = makeLargeToolResult();
mockMcpd = await startMockMcpd({
servers: [{ id: 'srv-nodered', name: 'node-red', transport: 'stdio' }],
proxyResponses: new Map([
['srv-nodered:tools/list', {
result: { tools: [{ name: 'get_flows', description: 'Get all flows' }] },
}],
['srv-nodered:tools/call', {
result: largeResult,
}],
]),
});
const client = new McpdClient(mockMcpd.baseUrl, mockMcpd.config.expectedToken);
router = new McpRouter();
await refreshUpstreams(router, client);
await router.discoverTools();
// Simple paginator (no LLM) for predictable behavior
const paginator = new ResponsePaginator(null, {
sizeThreshold: 1000,
pageSize: 8000,
});
router.setPaginator(paginator);
// First call — get the pagination index
const indexResponse = await router.route({
jsonrpc: '2.0',
id: 'idx-1',
method: 'tools/call',
params: { name: 'node-red/get_flows', arguments: {} },
});
expect(indexResponse.error).toBeUndefined();
const indexResult = indexResponse.result as { content: Array<{ text: string }> };
const indexText = indexResult.content[0]!.text;
const resultIdMatch = /"_resultId": "([^"]+)"/.exec(indexText);
expect(resultIdMatch).not.toBeNull();
const resultId = resultIdMatch![1]!;
// Second call — retrieve page 1 via _resultId/_page
const page1Response = await router.route({
jsonrpc: '2.0',
id: 'page-1',
method: 'tools/call',
params: {
name: 'node-red/get_flows',
arguments: { _resultId: resultId, _page: 1 },
},
});
expect(page1Response.error).toBeUndefined();
const page1Result = page1Response.result as { content: Array<{ text: string }> };
expect(page1Result.content[0]!.text).toContain('Page 1/');
// Page content should contain flow data
expect(page1Result.content[0]!.text).toContain('flow-');
// Third call — retrieve page 2
const page2Response = await router.route({
jsonrpc: '2.0',
id: 'page-2',
method: 'tools/call',
params: {
name: 'node-red/get_flows',
arguments: { _resultId: resultId, _page: 2 },
},
});
expect(page2Response.error).toBeUndefined();
const page2Result = page2Response.result as { content: Array<{ text: string }> };
expect(page2Result.content[0]!.text).toContain('Page 2/');
});
it('retrieves full content with _page=all', async () => {
const largeResult = makeLargeToolResult();
mockMcpd = await startMockMcpd({
servers: [{ id: 'srv-nodered', name: 'node-red', transport: 'stdio' }],
proxyResponses: new Map([
['srv-nodered:tools/list', {
result: { tools: [{ name: 'get_flows', description: 'Get all flows' }] },
}],
['srv-nodered:tools/call', {
result: largeResult,
}],
]),
});
const client = new McpdClient(mockMcpd.baseUrl, mockMcpd.config.expectedToken);
router = new McpRouter();
await refreshUpstreams(router, client);
await router.discoverTools();
const paginator = new ResponsePaginator(null, {
sizeThreshold: 1000,
pageSize: 8000,
});
router.setPaginator(paginator);
// Get index
const indexResponse = await router.route({
jsonrpc: '2.0',
id: 'all-idx',
method: 'tools/call',
params: { name: 'node-red/get_flows', arguments: {} },
});
const indexText = (indexResponse.result as { content: Array<{ text: string }> }).content[0]!.text;
const resultId = /"_resultId": "([^"]+)"/.exec(indexText)![1]!;
// Request all pages
const allResponse = await router.route({
jsonrpc: '2.0',
id: 'all-1',
method: 'tools/call',
params: {
name: 'node-red/get_flows',
arguments: { _resultId: resultId, _page: 'all' },
},
});
expect(allResponse.error).toBeUndefined();
const allResult = allResponse.result as { content: Array<{ text: string }> };
// Full response should contain the original JSON
const fullText = allResult.content[0]!.text;
expect(fullText).toContain('flow-0000');
expect(fullText).toContain('flow-0199');
// Should be the full serialized result
expect(JSON.parse(fullText)).toEqual(largeResult);
});
it('falls back to simple index when LLM fails', async () => {
const largeResult = makeLargeToolResult();
mockMcpd = await startMockMcpd({
servers: [{ id: 'srv-nodered', name: 'node-red', transport: 'stdio' }],
proxyResponses: new Map([
['srv-nodered:tools/list', {
result: { tools: [{ name: 'get_flows', description: 'Get all flows' }] },
}],
['srv-nodered:tools/call', {
result: largeResult,
}],
]),
});
const client = new McpdClient(mockMcpd.baseUrl, mockMcpd.config.expectedToken);
router = new McpRouter();
await refreshUpstreams(router, client);
await router.discoverTools();
// Set up paginator with a failing LLM
const registry = new ProviderRegistry();
registry.register(createFailingLlmProvider('broken-llm'));
const paginator = new ResponsePaginator(registry, {
sizeThreshold: 1000,
pageSize: 8000,
});
router.setPaginator(paginator);
const response = await router.route({
jsonrpc: '2.0',
id: 'fallback-idx',
method: 'tools/call',
params: { name: 'node-red/get_flows', arguments: {} },
});
expect(response.error).toBeUndefined();
const text = (response.result as { content: Array<{ text: string }> }).content[0]!.text;
// Should still paginate, just without AI summaries
expect(text).toContain('_resultId');
expect(text).not.toContain('AI-generated summaries');
expect(text).toContain('Page 1:');
});
it('returns expired cache message for stale _resultId', async () => {
router = new McpRouter();
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
router.setPaginator(paginator);
// Try to retrieve a page with an unknown resultId
const response = await router.route({
jsonrpc: '2.0',
id: 'stale-1',
method: 'tools/call',
params: {
name: 'anything/tool',
arguments: { _resultId: 'nonexistent-id', _page: 1 },
},
});
expect(response.error).toBeUndefined();
const text = (response.result as { content: Array<{ text: string }> }).content[0]!.text;
expect(text).toContain('expired');
expect(text).toContain('re-call');
});
it('skips pagination for small responses', async () => {
mockMcpd = await startMockMcpd({
servers: [{ id: 'srv-small', name: 'smallserver', transport: 'stdio' }],
proxyResponses: new Map([
['srv-small:tools/list', {
result: { tools: [{ name: 'get_status', description: 'Get status' }] },
}],
['srv-small:tools/call', {
result: { status: 'ok', uptime: 12345 },
}],
]),
});
const client = new McpdClient(mockMcpd.baseUrl, mockMcpd.config.expectedToken);
router = new McpRouter();
await refreshUpstreams(router, client);
await router.discoverTools();
const paginator = new ResponsePaginator(null, { sizeThreshold: 80000, pageSize: 40000 });
router.setPaginator(paginator);
const response = await router.route({
jsonrpc: '2.0',
id: 'small-1',
method: 'tools/call',
params: { name: 'smallserver/get_status', arguments: {} },
});
expect(response.error).toBeUndefined();
// Should return the raw result directly, not a pagination index
expect(response.result).toEqual({ status: 'ok', uptime: 12345 });
});
it('handles markdown-fenced LLM responses (Gemini quirk)', async () => {
const largeResult = makeLargeToolResult();
mockMcpd = await startMockMcpd({
servers: [{ id: 'srv-nodered', name: 'node-red', transport: 'stdio' }],
proxyResponses: new Map([
['srv-nodered:tools/list', {
result: { tools: [{ name: 'get_flows', description: 'Get all flows' }] },
}],
['srv-nodered:tools/call', {
result: largeResult,
}],
]),
});
const client = new McpdClient(mockMcpd.baseUrl, mockMcpd.config.expectedToken);
router = new McpRouter();
await refreshUpstreams(router, client);
await router.discoverTools();
// Simulate Gemini wrapping JSON in ```json fences
const registry = new ProviderRegistry();
const mockProvider: LlmProvider = {
name: 'gemini-mock',
isAvailable: () => true,
complete: vi.fn().mockResolvedValue({
content: '```json\n' + JSON.stringify([
{ page: 1, summary: 'Climate automation flows' },
{ page: 2, summary: 'Lighting control flows' },
]) + '\n```',
}),
};
registry.register(mockProvider);
const paginator = new ResponsePaginator(registry, {
sizeThreshold: 1000,
pageSize: 8000,
});
router.setPaginator(paginator);
const response = await router.route({
jsonrpc: '2.0',
id: 'fence-1',
method: 'tools/call',
params: { name: 'node-red/get_flows', arguments: {} },
});
expect(response.error).toBeUndefined();
const text = (response.result as { content: Array<{ text: string }> }).content[0]!.text;
// Fences were stripped — smart summaries should appear
expect(text).toContain('AI-generated summaries');
expect(text).toContain('Climate automation flows');
expect(text).toContain('Lighting control flows');
});
it('passes model override to LLM when project has custom model', async () => {
const largeResult = makeLargeToolResult();
mockMcpd = await startMockMcpd({
servers: [{ id: 'srv-nodered', name: 'node-red', transport: 'stdio' }],
proxyResponses: new Map([
['srv-nodered:tools/list', {
result: { tools: [{ name: 'get_flows', description: 'Get all flows' }] },
}],
['srv-nodered:tools/call', {
result: largeResult,
}],
]),
});
const client = new McpdClient(mockMcpd.baseUrl, mockMcpd.config.expectedToken);
router = new McpRouter();
await refreshUpstreams(router, client);
await router.discoverTools();
const registry = new ProviderRegistry();
const completeFn = vi.fn().mockResolvedValue({
content: JSON.stringify([{ page: 1, summary: 'test' }]),
});
const mockProvider: LlmProvider = {
name: 'test-model-override',
isAvailable: () => true,
complete: completeFn,
};
registry.register(mockProvider);
// Paginator with per-project model override
const paginator = new ResponsePaginator(registry, {
sizeThreshold: 1000,
pageSize: 80000, // One big page so we get exactly 1 summary
}, 'gemini-2.5-pro');
router.setPaginator(paginator);
await router.route({
jsonrpc: '2.0',
id: 'model-1',
method: 'tools/call',
params: { name: 'node-red/get_flows', arguments: {} },
});
// Verify the model was passed through to the LLM call
expect(completeFn).toHaveBeenCalledOnce();
const llmOpts = completeFn.mock.calls[0]![0]!;
expect(llmOpts.model).toBe('gemini-2.5-pro');
});
});
});
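The pagination tests revolve around a cache keyed by `_resultId`, with `_page` selecting a numbered slice or `'all'` for the full payload, and an "expired" notice for stale ids. A minimal sketch of that contract (the `PageCache` class and its messages are hypothetical; the real `ResponsePaginator` also does TTL eviction and LLM summaries):

```typescript
// Hypothetical sketch of the _resultId/_page retrieval contract: a large
// serialized result is cached under a generated id, and follow-up calls
// read fixed-size slices, the whole payload, or an "expired" notice.
class PageCache {
  private store = new Map<string, string>();
  private nextId = 0;

  put(text: string): string {
    const id = `res-${++this.nextId}`;
    this.store.set(id, text);
    return id;
  }

  get(id: string, page: number | 'all', pageSize: number): string {
    const text = this.store.get(id);
    if (text === undefined) return 'Result expired; please re-call the tool.';
    if (page === 'all') return text;
    const pages = Math.ceil(text.length / pageSize);
    const slice = text.slice((page - 1) * pageSize, page * pageSize);
    return `Page ${page}/${pages}\n${slice}`;
  }
}
```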


@@ -0,0 +1,241 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { LinkResolver } from '../src/services/link-resolver.js';
import type { McpdClient } from '../src/http/mcpd-client.js';
function mockClient(): McpdClient {
return {
get: vi.fn(),
post: vi.fn(),
put: vi.fn(),
delete: vi.fn(),
forward: vi.fn(),
withHeaders: vi.fn(),
} as unknown as McpdClient;
}
describe('LinkResolver', () => {
let client: McpdClient;
let resolver: LinkResolver;
beforeEach(() => {
client = mockClient();
resolver = new LinkResolver(client, 1000); // 1s TTL for tests
});
// ── parseLink ──
describe('parseLink', () => {
it('parses valid link target', () => {
const result = resolver.parseLink('my-project/docmost-mcp:docmost://pages/abc');
expect(result).toEqual({
project: 'my-project',
server: 'docmost-mcp',
uri: 'docmost://pages/abc',
});
});
it('parses link with complex URI', () => {
const result = resolver.parseLink('proj/srv:file:///path/to/resource');
expect(result).toEqual({
project: 'proj',
server: 'srv',
uri: 'file:///path/to/resource',
});
});
it('throws on missing project separator', () => {
expect(() => resolver.parseLink('noslash')).toThrow('missing project');
});
it('throws on missing server:uri separator', () => {
expect(() => resolver.parseLink('proj/nocolon')).toThrow('missing server:uri');
});
it('throws on empty uri', () => {
expect(() => resolver.parseLink('proj/srv:')).toThrow('empty uri');
});
it('throws when project is empty', () => {
expect(() => resolver.parseLink('/srv:uri')).toThrow('missing project');
});
it('throws when server is empty', () => {
expect(() => resolver.parseLink('proj/:uri')).toThrow('missing server:uri');
});
});
// ── resolve ──
describe('resolve', () => {
it('fetches resource content successfully', async () => {
vi.mocked(client.get).mockResolvedValue([
{ id: 'srv-id-1', name: 'docmost-mcp' },
]);
vi.mocked(client.post).mockResolvedValue({
result: { contents: [{ text: 'Hello from docmost', uri: 'docmost://pages/abc' }] },
});
const result = await resolver.resolve('my-project/docmost-mcp:docmost://pages/abc');
expect(result).toEqual({ content: 'Hello from docmost', status: 'alive' });
expect(client.get).toHaveBeenCalledWith('/api/v1/projects/my-project/servers');
expect(client.post).toHaveBeenCalledWith('/api/v1/mcp/proxy', {
serverId: 'srv-id-1',
method: 'resources/read',
params: { uri: 'docmost://pages/abc' },
});
});
it('returns dead status when server not found in project', async () => {
vi.mocked(client.get).mockResolvedValue([
{ id: 'srv-other', name: 'other-server' },
]);
const result = await resolver.resolve('proj/missing-srv:some://uri');
expect(result.status).toBe('dead');
expect(result.content).toBeNull();
expect(result.error).toContain("Server 'missing-srv' not found");
});
it('returns dead status when MCP proxy returns error', async () => {
vi.mocked(client.get).mockResolvedValue([{ id: 'srv-1', name: 'srv' }]);
vi.mocked(client.post).mockResolvedValue({
error: { code: -32601, message: 'Method not found' },
});
const result = await resolver.resolve('proj/srv:some://uri');
expect(result.status).toBe('dead');
expect(result.error).toContain('Method not found');
});
it('returns dead status when no content returned', async () => {
vi.mocked(client.get).mockResolvedValue([{ id: 'srv-1', name: 'srv' }]);
vi.mocked(client.post).mockResolvedValue({
result: { contents: [] },
});
const result = await resolver.resolve('proj/srv:some://uri');
expect(result.status).toBe('dead');
expect(result.error).toContain('No content returned');
});
it('returns dead status on network error', async () => {
vi.mocked(client.get).mockRejectedValue(new Error('Connection refused'));
const result = await resolver.resolve('proj/srv:some://uri');
expect(result.status).toBe('dead');
expect(result.error).toContain('Connection refused');
});
it('concatenates multiple content entries', async () => {
vi.mocked(client.get).mockResolvedValue([{ id: 'srv-1', name: 'srv' }]);
vi.mocked(client.post).mockResolvedValue({
result: {
contents: [
{ text: 'Part 1', uri: 'uri1' },
{ text: 'Part 2', uri: 'uri2' },
],
},
});
const result = await resolver.resolve('proj/srv:some://uri');
expect(result.content).toBe('Part 1\nPart 2');
expect(result.status).toBe('alive');
});
it('logs dead link to console.error', async () => {
vi.mocked(client.get).mockRejectedValue(new Error('fail'));
const spy = vi.spyOn(console, 'error').mockImplementation(() => {});
await resolver.resolve('proj/srv:some://uri');
expect(spy).toHaveBeenCalledWith(expect.stringContaining('[link-resolver] Dead link'));
spy.mockRestore();
});
});
// ── caching ──
describe('caching', () => {
it('returns cached result on second call', async () => {
vi.mocked(client.get).mockResolvedValue([{ id: 'srv-1', name: 'srv' }]);
vi.mocked(client.post).mockResolvedValue({
result: { contents: [{ text: 'cached content' }] },
});
const first = await resolver.resolve('proj/srv:some://uri');
const second = await resolver.resolve('proj/srv:some://uri');
expect(first).toEqual(second);
// Only one HTTP call — second was cached
expect(client.get).toHaveBeenCalledTimes(1);
});
it('refetches after cache expires', async () => {
vi.mocked(client.get).mockResolvedValue([{ id: 'srv-1', name: 'srv' }]);
vi.mocked(client.post).mockResolvedValue({
result: { contents: [{ text: 'content' }] },
});
await resolver.resolve('proj/srv:some://uri');
// Advance time past TTL
vi.useFakeTimers();
vi.advanceTimersByTime(1500);
await resolver.resolve('proj/srv:some://uri');
expect(client.get).toHaveBeenCalledTimes(2);
vi.useRealTimers();
});
it('clearCache removes all entries', async () => {
vi.mocked(client.get).mockResolvedValue([{ id: 'srv-1', name: 'srv' }]);
vi.mocked(client.post).mockResolvedValue({
result: { contents: [{ text: 'content' }] },
});
await resolver.resolve('proj/srv:some://uri');
resolver.clearCache();
await resolver.resolve('proj/srv:some://uri');
expect(client.get).toHaveBeenCalledTimes(2);
});
});
// ── checkHealth ──
describe('checkHealth', () => {
it('returns cached status if available', async () => {
vi.mocked(client.get).mockResolvedValue([{ id: 'srv-1', name: 'srv' }]);
vi.mocked(client.post).mockResolvedValue({
result: { contents: [{ text: 'content' }] },
});
await resolver.resolve('proj/srv:some://uri');
const health = await resolver.checkHealth('proj/srv:some://uri');
expect(health).toBe('alive');
});
it('returns unknown if not cached', async () => {
const health = await resolver.checkHealth('proj/srv:some://uri');
expect(health).toBe('unknown');
});
it('returns dead from cached dead link', async () => {
vi.mocked(client.get).mockRejectedValue(new Error('fail'));
vi.spyOn(console, 'error').mockImplementation(() => {});
await resolver.resolve('proj/srv:some://uri');
const health = await resolver.checkHealth('proj/srv:some://uri');
expect(health).toBe('dead');
});
});
});
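The `parseLink` tests pin down a `<project>/<server>:<uri>` grammar: split on the first `/` for the project, then on the first `:` of the remainder for the server, so URIs such as `file:///path` survive intact. A standalone sketch of that grammar (a minimal reimplementation for illustration, not the actual `LinkResolver.parseLink` source):

```typescript
// Hypothetical sketch of the link-target grammar the tests above enumerate.
// Splitting on the FIRST '/' and then the FIRST ':' keeps colons and slashes
// inside the resource URI untouched.
interface ParsedLink { project: string; server: string; uri: string }

function parseLink(target: string): ParsedLink {
  const slash = target.indexOf('/');
  if (slash <= 0) throw new Error('invalid link: missing project');
  const project = target.slice(0, slash);
  const rest = target.slice(slash + 1);
  const colon = rest.indexOf(':');
  if (colon <= 0) throw new Error('invalid link: missing server:uri');
  const server = rest.slice(0, colon);
  const uri = rest.slice(colon + 1);
  if (!uri) throw new Error('invalid link: empty uri');
  return { project, server, uri };
}
```

The `<= 0` checks cover both the "separator absent" and "separator first" cases, which is why `/srv:uri` and `proj/:uri` reject with the same messages the tests expect.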


@@ -0,0 +1,135 @@
import { describe, it, expect, vi } from 'vitest';
import { createProviderFromConfig } from '../src/llm-config.js';
import type { SecretStore } from '@mcpctl/shared';
function mockSecretStore(secrets: Record<string, string> = {}): SecretStore {
return {
get: vi.fn(async (key: string) => secrets[key] ?? null),
set: vi.fn(async () => {}),
delete: vi.fn(async () => true),
backend: () => 'mock',
};
}
describe('createProviderFromConfig', () => {
it('returns empty registry for undefined config', async () => {
const store = mockSecretStore();
const registry = await createProviderFromConfig(undefined, store);
expect(registry.getActive()).toBeNull();
expect(registry.list()).toEqual([]);
});
it('returns empty registry for provider=none', async () => {
const store = mockSecretStore();
const registry = await createProviderFromConfig({ provider: 'none' }, store);
expect(registry.getActive()).toBeNull();
});
it('creates gemini-cli provider using ACP', async () => {
const store = mockSecretStore();
const registry = await createProviderFromConfig(
{ provider: 'gemini-cli', model: 'gemini-2.5-flash', binaryPath: '/usr/bin/gemini' },
store,
);
expect(registry.getActive()).not.toBeNull();
expect(registry.getActive()!.name).toBe('gemini-cli');
// ACP provider has dispose method
expect(typeof registry.getActive()!.dispose).toBe('function');
});
it('creates ollama provider', async () => {
const store = mockSecretStore();
const registry = await createProviderFromConfig(
{ provider: 'ollama', model: 'llama3.2', url: 'http://localhost:11434' },
store,
);
expect(registry.getActive()!.name).toBe('ollama');
});
it('creates anthropic provider with API key from secret store', async () => {
const store = mockSecretStore({ 'anthropic-api-key': 'sk-ant-test' });
const registry = await createProviderFromConfig(
{ provider: 'anthropic', model: 'claude-haiku-3-5-20241022' },
store,
);
expect(registry.getActive()!.name).toBe('anthropic');
expect(store.get).toHaveBeenCalledWith('anthropic-api-key');
});
it('returns empty registry when anthropic API key is missing', async () => {
const store = mockSecretStore();
const stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true);
const registry = await createProviderFromConfig(
{ provider: 'anthropic', model: 'claude-haiku-3-5-20241022' },
store,
);
expect(registry.getActive()).toBeNull();
expect(stderrSpy).toHaveBeenCalledWith(expect.stringContaining('Anthropic API key not found'));
stderrSpy.mockRestore();
});
it('creates openai provider with API key from secret store', async () => {
const store = mockSecretStore({ 'openai-api-key': 'sk-test' });
const registry = await createProviderFromConfig(
{ provider: 'openai', model: 'gpt-4o', url: 'https://api.openai.com' },
store,
);
expect(registry.getActive()!.name).toBe('openai');
expect(store.get).toHaveBeenCalledWith('openai-api-key');
});
it('returns empty registry when openai API key is missing', async () => {
const store = mockSecretStore();
const stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true);
const registry = await createProviderFromConfig(
{ provider: 'openai' },
store,
);
expect(registry.getActive()).toBeNull();
stderrSpy.mockRestore();
});
it('creates deepseek provider with API key from secret store', async () => {
const store = mockSecretStore({ 'deepseek-api-key': 'sk-ds-test' });
const registry = await createProviderFromConfig(
{ provider: 'deepseek', model: 'deepseek-chat' },
store,
);
expect(registry.getActive()!.name).toBe('deepseek');
expect(store.get).toHaveBeenCalledWith('deepseek-api-key');
});
it('returns empty registry when deepseek API key is missing', async () => {
const store = mockSecretStore();
const stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true);
const registry = await createProviderFromConfig(
{ provider: 'deepseek' },
store,
);
expect(registry.getActive()).toBeNull();
stderrSpy.mockRestore();
});
it('creates vllm provider using OpenAI provider', async () => {
const store = mockSecretStore();
const registry = await createProviderFromConfig(
{ provider: 'vllm', model: 'my-model', url: 'http://gpu-server:8000' },
store,
);
// vLLM reuses OpenAI provider under the hood, wrapped with NamedProvider
expect(registry.getActive()).not.toBeNull();
expect(registry.getActive()!.name).toBe('vllm');
});
it('returns empty registry when vllm URL is missing', async () => {
const store = mockSecretStore();
const stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true);
const registry = await createProviderFromConfig(
{ provider: 'vllm' },
store,
);
expect(registry.getActive()).toBeNull();
expect(stderrSpy).toHaveBeenCalledWith(expect.stringContaining('vLLM URL not configured'));
stderrSpy.mockRestore();
});
});
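The cloud-provider branches above share one pattern: look up `'<provider>-api-key'` in the secret store, and on a miss return an empty registry after writing a note to stderr. A hedged sketch of that key-lookup step — the function name and warn callback are illustrative, only the key naming convention comes from the tests:

```typescript
// Hypothetical helper mirroring the key-lookup the tests exercise.
// Key names follow the '<provider>-api-key' convention seen above.
async function resolveApiKey(
  provider: 'anthropic' | 'openai' | 'deepseek',
  store: { get(key: string): Promise<string | null> },
  warn: (msg: string) => void = console.error,
): Promise<string | null> {
  const key = await store.get(`${provider}-api-key`);
  if (key === null) {
    // Caller is expected to fall back to an empty registry in this case
    warn(`${provider} API key not found; LLM features disabled`);
  }
  return key;
}
```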

View File

@@ -0,0 +1,166 @@
import { describe, it, expect, vi } from 'vitest';
import { LlmPromptSelector, type PromptIndexForLlm } from '../src/gate/llm-selector.js';
import { ProviderRegistry } from '../src/providers/registry.js';
import type { LlmProvider, CompletionOptions, CompletionResult } from '../src/providers/types.js';
function makeMockProvider(responseContent: string): LlmProvider {
return {
name: 'mock-heavy',
complete: vi.fn().mockResolvedValue({
content: responseContent,
toolCalls: [],
usage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
finishReason: 'stop',
} satisfies CompletionResult),
listModels: vi.fn().mockResolvedValue(['mock-model']),
isAvailable: vi.fn().mockResolvedValue(true),
};
}
function makeRegistry(provider: LlmProvider): ProviderRegistry {
const registry = new ProviderRegistry();
registry.register(provider);
registry.assignTier(provider.name, 'heavy');
return registry;
}
const sampleIndex: PromptIndexForLlm[] = [
{ name: 'zigbee-pairing', priority: 7, summary: 'How to pair Zigbee devices', chapters: ['Setup', 'Troubleshooting'] },
{ name: 'mqtt-config', priority: 5, summary: 'MQTT broker configuration', chapters: null },
{ name: 'common-mistakes', priority: 10, summary: 'Critical safety rules', chapters: null },
];
describe('LlmPromptSelector', () => {
it('sends tags and index to heavy LLM and parses response', async () => {
const provider = makeMockProvider(
'```json\n{ "selectedNames": ["zigbee-pairing"], "reasoning": "User is working with zigbee" }\n```',
);
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
const result = await selector.selectPrompts(['zigbee', 'pairing'], sampleIndex);
expect(result.selectedNames).toContain('zigbee-pairing');
expect(result.selectedNames).toContain('common-mistakes'); // Priority 10 always included
expect(result.reasoning).toBe('User is working with zigbee');
});
it('always includes priority 10 prompts even if LLM omits them', async () => {
const provider = makeMockProvider(
'{ "selectedNames": ["mqtt-config"], "reasoning": "MQTT related" }',
);
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
const result = await selector.selectPrompts(['mqtt'], sampleIndex);
expect(result.selectedNames).toContain('mqtt-config');
expect(result.selectedNames).toContain('common-mistakes');
});
it('does not duplicate priority 10 if LLM already selected them', async () => {
const provider = makeMockProvider(
'{ "selectedNames": ["common-mistakes", "mqtt-config"], "reasoning": "Both needed" }',
);
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
const result = await selector.selectPrompts(['mqtt'], sampleIndex);
const count = result.selectedNames.filter((n) => n === 'common-mistakes').length;
expect(count).toBe(1);
});
it('passes system and user messages to provider.complete', async () => {
const provider = makeMockProvider(
'{ "selectedNames": [], "reasoning": "none" }',
);
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
await selector.selectPrompts(['test'], sampleIndex);
expect(provider.complete).toHaveBeenCalledOnce();
const call = (provider.complete as ReturnType<typeof vi.fn>).mock.calls[0]![0] as CompletionOptions;
expect(call.messages).toHaveLength(2);
expect(call.messages[0]!.role).toBe('system');
expect(call.messages[1]!.role).toBe('user');
expect(call.messages[1]!.content).toContain('test');
expect(call.temperature).toBe(0);
});
it('passes model override to complete options', async () => {
const provider = makeMockProvider(
'{ "selectedNames": [], "reasoning": "" }',
);
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry, 'gemini-pro');
await selector.selectPrompts(['test'], sampleIndex);
const call = (provider.complete as ReturnType<typeof vi.fn>).mock.calls[0]![0] as CompletionOptions;
expect(call.model).toBe('gemini-pro');
});
it('throws when no heavy provider is available', async () => {
const registry = new ProviderRegistry(); // Empty registry
const selector = new LlmPromptSelector(registry);
await expect(selector.selectPrompts(['test'], sampleIndex)).rejects.toThrow(
'No heavy LLM provider available',
);
});
it('throws when LLM response has no valid JSON', async () => {
const provider = makeMockProvider('I cannot help with that request.');
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
await expect(selector.selectPrompts(['test'], sampleIndex)).rejects.toThrow(
'LLM response did not contain valid selection JSON',
);
});
it('handles response with empty selectedNames', async () => {
const provider = makeMockProvider('{ "selectedNames": [], "reasoning": "nothing matched" }');
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
// Empty selectedNames, but priority 10 should still be included
const result = await selector.selectPrompts(['test'], sampleIndex);
expect(result.selectedNames).toEqual(['common-mistakes']);
expect(result.reasoning).toBe('nothing matched');
});
it('handles response with reasoning missing', async () => {
const provider = makeMockProvider('{ "selectedNames": ["mqtt-config"] }');
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
const result = await selector.selectPrompts(['test'], sampleIndex);
expect(result.reasoning).toBe('');
expect(result.selectedNames).toContain('mqtt-config');
});
it('includes prompt details in the user prompt', async () => {
const indexWithNull: PromptIndexForLlm[] = [
...sampleIndex,
{ name: 'no-desc', priority: 3, summary: null, chapters: null },
];
const provider = makeMockProvider(
'{ "selectedNames": [], "reasoning": "" }',
);
const registry = makeRegistry(provider);
const selector = new LlmPromptSelector(registry);
await selector.selectPrompts(['zigbee'], indexWithNull);
const call = (provider.complete as ReturnType<typeof vi.fn>).mock.calls[0]![0] as CompletionOptions;
const userMsg = call.messages[1]!.content;
expect(userMsg).toContain('zigbee-pairing');
expect(userMsg).toContain('priority: 7');
expect(userMsg).toContain('How to pair Zigbee devices');
expect(userMsg).toContain('Setup, Troubleshooting');
expect(userMsg).toContain('No summary'); // For prompts with null summary
});
});
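The selector tests above fix three rules: extract JSON from the (possibly fenced) LLM reply, throw if none is found, and union in every priority-10 prompt without duplicating it. A sketch of that merge step, consistent with the tests but not the actual implementation:

```typescript
interface PromptIndexEntry {
  name: string;
  priority: number;
}

// Hypothetical merge step: parse the LLM's selection JSON (bare or inside a
// ```json fence), then force-include priority-10 prompts exactly once.
function mergeSelection(
  raw: string,
  index: PromptIndexEntry[],
): { selectedNames: string[]; reasoning: string } {
  const match = /\{[\s\S]*\}/.exec(raw); // tolerate code fences around the JSON
  if (!match) throw new Error('LLM response did not contain valid selection JSON');
  const parsed = JSON.parse(match[0]) as { selectedNames?: string[]; reasoning?: string };
  const selected = new Set(parsed.selectedNames ?? []); // Set dedupes LLM picks
  for (const p of index) {
    if (p.priority === 10) selected.add(p.name); // priority 10 always included
  }
  return { selectedNames: [...selected], reasoning: parsed.reasoning ?? '' };
}
```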

View File

@@ -0,0 +1,508 @@
import { describe, it, expect, vi, afterEach } from 'vitest';
import { ResponsePaginator, DEFAULT_PAGINATION_CONFIG } from '../src/llm/pagination.js';
import type { ProviderRegistry } from '../src/providers/registry.js';
import type { LlmProvider } from '../src/providers/types.js';
function makeProvider(response: string): ProviderRegistry {
const provider: LlmProvider = {
name: 'test',
isAvailable: () => true,
complete: vi.fn().mockResolvedValue({ content: response }),
};
return {
getActive: () => provider,
getProvider: () => provider,
register: vi.fn(),
setActive: vi.fn(),
listProviders: () => [{ name: 'test', available: true, active: true }],
} as unknown as ProviderRegistry;
}
function makeLargeString(size: number, pattern = 'x'): string {
return pattern.repeat(size);
}
function makeLargeStringWithNewlines(size: number, lineLen = 100): string {
const lines: string[] = [];
let total = 0;
let lineNum = 0;
while (total < size) {
const line = `line-${String(lineNum).padStart(5, '0')} ${'x'.repeat(lineLen - 15)}`;
lines.push(line);
total += line.length + 1; // +1 for newline
lineNum++;
}
return lines.join('\n');
}
describe('ResponsePaginator', () => {
afterEach(() => {
vi.restoreAllMocks();
});
// --- shouldPaginate ---
describe('shouldPaginate', () => {
it('returns false for strings below threshold', () => {
const paginator = new ResponsePaginator(null);
expect(paginator.shouldPaginate('short string')).toBe(false);
});
it('returns false for strings just below threshold', () => {
const paginator = new ResponsePaginator(null);
const str = makeLargeString(DEFAULT_PAGINATION_CONFIG.sizeThreshold - 1);
expect(paginator.shouldPaginate(str)).toBe(false);
});
it('returns true for strings at threshold', () => {
const paginator = new ResponsePaginator(null);
const str = makeLargeString(DEFAULT_PAGINATION_CONFIG.sizeThreshold);
expect(paginator.shouldPaginate(str)).toBe(true);
});
it('returns true for strings above threshold', () => {
const paginator = new ResponsePaginator(null);
const str = makeLargeString(DEFAULT_PAGINATION_CONFIG.sizeThreshold + 1000);
expect(paginator.shouldPaginate(str)).toBe(true);
});
it('respects custom threshold', () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100 });
expect(paginator.shouldPaginate('x'.repeat(99))).toBe(false);
expect(paginator.shouldPaginate('x'.repeat(100))).toBe(true);
});
});
// --- paginate (no LLM) ---
describe('paginate without LLM', () => {
it('returns null for small responses', async () => {
const paginator = new ResponsePaginator(null);
const result = await paginator.paginate('test/tool', 'small response');
expect(result).toBeNull();
});
it('paginates large responses with simple index', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = makeLargeStringWithNewlines(200);
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
expect(result!.content).toHaveLength(1);
expect(result!.content[0]!.type).toBe('text');
const text = result!.content[0]!.text;
expect(text).toContain('too large to return directly');
expect(text).toContain('_resultId');
expect(text).toContain('_page');
expect(text).not.toContain('AI-generated summaries');
});
it('includes correct page count in index', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = 'x'.repeat(200);
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
const text = result!.content[0]!.text;
// 200 chars / 50 per page = 4 pages
expect(text).toContain('4 pages');
expect(text).toContain('Page 1:');
expect(text).toContain('Page 4:');
});
it('caches the result for later page retrieval', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = 'x'.repeat(200);
await paginator.paginate('test/tool', raw);
expect(paginator.cacheSize).toBe(1);
});
it('includes page instructions with _resultId and _page', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = 'x'.repeat(200);
const result = await paginator.paginate('test/tool', raw);
const text = result!.content[0]!.text;
expect(text).toContain('"_resultId"');
expect(text).toContain('"_page"');
expect(text).toContain('"all"');
});
});
// --- paginate (with LLM) ---
describe('paginate with LLM', () => {
it('generates smart index when provider available', async () => {
const summaries = JSON.stringify([
{ page: 1, summary: 'Configuration nodes and global settings' },
{ page: 2, summary: 'HTTP request nodes and API integrations' },
]);
const registry = makeProvider(summaries);
const paginator = new ResponsePaginator(registry, { sizeThreshold: 100, pageSize: 60 });
const raw = makeLargeStringWithNewlines(150);
const result = await paginator.paginate('node-red/get_flows', raw);
expect(result).not.toBeNull();
const text = result!.content[0]!.text;
expect(text).toContain('AI-generated summaries');
expect(text).toContain('Configuration nodes and global settings');
expect(text).toContain('HTTP request nodes and API integrations');
});
it('strips markdown code fences from LLM JSON response', async () => {
const summaries = [
{ page: 1, summary: 'Config section' },
{ page: 2, summary: 'Data section' },
];
// Gemini often wraps JSON in ```json ... ``` fences
const fenced = '```json\n' + JSON.stringify(summaries) + '\n```';
const registry = makeProvider(fenced);
const paginator = new ResponsePaginator(registry, { sizeThreshold: 100, pageSize: 60 });
const raw = makeLargeStringWithNewlines(150);
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
const text = result!.content[0]!.text;
expect(text).toContain('AI-generated summaries');
expect(text).toContain('Config section');
expect(text).toContain('Data section');
});
it('falls back to simple index on LLM failure', async () => {
const provider: LlmProvider = {
name: 'test',
isAvailable: () => true,
complete: vi.fn().mockRejectedValue(new Error('LLM unavailable')),
};
const registry = {
getActive: () => provider,
getProvider: () => provider,
register: vi.fn(),
setActive: vi.fn(),
listProviders: () => [{ name: 'test', available: true, active: true }],
} as unknown as ProviderRegistry;
const paginator = new ResponsePaginator(registry, { sizeThreshold: 100, pageSize: 50 });
const raw = 'x'.repeat(200);
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
const text = result!.content[0]!.text;
// Should NOT contain AI-generated label
expect(text).not.toContain('AI-generated summaries');
expect(text).toContain('Page 1:');
});
it('sends page previews to LLM, not full content', async () => {
const completeFn = vi.fn().mockResolvedValue({
content: JSON.stringify([
{ page: 1, summary: 'test' },
{ page: 2, summary: 'test2' },
{ page: 3, summary: 'test3' },
]),
});
const provider: LlmProvider = {
name: 'test',
isAvailable: () => true,
complete: completeFn,
};
const registry = {
getActive: () => provider,
getProvider: () => provider,
register: vi.fn(),
setActive: vi.fn(),
listProviders: () => [{ name: 'test', available: true, active: true }],
} as unknown as ProviderRegistry;
// Use a large enough string (3000 chars, pages of 1000) so previews (500 per page) are smaller than raw
const paginator = new ResponsePaginator(registry, { sizeThreshold: 2000, pageSize: 1000 });
const raw = makeLargeStringWithNewlines(3000);
await paginator.paginate('test/tool', raw);
expect(completeFn).toHaveBeenCalledOnce();
const call = completeFn.mock.calls[0]![0]!;
const userMsg = call.messages.find((m: { role: string }) => m.role === 'user');
// Should contain page preview markers
expect(userMsg.content).toContain('Page 1');
// The LLM prompt should be significantly smaller than the full content
// (each page sends ~500 chars preview, not full 1000 chars)
expect(userMsg.content.length).toBeLessThan(raw.length);
});
it('falls back to simple when no active provider', async () => {
const registry = {
getActive: () => null,
getProvider: () => null,
register: vi.fn(),
setActive: vi.fn(),
listProviders: () => [],
} as unknown as ProviderRegistry;
const paginator = new ResponsePaginator(registry, { sizeThreshold: 100, pageSize: 50 });
const raw = 'x'.repeat(200);
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
const text = result!.content[0]!.text;
expect(text).not.toContain('AI-generated summaries');
});
it('passes modelOverride to provider.complete()', async () => {
const completeFn = vi.fn().mockResolvedValue({
content: JSON.stringify([{ page: 1, summary: 'test' }, { page: 2, summary: 'test2' }]),
});
const provider: LlmProvider = {
name: 'test',
isAvailable: () => true,
complete: completeFn,
};
const registry = {
getActive: () => provider,
getProvider: () => provider,
register: vi.fn(),
setActive: vi.fn(),
listProviders: () => [{ name: 'test', available: true, active: true }],
} as unknown as ProviderRegistry;
const paginator = new ResponsePaginator(registry, { sizeThreshold: 100, pageSize: 60 }, 'gemini-2.5-pro');
const raw = makeLargeStringWithNewlines(150);
await paginator.paginate('test/tool', raw);
expect(completeFn).toHaveBeenCalledOnce();
const call = completeFn.mock.calls[0]![0]!;
expect(call.model).toBe('gemini-2.5-pro');
});
it('omits model when no modelOverride set', async () => {
const completeFn = vi.fn().mockResolvedValue({
content: JSON.stringify([{ page: 1, summary: 'test' }, { page: 2, summary: 'test2' }]),
});
const provider: LlmProvider = {
name: 'test',
isAvailable: () => true,
complete: completeFn,
};
const registry = {
getActive: () => provider,
getProvider: () => provider,
register: vi.fn(),
setActive: vi.fn(),
listProviders: () => [{ name: 'test', available: true, active: true }],
} as unknown as ProviderRegistry;
const paginator = new ResponsePaginator(registry, { sizeThreshold: 100, pageSize: 60 });
const raw = makeLargeStringWithNewlines(150);
await paginator.paginate('test/tool', raw);
expect(completeFn).toHaveBeenCalledOnce();
const call = completeFn.mock.calls[0]![0]!;
expect(call.model).toBeUndefined();
});
});
// --- getPage ---
describe('getPage', () => {
it('returns specific page content', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = 'AAAA'.repeat(25) + 'BBBB'.repeat(25); // 200 chars total
await paginator.paginate('test/tool', raw);
// Extract resultId from cache (there should be exactly 1 entry)
expect(paginator.cacheSize).toBe(1);
// We need the resultId — get it from the index response
const indexResult = await paginator.paginate('test/tool2', 'C'.repeat(200));
const text = indexResult!.content[0]!.text;
const match = /"_resultId": "([^"]+)"/.exec(text);
expect(match).not.toBeNull();
const resultId = match![1]!;
const page1 = paginator.getPage(resultId, 1);
expect(page1).not.toBeNull();
expect(page1!.content[0]!.text).toContain('Page 1/');
expect(page1!.content[0]!.text).toContain('C');
});
it('returns full content with _page=all', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = 'D'.repeat(200);
const indexResult = await paginator.paginate('test/tool', raw);
const match = /"_resultId": "([^"]+)"/.exec(indexResult!.content[0]!.text);
const resultId = match![1]!;
const allPages = paginator.getPage(resultId, 'all');
expect(allPages).not.toBeNull();
expect(allPages!.content[0]!.text).toBe(raw);
});
it('returns null for unknown resultId (cache miss)', () => {
const paginator = new ResponsePaginator(null);
const result = paginator.getPage('nonexistent-id', 1);
expect(result).toBeNull();
});
it('returns error for out-of-range page', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = 'x'.repeat(200);
const indexResult = await paginator.paginate('test/tool', raw);
const match = /"_resultId": "([^"]+)"/.exec(indexResult!.content[0]!.text);
const resultId = match![1]!;
const page999 = paginator.getPage(resultId, 999);
expect(page999).not.toBeNull();
expect(page999!.content[0]!.text).toContain('out of range');
});
it('returns null after TTL expiry', async () => {
const now = Date.now();
vi.spyOn(Date, 'now').mockReturnValue(now);
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50, ttlMs: 1000 });
const raw = 'x'.repeat(200);
const indexResult = await paginator.paginate('test/tool', raw);
const match = /"_resultId": "([^"]+)"/.exec(indexResult!.content[0]!.text);
const resultId = match![1]!;
// Within TTL — should work
expect(paginator.getPage(resultId, 1)).not.toBeNull();
// Past TTL — should be null
vi.spyOn(Date, 'now').mockReturnValue(now + 1001);
expect(paginator.getPage(resultId, 1)).toBeNull();
});
});
// --- extractPaginationParams ---
describe('extractPaginationParams', () => {
it('returns null when no pagination params', () => {
expect(ResponsePaginator.extractPaginationParams({ query: 'test' })).toBeNull();
});
it('returns null when only _resultId (no _page)', () => {
expect(ResponsePaginator.extractPaginationParams({ _resultId: 'abc' })).toBeNull();
});
it('returns null when only _page (no _resultId)', () => {
expect(ResponsePaginator.extractPaginationParams({ _page: 1 })).toBeNull();
});
it('extracts numeric page', () => {
const result = ResponsePaginator.extractPaginationParams({ _resultId: 'abc-123', _page: 2 });
expect(result).toEqual({ resultId: 'abc-123', page: 2 });
});
it('extracts _page=all', () => {
const result = ResponsePaginator.extractPaginationParams({ _resultId: 'abc-123', _page: 'all' });
expect(result).toEqual({ resultId: 'abc-123', page: 'all' });
});
it('rejects negative page numbers', () => {
expect(ResponsePaginator.extractPaginationParams({ _resultId: 'abc', _page: -1 })).toBeNull();
});
it('rejects zero page number', () => {
expect(ResponsePaginator.extractPaginationParams({ _resultId: 'abc', _page: 0 })).toBeNull();
});
it('rejects non-integer page numbers', () => {
expect(ResponsePaginator.extractPaginationParams({ _resultId: 'abc', _page: 1.5 })).toBeNull();
});
it('requires string resultId', () => {
expect(ResponsePaginator.extractPaginationParams({ _resultId: 123, _page: 1 })).toBeNull();
});
});
// --- Cache management ---
describe('cache management', () => {
it('evicts expired entries on paginate', async () => {
const now = Date.now();
vi.spyOn(Date, 'now').mockReturnValue(now);
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50, ttlMs: 1000 });
await paginator.paginate('test/tool1', 'x'.repeat(200));
expect(paginator.cacheSize).toBe(1);
// Advance past TTL and paginate again
vi.spyOn(Date, 'now').mockReturnValue(now + 1001);
await paginator.paginate('test/tool2', 'y'.repeat(200));
// Old entry evicted, new one added
expect(paginator.cacheSize).toBe(1);
});
it('evicts LRU at capacity', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50, maxCachedResults: 2 });
await paginator.paginate('test/tool1', 'A'.repeat(200));
await paginator.paginate('test/tool2', 'B'.repeat(200));
expect(paginator.cacheSize).toBe(2);
// Third entry should evict the first
await paginator.paginate('test/tool3', 'C'.repeat(200));
expect(paginator.cacheSize).toBe(2);
});
it('clearCache removes all entries', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
await paginator.paginate('test/tool1', 'x'.repeat(200));
await paginator.paginate('test/tool2', 'y'.repeat(200));
expect(paginator.cacheSize).toBe(2);
paginator.clearCache();
expect(paginator.cacheSize).toBe(0);
});
});
// --- Page splitting ---
describe('page splitting', () => {
it('breaks at newline boundaries', async () => {
// Create content where a newline falls within the page boundary
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 60 });
const lines = Array.from({ length: 10 }, (_, i) => `line${String(i).padStart(3, '0')} ${'x'.repeat(20)}`);
const raw = lines.join('\n');
// raw is ~269 chars
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
// Pages should break at newline boundaries, not mid-line
const text = result!.content[0]!.text;
const match = /"_resultId": "([^"]+)"/.exec(text);
const resultId = match![1]!;
const page1 = paginator.getPage(resultId, 1);
expect(page1).not.toBeNull();
// Page content should end at a newline boundary (no partial lines)
const pageText = page1!.content[0]!.text;
// Remove the header line
const contentStart = pageText.indexOf('\n\n') + 2;
const pageContent = pageText.slice(contentStart);
// Content should contain complete lines
expect(pageContent).toMatch(/line\d{3}/);
});
it('handles content without newlines', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 50 });
const raw = 'x'.repeat(200); // No newlines at all
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
const text = result!.content[0]!.text;
expect(text).toContain('4 pages'); // 200/50 = 4
});
it('handles content that fits exactly in one page at threshold', async () => {
const paginator = new ResponsePaginator(null, { sizeThreshold: 100, pageSize: 100 });
const raw = 'x'.repeat(100); // Exactly at threshold and page size
const result = await paginator.paginate('test/tool', raw);
expect(result).not.toBeNull();
const text = result!.content[0]!.text;
expect(text).toContain('1 pages');
});
});
});
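The `extractPaginationParams` cases above define a tight validation contract: both `_resultId` (a string) and `_page` (a positive integer or the literal `'all'`) must be present, and everything else yields `null`. A standalone sketch matching those tests, not the actual static method:

```typescript
type PageSelector = number | 'all';

// Hypothetical re-statement of the validation rules the tests pin down.
function extractPaginationParams(
  args: Record<string, unknown>,
): { resultId: string; page: PageSelector } | null {
  const { _resultId, _page } = args;
  if (typeof _resultId !== 'string') return null; // resultId must be a string
  if (_page === 'all') return { resultId: _resultId, page: 'all' };
  if (typeof _page === 'number' && Number.isInteger(_page) && _page >= 1) {
    return { resultId: _resultId, page: _page }; // 1-based page numbers only
  }
  return null; // rejects 0, negatives, 1.5, and missing _page
}
```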

View File

@@ -54,7 +54,7 @@ describe('refreshProjectUpstreams', () => {
const client = mockMcpdClient(servers);
await refreshProjectUpstreams(router, client as any, 'smart-home', 'user-token-123');
-expect(client.forward).toHaveBeenCalledWith('GET', '/api/v1/projects/smart-home/servers', '', undefined);
+expect(client.forward).toHaveBeenCalledWith('GET', '/api/v1/projects/smart-home/servers', '', undefined, 'user-token-123');

expect(router.getUpstreamNames()).toContain('grafana');
});

View File

@@ -6,12 +6,22 @@ import { registerProjectMcpEndpoint } from '../src/http/project-mcp-endpoint.js'
// Mock discovery module — we don't want real HTTP calls
vi.mock('../src/discovery.js', () => ({
refreshProjectUpstreams: vi.fn(async () => ['mock-server']),
fetchProjectLlmConfig: vi.fn(async () => ({})),
}));
// Mock config module — don't read real config files
vi.mock('../src/http/config.js', async () => {
const actual = await vi.importActual<typeof import('../src/http/config.js')>('../src/http/config.js');
return {
...actual,
loadProjectLlmOverride: vi.fn(() => undefined),
};
});
import { refreshProjectUpstreams } from '../src/discovery.js';
function mockMcpdClient() {
-return {
+const client: Record<string, unknown> = {
baseUrl: 'http://test:3100',
token: 'test-token',
get: vi.fn(async () => []),
@@ -19,7 +29,11 @@ function mockMcpdClient() {
put: vi.fn(),
delete: vi.fn(),
forward: vi.fn(async () => ({ status: 200, body: [] })),
withHeaders: vi.fn(),
};
// withHeaders returns a new client-like object (returns self for simplicity)
(client.withHeaders as ReturnType<typeof vi.fn>).mockReturnValue(client);
return client;
}
describe('registerProjectMcpEndpoint', () => {

View File

@@ -115,4 +115,105 @@ describe('ProviderRegistry', () => {
expect(models).toEqual(['anthropic-model-1', 'anthropic-model-2']);
});
describe('tier management', () => {
it('assigns providers to tiers', () => {
registry.register(mockProvider('vllm'));
registry.register(mockProvider('gemini'));
registry.assignTier('vllm', 'fast');
registry.assignTier('gemini', 'heavy');
expect(registry.getTierProviders('fast')).toEqual(['vllm']);
expect(registry.getTierProviders('heavy')).toEqual(['gemini']);
expect(registry.hasTierConfig()).toBe(true);
});
it('getProvider returns tier-specific provider', () => {
const vllm = mockProvider('vllm');
const gemini = mockProvider('gemini');
registry.register(vllm);
registry.register(gemini);
registry.assignTier('vllm', 'fast');
registry.assignTier('gemini', 'heavy');
expect(registry.getProvider('fast')).toBe(vllm);
expect(registry.getProvider('heavy')).toBe(gemini);
});
it('getProvider falls back to other tier', () => {
const vllm = mockProvider('vllm');
registry.register(vllm);
registry.assignTier('vllm', 'fast');
// Requesting heavy but only fast exists → falls back to fast
expect(registry.getProvider('heavy')).toBe(vllm);
});
it('getProvider falls back to getActive when no tiers', () => {
const openai = mockProvider('openai');
registry.register(openai);
// No tier assignments → falls back to legacy getActive()
expect(registry.getProvider('fast')).toBe(openai);
expect(registry.getProvider('heavy')).toBe(openai);
expect(registry.hasTierConfig()).toBe(false);
});
it('unregister removes from tier assignments', () => {
registry.register(mockProvider('vllm'));
registry.register(mockProvider('gemini'));
registry.assignTier('vllm', 'fast');
registry.assignTier('gemini', 'heavy');
registry.unregister('vllm');
expect(registry.getTierProviders('fast')).toEqual([]);
expect(registry.getTierProviders('heavy')).toEqual(['gemini']);
});
it('assignTier throws for unregistered provider', () => {
expect(() => registry.assignTier('unknown', 'fast')).toThrow("Provider 'unknown' is not registered");
});
it('multiple providers in same tier uses first', () => {
const vllm = mockProvider('vllm');
const ollama = mockProvider('ollama');
registry.register(vllm);
registry.register(ollama);
registry.assignTier('vllm', 'fast');
registry.assignTier('ollama', 'fast');
expect(registry.getProvider('fast')).toBe(vllm);
expect(registry.getTierProviders('fast')).toEqual(['vllm', 'ollama']);
});
it('listProviders includes tier info', () => {
registry.register(mockProvider('vllm'));
registry.register(mockProvider('gemini'));
registry.assignTier('vllm', 'fast');
registry.assignTier('gemini', 'heavy');
const providers = registry.listProviders();
expect(providers).toEqual([
{ name: 'vllm', tiers: ['fast'] },
{ name: 'gemini', tiers: ['heavy'] },
]);
});
it('disposeAll calls dispose on all providers', () => {
const disposeFn = vi.fn();
const provider = { ...mockProvider('test'), dispose: disposeFn };
registry.register(provider);
registry.disposeAll();
expect(disposeFn).toHaveBeenCalledOnce();
});
});
});
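The resolution order these registry tests pin down (exact tier first, then the other tier, then the legacy active provider) can be sketched independently of the real `ProviderRegistry`. All names below are illustrative, not the actual API:

```typescript
type Tier = 'fast' | 'heavy';

// Minimal stand-in for a tiered provider registry, mirroring the
// behavior the tests assert. Illustrative only.
class TierSketch<P> {
  private tiers: Record<Tier, P[]> = { fast: [], heavy: [] };
  private registered: P[] = [];

  register(provider: P): void {
    this.registered.push(provider);
  }

  assignTier(provider: P, tier: Tier): void {
    this.tiers[tier].push(provider);
  }

  // Exact tier first; fall back to the other tier; finally the first
  // registered provider (the legacy getActive() behavior).
  getProvider(tier: Tier): P | undefined {
    if (this.tiers[tier].length > 0) return this.tiers[tier][0]; // first in tier wins
    const other: Tier = tier === 'fast' ? 'heavy' : 'fast';
    if (this.tiers[other].length > 0) return this.tiers[other][0];
    return this.registered[0];
  }
}
```

This makes the "multiple providers in same tier uses first" and "falls back to getActive when no tiers" cases follow directly from the array ordering.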


@@ -0,0 +1,520 @@
import { describe, it, expect, vi } from 'vitest';
import { McpRouter } from '../src/router.js';
import type { UpstreamConnection, JsonRpcRequest, JsonRpcResponse } from '../src/types.js';
import type { McpdClient } from '../src/http/mcpd-client.js';
import { ProviderRegistry } from '../src/providers/registry.js';
import type { LlmProvider, CompletionResult } from '../src/providers/types.js';
function mockUpstream(
name: string,
opts: { tools?: Array<{ name: string; description?: string }> } = {},
): UpstreamConnection {
return {
name,
isAlive: vi.fn(() => true),
close: vi.fn(async () => {}),
onNotification: vi.fn(),
send: vi.fn(async (req: JsonRpcRequest): Promise<JsonRpcResponse> => {
if (req.method === 'tools/list') {
return { jsonrpc: '2.0', id: req.id, result: { tools: opts.tools ?? [] } };
}
if (req.method === 'tools/call') {
return {
jsonrpc: '2.0',
id: req.id,
result: { content: [{ type: 'text', text: `Called ${(req.params as Record<string, unknown>)?.name}` }] },
};
}
if (req.method === 'resources/list') {
return { jsonrpc: '2.0', id: req.id, result: { resources: [] } };
}
if (req.method === 'prompts/list') {
return { jsonrpc: '2.0', id: req.id, result: { prompts: [] } };
}
return { jsonrpc: '2.0', id: req.id, error: { code: -32601, message: 'Not found' } };
}),
} as UpstreamConnection;
}
function mockMcpdClient(
  prompts: Array<{ name: string; priority: number; summary: string | null; chapters: string[] | null; content: string; type?: string }> = [],
): McpdClient {
return {
get: vi.fn(async (path: string) => {
if (path.includes('/prompts/visible')) {
return prompts.map((p) => ({ ...p, type: p.type ?? 'prompt' }));
}
if (path.includes('/prompt-index')) {
return prompts.map((p) => ({
name: p.name,
priority: p.priority,
summary: p.summary,
chapters: p.chapters,
}));
}
return [];
}),
post: vi.fn(async () => ({})),
put: vi.fn(async () => ({})),
delete: vi.fn(async () => {}),
forward: vi.fn(async () => ({ status: 200, body: {} })),
withHeaders: vi.fn(function (this: McpdClient) { return this; }),
} as unknown as McpdClient;
}
const samplePrompts = [
{ name: 'common-mistakes', priority: 10, summary: 'Critical safety rules everyone must follow', chapters: null, content: 'NEVER do X. ALWAYS do Y.' },
{ name: 'zigbee-pairing', priority: 7, summary: 'How to pair Zigbee devices with the hub', chapters: ['Setup', 'Troubleshooting'], content: 'Step 1: Put device in pairing mode...' },
{ name: 'mqtt-config', priority: 5, summary: 'MQTT broker configuration guide', chapters: ['Broker Setup', 'Authentication'], content: 'Configure the MQTT broker at...' },
{ name: 'security-policy', priority: 8, summary: 'Security policies for production deployments', chapters: ['Network', 'Auth'], content: 'All connections must use TLS...' },
];
function setupGatedRouter(
opts: {
gated?: boolean;
prompts?: typeof samplePrompts;
withLlm?: boolean;
llmResponse?: string;
} = {},
): { router: McpRouter; mcpdClient: McpdClient } {
const router = new McpRouter();
const prompts = opts.prompts ?? samplePrompts;
const mcpdClient = mockMcpdClient(prompts);
router.setPromptConfig(mcpdClient, 'test-project');
let providerRegistry: ProviderRegistry | null = null;
if (opts.withLlm) {
providerRegistry = new ProviderRegistry();
const mockProvider: LlmProvider = {
name: 'mock-heavy',
complete: vi.fn().mockResolvedValue({
content: opts.llmResponse ?? '{ "selectedNames": ["zigbee-pairing"], "reasoning": "User is working with zigbee" }',
toolCalls: [],
usage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
finishReason: 'stop',
} satisfies CompletionResult),
listModels: vi.fn().mockResolvedValue([]),
isAvailable: vi.fn().mockResolvedValue(true),
};
providerRegistry.register(mockProvider);
providerRegistry.assignTier(mockProvider.name, 'heavy');
}
router.setGateConfig({
gated: opts.gated !== false,
providerRegistry,
});
return { router, mcpdClient };
}
describe('McpRouter gating', () => {
describe('initialize with gating', () => {
it('creates gated session on initialize', async () => {
const { router } = setupGatedRouter();
const res = await router.route(
{ jsonrpc: '2.0', id: 1, method: 'initialize' },
{ sessionId: 's1' },
);
expect(res.result).toBeDefined();
// The session should be gated now
const toolsRes = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
const tools = (toolsRes.result as { tools: Array<{ name: string }> }).tools;
expect(tools).toHaveLength(1);
expect(tools[0]!.name).toBe('begin_session');
});
it('creates ungated session when project is not gated', async () => {
const { router } = setupGatedRouter({ gated: false });
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities' }] }));
await router.route(
{ jsonrpc: '2.0', id: 1, method: 'initialize' },
{ sessionId: 's1' },
);
const toolsRes = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
const tools = (toolsRes.result as { tools: Array<{ name: string }> }).tools;
const names = tools.map((t) => t.name);
expect(names).toContain('ha/get_entities');
expect(names).toContain('read_prompts');
expect(names).not.toContain('begin_session');
});
});
describe('tools/list gating', () => {
it('shows only begin_session when session is gated', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
const tools = (res.result as { tools: Array<{ name: string }> }).tools;
expect(tools).toHaveLength(1);
expect(tools[0]!.name).toBe('begin_session');
});
it('shows all tools plus read_prompts after ungating', async () => {
const { router } = setupGatedRouter();
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities' }] }));
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// Ungate via begin_session
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
const toolsRes = await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/list' },
{ sessionId: 's1' },
);
const tools = (toolsRes.result as { tools: Array<{ name: string }> }).tools;
const names = tools.map((t) => t.name);
expect(names).toContain('ha/get_entities');
expect(names).toContain('propose_prompt');
expect(names).toContain('read_prompts');
expect(names).not.toContain('begin_session');
});
});
describe('begin_session', () => {
it('returns matched prompts with keyword matching', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee', 'pairing'] } } },
{ sessionId: 's1' },
);
expect(res.error).toBeUndefined();
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
// Should include priority 10 prompt
expect(text).toContain('common-mistakes');
expect(text).toContain('NEVER do X');
// Should include zigbee-pairing (matches both tags)
expect(text).toContain('zigbee-pairing');
expect(text).toContain('pairing mode');
// Should include encouragement
expect(text).toContain('read_prompts');
});
it('includes priority 10 prompts even without matching tags', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['unrelated-keyword'] } } },
{ sessionId: 's1' },
);
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
expect(text).toContain('common-mistakes');
expect(text).toContain('NEVER do X');
});
it('uses LLM selection when provider is available', async () => {
const { router } = setupGatedRouter({
withLlm: true,
llmResponse: '{ "selectedNames": ["zigbee-pairing", "security-policy"], "reasoning": "Zigbee pairing needs security awareness" }',
});
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
expect(text).toContain('Zigbee pairing needs security awareness');
expect(text).toContain('zigbee-pairing');
expect(text).toContain('security-policy');
expect(text).toContain('common-mistakes'); // priority 10 always included
});
it('rejects empty tags', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: [] } } },
{ sessionId: 's1' },
);
expect(res.error).toBeDefined();
expect(res.error!.code).toBe(-32602);
});
it('returns message when session is already ungated', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// First call ungates
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
// Second call tells user to use read_prompts
const res = await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['mqtt'] } } },
{ sessionId: 's1' },
);
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
expect(text).toContain('already started');
expect(text).toContain('read_prompts');
});
it('lists remaining prompts for awareness', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
// Non-matching prompts should be listed as "other available prompts"
// security-policy doesn't match 'zigbee' in keyword mode
expect(text).toContain('security-policy');
});
});
describe('read_prompts', () => {
it('returns prompts matching keywords', async () => {
const { router } = setupGatedRouter({ gated: false });
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'read_prompts', arguments: { tags: ['mqtt', 'broker'] } } },
{ sessionId: 's1' },
);
expect(res.error).toBeUndefined();
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
expect(text).toContain('mqtt-config');
expect(text).toContain('Configure the MQTT broker');
});
it('filters out already-sent prompts', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// begin_session sends common-mistakes (priority 10) and zigbee-pairing
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
// read_prompts for mqtt should not re-send common-mistakes
const res = await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/call', params: { name: 'read_prompts', arguments: { tags: ['mqtt'] } } },
{ sessionId: 's1' },
);
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
expect(text).toContain('mqtt-config');
// common-mistakes was already sent, should not appear again
expect(text).not.toContain('NEVER do X');
});
it('returns message when no new prompts match', async () => {
const { router } = setupGatedRouter({ prompts: [] });
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'read_prompts', arguments: { tags: ['nonexistent'] } } },
{ sessionId: 's1' },
);
const text = ((res.result as { content: Array<{ text: string }> }).content[0]!.text);
expect(text).toContain('No new matching prompts');
});
it('rejects empty tags', async () => {
const { router } = setupGatedRouter({ gated: false });
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'read_prompts', arguments: { tags: [] } } },
{ sessionId: 's1' },
);
expect(res.error).toBeDefined();
expect(res.error!.code).toBe(-32602);
});
});
describe('gated intercept', () => {
it('auto-ungates when gated session calls a real tool', async () => {
const { router } = setupGatedRouter();
const ha = mockUpstream('ha', { tools: [{ name: 'get_entities' }] });
router.addUpstream(ha);
await router.discoverTools();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// Call a real tool while gated — should intercept, extract keywords, and route
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'ha/get_entities', arguments: { domain: 'light' } } },
{ sessionId: 's1' },
);
// Response should include the tool result
expect(res.error).toBeUndefined();
const result = res.result as { content: Array<{ type: string; text: string }> };
// Should have briefing prepended
expect(result.content.length).toBeGreaterThanOrEqual(1);
// Session should now be ungated
const toolsRes = await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/list' },
{ sessionId: 's1' },
);
const tools = (toolsRes.result as { tools: Array<{ name: string }> }).tools;
expect(tools.map((t) => t.name)).toContain('ha/get_entities');
});
it('includes project context in intercepted response', async () => {
const { router } = setupGatedRouter();
const ha = mockUpstream('ha', { tools: [{ name: 'get_entities' }] });
router.addUpstream(ha);
await router.discoverTools();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'ha/get_entities', arguments: { domain: 'light' } } },
{ sessionId: 's1' },
);
const result = res.result as { content: Array<{ type: string; text: string }> };
// First content block should be the briefing (priority 10 at minimum)
const briefing = result.content[0]!.text;
expect(briefing).toContain('common-mistakes');
expect(briefing).toContain('NEVER do X');
});
});
describe('initialize instructions for gated projects', () => {
it('includes gate message and prompt index in instructions', async () => {
const { router } = setupGatedRouter();
const res = await router.route(
{ jsonrpc: '2.0', id: 1, method: 'initialize' },
{ sessionId: 's1' },
);
const result = res.result as { instructions?: string };
expect(result.instructions).toBeDefined();
expect(result.instructions).toContain('begin_session');
expect(result.instructions).toContain('gated session');
// Should list available prompts
expect(result.instructions).toContain('common-mistakes');
expect(result.instructions).toContain('zigbee-pairing');
});
it('does not include gate message for non-gated projects', async () => {
const { router } = setupGatedRouter({ gated: false });
router.setInstructions('Base project instructions');
const res = await router.route(
{ jsonrpc: '2.0', id: 1, method: 'initialize' },
{ sessionId: 's1' },
);
const result = res.result as { instructions?: string };
expect(result.instructions).toBe('Base project instructions');
expect(result.instructions).not.toContain('gated session');
});
it('preserves base instructions and appends gate message', async () => {
const { router } = setupGatedRouter();
router.setInstructions('You are a helpful assistant.');
const res = await router.route(
{ jsonrpc: '2.0', id: 1, method: 'initialize' },
{ sessionId: 's1' },
);
const result = res.result as { instructions?: string };
expect(result.instructions).toContain('You are a helpful assistant.');
expect(result.instructions).toContain('begin_session');
});
it('sorts prompt index by priority descending', async () => {
const { router } = setupGatedRouter();
const res = await router.route(
{ jsonrpc: '2.0', id: 1, method: 'initialize' },
{ sessionId: 's1' },
);
const result = res.result as { instructions: string };
const lines = result.instructions.split('\n');
// Find the prompt index lines
const promptLines = lines.filter((l) => l.startsWith('- ') && l.includes('priority'));
// priority 10 should come first
expect(promptLines[0]).toContain('common-mistakes');
expect(promptLines[0]).toContain('priority 10');
});
});
describe('session cleanup', () => {
it('cleanupSession removes gate state', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// Session is gated
let toolsRes = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
expect((toolsRes.result as { tools: Array<{ name: string }> }).tools[0]!.name).toBe('begin_session');
// Cleanup
router.cleanupSession('s1');
// After cleanup, session is treated as unknown (ungated)
toolsRes = await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/list' },
{ sessionId: 's1' },
);
const tools = (toolsRes.result as { tools: Array<{ name: string }> }).tools;
expect(tools.map((t) => t.name)).not.toContain('begin_session');
});
});
describe('prompt index caching', () => {
it('caches prompt index for 60 seconds', async () => {
const { router, mcpdClient } = setupGatedRouter({ gated: false });
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// First read_prompts call fetches from mcpd
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'read_prompts', arguments: { tags: ['mqtt'] } } },
{ sessionId: 's1' },
);
// Second call should use cache
await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/call', params: { name: 'read_prompts', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
// mcpdClient.get should have been called only once for prompts/visible
const getCalls = vi.mocked(mcpdClient.get).mock.calls.filter((c) => (c[0] as string).includes('/prompts/visible'));
expect(getCalls).toHaveLength(1);
});
});
});
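The keyword-mode selection that `begin_session` and `read_prompts` fall back to (no LLM provider) can be sketched as a simple filter: priority-10 prompts are always included, and other prompts match when any tag appears in their name or summary. This is a hypothetical reduction of the behavior the tests describe, not the real `TagMatcher`:

```typescript
interface PromptEntry {
  name: string;
  priority: number;
  summary: string | null;
}

// Keyword-mode prompt selection: priority 10 always wins; otherwise
// match any tag against the name + summary, case-insensitively.
function selectPrompts(tags: string[], prompts: PromptEntry[]): PromptEntry[] {
  const lowered = tags.map((t) => t.toLowerCase());
  return prompts.filter((p) => {
    if (p.priority === 10) return true; // critical prompts bypass matching
    const haystack = `${p.name} ${p.summary ?? ''}`.toLowerCase();
    return lowered.some((t) => haystack.includes(t));
  });
}
```

Under this sketch, tags like `['unrelated-keyword']` still surface `common-mistakes`, matching the "includes priority 10 prompts even without matching tags" case.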


@@ -0,0 +1,292 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { McpRouter } from '../src/router.js';
import type { UpstreamConnection, JsonRpcRequest, JsonRpcResponse } from '../src/types.js';
import type { McpdClient } from '../src/http/mcpd-client.js';
function mockUpstream(name: string, opts?: {
tools?: Array<{ name: string; description?: string; inputSchema?: unknown }>;
}): UpstreamConnection {
return {
name,
isAlive: vi.fn(() => true),
close: vi.fn(async () => {}),
onNotification: vi.fn(),
send: vi.fn(async (req: JsonRpcRequest): Promise<JsonRpcResponse> => {
if (req.method === 'tools/list') {
return { jsonrpc: '2.0', id: req.id, result: { tools: opts?.tools ?? [] } };
}
if (req.method === 'resources/list') {
return { jsonrpc: '2.0', id: req.id, result: { resources: [] } };
}
return { jsonrpc: '2.0', id: req.id, result: {} };
}),
};
}
function mockMcpdClient(): McpdClient {
return {
get: vi.fn(async () => []),
post: vi.fn(async () => ({})),
put: vi.fn(async () => ({})),
delete: vi.fn(async () => {}),
forward: vi.fn(async () => ({ status: 200, body: {} })),
withHeaders: vi.fn(function (this: McpdClient) { return this; }),
} as unknown as McpdClient;
}
describe('McpRouter - Prompt Integration', () => {
let router: McpRouter;
let mcpdClient: McpdClient;
beforeEach(() => {
router = new McpRouter();
mcpdClient = mockMcpdClient();
});
describe('propose_prompt tool', () => {
it('should include propose_prompt in tools/list when prompt config is set', async () => {
router.setPromptConfig(mcpdClient, 'test-project');
router.addUpstream(mockUpstream('server1'));
const response = await router.route({
jsonrpc: '2.0',
id: 1,
method: 'tools/list',
});
const tools = (response.result as { tools: Array<{ name: string }> }).tools;
expect(tools.some((t) => t.name === 'propose_prompt')).toBe(true);
});
it('should NOT include propose_prompt when no prompt config', async () => {
router.addUpstream(mockUpstream('server1'));
const response = await router.route({
jsonrpc: '2.0',
id: 1,
method: 'tools/list',
});
const tools = (response.result as { tools: Array<{ name: string }> }).tools;
expect(tools.some((t) => t.name === 'propose_prompt')).toBe(false);
});
it('should call mcpd to create a prompt request', async () => {
router.setPromptConfig(mcpdClient, 'my-project');
const response = await router.route(
{
jsonrpc: '2.0',
id: 2,
method: 'tools/call',
params: {
name: 'propose_prompt',
arguments: { name: 'my-prompt', content: 'Hello world' },
},
},
{ sessionId: 'sess-123' },
);
expect(response.error).toBeUndefined();
expect(mcpdClient.post).toHaveBeenCalledWith(
'/api/v1/projects/my-project/promptrequests',
{ name: 'my-prompt', content: 'Hello world', createdBySession: 'sess-123' },
);
});
it('should return error when name or content missing', async () => {
router.setPromptConfig(mcpdClient, 'proj');
const response = await router.route({
jsonrpc: '2.0',
id: 3,
method: 'tools/call',
params: {
name: 'propose_prompt',
arguments: { name: 'only-name' },
},
});
expect(response.error?.code).toBe(-32602);
expect(response.error?.message).toContain('Missing required arguments');
});
it('should return error when mcpd call fails', async () => {
router.setPromptConfig(mcpdClient, 'proj');
vi.mocked(mcpdClient.post).mockRejectedValue(new Error('mcpd returned 409'));
const response = await router.route({
jsonrpc: '2.0',
id: 4,
method: 'tools/call',
params: {
name: 'propose_prompt',
arguments: { name: 'dup', content: 'x' },
},
});
expect(response.error?.code).toBe(-32603);
expect(response.error?.message).toContain('mcpd returned 409');
});
});
describe('prompt resources', () => {
it('should include prompt resources in resources/list', async () => {
router.setPromptConfig(mcpdClient, 'test-project');
vi.mocked(mcpdClient.get).mockResolvedValue([
{ name: 'approved-prompt', content: 'Content A', type: 'prompt' },
{ name: 'pending-req', content: 'Content B', type: 'promptrequest' },
]);
const response = await router.route(
{ jsonrpc: '2.0', id: 1, method: 'resources/list' },
{ sessionId: 'sess-1' },
);
const resources = (response.result as { resources: Array<{ uri: string; description?: string }> }).resources;
expect(resources).toHaveLength(2);
expect(resources[0]!.uri).toBe('mcpctl://prompts/approved-prompt');
expect(resources[0]!.description).toContain('Approved');
expect(resources[1]!.uri).toBe('mcpctl://prompts/pending-req');
expect(resources[1]!.description).toContain('Pending');
});
it('should pass session ID when fetching visible prompts', async () => {
router.setPromptConfig(mcpdClient, 'proj');
vi.mocked(mcpdClient.get).mockResolvedValue([]);
await router.route(
{ jsonrpc: '2.0', id: 1, method: 'resources/list' },
{ sessionId: 'my-session' },
);
expect(mcpdClient.get).toHaveBeenCalledWith(
'/api/v1/projects/proj/prompts/visible?session=my-session',
);
});
it('should read mcpctl resource content live from mcpd', async () => {
router.setPromptConfig(mcpdClient, 'proj');
vi.mocked(mcpdClient.get).mockResolvedValue([
{ name: 'my-prompt', content: 'The content here', type: 'prompt' },
]);
// Read directly — no need to list first
const response = await router.route({
jsonrpc: '2.0',
id: 2,
method: 'resources/read',
params: { uri: 'mcpctl://prompts/my-prompt' },
});
expect(response.error).toBeUndefined();
const contents = (response.result as { contents: Array<{ text: string }> }).contents;
expect(contents[0]!.text).toBe('The content here');
});
it('should return fresh content after prompt update', async () => {
router.setPromptConfig(mcpdClient, 'proj');
// First call returns old content
vi.mocked(mcpdClient.get).mockResolvedValueOnce([
{ name: 'my-prompt', content: 'Old content', type: 'prompt' },
]);
await router.route({
jsonrpc: '2.0', id: 1, method: 'resources/read',
params: { uri: 'mcpctl://prompts/my-prompt' },
});
// Second call returns updated content
vi.mocked(mcpdClient.get).mockResolvedValueOnce([
{ name: 'my-prompt', content: 'Updated content', type: 'prompt' },
]);
const response = await router.route({
jsonrpc: '2.0', id: 2, method: 'resources/read',
params: { uri: 'mcpctl://prompts/my-prompt' },
});
const contents = (response.result as { contents: Array<{ text: string }> }).contents;
expect(contents[0]!.text).toBe('Updated content');
});
it('should fall back to cache when mcpd is unreachable on read', async () => {
router.setPromptConfig(mcpdClient, 'proj');
// Populate cache via list
vi.mocked(mcpdClient.get).mockResolvedValueOnce([
{ name: 'cached-prompt', content: 'Cached content', type: 'prompt' },
]);
await router.route({ jsonrpc: '2.0', id: 1, method: 'resources/list' });
// mcpd goes down for read
vi.mocked(mcpdClient.get).mockRejectedValueOnce(new Error('Connection refused'));
const response = await router.route({
jsonrpc: '2.0', id: 2, method: 'resources/read',
params: { uri: 'mcpctl://prompts/cached-prompt' },
});
expect(response.error).toBeUndefined();
const contents = (response.result as { contents: Array<{ text: string }> }).contents;
expect(contents[0]!.text).toBe('Cached content');
});
it('should return error for unknown mcpctl resource', async () => {
router.setPromptConfig(mcpdClient, 'proj');
vi.mocked(mcpdClient.get).mockResolvedValue([]);
const response = await router.route({
jsonrpc: '2.0',
id: 3,
method: 'resources/read',
params: { uri: 'mcpctl://prompts/nonexistent' },
});
expect(response.error?.code).toBe(-32602);
expect(response.error?.message).toContain('Resource not found');
});
it('should not fail when mcpd is unavailable', async () => {
router.setPromptConfig(mcpdClient, 'proj');
vi.mocked(mcpdClient.get).mockRejectedValue(new Error('Connection refused'));
const response = await router.route({ jsonrpc: '2.0', id: 1, method: 'resources/list' });
// Should succeed with empty resources (upstream errors are swallowed)
expect(response.error).toBeUndefined();
const resources = (response.result as { resources: unknown[] }).resources;
expect(resources).toEqual([]);
});
});
describe('session isolation', () => {
it('should not include session parameter when no sessionId in context', async () => {
router.setPromptConfig(mcpdClient, 'proj');
vi.mocked(mcpdClient.get).mockResolvedValue([]);
await router.route({ jsonrpc: '2.0', id: 1, method: 'resources/list' });
expect(mcpdClient.get).toHaveBeenCalledWith(
'/api/v1/projects/proj/prompts/visible',
);
});
it('should not include session in propose when no context', async () => {
router.setPromptConfig(mcpdClient, 'proj');
await router.route({
jsonrpc: '2.0',
id: 2,
method: 'tools/call',
params: {
name: 'propose_prompt',
arguments: { name: 'test', content: 'stuff' },
},
});
expect(mcpdClient.post).toHaveBeenCalledWith(
'/api/v1/projects/proj/promptrequests',
{ name: 'test', content: 'stuff' },
);
});
});
});
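The `mcpctl://prompts/<name>` URIs these resource tests exercise imply a small parsing step before the live lookup against mcpd. A hypothetical helper (the real router's parsing may differ) could look like:

```typescript
// Extract the prompt name from an mcpctl prompt URI, or null when the
// URI is not in the mcpctl://prompts/ namespace. Hypothetical helper.
function parsePromptUri(uri: string): string | null {
  const prefix = 'mcpctl://prompts/';
  if (!uri.startsWith(prefix)) return null;
  const name = uri.slice(prefix.length);
  return name.length > 0 ? name : null;
}
```

A `null` here corresponds to the router's `-32602 Resource not found` path for URIs outside the prompt namespace.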


@@ -0,0 +1,155 @@
import { describe, it, expect } from 'vitest';
import { SessionGate } from '../src/gate/session-gate.js';
import type { TagMatchResult, PromptIndexEntry } from '../src/gate/tag-matcher.js';
function makeMatchResult(names: string[]): TagMatchResult {
return {
fullContent: names.map((name) => ({
name,
priority: 5,
summary: null,
chapters: null,
content: `Content of ${name}`,
})),
indexOnly: [],
remaining: [],
};
}
describe('SessionGate', () => {
it('creates a gated session when project is gated', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
expect(gate.isGated('s1')).toBe(true);
});
it('creates an ungated session when project is not gated', () => {
const gate = new SessionGate();
gate.createSession('s1', false);
expect(gate.isGated('s1')).toBe(false);
});
it('unknown sessions are treated as ungated', () => {
const gate = new SessionGate();
expect(gate.isGated('nonexistent')).toBe(false);
});
it('getSession returns null for unknown sessions', () => {
const gate = new SessionGate();
expect(gate.getSession('nonexistent')).toBeNull();
});
it('getSession returns session state', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
const state = gate.getSession('s1');
expect(state).not.toBeNull();
expect(state!.gated).toBe(true);
expect(state!.tags).toEqual([]);
expect(state!.retrievedPrompts.size).toBe(0);
expect(state!.briefing).toBeNull();
});
it('ungate marks session as ungated and records tags', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
gate.ungate('s1', ['zigbee', 'mqtt'], makeMatchResult(['prompt-a', 'prompt-b']));
expect(gate.isGated('s1')).toBe(false);
const state = gate.getSession('s1');
expect(state!.tags).toEqual(['zigbee', 'mqtt']);
expect(state!.retrievedPrompts.has('prompt-a')).toBe(true);
expect(state!.retrievedPrompts.has('prompt-b')).toBe(true);
});
it('ungate appends tags on repeated calls', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
gate.ungate('s1', ['zigbee'], makeMatchResult(['p1']));
gate.ungate('s1', ['mqtt'], makeMatchResult(['p2']));
const state = gate.getSession('s1');
expect(state!.tags).toEqual(['zigbee', 'mqtt']);
expect(state!.retrievedPrompts.has('p1')).toBe(true);
expect(state!.retrievedPrompts.has('p2')).toBe(true);
});
it('ungate is no-op for unknown sessions', () => {
const gate = new SessionGate();
// Should not throw
gate.ungate('nonexistent', ['tag'], makeMatchResult(['p']));
});
it('addRetrievedPrompts records additional prompts', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
gate.ungate('s1', ['zigbee'], makeMatchResult(['p1']));
gate.addRetrievedPrompts('s1', ['mqtt', 'lights'], ['p2', 'p3']);
const state = gate.getSession('s1');
expect(state!.tags).toEqual(['zigbee', 'mqtt', 'lights']);
expect(state!.retrievedPrompts.has('p2')).toBe(true);
expect(state!.retrievedPrompts.has('p3')).toBe(true);
});
it('addRetrievedPrompts is no-op for unknown sessions', () => {
const gate = new SessionGate();
gate.addRetrievedPrompts('nonexistent', ['tag'], ['p']);
});
it('filterAlreadySent removes already-sent prompts', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
gate.ungate('s1', ['zigbee'], makeMatchResult(['p1']));
const prompts: PromptIndexEntry[] = [
{ name: 'p1', priority: 5, summary: 'already sent', chapters: null, content: 'x' },
{ name: 'p2', priority: 5, summary: 'new', chapters: null, content: 'y' },
];
const filtered = gate.filterAlreadySent('s1', prompts);
expect(filtered).toHaveLength(1);
expect(filtered[0]!.name).toBe('p2');
});
it('filterAlreadySent returns all prompts for unknown sessions', () => {
const gate = new SessionGate();
const prompts: PromptIndexEntry[] = [
{ name: 'p1', priority: 5, summary: null, chapters: null, content: 'x' },
];
const filtered = gate.filterAlreadySent('nonexistent', prompts);
expect(filtered).toHaveLength(1);
});
it('removeSession cleans up state', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
expect(gate.getSession('s1')).not.toBeNull();
gate.removeSession('s1');
expect(gate.getSession('s1')).toBeNull();
expect(gate.isGated('s1')).toBe(false);
});
it('removeSession is safe for unknown sessions', () => {
const gate = new SessionGate();
gate.removeSession('nonexistent'); // Should not throw
});
it('manages multiple sessions independently', () => {
const gate = new SessionGate();
gate.createSession('s1', true);
gate.createSession('s2', false);
expect(gate.isGated('s1')).toBe(true);
expect(gate.isGated('s2')).toBe(false);
gate.ungate('s1', ['zigbee'], makeMatchResult(['p1']));
expect(gate.isGated('s1')).toBe(false);
expect(gate.getSession('s2')!.tags).toEqual([]); // s2 untouched
});
});

View File

@@ -0,0 +1,165 @@
import { describe, it, expect } from 'vitest';
import { TagMatcher, extractKeywordsFromToolCall, type PromptIndexEntry } from '../src/gate/tag-matcher.js';
function makePrompt(overrides: Partial<PromptIndexEntry> = {}): PromptIndexEntry {
return {
name: 'test-prompt',
priority: 5,
summary: 'A test prompt for testing',
chapters: ['Chapter One', 'Chapter Two'],
content: 'Full content of the test prompt.',
...overrides,
};
}
describe('TagMatcher', () => {
it('returns priority 10 prompts regardless of tags', () => {
const matcher = new TagMatcher();
const critical = makePrompt({ name: 'common-mistakes', priority: 10, summary: 'Unrelated stuff' });
const normal = makePrompt({ name: 'normal', priority: 5, summary: 'Something else' });
const result = matcher.match([], [critical, normal]);
expect(result.fullContent.map((p) => p.name)).toEqual(['common-mistakes']);
expect(result.remaining.map((p) => p.name)).toEqual(['normal']);
});
it('scores by matching_tags * priority', () => {
const matcher = new TagMatcher();
const high = makePrompt({ name: 'important', priority: 8, summary: 'zigbee mqtt pairing' });
const low = makePrompt({ name: 'basic', priority: 3, summary: 'zigbee basics' });
// Both match "zigbee": high scores 1*8=8, low scores 1*3=3
const result = matcher.match(['zigbee'], [low, high]);
expect(result.fullContent[0]!.name).toBe('important');
expect(result.fullContent[1]!.name).toBe('basic');
});
it('matches more tags = higher score', () => {
const matcher = new TagMatcher();
const twoMatch = makePrompt({ name: 'two-match', priority: 5, summary: 'zigbee mqtt' });
const oneMatch = makePrompt({ name: 'one-match', priority: 5, summary: 'zigbee only' });
// two-match: 2*5=10, one-match: 1*5=5
const result = matcher.match(['zigbee', 'mqtt'], [oneMatch, twoMatch]);
expect(result.fullContent[0]!.name).toBe('two-match');
});
it('performs case-insensitive matching', () => {
const matcher = new TagMatcher();
const prompt = makePrompt({ name: 'test', summary: 'ZIGBEE Protocol Setup' });
const result = matcher.match(['zigbee'], [prompt]);
expect(result.fullContent).toHaveLength(1);
});
it('matches against name, summary, and chapters', () => {
const matcher = new TagMatcher();
const byName = makePrompt({ name: 'zigbee-config', summary: 'unrelated', chapters: [] });
const bySummary = makePrompt({ name: 'setup', summary: 'zigbee setup guide', chapters: [] });
const byChapter = makePrompt({ name: 'guide', summary: 'unrelated', chapters: ['Zigbee Pairing'] });
const result = matcher.match(['zigbee'], [byName, bySummary, byChapter]);
expect(result.fullContent).toHaveLength(3);
});
it('respects byte budget', () => {
const matcher = new TagMatcher(100); // Very small budget
const small = makePrompt({ name: 'small', summary: 'zigbee', content: 'Short.' }); // ~6 bytes
const big = makePrompt({ name: 'big', summary: 'zigbee', content: 'x'.repeat(200) }); // 200 bytes
const result = matcher.match(['zigbee'], [small, big]);
expect(result.fullContent.map((p) => p.name)).toEqual(['small']);
expect(result.indexOnly.map((p) => p.name)).toEqual(['big']);
});
it('puts non-matched prompts in remaining', () => {
const matcher = new TagMatcher();
const matched = makePrompt({ name: 'matched', summary: 'zigbee stuff' });
const unmatched = makePrompt({ name: 'unmatched', summary: 'completely different topic' });
const result = matcher.match(['zigbee'], [matched, unmatched]);
expect(result.fullContent.map((p) => p.name)).toEqual(['matched']);
expect(result.remaining.map((p) => p.name)).toEqual(['unmatched']);
});
it('handles empty tags — only priority 10 matched', () => {
const matcher = new TagMatcher();
const critical = makePrompt({ name: 'critical', priority: 10 });
const normal = makePrompt({ name: 'normal', priority: 5 });
const result = matcher.match([], [critical, normal]);
expect(result.fullContent.map((p) => p.name)).toEqual(['critical']);
expect(result.remaining.map((p) => p.name)).toEqual(['normal']);
});
it('handles empty prompts array', () => {
const matcher = new TagMatcher();
const result = matcher.match(['zigbee'], []);
expect(result.fullContent).toEqual([]);
expect(result.indexOnly).toEqual([]);
expect(result.remaining).toEqual([]);
});
it('all priority 10 prompts are included even beyond budget', () => {
const matcher = new TagMatcher(50); // Tiny budget
const c1 = makePrompt({ name: 'c1', priority: 10, content: 'x'.repeat(40) });
const c2 = makePrompt({ name: 'c2', priority: 10, content: 'y'.repeat(40) });
const result = matcher.match([], [c1, c2]);
// Both should be in fullContent — priority 10 has Infinity score
// First one fits budget, second overflows but still priority 10
expect(result.fullContent.length + result.indexOnly.length).toBe(2);
// At minimum the first one is in fullContent
expect(result.fullContent[0]!.name).toBe('c1');
});
it('sorts matched by score descending', () => {
const matcher = new TagMatcher();
const p1 = makePrompt({ name: 'p1', priority: 3, summary: 'mqtt zigbee lights' }); // 3 matches * 3 = 9
const p2 = makePrompt({ name: 'p2', priority: 8, summary: 'mqtt' }); // 1 match * 8 = 8
const p3 = makePrompt({ name: 'p3', priority: 2, summary: 'mqtt zigbee lights pairing automation' }); // 5 * 2 = 10
const result = matcher.match(['mqtt', 'zigbee', 'lights', 'pairing', 'automation'], [p1, p2, p3]);
expect(result.fullContent.map((p) => p.name)).toEqual(['p3', 'p1', 'p2']);
});
});
describe('extractKeywordsFromToolCall', () => {
it('extracts from tool name', () => {
const keywords = extractKeywordsFromToolCall('home-assistant/get_entities', {});
expect(keywords).toContain('home');
expect(keywords).toContain('assistant');
expect(keywords).toContain('get_entities');
});
it('extracts from string arguments', () => {
const keywords = extractKeywordsFromToolCall('tool', { domain: 'light', area: 'kitchen' });
expect(keywords).toContain('light');
expect(keywords).toContain('kitchen');
});
it('ignores short words (<=2 chars)', () => {
const keywords = extractKeywordsFromToolCall('ab', { x: 'hi' });
expect(keywords).not.toContain('ab');
expect(keywords).not.toContain('hi');
});
it('ignores long string values (>200 chars)', () => {
const keywords = extractKeywordsFromToolCall('tool', { data: 'x'.repeat(201) });
// Only 'tool' from the name
expect(keywords).toEqual(['tool']);
});
it('caps at 10 keywords', () => {
const args: Record<string, string> = {};
for (let i = 0; i < 20; i++) args[`key${i}`] = `keyword${i}value`;
const keywords = extractKeywordsFromToolCall('tool', args);
expect(keywords.length).toBeLessThanOrEqual(10);
});
it('lowercases all keywords', () => {
const keywords = extractKeywordsFromToolCall('MyTool', { name: 'MQTT' });
expect(keywords).toContain('mytool');
expect(keywords).toContain('mqtt');
});
});
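The scoring rule these tests exercise (score = number of matching tags × priority, with priority-10 prompts always included) can be sketched standalone. The names below are illustrative, not the repo's actual `TagMatcher` internals:

```typescript
// Minimal standalone sketch of the tag-match scoring the tests describe.
// Assumption: a match is a case-insensitive substring hit against name,
// summary, and chapters, as the 'matches against name, summary, and
// chapters' and 'performs case-insensitive matching' tests imply.
interface Prompt {
  name: string;
  priority: number;
  summary: string | null;
  chapters: string[] | null;
}

function scorePrompt(tags: string[], p: Prompt): number {
  if (p.priority === 10) return Infinity; // critical prompts always included
  const haystack = [p.name, p.summary ?? '', ...(p.chapters ?? [])]
    .join(' ')
    .toLowerCase();
  const matches = tags.filter((t) => haystack.includes(t.toLowerCase())).length;
  return matches * p.priority;
}

const tags = ['mqtt', 'zigbee', 'lights'];
const prompts: Prompt[] = [
  { name: 'p1', priority: 3, summary: 'mqtt zigbee lights', chapters: null },
  { name: 'p2', priority: 8, summary: 'mqtt', chapters: null },
];
const ranked = [...prompts].sort(
  (a, b) => scorePrompt(tags, b) - scorePrompt(tags, a),
);
console.log(ranked.map((p) => p.name)); // p1 scores 3*3=9, p2 scores 1*8=8
```

This mirrors why `sorts matched by score descending` puts the low-priority, many-tag prompt ahead of the high-priority, one-tag prompt.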

View File

@@ -2,3 +2,4 @@ export * from './types/index.js';
export * from './validation/index.js';
export * from './constants/index.js';
export * from './utils/index.js';
export * from './secrets/index.js';

View File

@@ -0,0 +1,63 @@
import { existsSync, mkdirSync, readFileSync, writeFileSync, chmodSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';
import type { SecretStore, SecretStoreDeps } from './types.js';
function defaultConfigDir(): string {
return join(homedir(), '.mcpctl');
}
function secretsPath(configDir: string): string {
return join(configDir, 'secrets');
}
export class FileSecretStore implements SecretStore {
private readonly configDir: string;
constructor(deps?: SecretStoreDeps) {
this.configDir = deps?.configDir ?? defaultConfigDir();
}
backend(): string {
return 'file';
}
async get(key: string): Promise<string | null> {
const data = this.readAll();
return data[key] ?? null;
}
async set(key: string, value: string): Promise<void> {
const data = this.readAll();
data[key] = value;
this.writeAll(data);
}
async delete(key: string): Promise<boolean> {
const data = this.readAll();
if (!(key in data)) return false;
delete data[key];
this.writeAll(data);
return true;
}
private readAll(): Record<string, string> {
const path = secretsPath(this.configDir);
if (!existsSync(path)) return {};
try {
const raw = readFileSync(path, 'utf-8');
return JSON.parse(raw) as Record<string, string>;
} catch {
return {};
}
}
private writeAll(data: Record<string, string>): void {
if (!existsSync(this.configDir)) {
mkdirSync(this.configDir, { recursive: true });
}
const path = secretsPath(this.configDir);
writeFileSync(path, JSON.stringify(data, null, 2) + '\n', 'utf-8');
chmodSync(path, 0o600);
}
}
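The persistence pattern above (a JSON blob written with owner-only permissions) can be exercised in isolation; a minimal sketch using a temp directory, with illustrative paths rather than the real `~/.mcpctl` location:

```typescript
import { mkdtempSync, writeFileSync, chmodSync, statSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Sketch of the same write path: pretty-printed JSON, then chmod to 0o600.
const dir = mkdtempSync(join(tmpdir(), 'mcpctl-secrets-'));
const path = join(dir, 'secrets');
const data: Record<string, string> = { 'api-key': 'hunter2' };
writeFileSync(path, JSON.stringify(data, null, 2) + '\n', 'utf-8');
chmodSync(path, 0o600); // owner read/write only, as the store enforces

const mode = statSync(path).mode & 0o777;
const roundTrip = JSON.parse(readFileSync(path, 'utf-8')) as Record<string, string>;
console.log(mode.toString(8), roundTrip['api-key']); // 600 hunter2
```

Because chmod runs after the write, the permission is applied regardless of the process umask (the `mode` option on `writeFileSync` only takes effect when the file is created).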

View File

@@ -0,0 +1,97 @@
import { spawn, execFile } from 'node:child_process';
import { promisify } from 'node:util';
import type { SecretStore } from './types.js';
const execFileAsync = promisify(execFile);
const SERVICE = 'mcpctl';
export type RunCommand = (cmd: string, args: string[], stdin?: string) => Promise<{ stdout: string; code: number }>;
function defaultRunCommand(cmd: string, args: string[], stdin?: string): Promise<{ stdout: string; code: number }> {
return new Promise((resolve, reject) => {
const child = spawn(cmd, args, {
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 5000,
});
const stdoutChunks: Buffer[] = [];
child.stdout.on('data', (chunk: Buffer) => stdoutChunks.push(chunk));
child.on('error', reject);
child.on('close', (code) => {
const stdout = Buffer.concat(stdoutChunks).toString('utf-8');
resolve({ stdout, code: code ?? 1 });
});
if (stdin !== undefined) {
child.stdin.write(stdin);
child.stdin.end();
} else {
child.stdin.end();
}
});
}
export interface GnomeKeyringDeps {
run?: RunCommand;
}
export class GnomeKeyringStore implements SecretStore {
private readonly run: RunCommand;
constructor(deps?: GnomeKeyringDeps) {
this.run = deps?.run ?? defaultRunCommand;
}
backend(): string {
return 'gnome-keyring';
}
async get(key: string): Promise<string | null> {
try {
const { stdout, code } = await this.run(
'secret-tool', ['lookup', 'service', SERVICE, 'key', key],
);
if (code !== 0 || !stdout) return null;
return stdout;
} catch {
return null;
}
}
async set(key: string, value: string): Promise<void> {
const { code } = await this.run(
'secret-tool',
['store', '--label', `mcpctl: ${key}`, 'service', SERVICE, 'key', key],
value,
);
if (code !== 0) {
throw new Error(`secret-tool store exited with code ${code}`);
}
}
async delete(key: string): Promise<boolean> {
try {
const { code } = await this.run(
'secret-tool', ['clear', 'service', SERVICE, 'key', key],
);
return code === 0;
} catch {
return false;
}
}
static async isAvailable(deps?: { run?: RunCommand }): Promise<boolean> {
try {
if (deps?.run) {
const { code } = await deps.run('secret-tool', ['--version']);
return code === 0;
}
await execFileAsync('secret-tool', ['--version'], { timeout: 3000 });
return true;
} catch {
return false;
}
}
}
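Since `RunCommand` is just a stdout-plus-exit-code capture, it is easy to exercise or mock without `secret-tool` installed. A standalone sketch of the same wrapper, using `cat` in place of `secret-tool` (assumes a POSIX shell environment):

```typescript
import { spawn } from 'node:child_process';

// Standalone equivalent of defaultRunCommand: capture stdout and the exit
// code, optionally writing a string to the child's stdin.
function run(cmd: string, args: string[], stdin?: string): Promise<{ stdout: string; code: number }> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { stdio: ['pipe', 'pipe', 'pipe'] });
    const chunks: Buffer[] = [];
    child.stdout.on('data', (c: Buffer) => chunks.push(c));
    child.on('error', reject);
    child.on('close', (code) =>
      resolve({ stdout: Buffer.concat(chunks).toString('utf-8'), code: code ?? 1 }),
    );
    if (stdin !== undefined) child.stdin.write(stdin);
    child.stdin.end();
  });
}

// `cat` echoes stdin back, mimicking how the secret value is piped to
// `secret-tool store` rather than passed as an argv element.
run('cat', [], 'hunter2').then(({ stdout, code }) => {
  console.log(JSON.stringify(stdout), code); // "hunter2" 0
});
```

Passing the secret via stdin keeps it out of the process argument list, where other local users could read it from `ps` output.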

View File

@@ -0,0 +1,15 @@
export type { SecretStore, SecretStoreDeps } from './types.js';
export { FileSecretStore } from './file-store.js';
export { GnomeKeyringStore } from './gnome-keyring.js';
export type { GnomeKeyringDeps, RunCommand } from './gnome-keyring.js';
import { GnomeKeyringStore } from './gnome-keyring.js';
import { FileSecretStore } from './file-store.js';
import type { SecretStore, SecretStoreDeps } from './types.js';
export async function createSecretStore(deps?: SecretStoreDeps): Promise<SecretStore> {
if (await GnomeKeyringStore.isAvailable()) {
return new GnomeKeyringStore();
}
return new FileSecretStore(deps);
}

View File

@@ -0,0 +1,10 @@
export interface SecretStore {
get(key: string): Promise<string | null>;
set(key: string, value: string): Promise<void>;
delete(key: string): Promise<boolean>;
backend(): string;
}
export interface SecretStoreDeps {
configDir?: string;
}
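The interface is small enough that tests can swap in an in-memory implementation instead of either real backend; a sketch (not part of this diff, the class name is illustrative):

```typescript
// Re-declared locally so the sketch is self-contained; matches the
// SecretStore interface from ./types.js above.
interface SecretStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
  delete(key: string): Promise<boolean>;
  backend(): string;
}

// In-memory implementation, handy as a test double for FileSecretStore
// or GnomeKeyringStore.
class MemorySecretStore implements SecretStore {
  private readonly data = new Map<string, string>();
  backend(): string { return 'memory'; }
  async get(key: string): Promise<string | null> { return this.data.get(key) ?? null; }
  async set(key: string, value: string): Promise<void> { this.data.set(key, value); }
  async delete(key: string): Promise<boolean> { return this.data.delete(key); }
}

(async () => {
  const store = new MemorySecretStore();
  await store.set('token', 'abc');
  console.log(store.backend(), await store.get('token')); // memory abc
})();
```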

Some files were not shown because too many files have changed in this diff.