feat: mcpctl v0.0.1 — first public release

Comprehensive MCP server management with a kubectl-style CLI.

Key features in this release:
- Declarative YAML apply/get round-trip with project cloning support
- Gated sessions with prompt intelligence for Claude
- Interactive MCP console with traffic inspector
- Persistent STDIO connections for containerized servers
- RBAC with name-scoped bindings
- Shell completions (fish + bash) auto-generated
- Rate-limit retry with exponential backoff in apply
- Project-scoped prompt management
- Credential scrubbing from git history

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Michal
2026-02-27 17:05:05 +00:00
parent 414a8d3774
commit 69867bd47a
65 changed files with 5710 additions and 695 deletions

.gitignore vendored

@@ -38,3 +38,9 @@ pgdata/
 # Prisma
 src/db/prisma/migrations/*.sql.backup
 logs.sh
+# Temp/test files
+*.backup.json
+mcpctl-backup.json
+a.yaml
+test-mcp.sh

README.md Normal file

@@ -0,0 +1,359 @@
# mcpctl
**kubectl for MCP servers.** A management system for [Model Context Protocol](https://modelcontextprotocol.io) servers — define, deploy, and connect MCP servers to Claude using familiar kubectl-style commands.
```
mcpctl get servers
NAME TRANSPORT REPLICAS DOCKER IMAGE DESCRIPTION
grafana STDIO 1 grafana/mcp-grafana:latest Grafana MCP server
home-assistant SSE 1 ghcr.io/homeassistant-ai/ha-mcp:latest Home Assistant MCP
docmost SSE 1 10.0.0.194:3012/michal/docmost-mcp:latest Docmost wiki MCP
```
## What is this?
mcpctl manages MCP servers the same way kubectl manages Kubernetes pods. You define servers declaratively in YAML, group them into projects, and connect them to Claude Code or any MCP client through a local proxy.
**The architecture:**
```
Claude Code <--STDIO--> mcplocal (local proxy) <--HTTP--> mcpd (daemon) <--Docker--> MCP servers
```
- **mcpd** — the daemon. Runs on a server, manages MCP server containers (Docker/Podman), stores configuration in PostgreSQL.
- **mcplocal** — local proxy. Runs on your machine, presents a single MCP endpoint to Claude that merges tools from all your servers. Handles namespacing (`grafana/search_dashboards`), gated sessions, and prompt delivery.
- **mcpctl** — the CLI. Talks to mcpd (via mcplocal or directly) to manage everything.
## Quick Start
### 1. Install
```bash
# From RPM repository
sudo dnf config-manager --add-repo https://your-registry/api/packages/mcpctl/rpm.repo
sudo dnf install mcpctl
# Or build from source
git clone https://github.com/your-org/mcpctl.git
cd mcpctl
pnpm install
pnpm build
pnpm rpm:build # requires bun and nfpm
```
### 2. Connect to a daemon
```bash
# Login to an mcpd instance
mcpctl login --mcpd-url http://your-server:3000
# Check connectivity
mcpctl status
```
### 3. Create your first secret
Secrets store credentials that servers need — API tokens, passwords, etc.
```bash
mcpctl create secret grafana-token \
--data TOKEN=glsa_xxxxxxxxxxxx
```
### 4. Create your first server
A server is an MCP server definition — what Docker image to run, what transport it speaks, what environment it needs.
```bash
mcpctl create server grafana \
--docker-image grafana/mcp-grafana:latest \
--transport STDIO \
--env GRAFANA_URL=http://grafana.local:3000 \
--env GRAFANA_AUTH_TOKEN=secretRef:grafana-token:TOKEN
```
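The `secretRef:<secret>:<key>` form tells mcpd to resolve the value from a stored secret when the container starts, so credentials never appear in server definitions. As an illustrative sketch only (the type names and parsing here are assumptions, not mcpd's actual code), such a reference could be parsed like this:

```typescript
// Hypothetical parser for env values like "secretRef:grafana-token:TOKEN".
// Assumes a simple colon-separated form with no colons inside names/keys.
type EnvValue =
  | { kind: "literal"; value: string }
  | { kind: "secretRef"; secretName: string; key: string };

function parseEnvValue(raw: string): EnvValue {
  if (raw.startsWith("secretRef:")) {
    const [, secretName, key] = raw.split(":");
    if (!secretName || !key) {
      throw new Error(`malformed secretRef: ${raw}`);
    }
    return { kind: "secretRef", secretName, key };
  }
  // Anything else is passed through as a literal value.
  return { kind: "literal", value: raw };
}
```

With this sketch, `parseEnvValue("secretRef:grafana-token:TOKEN")` yields a reference to key `TOKEN` of secret `grafana-token`, while plain values such as `http://grafana.local:3000` stay literal.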
mcpd pulls the image, starts a container, and keeps it running. Check on it:
```bash
mcpctl get instances # See running containers
mcpctl logs grafana # View server logs
mcpctl describe server grafana # Full details
```
### 5. Create a project
A project groups servers together and configures how Claude interacts with them.
```bash
mcpctl create project monitoring \
--description "Grafana dashboards and alerting" \
--server grafana \
--no-gated
```
### 6. Connect Claude Code
Generate the `.mcp.json` config for Claude Code:
```bash
mcpctl config claude --project monitoring
```
This writes a `.mcp.json` that tells Claude Code to connect through mcplocal. Restart Claude Code and your Grafana tools appear. To preview exactly what Claude sees:
```
mcpctl console monitoring # Preview what Claude sees
```
## Declarative Configuration
Everything can be defined in YAML and applied with `mcpctl apply`:
```yaml
# infrastructure.yaml
secrets:
- name: grafana-token
data:
TOKEN: "glsa_xxxxxxxxxxxx"
servers:
- name: grafana
description: "Grafana dashboards and alerting"
dockerImage: grafana/mcp-grafana:latest
transport: STDIO
env:
- name: GRAFANA_URL
value: "http://grafana.local:3000"
- name: GRAFANA_AUTH_TOKEN
valueFrom:
secretRef:
name: grafana-token
key: TOKEN
projects:
- name: monitoring
description: "Infrastructure monitoring"
gated: false
servers:
- grafana
```
```bash
mcpctl apply -f infrastructure.yaml
```
Round-trip works too — export, edit, re-apply:
```bash
mcpctl get all --project monitoring -o yaml > backup.yaml
# edit backup.yaml...
mcpctl apply -f backup.yaml
```
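Conceptually, `apply` reconciles desired state (the YAML) against current state (what mcpd already has): missing resources are created, changed ones updated, unchanged ones left alone. A minimal sketch of that reconcile step (illustrative, with hypothetical names; not the actual apply implementation):

```typescript
// Sketch of the plan step behind a declarative apply.
interface Resource {
  name: string;
  spec: Record<string, unknown>;
}

function planApply(desired: Resource[], current: Resource[]): string[] {
  const currentByName = new Map(current.map((r) => [r.name, r]));
  const actions: string[] = [];
  for (const d of desired) {
    const existing = currentByName.get(d.name);
    if (!existing) {
      actions.push(`create ${d.name}`);
    } else if (JSON.stringify(existing.spec) !== JSON.stringify(d.spec)) {
      actions.push(`update ${d.name}`);
    }
    // Unchanged resources produce no action, which is what makes
    // re-applying an exported backup safe.
  }
  return actions;
}
```

This idempotence is why the export/edit/re-apply round-trip above is safe to run repeatedly.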
## Resources
| Resource | What it is | Example |
|----------|-----------|---------|
| **server** | MCP server definition | Docker image + transport + env vars |
| **instance** | Running container (immutable) | Auto-created from server replicas |
| **secret** | Key-value credentials | API tokens, passwords |
| **template** | Reusable server blueprint | Community server configs |
| **project** | Workspace grouping servers | "monitoring", "home-automation" |
| **prompt** | Curated content for Claude | Instructions, docs, guides |
| **promptrequest** | Pending prompt proposal | LLM-submitted, needs approval |
| **rbac** | Access control bindings | Who can do what |
| **serverattachment** | Server-to-project link | Virtual resource for `apply` |
## Commands
```bash
# List resources
mcpctl get servers
mcpctl get instances
mcpctl get projects
mcpctl get prompts --project myproject
# Detailed view
mcpctl describe server grafana
mcpctl describe project monitoring
# Create resources
mcpctl create server <name> [flags]
mcpctl create secret <name> --data KEY=value
mcpctl create project <name> --server <srv> [--gated]
mcpctl create prompt <name> --project <proj> --content "..."
# Modify resources
mcpctl edit server grafana # Opens in $EDITOR
mcpctl patch project myproj gated=true
mcpctl apply -f config.yaml # Declarative create/update
# Delete resources
mcpctl delete server grafana
# Logs and debugging
mcpctl logs grafana # Container logs
mcpctl console monitoring # Interactive MCP console
mcpctl console --inspect # Traffic inspector
# Backup and restore
mcpctl backup -o backup.json
mcpctl restore -i backup.json
# Project management
mcpctl --project monitoring get servers # Project-scoped listing
mcpctl --project monitoring attach-server grafana
mcpctl --project monitoring detach-server grafana
```
## Templates
Templates are reusable server configurations. Create a server from a template without repeating all the config:
```bash
# Register a template
mcpctl create template home-assistant \
--docker-image "ghcr.io/homeassistant-ai/ha-mcp:latest" \
--transport SSE \
--container-port 8086
# Create a server from it
mcpctl create server my-ha \
--from-template home-assistant \
--env-from-secret ha-secrets
```
## Gated Sessions
Projects are **gated** by default. When Claude connects to a gated project:
1. Claude sees only a `begin_session` tool initially
2. Claude calls `begin_session` with a description of its task
3. mcplocal matches relevant prompts and delivers them
4. The full tool list is revealed
This keeps Claude's context focused — instead of dumping 100+ tools and pages of docs upfront, only the relevant ones are delivered based on the task at hand.
```bash
# Enable/disable gating
mcpctl patch project monitoring gated=true
mcpctl patch project monitoring gated=false
```
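The gating flow above amounts to a tiny state machine: the tool list depends on whether `begin_session` has been called. A sketch under assumed names (the real mcplocal logic also matches and delivers prompts, which is omitted here):

```typescript
// Illustrative gated-session state machine; not mcplocal's actual code.
interface Session {
  started: boolean;
}

const ALL_TOOLS = ["grafana/search_dashboards", "grafana/get_alerts"];

function listTools(session: Session, gated: boolean): string[] {
  // A gated project exposes only begin_session until the session starts.
  if (gated && !session.started) return ["begin_session"];
  return ALL_TOOLS;
}

function beginSession(session: Session, taskDescription: string): string[] {
  session.started = true;
  // A real implementation would match stored prompts against
  // taskDescription here and deliver the relevant ones.
  return listTools(session, true);
}
```

Calling `beginSession(session, "investigate firing alerts")` flips the gate, after which `listTools` returns the full namespaced tool list.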
## Prompts
Prompts are curated content delivered to Claude through the MCP protocol. They can be plain text or linked to external MCP resources (like wiki pages).
```bash
# Create a text prompt
mcpctl create prompt deployment-guide \
--project monitoring \
--content-file docs/deployment.md \
--priority 7
# Create a linked prompt (content fetched live from an MCP resource)
mcpctl create prompt wiki-page \
--project monitoring \
--link "monitoring/docmost:docmost://pages/abc123" \
--priority 5
```
Claude can also **propose** prompts during a session. These appear as prompt requests that you can review and approve:
```bash
mcpctl get promptrequests
mcpctl approve promptrequest proposed-guide
```
## Interactive Console
The console lets you see exactly what Claude sees — tools, resources, prompts — and call tools interactively:
```bash
mcpctl console monitoring
```
The traffic inspector watches MCP traffic from other clients in real-time:
```bash
mcpctl console --inspect
```
## Architecture
```
┌─────────────────────────────────────────┐
│ mcpd (daemon) │
│ │
│ REST API (/api/v1/*) │
│ PostgreSQL (Prisma ORM) │
│ Docker/Podman container management │
│ Health probes (STDIO, SSE, HTTP) │
│ RBAC enforcement │
└──────────────┬──────────────────────────┘
│ HTTP
┌──────────────┐ STDIO ┌──────────────┴──────────────────────────┐
│ Claude Code │◄─────────►│ mcplocal (proxy) │
│ │ │ │
│ (or any MCP │ │ Namespace-merging MCP proxy │
│ client) │ │ Gated sessions + prompt delivery │
│ │ │ Per-project endpoints │
└──────────────┘ │ Traffic inspection │
└──────────────┬──────────────────────────┘
│ STDIO/SSE/HTTP
┌──────────────┴──────────────────────────┐
│ MCP Server Containers │
│ │
│ grafana/ home-assistant/ docmost/ │
│ (tools are namespaced by server name) │
└─────────────────────────────────────────┘
```
**Tool namespacing**: When Claude connects to a project with servers `grafana` and `slack`, it sees tools like `grafana/search_dashboards` and `slack/send_message`. The proxy routes each call to the correct upstream server.
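As a sketch, the namespacing scheme is prefixing on the way in and splitting on the first `/` on the way out (illustrative only; the real proxy does this over live MCP connections):

```typescript
// Illustrative merge/route logic for namespaced tools like
// "grafana/search_dashboards". Not mcplocal's actual implementation.
function mergeToolLists(upstream: Record<string, string[]>): string[] {
  // Prefix every upstream tool with its server name.
  return Object.entries(upstream).flatMap(([server, tools]) =>
    tools.map((t) => `${server}/${t}`)
  );
}

function splitNamespacedTool(name: string): { server: string; tool: string } {
  const idx = name.indexOf("/");
  if (idx < 0) throw new Error(`tool name is not namespaced: ${name}`);
  // Route the call to the upstream server named by the prefix.
  return { server: name.slice(0, idx), tool: name.slice(idx + 1) };
}
```

For example, merging `{ grafana: ["search_dashboards"], slack: ["send_message"] }` yields `grafana/search_dashboards` and `slack/send_message`, and splitting `grafana/search_dashboards` routes the call to the `grafana` upstream.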
## Project Structure
```
mcpctl/
├── src/
│ ├── cli/ # mcpctl command-line interface (Commander.js)
│ ├── mcpd/ # Daemon server (Fastify 5, REST API)
│ ├── mcplocal/ # Local MCP proxy (namespace merging, gating)
│ ├── db/ # Database schema (Prisma) and migrations
│ └── shared/ # Shared types and utilities
├── deploy/ # Docker Compose for local development
├── stack/ # Production deployment (Portainer)
├── scripts/ # Build, release, and deploy scripts
├── examples/ # Example YAML configurations
└── completions/ # Shell completions (fish, bash)
```
## Development
```bash
# Prerequisites: Node.js 20+, pnpm 9+, Docker/Podman
# Install dependencies
pnpm install
# Start local database
pnpm db:up
# Generate Prisma client
cd src/db && npx prisma generate && cd ../..
# Build all packages
pnpm build
# Run tests
pnpm test:run
# Development mode (mcpd with hot-reload)
cd src/mcpd && pnpm dev
```
## License
MIT

@@ -1,28 +1,32 @@
+# mcpctl bash completions — auto-generated by scripts/generate-completions.ts
+# DO NOT EDIT MANUALLY — run: pnpm completions:generate
 _mcpctl() {
     local cur prev words cword
     _init_completion || return
-    local commands="status login logout config get describe delete logs create edit apply backup restore mcp console approve help"
+    local commands="status login logout config get describe delete logs create edit apply patch backup restore approve console"
-    local project_commands="attach-server detach-server get describe delete logs create edit help"
+    local project_commands="get describe delete logs create edit attach-server detach-server"
-    local global_opts="-v --version --daemon-url --direct --project -h --help"
+    local global_opts="-v --version --daemon-url --direct -p --project -h --help"
-    local resources="servers instances secrets templates projects users groups rbac prompts promptrequests"
+    local resources="servers instances secrets templates projects users groups rbac prompts promptrequests serverattachments all"
+    local resource_aliases="servers instances secrets templates projects users groups rbac prompts promptrequests serverattachments all server srv instance inst secret sec template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa"
-    # Check if --project was given
+    # Check if --project/-p was given
     local has_project=false
     local i
     for ((i=1; i < cword; i++)); do
-        if [[ "${words[i]}" == "--project" ]]; then
+        if [[ "${words[i]}" == "--project" || "${words[i]}" == "-p" ]]; then
             has_project=true
             break
         fi
     done
-    # Find the first subcommand (skip --project and its argument, skip flags)
+    # Find the first subcommand
     local subcmd=""
     local subcmd_pos=0
     for ((i=1; i < cword; i++)); do
-        if [[ "${words[i]}" == "--project" || "${words[i]}" == "--daemon-url" ]]; then
+        if [[ "${words[i]}" == "--project" || "${words[i]}" == "--daemon-url" || "${words[i]}" == "-p" ]]; then
-            ((i++)) # skip the argument
+            ((i++))
             continue
         fi
         if [[ "${words[i]}" != -* ]]; then
@@ -32,116 +36,215 @@ _mcpctl() {
         fi
     done
-    # Find the resource type after get/describe/delete/edit
+    # Find the resource type after resource commands
     local resource_type=""
     if [[ -n "$subcmd_pos" ]] && [[ $subcmd_pos -gt 0 ]]; then
         for ((i=subcmd_pos+1; i < cword; i++)); do
-            if [[ "${words[i]}" != -* ]] && [[ " $resources " == *" ${words[i]} "* ]]; then
+            if [[ "${words[i]}" != -* ]] && [[ " $resource_aliases " == *" ${words[i]} "* ]]; then
                 resource_type="${words[i]}"
                 break
             fi
         done
     fi
-    # If completing the --project value
-    if [[ "$prev" == "--project" ]]; then
-        local names
-        names=$(mcpctl get projects -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
-        COMPREPLY=($(compgen -W "$names" -- "$cur"))
-        return
-    fi
-    # Fetch resource names dynamically (jq extracts only top-level names)
-    _mcpctl_resource_names() {
-        local rt="$1"
-        if [[ -n "$rt" ]]; then
-            # Instances don't have a name field — use server.name instead
-            if [[ "$rt" == "instances" ]]; then
-                mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
-            else
-                mcpctl get "$rt" -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
-            fi
-        fi
-    }
-    # Get the --project value from the command line
+    # Helper: get --project/-p value
     _mcpctl_get_project_value() {
         local i
         for ((i=1; i < cword; i++)); do
-            if [[ "${words[i]}" == "--project" ]] && (( i+1 < cword )); then
+            if [[ "${words[i]}" == "--project" || "${words[i]}" == "-p" ]] && (( i+1 < cword )); then
                 echo "${words[i+1]}"
                 return
             fi
         done
     }
+    # Helper: fetch resource names
+    _mcpctl_resource_names() {
+        local rt="$1"
+        if [[ -n "$rt" ]]; then
+            if [[ "$rt" == "instances" ]]; then
+                mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
+            else
+                mcpctl get "$rt" -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
+            fi
+        fi
+    }
+    # Helper: find sub-subcommand (for config/create)
+    _mcpctl_get_subcmd() {
+        local parent_pos="$1"
+        local i
+        for ((i=parent_pos+1; i < cword; i++)); do
+            if [[ "${words[i]}" != -* ]]; then
+                echo "${words[i]}"
+                return
+            fi
+        done
+    }
+    # If completing option values
+    if [[ "$prev" == "--project" || "$prev" == "-p" ]]; then
+        local names
+        names=$(mcpctl get projects -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
+        COMPREPLY=($(compgen -W "$names" -- "$cur"))
+        return
+    fi
-    case "$subcmd" in
-        config)
-            if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
-                COMPREPLY=($(compgen -W "view set path reset claude claude-generate setup impersonate help" -- "$cur"))
-            fi
-            return ;;
+    case "$subcmd" in
         status)
-            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
+            COMPREPLY=($(compgen -W "-o --output -h --help" -- "$cur"))
             return ;;
         login)
-            COMPREPLY=($(compgen -W "--url --email --password -h --help" -- "$cur"))
+            COMPREPLY=($(compgen -W "--mcpd-url -h --help" -- "$cur"))
             return ;;
         logout)
+            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
             return ;;
-        mcp)
-            return ;;
-        console)
-            # First arg is project name
-            if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
-                local names
-                names=$(mcpctl get projects -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
-                COMPREPLY=($(compgen -W "$names" -- "$cur"))
+        config)
+            local config_sub=$(_mcpctl_get_subcmd $subcmd_pos)
+            if [[ -z "$config_sub" ]]; then
+                COMPREPLY=($(compgen -W "view set path reset claude claude-generate setup impersonate help" -- "$cur"))
+            else
+                case "$config_sub" in
+                    view)
+                        COMPREPLY=($(compgen -W "-o --output -h --help" -- "$cur"))
+                        ;;
+                    set)
+                        COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
+                        ;;
+                    path)
+                        COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
+                        ;;
+                    reset)
+                        COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
+                        ;;
+                    claude)
+                        COMPREPLY=($(compgen -W "--project -o --output --inspect --stdout -h --help" -- "$cur"))
+                        ;;
+                    claude-generate)
+                        COMPREPLY=($(compgen -W "--project -o --output --inspect --stdout -h --help" -- "$cur"))
+                        ;;
+                    setup)
+                        COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
+                        ;;
+                    impersonate)
+                        COMPREPLY=($(compgen -W "--quit -h --help" -- "$cur"))
+                        ;;
+                    *)
+                        COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
+                        ;;
+                esac
             fi
             return ;;
-        get|describe|delete)
+        get)
             if [[ -z "$resource_type" ]]; then
-                COMPREPLY=($(compgen -W "$resources" -- "$cur"))
+                COMPREPLY=($(compgen -W "$resources -o --output --project -A --all -h --help" -- "$cur"))
             else
                 local names
                 names=$(_mcpctl_resource_names "$resource_type")
-                COMPREPLY=($(compgen -W "$names -o --output -h --help" -- "$cur"))
+                COMPREPLY=($(compgen -W "$names -o --output --project -A --all -h --help" -- "$cur"))
             fi
             return ;;
+        describe)
+            if [[ -z "$resource_type" ]]; then
+                COMPREPLY=($(compgen -W "$resources -o --output --show-values -h --help" -- "$cur"))
+            else
+                local names
+                names=$(_mcpctl_resource_names "$resource_type")
+                COMPREPLY=($(compgen -W "$names -o --output --show-values -h --help" -- "$cur"))
+            fi
+            return ;;
+        delete)
+            if [[ -z "$resource_type" ]]; then
+                COMPREPLY=($(compgen -W "$resources --project -h --help" -- "$cur"))
+            else
+                local names
+                names=$(_mcpctl_resource_names "$resource_type")
+                COMPREPLY=($(compgen -W "$names --project -h --help" -- "$cur"))
+            fi
+            return ;;
+        logs)
+            if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
+                local names
+                names=$(mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null)
+                COMPREPLY=($(compgen -W "$names -t --tail -i --instance -h --help" -- "$cur"))
+            else
+                COMPREPLY=($(compgen -W "-t --tail -i --instance -h --help" -- "$cur"))
+            fi
+            return ;;
+        create)
+            local create_sub=$(_mcpctl_get_subcmd $subcmd_pos)
+            if [[ -z "$create_sub" ]]; then
+                COMPREPLY=($(compgen -W "server secret project user group rbac prompt serverattachment promptrequest help" -- "$cur"))
+            else
+                case "$create_sub" in
+                    server)
+                        COMPREPLY=($(compgen -W "-d --description --package-name --docker-image --transport --repository-url --external-url --command --container-port --replicas --env --from-template --env-from-secret --force -h --help" -- "$cur"))
+                        ;;
+                    secret)
+                        COMPREPLY=($(compgen -W "--data --force -h --help" -- "$cur"))
+                        ;;
+                    project)
+                        COMPREPLY=($(compgen -W "-d --description --proxy-mode --prompt --gated --no-gated --server --force -h --help" -- "$cur"))
+                        ;;
+                    user)
+                        COMPREPLY=($(compgen -W "--password --name --force -h --help" -- "$cur"))
+                        ;;
+                    group)
+                        COMPREPLY=($(compgen -W "--description --member --force -h --help" -- "$cur"))
+                        ;;
+                    rbac)
+                        COMPREPLY=($(compgen -W "--subject --binding --operation --force -h --help" -- "$cur"))
+                        ;;
+                    prompt)
+                        COMPREPLY=($(compgen -W "--project --content --content-file --priority --link -h --help" -- "$cur"))
+                        ;;
+                    serverattachment)
+                        COMPREPLY=($(compgen -W "--project -h --help" -- "$cur"))
+                        ;;
+                    promptrequest)
+                        COMPREPLY=($(compgen -W "--project --content --content-file --priority -h --help" -- "$cur"))
+                        ;;
+                    *)
+                        COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
+                        ;;
+                esac
+            fi
+            return ;;
         edit)
             if [[ -z "$resource_type" ]]; then
-                COMPREPLY=($(compgen -W "servers projects" -- "$cur"))
+                COMPREPLY=($(compgen -W "servers secrets projects groups rbac prompts promptrequests -h --help" -- "$cur"))
             else
                 local names
                 names=$(_mcpctl_resource_names "$resource_type")
                 COMPREPLY=($(compgen -W "$names -h --help" -- "$cur"))
             fi
             return ;;
-        logs)
-            COMPREPLY=($(compgen -W "--tail --since -f --follow -h --help" -- "$cur"))
-            return ;;
-        create)
-            if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
-                COMPREPLY=($(compgen -W "server secret project user group rbac prompt promptrequest help" -- "$cur"))
-            fi
-            return ;;
-        apply)
-            COMPREPLY=($(compgen -f -- "$cur"))
-            return ;;
+        apply)
+            COMPREPLY=($(compgen -f -W "-f --file --dry-run -h --help" -- "$cur"))
+            return ;;
+        patch)
+            if [[ -z "$resource_type" ]]; then
+                COMPREPLY=($(compgen -W "$resources -h --help" -- "$cur"))
+            else
+                local names
+                names=$(_mcpctl_resource_names "$resource_type")
+                COMPREPLY=($(compgen -W "$names -h --help" -- "$cur"))
+            fi
+            return ;;
         backup)
-            COMPREPLY=($(compgen -W "-o --output -p --password -h --help" -- "$cur"))
+            COMPREPLY=($(compgen -W "-o --output -p --password -r --resources -h --help" -- "$cur"))
             return ;;
         restore)
             COMPREPLY=($(compgen -W "-i --input -p --password -c --conflict -h --help" -- "$cur"))
             return ;;
         attach-server)
-            # Only complete if no server arg given yet (first arg after subcmd)
             if [[ $((cword - subcmd_pos)) -ne 1 ]]; then return; fi
             local proj names all_servers proj_servers
             proj=$(_mcpctl_get_project_value)
             if [[ -n "$proj" ]]; then
-                all_servers=$(mcpctl get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
+                all_servers=$(mcpctl get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
-                proj_servers=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
+                proj_servers=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
                 names=$(comm -23 <(echo "$all_servers" | sort) <(echo "$proj_servers" | sort))
             else
                 names=$(_mcpctl_resource_names "servers")
@@ -149,22 +252,33 @@ _mcpctl() {
             COMPREPLY=($(compgen -W "$names" -- "$cur"))
             return ;;
         detach-server)
-            # Only complete if no server arg given yet (first arg after subcmd)
             if [[ $((cword - subcmd_pos)) -ne 1 ]]; then return; fi
             local proj names
             proj=$(_mcpctl_get_project_value)
             if [[ -n "$proj" ]]; then
-                names=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null)
+                names=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
             fi
             COMPREPLY=($(compgen -W "$names" -- "$cur"))
             return ;;
         approve)
             if [[ -z "$resource_type" ]]; then
-                COMPREPLY=($(compgen -W "promptrequest" -- "$cur"))
+                COMPREPLY=($(compgen -W "promptrequest -h --help" -- "$cur"))
             else
                 local names
                 names=$(_mcpctl_resource_names "$resource_type")
-                COMPREPLY=($(compgen -W "$names" -- "$cur"))
+                COMPREPLY=($(compgen -W "$names -h --help" -- "$cur"))
             fi
             return ;;
+        mcp)
+            COMPREPLY=($(compgen -W "-p --project -h --help" -- "$cur"))
+            return ;;
+        console)
+            if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
+                local names
+                names=$(mcpctl get projects -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
+                COMPREPLY=($(compgen -W "$names --inspect --stdin-mcp -h --help" -- "$cur"))
+            else
+                COMPREPLY=($(compgen -W "--inspect --stdin-mcp -h --help" -- "$cur"))
+            fi
+            return ;;
         help)

@@ -1,10 +1,11 @@
-# mcpctl fish completions
+# mcpctl fish completions — auto-generated by scripts/generate-completions.ts
+# DO NOT EDIT MANUALLY — run: pnpm completions:generate
 # Erase any stale completions from previous versions
 complete -c mcpctl -e
-set -l commands status login logout config get describe delete logs create edit apply patch backup restore mcp console approve help
+set -l commands status login logout config get describe delete logs create edit apply patch backup restore approve console
-set -l project_commands attach-server detach-server get describe delete logs create edit help
+set -l project_commands get describe delete logs create edit attach-server detach-server
 # Disable file completions by default
 complete -c mcpctl -f
@@ -12,37 +13,37 @@ complete -c mcpctl -f
 # Global options
 complete -c mcpctl -s v -l version -d 'Show version'
 complete -c mcpctl -l daemon-url -d 'mcplocal daemon URL' -x
-complete -c mcpctl -l direct -d 'Bypass mcplocal, connect directly to mcpd'
+complete -c mcpctl -l direct -d 'bypass mcplocal and connect directly to mcpd'
-complete -c mcpctl -l project -d 'Target project context' -x
+complete -c mcpctl -s p -l project -d 'Target project for project commands' -xa '(__mcpctl_project_names)'
 complete -c mcpctl -s h -l help -d 'Show help'
-# Helper: check if --project was given
+# ---- Runtime helpers ----
+# Helper: check if --project or -p was given
 function __mcpctl_has_project
     set -l tokens (commandline -opc)
     for i in (seq (count $tokens))
-        if test "$tokens[$i]" = "--project"
+        if test "$tokens[$i]" = "--project" -o "$tokens[$i]" = "-p"
             return 0
         end
     end
     return 1
 end
-# Helper: check if a resource type has been selected after get/describe/delete/edit
+# Resource type detection
-set -l resources servers instances secrets templates projects users groups rbac prompts promptrequests
+set -l resources servers instances secrets templates projects users groups rbac prompts promptrequests serverattachments all
-# All accepted resource aliases (plural + singular + short forms)
-set -l resource_aliases servers server srv instances instance inst secrets secret sec templates template tpl projects project proj users user groups group rbac rbac-definition rbac-binding prompts prompt promptrequests promptrequest pr
 function __mcpctl_needs_resource_type
+    set -l resource_aliases servers instances secrets templates projects users groups rbac prompts promptrequests serverattachments all server srv instance inst secret sec template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa
     set -l tokens (commandline -opc)
     set -l found_cmd false
     for tok in $tokens
         if $found_cmd
-            # Check if next token after get/describe/delete/edit is a resource type or alias
             if contains -- $tok $resource_aliases
                 return 1 # resource type already present
             end
         end
-        if contains -- $tok get describe delete edit patch
+        if contains -- $tok get describe delete edit patch approve
             set found_cmd true
         end
     end
@@ -55,21 +56,24 @@ end
 # Map any resource alias to the canonical plural form for API calls
 function __mcpctl_resolve_resource
     switch $argv[1]
         case server srv servers; echo servers
         case instance inst instances; echo instances
         case secret sec secrets; echo secrets
         case template tpl templates; echo templates
         case project proj projects; echo projects
         case user users; echo users
         case group groups; echo groups
         case rbac rbac-definition rbac-binding; echo rbac
         case prompt prompts; echo prompts
         case promptrequest promptrequests pr; echo promptrequests
+        case serverattachment serverattachments sa; echo serverattachments
+        case all; echo all
         case '*'; echo $argv[1]
     end
 end
 function __mcpctl_get_resource_type
+    set -l resource_aliases servers instances secrets templates projects users groups rbac prompts promptrequests serverattachments all server srv instance inst secret sec template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa
     set -l tokens (commandline -opc)
     set -l found_cmd false
     for tok in $tokens
@@ -79,39 +83,37 @@ function __mcpctl_get_resource_type
                 return
             end
         end
-        if contains -- $tok get describe delete edit patch
+        if contains -- $tok get describe delete edit patch approve
             set found_cmd true
         end
     end
 end
-# Fetch resource names dynamically from the API (jq extracts only top-level names)
+# Fetch resource names dynamically from the API
 function __mcpctl_resource_names
     set -l resource (__mcpctl_get_resource_type)
     if test -z "$resource"
         return
     end
-    # Instances don't have a name field — use server.name instead
     if test "$resource" = "instances"
         mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
     else if test "$resource" = "prompts" -o "$resource" = "promptrequests"
-        # Use -A to include all projects, not just global
-        mcpctl get $resource -A -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
+        mcpctl get $resource -A -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
     else
-        mcpctl get $resource -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
+        mcpctl get $resource -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
     end
 end
 # Fetch project names for --project value
 function __mcpctl_project_names
-    mcpctl get projects -o json 2>/dev/null | jq -r '.[][].name' 2>/dev/null
+    mcpctl get projects -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
 end
-# Helper: get the --project value from the command line
+# Helper: get the --project/-p value from the command line
 function __mcpctl_get_project_value
     set -l tokens (commandline -opc)
     for i in (seq (count $tokens))
-        if test "$tokens[$i]" = "--project"; and test $i -lt (count $tokens)
+        if test "$tokens[$i]" = "--project" -o "$tokens[$i]" = "-p"; and test $i -lt (count $tokens)
             echo $tokens[(math $i + 1)]
             return
         end
     end
@@ -124,19 +126,18 @@ function __mcpctl_project_servers
if test -z "$proj"
return
end
mcpctl --project $proj get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
end
# Servers NOT attached to the project (for attach-server)
function __mcpctl_available_servers
set -l proj (__mcpctl_get_project_value)
if test -z "$proj"
mcpctl get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
return
end
set -l all (mcpctl get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
set -l attached (mcpctl --project $proj get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
for s in $all
if not contains -- $s $attached
echo $s
@@ -144,45 +145,31 @@ function __mcpctl_available_servers
end
end
# Instance names for logs
function __mcpctl_instance_names
mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
end
# Helper: check if a positional arg has been given for a specific command
function __mcpctl_needs_arg_for
set -l cmd $argv[1]
set -l tokens (commandline -opc)
set -l found false
for tok in $tokens
if $found
if not string match -q -- '-*' $tok
return 1 # arg already present
end
end
if test "$tok" = "$cmd"
set found true
end
end
if $found
return 0 # command found but no arg yet
end
return 1
end
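The token scan behind `__mcpctl_needs_arg_for` — "suggest a name only until the first non-flag word after the command" — can be sketched in portable sh (a hypothetical translation for illustration, not part of the generated completions file):

```shell
# Returns 0 if `cmd` appears in the token list with no positional arg after it;
# flag tokens (-x, --long) between them are ignored, mirroring the fish helper.
needs_arg_for() {
  cmd=$1; shift
  found=0
  for tok in "$@"; do
    if [ "$found" = 1 ]; then
      case $tok in
        -*) ;;            # flags don't count as the positional arg
        *) return 1 ;;    # arg already present
      esac
    fi
    [ "$tok" = "$cmd" ] && found=1
  done
  [ "$found" = 1 ] && return 0
  return 1
}

needs_arg_for logs mcpctl logs && echo "needs arg"        # prints: needs arg
needs_arg_for logs mcpctl logs grafana || echo "has arg"  # prints: has arg
```

This is what keeps name completion from firing a second time once `mcpctl logs grafana` already carries its argument.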
# Helper: check if attach-server/detach-server already has a server argument
function __mcpctl_needs_server_arg
@@ -199,150 +186,223 @@ function __mcpctl_needs_server_arg
end
end
if $found_cmd
return 0
end
return 1
end
# Helper: check if a specific parent-child subcommand pair is active
function __mcpctl_subcmd_active
set -l parent $argv[1]
set -l child $argv[2]
set -l tokens (commandline -opc)
set -l found_parent false
for tok in $tokens
if $found_parent
if test "$tok" = "$child"
return 0
end
if not string match -q -- '-*' $tok
return 1 # different subcommand
end
end
if test "$tok" = "$parent"
set found_parent true
end
end
return 1
end
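`__mcpctl_subcmd_active` gates option completions on a parent/child pair such as `config claude`. A portable-sh sketch of the same logic (assumed equivalent, for illustration only):

```shell
# Returns 0 only when `child` follows `parent` in the token list,
# tolerating flag tokens in between, mirroring the fish helper above.
subcmd_active() {
  parent=$1; child=$2; shift 2
  found_parent=0
  for tok in "$@"; do
    if [ "$found_parent" = 1 ]; then
      [ "$tok" = "$child" ] && return 0
      case $tok in
        -*) ;;              # skip flags between parent and child
        *) return 1 ;;      # a different subcommand ends the match
      esac
    fi
    [ "$tok" = "$parent" ] && found_parent=1
  done
  return 1
}

subcmd_active config claude mcpctl config claude && echo "active"   # prints: active
```

Note that, like the fish version, this treats a flag's value (e.g. the file after `-o`) as a different subcommand; it is a best-effort heuristic for completion context, not a full parser.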
# Top-level commands (without --project)
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a status -d 'Show mcpctl status and connectivity'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a login -d 'Authenticate with mcpd'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a logout -d 'Log out and remove stored credentials'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a config -d 'Manage mcpctl configuration'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a get -d 'List resources (servers, projects, instances, all)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a describe -d 'Show detailed information about a resource'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a delete -d 'Delete a resource (server, instance, secret, project, user, group, rbac)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a logs -d 'Get logs from an MCP server instance'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a create -d 'Create a resource (server, secret, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a edit -d 'Edit a resource in your default editor (server, project)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a apply -d 'Apply declarative configuration from a YAML or JSON file'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a patch -d 'Patch a resource field (e.g. mcpctl patch project myproj llmProvider=none)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a backup -d 'Backup mcpctl configuration to a JSON file'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a restore -d 'Restore mcpctl configuration from a backup file'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a approve -d 'Approve a pending prompt request (atomic: delete request, create prompt)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a console -d 'Interactive MCP console — see what an LLM sees when attached to a project'
# Project-scoped commands (with --project)
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a get -d 'List resources (servers, projects, instances, all)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a describe -d 'Show detailed information about a resource'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a delete -d 'Delete a resource (server, instance, secret, project, user, group, rbac)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a logs -d 'Get logs from an MCP server instance'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a create -d 'Create a resource (server, secret, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a edit -d 'Edit a resource in your default editor (server, project)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a attach-server -d 'Attach a server to a project (requires --project)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a detach-server -d 'Detach a server from a project (requires --project)'
# Resource types — only when resource type not yet selected
complete -c mcpctl -n "__fish_seen_subcommand_from get describe delete patch; and __mcpctl_needs_resource_type" -a "$resources" -d 'Resource type'
complete -c mcpctl -n "__fish_seen_subcommand_from edit; and __mcpctl_needs_resource_type" -a 'servers secrets projects groups rbac prompts promptrequests' -d 'Resource type'
complete -c mcpctl -n "__fish_seen_subcommand_from approve; and __mcpctl_needs_resource_type" -a 'promptrequest' -d 'Resource type'
# Resource names — after resource type is selected
complete -c mcpctl -n "__fish_seen_subcommand_from get describe delete edit patch approve; and not __mcpctl_needs_resource_type" -a '(__mcpctl_resource_names)' -d 'Resource name'
# config subcommands
set -l config_cmds view set path reset claude claude-generate setup impersonate
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a view -d 'Show current configuration'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a set -d 'Set a configuration value'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a path -d 'Show configuration file path'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a reset -d 'Reset configuration to defaults'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a claude -d 'Generate .mcp.json that connects a project via mcpctl mcp bridge'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a claude-generate -d ''
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a setup -d 'Interactive LLM provider setup wizard'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a impersonate -d 'Impersonate another user or return to original identity'
# config view options
complete -c mcpctl -n "__mcpctl_subcmd_active config view" -s o -l output -d 'output format (json, yaml)' -x
# config claude options
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -l project -d 'Project name' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -s o -l output -d 'Output file path' -x
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -l inspect -d 'Include mcpctl-inspect MCP server for traffic monitoring'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -l stdout -d 'Print to stdout instead of writing a file'
# config claude-generate options
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -l project -d 'Project name' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -s o -l output -d 'Output file path' -x
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -l inspect -d 'Include mcpctl-inspect MCP server for traffic monitoring'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -l stdout -d 'Print to stdout instead of writing a file'
# config impersonate options
complete -c mcpctl -n "__mcpctl_subcmd_active config impersonate" -l quit -d 'Stop impersonating and return to original identity'
# create subcommands
set -l create_cmds server secret project user group rbac prompt serverattachment promptrequest
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a server -d 'Create an MCP server definition'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a secret -d 'Create a secret'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a project -d 'Create a project'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a user -d 'Create a user'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a group -d 'Create a group'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a rbac -d 'Create an RBAC binding definition'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a prompt -d 'Create an approved prompt'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a serverattachment -d 'Attach a server to a project'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a promptrequest -d 'Create a prompt request (pending proposal that needs approval)'
# create server options
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -s d -l description -d 'Server description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l package-name -d 'NPM package name' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l docker-image -d 'Docker image' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l transport -d 'Transport type (STDIO, SSE, STREAMABLE_HTTP)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l repository-url -d 'Source repository URL' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l external-url -d 'External endpoint URL' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l command -d 'Command argument (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l container-port -d 'Container port number' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l replicas -d 'Number of replicas' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l env -d 'Env var: KEY=value (inline) or KEY=secretRef:SECRET:KEY (secret ref, repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l from-template -d 'Create from template (name or name:version)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l env-from-secret -d 'Map template env vars from a secret' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l force -d 'Update if already exists'
# create secret options
complete -c mcpctl -n "__mcpctl_subcmd_active create secret" -l data -d 'Secret data KEY=value (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secret" -l force -d 'Update if already exists'
# create project options
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -s d -l description -d 'Project description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l proxy-mode -d 'Proxy mode (direct, filtered)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l prompt -d 'Project-level prompt / instructions for the LLM' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l gated -d 'Enable gated sessions (default: true)'
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l no-gated -d 'Disable gated sessions'
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l server -d 'Server name (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l force -d 'Update if already exists'
# create user options
complete -c mcpctl -n "__mcpctl_subcmd_active create user" -l password -d 'User password' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create user" -l name -d 'User display name' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create user" -l force -d 'Update if already exists'
# create group options
complete -c mcpctl -n "__mcpctl_subcmd_active create group" -l description -d 'Group description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create group" -l member -d 'Member email (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create group" -l force -d 'Update if already exists'
# create rbac options
complete -c mcpctl -n "__mcpctl_subcmd_active create rbac" -l subject -d 'Subject as Kind:name (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create rbac" -l binding -d 'Role binding as role:resource (e.g. edit:servers, run:projects)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create rbac" -l operation -d 'Operation binding (e.g. logs, backup)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create rbac" -l force -d 'Update if already exists'
# create prompt options
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l project -d 'Project name to scope the prompt to' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l content -d 'Prompt content text' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l content-file -d 'Read prompt content from file' -rF
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l priority -d 'Priority 1-10 (default: 5, higher = more important)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l link -d 'Link to MCP resource (format: project/server:uri)' -x
# create serverattachment options
complete -c mcpctl -n "__mcpctl_subcmd_active create serverattachment" -l project -d 'Project name' -xa '(__mcpctl_project_names)'
# create promptrequest options
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -l project -d 'Project name to scope the prompt request to' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -l content -d 'Prompt content text' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -l content-file -d 'Read prompt content from file' -rF
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -l priority -d 'Priority 1-10 (default: 5, higher = more important)' -x
# status options
complete -c mcpctl -n "__fish_seen_subcommand_from status" -s o -l output -d 'output format (table, json, yaml)' -x
# login options
complete -c mcpctl -n "__fish_seen_subcommand_from login" -l mcpd-url -d 'mcpd URL to authenticate against' -x
# get options
complete -c mcpctl -n "__fish_seen_subcommand_from get" -s o -l output -d 'output format (table, json, yaml)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from get" -l project -d 'Filter by project' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__fish_seen_subcommand_from get" -s A -l all -d 'Show all (including project-scoped) resources'
# describe options
complete -c mcpctl -n "__fish_seen_subcommand_from describe" -s o -l output -d 'output format (detail, json, yaml)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from describe" -l show-values -d 'Show secret values (default: masked)'
# delete options
complete -c mcpctl -n "__fish_seen_subcommand_from delete" -l project -d 'Project name (for serverattachment)' -xa '(__mcpctl_project_names)'
# logs options
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -s t -l tail -d 'Number of lines to show' -x
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -s i -l instance -d 'Instance/replica index (0-based, for servers with multiple replicas)' -x
# apply options
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -s f -l file -d 'Path to config file (alternative to positional arg)' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -l dry-run -d 'Validate and show changes without applying'
# backup options
complete -c mcpctl -n "__fish_seen_subcommand_from backup" -s o -l output -d 'output file path' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from backup" -s p -l password -d 'encrypt sensitive values with password' -x
complete -c mcpctl -n "__fish_seen_subcommand_from backup" -s r -l resources -d 'resource types to backup (comma-separated: servers,profiles,projects)' -x
# restore options
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s i -l input -d 'backup file path' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s p -l password -d 'decryption password for encrypted backups' -x
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s c -l conflict -d 'conflict resolution: skip, overwrite, fail' -x
# console options
complete -c mcpctl -n "__fish_seen_subcommand_from console" -l inspect -d 'Passive traffic inspector — observe other clients\' MCP traffic'
complete -c mcpctl -n "__fish_seen_subcommand_from console" -l stdin-mcp -d 'Run inspector as MCP server over stdin/stdout (for Claude)'
# logs: takes a server/instance name
complete -c mcpctl -n "__fish_seen_subcommand_from logs; and __mcpctl_needs_arg_for logs" -a '(__mcpctl_instance_names)' -d 'Server name'
# console: takes a project name
complete -c mcpctl -n "__fish_seen_subcommand_from console; and __mcpctl_needs_arg_for console" -a '(__mcpctl_project_names)' -d 'Project name'
# attach-server: show servers NOT in the project (only if no server arg yet)
complete -c mcpctl -n "__fish_seen_subcommand_from attach-server; and __mcpctl_needs_server_arg" -a '(__mcpctl_available_servers)' -d 'Server'
# detach-server: show servers IN the project (only if no server arg yet)
complete -c mcpctl -n "__fish_seen_subcommand_from detach-server; and __mcpctl_needs_server_arg" -a '(__mcpctl_project_servers)' -d 'Server'
# apply: allow file completions for positional argument
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -F
# help completions

View File

@@ -0,0 +1,20 @@
# Docker image for MrMartiniMo/docmost-mcp (TypeScript STDIO MCP server)
# Not published to npm, so we clone + build from source.
# Includes patches for list_pages pagination and search response handling.
FROM node:20-slim
WORKDIR /mcp
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
RUN git clone --depth 1 https://github.com/MrMartiniMo/docmost-mcp.git . \
&& npm install \
&& rm -rf .git
# Apply our fixes before building
COPY deploy/docmost-mcp-fixes.patch /tmp/fixes.patch
RUN git init && git add -A && git apply /tmp/fixes.patch && rm -rf .git /tmp/fixes.patch
RUN npm run build
ENTRYPOINT ["node", "build/index.js"]

View File

@@ -0,0 +1,106 @@
diff --git a/src/index.ts b/src/index.ts
index 83c251d..852ee0e 100644
--- a/src/index.ts
+++ b/src/index.ts
@@ -1,4 +1,4 @@
-import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
+import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import FormData from "form-data";
import axios, { AxiosInstance } from "axios";
@@ -130,10 +130,18 @@ class DocmostClient {
return groups.map((group) => filterGroup(group));
}
- async listPages(spaceId?: string) {
- const payload = spaceId ? { spaceId } : {};
- const pages = await this.paginateAll("/pages/recent", payload);
- return pages.map((page) => filterPage(page));
+ async listPages(spaceId?: string, page: number = 1, limit: number = 50) {
+ await this.ensureAuthenticated();
+ const clampedLimit = Math.max(1, Math.min(100, limit));
+ const payload: Record<string, any> = { page, limit: clampedLimit };
+ if (spaceId) payload.spaceId = spaceId;
+ const response = await this.client.post("/pages/recent", payload);
+ const data = response.data;
+ const items = data.data?.items || data.items || [];
+ return {
+ pages: items.map((p: any) => filterPage(p)),
+ meta: data.data?.meta || data.meta || {},
+ };
}
async listSidebarPages(spaceId: string, pageId: string) {
@@ -283,8 +291,9 @@ class DocmostClient {
spaceId,
});
- // Filter search results (data is directly an array)
- const items = response.data?.data || [];
+ // Handle both array and {items: [...]} response formats
+ const rawData = response.data?.data;
+ const items = Array.isArray(rawData) ? rawData : (rawData?.items || []);
const filteredItems = items.map((item: any) => filterSearchResult(item));
return {
@@ -384,13 +393,15 @@ server.registerTool(
server.registerTool(
"list_pages",
{
- description: "List pages in a space ordered by updatedAt (descending).",
+ description: "List pages in a space ordered by updatedAt (descending). Returns one page of results.",
inputSchema: {
spaceId: z.string().optional(),
+ page: z.number().optional().describe("Page number (default: 1)"),
+ limit: z.number().optional().describe("Items per page, 1-100 (default: 50)"),
},
},
- async ({ spaceId }) => {
- const result = await docmostClient.listPages(spaceId);
+ async ({ spaceId, page, limit }) => {
+ const result = await docmostClient.listPages(spaceId, page, limit);
return jsonContent(result);
},
);
@@ -544,6 +555,41 @@ server.registerTool(
},
);
+// Resource template: docmost://pages/{pageId}
+// Allows MCP clients to read page content as resources
+server.resource(
+ "page",
+ new ResourceTemplate("docmost://pages/{pageId}", {
+ list: async () => {
+ // List recent pages as browsable resources
+ try {
+ const result = await docmostClient.listPages(undefined, 1, 100);
+ return result.pages.map((page: any) => ({
+ uri: `docmost://pages/${page.id}`,
+ name: page.title || page.id,
+ mimeType: "text/markdown",
+ }));
+ } catch {
+ return [];
+ }
+ },
+ }),
+ { description: "A Docmost wiki page", mimeType: "text/markdown" },
+ async (uri: URL, variables: Record<string, string | string[]>) => {
+ const pageId = Array.isArray(variables.pageId) ? variables.pageId[0]! : variables.pageId!;
+ const page = await docmostClient.getPage(pageId);
+ return {
+ contents: [
+ {
+ uri: uri.href,
+ text: page.data.content || `# ${page.data.title || "Untitled"}\n\n(No content)`,
+ mimeType: "text/markdown",
+ },
+ ],
+ };
+ },
+);
+
async function run() {
const transport = new StdioServerTransport();
await server.connect(transport);

232
docs/gate-design-lessons.md Normal file
View File

@@ -0,0 +1,232 @@
# Gated MCP Sessions: What Claude Recognizes (and What It Doesn't)
Lessons learned from building and testing mcpctl's gated session system with Claude Code (Opus 4.6, v2.1.59). These patterns apply to any MCP proxy that needs to control tool access through a gate step.
## The Problem
When Claude connects to an MCP server, it receives an `initialize` response with `instructions`, then calls `tools/list` to see available tools. In a gated session, we want Claude to call `begin_session` before accessing real tools. This is surprisingly hard to get right because Claude has strong default behaviors that fight against the gate pattern.
---
## What Works
### 1. One gate tool, zero ambiguity
When `tools/list` returns exactly ONE tool (`begin_session`), Claude recognizes it must call that tool first. Having multiple tools available in the gated state confuses Claude — it may try to call a "real" tool and skip the gate entirely.
**Working pattern:**
```json
{
"tools": [{
"name": "begin_session",
"description": "Start your session by providing keywords...",
"inputSchema": { ... }
}]
}
```
### 2. "Check its input schema" instead of naming parameters
Claude reads the tool's `inputSchema` to understand what arguments are needed. When the instructions **name a specific parameter** that doesn't exist in the schema, Claude gets confused and may not call the tool at all.
**FAILED — named wrong parameter:**
> "Call begin_session with a description of the user's task"
This failed because the noLLM mode tool has `tags`, not `description`. Claude saw the mismatch between instructions and schema, got confused, and went exploring the filesystem instead.
**WORKS — schema-agnostic:**
> "Call begin_session immediately using the arguments it requires (check its input schema). If it accepts a description, briefly describe the user's task. If it accepts tags, provide 3-7 keywords relevant to the user's request."
This works for both LLM mode (`description` param) and noLLM mode (`tags` param) because Claude reads the actual schema.
### 3. Instructions must say "immediately" and "required"
Without urgency words, Claude may acknowledge the gate exists but decide to "explore first" before calling it. Two critical phrases:
- **"immediately"** — prevents Claude from doing reconnaissance first
- **"required before using other tools"** — makes it clear this isn't optional
**Working instruction block:**
```
This project uses a gated session. Before you can access tools, you must start a session by calling begin_session.
Call begin_session immediately using the arguments it requires (check its input schema).
```
### 4. Show available tools as a preview (names only)
Listing tool names in the initialize instructions (without making them callable) helps Claude understand what's available and craft better `begin_session` keywords. Claude uses this list to generate relevant tags.
**Working pattern:**
```
Available MCP server tools (accessible after begin_session):
my-node-red/get_flows
my-node-red/create_flow
my-home-assistant/ha_get_entity
...
```
Claude then produces tags like `["node-red", "flows", "automation"]` — directly informed by the tool names it saw.
### 5. Show prompt index with priorities
When the instructions list available prompts with priorities, Claude uses them to choose relevant `begin_session` keywords:
```
Available project prompts:
- pnpm (priority 5)
- stack (priority 5)
Choose your begin_session keywords based on which of these prompts seem relevant to your task.
```
### 6. `tools/list_changed` notification after ungating
After `begin_session` succeeds, the server must send a `notifications/tools/list_changed` notification. Claude then re-fetches `tools/list` and sees all 108+ tools. Without this notification, Claude continues thinking only `begin_session` is available.
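The notification carries no params; as a rough sketch (the `send` callback here is a hypothetical stand-in for however the proxy writes JSON-RPC messages, not an SDK API):

```typescript
// Emit the spec-defined method name so the client re-fetches tools/list.
// `send` is a placeholder for the server's transport write function.
function notifyToolsListChanged(send: (msg: object) => void): void {
  send({ jsonrpc: "2.0", method: "notifications/tools/list_changed" });
}
```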
### 7. The intercept fallback (auto-ungate on real tool call)
If Claude somehow bypasses the gate and calls a real tool directly, the server auto-ungates the session, extracts keywords from the tool call, matches relevant prompts, and prepends the context as a preamble to the tool result. This is a safety net, not the primary path.
---
## What Fails
### 1. Referencing parameters that don't exist in the schema
If instructions say "call begin_session with a description" but the schema only has `tags`, Claude recognizes the inconsistency and may refuse to call the tool entirely. It falls back to filesystem exploration or asks the user for help.
**Root cause:** Claude cross-references instruction text against tool schemas. Mismatches create distrust.
### 2. Complex conditional instructions
Don't write instructions like:
> "If the project is gated, check for begin_session. If begin_session accepts tags, provide tags. Otherwise if it accepts description, provide a description. But first check if..."
Claude handles simple, direct instructions better than decision trees. One clear path: "Call begin_session immediately, check its input schema for what arguments it needs."
### 3. Having read_prompts available in gated state
In early iterations, both `begin_session` and `read_prompts` were available in the gated state. Claude sometimes called `read_prompts` instead of `begin_session`, or tried to use `read_prompts` to understand the environment before beginning the session. This delayed or skipped the gate.
**Fix:** Only `begin_session` is available when gated. `read_prompts` appears after ungating.
### 4. Putting gate instructions only in the tool description
The tool description alone is not enough. Claude reads `instructions` from the initialize response first and forms its plan there. If the initialize instructions don't mention the gate, Claude may ignore the tool description and try to find other ways to accomplish the task.
**Both are needed:**
- Initialize `instructions` field: explains the gate and what to do
- Tool `description` field: reinforces the purpose of begin_session
### 5. Long instructions that bury the call-to-action
If the initialize instructions contain 200 lines of context before mentioning "call begin_session", Claude may not reach that instruction. The gate call-to-action must be in the **first few lines** of the instructions.
### 6. Expecting Claude to remember instructions across reconnects
Each new session starts fresh. Claude doesn't carry over knowledge from previous sessions. The gate instructions must be self-contained in every initialize response.
---
## Prompt Scoring: Ensuring Prompts Reach Claude
### The byte budget problem
When `begin_session` returns matched prompts, there's a byte budget (default 8KB) to prevent token overflow. Prompts are included in score order until the budget is full. Prompts that don't fit get listed as index-only (name + summary).
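The packing loop can be sketched as follows (names are hypothetical; assumes the prompts arrive already sorted by score, highest first):

```typescript
interface ScoredPrompt { name: string; content: string }

// Include full prompt bodies in score order until the byte budget is
// spent; everything that doesn't fit is demoted to index-only.
function packPrompts(
  sorted: ScoredPrompt[],
  budgetBytes = 8 * 1024,
): { full: string[]; indexOnly: string[] } {
  const full: string[] = [];
  const indexOnly: string[] = [];
  let used = 0;
  for (const p of sorted) {
    const size = new TextEncoder().encode(p.content).length;
    if (used + size <= budgetBytes) {
      full.push(p.name);
      used += size;
    } else {
      indexOnly.push(p.name);
    }
  }
  return { full, indexOnly };
}
```

Note that a later, smaller prompt can still fit after a larger one was skipped; the budget is a running total, not a hard cutoff point.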
### Scoring formula: `priority + (matchCount * priority)`
- **Priority alone is the baseline** — every prompt gets at least its priority score
- **Tag matches multiply the priority** — relevant prompts score much higher
- **Priority 10 = Infinity** — system prompts always included regardless of budget
**Failed formula:** `matchCount * priority`
This meant prompts with zero tag matches scored 0 and were never included, even if they were high-priority global prompts (like "stack" with priority 5). A priority-5 prompt with no tag matches should still compete for inclusion.
**Working formula:** `priority + (matchCount * priority)`
A priority-5 prompt with 0 matches scores 5 (baseline). With 2 matches it scores 15. This ensures global prompts are included when budget allows.
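The working formula as a minimal sketch (type names are hypothetical; priority 10 maps to Infinity as described above, so system prompts always survive the byte budget):

```typescript
interface PromptMeta { priority: number; tags: string[] }

// score = priority + (matchCount * priority); priority 10 => Infinity
function scorePrompt(p: PromptMeta, sessionTags: string[]): number {
  if (p.priority >= 10) return Infinity;
  const matchCount = p.tags.filter((t) => sessionTags.includes(t)).length;
  return p.priority + matchCount * p.priority;
}
```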
### Response truncation safety cap
All responses are capped at 24,000 characters. Larger responses get truncated with a message to use `read_prompts` for the full content. This prevents a single massive prompt from consuming Claude's entire context window.
---
## The Complete Flow (What Actually Happens)
```
Client mcplocal upstream servers
│ │ │
│── initialize ───────────>│ │
│<── instructions + caps ──│ (instructions contain │
│ │ gate-instructions, │
│ │ tool list preview, │
│ │ prompt index) │
│── tools/list ──────────>│ │
│<── [begin_session] ─────│ (ONLY begin_session) │
│ │ │
│── prompts/list ────────>│ │
│<── [] ──────────────────│ (empty - gated) │
│ │ │
│── resources/list ──────>│ │
│<── [prompt resources] ──│ (prompts visible as │
│ │ resources always) │
│ │ │
│ Claude reads instructions, sees begin_session is the │
│ only tool, calls it with relevant tags/description │
│ │ │
│── tools/call ──────────>│ │
│ begin_session │── match prompts ────────────>│
│ {tags:[...]} │<── prompt content ──────────│
│ │ │
│<── matched prompts ─────│ (full content of matched │
│ + tool list │ prompts, tool names, │
│ + encouragement │ encouragement to use │
│ │ read_prompts later) │
│ │ │
│<── notification ────────│ tools/list_changed │
│ │ │
│── tools/list ──────────>│ │
│<── [108 tools] ─────────│ (ALL tools now visible) │
│ │ │
│ Claude proceeds with the user's original request │
│ using the full tool set │
```
---
## Testing Gate Behavior
The MCP Inspector (`mcpctl console --inspect`) is essential for debugging gate issues. It shows the exact sequence of requests/responses between Claude and mcplocal, including:
- What Claude sees in the initialize response
- Whether Claude calls `begin_session` or tries to bypass it
- What tags/description Claude provides
- What prompts are matched and returned
- Whether `tools/list_changed` notification fires
- The full tool list after ungating
Run it alongside Claude Code to see exactly what happens:
```bash
# Terminal 1: Inspector
mcpctl console --inspect
# Terminal 2: Claude Code connected to the project
claude
```
---
## Checklist for New Gate Configurations
- [ ] Initialize instructions mention gate in first 3 lines
- [ ] Instructions say "immediately" and "required"
- [ ] Instructions say "check its input schema" (not "pass description/tags")
- [ ] Only `begin_session` in tools/list when gated
- [ ] Tool names listed in instructions as preview
- [ ] Prompt index shown with priorities
- [ ] `tools/list_changed` notification sent after ungate
- [ ] Response size under 24K characters
- [ ] Prompt scoring uses baseline priority (not just match count)
- [ ] Test with Inspector to verify the full flow

View File

@@ -20,9 +20,13 @@ servers:
           name: ha-secrets
           key: token
-profiles:
-  - name: production
-    server: ha-mcp
-    envOverrides:
-      HOMEASSISTANT_URL: "https://your-ha-instance.example.com"
-      HOMEASSISTANT_TOKEN: "REDACTED-TOKEN"
+secrets:
+  - name: ha-secrets
+    data:
+      token: "your-home-assistant-long-lived-access-token"
+projects:
+  - name: smart-home
+    description: "Home automation project"
+    servers:
+      - ha-mcp
57
i.sh
View File

@@ -1,57 +0,0 @@
#!/bin/bash
# 1. Install & Set Fish
sudo dnf install -y fish byobu curl wl-clipboard
chsh -s /usr/bin/fish
# 2. SILENCE THE PROMPTS (The "Wtf" Fix)
mkdir -p ~/.byobu
byobu-ctrl-a emacs
# 3. Configure Byobu Core (Clean Paths)
byobu-enable
mkdir -p ~/.byobu/bin
# We REMOVED the -S flag to stop those random files appearing in your folders
echo "set -g default-shell /usr/bin/fish" > ~/.byobu/.tmux.conf
echo "set -g default-command /usr/bin/fish" >> ~/.byobu/.tmux.conf
echo "set -g mouse off" >> ~/.byobu/.tmux.conf
echo "set -s set-clipboard on" >> ~/.byobu/.tmux.conf
# 4. Create the Smart Mouse Indicator
cat <<EOF > ~/.byobu/bin/custom
#!/bin/bash
if tmux show-options -g mouse | grep -q "on"; then
echo "#[fg=green]MOUSE: ON (Nav)#[default]"
else
echo "#[fg=red]Alt+F12 (Copy Mode)#[default]"
fi
EOF
chmod +x ~/.byobu/bin/custom
# 5. Setup Status Bar
echo 'tmux_left="session"' > ~/.byobu/status
echo 'tmux_right="custom cpu_temp load_average"' >> ~/.byobu/status
# 6. Atuin Global History
if ! command -v atuin &> /dev/null; then
curl --proto '=https' --tlsv1.2 -sSf https://setup.atuin.sh | sh
fi
# 7. Final Fish Config (The Clean Sticky Logic)
mkdir -p ~/.config/fish
cat <<EOF > ~/.config/fish/config.fish
# Atuin Setup
source ~/.atuin/bin/env.fish
atuin init fish | source
# Start a UNIQUE session per window without cluttering project folders
if status is-interactive
and not set -q BYOBU_RUN_DIR
# We use a human-readable name: FolderName-Time
set SESSION_NAME (basename (pwd))-(date +%H%M)
exec byobu new-session -A -s "\$SESSION_NAME"
end
EOF
# Kill any existing server to wipe the old "socket" logic
byobu kill-server 2>/dev/null
echo "Done! No more random files in your project folders."

View File

@@ -1,6 +1,6 @@
 name: mcpctl
 arch: amd64
-version: 0.1.0
+version: 0.0.1
 release: "1"
 maintainer: michal
 description: kubectl-like CLI for managing MCP servers

View File

@@ -1,6 +1,6 @@
 {
   "name": "mcpctl",
-  "version": "0.1.0",
+  "version": "0.0.1",
   "private": true,
   "description": "kubectl-like CLI for managing MCP servers",
   "type": "module",
@@ -16,6 +16,8 @@
     "db:up": "docker compose -f deploy/docker-compose.yml up -d",
     "db:down": "docker compose -f deploy/docker-compose.yml down",
     "typecheck": "tsc --build",
+    "completions:generate": "tsx scripts/generate-completions.ts --write",
+    "completions:check": "tsx scripts/generate-completions.ts --check",
     "rpm:build": "bash scripts/build-rpm.sh",
     "rpm:publish": "bash scripts/publish-rpm.sh",
     "release": "bash scripts/release.sh",
View File

@@ -0,0 +1,32 @@
#!/bin/bash
# Build docmost-mcp Docker image and push to Gitea container registry
set -e
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
cd "$PROJECT_ROOT"
# Load .env for GITEA_TOKEN
if [ -f .env ]; then
set -a; source .env; set +a
fi
# Push directly to internal address (external proxy has body size limit)
REGISTRY="10.0.0.194:3012"
IMAGE="docmost-mcp"
TAG="${1:-latest}"
echo "==> Building docmost-mcp image..."
podman build -t "$IMAGE:$TAG" -f deploy/Dockerfile.docmost-mcp .
echo "==> Tagging as $REGISTRY/michal/$IMAGE:$TAG..."
podman tag "$IMAGE:$TAG" "$REGISTRY/michal/$IMAGE:$TAG"
echo "==> Logging in to $REGISTRY..."
podman login --tls-verify=false -u michal -p "$GITEA_TOKEN" "$REGISTRY"
echo "==> Pushing to $REGISTRY/michal/$IMAGE:$TAG..."
podman push --tls-verify=false "$REGISTRY/michal/$IMAGE:$TAG"
echo "==> Done!"
echo " Image: $REGISTRY/michal/$IMAGE:$TAG"

View File

@@ -16,6 +16,9 @@ export PATH="$HOME/.npm-global/bin:$HOME/.bun/bin:$HOME/.local/bin:$PATH"
 echo "==> Building TypeScript..."
 pnpm build
+echo "==> Generating shell completions..."
+pnpm completions:generate
 echo "==> Bundling standalone binaries..."
 mkdir -p dist
 rm -f dist/mcpctl dist/mcpctl-local dist/mcpctl-*.rpm

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mcpctl/cli",
-  "version": "0.1.0",
+  "version": "0.0.1",
   "private": true,
   "type": "module",
   "bin": {

View File

@@ -106,12 +106,19 @@ const RbacBindingSpecSchema = z.object({
 const PromptSpecSchema = z.object({
   name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/),
-  content: z.string().min(1).max(50000),
+  content: z.string().min(1).max(50000).optional(),
   projectId: z.string().optional(),
+  project: z.string().optional(),
   priority: z.number().int().min(1).max(10).optional(),
+  link: z.string().optional(),
   linkTarget: z.string().optional(),
 });
 
+const ServerAttachmentSpecSchema = z.object({
+  server: z.string().min(1),
+  project: z.string().min(1),
+});
+
 const ProjectSpecSchema = z.object({
   name: z.string().min(1),
   description: z.string().default(''),
@@ -130,6 +137,7 @@ const ApplyConfigSchema = z.object({
   groups: z.array(GroupSpecSchema).default([]),
   projects: z.array(ProjectSpecSchema).default([]),
   templates: z.array(TemplateSpecSchema).default([]),
+  serverattachments: z.array(ServerAttachmentSpecSchema).default([]),
   rbacBindings: z.array(RbacBindingSpecSchema).default([]),
   rbac: z.array(RbacBindingSpecSchema).default([]),
   prompts: z.array(PromptSpecSchema).default([]),
@@ -169,6 +177,7 @@ export function createApplyCommand(deps: ApplyCommandDeps): Command {
   if (config.groups.length > 0) log(`  ${config.groups.length} group(s)`);
   if (config.projects.length > 0) log(`  ${config.projects.length} project(s)`);
   if (config.templates.length > 0) log(`  ${config.templates.length} template(s)`);
+  if (config.serverattachments.length > 0) log(`  ${config.serverattachments.length} serverattachment(s)`);
   if (config.rbacBindings.length > 0) log(`  ${config.rbacBindings.length} rbacBinding(s)`);
   if (config.prompts.length > 0) log(`  ${config.prompts.length} prompt(s)`);
   return;
@@ -194,14 +203,62 @@ function readStdin(): string {
   return Buffer.concat(chunks).toString('utf-8');
 }
/** Map singular kind → plural resource key used by ApplyConfigSchema */
const KIND_TO_RESOURCE: Record<string, string> = {
server: 'servers',
project: 'projects',
secret: 'secrets',
template: 'templates',
user: 'users',
group: 'groups',
rbac: 'rbac',
prompt: 'prompts',
promptrequest: 'promptrequests',
serverattachment: 'serverattachments',
};
/**
* Convert multi-doc format (array of {kind, ...} items) into the grouped
* format that ApplyConfigSchema expects.
*/
function multiDocToGrouped(docs: Array<Record<string, unknown>>): Record<string, unknown[]> {
const grouped: Record<string, unknown[]> = {};
for (const doc of docs) {
const kind = doc.kind as string;
const resource = KIND_TO_RESOURCE[kind] ?? kind;
const { kind: _k, ...rest } = doc;
if (!grouped[resource]) grouped[resource] = [];
grouped[resource].push(rest);
}
return grouped;
}
 function loadConfigFile(path: string): ApplyConfig {
   const raw = path === '-' ? readStdin() : readFileSync(path, 'utf-8');
   let parsed: unknown;
-  if (path === '-' ? raw.trimStart().startsWith('{') : path.endsWith('.json')) {
+  const isJson = path === '-' ? raw.trimStart().startsWith('{') || raw.trimStart().startsWith('[') : path.endsWith('.json');
+  if (isJson) {
     parsed = JSON.parse(raw);
   } else {
-    parsed = yaml.load(raw);
+    // Try multi-document YAML first
+    const docs: unknown[] = [];
+    yaml.loadAll(raw, (doc) => docs.push(doc));
+    const allDocs = docs.flatMap((d) => Array.isArray(d) ? d : [d]) as Array<Record<string, unknown>>;
+    if (allDocs.length > 0 && allDocs[0] != null && 'kind' in allDocs[0]) {
+      // Multi-doc or single doc with kind field
+      parsed = multiDocToGrouped(allDocs);
+    } else {
+      parsed = docs[0]; // Fall back to single-doc grouped format
+    }
+  }
+
+  // JSON: handle array of {kind, ...} docs
+  if (Array.isArray(parsed)) {
+    const arr = parsed as Array<Record<string, unknown>>;
+    if (arr.length > 0 && arr[0] != null && 'kind' in arr[0]) {
+      parsed = multiDocToGrouped(arr);
+    }
   }
 
   return ApplyConfigSchema.parse(parsed);
@@ -210,15 +267,59 @@ function loadConfigFile(path: string): ApplyConfig {
 async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args: unknown[]) => void): Promise<void> {
   // Apply order: secrets, servers, users, groups, projects, templates, rbacBindings
// Cache for name→record lookups to avoid repeated API calls (rate limit protection)
const nameCache = new Map<string, Map<string, { id: string; [key: string]: unknown }>>();
async function cachedFindByName(resource: string, name: string): Promise<{ id: string; [key: string]: unknown } | null> {
if (!nameCache.has(resource)) {
try {
const items = await client.get<Array<{ id: string; name: string }>>(`/api/v1/${resource}`);
const map = new Map<string, { id: string; [key: string]: unknown }>();
for (const item of items) {
if (item.name) map.set(item.name, item);
}
nameCache.set(resource, map);
} catch {
nameCache.set(resource, new Map());
}
}
return nameCache.get(resource)!.get(name) ?? null;
}
/** Invalidate a resource cache after a create/update so subsequent lookups see it */
function invalidateCache(resource: string): void {
nameCache.delete(resource);
}
/** Retry a function on 429 rate-limit errors with exponential backoff */
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
for (let attempt = 0; ; attempt++) {
try {
return await fn();
} catch (err) {
const msg = err instanceof Error ? err.message : String(err);
if (attempt < maxRetries && msg.includes('429')) {
const delay = 2000 * Math.pow(2, attempt); // 2s, 4s, 8s, 16s, 32s
process.stderr.write(`\r\x1b[33mRate limited, retrying in ${delay / 1000}s...\x1b[0m`);
await new Promise((r) => setTimeout(r, delay));
process.stderr.write('\r\x1b[K'); // clear the line
continue;
}
throw err;
}
}
}
   // Apply secrets
   for (const secret of config.secrets) {
     try {
-      const existing = await findByName(client, 'secrets', secret.name);
+      const existing = await cachedFindByName('secrets', secret.name);
       if (existing) {
-        await client.put(`/api/v1/secrets/${(existing as { id: string }).id}`, { data: secret.data });
+        await withRetry(() => client.put(`/api/v1/secrets/${existing.id}`, { data: secret.data }));
         log(`Updated secret: ${secret.name}`);
       } else {
-        await client.post('/api/v1/secrets', secret);
+        await withRetry(() => client.post('/api/v1/secrets', secret));
+        invalidateCache('secrets');
         log(`Created secret: ${secret.name}`);
       }
     } catch (err) {
@@ -229,12 +330,13 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
   // Apply servers
   for (const server of config.servers) {
     try {
-      const existing = await findByName(client, 'servers', server.name);
+      const existing = await cachedFindByName('servers', server.name);
       if (existing) {
-        await client.put(`/api/v1/servers/${(existing as { id: string }).id}`, server);
+        await withRetry(() => client.put(`/api/v1/servers/${existing.id}`, server));
         log(`Updated server: ${server.name}`);
       } else {
-        await client.post('/api/v1/servers', server);
+        await withRetry(() => client.post('/api/v1/servers', server));
+        invalidateCache('servers');
         log(`Created server: ${server.name}`);
       }
     } catch (err) {
// Apply users (matched by email) // Apply users (matched by email)
for (const user of config.users) { for (const user of config.users) {
try { try {
// Users use email, not name — use uncached findByField
const existing = await findByField(client, 'users', 'email', user.email); const existing = await findByField(client, 'users', 'email', user.email);
if (existing) { if (existing) {
await client.put(`/api/v1/users/${(existing as { id: string }).id}`, user); await withRetry(() => client.put(`/api/v1/users/${(existing as { id: string }).id}`, user));
log(`Updated user: ${user.email}`); log(`Updated user: ${user.email}`);
} else { } else {
await client.post('/api/v1/users', user); await withRetry(() => client.post('/api/v1/users', user));
log(`Created user: ${user.email}`); log(`Created user: ${user.email}`);
} }
} catch (err) { } catch (err) {
@@ -261,12 +364,13 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
   // Apply groups
   for (const group of config.groups) {
     try {
-      const existing = await findByName(client, 'groups', group.name);
+      const existing = await cachedFindByName('groups', group.name);
       if (existing) {
-        await client.put(`/api/v1/groups/${(existing as { id: string }).id}`, group);
+        await withRetry(() => client.put(`/api/v1/groups/${existing.id}`, group));
         log(`Updated group: ${group.name}`);
       } else {
-        await client.post('/api/v1/groups', group);
+        await withRetry(() => client.post('/api/v1/groups', group));
+        invalidateCache('groups');
         log(`Created group: ${group.name}`);
       }
     } catch (err) {
@@ -277,12 +381,13 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
   // Apply projects (send full spec including servers)
   for (const project of config.projects) {
     try {
-      const existing = await findByName(client, 'projects', project.name);
+      const existing = await cachedFindByName('projects', project.name);
       if (existing) {
-        await client.put(`/api/v1/projects/${(existing as { id: string }).id}`, project);
+        await withRetry(() => client.put(`/api/v1/projects/${existing.id}`, project));
         log(`Updated project: ${project.name}`);
       } else {
-        await client.post('/api/v1/projects', project);
+        await withRetry(() => client.post('/api/v1/projects', project));
+        invalidateCache('projects');
         log(`Created project: ${project.name}`);
       }
     } catch (err) {
@@ -293,12 +398,13 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
   // Apply templates
   for (const template of config.templates) {
     try {
-      const existing = await findByName(client, 'templates', template.name);
+      const existing = await cachedFindByName('templates', template.name);
       if (existing) {
-        await client.put(`/api/v1/templates/${(existing as { id: string }).id}`, template);
+        await withRetry(() => client.put(`/api/v1/templates/${existing.id}`, template));
         log(`Updated template: ${template.name}`);
       } else {
-        await client.post('/api/v1/templates', template);
+        await withRetry(() => client.post('/api/v1/templates', template));
+        invalidateCache('templates');
         log(`Created template: ${template.name}`);
       }
     } catch (err) {
@@ -306,15 +412,37 @@
     }
   }
} }
// Apply server attachments (after projects and servers exist)
for (const sa of config.serverattachments) {
try {
const project = await cachedFindByName('projects', sa.project);
if (!project) {
log(`Error applying serverattachment: project '${sa.project}' not found`);
continue;
}
await withRetry(() => client.post(`/api/v1/projects/${project.id}/servers`, { server: sa.server }));
log(`Attached server '${sa.server}' to project '${sa.project}'`);
} catch (err) {
const msg = err instanceof Error ? err.message : String(err);
// Treat "already attached" conflicts as success (logged, not fatal)
if (msg.includes('409') || msg.includes('already')) {
log(`Server '${sa.server}' already attached to project '${sa.project}'`);
} else {
log(`Error applying serverattachment '${sa.project}/${sa.server}': ${msg}`);
}
}
}
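`withRetry` is used throughout `applyConfig` but its definition sits outside these hunks. A minimal sketch of what such a helper could look like, assuming it retries rate-limit (HTTP 429) failures with exponential backoff; the attempt count and base delay here are illustrative, not taken from the source:

```typescript
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseMs = 500,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const msg = err instanceof Error ? err.message : String(err);
      // Only rate-limit style failures are worth retrying; rethrow the rest.
      if (!msg.includes('429')) throw err;
      // Exponential backoff: baseMs, 2*baseMs, 4*baseMs, ...
      await new Promise((r) => setTimeout(r, baseMs * 2 ** attempt));
    }
  }
  throw lastErr;
}
```

Each `client.put`/`client.post` call above is wrapped in a closure, so a retry re-issues the full request rather than replaying a stale promise.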
// Apply RBAC bindings // Apply RBAC bindings
for (const rbacBinding of config.rbacBindings) { for (const rbacBinding of config.rbacBindings) {
try { try {
const existing = await findByName(client, 'rbac', rbacBinding.name); const existing = await cachedFindByName('rbac', rbacBinding.name);
if (existing) { if (existing) {
await client.put(`/api/v1/rbac/${(existing as { id: string }).id}`, rbacBinding); await withRetry(() => client.put(`/api/v1/rbac/${existing.id}`, rbacBinding));
log(`Updated rbacBinding: ${rbacBinding.name}`); log(`Updated rbacBinding: ${rbacBinding.name}`);
} else { } else {
await client.post('/api/v1/rbac', rbacBinding); await withRetry(() => client.post('/api/v1/rbac', rbacBinding));
invalidateCache('rbac');
log(`Created rbacBinding: ${rbacBinding.name}`); log(`Created rbacBinding: ${rbacBinding.name}`);
} }
} catch (err) { } catch (err) {
@@ -322,17 +450,77 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
} }
} }
// Apply prompts // Apply prompts — project-scoped: prompts with the same name in different projects are distinct resources.
// Cache project-scoped prompt lookups separately from global cache.
const promptProjectIds = new Map<string, string>();
const projectPromptCache = new Map<string, Map<string, { id: string; [key: string]: unknown }>>();
async function findPromptInProject(name: string, projectId: string | undefined): Promise<{ id: string; [key: string]: unknown } | null> {
// Global prompts (no project) — use standard cache
if (!projectId) {
return cachedFindByName('prompts', name);
}
// Project-scoped: query prompts filtered by projectId
if (!projectPromptCache.has(projectId)) {
try {
const items = await client.get<Array<{ id: string; name: string; projectId?: string }>>(`/api/v1/prompts?projectId=${projectId}`);
const map = new Map<string, { id: string; [key: string]: unknown }>();
for (const item of items) {
if (item.name) map.set(item.name, item);
}
projectPromptCache.set(projectId, map);
} catch {
projectPromptCache.set(projectId, new Map());
}
}
return projectPromptCache.get(projectId)!.get(name) ?? null;
}
for (const prompt of config.prompts) { for (const prompt of config.prompts) {
try { try {
const existing = await findByName(client, 'prompts', prompt.name); // Resolve project name → projectId if needed
let projectId = prompt.projectId;
if (!projectId && prompt.project) {
if (promptProjectIds.has(prompt.project)) {
projectId = promptProjectIds.get(prompt.project)!;
} else {
const proj = await cachedFindByName('projects', prompt.project);
if (!proj) {
log(`Error applying prompt '${prompt.name}': project '${prompt.project}' not found`);
continue;
}
projectId = proj.id;
promptProjectIds.set(prompt.project, projectId);
}
}
// Normalize: accept both `link` and `linkTarget`, prefer `link`
const linkTarget = prompt.link ?? prompt.linkTarget;
// Linked prompts use placeholder content if none provided
const content = prompt.content ?? (linkTarget ? `Linked prompt — content fetched from ${linkTarget}` : '');
if (!content) {
log(`Error applying prompt '${prompt.name}': content is required (or provide link)`);
continue;
}
// Build API body (strip the `project` name field, use projectId)
const body: Record<string, unknown> = { name: prompt.name, content };
if (projectId) body.projectId = projectId;
if (prompt.priority !== undefined) body.priority = prompt.priority;
if (linkTarget) body.linkTarget = linkTarget;
const existing = await findPromptInProject(prompt.name, projectId);
if (existing) { if (existing) {
const updateData: Record<string, unknown> = { content: prompt.content }; const updateData: Record<string, unknown> = { content };
if (projectId) updateData.projectId = projectId;
if (prompt.priority !== undefined) updateData.priority = prompt.priority; if (prompt.priority !== undefined) updateData.priority = prompt.priority;
await client.put(`/api/v1/prompts/${(existing as { id: string }).id}`, updateData); if (linkTarget) updateData.linkTarget = linkTarget;
await withRetry(() => client.put(`/api/v1/prompts/${existing.id}`, updateData));
log(`Updated prompt: ${prompt.name}`); log(`Updated prompt: ${prompt.name}`);
} else { } else {
await client.post('/api/v1/prompts', prompt); await withRetry(() => client.post('/api/v1/prompts', body));
projectPromptCache.delete(projectId ?? '');
log(`Created prompt: ${prompt.name}`); log(`Created prompt: ${prompt.name}`);
} }
} catch (err) { } catch (err) {
@@ -341,15 +529,6 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
} }
} }
async function findByName(client: ApiClient, resource: string, name: string): Promise<unknown | null> {
try {
const items = await client.get<Array<{ name: string }>>(`/api/v1/${resource}`);
return items.find((item) => item.name === name) ?? null;
} catch {
return null;
}
}
async function findByField<T extends string>(client: ApiClient, resource: string, field: T, value: string): Promise<unknown | null> { async function findByField<T extends string>(client: ApiClient, resource: string, field: T, value: string): Promise<unknown | null> {
try { try {
const items = await client.get<Array<Record<string, unknown>>>(`/api/v1/${resource}`); const items = await client.get<Array<Record<string, unknown>>>(`/api/v1/${resource}`);


@@ -90,39 +90,51 @@ export function createConfigCommand(deps?: Partial<ConfigCommandDeps>, apiDeps?:
const cmd = config const cmd = config
.command(name) .command(name)
.description(hidden ? '' : 'Generate .mcp.json that connects a project via mcpctl mcp bridge') .description(hidden ? '' : 'Generate .mcp.json that connects a project via mcpctl mcp bridge')
.requiredOption('--project <name>', 'Project name') .option('--project <name>', 'Project name')
.option('-o, --output <path>', 'Output file path', '.mcp.json') .option('-o, --output <path>', 'Output file path', '.mcp.json')
.option('--merge', 'Merge with existing .mcp.json instead of overwriting') .option('--inspect', 'Include mcpctl-inspect MCP server for traffic monitoring')
.option('--stdout', 'Print to stdout instead of writing a file') .option('--stdout', 'Print to stdout instead of writing a file')
.action((opts: { project: string; output: string; merge?: boolean; stdout?: boolean }) => { .action((opts: { project?: string; output: string; inspect?: boolean; stdout?: boolean }) => {
const mcpConfig: McpConfig = { if (!opts.project && !opts.inspect) {
mcpServers: { log('Error: at least one of --project or --inspect is required');
[opts.project]: { process.exitCode = 1;
command: 'mcpctl', return;
args: ['mcp', '-p', opts.project], }
},
}, const servers: McpConfig['mcpServers'] = {};
}; if (opts.project) {
servers[opts.project] = {
command: 'mcpctl',
args: ['mcp', '-p', opts.project],
};
}
if (opts.inspect) {
servers['mcpctl-inspect'] = {
command: 'mcpctl',
args: ['console', '--inspect', '--stdin-mcp'],
};
}
if (opts.stdout) { if (opts.stdout) {
log(JSON.stringify(mcpConfig, null, 2)); log(JSON.stringify({ mcpServers: servers }, null, 2));
return; return;
} }
const outputPath = resolve(opts.output); const outputPath = resolve(opts.output);
let finalConfig = mcpConfig; let finalConfig: McpConfig = { mcpServers: servers };
if (opts.merge && existsSync(outputPath)) { // Always merge with existing .mcp.json — never overwrite other servers
if (existsSync(outputPath)) {
try { try {
const existing = JSON.parse(readFileSync(outputPath, 'utf-8')) as McpConfig; const existing = JSON.parse(readFileSync(outputPath, 'utf-8')) as McpConfig;
finalConfig = { finalConfig = {
mcpServers: { mcpServers: {
...existing.mcpServers, ...existing.mcpServers,
...mcpConfig.mcpServers, ...servers,
}, },
}; };
} catch { } catch {
// If existing file is invalid, just overwrite // If existing file is invalid, start fresh
} }
} }
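For illustration, running the generator with both `--project` and `--inspect` would produce an `.mcp.json` along these lines (`demo` is a placeholder project name):

```json
{
  "mcpServers": {
    "demo": {
      "command": "mcpctl",
      "args": ["mcp", "-p", "demo"]
    },
    "mcpctl-inspect": {
      "command": "mcpctl",
      "args": ["console", "--inspect", "--stdin-mcp"]
    }
  }
}
```

With the always-merge behavior above, any other servers already present in the file are preserved.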


@@ -9,8 +9,10 @@ export interface ConsoleCommandDeps {
export function createConsoleCommand(deps: ConsoleCommandDeps): Command { export function createConsoleCommand(deps: ConsoleCommandDeps): Command {
const cmd = new Command('console') const cmd = new Command('console')
.description('Interactive MCP console — see what an LLM sees when attached to a project') .description('Interactive MCP console — see what an LLM sees when attached to a project')
.argument('<project>', 'Project name to connect to') .argument('[project]', 'Project name to connect to')
.action(async (projectName: string) => { .option('--inspect', 'Passive traffic inspector — observe other clients\' MCP traffic')
.option('--stdin-mcp', 'Run inspector as MCP server over stdin/stdout (for Claude)')
.action(async (projectName: string | undefined, opts: { inspect?: boolean; stdinMcp?: boolean }) => {
let mcplocalUrl = 'http://localhost:3200'; let mcplocalUrl = 'http://localhost:3200';
if (deps.configLoader) { if (deps.configLoader) {
mcplocalUrl = deps.configLoader().mcplocalUrl; mcplocalUrl = deps.configLoader().mcplocalUrl;
@@ -23,6 +25,28 @@ export function createConsoleCommand(deps: ConsoleCommandDeps): Command {
} }
} }
// --inspect --stdin-mcp: MCP server for Claude
if (opts.inspect && opts.stdinMcp) {
const { runInspectMcp } = await import('./inspect-mcp.js');
await runInspectMcp(mcplocalUrl);
return;
}
// --inspect: TUI traffic inspector
if (opts.inspect) {
const { renderInspect } = await import('./inspect-app.js');
await renderInspect({ mcplocalUrl, projectFilter: projectName });
return;
}
// Regular interactive console — requires project name
if (!projectName) {
console.error('Error: project name is required for interactive console mode.');
console.error('Usage: mcpctl console <project>');
console.error(' mcpctl console --inspect [project]');
process.exit(1);
}
let token: string | undefined; let token: string | undefined;
if (deps.credentialsLoader) { if (deps.credentialsLoader) {
token = deps.credentialsLoader()?.token; token = deps.credentialsLoader()?.token;


@@ -0,0 +1,825 @@
/**
* Inspector TUI — passive MCP traffic sniffer.
*
* Connects to mcplocal's /inspect SSE endpoint and displays
* live traffic per project/session with color coding.
*
* Keys:
* s toggle sidebar
* j/k navigate events
* Enter expand/collapse event detail
* Esc close detail / deselect
* ↑/↓ select session (when sidebar visible)
* a all sessions
* c clear traffic
* q quit
*/
import { useState, useEffect, useRef } from 'react';
import { render, Box, Text, useInput, useApp, useStdout } from 'ink';
import type { IncomingMessage } from 'node:http';
import { request as httpRequest } from 'node:http';
// ── Types matching mcplocal's TrafficEvent ──
interface TrafficEvent {
timestamp: string;
projectName: string;
sessionId: string;
eventType: string;
method?: string;
upstreamName?: string;
body: unknown;
durationMs?: number;
}
interface ActiveSession {
sessionId: string;
projectName: string;
startedAt: string;
}
// ── SSE Client ──
function connectSSE(
url: string,
opts: {
onSessions: (sessions: ActiveSession[]) => void;
onEvent: (event: TrafficEvent) => void;
onLive: () => void;
onError: (err: string) => void;
},
): () => void {
let aborted = false;
const parsed = new URL(url);
const req = httpRequest(
{
hostname: parsed.hostname,
port: parsed.port,
path: parsed.pathname + parsed.search,
headers: { Accept: 'text/event-stream' },
},
(res: IncomingMessage) => {
let buffer = '';
let currentEventType = 'message';
res.setEncoding('utf-8');
res.on('data', (chunk: string) => {
buffer += chunk;
const lines = buffer.split('\n');
buffer = lines.pop()!; // Keep incomplete line
for (const line of lines) {
if (line.startsWith('event: ')) {
currentEventType = line.slice(7).trim();
} else if (line.startsWith('data: ')) {
const data = line.slice(6);
try {
const parsed = JSON.parse(data);
if (currentEventType === 'sessions') {
opts.onSessions(parsed as ActiveSession[]);
} else if (currentEventType === 'live') {
opts.onLive();
} else {
opts.onEvent(parsed as TrafficEvent);
}
} catch {
// Ignore unparseable data
}
currentEventType = 'message';
}
// Ignore comments (: keepalive) and blank lines
}
});
res.on('end', () => {
if (!aborted) opts.onError('SSE connection closed');
});
res.on('error', (err) => {
if (!aborted) opts.onError(err.message);
});
},
);
req.on('error', (err) => {
if (!aborted) opts.onError(err.message);
});
req.end();
return () => {
aborted = true;
req.destroy();
};
}
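The chunk framing inside `connectSSE` is easy to get wrong around partial lines. Here is the same logic pulled out as a standalone sketch (one function, no HTTP): the incomplete trailing line is buffered across chunks, and an `event:` field labels the next `data:` line, after which the type resets to `message`.

```typescript
// Returns a feed function; call it with each raw chunk from the stream.
function createSseParser(onEvent: (type: string, data: string) => void) {
  let buffer = '';
  let currentEventType = 'message';
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // keep the incomplete trailing line for next chunk
    for (const line of lines) {
      if (line.startsWith('event: ')) {
        currentEventType = line.slice(7).trim();
      } else if (line.startsWith('data: ')) {
        onEvent(currentEventType, line.slice(6));
        currentEventType = 'message';
      }
      // Comments (": keepalive") and blank lines fall through and are ignored.
    }
  };
}
```

A `data:` line split across two TCP chunks is reassembled before dispatch, which is exactly why the component above keeps `buffer` between `data` callbacks.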
// ── Formatting helpers ──
/** Safely dig into unknown objects */
function dig(obj: unknown, ...keys: string[]): unknown {
let cur = obj;
for (const k of keys) {
if (cur === null || cur === undefined || typeof cur !== 'object') return undefined;
cur = (cur as Record<string, unknown>)[k];
}
return cur;
}
function trunc(s: string, maxLen: number): string {
return s.length > maxLen ? s.slice(0, maxLen - 1) + '…' : s;
}
function nameList(items: unknown[], key: string, max: number): string {
if (items.length === 0) return '(none)';
const names = items.map((it) => dig(it, key) as string).filter(Boolean);
const shown = names.slice(0, max);
const rest = names.length - shown.length;
return shown.join(', ') + (rest > 0 ? ` +${rest} more` : '');
}
/** Extract meaningful summary from request params (strips jsonrpc/id boilerplate) */
function summarizeRequest(method: string, body: unknown): string {
const params = dig(body, 'params') as Record<string, unknown> | undefined;
switch (method) {
case 'initialize': {
const name = dig(params, 'clientInfo', 'name') ?? '?';
const ver = dig(params, 'clientInfo', 'version') ?? '';
const proto = dig(params, 'protocolVersion') ?? '';
return `client=${name}${ver ? ` v${ver}` : ''} proto=${proto}`;
}
case 'tools/call': {
const toolName = dig(params, 'name') as string ?? '?';
const args = dig(params, 'arguments') as Record<string, unknown> | undefined;
if (!args || Object.keys(args).length === 0) return `${toolName}()`;
const pairs = Object.entries(args).map(([k, v]) => {
const vs = typeof v === 'string' ? v : JSON.stringify(v);
return `${k}: ${trunc(vs, 40)}`;
});
return `${toolName}(${trunc(pairs.join(', '), 80)})`;
}
case 'resources/read': {
const uri = dig(params, 'uri') as string ?? '';
return uri;
}
case 'prompts/get': {
const name = dig(params, 'name') as string ?? '';
return name;
}
case 'tools/list':
case 'resources/list':
case 'prompts/list':
case 'notifications/initialized':
return '';
default: {
if (!params || Object.keys(params).length === 0) return '';
const s = JSON.stringify(params);
return trunc(s, 80);
}
}
}
/** Extract meaningful summary from response result */
function summarizeResponse(method: string, body: unknown): string {
const error = dig(body, 'error') as { message?: string; code?: number } | undefined;
if (error) {
return `ERROR ${error.code ?? ''}: ${error.message ?? 'unknown'}`;
}
const result = dig(body, 'result') as Record<string, unknown> | undefined;
if (!result) return '';
switch (method) {
case 'initialize': {
const name = dig(result, 'serverInfo', 'name') ?? '?';
const ver = dig(result, 'serverInfo', 'version') ?? '';
const caps = dig(result, 'capabilities') as Record<string, unknown> | undefined;
const capList = caps ? Object.keys(caps).filter((k) => caps[k] && Object.keys(caps[k] as object).length > 0) : [];
return `server=${name}${ver ? ` v${ver}` : ''}${capList.length ? ` caps=[${capList.join(',')}]` : ''}`;
}
case 'tools/list': {
const tools = (result.tools ?? []) as unknown[];
return `${tools.length} tools: ${nameList(tools, 'name', 6)}`;
}
case 'resources/list': {
const resources = (result.resources ?? []) as unknown[];
return `${resources.length} resources: ${nameList(resources, 'name', 6)}`;
}
case 'prompts/list': {
const prompts = (result.prompts ?? []) as unknown[];
if (prompts.length === 0) return '0 prompts';
return `${prompts.length} prompts: ${nameList(prompts, 'name', 6)}`;
}
case 'tools/call': {
const content = (result.content ?? []) as unknown[];
const isError = result.isError;
const first = content[0];
const text = (dig(first, 'text') as string) ?? '';
const prefix = isError ? 'ERROR: ' : '';
if (text) return prefix + trunc(text.replace(/\n/g, ' '), 100);
return prefix + `${content.length} content block(s)`;
}
case 'resources/read': {
const contents = (result.contents ?? []) as unknown[];
const first = contents[0];
const text = (dig(first, 'text') as string) ?? '';
if (text) return trunc(text.replace(/\n/g, ' '), 80);
return `${contents.length} content block(s)`;
}
case 'notifications/initialized':
return 'ok';
default: {
if (Object.keys(result).length === 0) return 'ok';
const s = JSON.stringify(result);
return trunc(s, 80);
}
}
}
/** Format full event body for expanded detail view (multi-line, readable) */
function formatBodyDetail(event: TrafficEvent): string[] {
const body = event.body as Record<string, unknown> | null;
if (!body) return ['(no body)'];
const lines: string[] = [];
const method = event.method ?? '';
// Strip jsonrpc envelope — show meaningful content only
if (event.eventType.includes('request') || event.eventType === 'client_notification') {
const params = body['params'] as Record<string, unknown> | undefined;
if (method === 'tools/call' && params) {
lines.push(`Tool: ${params['name'] as string}`);
const args = params['arguments'] as Record<string, unknown> | undefined;
if (args && Object.keys(args).length > 0) {
lines.push('Arguments:');
for (const [k, v] of Object.entries(args)) {
const vs = typeof v === 'string' ? v : JSON.stringify(v, null, 2);
for (const vl of vs.split('\n')) {
lines.push(` ${k}: ${vl}`);
}
}
}
} else if (method === 'initialize' && params) {
const ci = params['clientInfo'] as Record<string, unknown> | undefined;
lines.push(`Client: ${ci?.['name'] ?? '?'} v${ci?.['version'] ?? '?'}`);
lines.push(`Protocol: ${params['protocolVersion'] ?? '?'}`);
const caps = params['capabilities'] as Record<string, unknown> | undefined;
if (caps) lines.push(`Capabilities: ${JSON.stringify(caps)}`);
} else if (params && Object.keys(params).length > 0) {
for (const l of JSON.stringify(params, null, 2).split('\n')) {
lines.push(l);
}
} else {
lines.push('(empty params)');
}
} else if (event.eventType.includes('response')) {
const error = body['error'] as Record<string, unknown> | undefined;
if (error) {
lines.push(`Error ${error['code']}: ${error['message']}`);
if (error['data']) {
for (const l of JSON.stringify(error['data'], null, 2).split('\n')) {
lines.push(` ${l}`);
}
}
} else {
const result = body['result'] as Record<string, unknown> | undefined;
if (!result) {
lines.push('(empty result)');
} else if (method === 'tools/list') {
const tools = (result['tools'] ?? []) as Array<{ name: string; description?: string }>;
lines.push(`${tools.length} tools:`);
for (const t of tools) {
lines.push(`  ${t.name}${t.description ? ` — ${trunc(t.description, 60)}` : ''}`);
}
} else if (method === 'resources/list') {
const resources = (result['resources'] ?? []) as Array<{ name: string; uri?: string; description?: string }>;
lines.push(`${resources.length} resources:`);
for (const r of resources) {
lines.push(`  ${r.name}${r.uri ? ` (${r.uri})` : ''}${r.description ? ` — ${trunc(r.description, 50)}` : ''}`);
}
} else if (method === 'prompts/list') {
const prompts = (result['prompts'] ?? []) as Array<{ name: string; description?: string }>;
lines.push(`${prompts.length} prompts:`);
for (const p of prompts) {
lines.push(`  ${p.name}${p.description ? ` — ${trunc(p.description, 60)}` : ''}`);
}
} else if (method === 'tools/call') {
const isErr = result['isError'];
const content = (result['content'] ?? []) as Array<{ type?: string; text?: string }>;
if (isErr) lines.push('(error response)');
for (const c of content) {
if (c.text) {
for (const l of c.text.split('\n')) {
lines.push(l);
}
} else {
lines.push(`[${c.type ?? 'unknown'} content]`);
}
}
} else if (method === 'initialize') {
const si = result['serverInfo'] as Record<string, unknown> | undefined;
lines.push(`Server: ${si?.['name'] ?? '?'} v${si?.['version'] ?? '?'}`);
lines.push(`Protocol: ${result['protocolVersion'] ?? '?'}`);
const caps = result['capabilities'] as Record<string, unknown> | undefined;
if (caps) {
lines.push('Capabilities:');
for (const [k, v] of Object.entries(caps)) {
if (v && typeof v === 'object' && Object.keys(v).length > 0) {
lines.push(` ${k}: ${JSON.stringify(v)}`);
}
}
}
const instructions = result['instructions'] as string | undefined;
if (instructions) {
lines.push('');
lines.push('Instructions:');
for (const l of instructions.split('\n')) {
lines.push(` ${l}`);
}
}
} else {
for (const l of JSON.stringify(result, null, 2).split('\n')) {
lines.push(l);
}
}
}
} else {
// Lifecycle events
for (const l of JSON.stringify(body, null, 2).split('\n')) {
lines.push(l);
}
}
return lines;
}
interface FormattedEvent {
arrow: string;
color: string;
label: string;
detail: string;
detailColor?: string | undefined;
}
function formatEvent(event: TrafficEvent): FormattedEvent {
const method = event.method ?? '';
switch (event.eventType) {
case 'client_request':
return { arrow: '→', color: 'green', label: method, detail: summarizeRequest(method, event.body) };
case 'client_response': {
const detail = summarizeResponse(method, event.body);
const hasError = detail.startsWith('ERROR');
return { arrow: '←', color: 'blue', label: method, detail, detailColor: hasError ? 'red' : undefined };
}
case 'client_notification':
return { arrow: '◂', color: 'magenta', label: method, detail: summarizeRequest(method, event.body) };
case 'upstream_request':
return { arrow: ' ⇢', color: 'yellowBright', label: `${event.upstreamName ?? '?'}/${method}`, detail: summarizeRequest(method, event.body) };
case 'upstream_response': {
const ms = event.durationMs !== undefined ? `${event.durationMs}ms` : '';
const detail = summarizeResponse(method, event.body);
const hasError = detail.startsWith('ERROR');
return { arrow: ' ⇠', color: 'yellowBright', label: `${event.upstreamName ?? '?'}/${method}`, detail: ms ? `[${ms}] ${detail}` : detail, detailColor: hasError ? 'red' : undefined };
}
case 'session_created':
return { arrow: '●', color: 'cyan', label: `session ${event.sessionId.slice(0, 8)}`, detail: `project=${event.projectName}` };
case 'session_closed':
return { arrow: '○', color: 'red', label: `session ${event.sessionId.slice(0, 8)}`, detail: 'closed' };
default:
return { arrow: '?', color: 'white', label: event.eventType, detail: '' };
}
}
function formatTime(iso: string): string {
try {
const d = new Date(iso);
return d.toLocaleTimeString('en-GB', { hour12: false, hour: '2-digit', minute: '2-digit', second: '2-digit' });
} catch {
return '??:??:??';
}
}
// ── Session Sidebar ──
function SessionList({ sessions, selected, eventCounts }: {
sessions: ActiveSession[];
selected: number;
eventCounts: Map<string, number>;
}) {
return (
<Box flexDirection="column" width={32} borderStyle="round" borderColor="gray" paddingX={1}>
<Text bold color="cyan">
{' '}Sessions{' '}
<Text dimColor>({sessions.length})</Text>
</Text>
<Box marginTop={0}>
<Text color={selected === -1 ? 'cyan' : undefined} bold={selected === -1}>
{selected === -1 ? ' ▸ ' : ' '}
<Text>all sessions</Text>
</Text>
</Box>
{sessions.length === 0 && (
<Box marginTop={1}>
<Text dimColor> waiting for connections</Text>
</Box>
)}
{sessions.map((s, i) => {
const count = eventCounts.get(s.sessionId) ?? 0;
return (
<Box key={s.sessionId} flexDirection="column">
<Text wrap="truncate">
<Text color={i === selected ? 'cyan' : undefined} bold={i === selected}>
{i === selected ? ' ▸ ' : ' '}
{s.projectName}
</Text>
</Text>
<Text wrap="truncate" dimColor>
{' '}
{s.sessionId.slice(0, 8)}
{count > 0 ? ` · ${count} events` : ''}
</Text>
</Box>
);
})}
<Box flexGrow={1} />
<Box borderStyle="single" borderTop borderColor="gray" paddingTop={0}>
<Text dimColor>
{'[↑↓] session [a] all\n[s] sidebar [c] clear\n[j/k] event [⏎] expand\n[q] quit'}
</Text>
</Box>
</Box>
);
}
// ── Traffic Log ──
function TrafficLog({ events, height, showProject, focusedIdx }: {
events: TrafficEvent[];
height: number;
showProject: boolean;
focusedIdx: number; // -1 = no focus (auto-scroll to bottom)
}) {
// When focusedIdx >= 0, center the focused event in the view
// When focusedIdx === -1, show the latest events (auto-scroll)
const maxVisible = height - 2;
let startIdx: number;
if (focusedIdx >= 0) {
// Center focused event, but clamp to valid range
startIdx = Math.max(0, Math.min(focusedIdx - Math.floor(maxVisible / 2), events.length - maxVisible));
} else {
startIdx = Math.max(0, events.length - maxVisible);
}
const visible = events.slice(startIdx, startIdx + maxVisible);
const visibleBaseIdx = startIdx;
return (
<Box flexDirection="column" flexGrow={1} paddingLeft={1}>
<Text bold>
Traffic <Text dimColor>({events.length} events{focusedIdx >= 0 ? ` · #${focusedIdx + 1} selected` : ''})</Text>
</Text>
{visible.length === 0 && (
<Box marginTop={1}>
<Text dimColor> waiting for traffic</Text>
</Box>
)}
{visible.map((event, vi) => {
const absIdx = visibleBaseIdx + vi;
const isFocused = absIdx === focusedIdx;
const { arrow, color, label, detail, detailColor } = formatEvent(event);
const isUpstream = event.eventType.startsWith('upstream_');
const isLifecycle = event.eventType === 'session_created' || event.eventType === 'session_closed';
const marker = isFocused ? '▸' : ' ';
if (isLifecycle) {
return (
<Text key={vi} wrap="truncate">
<Text color={isFocused ? 'cyan' : undefined}>{marker}</Text>
<Text dimColor>{formatTime(event.timestamp)} </Text>
<Text color={color} bold>{arrow} {label}</Text>
<Text dimColor> {detail}</Text>
</Text>
);
}
return (
<Text key={vi} wrap="truncate">
<Text color={isFocused ? 'cyan' : undefined}>{marker}</Text>
<Text dimColor>{formatTime(event.timestamp)} </Text>
{showProject && <Text color="gray">[{trunc(event.projectName, 12)}] </Text>}
<Text color={color}>{arrow} </Text>
<Text bold={!isUpstream} color={color}>{label}</Text>
{detail ? (
<Text color={detailColor} dimColor={!detailColor}> {detail}</Text>
) : null}
</Text>
);
})}
</Box>
);
}
// ── Detail Pane ──
function DetailPane({ event, maxLines, scrollOffset }: {
event: TrafficEvent;
maxLines: number;
scrollOffset: number;
}) {
const { arrow, color, label } = formatEvent(event);
const allLines = formatBodyDetail(event);
const bodyHeight = maxLines - 3; // header + border
const visibleLines = allLines.slice(scrollOffset, scrollOffset + bodyHeight);
const totalLines = allLines.length;
const canScroll = totalLines > bodyHeight;
const atEnd = scrollOffset + bodyHeight >= totalLines;
return (
<Box flexDirection="column" borderStyle="round" borderColor="gray" paddingX={1} height={maxLines}>
<Text bold>
<Text color={color}>{arrow} {label}</Text>
<Text dimColor> {formatTime(event.timestamp)} {event.projectName}/{event.sessionId.slice(0, 8)}</Text>
{canScroll ? (
<Text dimColor> [{scrollOffset + 1}-{Math.min(scrollOffset + bodyHeight, totalLines)}/{totalLines}] ↑/↓ scroll · Esc close</Text>
) : (
<Text dimColor> Esc to close</Text>
)}
</Text>
{visibleLines.map((line, i) => (
<Text key={i} wrap="truncate" dimColor={line.startsWith(' ')}>
{line}
</Text>
))}
{canScroll && !atEnd && (
<Text dimColor> +{totalLines - scrollOffset - bodyHeight} more lines </Text>
)}
</Box>
);
}
// ── Root App ──
interface InspectAppProps {
inspectUrl: string;
projectFilter?: string;
}
function InspectApp({ inspectUrl, projectFilter }: InspectAppProps) {
const { exit } = useApp();
const { stdout } = useStdout();
const termHeight = stdout?.rows ?? 24;
const [sessions, setSessions] = useState<ActiveSession[]>([]);
const [events, setEvents] = useState<TrafficEvent[]>([]);
const [selectedSession, setSelectedSession] = useState(-1); // -1 = all
const [connected, setConnected] = useState(false);
const [error, setError] = useState<string | null>(null);
const [showSidebar, setShowSidebar] = useState(true);
const [focusedEvent, setFocusedEvent] = useState(-1); // -1 = auto-scroll
const [expandedEvent, setExpandedEvent] = useState(false);
const [detailScroll, setDetailScroll] = useState(0);
// Track latest event count for auto-follow
const prevCountRef = useRef(0);
useEffect(() => {
const url = new URL(inspectUrl);
if (projectFilter) url.searchParams.set('project', projectFilter);
const disconnect = connectSSE(url.toString(), {
onSessions: (s) => setSessions(s),
onEvent: (e) => {
setEvents((prev) => [...prev, e]);
// Auto-add new sessions we haven't seen
if (e.eventType === 'session_created') {
setSessions((prev) => {
if (prev.some((s) => s.sessionId === e.sessionId)) return prev;
return [...prev, { sessionId: e.sessionId, projectName: e.projectName, startedAt: e.timestamp }];
});
}
if (e.eventType === 'session_closed') {
setSessions((prev) => prev.filter((s) => s.sessionId !== e.sessionId));
}
},
onLive: () => setConnected(true),
onError: (msg) => setError(msg),
});
return disconnect;
}, [inspectUrl, projectFilter]);
// Filter events by selected session
const filteredEvents = selectedSession === -1
? events
: events.filter((e) => e.sessionId === sessions[selectedSession]?.sessionId);
// Auto-follow: when new events arrive and we're not browsing, stay at bottom
useEffect(() => {
if (focusedEvent === -1 && filteredEvents.length > prevCountRef.current) {
// Auto-scrolling (focusedEvent === -1 means "follow tail")
}
prevCountRef.current = filteredEvents.length;
}, [filteredEvents.length, focusedEvent]);
// Event counts per session
const eventCounts = new Map<string, number>();
for (const e of events) {
eventCounts.set(e.sessionId, (eventCounts.get(e.sessionId) ?? 0) + 1);
}
const showProject = selectedSession === -1 && sessions.length > 1;
// Keyboard
useInput((input, key) => {
if (input === 'q') {
exit();
return;
}
// When detail pane is expanded, arrows scroll the detail content
if (expandedEvent && focusedEvent >= 0) {
if (key.escape) {
setExpandedEvent(false);
setDetailScroll(0);
return;
}
if (key.downArrow || input === 'j') {
setDetailScroll((s) => s + 1);
return;
}
if (key.upArrow || input === 'k') {
setDetailScroll((s) => Math.max(0, s - 1));
return;
}
// Enter: close detail
if (key.return) {
setExpandedEvent(false);
setDetailScroll(0);
return;
}
// q still quits even in detail mode
return;
}
// Esc: deselect event
if (key.escape) {
if (focusedEvent >= 0) {
setFocusedEvent(-1);
}
return;
}
// Enter: open detail pane for focused event
if (key.return && focusedEvent >= 0 && focusedEvent < filteredEvents.length) {
setExpandedEvent(true);
setDetailScroll(0);
return;
}
// s: toggle sidebar
if (input === 's') {
setShowSidebar((prev) => !prev);
return;
}
// a: all sessions
if (input === 'a') {
setSelectedSession(-1);
setFocusedEvent(-1);
setExpandedEvent(false);
setDetailScroll(0);
return;
}
// c: clear
if (input === 'c') {
setEvents([]);
setFocusedEvent(-1);
setExpandedEvent(false);
setDetailScroll(0);
return;
}
// j/k or arrow keys: navigate events
if (input === 'j' || key.downArrow) {
if (key.downArrow && showSidebar && focusedEvent < 0) {
// Arrow keys control session selection when sidebar visible and no event focused
setSelectedSession((s) => Math.min(sessions.length - 1, s + 1));
} else {
// j always controls event navigation, down-arrow too when event is focused
setFocusedEvent((prev) => {
const next = prev + 1;
return next >= filteredEvents.length ? filteredEvents.length - 1 : next;
});
setExpandedEvent(false);
}
return;
}
if (input === 'k' || key.upArrow) {
if (key.upArrow && showSidebar && focusedEvent < 0) {
setSelectedSession((s) => Math.max(-1, s - 1));
} else {
setFocusedEvent((prev) => {
if (prev <= 0) return -1; // Back to auto-scroll
return prev - 1;
});
setExpandedEvent(false);
}
return;
}
// G: jump to latest (end)
if (input === 'G') {
setFocusedEvent(-1);
setExpandedEvent(false);
setDetailScroll(0);
return;
}
});
// Layout calculations
const headerHeight = 1;
const footerHeight = 1;
// Detail pane takes up to half the screen
const detailHeight = expandedEvent && focusedEvent >= 0 ? Math.max(6, Math.floor(termHeight * 0.45)) : 0;
const contentHeight = termHeight - headerHeight - footerHeight - detailHeight;
const focusedEventObj = focusedEvent >= 0 ? filteredEvents[focusedEvent] : undefined;
return (
<Box flexDirection="column" height={termHeight}>
{/* ── Header ── */}
<Box paddingX={1}>
<Text bold color="cyan">MCP Inspector</Text>
<Text dimColor> </Text>
<Text color={connected ? 'green' : 'yellow'}>{connected ? '● live' : '○ connecting…'}</Text>
{projectFilter && <Text dimColor> project: {projectFilter}</Text>}
{selectedSession >= 0 && sessions[selectedSession] && (
<Text dimColor> session: {sessions[selectedSession]!.sessionId.slice(0, 8)}</Text>
)}
{!showSidebar && <Text dimColor> [s] show sidebar</Text>}
</Box>
{error && (
<Box paddingX={1}>
<Text color="red"> {error}</Text>
</Box>
)}
{/* ── Main content ── */}
<Box flexDirection="row" height={contentHeight}>
{showSidebar && (
<SessionList
sessions={sessions}
selected={selectedSession}
eventCounts={eventCounts}
/>
)}
<TrafficLog
events={filteredEvents}
height={contentHeight}
showProject={showProject}
focusedIdx={focusedEvent}
/>
</Box>
{/* ── Detail pane ── */}
{expandedEvent && focusedEventObj && (
<DetailPane event={focusedEventObj} maxLines={detailHeight} scrollOffset={detailScroll} />
)}
{/* ── Footer legend ── */}
<Box paddingX={1}>
<Text dimColor>
<Text color="green">→ req</Text>
{' '}
<Text color="blue">← resp</Text>
{' '}
<Text color="yellowBright">⇢ upstream</Text>
{' '}
<Text color="magenta">◂ notify</Text>
{' │ '}
{!showSidebar && <Text>[s] sidebar </Text>}
<Text>[j/k] navigate [⏎] expand [G] latest [q] quit</Text>
</Text>
</Box>
</Box>
);
}
// ── Render entrypoint ──
export interface InspectRenderOptions {
mcplocalUrl: string;
projectFilter?: string;
}
export async function renderInspect(opts: InspectRenderOptions): Promise<void> {
const inspectUrl = `${opts.mcplocalUrl.replace(/\/$/, '')}/inspect`;
const instance = render(
<InspectApp inspectUrl={inspectUrl} projectFilter={opts.projectFilter} />,
);
await instance.waitUntilExit();
}


@@ -0,0 +1,404 @@
/**
* MCP server over stdin/stdout for the traffic inspector.
*
* Claude adds this to .mcp.json as:
* { "mcpctl-inspect": { "command": "mcpctl", "args": ["console", "--inspect", "--stdin-mcp"] } }
*
* Subscribes to mcplocal's /inspect SSE endpoint and exposes traffic
* data via MCP tools: list_sessions, get_traffic, get_session_info.
*/
import { createInterface } from 'node:readline';
import { request as httpRequest } from 'node:http';
import type { IncomingMessage } from 'node:http';
// ── Types ──
interface TrafficEvent {
timestamp: string;
projectName: string;
sessionId: string;
eventType: string;
method?: string;
upstreamName?: string;
body: unknown;
durationMs?: number;
}
interface ActiveSession {
sessionId: string;
projectName: string;
startedAt: string;
eventCount: number;
}
interface JsonRpcRequest {
jsonrpc: string;
id: string | number;
method: string;
params?: Record<string, unknown>;
}
// ── State ──
const sessions = new Map<string, ActiveSession>();
const events: TrafficEvent[] = [];
const MAX_EVENTS = 10000;
// ── SSE Client ──
function connectSSE(url: string): void {
const parsed = new URL(url);
const req = httpRequest(
{
hostname: parsed.hostname,
port: parsed.port,
path: parsed.pathname + parsed.search,
headers: { Accept: 'text/event-stream' },
},
(res: IncomingMessage) => {
let buffer = '';
let currentEventType = 'message';
res.setEncoding('utf-8');
res.on('data', (chunk: string) => {
buffer += chunk;
const lines = buffer.split('\n');
buffer = lines.pop()!;
for (const line of lines) {
if (line.startsWith('event: ')) {
currentEventType = line.slice(7).trim();
} else if (line.startsWith('data: ')) {
try {
const data = JSON.parse(line.slice(6));
if (currentEventType === 'sessions') {
for (const s of data as Array<{ sessionId: string; projectName: string; startedAt: string }>) {
sessions.set(s.sessionId, { ...s, eventCount: 0 });
}
} else if (currentEventType !== 'live') {
handleEvent(data as TrafficEvent);
}
} catch {
// ignore
}
currentEventType = 'message';
}
}
});
res.on('end', () => {
// Reconnect after 2s
setTimeout(() => connectSSE(url), 2000);
});
res.on('error', () => {
setTimeout(() => connectSSE(url), 2000);
});
},
);
req.on('error', () => {
setTimeout(() => connectSSE(url), 2000);
});
req.end();
}
function handleEvent(event: TrafficEvent): void {
events.push(event);
if (events.length > MAX_EVENTS) {
events.splice(0, events.length - MAX_EVENTS);
}
// Track sessions
if (event.eventType === 'session_created') {
sessions.set(event.sessionId, {
sessionId: event.sessionId,
projectName: event.projectName,
startedAt: event.timestamp,
eventCount: 0,
});
} else if (event.eventType === 'session_closed') {
sessions.delete(event.sessionId);
}
// Increment event count
const session = sessions.get(event.sessionId);
if (session) {
session.eventCount++;
}
}
// ── MCP Protocol Handlers ──
const TOOLS = [
{
name: 'list_sessions',
description: 'List all active MCP sessions with their project name, start time, and event count.',
inputSchema: {
type: 'object' as const,
properties: {
project: { type: 'string' as const, description: 'Filter by project name' },
},
},
},
{
name: 'get_traffic',
description: 'Get captured MCP traffic events. Returns recent events, optionally filtered by session, method, or event type.',
inputSchema: {
type: 'object' as const,
properties: {
sessionId: { type: 'string' as const, description: 'Filter by session ID (first 8 chars is enough)' },
method: { type: 'string' as const, description: 'Filter by JSON-RPC method (e.g. "tools/call", "initialize")' },
eventType: { type: 'string' as const, description: 'Filter by event type: client_request, client_response, client_notification, upstream_request, upstream_response' },
limit: { type: 'number' as const, description: 'Max events to return (default: 50)' },
offset: { type: 'number' as const, description: 'Skip first N matching events' },
},
},
},
{
name: 'get_session_info',
description: 'Get detailed information about a specific session including its recent traffic summary.',
inputSchema: {
type: 'object' as const,
properties: {
sessionId: { type: 'string' as const, description: 'Session ID (first 8 chars is enough)' },
},
required: ['sessionId'] as const,
},
},
];
function handleInitialize(id: string | number): void {
send({
jsonrpc: '2.0',
id,
result: {
protocolVersion: '2024-11-05',
serverInfo: { name: 'mcpctl-inspector', version: '1.0.0' },
capabilities: { tools: {} },
},
});
}
function handleToolsList(id: string | number): void {
send({ jsonrpc: '2.0', id, result: { tools: TOOLS } });
}
function handleToolsCall(id: string | number, params: { name: string; arguments?: Record<string, unknown> }): void {
const args = params.arguments ?? {};
switch (params.name) {
case 'list_sessions': {
let result = [...sessions.values()];
const project = args['project'] as string | undefined;
if (project) {
result = result.filter((s) => s.projectName === project);
}
send({
jsonrpc: '2.0',
id,
result: {
content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
},
});
break;
}
case 'get_traffic': {
const sessionFilter = args['sessionId'] as string | undefined;
const methodFilter = args['method'] as string | undefined;
const typeFilter = args['eventType'] as string | undefined;
const limit = (args['limit'] as number | undefined) ?? 50;
const offset = (args['offset'] as number | undefined) ?? 0;
let filtered = events;
if (sessionFilter) {
filtered = filtered.filter((e) => e.sessionId.startsWith(sessionFilter));
}
if (methodFilter) {
filtered = filtered.filter((e) => e.method === methodFilter);
}
if (typeFilter) {
filtered = filtered.filter((e) => e.eventType === typeFilter);
}
const sliced = filtered.slice(offset, offset + limit);
// Format as readable lines (strip jsonrpc/id boilerplate)
const lines = sliced.map((e) => {
const arrow = e.eventType === 'client_request' ? '→'
: e.eventType === 'client_response' ? '←'
: e.eventType === 'client_notification' ? '◂'
: e.eventType === 'upstream_request' ? '⇢'
: e.eventType === 'upstream_response' ? '⇠'
: e.eventType === 'session_created' ? '●'
: e.eventType === 'session_closed' ? '○'
: '?';
const layer = e.eventType.startsWith('upstream') ? 'internal' : 'client';
const ms = e.durationMs !== undefined ? ` (${e.durationMs}ms)` : '';
const upstream = e.upstreamName ? `${e.upstreamName}/` : '';
const time = e.timestamp.split('T')[1]?.replace('Z', '') ?? e.timestamp;
// Extract meaningful content from body (strip jsonrpc/id envelope)
const body = e.body as Record<string, unknown> | null;
let content = '';
if (body) {
if (e.eventType.includes('request') || e.eventType === 'client_notification') {
const params = body['params'] as Record<string, unknown> | undefined;
if (e.method === 'tools/call' && params) {
const toolArgs = params['arguments'] as Record<string, unknown> | undefined;
content = `tool=${params['name']}${toolArgs ? ` args=${JSON.stringify(toolArgs)}` : ''}`;
} else if (e.method === 'resources/read' && params) {
content = `uri=${params['uri']}`;
} else if (e.method === 'initialize' && params) {
const ci = params['clientInfo'] as Record<string, unknown> | undefined;
content = ci ? `client=${ci['name']} v${ci['version']}` : '';
} else if (params && Object.keys(params).length > 0) {
content = JSON.stringify(params);
}
} else if (e.eventType.includes('response')) {
const result = body['result'] as Record<string, unknown> | undefined;
const error = body['error'] as Record<string, unknown> | undefined;
if (error) {
content = `ERROR ${error['code']}: ${error['message']}`;
} else if (result) {
if (e.method === 'tools/list') {
const tools = (result['tools'] ?? []) as Array<{ name: string }>;
content = `${tools.length} tools: ${tools.map((t) => t.name).join(', ')}`;
} else if (e.method === 'resources/list') {
const res = (result['resources'] ?? []) as Array<{ name: string }>;
content = `${res.length} resources: ${res.map((r) => r.name).join(', ')}`;
} else if (e.method === 'tools/call') {
const c = (result['content'] ?? []) as Array<{ text?: string }>;
const text = c[0]?.text ?? '';
content = text.length > 200 ? text.slice(0, 200) + '…' : text;
} else if (e.method === 'initialize') {
const si = result['serverInfo'] as Record<string, unknown> | undefined;
content = si ? `server=${si['name']} v${si['version']}` : '';
} else if (Object.keys(result).length > 0) {
const s = JSON.stringify(result);
content = s.length > 200 ? s.slice(0, 200) + '…' : s;
}
}
}
}
return `${time} ${arrow} [${layer}] ${upstream}${e.method ?? e.eventType}${ms}${content ? ' ' + content : ''}`;
});
send({
jsonrpc: '2.0',
id,
result: {
content: [{
type: 'text',
text: `${filtered.length} total events (showing ${offset + 1}-${offset + sliced.length})\n\n${lines.join('\n')}`,
}],
},
});
break;
}
case 'get_session_info': {
const sid = args['sessionId'] as string;
const session = [...sessions.values()].find((s) => s.sessionId.startsWith(sid));
if (!session) {
send({
jsonrpc: '2.0',
id,
result: {
content: [{ type: 'text', text: `Session not found: ${sid}` }],
isError: true,
},
});
return;
}
const sessionEvents = events.filter((e) => e.sessionId === session.sessionId);
const methods = new Map<string, number>();
for (const e of sessionEvents) {
if (e.method) {
methods.set(e.method, (methods.get(e.method) ?? 0) + 1);
}
}
const info = {
...session,
totalEvents: sessionEvents.length,
methodCounts: Object.fromEntries(methods),
lastEvent: sessionEvents.length > 0
? sessionEvents[sessionEvents.length - 1]!.timestamp
: null,
};
send({
jsonrpc: '2.0',
id,
result: {
content: [{ type: 'text', text: JSON.stringify(info, null, 2) }],
},
});
break;
}
default:
send({
jsonrpc: '2.0',
id,
error: { code: -32601, message: `Unknown tool: ${params.name}` },
});
}
}
function handleRequest(request: JsonRpcRequest): void {
switch (request.method) {
case 'initialize':
handleInitialize(request.id);
break;
case 'notifications/initialized':
// Notification — no response
break;
case 'tools/list':
handleToolsList(request.id);
break;
case 'tools/call':
handleToolsCall(request.id, request.params as { name: string; arguments?: Record<string, unknown> });
break;
default:
if (request.id !== undefined) {
send({
jsonrpc: '2.0',
id: request.id,
error: { code: -32601, message: `Method not supported: ${request.method}` },
});
}
}
}
function send(message: unknown): void {
process.stdout.write(JSON.stringify(message) + '\n');
}
// ── Entrypoint ──
export async function runInspectMcp(mcplocalUrl: string): Promise<void> {
const inspectUrl = `${mcplocalUrl.replace(/\/$/, '')}/inspect`;
connectSSE(inspectUrl);
const rl = createInterface({ input: process.stdin });
for await (const line of rl) {
const trimmed = line.trim();
if (!trimmed) continue;
try {
const request = JSON.parse(trimmed) as JsonRpcRequest;
handleRequest(request);
} catch {
// Ignore unparseable lines
}
}
}
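The stdio loop above speaks newline-delimited JSON-RPC: one message per line on stdout, with a partial trailing line buffered until its newline arrives (the same split-and-pop pattern `connectSSE` uses). A minimal self-contained sketch of that framing — helper names like `frame` and `makeLineParser` are illustrative, not part of mcpctl:

```typescript
// One JSON-RPC message per line; partial lines stay buffered between chunks.
type JsonRpcMessage = { jsonrpc: string; id?: string | number; method?: string; result?: unknown };

function frame(msg: JsonRpcMessage): string {
  return JSON.stringify(msg) + '\n';
}

function makeLineParser(onMessage: (msg: JsonRpcMessage) => void): (chunk: string) => void {
  let buffer = '';
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // keep the incomplete trailing line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      onMessage(JSON.parse(line) as JsonRpcMessage);
    }
  };
}

// Round-trip: frame a tools/list request and feed it back in two chunks.
const received: JsonRpcMessage[] = [];
const feed = makeLineParser((m) => received.push(m));
const wire = frame({ jsonrpc: '2.0', id: 1, method: 'tools/list' });
feed(wire.slice(0, 10));
feed(wire.slice(10));
console.log(received.length, received[0]?.method); // → 1 tools/list
```

Splitting the framing out like this makes the buffering behavior easy to exercise without a live server on the other end of the pipe.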


@@ -1,5 +1,6 @@
import { Command } from 'commander';
import { type ApiClient, ApiError } from '../api-client.js';
import { resolveNameOrId } from './shared.js';
export interface CreateCommandDeps {
client: ApiClient;
log: (...args: unknown[]) => void;
@@ -55,7 +56,7 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
const { client, log } = deps;
const cmd = new Command('create')
.description('Create a resource (server, secret, project, user, group, rbac, serverattachment, prompt)');
// --- create server ---
cmd.command('server')
@@ -72,6 +73,7 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
.option('--replicas <count>', 'Number of replicas')
.option('--env <entry>', 'Env var: KEY=value (inline) or KEY=secretRef:SECRET:KEY (secret ref, repeat for multiple)', collect, [])
.option('--from-template <name>', 'Create from template (name or name:version)')
.option('--env-from-secret <secret>', 'Map template env vars from a secret')
.option('--force', 'Update if already exists')
.action(async (name: string, opts) => {
let base: Record<string, unknown> = {};
@@ -103,7 +105,33 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
// Convert template env (description/required) to server env (name/value/valueFrom)
const tplEnv = template.env as Array<{ name: string; description?: string; required?: boolean; defaultValue?: string }> | undefined;
if (tplEnv && tplEnv.length > 0) {
if (opts.envFromSecret) {
// --env-from-secret: map all template env vars from the specified secret
const secretName = opts.envFromSecret as string;
const secrets = await client.get<Array<{ name: string; data: Record<string, string> }>>('/api/v1/secrets');
const secret = secrets.find((s) => s.name === secretName);
if (!secret) throw new Error(`Secret '${secretName}' not found`);
const missing = tplEnv
.filter((e) => e.required !== false && !(e.name in secret.data))
.map((e) => e.name);
if (missing.length > 0) {
throw new Error(
`Secret '${secretName}' is missing required keys: ${missing.join(', ')}\n` +
`Secret has: ${Object.keys(secret.data).join(', ')}`,
);
}
base.env = tplEnv.map((e) => {
if (e.name in secret.data) {
return { name: e.name, valueFrom: { secretRef: { name: secretName, key: e.name } } };
}
return { name: e.name, value: e.defaultValue ?? '' };
});
log(`Mapped ${tplEnv.filter((e) => e.name in secret.data).length} env var(s) from secret '${secretName}'`);
} else {
base.env = tplEnv.map((e) => ({ name: e.name, value: e.defaultValue ?? '' }));
}
}
// Track template origin
@@ -363,6 +391,10 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
const fs = await import('node:fs/promises');
content = await fs.readFile(opts.contentFile as string, 'utf-8');
}
// For linked prompts, auto-generate placeholder content if none provided
if (!content && opts.link) {
content = `Linked prompt — content fetched from ${opts.link as string}`;
}
if (!content) {
throw new Error('--content or --content-file is required');
}
@@ -390,6 +422,22 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
log(`prompt '${prompt.name}' created (id: ${prompt.id})`);
});
// --- create serverattachment ---
cmd.command('serverattachment')
.alias('sa')
.description('Attach a server to a project')
.argument('<server>', 'Server name')
.option('--project <name>', 'Project name')
.action(async (serverName: string, opts) => {
const projectName = opts.project as string | undefined;
if (!projectName) {
throw new Error('--project is required. Usage: mcpctl create serverattachment <server> --project <name>');
}
const projectId = await resolveNameOrId(client, 'projects', projectName);
await client.post(`/api/v1/projects/${projectId}/servers`, { server: serverName });
log(`server '${serverName}' attached to project '${projectName}'`);
});
// --- create promptrequest ---
cmd.command('promptrequest')
.description('Create a prompt request (pending proposal that needs approval)')


@@ -14,9 +14,21 @@ export function createDeleteCommand(deps: DeleteCommandDeps): Command {
.description('Delete a resource (server, instance, secret, project, user, group, rbac)')
.argument('<resource>', 'resource type')
.argument('<id>', 'resource ID or name')
.option('--project <name>', 'Project name (for serverattachment)')
.action(async (resourceArg: string, idOrName: string, opts: { project?: string }) => {
const resource = resolveResource(resourceArg);
// Serverattachments: delete serverattachment <server> --project <project>
if (resource === 'serverattachments') {
if (!opts.project) {
throw new Error('--project is required. Usage: mcpctl delete serverattachment <server> --project <name>');
}
const projectId = await resolveNameOrId(client, 'projects', opts.project);
await client.delete(`/api/v1/projects/${projectId}/servers/${idOrName}`);
log(`server '${idOrName}' detached from project '${opts.project}'`);
return;
}
// Resolve name → ID for any resource type
let id: string;
try {


@@ -504,6 +504,95 @@ function formatRbacDetail(rbac: Record<string, unknown>): string {
return lines.join('\n');
}
async function formatPromptDetail(prompt: Record<string, unknown>, client?: ApiClient): Promise<string> {
const lines: string[] = [];
lines.push(`=== Prompt: ${prompt.name} ===`);
lines.push(`${pad('Name:')}${prompt.name}`);
const proj = prompt.project as { name: string } | null | undefined;
lines.push(`${pad('Project:')}${proj?.name ?? (prompt.projectId ? String(prompt.projectId) : '(global)')}`);
lines.push(`${pad('Priority:')}${prompt.priority ?? 5}`);
// Link info
const link = prompt.linkTarget as string | null | undefined;
if (link) {
lines.push('');
lines.push('Link:');
lines.push(` ${pad('Target:', 12)}${link}`);
const status = prompt.linkStatus as string | null | undefined;
if (status) lines.push(` ${pad('Status:', 12)}${status}`);
}
// Content — resolve linked content if possible
let content = prompt.content as string | undefined;
if (link && client) {
const resolved = await resolveLink(link, client);
if (resolved) content = resolved;
}
lines.push('');
lines.push('Content:');
if (content) {
// Indent content with 2 spaces for readability
for (const line of content.split('\n')) {
lines.push(` ${line}`);
}
} else {
lines.push(' (no content)');
}
lines.push('');
lines.push('Metadata:');
lines.push(` ${pad('ID:', 12)}${prompt.id}`);
if (prompt.version) lines.push(` ${pad('Version:', 12)}${prompt.version}`);
if (prompt.createdAt) lines.push(` ${pad('Created:', 12)}${prompt.createdAt}`);
if (prompt.updatedAt) lines.push(` ${pad('Updated:', 12)}${prompt.updatedAt}`);
return lines.join('\n');
}
/**
* Resolve a prompt link target via mcpd proxy's resources/read.
* Returns resolved content string or null on failure.
*/
async function resolveLink(linkTarget: string, client: ApiClient): Promise<string | null> {
try {
// Parse link: project/server:uri
const slashIdx = linkTarget.indexOf('/');
if (slashIdx < 1) return null;
const project = linkTarget.slice(0, slashIdx);
const rest = linkTarget.slice(slashIdx + 1);
const colonIdx = rest.indexOf(':');
if (colonIdx < 1) return null;
const serverName = rest.slice(0, colonIdx);
const uri = rest.slice(colonIdx + 1);
// Resolve server name → ID
const servers = await client.get<Array<{ id: string; name: string }>>(
`/api/v1/projects/${encodeURIComponent(project)}/servers`,
);
const target = servers.find((s) => s.name === serverName);
if (!target) return null;
// Call resources/read via proxy
const proxyResponse = await client.post<{
result?: { contents?: Array<{ text?: string }> };
error?: { code: number; message: string };
}>('/api/v1/mcp/proxy', {
serverId: target.id,
method: 'resources/read',
params: { uri },
});
if (proxyResponse.error) return null;
const contents = proxyResponse.result?.contents;
if (!contents || contents.length === 0) return null;
return contents.map((c) => c.text ?? '').join('\n');
} catch {
return null; // Silently fall back to stored content
}
}
function formatGenericDetail(obj: Record<string, unknown>): string {
const lines: string[] = [];
for (const [key, value] of Object.entries(obj)) {
@@ -563,10 +652,15 @@ export function createDescribeCommand(deps: DescribeCommandDeps): Command {
}
}
} else {
// Prompts/promptrequests: let fetchResource handle scoping (it respects --project)
if (resource === 'prompts' || resource === 'promptrequests') {
id = idOrName;
} else {
try {
id = await resolveNameOrId(deps.client, resource, idOrName);
} catch {
id = idOrName;
}
}
}
@@ -630,6 +724,9 @@ export function createDescribeCommand(deps: DescribeCommandDeps): Command {
case 'rbac':
deps.log(formatRbacDetail(item));
break;
case 'prompts':
deps.log(await formatPromptDetail(item, deps.client));
break;
default:
deps.log(formatGenericDetail(item));
}


@@ -1,12 +1,13 @@
import { Command } from 'commander';
import { formatTable } from '../formatters/table.js';
import { formatJson, formatYamlMultiDoc } from '../formatters/output.js';
import type { Column } from '../formatters/table.js';
import { resolveResource, stripInternalFields } from './shared.js';
export interface GetCommandDeps {
fetchResource: (resource: string, id?: string, opts?: { project?: string; all?: boolean }) => Promise<unknown[]>;
log: (...args: string[]) => void;
getProject?: () => string | undefined;
}
interface ServerRow {
@@ -179,6 +180,16 @@ const instanceColumns: Column<InstanceRow>[] = [
{ header: 'ID', key: 'id' },
];
interface ServerAttachmentRow {
project: string;
server: string;
}
const serverAttachmentColumns: Column<ServerAttachmentRow>[] = [
{ header: 'SERVER', key: 'server', width: 25 },
{ header: 'PROJECT', key: 'project', width: 25 },
];
function getColumnsForResource(resource: string): Column<Record<string, unknown>>[] {
switch (resource) {
case 'servers':
@@ -201,6 +212,8 @@ function getColumnsForResource(resource: string): Column<Record<string, unknown>
return promptColumns as unknown as Column<Record<string, unknown>>[];
case 'promptrequests':
return promptRequestColumns as unknown as Column<Record<string, unknown>>[];
case 'serverattachments':
return serverAttachmentColumns as unknown as Column<Record<string, unknown>>[];
default:
return [
{ header: 'ID', key: 'id' as keyof Record<string, unknown> },
@@ -209,38 +222,61 @@ function getColumnsForResource(resource: string): Column<Record<string, unknown>
}
}
/** Map plural resource name → singular kind for YAML documents */
const RESOURCE_KIND: Record<string, string> = {
servers: 'server',
projects: 'project',
secrets: 'secret',
templates: 'template',
instances: 'instance',
users: 'user',
groups: 'group',
rbac: 'rbac',
prompts: 'prompt',
promptrequests: 'promptrequest',
serverattachments: 'serverattachment',
};
/**
* Transform API response items into apply-compatible multi-doc format.
* Each item gets a `kind` field and internal fields stripped.
*/
function toApplyDocs(resource: string, items: unknown[]): Array<{ kind: string } & Record<string, unknown>> {
const kind = RESOURCE_KIND[resource] ?? resource;
return items.map((item) => {
const cleaned = stripInternalFields(item as Record<string, unknown>);
return { kind, ...cleaned };
});
} }
export function createGetCommand(deps: GetCommandDeps): Command {
return new Command('get')
.description('List resources (servers, projects, instances, all)')
.argument('<resource>', 'resource type (servers, projects, instances, all)')
.argument('[id]', 'specific resource ID or name')
.option('-o, --output <format>', 'output format (table, json, yaml)', 'table')
.option('--project <name>', 'Filter by project')
.option('-A, --all', 'Show all (including project-scoped) resources')
.action(async (resourceArg: string, id: string | undefined, opts: { output: string; project?: string; all?: true }) => {
const resource = resolveResource(resourceArg);
// Merge parent --project with local --project
const project = opts.project ?? deps.getProject?.();
// Handle `get all --project X` composite export
if (resource === 'all') {
await handleGetAll(deps, { ...opts, project });
return;
}
const fetchOpts: { project?: string; all?: boolean } = {};
if (project) fetchOpts.project = project;
if (opts.all) fetchOpts.all = true;
const items = await deps.fetchResource(resource, id, Object.keys(fetchOpts).length > 0 ? fetchOpts : undefined);
if (opts.output === 'json') {
deps.log(formatJson(toApplyDocs(resource, items)));
} else if (opts.output === 'yaml') {
deps.log(formatYamlMultiDoc(toApplyDocs(resource, items)));
} else {
if (items.length === 0) {
deps.log(`No ${resource} found.`);
@@ -251,3 +287,59 @@ export function createGetCommand(deps: GetCommandDeps): Command {
}
});
}
async function handleGetAll(
deps: GetCommandDeps,
opts: { output: string; project?: string },
): Promise<void> {
if (!opts.project) {
throw new Error('--project is required with "get all". Usage: mcpctl get all --project <name>');
}
const docs: Array<{ kind: string } & Record<string, unknown>> = [];
// 1. Fetch the project
const projects = await deps.fetchResource('projects', opts.project);
if (projects.length === 0) {
deps.log(`Project '${opts.project}' not found.`);
return;
}
// 2. Add the project itself
for (const p of projects) {
docs.push({ kind: 'project', ...stripInternalFields(p as Record<string, unknown>) });
}
// 3. Extract serverattachments from project's server list
const project = projects[0] as ProjectRow;
let attachmentCount = 0;
if (project.servers && project.servers.length > 0) {
for (const ps of project.servers) {
docs.push({
kind: 'serverattachment',
server: typeof ps === 'string' ? ps : ps.server.name,
project: project.name,
});
attachmentCount++;
}
}
// 4. Fetch prompts owned by this project (exclude global prompts)
const prompts = await deps.fetchResource('prompts', undefined, { project: opts.project });
const projectPrompts = prompts.filter((p) => (p as { projectId?: string }).projectId != null);
for (const p of projectPrompts) {
docs.push({ kind: 'prompt', ...stripInternalFields(p as Record<string, unknown>) });
}
if (opts.output === 'json') {
deps.log(formatJson(docs));
} else if (opts.output === 'yaml') {
deps.log(formatYamlMultiDoc(docs));
} else {
// Table output: show summary
deps.log(`Project: ${opts.project}`);
deps.log(` Server Attachments: ${attachmentCount}`);
deps.log(` Prompts: ${projectPrompts.length}`);
deps.log(`\nUse -o yaml or -o json for apply-compatible output.`);
}
}
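
For context (not part of the diff): given a hypothetical project named `home` with two attached servers and three project-scoped prompts, the table branch of `handleGetAll` above would print a summary like:

```console
$ mcpctl get all --project home
Project: home
  Server Attachments: 2
  Prompts: 3

Use -o yaml or -o json for apply-compatible output.
```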

View File

@@ -0,0 +1,58 @@
import { Command } from 'commander';
import type { ApiClient } from '../api-client.js';
import { resolveResource, resolveNameOrId } from './shared.js';
export interface PatchCommandDeps {
client: ApiClient;
log: (...args: string[]) => void;
}
/**
* Parse "key=value" pairs into a partial update object.
* Supports: key=value, key=null (sets null), key=123 (number if parseable).
*/
function parsePatches(pairs: string[]): Record<string, unknown> {
const result: Record<string, unknown> = {};
for (const pair of pairs) {
const eqIdx = pair.indexOf('=');
if (eqIdx === -1) {
throw new Error(`Invalid patch format '${pair}'. Expected key=value`);
}
const key = pair.slice(0, eqIdx);
const raw = pair.slice(eqIdx + 1);
if (raw === 'null') {
result[key] = null;
} else if (raw === 'true') {
result[key] = true;
} else if (raw === 'false') {
result[key] = false;
} else if (/^\d+$/.test(raw)) {
result[key] = parseInt(raw, 10);
} else {
result[key] = raw;
}
}
return result;
}
export function createPatchCommand(deps: PatchCommandDeps): Command {
const { client, log } = deps;
return new Command('patch')
.description('Patch a resource field (e.g. mcpctl patch project myproj llmProvider=none)')
.argument('<resource>', 'resource type (server, project, secret, ...)')
.argument('<name>', 'resource name or ID')
.argument('<patches...>', 'key=value pairs to patch')
.action(async (resourceArg: string, nameOrId: string, patches: string[]) => {
const resource = resolveResource(resourceArg);
const id = await resolveNameOrId(client, resource, nameOrId);
const body = parsePatches(patches);
await client.put(`/api/v1/${resource}/${id}`, body);
const fields = Object.entries(body)
.map(([k, v]) => `${k}=${v === null ? 'null' : String(v)}`)
.join(', ');
log(`patched ${resource.replace(/s$/, '')} '${nameOrId}': ${fields}`);
});
}
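
For reference, the `key=value` coercion rules implemented by `parsePatches` above can be exercised standalone. This sketch duplicates the parser from the diff verbatim; the patch values are illustrative:

```typescript
// Parse "key=value" pairs into a partial update object.
// Coercion rules: 'null' → null, 'true'/'false' → boolean,
// all-digit strings → number, everything else → string.
function parsePatches(pairs: string[]): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const pair of pairs) {
    const eqIdx = pair.indexOf('=');
    if (eqIdx === -1) {
      throw new Error(`Invalid patch format '${pair}'. Expected key=value`);
    }
    const key = pair.slice(0, eqIdx);
    const raw = pair.slice(eqIdx + 1);
    if (raw === 'null') {
      result[key] = null;
    } else if (raw === 'true') {
      result[key] = true;
    } else if (raw === 'false') {
      result[key] = false;
    } else if (/^\d+$/.test(raw)) {
      result[key] = parseInt(raw, 10);
    } else {
      result[key] = raw;
    }
  }
  return result;
}

console.log(JSON.stringify(parsePatches(['llmProvider=none', 'replicas=3', 'gated=true', 'llmModel=null'])));
// → {"llmProvider":"none","replicas":3,"gated":true,"llmModel":null}
```

Note that the `/^\d+$/` check only coerces non-negative integers; negative numbers and floats pass through as strings.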

View File

@@ -21,6 +21,10 @@ export const RESOURCE_ALIASES: Record<string, string> = {
  promptrequest: 'promptrequests',
  promptrequests: 'promptrequests',
  pr: 'promptrequests',
+ serverattachment: 'serverattachments',
+ serverattachments: 'serverattachments',
+ sa: 'serverattachments',
+ all: 'all',
};

export function resolveResource(name: string): string {
@@ -61,21 +65,53 @@ export async function resolveNameOrId(
/** Strip internal/read-only fields from an API response to make it apply-compatible. */
export function stripInternalFields(obj: Record<string, unknown>): Record<string, unknown> {
  const result = { ...obj };
- for (const key of ['id', 'createdAt', 'updatedAt', 'version', 'ownerId', 'summary', 'chapters']) {
+ for (const key of ['id', 'createdAt', 'updatedAt', 'version', 'ownerId', 'summary', 'chapters', 'linkStatus', 'serverId']) {
    delete result[key];
  }
- // Strip relationship joins that aren't part of the resource spec (like k8s namespaces don't list deployments)
- if ('servers' in result && Array.isArray(result.servers)) {
-   delete result.servers;
- }
+ // Rename linkTarget → link for cleaner YAML
+ if ('linkTarget' in result) {
+   result.link = result.linkTarget;
+   delete result.linkTarget;
+   // Linked prompts: strip content (it's fetched from the link source, not static)
+   if (result.link) {
+     delete result.content;
+   }
+ }
+ // Convert project servers join array → string[] of server names
+ if ('servers' in result && Array.isArray(result.servers)) {
+   const entries = result.servers as Array<{ server?: { name: string } }>;
+   if (entries.length > 0 && entries[0]?.server) {
+     result.servers = entries.map((e) => e.server!.name);
+   } else if (entries.length === 0) {
+     result.servers = [];
+   } else {
+     delete result.servers;
+   }
+ }
+ // Convert prompt projectId CUID → project name string
+ if ('project' in result && typeof result.project === 'object' && result.project !== null) {
+   const proj = result.project as { name: string };
+   result.project = proj.name;
+   delete result.projectId;
+ }
+ // Strip remaining relationship objects
  if ('owner' in result && typeof result.owner === 'object') {
    delete result.owner;
  }
  if ('members' in result && Array.isArray(result.members)) {
    delete result.members;
  }
- if ('project' in result && typeof result.project === 'object' && result.project !== null) {
-   delete result.project;
- }
+ // Strip null values last (null = unset, omitting from YAML is cleaner and equivalent)
+ for (const key of Object.keys(result)) {
+   if (result[key] === null) {
+     delete result[key];
+   }
+ }
  return result;
}

View File

@@ -15,10 +15,14 @@ export function reorderKeys(obj: unknown): unknown {
  if (Array.isArray(obj)) return obj.map(reorderKeys);
  if (obj !== null && typeof obj === 'object') {
    const rec = obj as Record<string, unknown>;
-   const lastKeys = ['content', 'prompt'];
+   const firstKeys = ['kind'];
+   const lastKeys = ['link', 'content', 'prompt'];
    const ordered: Record<string, unknown> = {};
+   for (const key of firstKeys) {
+     if (key in rec) ordered[key] = rec[key];
+   }
    for (const key of Object.keys(rec)) {
-     if (!lastKeys.includes(key)) ordered[key] = reorderKeys(rec[key]);
+     if (!firstKeys.includes(key) && !lastKeys.includes(key)) ordered[key] = reorderKeys(rec[key]);
    }
    for (const key of lastKeys) {
      if (key in rec) ordered[key] = rec[key];
@@ -32,3 +36,16 @@ export function formatYaml(data: unknown): string {
  const reordered = reorderKeys(data);
  return yaml.dump(reordered, { lineWidth: 120, noRefs: true }).trimEnd();
}
/**
* Format multiple resources as Kubernetes-style multi-document YAML.
* Each item gets its own `---` separated document with a `kind` field.
*/
export function formatYamlMultiDoc(items: Array<{ kind: string } & Record<string, unknown>>): string {
return items
.map((item) => {
const reordered = reorderKeys(item);
return '---\n' + yaml.dump(reordered, { lineWidth: 120, noRefs: true }).trimEnd();
})
.join('\n');
}
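
For illustration (not part of the diff): the multi-document output produced by `formatYamlMultiDoc` has the following shape, with `kind` ordered first by `reorderKeys`. Resource names and fields here are hypothetical:

```yaml
---
kind: server
name: slack
transport: STDIO
---
kind: prompt
name: deploy-checklist
project: home
```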

View File

@@ -29,7 +29,7 @@ export function createProgram(): Command {
    .enablePositionalOptions()
    .option('--daemon-url <url>', 'mcplocal daemon URL')
    .option('--direct', 'bypass mcplocal and connect directly to mcpd')
-   .option('--project <name>', 'Target project for project commands');
+   .option('-p, --project <name>', 'Target project for project commands');

  program.addCommand(createStatusCommand());
  program.addCommand(createLoginCommand());
@@ -59,17 +59,26 @@ export function createProgram(): Command {
  const fetchResource = async (resource: string, nameOrId?: string, opts?: { project?: string; all?: boolean }): Promise<unknown[]> => {
    const projectName = opts?.project ?? program.opts().project as string | undefined;
-   // --project scoping for servers and instances
-   if (projectName && !nameOrId && (resource === 'servers' || resource === 'instances')) {
-     const projectId = await resolveNameOrId(client, 'projects', projectName);
-     if (resource === 'servers') {
-       return client.get<unknown[]>(`/api/v1/projects/${projectId}/servers`);
-     }
-     // instances: fetch project servers, then filter instances by serverId
-     const projectServers = await client.get<Array<{ id: string }>>(`/api/v1/projects/${projectId}/servers`);
-     const serverIds = new Set(projectServers.map((s) => s.id));
-     const allInstances = await client.get<Array<{ serverId: string }>>(`/api/v1/instances`);
-     return allInstances.filter((inst) => serverIds.has(inst.serverId));
+   // Virtual resource: serverattachments (composed from project data)
+   if (resource === 'serverattachments') {
+     type ProjectWithServers = { name: string; id: string; servers?: Array<{ server: { name: string } }> };
+     let projects: ProjectWithServers[];
+     if (projectName) {
+       const projectId = await resolveNameOrId(client, 'projects', projectName);
+       const project = await client.get<ProjectWithServers>(`/api/v1/projects/${projectId}`);
+       projects = [project];
+     } else {
+       projects = await client.get<ProjectWithServers[]>('/api/v1/projects');
+     }
+     const attachments: Array<{ project: string; server: string }> = [];
+     for (const p of projects) {
+       if (p.servers) {
+         for (const ps of p.servers) {
+           attachments.push({ server: ps.server.name, project: p.name });
+         }
+       }
+     }
+     return attachments;
    }

    // --project scoping for prompts and promptrequests
@@ -101,6 +110,21 @@ export function createProgram(): Command {
  };

  const fetchSingleResource = async (resource: string, nameOrId: string): Promise<unknown> => {
+   const projectName = program.opts().project as string | undefined;
+   // Prompts: resolve within project scope (or global-only without --project)
+   if (resource === 'prompts' || resource === 'promptrequests') {
+     const scope = projectName
+       ? `?project=${encodeURIComponent(projectName)}`
+       : '?scope=global';
+     const items = await client.get<Array<Record<string, unknown>>>(`/api/v1/${resource}${scope}`);
+     const match = items.find((item) => item.name === nameOrId);
+     if (!match) {
+       throw new Error(`${resource.replace(/s$/, '')} '${nameOrId}' not found${projectName ? ` in project '${projectName}'` : ' (global scope). Use --project to specify a project'}`);
+     }
+     return client.get(`/api/v1/${resource}/${match.id as string}`);
+   }
    let id: string;
    try {
      id = await resolveNameOrId(client, resource, nameOrId);
@@ -113,6 +137,7 @@ export function createProgram(): Command {
  program.addCommand(createGetCommand({
    fetchResource,
    log: (...args) => console.log(...args),
+   getProject: () => program.opts().project as string | undefined,
  }));

  program.addCommand(createDescribeCommand({

View File

@@ -9,7 +9,7 @@ describe('createProgram', () => {
  it('has version flag', () => {
    const program = createProgram();
-   expect(program.version()).toBe('0.1.0');
+   expect(program.version()).toBe('0.0.1');
  });

  it('has config subcommand', () => {
View File

@@ -64,7 +64,7 @@ describe('config claude', () => {
    });
  });

- it('merges with existing .mcp.json', async () => {
+ it('always merges with existing .mcp.json', async () => {
    const outPath = join(tmpDir, '.mcp.json');
    writeFileSync(outPath, JSON.stringify({
      mcpServers: { 'existing--server': { command: 'echo', args: [] } },
@@ -74,7 +74,7 @@
      { configDeps: { configDir: tmpDir }, log },
      { client, credentialsDeps: { configDir: tmpDir }, log },
    );
-   await cmd.parseAsync(['claude', '--project', 'proj-1', '-o', outPath, '--merge'], { from: 'user' });
+   await cmd.parseAsync(['claude', '--project', 'proj-1', '-o', outPath], { from: 'user' });

    const written = JSON.parse(readFileSync(outPath, 'utf-8'));
    expect(written.mcpServers['existing--server']).toBeDefined();
@@ -85,6 +85,36 @@ describe('config claude', () => {
    expect(output.join('\n')).toContain('2 server(s)');
  });
it('adds inspect MCP server with --inspect', async () => {
const outPath = join(tmpDir, '.mcp.json');
const cmd = createConfigCommand(
{ configDeps: { configDir: tmpDir }, log },
{ client, credentialsDeps: { configDir: tmpDir }, log },
);
await cmd.parseAsync(['claude', '--inspect', '-o', outPath], { from: 'user' });
const written = JSON.parse(readFileSync(outPath, 'utf-8'));
expect(written.mcpServers['mcpctl-inspect']).toEqual({
command: 'mcpctl',
args: ['console', '--inspect', '--stdin-mcp'],
});
expect(output.join('\n')).toContain('1 server(s)');
});
it('adds both project and inspect with --project --inspect', async () => {
const outPath = join(tmpDir, '.mcp.json');
const cmd = createConfigCommand(
{ configDeps: { configDir: tmpDir }, log },
{ client, credentialsDeps: { configDir: tmpDir }, log },
);
await cmd.parseAsync(['claude', '--project', 'ha', '--inspect', '-o', outPath], { from: 'user' });
const written = JSON.parse(readFileSync(outPath, 'utf-8'));
expect(written.mcpServers['ha']).toBeDefined();
expect(written.mcpServers['mcpctl-inspect']).toBeDefined();
expect(output.join('\n')).toContain('2 server(s)');
});
  it('backward compat: claude-generate still works', async () => {
    const outPath = join(tmpDir, '.mcp.json');
    const cmd = createConfigCommand(

View File

@@ -41,27 +41,28 @@ describe('get command', () => {
    expect(deps.fetchResource).toHaveBeenCalledWith('servers', 'srv-1', undefined);
  });

- it('outputs apply-compatible JSON format', async () => {
+ it('outputs apply-compatible JSON format (multi-doc)', async () => {
    const deps = makeDeps([{ id: 'srv-1', name: 'slack', createdAt: '2025-01-01', updatedAt: '2025-01-01', version: 1 }]);
    const cmd = createGetCommand(deps);
    await cmd.parseAsync(['node', 'test', 'servers', '-o', 'json']);
    const parsed = JSON.parse(deps.output[0] ?? '');
-   // Wrapped in resource key, internal fields stripped
-   expect(parsed).toHaveProperty('servers');
-   expect(parsed.servers[0].name).toBe('slack');
-   expect(parsed.servers[0]).not.toHaveProperty('id');
-   expect(parsed.servers[0]).not.toHaveProperty('createdAt');
-   expect(parsed.servers[0]).not.toHaveProperty('updatedAt');
-   expect(parsed.servers[0]).not.toHaveProperty('version');
+   // Array of documents with kind field, internal fields stripped
+   expect(Array.isArray(parsed)).toBe(true);
+   expect(parsed[0].kind).toBe('server');
+   expect(parsed[0].name).toBe('slack');
+   expect(parsed[0]).not.toHaveProperty('id');
+   expect(parsed[0]).not.toHaveProperty('createdAt');
+   expect(parsed[0]).not.toHaveProperty('updatedAt');
+   expect(parsed[0]).not.toHaveProperty('version');
  });

- it('outputs apply-compatible YAML format', async () => {
+ it('outputs apply-compatible YAML format (multi-doc)', async () => {
    const deps = makeDeps([{ id: 'srv-1', name: 'slack', createdAt: '2025-01-01' }]);
    const cmd = createGetCommand(deps);
    await cmd.parseAsync(['node', 'test', 'servers', '-o', 'yaml']);
    const text = deps.output[0];
-   expect(text).toContain('servers:');
+   expect(text).toContain('kind: server');
    expect(text).toContain('name: slack');
    expect(text).not.toContain('id:');
    expect(text).not.toContain('createdAt:');

View File

@@ -76,7 +76,7 @@ describe('status command', () => {
    const cmd = createStatusCommand(baseDeps());
    await cmd.parseAsync(['-o', 'json'], { from: 'user' });
    const parsed = JSON.parse(output[0]) as Record<string, unknown>;
-   expect(parsed['version']).toBe('0.1.0');
+   expect(parsed['version']).toBe('0.0.1');
    expect(parsed['mcplocalReachable']).toBe(true);
    expect(parsed['mcpdReachable']).toBe(true);
  });

View File

@@ -1,12 +1,22 @@
import { describe, it, expect } from 'vitest';
- import { readFileSync } from 'node:fs';
+ import { readFileSync, existsSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
+ import { execSync } from 'node:child_process';

const root = join(dirname(fileURLToPath(import.meta.url)), '..', '..', '..');
const fishFile = readFileSync(join(root, 'completions', 'mcpctl.fish'), 'utf-8');
const bashFile = readFileSync(join(root, 'completions', 'mcpctl.bash'), 'utf-8');
describe('freshness', () => {
it('committed completions match generator output', () => {
const generatorPath = join(root, 'scripts', 'generate-completions.ts');
expect(existsSync(generatorPath), 'generator script must exist').toBe(true);
// Run the generator in --check mode; exit 0 means files are up to date
execSync(`npx tsx ${generatorPath} --check`, { cwd: root, stdio: 'pipe' });
});
});
describe('fish completions', () => {
  it('erases stale completions at the top', () => {
    const lines = fishFile.split('\n');
@@ -52,8 +62,8 @@ describe('fish completions', () => {
    }
  });

- it('defines --project option', () => {
-   expect(fishFile).toContain("complete -c mcpctl -l project");
+ it('defines --project option with -p shorthand', () => {
+   expect(fishFile).toContain("-s p -l project");
  });

  it('attach-server command only shows with --project', () => {
@@ -139,8 +149,11 @@ describe('bash completions', () => {
  it('fetches resource names dynamically after resource type', () => {
    expect(bashFile).toContain('_mcpctl_resource_names');
-   // get/describe/delete should use resource_names when resource_type is set
-   expect(bashFile).toMatch(/get\|describe\|delete\)[\s\S]*?_mcpctl_resource_names/);
+   // get, describe, and delete should each use resource_names when resource_type is set
+   for (const cmd of ['get', 'describe', 'delete']) {
+     const block = bashFile.match(new RegExp(`${cmd}\\)[\\s\\S]*?return ;;`))?.[0] ?? '';
+     expect(block, `${cmd} case must use _mcpctl_resource_names`).toContain('_mcpctl_resource_names');
+   }
  });

  it('attach-server filters out already-attached servers and guards against repeat', () => {

View File

@@ -1,6 +1,6 @@
{
  "name": "@mcpctl/db",
- "version": "0.1.0",
+ "version": "0.0.1",
  "private": true,
  "type": "module",
  "main": "./dist/index.js",

View File

@@ -0,0 +1,20 @@
/**
* Vitest globalSetup: push schema once before all db tests.
* Runs in the main vitest process, outside test workers.
*/
import { execSync } from 'node:child_process';
const TEST_DATABASE_URL = process.env['DATABASE_URL'] ??
'postgresql://mcpctl:mcpctl_test@localhost:5433/mcpctl_test';
export function setup(): void {
execSync('npx prisma db push --force-reset --skip-generate', {
cwd: new URL('..', import.meta.url).pathname,
env: {
...process.env,
DATABASE_URL: TEST_DATABASE_URL,
PRISMA_USER_CONSENT_FOR_DANGEROUS_AI_ACTION: 'yes',
},
stdio: 'pipe',
});
}

View File

@@ -1,11 +1,9 @@
import { PrismaClient } from '@prisma/client';
- import { execSync } from 'node:child_process';

const TEST_DATABASE_URL = process.env['DATABASE_URL'] ??
  'postgresql://mcpctl:mcpctl_test@localhost:5433/mcpctl_test';

let prisma: PrismaClient | undefined;
- let schemaReady = false;

export function getTestClient(): PrismaClient {
  if (!prisma) {
@@ -16,26 +14,9 @@ export function getTestClient(): PrismaClient {
  return prisma;
}

+ /** Return a connected test client. Schema is pushed by globalSetup. */
export async function setupTestDb(): Promise<PrismaClient> {
- const client = getTestClient();
- // Only push schema once per process (multiple test files share the worker)
- if (!schemaReady) {
-   execSync('npx prisma db push --force-reset --skip-generate', {
-     cwd: new URL('..', import.meta.url).pathname,
-     env: {
-       ...process.env,
-       DATABASE_URL: TEST_DATABASE_URL,
-       // Consent required when Prisma detects AI agent context.
-       // This targets the ephemeral test database (tmpfs-backed, port 5433).
-       PRISMA_USER_CONSENT_FOR_DANGEROUS_AI_ACTION: 'yes',
-     },
-     stdio: 'pipe',
-   });
-   schemaReady = true;
- }
- return client;
+ return getTestClient();
}
export async function cleanupTestDb(): Promise<void> { export async function cleanupTestDb(): Promise<void> {
@@ -49,8 +30,9 @@ export async function clearAllTables(client: PrismaClient): Promise<void> {
  // Delete in order respecting foreign keys
  await client.auditLog.deleteMany();
  await client.mcpInstance.deleteMany();
+ await client.promptRequest.deleteMany();
+ await client.prompt.deleteMany();
  await client.projectServer.deleteMany();
+ await client.projectMember.deleteMany();
  await client.secret.deleteMany();
  await client.session.deleteMany();
  await client.project.deleteMany();

View File

@@ -1,6 +1,12 @@
import { describe, it, expect, beforeAll, afterAll, beforeEach } from 'vitest';
import type { PrismaClient } from '@prisma/client';
import { setupTestDb, cleanupTestDb, clearAllTables, getTestClient } from './helpers.js';
+ import { seedTemplates } from '../src/seed/index.js';
+ import type { SeedTemplate } from '../src/seed/index.js';

+ // Wrap all tests in a single describe to scope lifecycle hooks
+ // and prevent leakage when running in the same worker as other test files.
+ describe('db models', () => {

let prisma: PrismaClient;
@@ -496,45 +502,6 @@ describe('ProjectServer', () => {
  });
});

- // ── ProjectMember model ──
- describe('ProjectMember', () => {
-   it('links project to user with role', async () => {
-     const user = await createUser();
-     const project = await createProject({ ownerId: user.id });
-     const pm = await prisma.projectMember.create({
-       data: { projectId: project.id, userId: user.id, role: 'admin' },
-     });
-     expect(pm.role).toBe('admin');
-   });
-
-   it('defaults role to member', async () => {
-     const user = await createUser();
-     const project = await createProject({ ownerId: user.id });
-     const pm = await prisma.projectMember.create({
-       data: { projectId: project.id, userId: user.id },
-     });
-     expect(pm.role).toBe('member');
-   });
-
-   it('enforces unique project-user pair', async () => {
-     const user = await createUser();
-     const project = await createProject({ ownerId: user.id });
-     await prisma.projectMember.create({ data: { projectId: project.id, userId: user.id } });
-     await expect(
-       prisma.projectMember.create({ data: { projectId: project.id, userId: user.id } }),
-     ).rejects.toThrow();
-   });
-
-   it('cascades delete when project is deleted', async () => {
-     const user = await createUser();
-     const project = await createProject({ ownerId: user.id });
-     await prisma.projectMember.create({ data: { projectId: project.id, userId: user.id } });
-     await prisma.project.delete({ where: { id: project.id } });
-     const members = await prisma.projectMember.findMany({ where: { projectId: project.id } });
-     expect(members).toHaveLength(0);
-   });
- });

// ── Project new fields ──
@@ -566,3 +533,74 @@ describe('Project new fields', () => {
    expect(project.llmModel).toBeNull();
  });
});
// ── seedTemplates ──
const testTemplates: SeedTemplate[] = [
{
name: 'github',
version: '1.0.0',
description: 'GitHub MCP server',
packageName: '@anthropic/github-mcp',
transport: 'STDIO',
env: [{ name: 'GITHUB_TOKEN', description: 'Personal access token', required: true }],
},
{
name: 'slack',
version: '1.0.0',
description: 'Slack MCP server',
packageName: '@anthropic/slack-mcp',
transport: 'STDIO',
env: [],
},
];
describe('seedTemplates', () => {
it('seeds templates', async () => {
const count = await seedTemplates(prisma, testTemplates);
expect(count).toBe(2);
const templates = await prisma.mcpTemplate.findMany({ orderBy: { name: 'asc' } });
expect(templates).toHaveLength(2);
expect(templates.map((t) => t.name)).toEqual(['github', 'slack']);
});
it('is idempotent (upsert)', async () => {
await seedTemplates(prisma, testTemplates);
const count = await seedTemplates(prisma, testTemplates);
expect(count).toBe(2);
const templates = await prisma.mcpTemplate.findMany();
expect(templates).toHaveLength(2);
});
it('seeds env correctly', async () => {
await seedTemplates(prisma, testTemplates);
const github = await prisma.mcpTemplate.findUnique({ where: { name: 'github' } });
const env = github!.env as Array<{ name: string; description?: string; required?: boolean }>;
expect(env).toHaveLength(1);
expect(env[0].name).toBe('GITHUB_TOKEN');
expect(env[0].required).toBe(true);
});
it('accepts custom template list', async () => {
const custom: SeedTemplate[] = [
{
name: 'custom-template',
version: '2.0.0',
description: 'Custom test template',
packageName: '@test/custom',
transport: 'STDIO',
env: [],
},
];
const count = await seedTemplates(prisma, custom);
expect(count).toBe(1);
const templates = await prisma.mcpTemplate.findMany();
expect(templates).toHaveLength(1);
expect(templates[0].name).toBe('custom-template');
});
});
}); // close 'db models' wrapper

View File

@@ -1,86 +0,0 @@
import { describe, it, expect, beforeAll, afterAll, beforeEach } from 'vitest';
import type { PrismaClient } from '@prisma/client';
import { setupTestDb, cleanupTestDb, clearAllTables } from './helpers.js';
import { seedTemplates } from '../src/seed/index.js';
import type { SeedTemplate } from '../src/seed/index.js';
let prisma: PrismaClient;
beforeAll(async () => {
prisma = await setupTestDb();
}, 30_000);
afterAll(async () => {
await cleanupTestDb();
});
beforeEach(async () => {
await clearAllTables(prisma);
});
const testTemplates: SeedTemplate[] = [
{
name: 'github',
version: '1.0.0',
description: 'GitHub MCP server',
packageName: '@anthropic/github-mcp',
transport: 'STDIO',
env: [{ name: 'GITHUB_TOKEN', description: 'Personal access token', required: true }],
},
{
name: 'slack',
version: '1.0.0',
description: 'Slack MCP server',
packageName: '@anthropic/slack-mcp',
transport: 'STDIO',
env: [],
},
];
describe('seedTemplates', () => {
it('seeds templates', async () => {
const count = await seedTemplates(prisma, testTemplates);
expect(count).toBe(2);
const templates = await prisma.mcpTemplate.findMany({ orderBy: { name: 'asc' } });
expect(templates).toHaveLength(2);
expect(templates.map((t) => t.name)).toEqual(['github', 'slack']);
});
it('is idempotent (upsert)', async () => {
await seedTemplates(prisma, testTemplates);
const count = await seedTemplates(prisma, testTemplates);
expect(count).toBe(2);
const templates = await prisma.mcpTemplate.findMany();
expect(templates).toHaveLength(2);
});
it('seeds env correctly', async () => {
await seedTemplates(prisma, testTemplates);
const github = await prisma.mcpTemplate.findUnique({ where: { name: 'github' } });
const env = github!.env as Array<{ name: string; description?: string; required?: boolean }>;
expect(env).toHaveLength(1);
expect(env[0].name).toBe('GITHUB_TOKEN');
expect(env[0].required).toBe(true);
});
it('accepts custom template list', async () => {
const custom: SeedTemplate[] = [
{
name: 'custom-template',
version: '2.0.0',
description: 'Custom test template',
packageName: '@test/custom',
transport: 'STDIO',
env: [],
},
];
const count = await seedTemplates(prisma, custom);
expect(count).toBe(1);
const templates = await prisma.mcpTemplate.findMany();
expect(templates).toHaveLength(1);
expect(templates[0].name).toBe('custom-template');
});
});

View File

@@ -1,10 +1,10 @@
- import { defineProject } from 'vitest/config';
+ import { defineConfig } from 'vitest/config';

- export default defineProject({
+ export default defineConfig({
  test: {
    name: 'db',
    include: ['tests/**/*.test.ts'],
-   // Test files share the same database — run sequentially
-   fileParallelism: false,
+   // Schema pushed once by globalSetup before any tests.
+   globalSetup: ['tests/global-setup.ts'],
  },
});

View File

@@ -1,6 +1,6 @@
{
  "name": "@mcpctl/mcpd",
- "version": "0.1.0",
+ "version": "0.0.1",
  "private": true,
  "type": "module",
  "main": "./dist/index.js",

View File

@@ -23,14 +23,11 @@ const SYSTEM_PROMPTS: SystemPromptDef[] = [
   {
     name: 'gate-instructions',
     priority: 10,
-    content: `This project uses a gated session. Before you can access tools, you must describe your current task by calling begin_session with 3-7 keywords.
+    content: `This project uses a gated session. Before you can access tools, you must start a session by calling begin_session.
 
-After calling begin_session, you will receive:
-1. Relevant project prompts matched to your keywords
-2. A list of other available prompts
-3. Full access to all project tools
+Call begin_session immediately using the arguments it requires (check its input schema). If it accepts a description, briefly describe the user's task. If it accepts tags, provide 3-7 keywords relevant to the user's request.
 
-Choose your keywords carefully — they determine which context you receive.`,
+The available tools and prompts are listed below. After calling begin_session, you will receive relevant project context and full access to all tools.`,
   },
   {
     name: 'gate-encouragement',
@@ -46,12 +43,19 @@ It is better to check and not need it than to proceed without important context.
 Review this context carefully — it may contain important guidelines, constraints, or patterns relevant to your work. If you need more context, use read_prompts({ tags: [...] }) at any time.`,
   },
+  {
+    name: 'gate-session-active',
+    priority: 10,
+    content: `The session is now active with full tool access. Proceed with the user's original request using the tools listed above.`,
+  },
   {
     name: 'session-greeting',
     priority: 10,
-    content: `Welcome to this project. To get started, call begin_session with keywords describing your task.
+    content: `Welcome to this project. To get started, call begin_session with the arguments it requires.
 
-Example: begin_session({ tags: ["zigbee", "pairing", "mqtt"] })
+Examples:
+  begin_session({ tags: ["zigbee", "pairing", "mqtt"] })
+  begin_session({ description: "I want to pair a new Zigbee device" })
 
 This will load relevant project context, policies, and guidelines tailored to your work.`,
   },

View File

@@ -35,7 +35,10 @@ export class PromptRepository implements IPromptRepository {
   }
 
   async findById(id: string): Promise<Prompt | null> {
-    return this.prisma.prompt.findUnique({ where: { id } });
+    return this.prisma.prompt.findUnique({
+      where: { id },
+      include: { project: { select: { name: true } } },
+    });
   }
 
   async findByNameAndProject(name: string, projectId: string | null): Promise<Prompt | null> {

View File

@@ -6,6 +6,7 @@ import type {
   ContainerInfo,
   ContainerLogs,
   ExecResult,
+  InteractiveExec,
 } from '../orchestrator.js';
 import { DEFAULT_MEMORY_LIMIT } from '../orchestrator.js';
@@ -239,4 +240,32 @@ export class DockerContainerManager implements McpOrchestrator {
       });
     });
   }
+
+  async execInteractive(containerId: string, cmd: string[]): Promise<InteractiveExec> {
+    const container = this.docker.getContainer(containerId);
+    const exec = await container.exec({
+      Cmd: cmd,
+      AttachStdin: true,
+      AttachStdout: true,
+      AttachStderr: true,
+    });
+    const stream = await exec.start({ hijack: true, stdin: true });
+
+    // Demux Docker's multiplexed stream into separate stdout/stderr
+    const stdout = new PassThrough();
+    const stderr = new PassThrough();
+    this.docker.modem.demuxStream(stream, stdout, stderr);
+
+    return {
+      stdout,
+      write(data: string) {
+        stream.write(data);
+      },
+      close() {
+        try { stream.end(); } catch { /* ignore */ }
+      },
+    };
+  }
 }
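The `demuxStream` call above relies on Docker's documented attach-stream framing: each frame is an 8-byte header (byte 0 = stream type, bytes 4–7 = big-endian payload length) followed by the payload. A minimal standalone demuxer sketch — not the dockerode implementation, just an illustration of the wire format it handles:

```typescript
// Sketch of Docker's attach-stream multiplexing: frame = 8-byte header + payload.
// Header byte 0 is the stream (1 = stdout, 2 = stderr); bytes 4-7 hold the
// payload length as a big-endian uint32.
function demux(buf: Buffer): { stdout: string; stderr: string } {
  let stdout = '';
  let stderr = '';
  let off = 0;
  while (off + 8 <= buf.length) {
    const stream = buf[off];
    const len = buf.readUInt32BE(off + 4);
    const payload = buf.subarray(off + 8, off + 8 + len).toString('utf-8');
    if (stream === 1) stdout += payload;
    else if (stream === 2) stderr += payload;
    off += 8 + len;
  }
  return { stdout, stderr };
}

// Build a frame the way the daemon would, for demonstration
function frame(stream: number, text: string): Buffer {
  const payload = Buffer.from(text, 'utf-8');
  const header = Buffer.alloc(8);
  header[0] = stream;
  header.writeUInt32BE(payload.length, 4);
  return Buffer.concat([header, payload]);
}

const wire = Buffer.concat([frame(1, 'hello '), frame(2, 'oops'), frame(1, 'world')]);
console.log(demux(wire)); // { stdout: 'hello world', stderr: 'oops' }
```

This framing is why raw bytes from `exec.start()` cannot be parsed as JSON-RPC directly — stdout and stderr must be separated first.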

View File

@@ -218,7 +218,7 @@ export class HealthProbeRunner {
       headers: { 'Content-Type': 'application/json', 'Accept': 'application/json, text/event-stream' },
       body: JSON.stringify({
         jsonrpc: '2.0', id: 1, method: 'initialize',
-        params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'mcpctl-health', version: '0.1.0' } },
+        params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'mcpctl-health', version: '0.0.1' } },
       }),
       signal: controller.signal,
     });
@@ -333,7 +333,7 @@ export class HealthProbeRunner {
       method: 'POST', headers: postHeaders,
       body: JSON.stringify({
         jsonrpc: '2.0', id: 1, method: 'initialize',
-        params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'mcpctl-health', version: '0.1.0' } },
+        params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'mcpctl-health', version: '0.0.1' } },
       }),
       signal: controller.signal,
     });
@@ -424,9 +424,16 @@ export class HealthProbeRunner {
     const start = Date.now();
     const packageName = server.packageName as string | null;
+    const command = server.command as string[] | null;
 
-    if (!packageName) {
-      return { healthy: false, latencyMs: 0, message: 'No package name for STDIO server' };
+    // Determine how to spawn the MCP server inside the container
+    let spawnCmd: string[];
+    if (packageName) {
+      spawnCmd = ['npx', '--prefer-offline', '-y', packageName];
+    } else if (command && command.length > 0) {
+      spawnCmd = command;
+    } else {
+      return { healthy: false, latencyMs: 0, message: 'No packageName or command for STDIO server' };
     }
 
     // Build JSON-RPC messages for the health probe
@@ -435,7 +442,7 @@ export class HealthProbeRunner {
       params: {
         protocolVersion: '2024-11-05',
         capabilities: {},
-        clientInfo: { name: 'mcpctl-health', version: '0.1.0' },
+        clientInfo: { name: 'mcpctl-health', version: '0.0.1' },
       },
     });
     const initializedMsg = JSON.stringify({
@@ -447,13 +454,15 @@ export class HealthProbeRunner {
     });
 
     // Use a Node.js inline script that:
-    // 1. Spawns the MCP server binary via npx
+    // 1. Spawns the MCP server binary
     // 2. Sends initialize + initialized + tool call via stdin
     // 3. Reads responses from stdout
     // 4. Exits with 0 if tool call succeeds, 1 if it fails
+    const spawnArgs = JSON.stringify(spawnCmd);
     const probeScript = `
 const { spawn } = require('child_process');
-const proc = spawn('npx', ['--prefer-offline', '-y', ${JSON.stringify(packageName)}], { stdio: ['pipe', 'pipe', 'pipe'] });
+const args = ${spawnArgs};
+const proc = spawn(args[0], args.slice(1), { stdio: ['pipe', 'pipe', 'pipe'] });
 let output = '';
 let responded = false;
 proc.stdout.on('data', d => {

View File

@@ -5,6 +5,7 @@ import { NotFoundError } from './mcp-server.service.js';
 import { InvalidStateError } from './instance.service.js';
 import { sendViaSse } from './transport/sse-client.js';
 import { sendViaStdio } from './transport/stdio-client.js';
+import { PersistentStdioClient } from './transport/persistent-stdio.js';
 
 export interface McpProxyRequest {
   serverId: string;
@@ -37,6 +38,8 @@ function parseStreamableResponse(body: string): McpProxyResponse {
 export class McpProxyService {
   /** Session IDs per server for streamable-http protocol */
   private sessions = new Map<string, string>();
+  /** Persistent STDIO connections keyed by containerId */
+  private stdioClients = new Map<string, PersistentStdioClient>();
 
   constructor(
     private readonly instanceRepo: IMcpInstanceRepository,
@@ -44,6 +47,23 @@ export class McpProxyService {
     private readonly orchestrator?: McpOrchestrator,
   ) {}
 
+  /** Clean up all persistent connections (call on shutdown). */
+  closeAll(): void {
+    for (const [, client] of this.stdioClients) {
+      client.close();
+    }
+    this.stdioClients.clear();
+  }
+
+  /** Remove persistent connection for a container (call when instance stops). */
+  removeClient(containerId: string): void {
+    const client = this.stdioClients.get(containerId);
+    if (client) {
+      client.close();
+      this.stdioClients.delete(containerId);
+    }
+  }
+
   async execute(request: McpProxyRequest): Promise<McpProxyResponse> {
     const server = await this.serverRepo.findById(request.serverId);
     if (!server) {
@@ -95,7 +115,7 @@ export class McpProxyService {
   ): Promise<McpProxyResponse> {
     const transport = server.transport as string;
 
-    // STDIO: use docker exec
+    // STDIO: use persistent connection (falls back to one-shot on error)
     if (transport === 'STDIO') {
       if (!this.orchestrator) {
         throw new InvalidStateError('Orchestrator required for STDIO transport');
@@ -104,10 +124,24 @@ export class McpProxyService {
         throw new InvalidStateError(`Instance '${instance.id}' has no container ID`);
       }
       const packageName = server.packageName as string | null;
-      if (!packageName) {
-        throw new InvalidStateError(`Server '${server.id}' has no package name for STDIO transport`);
+      const command = server.command as string[] | null;
+      if (!packageName && (!command || command.length === 0)) {
+        throw new InvalidStateError(`Server '${server.id}' has no packageName or command for STDIO transport`);
       }
-      return sendViaStdio(this.orchestrator, instance.containerId, packageName, method, params);
+
+      // Build the spawn command for persistent mode
+      const spawnCmd = command && command.length > 0
+        ? command
+        : ['npx', '--prefer-offline', '-y', packageName!];
+
+      // Try persistent connection first
+      try {
+        return await this.sendViaPersistentStdio(instance.containerId, spawnCmd, method, params);
+      } catch {
+        // Persistent failed — fall back to one-shot
+        this.removeClient(instance.containerId);
+        return sendViaStdio(this.orchestrator, instance.containerId, packageName, method, params, 120_000, command);
+      }
     }
 
     // SSE or STREAMABLE_HTTP: need a base URL
@@ -121,6 +155,23 @@ export class McpProxyService {
     return this.sendStreamableHttp(server.id, baseUrl, method, params);
   }
 
+  /**
+   * Send via a persistent STDIO connection (reused across calls).
+   */
+  private async sendViaPersistentStdio(
+    containerId: string,
+    command: string[],
+    method: string,
+    params?: Record<string, unknown>,
+  ): Promise<McpProxyResponse> {
+    let client = this.stdioClients.get(containerId);
+    if (!client) {
+      client = new PersistentStdioClient(this.orchestrator!, containerId, command);
+      this.stdioClients.set(containerId, client);
+    }
+    return client.send(method, params);
+  }
+
   /**
    * Resolve the base URL for an HTTP-based managed server.
    * Prefers container internal IP on Docker network, falls back to localhost:port.
@@ -218,7 +269,7 @@ export class McpProxyService {
       params: {
         protocolVersion: '2025-03-26',
         capabilities: {},
-        clientInfo: { name: 'mcpctl', version: '0.1.0' },
+        clientInfo: { name: 'mcpctl', version: '0.0.1' },
       },
     };

View File

@@ -68,10 +68,23 @@ export interface McpOrchestrator {
   /** Execute a command inside a running container with optional stdin */
   execInContainer(containerId: string, cmd: string[], opts?: { stdin?: string; timeoutMs?: number }): Promise<ExecResult>;
 
+  /** Start a long-running interactive exec session (bidirectional stdio stream). */
+  execInteractive?(containerId: string, cmd: string[]): Promise<InteractiveExec>;
+
   /** Check if the orchestrator runtime is available */
   ping(): Promise<boolean>;
 }
 
+/** A bidirectional stream to an interactive exec session. */
+export interface InteractiveExec {
+  /** Demuxed stdout stream (JSON-RPC responses come here). */
+  stdout: NodeJS.ReadableStream;
+  /** Write raw bytes to the process stdin. */
+  write(data: string): void;
+  /** Kill the exec process. */
+  close(): void;
+}
+
 /** Default resource limits */
 export const DEFAULT_MEMORY_LIMIT = 512 * 1024 * 1024; // 512 MB
 export const DEFAULT_NANO_CPUS = 500_000_000; // 0.5 CPU

View File

@@ -176,20 +176,20 @@ export class PromptService {
   async getVisiblePrompts(
     projectId?: string,
     sessionId?: string,
-  ): Promise<Array<{ name: string; content: string; type: 'prompt' | 'promptrequest' }>> {
-    const results: Array<{ name: string; content: string; type: 'prompt' | 'promptrequest' }> = [];
+  ): Promise<Array<{ name: string; content: string; priority: number; summary: string | null; chapters: string[] | null; linkTarget: string | null; type: 'prompt' | 'promptrequest' }>> {
+    const results: Array<{ name: string; content: string; priority: number; summary: string | null; chapters: string[] | null; linkTarget: string | null; type: 'prompt' | 'promptrequest' }> = [];
 
     // Approved prompts (project-scoped + global)
     const prompts = await this.promptRepo.findAll(projectId);
     for (const p of prompts) {
-      results.push({ name: p.name, content: p.content, type: 'prompt' });
+      results.push({ name: p.name, content: p.content, priority: p.priority, summary: p.summary, chapters: p.chapters as string[] | null, linkTarget: p.linkTarget, type: 'prompt' });
     }
 
     // Session's own pending requests
     if (sessionId) {
       const requests = await this.promptRequestRepo.findBySession(sessionId, projectId);
       for (const r of requests) {
-        results.push({ name: r.name, content: r.content, type: 'promptrequest' });
+        results.push({ name: r.name, content: r.content, priority: 5, summary: null, chapters: null, linkTarget: null, type: 'promptrequest' });
       }
     }

View File

@@ -0,0 +1,188 @@
import type { McpOrchestrator, InteractiveExec } from '../orchestrator.js';
import type { McpProxyResponse } from '../mcp-proxy-service.js';
/**
* Persistent STDIO connection to an MCP server running inside a Docker container.
*
* Instead of cold-starting a new process per call (docker exec one-shot), this keeps
* a long-running `docker exec -i <cmd>` session alive. The MCP init handshake runs
* once, then tool calls are multiplexed over the same stdin/stdout pipe.
*
* Falls back gracefully: if the process dies, the next call will reconnect.
*/
export class PersistentStdioClient {
private exec: InteractiveExec | null = null;
private buffer = '';
private nextId = 1;
private initialized = false;
private connecting: Promise<void> | null = null;
private pendingRequests = new Map<number, {
resolve: (res: McpProxyResponse) => void;
reject: (err: Error) => void;
timer: ReturnType<typeof setTimeout>;
}>();
constructor(
private readonly orchestrator: McpOrchestrator,
private readonly containerId: string,
private readonly command: string[],
private readonly timeoutMs = 120_000,
) {}
/**
* Send a JSON-RPC request and wait for the matching response.
*/
async send(method: string, params?: Record<string, unknown>): Promise<McpProxyResponse> {
await this.ensureReady();
const id = this.nextId++;
const request: Record<string, unknown> = { jsonrpc: '2.0', id, method };
if (params !== undefined) {
request.params = params;
}
return new Promise<McpProxyResponse>((resolve, reject) => {
const timer = setTimeout(() => {
this.pendingRequests.delete(id);
reject(new Error(`Request timed out after ${this.timeoutMs}ms`));
}, this.timeoutMs);
this.pendingRequests.set(id, { resolve, reject, timer });
this.write(request);
});
}
/** Shut down the persistent connection. */
close(): void {
if (this.exec) {
this.exec.close();
this.exec = null;
}
this.initialized = false;
this.connecting = null;
for (const [, pending] of this.pendingRequests) {
clearTimeout(pending.timer);
pending.reject(new Error('Connection closed'));
}
this.pendingRequests.clear();
}
get isConnected(): boolean {
return this.initialized && this.exec !== null;
}
// ── internals ──
private async ensureReady(): Promise<void> {
if (this.initialized && this.exec) return;
if (this.connecting) {
await this.connecting;
return;
}
this.connecting = this.connect();
try {
await this.connecting;
} finally {
this.connecting = null;
}
}
private async connect(): Promise<void> {
this.close();
if (!this.orchestrator.execInteractive) {
throw new Error('Orchestrator does not support interactive exec');
}
const exec = await this.orchestrator.execInteractive(this.containerId, this.command);
this.exec = exec;
this.buffer = '';
// Parse JSON-RPC responses line by line from stdout
exec.stdout.on('data', (chunk: Buffer) => {
this.buffer += chunk.toString('utf-8');
this.processBuffer();
});
exec.stdout.on('end', () => {
this.initialized = false;
this.exec = null;
for (const [, pending] of this.pendingRequests) {
clearTimeout(pending.timer);
pending.reject(new Error('STDIO process exited'));
}
this.pendingRequests.clear();
});
// Run MCP init handshake
const initId = this.nextId++;
const initPromise = new Promise<void>((resolve, reject) => {
const timer = setTimeout(() => {
this.pendingRequests.delete(initId);
reject(new Error('MCP init handshake timed out'));
}, 30_000);
this.pendingRequests.set(initId, {
resolve: () => {
clearTimeout(timer);
resolve();
},
reject: (err) => {
clearTimeout(timer);
reject(err);
},
timer,
});
});
this.write({
jsonrpc: '2.0',
id: initId,
method: 'initialize',
params: {
protocolVersion: '2024-11-05',
capabilities: {},
clientInfo: { name: 'mcpctl-proxy', version: '0.0.1' },
},
});
await initPromise;
// Send initialized notification (no response expected)
this.write({ jsonrpc: '2.0', method: 'notifications/initialized' });
// Small delay to let the server process the notification
await new Promise((r) => setTimeout(r, 100));
this.initialized = true;
}
private write(msg: Record<string, unknown>): void {
if (!this.exec) throw new Error('Not connected');
this.exec.write(JSON.stringify(msg) + '\n');
}
private processBuffer(): void {
const lines = this.buffer.split('\n');
this.buffer = lines.pop() ?? '';
for (const line of lines) {
const trimmed = line.trim();
if (!trimmed) continue;
try {
const msg = JSON.parse(trimmed) as Record<string, unknown>;
if ('id' in msg && msg.id !== undefined) {
const pending = this.pendingRequests.get(msg.id as number);
if (pending) {
this.pendingRequests.delete(msg.id as number);
clearTimeout(pending.timer);
pending.resolve(msg as unknown as McpProxyResponse);
}
}
// Notifications from server are ignored (not needed for proxy)
} catch {
// Skip non-JSON lines
}
}
}
}
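The core of `PersistentStdioClient` is the id-correlation pattern: callers register a resolver keyed by JSON-RPC id, and the line-buffered stdout parser settles whichever id arrives, even when responses come split across chunks or out of order. A standalone sketch of just that mechanism (callback-based here for brevity; the real class wraps it in promises with timeouts):

```typescript
// Minimal sketch of newline-delimited JSON-RPC response correlation, as used
// by PersistentStdioClient: buffer partial lines, parse complete ones, and
// dispatch by the JSON-RPC `id` field.
class LineCorrelator {
  private buffer = '';
  private pending = new Map<number, (msg: Record<string, unknown>) => void>();

  // Register a callback for the response with the given id
  onReply(id: number, cb: (msg: Record<string, unknown>) => void): void {
    this.pending.set(id, cb);
  }

  // Feed raw stdout chunks; complete lines are parsed, the partial tail is kept
  feed(chunk: string): void {
    this.buffer += chunk;
    const lines = this.buffer.split('\n');
    this.buffer = lines.pop() ?? '';
    for (const line of lines) {
      if (!line.trim()) continue;
      try {
        const msg = JSON.parse(line) as Record<string, unknown>;
        const cb = typeof msg.id === 'number' ? this.pending.get(msg.id) : undefined;
        if (cb) {
          this.pending.delete(msg.id as number);
          cb(msg);
        }
      } catch {
        // Non-JSON noise on stdout is skipped, as in processBuffer() above
      }
    }
  }
}

const c = new LineCorrelator();
c.onReply(2, (m) => console.log('got response for id', m.id));
c.feed('{"jsonrpc":"2.0","id":2,"resu'); // partial line — buffered
c.feed('lt":{"ok":true}}\n');            // completes the line — callback fires
```

Server-initiated notifications (messages without an `id`) fall through the map lookup and are ignored, matching the proxy's behavior.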

View File

@@ -73,7 +73,7 @@ export async function sendViaSse(
       params: {
         protocolVersion: '2024-11-05',
         capabilities: {},
-        clientInfo: { name: 'mcpctl-proxy', version: '0.1.0' },
+        clientInfo: { name: 'mcpctl-proxy', version: '0.0.1' },
       },
     }),
     signal: controller.signal,

View File

@@ -12,10 +12,11 @@ import type { McpProxyResponse } from '../mcp-proxy-service.js';
 export async function sendViaStdio(
   orchestrator: McpOrchestrator,
   containerId: string,
-  packageName: string,
+  packageName: string | null,
   method: string,
   params?: Record<string, unknown>,
   timeoutMs = 30_000,
+  command?: string[] | null,
 ): Promise<McpProxyResponse> {
   const initMsg = JSON.stringify({
     jsonrpc: '2.0',
@@ -24,7 +25,7 @@ export async function sendViaStdio(
     params: {
       protocolVersion: '2024-11-05',
       capabilities: {},
-      clientInfo: { name: 'mcpctl-proxy', version: '0.1.0' },
+      clientInfo: { name: 'mcpctl-proxy', version: '0.0.1' },
     },
   });
   const initializedMsg = JSON.stringify({
@@ -42,14 +43,26 @@ export async function sendViaStdio(
   }
   const requestMsg = JSON.stringify(requestBody);
 
+  // Determine spawn command
+  let spawnCmd: string[];
+  if (packageName) {
+    spawnCmd = ['npx', '--prefer-offline', '-y', packageName];
+  } else if (command && command.length > 0) {
+    spawnCmd = command;
+  } else {
+    return errorResponse('No packageName or command for STDIO server');
+  }
+  const spawnArgs = JSON.stringify(spawnCmd);
+
   // Inline Node.js script that:
-  // 1. Spawns the MCP server binary via npx
+  // 1. Spawns the MCP server binary
   // 2. Sends initialize → initialized → actual request via stdin
   // 3. Reads stdout for JSON-RPC response with id: 2
   // 4. Outputs the full JSON-RPC response to stdout
   const probeScript = `
 const { spawn } = require('child_process');
-const proc = spawn('npx', ['--prefer-offline', '-y', ${JSON.stringify(packageName)}], { stdio: ['pipe', 'pipe', 'pipe'] });
+const args = ${spawnArgs};
+const proc = spawn(args[0], args.slice(1), { stdio: ['pipe', 'pipe', 'pipe'] });
 let output = '';
 let responded = false;
 proc.stdout.on('data', d => {

View File

@@ -301,7 +301,7 @@ describe('RestoreService', () => {
   const validBundle = {
     version: '1',
-    mcpctlVersion: '0.1.0',
+    mcpctlVersion: '0.0.1',
     createdAt: new Date().toISOString(),
     encrypted: false,
     servers: [{ name: 'github', description: 'GitHub', packageName: null, dockerImage: null, transport: 'STDIO', repositoryUrl: null, env: [] }],

View File

@@ -1,6 +1,6 @@
{ {
"name": "@mcpctl/mcplocal", "name": "@mcpctl/mcplocal",
"version": "0.1.0", "version": "0.0.1",
"private": true, "private": true,
"type": "module", "type": "module",
"main": "./dist/index.js", "main": "./dist/index.js",

View File

@@ -60,26 +60,63 @@ export class TagMatcher {
   }
 
   private computeScore(lowerTags: string[], prompt: PromptIndexEntry): number {
-    // Priority 10 always included
+    // Priority 10 always included at the top
     if (prompt.priority === 10) return Infinity;
-    if (lowerTags.length === 0) return 0;
 
-    const searchText = [
-      prompt.name,
-      prompt.summary ?? '',
-      ...(prompt.chapters ?? []),
-    ].join(' ').toLowerCase();
-
-    let matchCount = 0;
-    for (const tag of lowerTags) {
-      if (searchText.includes(tag)) matchCount++;
+    // Baseline score = priority (so all prompts compete for the byte budget)
+    // Tag matches boost the score further (matchCount * priority on top)
+    let boost = 0;
+    if (lowerTags.length > 0) {
+      const searchText = [
+        prompt.name,
+        prompt.summary ?? '',
+        ...(prompt.chapters ?? []),
+      ].join(' ').toLowerCase();
+      for (const tag of lowerTags) {
+        if (searchText.includes(tag)) boost++;
+      }
+      boost *= prompt.priority;
     }
-    return matchCount * prompt.priority;
+    return prompt.priority + boost;
   }
 }
 
+const STOP_WORDS = new Set([
+  'the', 'a', 'an', 'is', 'to', 'for', 'of', 'and', 'or', 'in', 'on', 'at',
+  'by', 'with', 'from', 'this', 'that', 'it', 'its', 'as', 'be', 'are', 'was',
+  'were', 'been', 'has', 'have', 'had', 'do', 'does', 'did', 'but', 'not',
+  'can', 'will', 'would', 'could', 'should', 'may', 'might', 'shall', 'must',
+  'so', 'if', 'then', 'than', 'too', 'very', 'just', 'about', 'up', 'out',
+  'no', 'yes', 'all', 'any', 'some', 'my', 'your', 'our', 'their', 'what',
+  'which', 'who', 'how', 'when', 'where', 'why', 'want', 'need', 'get', 'set',
+  'use', 'like', 'make', 'know', 'help', 'try',
+]);
+
+/**
+ * Convert a natural-language description into keyword tags.
+ * Splits on whitespace/punctuation, lowercases, filters stop words and short words, caps at 10.
+ */
+export function tokenizeDescription(description: string): string[] {
+  const words = description
+    .toLowerCase()
+    .split(/[\s.,;:!?'"()\[\]{}<>|/\\@#$%^&*+=~`]+/)
+    .map((w) => w.replace(/[^a-z0-9-]/g, ''))
+    .filter((w) => w.length >= 3 && !STOP_WORDS.has(w));
+
+  // Deduplicate while preserving order
+  const seen = new Set<string>();
+  const unique: string[] = [];
+  for (const w of words) {
+    if (!seen.has(w)) {
+      seen.add(w);
+      unique.push(w);
+    }
+  }
+  return unique.slice(0, 10);
+}
+
 /**
  * Extract keywords from a tool call for the intercept fallback path.
  * Pulls words from the tool name and string argument values.
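The tokenize → score flow can be demonstrated end to end with a simplified standalone reimplementation (the stop-word set and split pattern are trimmed here for brevity; the real logic lives in `tokenizeDescription` and `TagMatcher.computeScore`):

```typescript
// Simplified sketch of the description → tags → score pipeline above.
const STOP = new Set(['the', 'a', 'an', 'is', 'to', 'for', 'of', 'and', 'i', 'want', 'need']);

function tokenize(description: string): string[] {
  const words = description
    .toLowerCase()
    .split(/[\s.,;:!?'"()]+/)
    .map((w) => w.replace(/[^a-z0-9-]/g, ''))
    .filter((w) => w.length >= 3 && !STOP.has(w));
  return [...new Set(words)].slice(0, 10); // dedupe, cap at 10
}

interface Entry { name: string; summary: string; priority: number; }

// priority is the baseline; each tag hit adds another `priority` on top,
// and priority-10 prompts are always pinned first
function score(tags: string[], p: Entry): number {
  if (p.priority === 10) return Infinity;
  const text = `${p.name} ${p.summary}`.toLowerCase();
  let hits = 0;
  for (const t of tags) if (text.includes(t)) hits++;
  return p.priority + hits * p.priority;
}

const tags = tokenize('I want to pair a new Zigbee device');
console.log(tags); // ['pair', 'new', 'zigbee', 'device']

const zigbee = { name: 'zigbee-pairing', summary: 'Pairing Zigbee devices over MQTT', priority: 5 };
const unrelated = { name: 'backup-policy', summary: 'Database backup schedule', priority: 5 };
console.log(score(tags, zigbee) > score(tags, unrelated)); // true
```

Because every prompt gets its priority as a baseline, unmatched prompts still compete for the byte budget instead of scoring zero; tag hits only reorder them.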

View File

@@ -0,0 +1,82 @@
/**
* SSE endpoint for the MCP traffic inspector.
*
* GET /inspect?project=X&session=Y
*
* Streams TrafficEvents as SSE data lines. On connect, sends a snapshot
* of active sessions and recent buffered events, then streams live.
*/
import type { FastifyInstance } from 'fastify';
import type { TrafficCapture, TrafficFilter } from './traffic.js';
export function registerInspectEndpoint(app: FastifyInstance, capture: TrafficCapture): void {
app.get<{
Querystring: { project?: string; session?: string };
}>('/inspect', async (request, reply) => {
const filter: TrafficFilter = {
project: request.query.project,
session: request.query.session,
};
// Set SSE headers
reply.raw.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'X-Accel-Buffering': 'no', // Disable nginx buffering
});
// Send active sessions snapshot
const sessions = capture.getActiveSessions();
const filteredSessions = filter.project
? sessions.filter((s) => s.projectName === filter.project)
: sessions;
reply.raw.write(`event: sessions\ndata: ${JSON.stringify(filteredSessions)}\n\n`);
// Send buffered events
const buffered = capture.getBuffer(filter);
for (const event of buffered) {
reply.raw.write(`data: ${JSON.stringify(event)}\n\n`);
}
// Flush marker so client knows history is done
reply.raw.write(`event: live\ndata: {}\n\n`);
// Subscribe to live events
const matchesFilter = (e: { projectName: string; sessionId: string }): boolean => {
if (filter.project && e.projectName !== filter.project) return false;
if (filter.session && e.sessionId !== filter.session) return false;
return true;
};
const unsubscribe = capture.subscribe((event) => {
if (!matchesFilter(event)) return;
try {
reply.raw.write(`data: ${JSON.stringify(event)}\n\n`);
} catch {
unsubscribe();
}
});
// Keep-alive ping every 30s
const keepAlive = setInterval(() => {
try {
reply.raw.write(': keepalive\n\n');
} catch {
clearInterval(keepAlive);
unsubscribe();
}
}, 30_000);
// Cleanup on disconnect
request.raw.on('close', () => {
clearInterval(keepAlive);
unsubscribe();
});
// Hijack so Fastify doesn't try to send its own response
reply.hijack();
});
}
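A consumer of `/inspect` has to handle both the named frames (`sessions`, `live`) and the unnamed data frames. Per the SSE format, events are separated by blank lines, `:`-prefixed lines are comments (the keepalives above), and un-named events default to `message`. A minimal frame parser sketch for the stream this endpoint emits:

```typescript
// Minimal SSE frame parser for the /inspect stream: events are separated by
// blank lines; each event has an optional `event:` name and `data:` lines.
interface SseEvent { event: string; data: string; }

function parseSse(raw: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const block of raw.split('\n\n')) {
    let event = 'message'; // SSE default event name
    const data: string[] = [];
    for (const line of block.split('\n')) {
      if (line.startsWith(':')) continue; // comment / keepalive ping
      if (line.startsWith('event:')) event = line.slice(6).trim();
      else if (line.startsWith('data:')) data.push(line.slice(5).trim());
    }
    if (data.length > 0) events.push({ event, data: data.join('\n') });
  }
  return events;
}

// Simulated slice of the /inspect stream: snapshot, buffered event, live marker, keepalive
const stream =
  'event: sessions\ndata: []\n\n' +
  'data: {"eventType":"client_request"}\n\n' +
  'event: live\ndata: {}\n\n' +
  ': keepalive\n\n';
console.log(parseSse(stream).map((e) => e.event)); // [ 'sessions', 'message', 'live' ]
```

In a browser, `EventSource` does this framing automatically; the manual parser matters for non-browser clients reading the raw response body.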

View File

@@ -18,6 +18,7 @@ import { loadProjectLlmOverride } from './config.js';
import type { McpdClient } from './mcpd-client.js'; import type { McpdClient } from './mcpd-client.js';
import type { ProviderRegistry } from '../providers/registry.js'; import type { ProviderRegistry } from '../providers/registry.js';
import type { JsonRpcRequest } from '../types.js'; import type { JsonRpcRequest } from '../types.js';
import type { TrafficCapture } from './traffic.js';
interface ProjectCacheEntry { interface ProjectCacheEntry {
router: McpRouter; router: McpRouter;
@@ -31,7 +32,7 @@ interface SessionEntry {
const CACHE_TTL_MS = 60_000; // 60 seconds const CACHE_TTL_MS = 60_000; // 60 seconds
export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: McpdClient, providerRegistry?: ProviderRegistry | null): void { export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: McpdClient, providerRegistry?: ProviderRegistry | null, trafficCapture?: TrafficCapture | null): void {
const projectCache = new Map<string, ProjectCacheEntry>(); const projectCache = new Map<string, ProjectCacheEntry>();
const sessions = new Map<string, SessionEntry>(); const sessions = new Map<string, SessionEntry>();
@@ -131,13 +132,88 @@ export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: Mcp
sessionIdGenerator: () => randomUUID(), sessionIdGenerator: () => randomUUID(),
onsessioninitialized: (id) => { onsessioninitialized: (id) => {
sessions.set(id, { transport, projectName }); sessions.set(id, { transport, projectName });
trafficCapture?.emit({
timestamp: new Date().toISOString(),
projectName,
sessionId: id,
eventType: 'session_created',
body: null,
});
}, },
}); });
// Wire upstream call tracing into the router
if (trafficCapture) {
router.onUpstreamCall = (info) => {
const sid = transport.sessionId ?? 'unknown';
trafficCapture.emit({
timestamp: new Date().toISOString(),
projectName,
sessionId: sid,
eventType: 'upstream_request',
method: info.method,
upstreamName: info.upstream,
body: info.request,
});
trafficCapture.emit({
timestamp: new Date().toISOString(),
projectName,
sessionId: sid,
eventType: 'upstream_response',
method: info.method,
upstreamName: info.upstream,
body: info.response,
durationMs: info.durationMs,
});
};
}
transport.onmessage = async (message: JSONRPCMessage) => { transport.onmessage = async (message: JSONRPCMessage) => {
if ('method' in message && 'id' in message) { if ('method' in message && 'id' in message) {
const requestId = message.id as string | number;
const sid = transport.sessionId ?? 'unknown';
const method = (message as { method?: string }).method;
// Capture client request
trafficCapture?.emit({
timestamp: new Date().toISOString(),
projectName,
sessionId: sid,
eventType: 'client_request',
method,
body: message,
});
const ctx = transport.sessionId ? { sessionId: transport.sessionId } : undefined;
const response = await router.route(message as unknown as JsonRpcRequest, ctx);
// Forward queued notifications BEFORE the response — the response send
// closes the POST SSE stream, so notifications must go first.
// relatedRequestId routes them onto the same SSE stream as the response.
if (transport.sessionId) {
for (const n of router.consumeNotifications(transport.sessionId)) {
trafficCapture?.emit({
timestamp: new Date().toISOString(),
projectName,
sessionId: sid,
eventType: 'client_notification',
method: (n as { method?: string }).method,
body: n,
});
await transport.send(n as unknown as JSONRPCMessage, { relatedRequestId: requestId });
}
}
// Capture client response
trafficCapture?.emit({
timestamp: new Date().toISOString(),
projectName,
sessionId: sid,
eventType: 'client_response',
method,
body: response,
});
await transport.send(response as unknown as JSONRPCMessage);
}
};
@@ -145,6 +221,13 @@ export function registerProjectMcpEndpoint(app: FastifyInstance, mcpdClient: Mcp
transport.onclose = () => {
const id = transport.sessionId;
if (id) {
trafficCapture?.emit({
timestamp: new Date().toISOString(),
projectName,
sessionId: id,
eventType: 'session_closed',
body: null,
});
sessions.delete(id);
router.cleanupSession(id);
}

View File

@@ -7,6 +7,8 @@ import { McpdClient } from './mcpd-client.js';
import { registerProxyRoutes } from './routes/proxy.js';
import { registerMcpEndpoint } from './mcp-endpoint.js';
import { registerProjectMcpEndpoint } from './project-mcp-endpoint.js';
import { registerInspectEndpoint } from './inspect-endpoint.js';
import { TrafficCapture } from './traffic.js';
import type { McpRouter } from '../router.js';
import type { HealthMonitor } from '../health.js';
import type { TieredHealthMonitor } from '../health/tiered.js';
@@ -181,11 +183,15 @@ export async function createHttpServer(
const mcpdClient = new McpdClient(config.mcpdUrl, config.mcpdToken);
registerProxyRoutes(app, mcpdClient);
// Traffic inspector
const trafficCapture = new TrafficCapture();
registerInspectEndpoint(app, trafficCapture);
// Streamable HTTP MCP protocol endpoint at /mcp
registerMcpEndpoint(app, deps.router);
// Project-scoped MCP endpoint at /projects/:projectName/mcp
registerProjectMcpEndpoint(app, mcpdClient, deps.providerRegistry, trafficCapture);
return app;
}

View File

@@ -0,0 +1,116 @@
/**
* Traffic capture for the MCP inspector.
*
* Records all MCP traffic flowing through mcplocal — both client-facing
* messages and internal upstream routing. Events are stored in a ring
* buffer and streamed to SSE subscribers in real-time.
*/
export type TrafficEventType =
| 'client_request'
| 'client_response'
| 'client_notification'
| 'upstream_request'
| 'upstream_response'
| 'session_created'
| 'session_closed';
export interface TrafficEvent {
timestamp: string;
projectName: string;
sessionId: string;
eventType: TrafficEventType;
method?: string | undefined;
upstreamName?: string | undefined;
body: unknown;
durationMs?: number | undefined;
}
export interface ActiveSession {
sessionId: string;
projectName: string;
startedAt: string;
}
export interface TrafficFilter {
project?: string | undefined;
session?: string | undefined;
}
type Listener = (event: TrafficEvent) => void;
const DEFAULT_MAX_BUFFER = 5000;
export class TrafficCapture {
private listeners = new Set<Listener>();
private buffer: TrafficEvent[] = [];
private readonly maxBuffer: number;
private activeSessions = new Map<string, ActiveSession>();
constructor(maxBuffer = DEFAULT_MAX_BUFFER) {
this.maxBuffer = maxBuffer;
}
emit(event: TrafficEvent): void {
// Track active sessions
if (event.eventType === 'session_created') {
this.activeSessions.set(event.sessionId, {
sessionId: event.sessionId,
projectName: event.projectName,
startedAt: event.timestamp,
});
} else if (event.eventType === 'session_closed') {
this.activeSessions.delete(event.sessionId);
}
// Ring buffer
this.buffer.push(event);
if (this.buffer.length > this.maxBuffer) {
this.buffer.splice(0, this.buffer.length - this.maxBuffer);
}
// Notify subscribers
for (const listener of this.listeners) {
try {
listener(event);
} catch {
// Don't let a bad listener break the pipeline
}
}
}
/** Subscribe to live events. Returns unsubscribe function. */
subscribe(cb: Listener): () => void {
this.listeners.add(cb);
return () => {
this.listeners.delete(cb);
};
}
/** Get buffered events, optionally filtered. */
getBuffer(filter?: TrafficFilter): TrafficEvent[] {
let events = this.buffer;
if (filter?.project) {
events = events.filter((e) => e.projectName === filter.project);
}
if (filter?.session) {
events = events.filter((e) => e.sessionId === filter.session);
}
return events;
}
/** Get all currently active sessions. */
getActiveSessions(): ActiveSession[] {
return [...this.activeSessions.values()];
}
/** Number of subscribers (for health/debug). */
get subscriberCount(): number {
return this.listeners.size;
}
/** Total events in buffer. */
get bufferSize(): number {
return this.buffer.length;
}
}
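The ring-buffer trim and the isolated-listener dispatch above can be exercised standalone. This is a sketch for illustration only: `MiniCapture` is a trimmed re-declaration, not the exported `TrafficCapture`, but its `emit`/`subscribe`/`getBuffer` logic mirrors the class in this file.

```typescript
// Trimmed re-sketch of TrafficCapture's ring buffer + pub/sub (illustrative).
type Event = { sessionId: string; eventType: string; body: unknown };

class MiniCapture {
  private listeners = new Set<(e: Event) => void>();
  private buffer: Event[] = [];
  constructor(private readonly maxBuffer = 3) {}

  emit(event: Event): void {
    this.buffer.push(event);
    // Ring buffer: drop the oldest events once the cap is exceeded
    if (this.buffer.length > this.maxBuffer) {
      this.buffer.splice(0, this.buffer.length - this.maxBuffer);
    }
    for (const listener of this.listeners) {
      try {
        listener(event);
      } catch {
        // A throwing listener must not break the pipeline
      }
    }
  }

  subscribe(cb: (e: Event) => void): () => void {
    this.listeners.add(cb);
    return () => this.listeners.delete(cb);
  }

  getBuffer(filter?: { session?: string }): Event[] {
    return filter?.session
      ? this.buffer.filter((e) => e.sessionId === filter.session)
      : this.buffer;
  }
}

const cap = new MiniCapture(3);
const seen: string[] = [];
const unsubscribe = cap.subscribe((e) => seen.push(e.eventType));
for (let i = 1; i <= 5; i++) {
  cap.emit({ sessionId: 's1', eventType: `evt${i}`, body: null });
}
unsubscribe();
console.log(seen.length);                   // 5 — live subscribers see every event
console.log(cap.getBuffer().length);        // 3 — buffer keeps only the newest maxBuffer events
console.log(cap.getBuffer()[0]!.eventType); // evt3 — the two oldest were trimmed
```

Note the design choice: subscribers always get the full live stream, while late joiners replay only the last `maxBuffer` events from `getBuffer()`.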

View File

@@ -3,10 +3,11 @@ import type { LlmProcessor } from './llm/processor.js';
import { ResponsePaginator } from './llm/pagination.js';
import type { McpdClient } from './http/mcpd-client.js';
import { SessionGate } from './gate/session-gate.js';
import { TagMatcher, extractKeywordsFromToolCall, tokenizeDescription } from './gate/tag-matcher.js';
import type { PromptIndexEntry, TagMatchResult } from './gate/tag-matcher.js';
import { LlmPromptSelector } from './gate/llm-selector.js';
import type { ProviderRegistry } from './providers/registry.js';
import { LinkResolver } from './services/link-resolver.js';
export interface RouteContext {
sessionId?: string;
@@ -47,8 +48,13 @@ export class McpRouter {
private cachedPromptIndex: PromptIndexEntry[] | null = null;
private promptIndexFetchedAt = 0;
private readonly PROMPT_INDEX_TTL_MS = 60_000;
private linkResolver: LinkResolver | null = null;
private systemPromptCache = new Map<string, { content: string; fetchedAt: number }>();
private readonly SYSTEM_PROMPT_TTL_MS = 300_000; // 5 minutes
private pendingNotifications = new Map<string, JsonRpcNotification[]>();
/** Optional callback for traffic inspection — called after each upstream call completes. */
onUpstreamCall: ((info: { upstream: string; method?: string; request: unknown; response: unknown; durationMs: number }) => void) | null = null;
setPaginator(paginator: ResponsePaginator): void {
this.paginator = paginator;
@@ -73,6 +79,7 @@ export class McpRouter {
setPromptConfig(mcpdClient: McpdClient, projectName: string): void {
this.mcpdClient = mcpdClient;
this.projectName = projectName;
this.linkResolver = new LinkResolver(mcpdClient);
}
addUpstream(connection: UpstreamConnection): void {
@@ -277,6 +284,14 @@ export class McpRouter {
},
};
if (this.onUpstreamCall) {
const start = performance.now();
const response = await upstream.send(upstreamRequest);
const durationMs = Math.round(performance.now() - start);
this.onUpstreamCall({ upstream: serverName, method: request.method, request: upstreamRequest, response, durationMs });
return response;
}
return upstream.send(upstreamRequest);
}
@@ -303,10 +318,10 @@ export class McpRouter {
protocolVersion: '2024-11-05',
serverInfo: {
name: 'mcpctl-proxy',
version: '0.0.1',
},
capabilities: {
tools: { listChanged: true },
resources: {},
prompts: {},
},
@@ -455,16 +470,48 @@ export class McpRouter {
return this.routeNamespacedCall(request, 'uri', this.resourceToServer);
case 'prompts/list': {
const upstreamPrompts = await this.discoverPrompts();
// Include mcpctl-managed prompts from mcpd alongside upstream prompts
const managedIndex = await this.fetchPromptIndex();
const managedPrompts = managedIndex.map((p) => ({
name: `mcpctl/${p.name}`,
description: p.summary ?? `Priority ${p.priority} prompt`,
}));
return {
jsonrpc: '2.0',
id: request.id,
result: { prompts: [...upstreamPrompts, ...managedPrompts] },
};
}
case 'prompts/get': {
const promptName = (request.params as Record<string, unknown> | undefined)?.name as string | undefined;
if (promptName?.startsWith('mcpctl/')) {
const shortName = promptName.slice('mcpctl/'.length);
const managedIndex = await this.fetchPromptIndex();
const entry = managedIndex.find((p) => p.name === shortName);
if (!entry) {
return { jsonrpc: '2.0', id: request.id, error: { code: -32601, message: `Unknown name: ${promptName}` } };
}
return {
jsonrpc: '2.0',
id: request.id,
result: {
prompt: {
name: promptName,
description: entry.summary ?? `Priority ${entry.priority} prompt`,
},
messages: [
{
role: 'user',
content: { type: 'text', text: entry.content || '(empty)' },
},
],
},
};
}
return this.routeNamespacedCall(request, 'name', this.promptToServer);
}
// Handle MCP notifications (no response expected, but return empty result if called as request)
case 'notifications/initialized':
@@ -634,6 +681,24 @@ export class McpRouter {
// ── Gate tool definitions ──
private getBeginSessionTool(): { name: string; description: string; inputSchema: unknown } {
// LLM available → description mode (natural language, LLM selects prompts)
// No LLM → keywords mode (deterministic tag matching)
if (this.llmSelector) {
return {
name: 'begin_session',
description: 'Start your session by describing what you want to accomplish. You will receive relevant project context, policies, and guidelines. This is required before using other tools.',
inputSchema: {
type: 'object',
properties: {
description: {
type: 'string',
description: "Describe what you're trying to do in a sentence or two (e.g. \"I want to pair a new Zigbee device with the hub\")",
},
},
required: ['description'],
},
};
}
return {
name: 'begin_session',
description: 'Start your session by providing keywords that describe your current task. You will receive relevant project context, policies, and guidelines. This is required before using other tools.',
@@ -680,10 +745,16 @@ export class McpRouter {
const params = request.params as Record<string, unknown> | undefined;
const args = (params?.['arguments'] ?? {}) as Record<string, unknown>;
const rawTags = args['tags'] as string[] | undefined;
const description = args['description'] as string | undefined;
let tags: string[];
if (rawTags && Array.isArray(rawTags) && rawTags.length > 0) {
tags = rawTags;
} else if (description && description.trim().length > 0) {
tags = tokenizeDescription(description);
} else {
return { jsonrpc: '2.0', id: request.id, error: { code: -32602, message: 'Provide tags or description' } };
}
const sessionId = context?.sessionId;
@@ -739,6 +810,7 @@ export class McpRouter {
// Ungate the session
if (sessionId) {
this.sessionGate.ungate(sessionId, tags, matchResult);
this.queueNotification(sessionId, { jsonrpc: '2.0', method: 'notifications/tools/list_changed' });
}
// Build response
@@ -778,11 +850,38 @@ export class McpRouter {
);
responseParts.push(encouragement);
// Append tool inventory (names only — full descriptions available via tools/list)
try {
const tools = await this.discoverTools();
if (tools.length > 0) {
responseParts.push('\nAvailable MCP server tools:');
for (const t of tools) {
responseParts.push(` ${t.name}`);
}
}
} catch {
// Tool discovery is optional
}
// Retry instruction (from system prompt)
const retryInstruction = await this.getSystemPrompt(
'gate-session-active',
"The session is now active with full tool access. Proceed with the user's original request using the tools listed above.",
);
responseParts.push(`\n${retryInstruction}`);
// Safety cap to prevent token overflow (prompts first = most important, tool inventory last = least)
const MAX_RESPONSE_CHARS = 24_000;
let text = responseParts.join('\n');
if (text.length > MAX_RESPONSE_CHARS) {
text = text.slice(0, MAX_RESPONSE_CHARS) + '\n\n[Response truncated. Use read_prompts to retrieve full content.]';
}
return {
jsonrpc: '2.0',
id: request.id,
result: {
content: [{ type: 'text', text }],
},
};
} catch (err) {
@@ -886,6 +985,7 @@ export class McpRouter {
// Ungate the session
this.sessionGate.ungate(sessionId, tags, matchResult);
this.queueNotification(sessionId, { jsonrpc: '2.0', method: 'notifications/tools/list_changed' });
// Build briefing from matched content
const briefingParts: string[] = [];
@@ -909,6 +1009,20 @@ export class McpRouter {
briefingParts.push('');
}
// Append tool inventory (names only — full descriptions available via tools/list)
try {
const tools = await this.discoverTools();
if (tools.length > 0) {
briefingParts.push('Available MCP server tools:');
for (const t of tools) {
briefingParts.push(` ${t.name}`);
}
briefingParts.push('');
}
} catch {
// Tool discovery is optional
}
// Now route the actual tool call
const response = await this.routeNamespacedCall(request, 'name', this.toolToServer);
const paginatedResponse = await this.maybePaginate(toolName, response);
@@ -928,6 +1042,7 @@ export class McpRouter {
} catch {
// If prompt retrieval fails, just ungate and route normally
this.sessionGate.ungate(sessionId, tags, { fullContent: [], indexOnly: [], remaining: [] });
this.queueNotification(sessionId, { jsonrpc: '2.0', method: 'notifications/tools/list_changed' });
return this.routeNamespacedCall(request, 'name', this.toolToServer);
}
}
@@ -951,17 +1066,35 @@ export class McpRouter {
summary: string | null;
chapters: string[] | null;
content?: string;
linkTarget?: string | null;
}>>(
`/api/v1/projects/${encodeURIComponent(this.projectName)}/prompts/visible`,
);
// Resolve linked prompts: fetch fresh content from linked MCP resources
const entries: PromptIndexEntry[] = [];
for (const p of index) {
let content = p.content ?? '';
if (p.linkTarget && this.linkResolver) {
try {
const resolution = await this.linkResolver.resolve(p.linkTarget);
if (resolution.status === 'alive' && resolution.content) {
content = resolution.content;
}
} catch {
// Keep static content as fallback
}
}
entries.push({
name: p.name,
priority: p.priority,
summary: p.summary,
chapters: p.chapters,
content,
});
}
this.cachedPromptIndex = entries;
this.promptIndexFetchedAt = now;
return this.cachedPromptIndex;
}
@@ -981,6 +1114,19 @@ export class McpRouter {
);
parts.push(`\n${gateInstructions}`);
// Append tool inventory (names only — descriptions come from tools/list after ungating)
try {
const tools = await this.discoverTools();
if (tools.length > 0) {
parts.push('\nAvailable MCP server tools (accessible after begin_session):');
for (const t of tools) {
parts.push(` ${t.name}`);
}
}
} catch {
// Tool discovery is optional — don't fail initialization
}
// Append compact prompt index so the LLM knows what's available
try {
const promptIndex = await this.fetchPromptIndex();
@@ -1036,10 +1182,27 @@ export class McpRouter {
}
}
// ── Notification queue ──
private queueNotification(sessionId: string | undefined, notification: JsonRpcNotification): void {
if (!sessionId) return;
const queue = this.pendingNotifications.get(sessionId) ?? [];
queue.push(notification);
this.pendingNotifications.set(sessionId, queue);
}
/** Consume and return any pending notifications for a session (e.g., tools/list_changed after ungating). */
consumeNotifications(sessionId: string): JsonRpcNotification[] {
const notifications = this.pendingNotifications.get(sessionId) ?? [];
this.pendingNotifications.delete(sessionId);
return notifications;
}
// ── Session cleanup ──
cleanupSession(sessionId: string): void {
this.sessionGate.removeSession(sessionId);
this.pendingNotifications.delete(sessionId);
}
getUpstreamNames(): string[] {

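The queue-then-drain notification pattern added to the router above (notifications buffered during routing, consumed exactly once by the transport before the response is sent) can be sketched in isolation. `NotificationQueue` here is a hypothetical extraction for illustration; the real logic lives in `queueNotification`/`consumeNotifications` on `McpRouter`.

```typescript
// Illustrative sketch of the per-session notification queue.
type Notification = { jsonrpc: '2.0'; method: string };

class NotificationQueue {
  private pending = new Map<string, Notification[]>();

  queue(sessionId: string | undefined, n: Notification): void {
    if (!sessionId) return; // no session → nowhere to deliver later
    const q = this.pending.get(sessionId) ?? [];
    q.push(n);
    this.pending.set(sessionId, q);
  }

  // Drains the queue: a second call for the same session returns [].
  consume(sessionId: string): Notification[] {
    const notifications = this.pending.get(sessionId) ?? [];
    this.pending.delete(sessionId);
    return notifications;
  }
}

const q = new NotificationQueue();
q.queue('s1', { jsonrpc: '2.0', method: 'notifications/tools/list_changed' });
q.queue(undefined, { jsonrpc: '2.0', method: 'ignored' }); // dropped: no session

const first = q.consume('s1');
const second = q.consume('s1');
console.log(first.length, first[0]!.method); // 1 notifications/tools/list_changed
console.log(second.length);                  // 0 — consume clears the queue
```

This is why the endpoint code forwards queued notifications before `transport.send(response)`: draining after the response would race with the POST SSE stream closing.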
View File

@@ -32,7 +32,7 @@ export class HttpUpstream implements UpstreamConnection {
port: parsed.port,
path: parsed.pathname,
method: 'POST',
timeout: 120_000,
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(body),

View File

@@ -73,6 +73,7 @@ function setupGatedRouter(
prompts?: typeof samplePrompts;
withLlm?: boolean;
llmResponse?: string;
byteBudget?: number;
} = {},
): { router: McpRouter; mcpdClient: McpdClient } {
const router = new McpRouter();
@@ -101,6 +102,7 @@ function setupGatedRouter(
router.setGateConfig({
gated: opts.gated !== false,
providerRegistry,
byteBudget: opts.byteBudget,
});
return { router, mcpdClient };
@@ -309,16 +311,18 @@ describe('McpRouter gating', () => {
});
it('filters out already-sent prompts', async () => {
// Use a tight byte budget so begin_session only sends the top-scoring prompts
const { router } = setupGatedRouter({ byteBudget: 80 });
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// begin_session with ['zigbee'] sends common-mistakes (priority 10, Inf) and
// zigbee-pairing (7+7=14) within 80 bytes. Lower-scored prompts overflow.
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
// read_prompts for mqtt should find mqtt-config (wasn't fully sent), not re-send common-mistakes
const res = await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/call', params: { name: 'read_prompts', arguments: { tags: ['mqtt'] } } },
{ sessionId: 's1' },
@@ -495,6 +499,121 @@ describe('McpRouter gating', () => {
});
});
describe('tool inventory', () => {
it('includes tool names but NOT descriptions in gated initialize instructions', async () => {
const { router } = setupGatedRouter();
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities', description: 'Get all entities' }] }));
router.addUpstream(mockUpstream('node-red', { tools: [{ name: 'get_flows', description: 'Get all flows' }] }));
const res = await router.route(
{ jsonrpc: '2.0', id: 1, method: 'initialize' },
{ sessionId: 's1' },
);
const result = res.result as { instructions: string };
expect(result.instructions).toContain('ha/get_entities');
expect(result.instructions).toContain('node-red/get_flows');
expect(result.instructions).toContain('after begin_session');
// Descriptions should NOT be in init instructions (names only)
expect(result.instructions).not.toContain('Get all entities');
expect(result.instructions).not.toContain('Get all flows');
});
it('includes tool names but NOT descriptions in begin_session response', async () => {
const { router } = setupGatedRouter();
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities', description: 'Get all entities' }] }));
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
const text = (res.result as { content: Array<{ text: string }> }).content[0]!.text;
expect(text).toContain('ha/get_entities');
expect(text).not.toContain('Get all entities');
});
it('includes retry instruction in begin_session response', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
const text = (res.result as { content: Array<{ text: string }> }).content[0]!.text;
expect(text).toContain('Proceed with');
});
it('includes tool names but NOT descriptions in gated intercept briefing', async () => {
const { router } = setupGatedRouter();
const ha = mockUpstream('ha', { tools: [{ name: 'get_entities', description: 'Get all entities' }] });
router.addUpstream(ha);
await router.discoverTools();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'ha/get_entities', arguments: {} } },
{ sessionId: 's1' },
);
const result = res.result as { content: Array<{ type: string; text: string }> };
const briefing = result.content[0]!.text;
expect(briefing).toContain('ha/get_entities');
expect(briefing).not.toContain('Get all entities');
});
});
describe('notifications after ungating', () => {
it('queues tools/list_changed after begin_session ungating', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
const notifications = router.consumeNotifications('s1');
expect(notifications).toHaveLength(1);
expect(notifications[0]!.method).toBe('notifications/tools/list_changed');
});
it('queues tools/list_changed after gated intercept', async () => {
const { router } = setupGatedRouter();
const ha = mockUpstream('ha', { tools: [{ name: 'get_entities' }] });
router.addUpstream(ha);
await router.discoverTools();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'ha/get_entities', arguments: {} } },
{ sessionId: 's1' },
);
const notifications = router.consumeNotifications('s1');
expect(notifications).toHaveLength(1);
expect(notifications[0]!.method).toBe('notifications/tools/list_changed');
});
it('consumeNotifications clears the queue', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
// First consume returns the notification
expect(router.consumeNotifications('s1')).toHaveLength(1);
// Second consume returns empty
expect(router.consumeNotifications('s1')).toHaveLength(0);
});
});
describe('prompt index caching', () => { describe('prompt index caching', () => {
it('caches prompt index for 60 seconds', async () => { it('caches prompt index for 60 seconds', async () => {
const { router, mcpdClient } = setupGatedRouter({ gated: false }); const { router, mcpdClient } = setupGatedRouter({ gated: false });
@@ -517,4 +636,216 @@ describe('McpRouter gating', () => {
expect(getCalls).toHaveLength(1); expect(getCalls).toHaveLength(1);
}); });
}); });
describe('begin_session description field', () => {
it('accepts description and tokenizes to keywords', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { description: 'I want to pair a zigbee device with mqtt' } } },
{ sessionId: 's1' },
);
expect(res.error).toBeUndefined();
const text = (res.result as { content: Array<{ text: string }> }).content[0]!.text;
// Should match zigbee-pairing and mqtt-config via tokenized keywords
expect(text).toContain('zigbee-pairing');
expect(text).toContain('mqtt-config');
});
it('prefers tags over description when both provided', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['mqtt'], description: 'zigbee pairing' } } },
{ sessionId: 's1' },
);
expect(res.error).toBeUndefined();
const text = (res.result as { content: Array<{ text: string }> }).content[0]!.text;
// Tags take priority — mqtt-config should match, zigbee-pairing should not
expect(text).toContain('mqtt-config');
});
it('rejects calls with neither tags nor description', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: {} } },
{ sessionId: 's1' },
);
expect(res.error).toBeDefined();
expect(res.error!.code).toBe(-32602);
expect(res.error!.message).toContain('tags or description');
});
it('rejects empty description with no tags', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { description: ' ' } } },
{ sessionId: 's1' },
);
expect(res.error).toBeDefined();
expect(res.error!.code).toBe(-32602);
});
});
describe('gate config refresh', () => {
it('new sessions pick up gate config change (gated → ungated)', async () => {
const { router } = setupGatedRouter({ gated: true });
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities' }] }));
// First session is gated
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
let toolsRes = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
expect((toolsRes.result as { tools: Array<{ name: string }> }).tools[0]!.name).toBe('begin_session');
// Project config changes: gated → ungated
router.setGateConfig({ gated: false, providerRegistry: null });
// New session should be ungated
await router.route({ jsonrpc: '2.0', id: 3, method: 'initialize' }, { sessionId: 's2' });
toolsRes = await router.route(
{ jsonrpc: '2.0', id: 4, method: 'tools/list' },
{ sessionId: 's2' },
);
const names = (toolsRes.result as { tools: Array<{ name: string }> }).tools.map((t) => t.name);
expect(names).toContain('ha/get_entities');
expect(names).not.toContain('begin_session');
});
it('new sessions pick up gate config change (ungated → gated)', async () => {
const { router } = setupGatedRouter({ gated: false });
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities' }] }));
// First session is ungated
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
let toolsRes = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
let names = (toolsRes.result as { tools: Array<{ name: string }> }).tools.map((t) => t.name);
expect(names).toContain('ha/get_entities');
// Project config changes: ungated → gated
router.setGateConfig({ gated: true, providerRegistry: null });
// New session should be gated
await router.route({ jsonrpc: '2.0', id: 3, method: 'initialize' }, { sessionId: 's2' });
toolsRes = await router.route(
{ jsonrpc: '2.0', id: 4, method: 'tools/list' },
{ sessionId: 's2' },
);
names = (toolsRes.result as { tools: Array<{ name: string }> }).tools.map((t) => t.name);
expect(names).toHaveLength(1);
expect(names[0]).toBe('begin_session');
});
it('existing sessions retain gate state after config change', async () => {
const { router } = setupGatedRouter({ gated: true });
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities' }] }));
// Session created while gated
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// Config changes to ungated
router.setGateConfig({ gated: false, providerRegistry: null });
// Existing session s1 should STILL be gated (session state is immutable after creation)
const toolsRes = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
expect((toolsRes.result as { tools: Array<{ name: string }> }).tools[0]!.name).toBe('begin_session');
});
it('already-ungated sessions remain ungated after config changes to gated', async () => {
const { router } = setupGatedRouter({ gated: false });
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities' }] }));
// Session created while ungated
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
// Config changes to gated
router.setGateConfig({ gated: true, providerRegistry: null });
// Existing session s1 should remain ungated
const toolsRes = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/list' },
{ sessionId: 's1' },
);
const names = (toolsRes.result as { tools: Array<{ name: string }> }).tools.map((t) => t.name);
expect(names).toContain('ha/get_entities');
expect(names).not.toContain('begin_session');
});
it('config refresh does not reset sessions that ungated via begin_session', async () => {
const { router } = setupGatedRouter({ gated: true });
router.addUpstream(mockUpstream('ha', { tools: [{ name: 'get_entities' }] }));
// Session starts gated and ungates
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
// Config refreshes (still gated)
router.setGateConfig({ gated: true, providerRegistry: null });
// Session should remain ungated — begin_session already completed
const toolsRes = await router.route(
{ jsonrpc: '2.0', id: 3, method: 'tools/list' },
{ sessionId: 's1' },
);
const names = (toolsRes.result as { tools: Array<{ name: string }> }).tools.map((t) => t.name);
expect(names).toContain('ha/get_entities');
expect(names).not.toContain('begin_session');
});
});
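The refresh semantics this block asserts (gate config is snapshotted per session at `initialize` and never re-read, and `begin_session` permanently ungates a session) can be sketched like this. A hypothetical illustration only; `SessionStore` and its method names are not from the actual router:

```typescript
// Hypothetical sketch of per-session gate snapshots, as asserted above.
interface GateConfig { gated: boolean }

class SessionStore {
  private current: GateConfig;
  private sessions = new Map<string, { gated: boolean; ungatedViaBegin: boolean }>();

  constructor(initial: GateConfig) { this.current = initial; }

  // Affects only sessions created after this call
  setGateConfig(cfg: GateConfig): void { this.current = cfg; }

  // Gate state is captured once, at session creation
  initialize(id: string): void {
    this.sessions.set(id, { gated: this.current.gated, ungatedViaBegin: false });
  }

  // begin_session ungates the session for its lifetime
  beginSession(id: string): void {
    const s = this.sessions.get(id);
    if (s) s.ungatedViaBegin = true;
  }

  isGated(id: string): boolean {
    const s = this.sessions.get(id);
    return !!s && s.gated && !s.ungatedViaBegin;
  }
}
```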
describe('response size cap', () => {
it('truncates begin_session response over 24K chars', async () => {
// Create prompts with very large content to exceed 24K
// Use byteBudget large enough so content is included in fullContent
const largePrompts = [
{ name: 'huge-prompt', priority: 10, summary: 'A very large prompt', chapters: null, content: 'x'.repeat(30_000) },
];
const { router } = setupGatedRouter({ prompts: largePrompts, byteBudget: 50_000 });
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['huge'] } } },
{ sessionId: 's1' },
);
expect(res.error).toBeUndefined();
const text = (res.result as { content: Array<{ text: string }> }).content[0]!.text;
expect(text.length).toBeLessThanOrEqual(24_000 + 100); // allow for truncation message
expect(text).toContain('[Response truncated');
});
it('does not truncate responses under 24K chars', async () => {
const { router } = setupGatedRouter();
await router.route({ jsonrpc: '2.0', id: 1, method: 'initialize' }, { sessionId: 's1' });
const res = await router.route(
{ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'begin_session', arguments: { tags: ['zigbee'] } } },
{ sessionId: 's1' },
);
const text = (res.result as { content: Array<{ text: string }> }).content[0]!.text;
expect(text).not.toContain('[Response truncated');
});
});
});
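The 24K response cap exercised above amounts to a simple length guard. A hypothetical sketch; the real router's marker text and exact limit handling may differ:

```typescript
// Hypothetical sketch of the response size cap tested above.
const MAX_RESPONSE_CHARS = 24_000;

function capResponse(text: string): string {
  if (text.length <= MAX_RESPONSE_CHARS) return text;
  // Appending the marker keeps the total within ~100 chars of the cap,
  // which is the slack the test allows
  return text.slice(0, MAX_RESPONSE_CHARS) + '\n[Response truncated at 24,000 characters]';
}
```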

@@ -1,5 +1,5 @@
 import { describe, it, expect } from 'vitest';
-import { TagMatcher, extractKeywordsFromToolCall, type PromptIndexEntry } from '../src/gate/tag-matcher.js';
+import { TagMatcher, extractKeywordsFromToolCall, tokenizeDescription, type PromptIndexEntry } from '../src/gate/tag-matcher.js';
 function makePrompt(overrides: Partial<PromptIndexEntry> = {}): PromptIndexEntry {
   return {
@@ -13,22 +13,23 @@ function makePrompt(overrides: Partial<PromptIndexEntry> = {}): PromptIndexEntry
 }
 describe('TagMatcher', () => {
-  it('returns priority 10 prompts regardless of tags', () => {
+  it('returns priority 10 prompts first, then others by priority', () => {
     const matcher = new TagMatcher();
     const critical = makePrompt({ name: 'common-mistakes', priority: 10, summary: 'Unrelated stuff' });
     const normal = makePrompt({ name: 'normal', priority: 5, summary: 'Something else' });
     const result = matcher.match([], [critical, normal]);
-    expect(result.fullContent.map((p) => p.name)).toEqual(['common-mistakes']);
-    expect(result.remaining.map((p) => p.name)).toEqual(['normal']);
+    // Both included — priority 10 first (Infinity), then priority 5 (baseline 5)
+    expect(result.fullContent.map((p) => p.name)).toEqual(['common-mistakes', 'normal']);
+    expect(result.remaining).toEqual([]);
   });
-  it('scores by matching_tags * priority', () => {
+  it('scores by priority baseline + matching_tags * priority', () => {
     const matcher = new TagMatcher();
     const high = makePrompt({ name: 'important', priority: 8, summary: 'zigbee mqtt pairing' });
     const low = makePrompt({ name: 'basic', priority: 3, summary: 'zigbee basics' });
-    // Both match "zigbee": high scores 1*8=8, low scores 1*3=3
+    // high: 8 + 1*8 = 16, low: 3 + 1*3 = 6
     const result = matcher.match(['zigbee'], [low, high]);
     expect(result.fullContent[0]!.name).toBe('important');
     expect(result.fullContent[1]!.name).toBe('basic');
@@ -39,7 +40,7 @@
     const twoMatch = makePrompt({ name: 'two-match', priority: 5, summary: 'zigbee mqtt' });
     const oneMatch = makePrompt({ name: 'one-match', priority: 5, summary: 'zigbee only' });
-    // two-match: 2*5=10, one-match: 1*5=5
+    // two-match: 5 + 2*5 = 15, one-match: 5 + 1*5 = 10
     const result = matcher.match(['zigbee', 'mqtt'], [oneMatch, twoMatch]);
     expect(result.fullContent[0]!.name).toBe('two-match');
   });
@@ -72,24 +73,50 @@
     expect(result.indexOnly.map((p) => p.name)).toEqual(['big']);
   });
-  it('puts non-matched prompts in remaining', () => {
+  it('includes all prompts — tag-matched ranked higher', () => {
     const matcher = new TagMatcher();
     const matched = makePrompt({ name: 'matched', summary: 'zigbee stuff' });
     const unmatched = makePrompt({ name: 'unmatched', summary: 'completely different topic' });
     const result = matcher.match(['zigbee'], [matched, unmatched]);
-    expect(result.fullContent.map((p) => p.name)).toEqual(['matched']);
-    expect(result.remaining.map((p) => p.name)).toEqual(['unmatched']);
+    // matched: 5 + 1*5 = 10, unmatched: 5 + 0 = 5 — both included, matched first
+    expect(result.fullContent.map((p) => p.name)).toEqual(['matched', 'unmatched']);
+    expect(result.remaining).toEqual([]);
   });
-  it('handles empty tags — only priority 10 matched', () => {
+  it('handles empty tags — all prompts included by priority', () => {
     const matcher = new TagMatcher();
     const critical = makePrompt({ name: 'critical', priority: 10 });
     const normal = makePrompt({ name: 'normal', priority: 5 });
     const result = matcher.match([], [critical, normal]);
-    expect(result.fullContent.map((p) => p.name)).toEqual(['critical']);
-    expect(result.remaining.map((p) => p.name)).toEqual(['normal']);
+    // priority 10 → Infinity, priority 5 → baseline 5
+    expect(result.fullContent.map((p) => p.name)).toEqual(['critical', 'normal']);
+    expect(result.remaining).toEqual([]);
+  });
+  it('includes unrelated prompts within byte budget (priority baseline)', () => {
+    const matcher = new TagMatcher(500);
+    const related = makePrompt({ name: 'node-red-flows', priority: 5, summary: 'node-red flow management' });
+    const unrelated = makePrompt({ name: 'stack', priority: 5, summary: 'project stack overview', content: 'Tech stack info...' });
+    // Tags match "node-red-flows" but not "stack" — both should be included
+    const result = matcher.match(['node-red', 'flows'], [related, unrelated]);
+    expect(result.fullContent.map((p) => p.name)).toContain('stack');
+    expect(result.fullContent.map((p) => p.name)).toContain('node-red-flows');
+    // Related prompt should be ranked higher
+    expect(result.fullContent[0]!.name).toBe('node-red-flows');
+  });
+  it('pushes low-priority unrelated prompts to indexOnly when budget is tight', () => {
+    const matcher = new TagMatcher(100);
+    const related = makePrompt({ name: 'related', priority: 5, summary: 'zigbee', content: 'x'.repeat(80) });
+    const unrelated = makePrompt({ name: 'unrelated', priority: 3, summary: 'other', content: 'y'.repeat(80) });
+    const result = matcher.match(['zigbee'], [related, unrelated]);
+    // related: 5 + 1*5 = 10 (higher score, fits budget), unrelated: 3 + 0 = 3 (overflow)
+    expect(result.fullContent.map((p) => p.name)).toEqual(['related']);
+    expect(result.indexOnly.map((p) => p.name)).toEqual(['unrelated']);
   });
   it('handles empty prompts array', () => {
@@ -115,12 +142,13 @@
   it('sorts matched by score descending', () => {
     const matcher = new TagMatcher();
-    const p1 = makePrompt({ name: 'p1', priority: 3, summary: 'mqtt zigbee lights' }); // 3 matches * 3 = 9
-    const p2 = makePrompt({ name: 'p2', priority: 8, summary: 'mqtt' }); // 1 match * 8 = 8
-    const p3 = makePrompt({ name: 'p3', priority: 2, summary: 'mqtt zigbee lights pairing automation' }); // 5 * 2 = 10
+    const p1 = makePrompt({ name: 'p1', priority: 3, summary: 'mqtt zigbee lights' }); // 3 + 3*3 = 12
+    const p2 = makePrompt({ name: 'p2', priority: 8, summary: 'mqtt' }); // 8 + 1*8 = 16
+    const p3 = makePrompt({ name: 'p3', priority: 2, summary: 'mqtt zigbee lights pairing automation' }); // 2 + 5*2 = 12
     const result = matcher.match(['mqtt', 'zigbee', 'lights', 'pairing', 'automation'], [p1, p2, p3]);
-    expect(result.fullContent.map((p) => p.name)).toEqual(['p3', 'p1', 'p2']);
+    // p2 (16) > p1 (12) = p3 (12), tie-break by input order
+    expect(result.fullContent[0]!.name).toBe('p2');
   });
 });
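The scoring model the updated expectations encode (a priority baseline plus a per-matching-tag bonus, with priority 10 pinned to the top) can be sketched as below. This is a hypothetical reimplementation for illustration; the real `TagMatcher` in `src/gate/tag-matcher.ts` additionally applies the byte budget:

```typescript
// Hypothetical sketch of the ranking the updated tests encode.
interface PromptLike { name: string; priority: number; summary: string }

function scorePrompt(tags: string[], p: PromptLike): number {
  if (p.priority === 10) return Infinity; // critical prompts always rank first
  const matches = tags.filter((t) => p.summary.toLowerCase().includes(t.toLowerCase())).length;
  return p.priority + matches * p.priority; // baseline + tag bonus
}

function rank(tags: string[], prompts: PromptLike[]): PromptLike[] {
  return [...prompts].sort((a, b) => {
    const sa = scorePrompt(tags, a);
    const sb = scorePrompt(tags, b);
    // sort() is stable, so equal scores keep input order
    // (this is the p1/p3 tie-break the test relies on)
    if (sa === sb) return 0;
    return sb - sa;
  });
}
```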
@@ -163,3 +191,67 @@
     expect(keywords).toContain('mqtt');
   });
 });
+describe('tokenizeDescription', () => {
+  it('extracts meaningful words from a sentence', () => {
+    const result = tokenizeDescription('I want to get node-red flows');
+    expect(result).toContain('node-red');
+    expect(result).toContain('flows');
+  });
+  it('filters stop words', () => {
+    const result = tokenizeDescription('I want to get the flows for my project');
+    expect(result).not.toContain('want');
+    expect(result).not.toContain('the');
+    expect(result).not.toContain('for');
+    expect(result).toContain('flows');
+    expect(result).toContain('project');
+  });
+  it('filters words shorter than 3 characters', () => {
+    const result = tokenizeDescription('go to my HA setup');
+    expect(result).not.toContain('go');
+    expect(result).not.toContain('to');
+    expect(result).not.toContain('my');
+    expect(result).not.toContain('ha');
+    expect(result).toContain('setup');
+  });
+  it('lowercases all tokens', () => {
+    const result = tokenizeDescription('Configure MQTT Broker Settings');
+    expect(result).toContain('configure');
+    expect(result).toContain('mqtt');
+    expect(result).toContain('broker');
+    expect(result).toContain('settings');
+  });
+  it('caps at 10 keywords', () => {
+    const result = tokenizeDescription(
+      'alpha bravo charlie delta echo foxtrot golf hotel india juliet kilo lima mike november oscar papa',
+    );
+    expect(result.length).toBeLessThanOrEqual(10);
+  });
+  it('deduplicates words', () => {
+    const result = tokenizeDescription('zigbee zigbee zigbee pairing');
+    expect(result.filter((w) => w === 'zigbee')).toHaveLength(1);
+    expect(result).toContain('pairing');
+  });
+  it('handles punctuation and special characters', () => {
+    const result = tokenizeDescription('home-assistant; mqtt/broker (setup)');
+    // Hyphens are preserved within words (compound names)
+    expect(result).toContain('home-assistant');
+    expect(result).toContain('mqtt');
+    expect(result).toContain('broker');
+    expect(result).toContain('setup');
+  });
+  it('returns empty array for empty string', () => {
+    expect(tokenizeDescription('')).toEqual([]);
+  });
+  it('returns empty array for only stop words', () => {
+    expect(tokenizeDescription('I want to get the')).toEqual([]);
+  });
+});
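A tokenizer satisfying the behavior these tests describe (lowercase, split on non-word characters while keeping intra-word hyphens, drop stop words and words under 3 characters, deduplicate, cap at 10) could look roughly like this. Hypothetical sketch only; the real stop-word list in `src/gate/tag-matcher.ts` may differ:

```typescript
// Hypothetical sketch of tokenizeDescription, consistent with the tests above.
// The stop-word list here is an assumption, not the actual one.
const STOP_WORDS = new Set([
  'the', 'for', 'and', 'want', 'get', 'with', 'from', 'that', 'this', 'have',
]);

function tokenizeDescription(description: string): string[] {
  const seen = new Set<string>();
  // Split on anything that is not a letter, digit, or hyphen,
  // so compound names like "home-assistant" survive intact
  for (const raw of description.toLowerCase().split(/[^a-z0-9-]+/)) {
    const word = raw.replace(/^-+|-+$/g, ''); // trim stray leading/trailing hyphens
    if (word.length < 3 || STOP_WORDS.has(word)) continue;
    seen.add(word); // Set handles deduplication
    if (seen.size === 10) break; // cap at 10 keywords
  }
  return [...seen];
}
```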

@@ -1,6 +1,6 @@
 {
   "name": "@mcpctl/shared",
-  "version": "0.1.0",
+  "version": "0.0.1",
   "private": true,
   "type": "module",
   "main": "./dist/index.js",

@@ -1,5 +1,5 @@
 // Shared constants
 export const APP_NAME = 'mcpctl';
-export const APP_VERSION = '0.1.0';
+export const APP_VERSION = '0.0.1';
 export const DEFAULT_MCPD_URL = 'http://localhost:3000';
 export const DEFAULT_DB_PORT = 5432;

@@ -7,7 +7,7 @@ describe('shared package', () => {
   });
   it('exports APP_VERSION constant', () => {
-    expect(APP_VERSION).toBe('0.1.0');
+    expect(APP_VERSION).toBe('0.0.1');
   });
   it('exports DEFAULT_MCPD_URL constant', () => {
templates/docmost.yaml (new file)

@@ -0,0 +1,18 @@
name: docmost
version: "1.0.0"
description: Docmost MCP server for wiki/documentation page management and search
dockerImage: "mysources.co.uk/michal/docmost-mcp:latest"
transport: STDIO
repositoryUrl: https://github.com/MrMartiniMo/docmost-mcp
# Health check disabled: STDIO health probe requires packageName (npm-based servers).
# This server uses a custom dockerImage. Probe support for dockerImage STDIO servers is TODO.
env:
- name: DOCMOST_API_URL
description: Docmost API URL (e.g. http://100.88.157.6:3000/api)
required: true
- name: DOCMOST_EMAIL
description: Docmost user email for authentication
required: true
- name: DOCMOST_PASSWORD
description: Docmost user password for authentication
required: true