feat: v2 3-tier architecture (mcpctl → mcplocal → mcpd) #3

Merged
michal merged 1 commit from feat/v2-architecture into main 2026-02-22 11:44:03 +00:00
82 changed files with 5832 additions and 123 deletions

View File

@@ -12,4 +12,4 @@ dist
 .env.*
 deploy/docker-compose.yml
 src/cli
-src/local-proxy
+src/mcplocal

View File

@@ -0,0 +1,272 @@
# mcpctl v2 - Corrected 3-Tier Architecture PRD
## Overview
mcpctl is a kubectl-inspired system for managing MCP (Model Context Protocol) servers. It consists of 4 components arranged in a 3-tier architecture:
```
Claude Code
|
v (stdio - MCP protocol)
mcplocal (Local Daemon - runs on developer machine)
|
v (HTTP REST)
mcpd (External Daemon - runs on server/NAS)
|
v (Docker API / K8s API)
mcp_servers (MCP server containers)
```
## Components
### 1. mcpctl (CLI Tool)
- **Package**: `src/cli/` (`@mcpctl/cli`)
- **What it is**: kubectl-like CLI for managing the entire system
- **Talks to**: mcplocal (local daemon) via HTTP REST
- **Key point**: mcpctl does NOT talk to mcpd directly. It always goes through mcplocal.
- **Distributed as**: RPM package via Gitea registry (bun compile + nfpm)
- **Commands**: get, describe, apply, setup, instance, claude, project, backup, restore, config, status
### 2. mcplocal (Local Daemon)
- **Package**: `src/local-proxy/` (rename to `src/mcplocal/`)
- **What it is**: Local daemon running on the developer's machine
- **Talks to**: mcpd (external daemon) via HTTP REST
- **Exposes to Claude**: MCP protocol via stdio (tools, resources, prompts)
- **Exposes to mcpctl**: HTTP REST API for management commands
**Core responsibility: LLM Pre-processing**
This is the intelligence layer. When Claude asks for data from MCP servers, mcplocal:
1. Receives Claude's request (e.g., "get Slack messages about security")
2. Uses a local/cheap LLM (Gemini CLI binary, Ollama, vLLM, DeepSeek API) to interpret what Claude actually wants
3. Sends narrow, filtered requests to mcpd which forwards to the actual MCP servers
4. Receives raw results from MCP servers (via mcpd)
5. Uses the local LLM again to filter/summarize results - extracting only what's relevant
6. Returns the smallest response that still fully answers the request to Claude
**Why**: Claude Code tokens are expensive. Instead of dumping 500 Slack messages into Claude's context window, mcplocal uses a cheap LLM to pre-filter to the 12 relevant ones.
**LLM Provider Strategy** (already partially exists):
- Gemini CLI binary (local, free)
- Ollama (local, free)
- vLLM (local, free)
- DeepSeek API (cheap)
- OpenAI API (fallback)
- Anthropic API (fallback)
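The preference order above can be sketched as a fallback chain. This is a minimal illustration, not the project's actual adapter interface; the synchronous `isAvailable()` check is a simplification (real adapters would probe asynchronously).

```typescript
// Illustrative provider shape; names and fields are assumptions.
interface LlmProvider {
  name: string;
  cost: "free" | "cheap" | "fallback";
  isAvailable: () => boolean; // real adapters would probe asynchronously
}

// Pick the first available provider; callers order the list local/free first,
// paid fallbacks last, matching the strategy above.
function pickProvider(providers: LlmProvider[]): LlmProvider | undefined {
  return providers.find((p) => p.isAvailable());
}
```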
**Additional mcplocal responsibilities**:
- MCP protocol routing (namespace tools: `slack/send_message`, `jira/create_issue`)
- Connection health monitoring for upstream MCP servers
- Caching frequently requested data
- Proxying mcpctl management commands to mcpd
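The namespacing scheme above (`slack/send_message`, `jira/create_issue`) implies a split on the first `/` to recover the upstream server and the original tool name — a sketch, assuming the router's actual helper may differ:

```typescript
// Split a namespaced tool name like "slack/send_message" into its
// upstream server ("slack") and the un-prefixed tool ("send_message").
function splitNamespacedTool(name: string): { server: string; tool: string } {
  const idx = name.indexOf("/");
  if (idx <= 0 || idx === name.length - 1) {
    throw new Error(`not a namespaced tool name: ${name}`);
  }
  return { server: name.slice(0, idx), tool: name.slice(idx + 1) };
}
```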
### 3. mcpd (External Daemon)
- **Package**: `src/mcpd/` (`@mcpctl/mcpd`)
- **What it is**: Server-side daemon that runs on centralized infrastructure (Synology NAS, cloud server, etc.)
- **Deployed via**: Docker Compose (Dockerfile + docker-compose.yml)
- **Database**: PostgreSQL for state, audit logs, access control
**Core responsibilities**:
- **Deploy and run MCP server containers** (Docker now, Kubernetes later)
- **Instance lifecycle management**: start, stop, restart, logs, inspect
- **MCP server registry**: Store server definitions, configuration templates, profiles
- **Project management**: Group MCP profiles into projects for Claude sessions
- **Auditing**: Log every operation - who ran what, when, with what result
- **Access management**: Users, sessions, permissions - who can access which MCP servers
- **Credential storage**: MCP servers often need API tokens (Slack, Jira, GitHub) - stored securely on server side, never exposed to local machine
- **Backup/restore**: Export and import configuration
**Key point**: mcpd holds the credentials. When mcplocal asks mcpd to query Slack, mcpd runs the Slack MCP server container with the proper SLACK_TOKEN injected - mcplocal never sees the token.
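A minimal sketch of that credential boundary, with a hypothetical in-memory secret store standing in for mcpd's real credential storage: the request from mcplocal names only the server, and mcpd merges stored secrets into the container environment at start time.

```typescript
// Hypothetical server-side secret store (placeholder values, not real tokens).
const secretStore = new Map<string, Record<string, string>>([
  ["slack", { SLACK_TOKEN: "stored-server-side" }],
]);

// mcpd-side helper: credentials are merged into the container env here and
// never appear in any response sent back to mcplocal.
function buildContainerEnv(
  serverId: string,
  baseEnv: Record<string, string>,
): Record<string, string> {
  const secrets = secretStore.get(serverId) ?? {};
  return { ...baseEnv, ...secrets };
}
```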
### 4. mcp_servers (MCP Server Containers)
- **What they are**: The actual MCP server processes (Slack, Jira, GitHub, Terraform, filesystem, postgres, etc.)
- **Managed by**: mcpd via Docker/Podman API
- **Network**: Isolated network, only accessible by mcpd
- **Credentials**: Injected by mcpd as environment variables
- **Communication**: MCP protocol (stdio or SSE/HTTP) between mcpd and the containers
## Data Flow Examples
### Example 1: Claude asks for Slack messages
```
Claude: "Get messages about security incidents from the last week"
|
v (MCP tools/call: slack/search_messages)
mcplocal:
1. Intercepts the tool call
2. Calls local Gemini: "User wants security incident messages from last week.
Generate optimal Slack search query and date filters."
3. Gemini returns: query="security incident OR vulnerability OR CVE", after="2024-01-15"
4. Sends filtered request to mcpd
|
v (HTTP POST /api/v1/mcp/proxy)
mcpd:
1. Looks up Slack MCP instance (injects SLACK_TOKEN)
2. Forwards narrowed query to Slack MCP server container
3. Returns raw results (200 messages)
|
v (response)
mcplocal:
1. Receives 200 messages
2. Calls local Gemini: "Filter these 200 Slack messages. Keep only those
directly about security incidents. Return message IDs and 1-line summaries."
3. Gemini returns: 15 relevant messages with summaries
4. Returns filtered result to Claude
|
v (MCP response: 15 messages instead of 200)
Claude: processes only the relevant 15 messages
```
### Example 2: mcpctl management command
```
$ mcpctl get servers
|
v (HTTP GET)
mcplocal:
1. Recognizes this is a management command (not MCP data)
2. Proxies directly to mcpd (no LLM processing needed)
|
v (HTTP GET /api/v1/servers)
mcpd:
1. Queries PostgreSQL for server definitions
2. Returns list
|
v (proxied response)
mcplocal -> mcpctl -> formatted table output
```
### Example 3: mcpctl instance management
```
$ mcpctl instance start slack
|
v
mcplocal -> mcpd:
1. Creates Docker container for Slack MCP server
2. Injects SLACK_TOKEN from secure storage
3. Connects to isolated mcp-servers network
4. Logs audit entry: "user X started slack instance"
5. Returns instance status
```
## What Already Exists (completed work)
### Done and reusable as-is:
- Project structure: pnpm monorepo, TypeScript strict mode, Vitest, ESLint
- Database schema: Prisma + PostgreSQL (User, McpServer, McpProfile, Project, McpInstance, AuditLog)
- mcpd server framework: Fastify 5, routes, services, repositories, middleware
- mcpd MCP server CRUD: registration, profiles, projects
- mcpd Docker container management: dockerode, instance lifecycle
- mcpd audit logging, health monitoring, metrics, backup/restore
- mcpctl CLI framework: Commander.js, commands, config, API client, formatters
- mcpctl RPM distribution: bun compile, nfpm, Gitea publishing, shell completions
- MCP protocol routing in local-proxy: namespace tools, resources, prompts
- LLM provider abstractions: OpenAI, Anthropic, Ollama adapters (defined but unused)
- Shared types and profile templates
### Needs rework:
- mcpctl currently talks to mcpd directly -> must talk to mcplocal instead
- local-proxy is just a dumb router -> needs LLM pre-processing intelligence
- local-proxy has no HTTP API for mcpctl -> needs REST endpoints for management proxying
- mcpd has no MCP proxy endpoint -> needs endpoint that mcplocal can call to execute MCP tool calls on managed instances
- No integration between LLM providers and MCP request/response pipeline
## New Tasks Needed
### Phase 1: Rename and restructure local-proxy -> mcplocal
- Rename `src/local-proxy/` to `src/mcplocal/`
- Update all package references and imports
- Add HTTP REST server (Fastify) alongside existing stdio server
- mcplocal needs TWO interfaces: stdio for Claude, HTTP for mcpctl
### Phase 2: mcplocal management proxy
- Add REST endpoints that mirror mcpd's API (get servers, instances, projects, etc.)
- mcpctl config changes: `daemonUrl` now points to mcplocal (e.g., localhost:3200) instead of mcpd
- mcplocal proxies management requests to mcpd (configurable `mcpdUrl` e.g., http://nas:3100)
- Pass-through with no LLM processing for management commands
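The two-hop config split above, sketched with the key names and defaults given in this PRD (the concrete config file format is an assumption):

```typescript
// mcpctl targets the local daemon; mcplocal targets the external daemon.
interface McpctlConfig {
  daemonUrl: string; // mcpctl -> mcplocal
}
interface McplocalConfig {
  mcpdUrl: string; // mcplocal -> mcpd
}

const defaultMcpctlConfig: McpctlConfig = { daemonUrl: "http://localhost:3200" };
const defaultMcplocalConfig: McplocalConfig = { mcpdUrl: "http://nas:3100" };
```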
### Phase 3: mcpd MCP proxy endpoint
- Add `/api/v1/mcp/proxy` endpoint to mcpd
- Accepts: `{ serverId, method, params }` - execute an MCP tool call on a managed instance
- mcpd looks up the instance, connects to the container, executes the MCP call, returns result
- This is how mcplocal talks to MCP servers without needing direct Docker access
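The `{ serverId, method, params }` payload named above, with a defensive parser as mcpd would need at the endpoint boundary — a sketch, not a spec:

```typescript
// Proxy request body for the proposed /api/v1/mcp/proxy endpoint.
interface McpProxyRequest {
  serverId: string; // which managed MCP instance to target
  method: string;   // e.g. "tools/call"
  params?: unknown; // MCP method params, passed through verbatim
}

// Returns null for malformed bodies instead of throwing, so the route
// handler can map that to a 400 response.
function parseProxyRequest(body: unknown): McpProxyRequest | null {
  if (typeof body !== "object" || body === null) return null;
  const b = body as Record<string, unknown>;
  if (typeof b.serverId !== "string" || typeof b.method !== "string") return null;
  return { serverId: b.serverId, method: b.method, params: b.params };
}
```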
### Phase 4: LLM pre-processing pipeline in mcplocal
- Create request interceptor in mcplocal's MCP router
- Before forwarding `tools/call` to mcpd, run the request through LLM for interpretation
- After receiving response from mcpd, run through LLM for filtering/summarization
- LLM provider selection based on config (prefer local/cheap models)
- Configurable: enable/disable pre-processing per server or per tool
- Bypass for simple operations (list, create, delete - no filtering needed)
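The interceptor shape implied by the bullets above: narrow the request before forwarding, filter the response after, and bypass both for simple operations. Shown synchronously for brevity (the real pipeline would be async), with an illustrative bypass list:

```typescript
type Llm = (instruction: string, payload: string) => string;
type Forward = (params: string) => string;

// Simple operations that skip pre-processing (illustrative list).
const BYPASS_METHODS = new Set(["tools/list", "resources/list", "prompts/list"]);

function interceptToolCall(
  method: string,
  params: string,
  forward: Forward,   // sends the request on to mcpd
  llm: Llm | null,    // null -> pre-processing disabled for this server/tool
): string {
  if (llm === null || BYPASS_METHODS.has(method)) return forward(params);
  const narrowed = llm("narrow this request", params); // pre-pass
  const raw = forward(narrowed);                       // round-trip via mcpd
  return llm("keep only relevant results", raw);       // post-pass filter
}
```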
### Phase 5: Smart context optimization
- Token counting: estimate how many tokens the raw response would consume
- Decision logic: if raw response < threshold, skip LLM filtering (not worth the latency)
- If raw response > threshold, filter with LLM
- Cache LLM filtering decisions for repeated similar queries
- Metrics: track tokens saved, latency added by filtering
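The threshold decision above can be sketched with a rough chars/4 token estimate (a common heuristic, an assumption here rather than the project's counter):

```typescript
// Rough token estimate: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Only pay the LLM-filtering latency when the raw response is large enough
// for the token savings to matter.
function shouldFilter(rawResponse: string, thresholdTokens = 1000): boolean {
  return estimateTokens(rawResponse) > thresholdTokens;
}
```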
### Phase 6: mcpctl -> mcplocal migration
- Update mcpctl's default daemonUrl to point to mcplocal (localhost:3200)
- Update all CLI commands to work through mcplocal proxy
- Add `mcpctl config set mcpd-url <url>` for configuring upstream mcpd
- Add `mcpctl config set mcplocal-url <url>` for configuring local daemon
- Health check: `mcpctl status` shows both mcplocal and mcpd connectivity
- Shell completions update if needed
### Phase 7: End-to-end integration testing
- Test full flow: mcpctl -> mcplocal -> mcpd -> mcp_server -> response -> LLM filter -> Claude
- Test management commands pass through correctly
- Test LLM pre-processing reduces context window size
- Test credential isolation (mcplocal never sees MCP server credentials)
- Test health monitoring across all tiers
## Authentication & Authorization
### Database ownership
- **mcpd owns the database** (PostgreSQL). It is the only component that talks to the DB.
- mcplocal has NO database. It is stateless (config file only).
- mcpctl has NO database. It stores user credentials locally in `~/.mcpctl/config.yaml`.
### Auth flow
```
mcpctl login
|
v (user enters mcpd URL + credentials)
mcpctl stores API token in ~/.mcpctl/config.yaml
|
v (passes token to mcplocal config)
mcplocal authenticates to mcpd using Bearer token on every request
|
v (Authorization: Bearer <token>)
mcpd validates token against Session table in PostgreSQL
|
v (authenticated request proceeds)
```
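The Bearer handoff in the flow above, as a sketch: mcplocal attaches the token from its config, and mcpd extracts it before the Session-table lookup. Helper names are illustrative, not the project's middleware API.

```typescript
// mcplocal side: build the header attached to every request to mcpd.
function bearerHeader(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}

// mcpd side: extract the token, or null for missing/malformed headers
// (the middleware would turn null into a 401).
function extractBearer(header: string | undefined): string | null {
  if (header === undefined || !header.startsWith("Bearer ")) return null;
  const token = header.slice("Bearer ".length).trim();
  return token.length > 0 ? token : null;
}
```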
### mcpctl responsibilities
- `mcpctl login` command: prompts user for mcpd URL and credentials (username/password or API token)
- `mcpctl login` calls mcpd's auth endpoint to get a session token
- Stores the token in `~/.mcpctl/config.yaml` (or `~/.mcpctl/credentials` with restricted permissions)
- Passes the token to mcplocal (either via config or as startup argument)
- `mcpctl logout` command: invalidates the session token
### mcplocal responsibilities
- Reads auth token from its config (set by mcpctl)
- Attaches `Authorization: Bearer <token>` header to ALL requests to mcpd
- If mcpd returns 401, mcplocal returns appropriate error to mcpctl/Claude
- Does NOT store credentials itself - they come from mcpctl's config
### mcpd responsibilities
- Owns User and Session tables
- Provides auth endpoints: `POST /api/v1/auth/login`, `POST /api/v1/auth/logout`
- Validates Bearer tokens on every request via auth middleware (already exists)
- Returns 401 for invalid/expired tokens
- Audit logs include the authenticated user
## Non-functional Requirements
- mcplocal must start fast (developer's machine, runs per-session or as daemon)
- LLM pre-processing must not add more than 2-3 seconds latency
- If local LLM is unavailable, fall back to passing data through unfiltered
- All components must be independently deployable and testable
- mcpd must remain stateless (outside of DB) and horizontally scalable
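The unfiltered-fallback requirement above reduces to a degrade-don't-fail wrapper — a sketch (a production version would also race the filter against the 2-3 second latency budget; shown synchronously here):

```typescript
// Any failure in the local LLM degrades to pass-through instead of an error.
function filterOrPassThrough(raw: string, filter: (s: string) => string): string {
  try {
    return filter(raw);
  } catch {
    return raw; // local LLM unavailable -> return the data unfiltered
  }
}
```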

View File

@@ -531,8 +531,9 @@
 "7",
 "4"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [],
+"updatedAt": "2026-02-21T05:14:48.368Z"
 },
 {
 "id": "10",
@@ -545,8 +546,9 @@
 "7",
 "5"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [],
+"updatedAt": "2026-02-21T05:17:02.390Z"
 },
 {
 "id": "11",
@@ -558,9 +560,9 @@
 "dependencies": [
 "1"
 ],
-"status": "in-progress",
+"status": "done",
 "subtasks": [],
-"updatedAt": "2026-02-21T04:56:01.658Z"
+"updatedAt": "2026-02-21T05:00:28.388Z"
 },
 {
 "id": "12",
@@ -572,8 +574,74 @@
 "dependencies": [
 "11"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [
{
"id": 1,
"title": "Create main.ts entry point with configuration loading",
"description": "Implement the main.ts entry point that reads proxy configuration from file or CLI arguments, initializes upstreams based on config, and boots the StdioProxyServer.",
"dependencies": [],
"details": "Create src/local-proxy/src/main.ts that: 1) Parses command-line arguments (--config flag for JSON config path, or individual --upstream flags), 2) Loads ProxyConfig from JSON file if specified, 3) Instantiates StdioUpstream or HttpUpstream for each UpstreamConfig based on transport type, 4) Calls start() on each StdioUpstream to spawn child processes, 5) Adds all upstreams to McpRouter via addUpstream(), 6) Creates StdioProxyServer with the router and calls start(), 7) Handles SIGTERM/SIGINT for graceful shutdown calling router.closeAll(). Use a simple arg parser or process.argv directly. Export a main() function and call it when run directly.",
"status": "done",
"testStrategy": "Test config file loading with valid/invalid JSON. Test CLI argument parsing. Integration test: spawn proxy with mock upstream config and verify it starts and responds to initialize request.",
"parentId": "undefined",
"updatedAt": "2026-02-21T05:05:48.624Z"
},
{
"id": 2,
"title": "Add resource forwarding support to McpRouter",
"description": "Extend McpRouter to handle resources/list and resources/read methods, forwarding them to upstream servers with proper namespacing similar to tools.",
"dependencies": [
1
],
"details": "Modify src/local-proxy/src/router.ts to: 1) Add a resourceToServer Map similar to toolToServer, 2) Create discoverResources() method that calls resources/list on each upstream and aggregates results with namespaced URIs (e.g., 'servername://resource'), 3) Add 'resources' to capabilities in initialize response, 4) Handle 'resources/list' in route() by calling discoverResources(), 5) Handle 'resources/read' by parsing the namespaced URI, extracting server name, stripping prefix, and forwarding to correct upstream, 6) Handle 'resources/subscribe' and 'resources/unsubscribe' if needed for completeness. Update types.ts if additional resource-related types are needed.",
"status": "done",
"testStrategy": "Unit test discoverResources() with mocked upstreams returning different resources. Test resources/read routing extracts correct server and forwards properly. Test error handling when resource URI has unknown server prefix.",
"parentId": "undefined",
"updatedAt": "2026-02-21T05:05:48.626Z"
},
{
"id": 3,
"title": "Add prompt forwarding support to McpRouter",
"description": "Extend McpRouter to handle prompts/list and prompts/get methods, forwarding them to upstream servers with proper namespacing.",
"dependencies": [
1
],
"details": "Modify src/local-proxy/src/router.ts to: 1) Add a promptToServer Map for tracking prompt origins, 2) Create discoverPrompts() method that calls prompts/list on each upstream and aggregates with namespaced names (e.g., 'servername/prompt-name'), 3) Add 'prompts' to capabilities in initialize response, 4) Handle 'prompts/list' in route() by calling discoverPrompts(), 5) Handle 'prompts/get' by parsing namespaced prompt name, extracting server, stripping prefix, and forwarding to correct upstream. Follow same pattern as tools for consistency.",
"status": "done",
"testStrategy": "Unit test discoverPrompts() with mocked upstreams. Test prompts/get routing correctly forwards to upstream. Test error handling for unknown prompt names.",
"parentId": "undefined",
"updatedAt": "2026-02-21T05:05:48.638Z"
},
{
"id": 4,
"title": "Implement notification forwarding from upstreams to client",
"description": "Add support for forwarding JSON-RPC notifications from upstream servers to the proxy client, enabling real-time updates like progress notifications.",
"dependencies": [
1
],
"details": "Modify upstream classes and server: 1) Add onNotification callback to UpstreamConnection interface in types.ts, 2) Update StdioUpstream to detect notifications (messages without 'id' field) in stdout handler and invoke onNotification callback with namespaced method if needed, 3) Update HttpUpstream if SSE support is needed (may require EventSource or SSE client for true streaming), 4) Add setNotificationHandler(callback) method to McpRouter that registers handler and wires it to all upstreams, 5) Update StdioProxyServer to call router.setNotificationHandler() with a function that writes notification JSON to stdout, 6) Consider namespacing notification params to indicate source server.",
"status": "done",
"testStrategy": "Test StdioUpstream correctly identifies and forwards notifications. Integration test: upstream sends progress notification, verify proxy forwards it to stdout. Test notifications are properly namespaced with source server name.",
"parentId": "undefined",
"updatedAt": "2026-02-21T05:05:48.641Z"
},
{
"id": 5,
"title": "Implement connection health monitoring with reconnection",
"description": "Add health monitoring for upstream connections with automatic status tracking, health check pings, and reconnection logic for failed STDIO upstreams.",
"dependencies": [
1,
4
],
"details": "Create src/local-proxy/src/health.ts with HealthMonitor class: 1) Track connection state for each upstream (healthy, degraded, disconnected), 2) Implement periodic health checks using ping/pong or a lightweight method like calling initialize, 3) Emit health status change events via EventEmitter pattern, 4) Add reconnection logic for StdioUpstream: detect process exit, attempt restart with exponential backoff (1s, 2s, 4s... max 30s), 5) Update McpRouter to accept HealthMonitor instance and use it to filter available upstreams, 6) Add health status to proxy logs/stderr for debugging, 7) Optionally expose health status via a special proxy method (e.g., 'proxy/health'). Update main.ts to instantiate and wire HealthMonitor.",
"status": "done",
"testStrategy": "Test health check detects unresponsive upstream. Test reconnection attempts with mocked process that fails then succeeds. Test exponential backoff timing. Test degraded upstream is excluded from tool discovery until healthy.",
"parentId": "undefined",
"updatedAt": "2026-02-21T05:05:48.643Z"
}
],
"updatedAt": "2026-02-21T05:05:48.643Z"
 },
 {
 "id": "13",
@@ -585,8 +653,9 @@
 "dependencies": [
 "12"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [],
+"updatedAt": "2026-02-21T05:22:44.011Z"
 },
 {
 "id": "14",
@@ -598,8 +667,9 @@
 "dependencies": [
 "3"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [],
+"updatedAt": "2026-02-21T05:09:18.694Z"
 },
 {
 "id": "15",
@@ -611,8 +681,71 @@
 "dependencies": [
 "4"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [
{
"id": 1,
"title": "Define Profile Template Types and Schemas",
"description": "Create TypeScript interfaces and Zod validation schemas for profile templates that extend the existing McpProfile type.",
"dependencies": [],
"details": "Create src/shared/src/profiles/types.ts with ProfileTemplate interface containing: id, serverType, name, displayName, description, category (filesystem/database/integration/etc), command, args, requiredEnvVars (with EnvTemplateEntry array), optionalEnvVars, defaultPermissions, setupInstructions, and documentationUrl. Also create profileTemplateSchema.ts with Zod schemas for validation. The templates should be immutable definitions that can be instantiated into actual profiles.",
"status": "pending",
"testStrategy": "Unit test Zod schemas with valid and invalid template data. Verify type compatibility with existing McpServerConfig and McpProfile types.",
"parentId": "undefined"
},
{
"id": 2,
"title": "Implement Common MCP Server Profile Templates",
"description": "Create profile template definitions for common MCP servers including filesystem, github, postgres, slack, and other popular integrations.",
"dependencies": [
1
],
"details": "Create src/shared/src/profiles/templates/ directory with individual template files: filesystem.ts (npx @modelcontextprotocol/server-filesystem with path args), github.ts (npx @modelcontextprotocol/server-github with GITHUB_TOKEN env), postgres.ts (npx @modelcontextprotocol/server-postgres with DATABASE_URL), slack.ts (npx @modelcontextprotocol/server-slack with SLACK_TOKEN), memory.ts, and fetch.ts. Each template exports a ProfileTemplate constant with pre-configured best-practice settings. Include clear descriptions and setup guides for each.",
"status": "pending",
"testStrategy": "Validate each template against the ProfileTemplate Zod schema. Verify all required fields are populated. Test that commands and args are syntactically correct.",
"parentId": "undefined"
},
{
"id": 3,
"title": "Build Profile Registry with Lookup and Filtering",
"description": "Create a profile registry that aggregates all templates and provides lookup, filtering, and search capabilities.",
"dependencies": [
1,
2
],
"details": "Create src/shared/src/profiles/registry.ts implementing a ProfileRegistry class with methods: getAll(), getById(id), getByCategory(category), getByServerType(type), search(query), and getCategories(). The registry should be a singleton that lazily loads all templates from the templates directory. Export a default registry instance. Also create src/shared/src/profiles/index.ts to export all profile-related types, templates, and the registry.",
"status": "pending",
"testStrategy": "Test registry initialization loads all templates. Test each lookup method returns correct results. Test search functionality with partial matches. Verify no duplicate IDs across templates.",
"parentId": "undefined"
},
{
"id": 4,
"title": "Add Profile Validation and Instantiation Utilities",
"description": "Create utility functions to validate profile templates and instantiate them into concrete profile configurations.",
"dependencies": [
1,
3
],
"details": "Create src/shared/src/profiles/utils.ts with functions: validateTemplate(template) - validates a ProfileTemplate against schema, instantiateProfile(templateId, envValues) - creates a concrete profile config from a template by filling in env vars, validateEnvValues(template, envValues) - checks if all required env vars are provided, getMissingEnvVars(template, envValues) - returns list of missing required env vars, and generateMcpJsonEntry(profile) - converts instantiated profile to .mcp.json format entry.",
"status": "pending",
"testStrategy": "Test validateTemplate with valid and invalid templates. Test instantiateProfile produces correct configs. Test env validation catches missing required vars. Test .mcp.json output matches expected format.",
"parentId": "undefined"
},
{
"id": 5,
"title": "Export Profiles Module and Add Integration Tests",
"description": "Export the profiles module from shared package main entry and create comprehensive integration tests.",
"dependencies": [
3,
4
],
"details": "Update src/shared/src/index.ts to add 'export * from ./profiles/index.js'. Create src/shared/src/profiles/__tests__/profiles.test.ts with tests covering: all templates are valid, registry contains expected templates, instantiation works for each template type, .mcp.json generation produces valid output, and round-trip validation (instantiate then validate). Also add documentation comments to all exported functions and types.",
"status": "pending",
"testStrategy": "Run full test suite with vitest. Verify exports are accessible from @mcpctl/shared. Integration test the full workflow: lookup template, validate, instantiate with env vars, generate .mcp.json entry.",
"parentId": "undefined"
}
],
"updatedAt": "2026-02-21T05:26:02.010Z"
 },
 {
 "id": "16",
@@ -624,8 +757,9 @@
 "dependencies": [
 "6"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [],
+"updatedAt": "2026-02-21T05:11:52.795Z"
 },
 {
 "id": "17",
@@ -637,8 +771,70 @@
 "dependencies": [
 "6"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [
{
"id": 1,
"title": "Create K8s API HTTP client and connection handling",
"description": "Implement a Kubernetes API client using node:http/https to communicate with the K8s API server, including authentication, TLS handling, and base request/response utilities.",
"dependencies": [],
"details": "Create src/mcpd/src/services/k8s/k8s-client.ts with: 1) K8sClientConfig interface supporting kubeconfig file parsing, in-cluster config detection, and direct API server URL/token config. 2) HTTP client wrapper using node:http/https that handles TLS certificates, bearer token auth, and API versioning. 3) Base request methods (get, post, delete, patch) with proper error handling and response parsing. 4) Support for watching resources with streaming responses. Reference the Docker container-manager.ts pattern for constructor options and ping() implementation.",
"status": "pending",
"testStrategy": "Unit tests with mocked HTTP responses for successful API calls, auth failures, connection errors. Test kubeconfig parsing with sample config files. Test in-cluster config detection by mocking environment variables and service account token file.",
"parentId": "undefined"
},
{
"id": 2,
"title": "Implement K8s manifest generation for MCP servers",
"description": "Create manifest generator that converts ContainerSpec to Kubernetes Pod and Deployment YAML/JSON specifications with proper resource limits and security contexts.",
"dependencies": [
1
],
"details": "Create src/mcpd/src/services/k8s/manifest-generator.ts with: 1) generatePodSpec(spec: ContainerSpec, namespace: string) that creates a Pod manifest with container image, env vars, resource limits (CPU/memory from spec.nanoCpus and spec.memoryLimit), and labels including mcpctl.managed=true. 2) generateDeploymentSpec() for replicated deployments with selector labels. 3) generateServiceSpec() for exposing container ports. 4) Security context configuration (non-root user, read-only root filesystem, drop capabilities). 5) Map ContainerSpec fields to K8s equivalents (memoryLimit to resources.limits.memory, nanoCpus to resources.limits.cpu, etc.).",
"status": "pending",
"testStrategy": "Unit tests validating generated manifests match expected K8s spec structure. Test resource limit conversion (bytes to Ki/Mi/Gi, nanoCPUs to millicores). Test label propagation from ContainerSpec.labels. Validate manifests against K8s API schema if possible.",
"parentId": "undefined"
},
{
"id": 3,
"title": "Implement KubernetesOrchestrator class with McpOrchestrator interface",
"description": "Create the main KubernetesOrchestrator class that implements the McpOrchestrator interface using the K8s client and manifest generator.",
"dependencies": [
1,
2
],
"details": "Create src/mcpd/src/services/k8s/kubernetes-orchestrator.ts implementing McpOrchestrator interface: 1) Constructor accepting K8sClientConfig and default namespace. 2) ping() - call /api/v1 endpoint to verify cluster connectivity. 3) pullImage() - no-op for K8s (images pulled on pod schedule) or optionally create a pre-pull DaemonSet. 4) createContainer(spec) - generate Pod/Deployment manifest, POST to K8s API, wait for pod Ready condition, return ContainerInfo with pod name as containerId. 5) stopContainer(containerId) - scale deployment to 0 or delete pod. 6) removeContainer(containerId) - DELETE the pod/deployment resource. 7) inspectContainer(containerId) - GET pod status, map phase to ContainerInfo state (Running→running, Pending→starting, Failed→error, etc.). 8) getContainerLogs(containerId) - GET /api/v1/namespaces/{ns}/pods/{name}/log endpoint.",
"status": "pending",
"testStrategy": "Integration tests with mocked K8s API responses for each method. Test createContainer returns valid ContainerInfo with mapped state. Test state mapping from K8s pod phases. Test log retrieval with tail and since parameters. Test error handling when pod not found or API errors.",
"parentId": "undefined"
},
{
"id": 4,
"title": "Add namespace and multi-namespace support",
"description": "Extend KubernetesOrchestrator to support configurable namespaces, namespace creation, and querying resources across namespaces.",
"dependencies": [
3
],
"details": "Enhance src/mcpd/src/services/k8s/kubernetes-orchestrator.ts with: 1) Add namespace parameter to ContainerSpec or use labels to specify target namespace. 2) ensureNamespace(name) method that creates namespace if not exists (POST /api/v1/namespaces). 3) listContainers(namespace?: string) method to list all mcpctl-managed pods in a namespace or all namespaces. 4) Add namespace to ContainerInfo response. 5) Support 'default' namespace fallback and configurable default namespace in constructor. 6) Add namespace label to generated manifests for filtering. 7) Validate namespace names (DNS-1123 label format).",
"status": "pending",
"testStrategy": "Test namespace creation with mocked API. Test namespace validation for invalid names. Test listing pods across namespaces. Test ContainerInfo includes correct namespace. Test default namespace fallback behavior.",
"parentId": "undefined"
},
{
"id": 5,
"title": "Add comprehensive tests and module exports",
"description": "Create unit tests with mocked K8s API responses, integration test utilities, and export the KubernetesOrchestrator from the services module.",
"dependencies": [
3,
4
],
"details": "1) Create src/mcpd/src/services/k8s/index.ts exporting KubernetesOrchestrator, K8sClientConfig, and helper types. 2) Update src/mcpd/src/services/index.ts to export k8s module. 3) Create src/mcpd/src/services/k8s/__tests__/kubernetes-orchestrator.test.ts with mocked HTTP responses using vitest's mock system. 4) Create mock-k8s-api.ts helper that simulates K8s API responses (pod list, pod status, logs, errors). 5) Test all McpOrchestrator interface methods with success and error cases. 6) Add tests for resource limit edge cases (0 memory, very high CPU). 7) Document usage examples in code comments showing how to switch from DockerContainerManager to KubernetesOrchestrator.",
"status": "pending",
"testStrategy": "Ensure all tests pass with mocked responses. Verify test coverage for all public methods. Test error scenarios (404 pod not found, 403 forbidden, 500 server error). Optional: Add integration test script that runs against kind/minikube if available.",
"parentId": "undefined"
}
],
"updatedAt": "2026-02-21T05:30:53.921Z"
 },
 {
 "id": "18",
@@ -653,8 +849,9 @@
 "9",
 "10"
 ],
-"status": "pending",
-"subtasks": []
+"status": "done",
+"subtasks": [],
+"updatedAt": "2026-02-21T05:19:02.525Z"
 },
 {
 "id": "19",
@@ -662,7 +859,7 @@
 "description": "Merged into Task 3 subtasks",
 "details": null,
 "testStrategy": null,
-"priority": null,
+"priority": "low",
 "dependencies": [],
 "status": "cancelled",
 "subtasks": [],
@@ -674,7 +871,7 @@
"description": "Merged into Task 5", "description": "Merged into Task 5",
"details": null, "details": null,
"testStrategy": null, "testStrategy": null,
"priority": null, "priority": "low",
"dependencies": [], "dependencies": [],
"status": "cancelled", "status": "cancelled",
"subtasks": [], "subtasks": [],
@@ -686,7 +883,7 @@
"description": "Merged into Task 14", "description": "Merged into Task 14",
"details": null, "details": null,
"testStrategy": null, "testStrategy": null,
"priority": null, "priority": "low",
"dependencies": [], "dependencies": [],
"status": "cancelled", "status": "cancelled",
"subtasks": [], "subtasks": [],
@@ -703,8 +900,72 @@
"6", "6",
"14" "14"
], ],
"status": "pending", "status": "done",
"subtasks": [] "subtasks": [
{
"id": 1,
"title": "Create MetricsCollector Service",
"description": "Implement a MetricsCollector service in src/mcpd/src/services/metrics-collector.ts that tracks instance health metrics, uptime, request counts, error rates, and resource usage data.",
"dependencies": [],
"details": "Create MetricsCollector class with methods: recordRequest(), recordError(), updateInstanceMetrics(), getMetrics(). Store metrics in-memory using Map<instanceId, InstanceMetrics>. Define InstanceMetrics interface with fields: instanceId, status, uptime, requestCount, errorCount, lastRequestAt, memoryUsage, cpuUsage. Inject IMcpInstanceRepository and McpOrchestrator dependencies to gather real-time instance status from containers. Export service from src/mcpd/src/services/index.ts.",
"status": "pending",
"testStrategy": "Unit tests with mocked repository and orchestrator dependencies. Test metric recording, aggregation, and retrieval. Verify error rate calculations and uptime tracking accuracy.",
"parentId": "undefined"
},
{
"id": 2,
"title": "Implement Health Aggregation Service",
"description": "Create a HealthAggregator service that computes overall system health by aggregating health status across all MCP server instances.",
"dependencies": [
1
],
"details": "Add HealthAggregator class in src/mcpd/src/services/health-aggregator.ts. Methods: getOverview() returns SystemHealth with totalInstances, healthyCount, unhealthyCount, errorCount, and overallStatus (healthy/degraded/unhealthy). Use MetricsCollector to gather per-instance metrics. Include orchestrator.ping() check for runtime availability. Compute aggregate error rate and average uptime. Export from services/index.ts.",
"status": "pending",
"testStrategy": "Unit tests with mocked MetricsCollector. Test aggregation logic for various instance states. Verify overall status determination rules (e.g., >50% unhealthy = degraded).",
"parentId": "undefined"
},
{
"id": 3,
"title": "Create Health Monitoring REST Endpoints",
"description": "Implement REST endpoints for health monitoring: GET /api/v1/health/overview, GET /api/v1/health/instances/:id, and GET /api/v1/metrics in src/mcpd/src/routes/health-monitoring.ts.",
"dependencies": [
1,
2
],
"details": "Create registerHealthMonitoringRoutes(app, deps) function. GET /api/v1/health/overview returns SystemHealth from HealthAggregator.getOverview(). GET /api/v1/health/instances/:id returns InstanceMetrics for specific instance from MetricsCollector. GET /api/v1/metrics returns all metrics in Prometheus-compatible format or JSON. Add proper error handling for 404 when instance not found. Register routes in src/mcpd/src/routes/index.ts and wire up in server.ts.",
"status": "pending",
"testStrategy": "Integration tests using Fastify inject(). Test all three endpoints with mocked services. Verify 200 responses with correct payload structure, 404 for missing instances.",
"parentId": "undefined"
},
{
"id": 4,
"title": "Add Request/Error Metrics Middleware",
"description": "Create middleware in src/mcpd/src/middleware/metrics.ts that intercepts requests to record metrics for request counts and error rates per instance.",
"dependencies": [
1
],
"details": "Implement Fastify preHandler hook that extracts instance ID from request params/query where applicable. Record request start time. Use onResponse hook to record completion and calculate latency. Use onError hook to record errors with MetricsCollector.recordError(). Track metrics per-route and per-instance. Register middleware in src/mcpd/src/middleware/index.ts. Apply to instance-related routes (/api/v1/instances/*) to track per-instance metrics.",
"status": "pending",
"testStrategy": "Unit tests verifying hooks call MetricsCollector methods. Integration tests confirming request/error counts increment correctly after API calls.",
"parentId": "undefined"
},
{
"id": 5,
"title": "Write Comprehensive Health Monitoring Tests",
"description": "Create test suite in src/mcpd/tests/health-monitoring.test.ts covering MetricsCollector, HealthAggregator, health monitoring routes, and metrics middleware.",
"dependencies": [
1,
2,
3,
4
],
"details": "Write tests for: MetricsCollector - test recordRequest(), recordError(), getMetrics(), concurrent access safety. HealthAggregator - test getOverview() with various instance states, edge cases (no instances, all unhealthy). Routes - test /api/v1/health/overview, /api/v1/health/instances/:id, /api/v1/metrics endpoints with mocked dependencies. Middleware - test request counting, error tracking, latency recording. Use vi.mock() for dependencies following existing test patterns in the codebase.",
"status": "pending",
"testStrategy": "Self-referential - this subtask IS the test implementation. Verify all tests pass with `npm test`. Aim for >80% coverage on new health monitoring code.",
"parentId": "undefined"
}
],
"updatedAt": "2026-02-21T05:34:25.289Z"
},
{
"id": "23",
@@ -717,8 +978,71 @@
"2", "2",
"5" "5"
], ],
"status": "pending", "status": "done",
"subtasks": [] "subtasks": [
{
"id": 1,
"title": "Implement BackupService for JSON export",
"description": "Create BackupService in src/mcpd/src/services/backup/ that exports servers, profiles, and projects from repositories to a structured JSON bundle.",
"dependencies": [],
"details": "Create BackupService class that uses IMcpServerRepository, IMcpProfileRepository, and IProjectRepository to fetch all data. Define a BackupBundle interface with metadata (version, timestamp, mcpctlVersion), servers array, profiles array, and projects array. Implement createBackup() method that aggregates all data into the bundle format. Add optional filtering by resource type (e.g., only servers, or only specific profiles). Export via services/index.ts following existing patterns.",
"status": "pending",
"testStrategy": "Unit test BackupService with mocked repositories. Verify bundle structure includes all expected fields. Test filtering options. Test handling of empty repositories.",
"parentId": "undefined"
},
{
"id": 2,
"title": "Add secrets encryption using Node crypto",
"description": "Implement AES-256-GCM encryption for sensitive data in backup bundles using password-derived keys via scrypt.",
"dependencies": [
1
],
"details": "Create crypto utility module in src/mcpd/src/services/backup/crypto.ts using Node's built-in crypto module. Implement deriveKey() using scrypt with configurable salt length and key length. Implement encrypt() that creates IV, encrypts data with AES-256-GCM, and returns base64-encoded result with IV and auth tag prepended. Implement decrypt() that reverses the process. In BackupService, detect fields containing secrets (env vars with sensitive patterns like *_KEY, *_SECRET, *_TOKEN, PASSWORD) and encrypt them. Store encryption metadata (algorithm, salt) in bundle header.",
"status": "pending",
"testStrategy": "Test encryption/decryption round-trip with various data sizes. Verify wrong password fails decryption. Test key derivation produces consistent results with same inputs. Test detection of sensitive field patterns.",
"parentId": "undefined"
},
{
"id": 3,
"title": "Implement RestoreService for JSON import",
"description": "Create RestoreService that imports a backup bundle back into the system, handling decryption and conflict resolution.",
"dependencies": [
1,
2
],
"details": "Create RestoreService class in src/mcpd/src/services/backup/. Implement restore() method that parses JSON bundle, validates version compatibility, decrypts encrypted fields using provided password, and imports data using repositories. Support conflict resolution strategies: 'skip' (ignore existing), 'overwrite' (replace existing), 'fail' (abort on conflict). Implement validateBundle() for schema validation before import. Handle partial failures with transaction-like rollback or detailed error reporting.",
"status": "pending",
"testStrategy": "Test restore with valid bundle creates expected resources. Test conflict resolution modes (skip, overwrite, fail). Test encrypted bundle restore with correct/incorrect passwords. Test invalid bundle rejection.",
"parentId": "undefined"
},
{
"id": 4,
"title": "Add REST endpoints for backup and restore",
"description": "Create REST API routes in src/mcpd/src/routes/ for triggering backup creation and restore operations.",
"dependencies": [
1,
2,
3
],
"details": "Create backup.ts routes file with: POST /api/v1/backup (create backup, optional password for encryption, returns JSON bundle), POST /api/v1/restore (accepts JSON bundle in body, password if encrypted, conflict strategy option, returns import summary). Register routes in routes/index.ts. Define BackupDeps interface following existing patterns. Add appropriate error handling for invalid bundles, decryption failures, and conflict errors. Include validation schemas for request bodies.",
"status": "pending",
"testStrategy": "Integration test backup endpoint returns valid JSON bundle. Test restore endpoint with valid/invalid bundles. Test encrypted backup/restore round-trip via API. Test error responses for various failure scenarios.",
"parentId": "undefined"
},
{
"id": 5,
"title": "Add CLI commands for backup and restore",
"description": "Implement CLI commands in src/cli/src/commands/ for backup export to file and restore from file.",
"dependencies": [
4
],
"details": "Create backup.ts commands file with: 'mcpctl backup' command with options --output/-o (file path), --encrypt (prompt for password), --resources (filter: servers,profiles,projects). Create 'mcpctl restore' command with options --input/-i (file path), --password (or prompt if encrypted), --conflict (skip|overwrite|fail). Commands should call the daemon API endpoints. Add progress output and summary of backed up/restored resources. Register commands in cli/src/index.ts following existing createXxxCommand pattern.",
"status": "pending",
"testStrategy": "Test backup command creates valid file. Test restore command from backup file. Test encryption password prompting. Test --resources filtering. Test various conflict resolution modes via CLI.",
"parentId": "undefined"
}
],
"updatedAt": "2026-02-21T05:40:51.787Z"
},
{
"id": "24",
@@ -730,15 +1054,367 @@
"dependencies": [ "dependencies": [
"1" "1"
], ],
"status": "pending", "status": "done",
"subtasks": [] "subtasks": [],
"updatedAt": "2026-02-21T05:12:31.235Z"
},
{
"id": "25",
"title": "Rename local-proxy to mcplocal",
"description": "Rename the src/local-proxy directory to src/mcplocal and update all package references, imports, and build configurations throughout the monorepo.",
"details": "1. Rename directory: `mv src/local-proxy src/mcplocal`\n2. Update package.json name from `@mcpctl/local-proxy` to `@mcpctl/mcplocal`\n3. Update pnpm-workspace.yaml if needed\n4. Update all imports in other packages that reference local-proxy:\n - Search for `@mcpctl/local-proxy` and replace with `@mcpctl/mcplocal`\n - Check tsconfig references and path mappings\n5. Update any scripts in package.json root that reference local-proxy\n6. Update docker-compose files in deploy/ if they reference local-proxy\n7. Update documentation and README references\n8. Run `pnpm install` to regenerate lockfile with new package name\n9. Verify TypeScript compilation succeeds: `pnpm build`\n10. Run existing tests to ensure nothing broke: `pnpm test`",
"testStrategy": "1. Verify directory rename completed: `ls src/mcplocal`\n2. Verify package.json has correct name\n3. Run `pnpm install` - should complete without errors\n4. Run `pnpm build` - all packages should compile\n5. Run `pnpm test` - all existing tests should pass\n6. Grep codebase for 'local-proxy' - should find no stale references except git history",
"priority": "high",
"dependencies": [],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:04:17.018Z"
},
{
"id": "26",
"title": "Add HTTP REST server to mcplocal",
"description": "Add a Fastify HTTP server to mcplocal that runs alongside the existing stdio server, providing REST endpoints for mcpctl management commands.",
"details": "1. Add Fastify dependency to mcplocal package.json: `@fastify/cors`, `fastify`\n2. Create `src/mcplocal/src/http/server.ts` with Fastify app setup:\n ```typescript\n import Fastify from 'fastify';\n import cors from '@fastify/cors';\n \n export async function createHttpServer(config: HttpServerConfig) {\n const app = Fastify({ logger: true });\n await app.register(cors, { origin: true });\n // Register routes\n return app;\n }\n ```\n3. Create `src/mcplocal/src/http/routes/` directory structure\n4. Create health check endpoint: `GET /health`\n5. Create config types in `src/mcplocal/src/config.ts`:\n - `httpPort`: number (default 3200)\n - `httpHost`: string (default '127.0.0.1')\n - `mcpdUrl`: string (default 'http://localhost:3100')\n6. Update mcplocal entry point to start both servers:\n - stdio server for Claude MCP protocol\n - HTTP server for mcpctl REST API\n7. Add graceful shutdown handling for both servers",
"testStrategy": "1. Unit test: HTTP server starts on configured port\n2. Unit test: Health endpoint returns 200 OK\n3. Integration test: Both stdio and HTTP servers can run simultaneously\n4. Test graceful shutdown stops both servers cleanly\n5. Test CORS headers are present on responses\n6. Manual test: curl http://localhost:3200/health",
"priority": "high",
"dependencies": [
"25"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:09:26.322Z"
},
{
"id": "27",
"title": "Implement mcplocal management proxy routes",
"description": "Add REST endpoints to mcplocal that mirror mcpd's API and proxy management requests to mcpd without LLM processing. All requests must include proper authentication to mcpd using a Bearer token read from mcplocal config.",
"status": "done",
"dependencies": [
"26"
],
"priority": "high",
"details": "1. Create HTTP client for mcpd communication with auth: `src/local-proxy/src/http/mcpd-client.ts`\n ```typescript\n export class McpdClient {\n private token: string;\n \n constructor(private baseUrl: string, token: string) {\n this.token = token;\n }\n \n private getHeaders(): Record<string, string> {\n return {\n 'Content-Type': 'application/json',\n 'Authorization': `Bearer ${this.token}`\n };\n }\n \n async get<T>(path: string): Promise<T> {\n const response = await fetch(`${this.baseUrl}${path}`, {\n method: 'GET',\n headers: this.getHeaders()\n });\n await this.handleAuthError(response);\n return response.json();\n }\n \n async post<T>(path: string, body: unknown): Promise<T> {\n const response = await fetch(`${this.baseUrl}${path}`, {\n method: 'POST',\n headers: this.getHeaders(),\n body: JSON.stringify(body)\n });\n await this.handleAuthError(response);\n return response.json();\n }\n \n async put<T>(path: string, body: unknown): Promise<T> {\n const response = await fetch(`${this.baseUrl}${path}`, {\n method: 'PUT',\n headers: this.getHeaders(),\n body: JSON.stringify(body)\n });\n await this.handleAuthError(response);\n return response.json();\n }\n \n async delete<T>(path: string): Promise<T> {\n const response = await fetch(`${this.baseUrl}${path}`, {\n method: 'DELETE',\n headers: this.getHeaders()\n });\n await this.handleAuthError(response);\n return response.json();\n }\n \n private async handleAuthError(response: Response): Promise<void> {\n if (response.status === 401) {\n throw new AuthenticationError('Invalid or expired token. Please check mcplocal config.');\n }\n }\n }\n \n export class AuthenticationError extends Error {\n constructor(message: string) {\n super(message);\n this.name = 'AuthenticationError';\n }\n }\n ```\n2. Add token to mcplocal config type (extend ProxyConfig or similar):\n ```typescript\n export interface McpdAuthConfig {\n /** Bearer token for mcpd API authentication */\n mcpdToken: string;\n }\n ```\n3. 
Create proxy routes in `src/mcplocal/src/http/routes/`:\n - `servers.ts`: GET/POST /api/v1/servers, GET/PUT/DELETE /api/v1/servers/:id\n - `profiles.ts`: GET/POST /api/v1/profiles, GET/PUT/DELETE /api/v1/profiles/:id\n - `instances.ts`: GET/POST /api/v1/instances, GET/POST/DELETE /api/v1/instances/:id, etc.\n - `projects.ts`: GET/POST /api/v1/projects, etc.\n - `audit.ts`: GET /api/v1/audit-logs\n - `backup.ts`: POST /api/v1/backup, POST /api/v1/restore\n4. Each route handler forwards to mcpd with auth:\n ```typescript\n app.get('/api/v1/servers', async (req, reply) => {\n try {\n const result = await mcpdClient.get('/api/v1/servers');\n return result;\n } catch (error) {\n if (error instanceof AuthenticationError) {\n return reply.status(401).send({ error: error.message });\n }\n throw error;\n }\n });\n ```\n5. Add comprehensive error handling:\n - If mcpd is unreachable, return 503 Service Unavailable\n - If mcpd returns 401, return 401 with clear message about token configuration\n - Forward other HTTP errors from mcpd with appropriate status codes\n6. Add request/response logging for debugging",
"testStrategy": "1. Unit test: McpdClient attaches Authorization header to all request methods (GET, POST, PUT, DELETE)\n2. Unit test: McpdClient throws AuthenticationError on 401 response from mcpd\n3. Unit test: Each proxy route forwards requests correctly with auth headers\n4. Unit test: Error handling when mcpd is unreachable (503 response)\n5. Unit test: Error handling when mcpd returns 401 (clear error message returned)\n6. Integration test: Full request flow mcpctl -> mcplocal -> mcpd with valid token\n7. Integration test: Full request flow with invalid token returns 401\n8. Test query parameters are forwarded correctly\n9. Test request body is forwarded correctly for POST/PUT\n10. Test path parameters (:id) are passed through correctly\n11. Mock mcpd responses and verify mcplocal returns them unchanged\n12. Test token is read correctly from mcplocal config",
"subtasks": [],
"updatedAt": "2026-02-21T18:34:20.942Z"
},
{
"id": "28",
"title": "Add MCP proxy endpoint to mcpd",
"description": "Create a new endpoint in mcpd at /api/v1/mcp/proxy that accepts MCP tool call requests and executes them on managed MCP server instances. Also add authentication endpoints (login/logout) that mcpctl will use to authenticate users.",
"status": "done",
"dependencies": [],
"priority": "high",
"details": "## MCP Proxy Endpoint\n\n1. Create new route file: `src/mcpd/src/routes/mcp-proxy.ts`\n2. Define request schema:\n ```typescript\n interface McpProxyRequest {\n serverId: string; // or instanceId\n method: string; // e.g., 'tools/call', 'resources/read'\n params: Record<string, unknown>;\n }\n ```\n3. Create McpProxyService in `src/mcpd/src/services/mcp-proxy-service.ts`:\n - Look up instance by serverId (auto-start if profile allows)\n - Connect to the container via stdio or HTTP (depending on transport type)\n - Execute the MCP JSON-RPC call\n - Return the result\n4. Handle MCP JSON-RPC protocol:\n ```typescript\n async executeCall(instanceId: string, method: string, params: unknown) {\n const instance = await this.instanceService.getInstance(instanceId);\n const connection = await this.getOrCreateConnection(instance);\n const result = await connection.call(method, params);\n return result;\n }\n ```\n5. Connection pooling: maintain persistent connections to running instances\n6. Add route: `POST /api/v1/mcp/proxy` (must be behind auth middleware)\n7. Add audit logging for all MCP proxy calls - include authenticated userId from request.userId\n8. Handle errors: instance not found, instance not running, MCP call failed\n\n## Authentication Endpoints\n\n9. Create auth routes file: `src/mcpd/src/routes/auth.ts`\n10. Implement `POST /api/v1/auth/login`:\n - Request body: `{ username: string, password: string }`\n - Validate credentials against User table (use bcrypt for password comparison)\n - Create new Session record with token (use crypto.randomUUID or similar)\n - Response: `{ token: string, expiresAt: string }`\n11. 
Implement `POST /api/v1/auth/logout`:\n - Requires Bearer token in Authorization header\n - Delete/invalidate the Session record\n - Response: `{ success: true }`\n\n## Auth Integration Notes\n\n- Existing auth middleware in `src/mcpd/src/middleware/auth.ts` validates Bearer tokens against Session table\n- It sets `request.userId` on successful authentication\n- MCP proxy endpoint MUST use this auth middleware\n- Auth endpoints (login) should NOT require auth middleware\n- Logout endpoint SHOULD require auth middleware to validate the session being invalidated",
"testStrategy": "1. Unit test: Proxy service looks up correct instance\n2. Unit test: JSON-RPC call is formatted correctly\n3. Integration test: Full flow with a mock MCP server container\n4. Test error handling: non-existent server returns 404\n5. Test error handling: stopped instance returns appropriate error\n6. Test audit log entries include authenticated userId\n7. Test connection reuse for multiple calls to same instance\n8. Test login endpoint: valid credentials return session token\n9. Test login endpoint: invalid credentials return 401\n10. Test logout endpoint: valid session is invalidated\n11. Test logout endpoint: invalid/missing token returns 401\n12. Test MCP proxy endpoint without auth token returns 401\n13. Test MCP proxy endpoint with expired token returns 401\n14. Test MCP proxy endpoint with valid token succeeds and logs userId in audit",
"subtasks": [
{
"id": 1,
"title": "Create auth routes with login/logout endpoints",
"description": "Create src/mcpd/src/routes/auth.ts with POST /api/v1/auth/login and POST /api/v1/auth/logout endpoints for mcpctl authentication.",
"dependencies": [],
"details": "Implement login endpoint: validate username/password against User table using bcrypt, create Session record with generated token and expiry. Implement logout endpoint: require auth middleware, delete/invalidate Session record. Login does NOT require auth, logout DOES require auth. Export registerAuthRoutes function and update routes/index.ts.",
"status": "pending",
"testStrategy": "Test login with valid/invalid credentials. Test logout invalidates session. Test logout requires valid auth token. Test session token format and expiry.",
"parentId": "undefined"
},
{
"id": 2,
"title": "Create MCP proxy route file with auth middleware",
"description": "Create src/mcpd/src/routes/mcp-proxy.ts with POST /api/v1/mcp/proxy endpoint protected by auth middleware.",
"dependencies": [
1
],
"details": "Define McpProxyRequest interface (serverId, method, params). Register route handler that extracts userId from request.userId (set by auth middleware). Apply auth middleware using preHandler hook. Validate request body schema.",
"status": "pending",
"testStrategy": "Test endpoint returns 401 without auth token. Test endpoint returns 401 with invalid/expired token. Test valid auth token allows request through.",
"parentId": "undefined"
},
{
"id": 3,
"title": "Create McpProxyService for instance lookup and connection",
"description": "Create src/mcpd/src/services/mcp-proxy-service.ts to handle instance lookup, connection management, and MCP call execution.",
"dependencies": [],
"details": "Implement getInstance to look up by serverId, auto-start if profile allows. Implement getOrCreateConnection for connection pooling. Handle both stdio and HTTP transports. Implement executeCall method that formats JSON-RPC call and returns result.",
"status": "pending",
"testStrategy": "Unit test instance lookup. Unit test connection pooling reuses connections. Test auto-start behavior. Test both transport types.",
"parentId": "undefined"
},
{
"id": 4,
"title": "Implement MCP JSON-RPC call execution",
"description": "Implement the core JSON-RPC call logic in McpProxyService to execute tool calls on MCP server instances.",
"dependencies": [
3
],
"details": "Format JSON-RPC 2.0 request with method and params. Send request over established connection (stdio/HTTP). Parse JSON-RPC response and handle errors. Return result or throw appropriate error for failed calls.",
"status": "pending",
"testStrategy": "Unit test JSON-RPC request formatting. Test successful call returns result. Test JSON-RPC error responses are handled. Integration test with mock MCP server.",
"parentId": "undefined"
},
{
"id": 5,
"title": "Add audit logging with userId for MCP proxy calls",
"description": "Ensure all MCP proxy calls are logged to audit log including the authenticated userId from the session.",
"dependencies": [
2,
4
],
"details": "Use existing audit middleware/service. Include userId from request.userId in audit log entry. Log serverId, method, and outcome (success/failure). Log any errors that occur during MCP call execution.",
"status": "pending",
"testStrategy": "Test audit log entries contain userId. Test audit log entries contain serverId and method. Test failed calls are logged with error details.",
"parentId": "undefined"
},
{
"id": 6,
"title": "Integrate auth and proxy routes into server.ts",
"description": "Register the new auth and mcp-proxy routes in the Fastify server with proper auth middleware wiring.",
"dependencies": [
1,
2,
5
],
"details": "Update server.ts to register auth routes (no auth required for login). Register mcp-proxy routes with auth middleware. Ensure auth middleware is wired with findSession dependency from Prisma. Update routes/index.ts exports.",
"status": "pending",
"testStrategy": "Integration test full login -> proxy call flow. Test auth middleware correctly protects proxy endpoint. Test health endpoints remain unauthenticated.",
"parentId": "undefined"
}
],
"updatedAt": "2026-02-21T18:09:26.327Z"
},
{
"id": "29",
"title": "Implement LLM pre-processing pipeline in mcplocal",
"description": "Create the core LLM pre-processing pipeline that intercepts MCP tool calls, uses a local LLM to optimize requests before sending to mcpd, and filters responses before returning to Claude.",
"details": "1. Create `src/mcplocal/src/llm/processor.ts` - the core pipeline:\n ```typescript\n export class LlmProcessor {\n constructor(\n private providerRegistry: ProviderRegistry,\n private config: LlmProcessorConfig\n ) {}\n \n async preprocessRequest(toolName: string, params: unknown): Promise<ProcessedRequest> {\n // Use LLM to interpret and optimize the request\n const prompt = this.buildRequestPrompt(toolName, params);\n const result = await this.providerRegistry.getActiveProvider().complete({\n systemPrompt: REQUEST_OPTIMIZATION_SYSTEM_PROMPT,\n userPrompt: prompt\n });\n return this.parseOptimizedRequest(result);\n }\n \n async filterResponse(toolName: string, originalRequest: unknown, rawResponse: unknown): Promise<FilteredResponse> {\n // Use LLM to filter/summarize the response\n const prompt = this.buildFilterPrompt(toolName, originalRequest, rawResponse);\n const result = await this.providerRegistry.getActiveProvider().complete({\n systemPrompt: RESPONSE_FILTER_SYSTEM_PROMPT,\n userPrompt: prompt\n });\n return this.parseFilteredResponse(result);\n }\n }\n ```\n2. Create system prompts in `src/mcplocal/src/llm/prompts.ts`:\n - REQUEST_OPTIMIZATION_SYSTEM_PROMPT: instruct LLM to generate optimal queries\n - RESPONSE_FILTER_SYSTEM_PROMPT: instruct LLM to extract relevant information\n3. Integrate into router.ts - wrap tools/call handler:\n ```typescript\n async handleToolsCall(request: JsonRpcRequest) {\n if (this.shouldPreprocess(request.params.name)) {\n const processed = await this.llmProcessor.preprocessRequest(...);\n // Call mcpd with processed request\n const rawResponse = await this.callMcpd(processed);\n const filtered = await this.llmProcessor.filterResponse(...);\n return filtered;\n }\n return this.callMcpd(request.params);\n }\n ```\n4. Add configuration options:\n - `enablePreprocessing`: boolean\n - `preprocessingExclude`: string[] (tool names to skip)\n - `preferredProvider`: string (ollama, gemini, deepseek, etc.)\n5. 
Add bypass logic for simple operations (list, create, delete)",
"testStrategy": "1. Unit test: Request preprocessing generates optimized queries\n2. Unit test: Response filtering reduces data volume\n3. Unit test: Bypass logic works for excluded tools\n4. Integration test: Full pipeline with mock LLM provider\n5. Test error handling: LLM failure falls back to unfiltered pass-through\n6. Test configuration options are respected\n7. Measure: response size reduction percentage",
"priority": "high",
"dependencies": [
"25",
"27",
"28"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:41:26.539Z"
},
{
"id": "30",
"title": "Add Gemini CLI LLM provider",
"description": "Implement a new LLM provider that uses the Gemini CLI binary for local, free LLM inference as the preferred provider for pre-processing.",
"details": "1. Create `src/mcplocal/src/providers/gemini-cli.ts`:\n ```typescript\n import { spawn } from 'child_process';\n \n export class GeminiCliProvider implements LlmProvider {\n readonly name = 'gemini-cli';\n private binaryPath: string;\n \n constructor(config: GeminiCliConfig) {\n this.binaryPath = config.binaryPath || 'gemini';\n }\n \n async isAvailable(): Promise<boolean> {\n // Check if gemini binary exists and is executable\n try {\n await this.runCommand(['--version']);\n return true;\n } catch {\n return false;\n }\n }\n \n async complete(options: CompletionOptions): Promise<CompletionResult> {\n const input = this.formatPrompt(options);\n const output = await this.runCommand(['--prompt', input]);\n return { content: output, model: 'gemini-cli' };\n }\n \n private async runCommand(args: string[]): Promise<string> {\n // Spawn gemini CLI process and capture output\n }\n }\n ```\n2. Research actual Gemini CLI interface and adjust implementation\n3. Add to provider registry with high priority (prefer over API providers)\n4. Add configuration: `geminiCliBinaryPath`\n5. Handle timeout for slow inference\n6. Add fallback to next provider if Gemini CLI fails",
"testStrategy": "1. Unit test: Provider correctly detects CLI availability\n2. Unit test: Prompt formatting is correct\n3. Unit test: Output parsing handles various formats\n4. Integration test: Full completion with actual Gemini CLI (if available)\n5. Test timeout handling for slow responses\n6. Test fallback when CLI is not installed",
"priority": "medium",
"dependencies": [
"25"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:34:20.968Z"
},
{
"id": "31",
"title": "Add DeepSeek API LLM provider",
"description": "Implement DeepSeek API provider as a cheap cloud-based fallback when local LLMs are unavailable.",
"details": "1. Create `src/mcplocal/src/providers/deepseek.ts`:\n ```typescript\n export class DeepSeekProvider implements LlmProvider {\n readonly name = 'deepseek';\n private apiKey: string;\n private baseUrl = 'https://api.deepseek.com/v1';\n \n constructor(config: DeepSeekConfig) {\n this.apiKey = config.apiKey || process.env.DEEPSEEK_API_KEY;\n }\n \n async isAvailable(): Promise<boolean> {\n return !!this.apiKey;\n }\n \n async complete(options: CompletionOptions): Promise<CompletionResult> {\n // DeepSeek uses OpenAI-compatible API\n const response = await fetch(`${this.baseUrl}/chat/completions`, {\n method: 'POST',\n headers: {\n 'Authorization': `Bearer ${this.apiKey}`,\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify({\n model: 'deepseek-chat',\n messages: [{ role: 'user', content: options.userPrompt }]\n })\n });\n // Parse and return\n }\n }\n ```\n2. Add DEEPSEEK_API_KEY to configuration\n3. Register in provider registry with medium priority\n4. Support both deepseek-chat and deepseek-coder models\n5. Add rate limiting handling",
"testStrategy": "1. Unit test: Provider correctly checks API key availability\n2. Unit test: Request formatting matches DeepSeek API spec\n3. Unit test: Response parsing handles all fields\n4. Integration test: Full completion with actual API (with valid key)\n5. Test error handling for rate limits\n6. Test error handling for invalid API key",
"priority": "medium",
"dependencies": [
"25"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:34:20.974Z"
},
{
"id": "32",
"title": "Implement smart context optimization",
"description": "Add token counting and decision logic to intelligently skip LLM filtering when responses are small enough, and cache filtering decisions for repeated queries.",
"details": "1. Create `src/mcplocal/src/llm/token-counter.ts`:\n ```typescript\n export function estimateTokens(text: string): number {\n // Simple estimation: ~4 chars per token for English\n // More accurate: use tiktoken or similar library\n return Math.ceil(text.length / 4);\n }\n ```\n2. Create `src/mcplocal/src/llm/filter-cache.ts`:\n ```typescript\n export class FilterCache {\n private cache: LRUCache<string, FilterDecision>;\n \n shouldFilter(toolName: string, params: unknown, responseSize: number): boolean {\n const key = this.computeKey(toolName, params);\n const cached = this.cache.get(key);\n if (cached) return cached.shouldFilter;\n // No cache hit - use default threshold logic\n return responseSize > this.tokenThreshold;\n }\n \n recordDecision(toolName: string, params: unknown, decision: FilterDecision): void {\n const key = this.computeKey(toolName, params);\n this.cache.set(key, decision);\n }\n }\n ```\n3. Add configuration options:\n - `tokenThreshold`: number (default 1000 tokens)\n - `filterCacheSize`: number (default 1000 entries)\n - `filterCacheTtl`: number (default 3600 seconds)\n4. Integrate into LlmProcessor:\n ```typescript\n async filterResponse(...) {\n const tokens = estimateTokens(JSON.stringify(rawResponse));\n if (tokens < this.config.tokenThreshold) {\n // Not worth filtering - return as-is\n return { filtered: false, response: rawResponse };\n }\n // Proceed with LLM filtering\n }\n ```\n5. Add metrics tracking:\n - Total tokens processed\n - Tokens saved by filtering\n - Filter cache hit rate\n - Average latency added by filtering",
"testStrategy": "1. Unit test: Token estimation is reasonably accurate\n2. Unit test: Cache correctly stores and retrieves decisions\n3. Unit test: Threshold logic skips filtering for small responses\n4. Unit test: Cache TTL expiration works correctly\n5. Integration test: Metrics are recorded accurately\n6. Performance test: Cache improves latency for repeated queries",
"priority": "medium",
"dependencies": [
"29"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:47:07.709Z"
},
{
"id": "33",
"title": "Update mcpctl to use mcplocal as daemon",
"description": "Modify mcpctl CLI to connect to mcplocal instead of mcpd directly, update configuration options, add dual connectivity status checking, and implement authentication commands (login/logout) with secure credential storage.",
"status": "done",
"dependencies": [
"27"
],
"priority": "high",
"details": "1. Update `src/cli/src/config/schema.ts`:\n ```typescript\n export interface McpctlConfig {\n mcplocalUrl: string; // NEW: default 'http://localhost:3200'\n mcpdUrl: string; // Keep for reference/direct access if needed\n // ... other fields\n }\n ```\n2. Update `src/cli/src/config/defaults.ts`:\n - Change default daemonUrl to http://localhost:3200 (mcplocal)\n3. Update `src/cli/src/api-client.ts`:\n - Default baseUrl now points to mcplocal\n4. Add new config commands in `src/cli/src/commands/config.ts`:\n ```typescript\n .command('set-mcplocal-url <url>')\n .command('set-mcpd-url <url>')\n .command('get-mcplocal-url')\n .command('get-mcpd-url')\n ```\n5. Update `src/cli/src/commands/status.ts` to show both connections and auth status:\n ```\n $ mcpctl status\n mcplocal: connected (localhost:3200)\n mcpd: connected (nas.local:3100) via mcplocal\n Auth: logged in as user@example.com\n LLM Provider: ollama (llama3.2)\n Token savings: 45% (last 24h)\n ```\n6. Update CLI --daemon-url flag to point to mcplocal\n7. Add --direct flag to bypass mcplocal and talk to mcpd directly (for debugging)\n8. Create `src/cli/src/commands/auth.ts` with login/logout commands:\n - `mcpctl login`: Prompt for mcpd URL (if not configured) and credentials\n - Call POST /api/v1/auth/login with { email, password }\n - Store session token in ~/.mcpctl/credentials with 0600 permissions\n - `mcpctl logout`: Invalidate session and delete stored token\n9. Create `src/cli/src/auth/credentials.ts` for secure token storage:\n - Use fs.chmod to set 0600 permissions on credentials file\n - Token format: { token: string, mcpdUrl: string, user: string, expiresAt?: string }\n10. Update api-client.ts to include stored token in requests to mcplocal\n - mcplocal passes this token to mcpd for authentication",
"testStrategy": "1. Unit test: Default config points to mcplocal URL\n2. Unit test: Config commands update correct fields\n3. Integration test: CLI commands work through mcplocal proxy\n4. Test status command shows both mcplocal and mcpd status\n5. Test --direct flag bypasses mcplocal\n6. Test backward compatibility with existing config files\n7. Unit test: login command stores token with correct permissions (0600)\n8. Unit test: logout command removes credentials file\n9. Integration test: login flow with POST /api/v1/auth/login\n10. Test status command shows auth status (logged in as user)\n11. Test token is passed to mcplocal in API requests\n12. Test invalid credentials return appropriate error message\n13. Test expired token handling",
"subtasks": [
{
"id": 1,
"title": "Update config schema for mcplocal and mcpd URLs",
"description": "Modify McpctlConfigSchema in src/cli/src/config/schema.ts to include separate mcplocalUrl and mcpdUrl fields with appropriate defaults.",
"dependencies": [],
"details": "Update the Zod schema to add mcplocalUrl (default: http://localhost:3200) and mcpdUrl (default: http://localhost:3100). Update DEFAULT_CONFIG and ensure backward compatibility with existing daemonUrl field by mapping it to mcplocalUrl.",
"status": "pending",
"testStrategy": "Unit test schema validation for new URL fields. Test default values are correct. Test backward compatibility mapping.",
"parentId": "undefined"
},
{
"id": 2,
"title": "Create auth credentials storage module",
"description": "Create src/cli/src/auth/credentials.ts to handle secure storage and retrieval of session tokens in ~/.mcpctl/credentials.",
"dependencies": [],
"details": "Implement saveCredentials(token, mcpdUrl, user), loadCredentials(), and deleteCredentials() functions. Use fs.chmod to set 0600 permissions. Store JSON format: { token, mcpdUrl, user, expiresAt }. Handle file not found gracefully in loadCredentials.",
"status": "pending",
"testStrategy": "Unit test credentials are saved with 0600 permissions. Test load returns null when file doesn't exist. Test delete removes the file.",
"parentId": "undefined"
},
{
"id": 3,
"title": "Implement login command",
"description": "Create src/cli/src/commands/auth.ts with mcpctl login command that prompts for mcpd URL and credentials, calls POST /api/v1/auth/login, and stores the session token.",
"dependencies": [
2
],
"details": "Use inquirer or prompts library for interactive credential input (email, password). If mcpdUrl not configured, prompt for it. Call POST /api/v1/auth/login with credentials. On success, save token using credentials module. Display 'Logged in as {user}' on success. Handle errors (invalid credentials, network errors) with clear messages.",
"status": "pending",
"testStrategy": "Test prompts collect correct input. Test successful login stores credentials. Test failed login shows error without storing token.",
"parentId": "undefined"
},
{
"id": 4,
"title": "Implement logout command",
"description": "Add mcpctl logout command to auth.ts that invalidates the session and removes stored credentials.",
"dependencies": [
2
],
"details": "Load stored credentials, optionally call a logout endpoint on mcpd to invalidate server-side session, then delete the local credentials file. Display 'Logged out successfully' or 'Not logged in' as appropriate.",
"status": "pending",
"testStrategy": "Test logout removes credentials file. Test logout when not logged in shows appropriate message.",
"parentId": "undefined"
},
{
"id": 5,
"title": "Update api-client to include auth token",
"description": "Modify src/cli/src/api-client.ts to load and include stored session token in Authorization header for requests to mcplocal.",
"dependencies": [
2
],
"details": "Import loadCredentials from auth module. Add Authorization: Bearer {token} header to requests when credentials exist. Handle expired token by returning appropriate error suggesting re-login.",
"status": "pending",
"testStrategy": "Test requests include Authorization header when logged in. Test requests work without token when not logged in.",
"parentId": "undefined"
},
{
"id": 6,
"title": "Update status command to show auth status",
"description": "Modify src/cli/src/commands/status.ts to display authentication status (logged in as user X or not logged in) along with mcplocal and mcpd connectivity.",
"dependencies": [
2,
5
],
"details": "Load credentials and display auth status line: 'Auth: logged in as {user}' or 'Auth: not logged in'. Update status output format to show mcplocal and mcpd status separately with the auth info.",
"status": "pending",
"testStrategy": "Test status shows 'logged in as user' when credentials exist. Test status shows 'not logged in' when no credentials.",
"parentId": "undefined"
},
{
"id": 7,
"title": "Add config commands for mcplocal and mcpd URLs",
"description": "Add set-mcplocal-url, set-mcpd-url, get-mcplocal-url, and get-mcpd-url commands to src/cli/src/commands/config.ts.",
"dependencies": [
1
],
"details": "Add four new subcommands to the config command for setting and getting the mcplocal and mcpd URLs independently. Update the generic 'set' command to handle these new schema fields.",
"status": "pending",
"testStrategy": "Test each command correctly reads/writes the appropriate config field.",
"parentId": "undefined"
},
{
"id": 8,
"title": "Add --direct flag for mcpd bypass",
"description": "Add --direct flag to CLI commands that bypasses mcplocal and connects directly to mcpd for debugging purposes.",
"dependencies": [
1,
5
],
"details": "Add global --direct option to the main CLI. When set, api-client uses mcpdUrl instead of mcplocalUrl. Useful for debugging connectivity issues between mcplocal and mcpd.",
"status": "pending",
"testStrategy": "Test --direct flag causes requests to use mcpdUrl. Test normal operation uses mcplocalUrl.",
"parentId": "undefined"
},
{
"id": 9,
"title": "Register auth commands in CLI entry point",
"description": "Import and register the login and logout commands in src/cli/src/index.ts.",
"dependencies": [
3,
4
],
"details": "Import createAuthCommand from commands/auth.ts and add it to the main program with program.addCommand(createAuthCommand()).",
"status": "pending",
"testStrategy": "Test mcpctl login and mcpctl logout are available as commands.",
"parentId": "undefined"
}
],
"updatedAt": "2026-02-21T18:39:11.345Z"
},
{
"id": "34",
"title": "Connect mcplocal MCP router to mcpd proxy endpoint",
"description": "Update mcplocal's MCP router to forward tool calls to mcpd's new /api/v1/mcp/proxy endpoint instead of connecting to MCP servers directly.",
"details": "1. Update `src/mcplocal/src/router.ts` to use mcpd proxy:\n ```typescript\n class Router {\n private mcpdClient: McpdClient;\n \n async handleToolsCall(request: JsonRpcRequest) {\n const { name, arguments: args } = request.params;\n const [serverName, toolName] = name.split('/');\n \n // Pre-process with LLM if enabled\n const processedArgs = this.config.enablePreprocessing\n ? await this.llmProcessor.preprocessRequest(toolName, args)\n : args;\n \n // Call mcpd proxy endpoint\n const result = await this.mcpdClient.post('/api/v1/mcp/proxy', {\n serverId: serverName,\n method: 'tools/call',\n params: { name: toolName, arguments: processedArgs }\n });\n \n // Post-process response with LLM if enabled\n return this.config.enablePreprocessing\n ? await this.llmProcessor.filterResponse(toolName, args, result)\n : result;\n }\n }\n ```\n2. Update upstream configuration:\n - Remove direct upstream connections for managed servers\n - Keep option for local/unmanaged upstreams\n3. Add server discovery from mcpd:\n ```typescript\n async refreshServerList() {\n const servers = await this.mcpdClient.get('/api/v1/servers');\n this.updateAvailableTools(servers);\n }\n ```\n4. Handle tools/list by aggregating from mcpd servers\n5. Handle resources/list and prompts/list similarly",
"testStrategy": "1. Unit test: Tool calls are forwarded to mcpd proxy correctly\n2. Unit test: Server name is extracted from namespaced tool name\n3. Integration test: Full flow Claude -> mcplocal -> mcpd -> container\n4. Test tools/list aggregates from all mcpd servers\n5. Test error handling when mcpd is unreachable\n6. Test LLM preprocessing is applied when enabled",
"priority": "high",
"dependencies": [
"28",
"29"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:43:14.673Z"
},
{
"id": "35",
"title": "Implement health monitoring across all tiers",
"description": "Extend health monitoring to track connectivity and status across mcplocal, mcpd, and individual MCP server instances.",
"details": "1. Update mcplocal health monitor in `src/mcplocal/src/health.ts`:\n ```typescript\n export class TieredHealthMonitor {\n async checkHealth(): Promise<TieredHealthStatus> {\n return {\n mcplocal: {\n status: 'healthy',\n llmProvider: await this.checkLlmProvider(),\n uptime: process.uptime()\n },\n mcpd: await this.checkMcpdHealth(),\n instances: await this.checkInstancesHealth()\n };\n }\n \n private async checkMcpdHealth(): Promise<McpdHealth> {\n try {\n const health = await this.mcpdClient.get('/api/v1/health');\n return { status: 'connected', ...health };\n } catch {\n return { status: 'disconnected' };\n }\n }\n \n private async checkInstancesHealth(): Promise<InstanceHealth[]> {\n const instances = await this.mcpdClient.get('/api/v1/instances');\n return instances.map(i => ({\n name: i.name,\n status: i.status,\n lastHealthCheck: i.lastHealthCheck\n }));\n }\n }\n ```\n2. Add health endpoint to mcplocal HTTP server: `GET /health`\n3. Update mcpctl status command to display tiered health\n4. Add degraded state detection:\n - LLM provider unavailable but mcpd reachable\n - Some instances down but others healthy\n5. Add health event notifications for state transitions\n6. Add configurable health check intervals",
"testStrategy": "1. Unit test: Health check correctly identifies all states\n2. Unit test: Degraded state is detected correctly\n3. Integration test: Full health check across all tiers\n4. Test health endpoint returns correct format\n5. Test mcpctl status displays health correctly\n6. Test state transition events are emitted",
"priority": "medium",
"dependencies": [
"33",
"34"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:46:07.885Z"
},
{
"id": "36",
"title": "End-to-end integration testing",
"description": "Create comprehensive integration tests that validate the full data flow from mcpctl through mcplocal to mcpd to MCP server containers and back.",
"details": "1. Create test fixtures in `src/mcplocal/test/fixtures/`:\n - Mock MCP server that returns predictable responses\n - Test configuration files\n - Sample tool call payloads\n2. Create integration test suite in `src/mcplocal/test/integration/`:\n ```typescript\n describe('End-to-end flow', () => {\n it('mcpctl -> mcplocal -> mcpd -> mcp_server', async () => {\n // Start mock MCP server\n // Start mcpd with test config\n // Start mcplocal pointing to mcpd\n // Execute mcpctl command\n // Verify response flows back correctly\n });\n \n it('LLM pre-processing reduces response size', async () => {\n // Send query that returns large dataset\n // Verify LLM filtering reduces token count\n // Verify relevant data is preserved\n });\n \n it('credentials never leave mcpd', async () => {\n // Monitor all traffic from mcplocal\n // Verify no credentials appear in requests/responses\n });\n });\n ```\n3. Test scenarios:\n - Management commands (get servers, instances, etc.)\n - MCP tool calls with LLM preprocessing\n - MCP tool calls without preprocessing\n - Error handling (mcpd down, instance down, LLM failure)\n - Health monitoring accuracy\n4. Add CI integration test workflow\n5. Create docker-compose.test.yml for test environment",
"testStrategy": "1. All integration tests pass in CI environment\n2. Test coverage includes happy path and error scenarios\n3. Performance benchmarks: measure latency at each tier\n4. Security test: verify credential isolation\n5. Load test: multiple concurrent requests\n6. Chaos test: random component failures",
"priority": "high",
"dependencies": [
"29",
"33",
"34",
"35"
],
"status": "done",
"subtasks": [],
"updatedAt": "2026-02-21T18:52:29.084Z"
}
],
"metadata": {
"version": "1.0.0",
"lastModified": "2026-02-21T18:52:29.084Z",
"taskCount": 36,
"completedCount": 33,
"tags": [
"master"
]


@@ -8,6 +8,9 @@ if [ -f .env ]; then
set -a; source .env; set +a
fi
# Ensure tools are on PATH
export PATH="$HOME/.npm-global/bin:$HOME/.bun/bin:$HOME/.local/bin:$PATH"
echo "=== mcpctl CLI build & release ===" echo "=== mcpctl CLI build & release ==="
echo "" echo ""

completions/mcpctl.bash Normal file

@@ -0,0 +1,93 @@
_mcpctl() {
local cur prev words cword
_init_completion || return
local commands="config status get describe instance instances apply setup claude project projects backup restore help"
local global_opts="-v --version -o --output --daemon-url -h --help"
local resources="servers profiles projects instances"
case "${words[1]}" in
config)
COMPREPLY=($(compgen -W "view set path reset help" -- "$cur"))
return ;;
status)
COMPREPLY=($(compgen -W "--daemon-url -h --help" -- "$cur"))
return ;;
get)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "$resources" -- "$cur"))
else
COMPREPLY=($(compgen -W "-o --output --daemon-url -h --help" -- "$cur"))
fi
return ;;
describe)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "$resources" -- "$cur"))
else
COMPREPLY=($(compgen -W "-o --output --daemon-url -h --help" -- "$cur"))
fi
return ;;
instance|instances)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "list ls start stop restart remove rm logs inspect help" -- "$cur"))
else
case "${words[2]}" in
logs)
COMPREPLY=($(compgen -W "--tail --since -h --help" -- "$cur"))
;;
start)
COMPREPLY=($(compgen -W "--env --image -h --help" -- "$cur"))
;;
list|ls)
COMPREPLY=($(compgen -W "--server-id -o --output -h --help" -- "$cur"))
;;
esac
fi
return ;;
claude)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "generate show add remove help" -- "$cur"))
else
case "${words[2]}" in
generate|show|add|remove)
COMPREPLY=($(compgen -W "--path -p -h --help" -- "$cur"))
;;
esac
fi
return ;;
project|projects)
if [[ $cword -eq 2 ]]; then
COMPREPLY=($(compgen -W "list ls create delete rm show profiles set-profiles help" -- "$cur"))
else
case "${words[2]}" in
create)
COMPREPLY=($(compgen -W "--description -d -h --help" -- "$cur"))
;;
list|ls)
COMPREPLY=($(compgen -W "-o --output -h --help" -- "$cur"))
;;
esac
fi
return ;;
apply)
COMPREPLY=($(compgen -f -- "$cur"))
return ;;
backup)
COMPREPLY=($(compgen -W "-o --output -p --password -r --resources -h --help" -- "$cur"))
return ;;
restore)
COMPREPLY=($(compgen -W "-i --input -p --password -c --conflict -h --help" -- "$cur"))
return ;;
setup)
return ;;
help)
COMPREPLY=($(compgen -W "$commands" -- "$cur"))
return ;;
esac
if [[ $cword -eq 1 ]]; then
COMPREPLY=($(compgen -W "$commands $global_opts" -- "$cur"))
fi
}
complete -F _mcpctl mcpctl

completions/mcpctl.fish Normal file

@@ -0,0 +1,81 @@
# mcpctl fish completions
set -l commands config status get describe instance instances apply setup claude project projects backup restore help
# Disable file completions by default
complete -c mcpctl -f
# Global options
complete -c mcpctl -s v -l version -d 'Show version'
complete -c mcpctl -s o -l output -d 'Output format' -xa 'table json yaml'
complete -c mcpctl -l daemon-url -d 'mcpd daemon URL' -x
complete -c mcpctl -s h -l help -d 'Show help'
# Top-level commands
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a config -d 'Manage configuration'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a status -d 'Show status and connectivity'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a get -d 'List resources'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a describe -d 'Show resource details'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a instance -d 'Manage instances'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a apply -d 'Apply configuration from file'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a setup -d 'Interactive setup wizard'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a claude -d 'Manage Claude .mcp.json'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a project -d 'Manage projects'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a backup -d 'Backup configuration'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a restore -d 'Restore from backup'
complete -c mcpctl -n "not __fish_seen_subcommand_from $commands" -a help -d 'Show help'
# get/describe resources
complete -c mcpctl -n "__fish_seen_subcommand_from get describe" -a 'servers profiles projects instances' -d 'Resource type'
# config subcommands
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from view set path reset" -a view -d 'Show configuration'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from view set path reset" -a set -d 'Set a config value'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from view set path reset" -a path -d 'Show config file path'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from view set path reset" -a reset -d 'Reset to defaults'
# instance subcommands
set -l instance_cmds list ls start stop restart remove rm logs inspect
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and not __fish_seen_subcommand_from $instance_cmds" -a list -d 'List instances'
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and not __fish_seen_subcommand_from $instance_cmds" -a start -d 'Start instance'
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and not __fish_seen_subcommand_from $instance_cmds" -a stop -d 'Stop instance'
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and not __fish_seen_subcommand_from $instance_cmds" -a restart -d 'Restart instance'
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and not __fish_seen_subcommand_from $instance_cmds" -a remove -d 'Remove instance'
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and not __fish_seen_subcommand_from $instance_cmds" -a logs -d 'Get logs'
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and not __fish_seen_subcommand_from $instance_cmds" -a inspect -d 'Inspect container'
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and __fish_seen_subcommand_from logs" -l tail -d 'Number of lines' -x
complete -c mcpctl -n "__fish_seen_subcommand_from instance instances; and __fish_seen_subcommand_from logs" -l since -d 'Since timestamp' -x
# claude subcommands
set -l claude_cmds generate show add remove
complete -c mcpctl -n "__fish_seen_subcommand_from claude; and not __fish_seen_subcommand_from $claude_cmds" -a generate -d 'Generate .mcp.json'
complete -c mcpctl -n "__fish_seen_subcommand_from claude; and not __fish_seen_subcommand_from $claude_cmds" -a show -d 'Show .mcp.json'
complete -c mcpctl -n "__fish_seen_subcommand_from claude; and not __fish_seen_subcommand_from $claude_cmds" -a add -d 'Add server entry'
complete -c mcpctl -n "__fish_seen_subcommand_from claude; and not __fish_seen_subcommand_from $claude_cmds" -a remove -d 'Remove server entry'
complete -c mcpctl -n "__fish_seen_subcommand_from claude; and __fish_seen_subcommand_from $claude_cmds" -s p -l path -d 'Path to .mcp.json' -rF
# project subcommands
set -l project_cmds list ls create delete rm show profiles set-profiles
complete -c mcpctl -n "__fish_seen_subcommand_from project projects; and not __fish_seen_subcommand_from $project_cmds" -a list -d 'List projects'
complete -c mcpctl -n "__fish_seen_subcommand_from project projects; and not __fish_seen_subcommand_from $project_cmds" -a create -d 'Create project'
complete -c mcpctl -n "__fish_seen_subcommand_from project projects; and not __fish_seen_subcommand_from $project_cmds" -a delete -d 'Delete project'
complete -c mcpctl -n "__fish_seen_subcommand_from project projects; and not __fish_seen_subcommand_from $project_cmds" -a show -d 'Show project'
complete -c mcpctl -n "__fish_seen_subcommand_from project projects; and not __fish_seen_subcommand_from $project_cmds" -a profiles -d 'List profiles'
complete -c mcpctl -n "__fish_seen_subcommand_from project projects; and not __fish_seen_subcommand_from $project_cmds" -a set-profiles -d 'Set profiles'
complete -c mcpctl -n "__fish_seen_subcommand_from project projects; and __fish_seen_subcommand_from create" -s d -l description -d 'Description' -x
# backup options
complete -c mcpctl -n "__fish_seen_subcommand_from backup" -s o -l output -d 'Output file' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from backup" -s p -l password -d 'Encryption password' -x
complete -c mcpctl -n "__fish_seen_subcommand_from backup" -s r -l resources -d 'Resources to backup' -xa 'servers profiles projects'
# restore options
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s i -l input -d 'Input file' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s p -l password -d 'Decryption password' -x
complete -c mcpctl -n "__fish_seen_subcommand_from restore" -s c -l conflict -d 'Conflict strategy' -xa 'skip overwrite fail'
# apply takes a file
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -F
# help completions
complete -c mcpctl -n "__fish_seen_subcommand_from help" -a "$commands"


@@ -10,3 +10,11 @@ contents:
dst: /usr/bin/mcpctl
file_info:
mode: 0755
- src: ./completions/mcpctl.bash
dst: /usr/share/bash-completion/completions/mcpctl
file_info:
mode: 0644
- src: ./completions/mcpctl.fish
dst: /usr/share/fish/vendor_completions.d/mcpctl.fish
file_info:
mode: 0644

pnpm-lock.yaml generated

@@ -83,19 +83,6 @@ importers:
specifier: ^6.0.0
version: 6.19.2(typescript@5.9.3)
src/local-proxy:
dependencies:
'@mcpctl/shared':
specifier: workspace:*
version: link:../shared
'@modelcontextprotocol/sdk':
specifier: ^1.0.0
version: 1.26.0(zod@3.25.76)
devDependencies:
'@types/node':
specifier: ^25.3.0
version: 25.3.0
src/mcpd:
dependencies:
'@fastify/cors':
@@ -116,6 +103,9 @@ importers:
'@prisma/client':
specifier: ^6.0.0
version: 6.19.2(prisma@6.19.2(typescript@5.9.3))(typescript@5.9.3)
bcrypt:
specifier: ^5.1.1
version: 5.1.1
dockerode:
specifier: ^4.0.9
version: 4.0.9
@@ -126,6 +116,9 @@ importers:
specifier: ^3.24.0
version: 3.25.76
devDependencies:
'@types/bcrypt':
specifier: ^5.0.2
version: 5.0.2
'@types/dockerode':
specifier: ^4.0.1
version: 4.0.1
@@ -133,6 +126,25 @@ importers:
specifier: ^25.3.0
version: 25.3.0
src/mcplocal:
dependencies:
'@fastify/cors':
specifier: ^10.0.0
version: 10.1.0
'@mcpctl/shared':
specifier: workspace:*
version: link:../shared
'@modelcontextprotocol/sdk':
specifier: ^1.0.0
version: 1.26.0(zod@3.25.76)
fastify:
specifier: ^5.0.0
version: 5.7.4
devDependencies:
'@types/node':
specifier: ^25.3.0
version: 25.3.0
src/shared:
dependencies:
zod:
@@ -565,6 +577,10 @@ packages:
resolution: {integrity: sha512-9I2Zn6+NJLfaGoz9jN3lpwDgAYvfGeNYdbAIjJOqzs4Tpc+VU3Jqq4IofSUBKajiDS8k9fZIg18/z13mpk1bsA==}
engines: {node: '>=8'}
'@mapbox/node-pre-gyp@1.0.11':
resolution: {integrity: sha512-Yhlar6v9WQgUp/He7BdgzOz8lqMQ8sU+jkCq7Wx8Myc5YFJLbEe7lgui/V7G1qB1DJykHSGwreceSaD60Y0PUQ==}
hasBin: true
'@modelcontextprotocol/sdk@1.26.0':
resolution: {integrity: sha512-Y5RmPncpiDtTXDbLKswIJzTqu2hyBKxTNsgKqKclDbhIgg1wgtf1fRuvxgTnRfcnxtvvgbIEcqUOzZrJ6iSReg==}
engines: {node: '>=18'}
@@ -766,6 +782,9 @@ packages:
'@standard-schema/spec@1.1.0':
resolution: {integrity: sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==}
'@types/bcrypt@5.0.2':
resolution: {integrity: sha512-6atioO8Y75fNcbmj0G7UjI9lXN2pQ/IGJ2FWT4a/btd0Lk9lQalHLKhkgKVZ3r+spnmWUKfbMi1GEe9wyHQfNQ==}
'@types/chai@5.2.3':
resolution: {integrity: sha512-Mw558oeA9fFbv65/y4mHtXDs9bPnFMZAL/jxdPFUpOHHIXX91mcgEHbS5Lahr+pwZFR8A7GQleRWeI6cGFC2UA==}
@@ -896,6 +915,9 @@ packages:
'@vitest/utils@4.0.18':
resolution: {integrity: sha512-msMRKLMVLWygpK3u2Hybgi4MNjcYJvwTb0Ru09+fOyCXIgT5raYP041DRRdiJiI3k/2U6SEbAETB3YtBrUkCFA==}
abbrev@1.1.1:
resolution: {integrity: sha512-nne9/IiQ/hzIhY6pdDnbBtz7DjPTKrY00P/zvPSm5pOFkl6xuGrGnXn/VtTNNfNtAfZ9/1RtehkszU9qcTii0Q==}
abstract-logging@2.0.1:
resolution: {integrity: sha512-2BjRTZxTPvheOvGbBslFSYOUkr+SjPtOnrLP33f+VIWLzezQpZcqVg7ja3L4dBXmzzgwT+a029jRx5PCi3JuiA==}
@@ -913,6 +935,10 @@ packages:
engines: {node: '>=0.4.0'}
hasBin: true
agent-base@6.0.2:
resolution: {integrity: sha512-RZNwNclF7+MS/8bDg70amg32dyeZGZxiDuQmZxKLAlQjr3jGyLx+4Kkk58UO7D2QdgFIQCovuSuZESne6RG6XQ==}
engines: {node: '>= 6.0.0'}
ajv-formats@3.0.1:
resolution: {integrity: sha512-8iUql50EUR+uUcdRQ3HDqa6EVyo3docL8g5WJ3FNcWmu62IbkGUue/pEyLBW8VGKKucTPgqeks4fIU1DA4yowQ==}
peerDependencies:
@@ -935,6 +961,14 @@ packages:
resolution: {integrity: sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==}
engines: {node: '>=8'}
aproba@2.1.0:
resolution: {integrity: sha512-tLIEcj5GuR2RSTnxNKdkK0dJ/GrC7P38sUkiDmDuHfsHmbagTFAxDVIBltoklXEVIQ/f14IL8IMJ5pn9Hez1Ew==}
are-we-there-yet@2.0.0:
resolution: {integrity: sha512-Ci/qENmwHnsYo9xKIcUJN5LeDKdJ6R1Z1j9V/J5wyq8nh/mYPEpIKJbBZXtZjG04HiK7zV/p6Vs9952MrMeUIw==}
engines: {node: '>=10'}
deprecated: This package is no longer supported.
argparse@2.0.1:
resolution: {integrity: sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==}
@@ -968,6 +1002,10 @@ packages:
bcrypt-pbkdf@1.0.2:
resolution: {integrity: sha512-qeFIXtP4MSoi6NLqO12WfqARWWuCKi2Rn/9hJLEmtB5yTNr9DqFWkJRCf2qShWzPeAMRnOgCrq0sg/KLv5ES9w==}
bcrypt@5.1.1:
resolution: {integrity: sha512-AGBHOG5hPYZ5Xl9KXzU5iKq9516yEmvCKDg3ecP5kX2aB6UqTeXZxk2ELnDgDm6BQSMlLt9rDB4LoSMx0rYwww==}
engines: {node: '>= 10.0.0'}
bl@4.1.0:
resolution: {integrity: sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==}
@@ -975,6 +1013,9 @@ packages:
resolution: {integrity: sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA==}
engines: {node: '>=18'}
brace-expansion@1.1.12:
resolution: {integrity: sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==}
brace-expansion@2.0.2:
resolution: {integrity: sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==}
@@ -1027,6 +1068,10 @@ packages:
chownr@1.1.4:
resolution: {integrity: sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==}
chownr@2.0.0:
resolution: {integrity: sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ==}
engines: {node: '>=10'}
citty@0.1.6:
resolution: {integrity: sha512-tskPPKEs8D2KPafUypv2gxwJP8h/OaJmC82QQGGDQcHvXX43xF2VDACcJVmZ0EuSxkpO9Kc4MlrA3q0+FG58AQ==}
@@ -1048,10 +1093,17 @@ packages:
color-name@1.1.4:
resolution: {integrity: sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==}
color-support@1.1.3:
resolution: {integrity: sha512-qiBjkpbMLO/HL68y+lh4q0/O1MZFj2RX6X/KmMa3+gJD3z+WwI1ZzDHysvqHGS3mP6mznPckpXmw1nI9cJjyRg==}
hasBin: true
commander@13.1.0:
resolution: {integrity: sha512-/rFeCpNJQbhSZjGVwO9RFV3xPqbnERS8MmIQzCtD/zl6gpJuV/bMLuN92oG3F7d8oDEHHRrujSXNUr8fpjntKw==}
engines: {node: '>=18'}
concat-map@0.0.1:
resolution: {integrity: sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==}
confbox@0.2.4:
resolution: {integrity: sha512-ysOGlgTFbN2/Y6Cg3Iye8YKulHw+R2fNXHrgSmXISQdMnomY6eNDprVdW9R5xBguEqI954+S6709UyiO7B+6OQ==}
@@ -1059,6 +1111,9 @@ packages:
resolution: {integrity: sha512-5IKcdX0nnYavi6G7TtOhwkYzyjfJlatbjMjuLSfE2kYT5pMDOilZ4OvMhi637CcDICTmz3wARPoyhqyX1Y+XvA==}
engines: {node: ^14.18.0 || >=16.10.0}
console-control-strings@1.1.0:
resolution: {integrity: sha512-ty/fTekppD2fIwRvnZAVdeOiGd1c7YXEixbgJTNzqcxJWKQnjJ/V1bNEEE6hygpM3WjwHFUVK6HTjWSzV4a8sQ==}
content-disposition@1.0.1:
resolution: {integrity: sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q==}
engines: {node: '>=18'}
@@ -1110,6 +1165,9 @@ packages:
defu@6.1.4:
resolution: {integrity: sha512-mEQCMmwJu317oSz8CwdIOdwf3xMif1ttiM8LTufzc3g6kR+9Pe236twL8j3IYT1F7GfRgGcW6MWxzZjLIkuHIg==}
delegates@1.0.0:
resolution: {integrity: sha512-bd2L678uiWATM6m5Z1VzNCErI3jiGzt6HGY8OVICs40JQq/HALfbyNJmp0UDakEY4pMMaN0Ly5om/B1VI/+xfQ==}
depd@2.0.0:
resolution: {integrity: sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==}
engines: {node: '>= 0.8'}
@@ -1121,6 +1179,10 @@ packages:
destr@2.0.5:
resolution: {integrity: sha512-ugFTXCtDZunbzasqBxrK93Ik/DRYsO6S/fedkWEMKqt04xZ4csmnmwGDBAb07QWNaGMAmnTIemsYZCksjATwsA==}
detect-libc@2.1.2:
resolution: {integrity: sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==}
engines: {node: '>=8'}
docker-modem@5.0.6:
resolution: {integrity: sha512-ens7BiayssQz/uAxGzH8zGXCtiV24rRWXdjNha5V4zSOcxmAZsfGVm/PPFbwQdqEkDnhG+SyR9E3zSHUbOKXBQ==}
engines: {node: '>= 8.0'}
@@ -1345,6 +1407,13 @@ packages:
fs-constants@1.0.0:
resolution: {integrity: sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==}
fs-minipass@2.1.0:
resolution: {integrity: sha512-V/JgOLFCS+R6Vcq0slCuaeWEdNC3ouDlJMNIsacH2VtALiu9mV4LPrHc5cDl8k5aw6J8jwgWWpiTo5RYhmIzvg==}
engines: {node: '>= 8'}
fs.realpath@1.0.0:
resolution: {integrity: sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==}
fsevents@2.3.3:
resolution: {integrity: sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==}
engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0}
@@ -1353,6 +1422,11 @@ packages:
function-bind@1.1.2:
resolution: {integrity: sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==}
gauge@3.0.2:
resolution: {integrity: sha512-+5J6MS/5XksCuXq++uFRsnUd7Ovu1XenbeuIuNRJxYWjgQbPuFhT14lAvsWfqfAmnwluf1OwMjz39HjfLPci0Q==}
engines: {node: '>=10'}
deprecated: This package is no longer supported.
get-caller-file@2.0.5:
resolution: {integrity: sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==}
engines: {node: 6.* || 8.* || >= 10.*}
@@ -1380,6 +1454,10 @@ packages:
resolution: {integrity: sha512-Wjlyrolmm8uDpm/ogGyXZXb1Z+Ca2B8NbJwqBVg0axK9GbBeoS7yGV6vjXnYdGm6X53iehEuxxbyiKp8QmN4Vw==}
engines: {node: 18 || 20 || >=22}
glob@7.2.3:
resolution: {integrity: sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==}
deprecated: Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me
gopd@1.2.0:
resolution: {integrity: sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==}
engines: {node: '>= 0.4'}
@@ -1392,6 +1470,9 @@ packages:
resolution: {integrity: sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==}
engines: {node: '>= 0.4'}
has-unicode@2.0.1:
resolution: {integrity: sha512-8Rf9Y83NBReMnx0gFzA8JImQACstCYWUplepDa9xprwwtmgEZUF0h/i5xSA625zB/I37EtrswSST6OXxwaaIJQ==}
hasown@2.0.2:
resolution: {integrity: sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==}
engines: {node: '>= 0.4'}
@@ -1411,6 +1492,10 @@ packages:
resolution: {integrity: sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ==}
engines: {node: '>= 0.8'}
https-proxy-agent@5.0.1:
resolution: {integrity: sha512-dFcAjpTQFgoLMzC2VwU+C/CbS7uRL0lWmxDITmqm7C+7F0Odmj6s9l6alZc6AELXhrnggM2CeWSXHGOdX2YtwA==}
engines: {node: '>= 6'}
iconv-lite@0.7.2:
resolution: {integrity: sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw==}
engines: {node: '>=0.10.0'}
@@ -1430,6 +1515,10 @@ packages:
resolution: {integrity: sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==}
engines: {node: '>=0.8.19'}
inflight@1.0.6:
resolution: {integrity: sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==}
deprecated: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
inherits@2.0.4:
resolution: {integrity: sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==}
@@ -1546,6 +1635,10 @@ packages:
magicast@0.5.2:
resolution: {integrity: sha512-E3ZJh4J3S9KfwdjZhe2afj6R9lGIN5Pher1pF39UGrXRqq/VDaGVIGN13BjHd2u8B61hArAGOnso7nBOouW3TQ==}
make-dir@3.1.0:
resolution: {integrity: sha512-g3FeP20LNwhALb/6Cz6Dd4F2ngze0jz7tbzrD2wAV+o9FeNHe4rL+yK2md0J/fiSf1sa1ADhXqi5+oVwOM/eGw==}
engines: {node: '>=8'}
make-dir@4.0.0:
resolution: {integrity: sha512-hXdUTZYIVOt1Ex//jAQi+wTZZpUpwBj/0QsOzqegb3rGMMeJiSEu5xLHnYfBrRV4RH2+OCSOO95Is/7x1WJ4bw==}
engines: {node: '>=10'}
@@ -1574,17 +1667,37 @@ packages:
resolution: {integrity: sha512-+G4CpNBxa5MprY+04MbgOw1v7So6n5JY166pFi9KfYwT78fxScCeSNQSNzp6dpPSW2rONOps6Ocam1wFhCgoVw==}
engines: {node: 18 || 20 || >=22}
minimatch@3.1.2:
resolution: {integrity: sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==}
minimatch@9.0.5:
resolution: {integrity: sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==}
engines: {node: '>=16 || 14 >=14.17'}
minipass@3.3.6:
resolution: {integrity: sha512-DxiNidxSEK+tHG6zOIklvNOwm3hvCrbUrdtzY74U6HKTJxvIDfOUL5W5P2Ghd3DTkhhKPYGqeNUIh5qcM4YBfw==}
engines: {node: '>=8'}
minipass@5.0.0:
resolution: {integrity: sha512-3FnjYuehv9k6ovOEbyOswadCDPX1piCfhV8ncmYtHOjuPwylVWsghTLo7rabjC3Rx5xD4HDx8Wm1xnMF7S5qFQ==}
engines: {node: '>=8'}
minipass@7.1.3:
resolution: {integrity: sha512-tEBHqDnIoM/1rXME1zgka9g6Q2lcoCkxHLuc7ODJ5BxbP5d4c2Z5cGgtXAku59200Cx7diuHTOYfSBD8n6mm8A==}
engines: {node: '>=16 || 14 >=14.17'}
minizlib@2.1.2:
resolution: {integrity: sha512-bAxsR8BVfj60DWXHE3u30oHzfl4G7khkSuPW+qvpd7jFRHm7dLxOjUk1EHACJ/hxLY8phGJ0YhYHZo7jil7Qdg==}
engines: {node: '>= 8'}
mkdirp-classic@0.5.3:
resolution: {integrity: sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A==}
mkdirp@1.0.4:
resolution: {integrity: sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==}
engines: {node: '>=10'}
hasBin: true
mnemonist@0.40.0:
resolution: {integrity: sha512-kdd8AFNig2AD5Rkih7EPCXhu/iMvwevQFX/uEiGhZyPZi7fHqOoF4V4kHLpCfysxXMgQ4B52kdPMCwARshKvEg==}
@@ -1610,9 +1723,30 @@ packages:
resolution: {integrity: sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==}
engines: {node: '>= 0.6'}
node-addon-api@5.1.0:
resolution: {integrity: sha512-eh0GgfEkpnoWDq+VY8OyvYhFEzBk6jIYbRKdIlyTiAXIVJ8PyBaKb0rp7oDtoddbdoHWhq8wwr+XZ81F1rpNdA==}
node-fetch-native@1.6.7:
resolution: {integrity: sha512-g9yhqoedzIUm0nTnTqAQvueMPVOuIY16bqgAJJC8XOOubYFNwz6IER9qs0Gq2Xd0+CecCKFjtdDTMA4u4xG06Q==}
node-fetch@2.7.0:
resolution: {integrity: sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==}
engines: {node: 4.x || >=6.0.0}
peerDependencies:
encoding: ^0.1.0
peerDependenciesMeta:
encoding:
optional: true
nopt@5.0.0:
resolution: {integrity: sha512-Tbj67rffqceeLpcRXrT7vKAN8CwfPeIBgM7E6iBkmKLV7bEMwpGgYLGv0jACUsECaa/vuxP0IjEont6umdMgtQ==}
engines: {node: '>=6'}
hasBin: true
npmlog@5.0.1:
resolution: {integrity: sha512-AqZtDUWOMKs1G/8lwylVjrdYgqA4d9nu8hc+0gzRxlDb1I10+FHBGMXs6aiQHFdCUUlqH99MUMuLfzWDNDtfxw==}
deprecated: This package is no longer supported.
nypm@0.6.5:
resolution: {integrity: sha512-K6AJy1GMVyfyMXRVB88700BJqNUkByijGJM8kEHpLdcAt+vSQAVfkWWHYzuRXHSY6xA2sNc5RjTj0p9rE2izVQ==}
engines: {node: '>=18'}
@@ -1669,6 +1803,10 @@ packages:
resolution: {integrity: sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==}
engines: {node: '>=8'}
path-is-absolute@1.0.1:
resolution: {integrity: sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==}
engines: {node: '>=0.10.0'}
path-key@3.1.1:
resolution: {integrity: sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==}
engines: {node: '>=8'}
@@ -1804,6 +1942,11 @@ packages:
rfdc@1.4.1:
resolution: {integrity: sha512-q1b3N5QkRUWUl7iyylaaj3kOpIT0N2i9MqIEQXP73GVsN9cw3fdx8X63cEmWhJGi2PPCF23Ijp7ktmd39rawIA==}
rimraf@3.0.2:
resolution: {integrity: sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==}
deprecated: Rimraf versions prior to v4 are no longer supported
hasBin: true
rimraf@6.1.3:
resolution: {integrity: sha512-LKg+Cr2ZF61fkcaK1UdkH2yEBBKnYjTyWzTJT6KNPcSPaiT7HSdhtMXQuN5wkTX0Xu72KQ1l8S42rlmexS2hSA==}
engines: {node: 20 || >=22}
@@ -1841,6 +1984,10 @@ packages:
secure-json-parse@4.1.0:
resolution: {integrity: sha512-l4KnYfEyqYJxDwlNVyRfO2E4NTHfMKAWdUuA8J0yve2Dz/E/PdBepY03RvyJpssIpRFwJoCD55wA+mEDs6ByWA==}
semver@6.3.1:
resolution: {integrity: sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==}
hasBin: true
semver@7.7.4:
resolution: {integrity: sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==}
engines: {node: '>=10'}
@@ -1854,6 +2001,9 @@ packages:
resolution: {integrity: sha512-xRXBn0pPqQTVQiC8wyQrKs2MOlX24zQ0POGaj0kultvoOCstBQM5yvOhAVSUwOMjQtTvsPWoNCHfPGwaaQJhTw==}
engines: {node: '>= 18'}
set-blocking@2.0.0:
resolution: {integrity: sha512-KiKBS8AnWGEyLzofFfmvKwpdPzqiy16LvQfK3yv/fVH7Bj13/wl3JSR1J+rfgRE9q7xUJK4qvgS8raSOeLUehw==}
set-cookie-parser@2.7.2:
resolution: {integrity: sha512-oeM1lpU/UvhTxw+g3cIfxXHyJRc/uidd3yK1P242gzHds0udQBYzs3y8j4gCCW+ZJ7ad0yctld8RYO+bdurlvw==}
@@ -1887,6 +2037,9 @@ packages:
siginfo@2.0.0:
resolution: {integrity: sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==}
signal-exit@3.0.7:
resolution: {integrity: sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==}
signal-exit@4.1.0:
resolution: {integrity: sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==}
engines: {node: '>=14'}
@@ -1941,6 +2094,11 @@ packages:
resolution: {integrity: sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ==}
engines: {node: '>=6'}
tar@6.2.1:
resolution: {integrity: sha512-DZ4yORTwrbTj/7MZYq2w+/ZFdI6OZ/f9SFHR+71gIVUZhOQPHzVCLpvRnPgyaMpfWxxk/4ONva3GQSyNIKRv6A==}
engines: {node: '>=10'}
deprecated: Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me
thread-stream@4.0.0:
resolution: {integrity: sha512-4iMVL6HAINXWf1ZKZjIPcz5wYaOdPhtO8ATvZ+Xqp3BTdaqtAwQkNmKORqcIo5YkQqGXq5cwfswDwMqqQNrpJA==}
engines: {node: '>=20'}
@@ -1968,6 +2126,9 @@ packages:
resolution: {integrity: sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==}
engines: {node: '>=0.6'}
tr46@0.0.3:
resolution: {integrity: sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==}
ts-api-utils@2.4.0:
resolution: {integrity: sha512-3TaVTaAv2gTiMB35i3FiGJaRfwb3Pyn/j3m/bfAvGe8FB7CF6u+LMYqYlDh7reQf7UNvoTvdfAqHGmPGOSsPmA==}
engines: {node: '>=18.12'}
@@ -2096,6 +2257,12 @@ packages:
jsdom:
optional: true
webidl-conversions@3.0.1:
resolution: {integrity: sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ==}
whatwg-url@5.0.0:
resolution: {integrity: sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==}
which@2.0.2:
resolution: {integrity: sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==}
engines: {node: '>= 8'}
@@ -2106,6 +2273,9 @@ packages:
engines: {node: '>=8'}
hasBin: true
wide-align@1.1.5:
resolution: {integrity: sha512-eDMORYaPNZ4sQIuuYPDHdQvf4gyCF9rEEV/yPxGfwPkRodwEgiMUUXTx/dex+Me0wxx53S+NgUHaP7y3MGlDmg==}
word-wrap@1.2.5:
resolution: {integrity: sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==}
engines: {node: '>=0.10.0'}
@@ -2125,6 +2295,9 @@ packages:
resolution: {integrity: sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==}
engines: {node: '>=10'}
yallist@4.0.0:
resolution: {integrity: sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==}
yargs-parser@21.1.1:
resolution: {integrity: sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==}
engines: {node: '>=12'}
@@ -2487,6 +2660,21 @@ snapshots:
'@lukeed/ms@2.0.2': {}
'@mapbox/node-pre-gyp@1.0.11':
dependencies:
detect-libc: 2.1.2
https-proxy-agent: 5.0.1
make-dir: 3.1.0
node-fetch: 2.7.0
nopt: 5.0.0
npmlog: 5.0.1
rimraf: 3.0.2
semver: 7.7.4
tar: 6.2.1
transitivePeerDependencies:
- encoding
- supports-color
'@modelcontextprotocol/sdk@1.26.0(zod@3.25.76)':
dependencies:
'@hono/node-server': 1.19.9(hono@4.12.0)
@@ -2646,6 +2834,10 @@ snapshots:
'@standard-schema/spec@1.1.0': {}
'@types/bcrypt@5.0.2':
dependencies:
'@types/node': 25.3.0
'@types/chai@5.2.3':
dependencies:
'@types/deep-eql': 4.0.2
@@ -2828,6 +3020,8 @@ snapshots:
'@vitest/pretty-format': 4.0.18
tinyrainbow: 3.0.3
abbrev@1.1.1: {}
abstract-logging@2.0.1: {}

accepts@2.0.0:
@@ -2841,6 +3035,12 @@ snapshots:
acorn@8.16.0: {}
agent-base@6.0.2:
dependencies:
debug: 4.4.3
transitivePeerDependencies:
- supports-color
ajv-formats@3.0.1(ajv@8.18.0):
optionalDependencies:
ajv: 8.18.0
@@ -2865,6 +3065,13 @@ snapshots:
dependencies:
color-convert: 2.0.1
aproba@2.1.0: {}
are-we-there-yet@2.0.0:
dependencies:
delegates: 1.0.0
readable-stream: 3.6.2
argparse@2.0.1: {}

asn1@0.2.6:
@@ -2896,6 +3103,14 @@ snapshots:
dependencies:
tweetnacl: 0.14.5
bcrypt@5.1.1:
dependencies:
'@mapbox/node-pre-gyp': 1.0.11
node-addon-api: 5.1.0
transitivePeerDependencies:
- encoding
- supports-color
bl@4.1.0:
dependencies:
buffer: 5.7.1
@@ -2916,6 +3131,11 @@ snapshots:
transitivePeerDependencies:
- supports-color
brace-expansion@1.1.12:
dependencies:
balanced-match: 1.0.2
concat-map: 0.0.1
brace-expansion@2.0.2:
dependencies:
balanced-match: 1.0.2
@@ -2971,6 +3191,8 @@ snapshots:
chownr@1.1.4: {}
chownr@2.0.0: {}
citty@0.1.6:
dependencies:
consola: 3.4.2
@@ -2991,12 +3213,18 @@ snapshots:
color-name@1.1.4: {}
color-support@1.1.3: {}
commander@13.1.0: {}
concat-map@0.0.1: {}
confbox@0.2.4: {}

consola@3.4.2: {}
console-control-strings@1.1.0: {}
content-disposition@1.0.1: {}

content-type@1.0.5: {}
@@ -3034,12 +3262,16 @@ snapshots:
defu@6.1.4: {}
delegates@1.0.0: {}
depd@2.0.0: {}

dequal@2.0.3: {}

destr@2.0.5: {}
detect-libc@2.1.2: {}
docker-modem@5.0.6:
dependencies:
debug: 4.4.3
@@ -3349,11 +3581,29 @@ snapshots:
fs-constants@1.0.0: {}
fs-minipass@2.1.0:
dependencies:
minipass: 3.3.6
fs.realpath@1.0.0: {}
fsevents@2.3.3:
optional: true

function-bind@1.1.2: {}
gauge@3.0.2:
dependencies:
aproba: 2.1.0
color-support: 1.1.3
console-control-strings: 1.1.0
has-unicode: 2.0.1
object-assign: 4.1.1
signal-exit: 3.0.7
string-width: 4.2.3
strip-ansi: 6.0.1
wide-align: 1.1.5
get-caller-file@2.0.5: {}

get-intrinsic@1.3.0:
@@ -3397,12 +3647,23 @@ snapshots:
minipass: 7.1.3
path-scurry: 2.0.2
glob@7.2.3:
dependencies:
fs.realpath: 1.0.0
inflight: 1.0.6
inherits: 2.0.4
minimatch: 3.1.2
once: 1.4.0
path-is-absolute: 1.0.1
gopd@1.2.0: {}

has-flag@4.0.0: {}

has-symbols@1.1.0: {}
has-unicode@2.0.1: {}
hasown@2.0.2:
dependencies:
function-bind: 1.1.2
@@ -3421,6 +3682,13 @@ snapshots:
statuses: 2.0.2
toidentifier: 1.0.1
https-proxy-agent@5.0.1:
dependencies:
agent-base: 6.0.2
debug: 4.4.3
transitivePeerDependencies:
- supports-color
iconv-lite@0.7.2:
dependencies:
safer-buffer: 2.1.2
@@ -3433,6 +3701,11 @@ snapshots:
imurmurhash@0.1.4: {}
inflight@1.0.6:
dependencies:
once: 1.4.0
wrappy: 1.0.2
inherits@2.0.4: {}

inquirer@12.11.1(@types/node@25.3.0):
@@ -3537,6 +3810,10 @@ snapshots:
'@babel/types': 7.29.0
source-map-js: 1.2.1
make-dir@3.1.0:
dependencies:
semver: 6.3.1
make-dir@4.0.0:
dependencies:
semver: 7.7.4
@@ -3557,14 +3834,31 @@ snapshots:
dependencies:
brace-expansion: 5.0.2
minimatch@3.1.2:
dependencies:
brace-expansion: 1.1.12
minimatch@9.0.5:
dependencies:
brace-expansion: 2.0.2
minipass@3.3.6:
dependencies:
yallist: 4.0.0
minipass@5.0.0: {}
minipass@7.1.3: {}
minizlib@2.1.2:
dependencies:
minipass: 3.3.6
yallist: 4.0.0
mkdirp-classic@0.5.3: {}
mkdirp@1.0.4: {}
mnemonist@0.40.0:
dependencies:
obliterator: 2.0.5
@@ -3582,8 +3876,25 @@ snapshots:
negotiator@1.0.0: {}
node-addon-api@5.1.0: {}
node-fetch-native@1.6.7: {}
node-fetch@2.7.0:
dependencies:
whatwg-url: 5.0.0
nopt@5.0.0:
dependencies:
abbrev: 1.1.1
npmlog@5.0.1:
dependencies:
are-we-there-yet: 2.0.0
console-control-strings: 1.1.0
gauge: 3.0.2
set-blocking: 2.0.0
nypm@0.6.5:
dependencies:
citty: 0.2.1
@@ -3633,6 +3944,8 @@ snapshots:
path-exists@4.0.0: {}
path-is-absolute@1.0.1: {}
path-key@3.1.1: {}

path-scurry@2.0.2:
@@ -3770,6 +4083,10 @@ snapshots:
rfdc@1.4.1: {}
rimraf@3.0.2:
dependencies:
glob: 7.2.3
rimraf@6.1.3:
dependencies:
glob: 13.0.6
@@ -3834,6 +4151,8 @@ snapshots:
secure-json-parse@4.1.0: {}
semver@6.3.1: {}
semver@7.7.4: {}

send@1.2.1:
@@ -3861,6 +4180,8 @@ snapshots:
transitivePeerDependencies:
- supports-color
set-blocking@2.0.0: {}
set-cookie-parser@2.7.2: {}

setprototypeof@1.2.0: {}
@@ -3901,6 +4222,8 @@ snapshots:
siginfo@2.0.0: {}
signal-exit@3.0.7: {}
signal-exit@4.1.0: {}

sonic-boom@4.2.1:
@@ -3960,6 +4283,15 @@ snapshots:
inherits: 2.0.4
readable-stream: 3.6.2
tar@6.2.1:
dependencies:
chownr: 2.0.0
fs-minipass: 2.1.0
minipass: 5.0.0
minizlib: 2.1.2
mkdirp: 1.0.4
yallist: 4.0.0
thread-stream@4.0.0:
dependencies:
real-require: 0.2.0
@@ -3979,6 +4311,8 @@ snapshots:
toidentifier@1.0.1: {}
tr46@0.0.3: {}
ts-api-utils@2.4.0(typescript@5.9.3):
dependencies:
typescript: 5.9.3
@@ -4073,6 +4407,13 @@ snapshots:
- tsx
- yaml
webidl-conversions@3.0.1: {}
whatwg-url@5.0.0:
dependencies:
tr46: 0.0.3
webidl-conversions: 3.0.1
which@2.0.2:
dependencies:
isexe: 2.0.0
@@ -4082,6 +4423,10 @@ snapshots:
siginfo: 2.0.0
stackback: 0.0.2
wide-align@1.1.5:
dependencies:
string-width: 4.2.3
word-wrap@1.2.5: {}
wrap-ansi@6.2.0:
@@ -4100,6 +4445,8 @@ snapshots:
y18n@5.0.8: {}
yallist@4.0.0: {}
yargs-parser@21.1.1: {}
yargs@17.7.2:

View File

@@ -10,6 +10,9 @@ if [ -f .env ]; then
set -a; source .env; set +a
fi
# Ensure tools are on PATH
export PATH="$HOME/.npm-global/bin:$HOME/.bun/bin:$HOME/.local/bin:$PATH"
echo "==> Building TypeScript..."
pnpm build

View File

@@ -2,7 +2,8 @@ import http from 'node:http';
export interface ApiClientOptions {
baseUrl: string;
timeout?: number | undefined;
token?: string | undefined;
}
export interface ApiResponse<T = unknown> {
@@ -20,16 +21,20 @@ export class ApiError extends Error {
}
}
function request<T>(method: string, url: string, timeout: number, body?: unknown, token?: string): Promise<ApiResponse<T>> {
return new Promise((resolve, reject) => {
const parsed = new URL(url);
const headers: Record<string, string> = { 'Content-Type': 'application/json' };
if (token) {
headers['Authorization'] = `Bearer ${token}`;
}
const opts: http.RequestOptions = {
hostname: parsed.hostname,
port: parsed.port,
path: parsed.pathname + parsed.search,
method,
timeout,
headers,
};
const req = http.request(opts, (res) => {
@@ -64,28 +69,30 @@ function request<T>(method: string, url: string, timeout: number, body?: unknown
export class ApiClient {
private baseUrl: string;
private timeout: number;
private token?: string | undefined;
constructor(opts: ApiClientOptions) {
this.baseUrl = opts.baseUrl.replace(/\/$/, '');
this.timeout = opts.timeout ?? 10000;
this.token = opts.token;
}
async get<T = unknown>(path: string): Promise<T> {
const res = await request<T>('GET', `${this.baseUrl}${path}`, this.timeout, undefined, this.token);
return res.data;
}
async post<T = unknown>(path: string, body?: unknown): Promise<T> {
const res = await request<T>('POST', `${this.baseUrl}${path}`, this.timeout, body, this.token);
return res.data;
}
async put<T = unknown>(path: string, body?: unknown): Promise<T> {
const res = await request<T>('PUT', `${this.baseUrl}${path}`, this.timeout, body, this.token);
return res.data;
}
async delete(path: string): Promise<void> {
await request('DELETE', `${this.baseUrl}${path}`, this.timeout, undefined, this.token);
}
}
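A minimal standalone sketch of the header logic inside `request()` (the helper name `buildHeaders` is illustrative, not part of the client): the bearer token is attached only when one is configured, so unauthenticated requests are unchanged by this feature.

```typescript
// Illustrative helper mirroring the conditional Authorization header in request().
function buildHeaders(token?: string): Record<string, string> {
  const headers: Record<string, string> = { 'Content-Type': 'application/json' };
  if (token) {
    headers['Authorization'] = `Bearer ${token}`;
  }
  return headers;
}

console.log(buildHeaders('my-token')['Authorization']); // Bearer my-token
console.log('Authorization' in buildHeaders());         // false
```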

View File

@@ -0,0 +1,50 @@
import { existsSync, mkdirSync, readFileSync, writeFileSync, unlinkSync, chmodSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';
export interface StoredCredentials {
token: string;
mcpdUrl: string;
user: string;
expiresAt?: string;
}
export interface CredentialsDeps {
configDir: string;
}
function defaultConfigDir(): string {
return join(homedir(), '.mcpctl');
}
function credentialsPath(deps?: Partial<CredentialsDeps>): string {
return join(deps?.configDir ?? defaultConfigDir(), 'credentials');
}
export function saveCredentials(creds: StoredCredentials, deps?: Partial<CredentialsDeps>): void {
const dir = deps?.configDir ?? defaultConfigDir();
if (!existsSync(dir)) {
mkdirSync(dir, { recursive: true });
}
const path = credentialsPath(deps);
writeFileSync(path, JSON.stringify(creds, null, 2) + '\n', 'utf-8');
chmodSync(path, 0o600);
}
export function loadCredentials(deps?: Partial<CredentialsDeps>): StoredCredentials | null {
const path = credentialsPath(deps);
if (!existsSync(path)) {
return null;
}
const raw = readFileSync(path, 'utf-8');
return JSON.parse(raw) as StoredCredentials;
}
export function deleteCredentials(deps?: Partial<CredentialsDeps>): boolean {
const path = credentialsPath(deps);
if (!existsSync(path)) {
return false;
}
unlinkSync(path);
return true;
}
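The write path above can be exercised standalone. This sketch uses a throwaway temp directory and hypothetical credential values; the 0o600 check assumes a POSIX filesystem.

```typescript
import { mkdtempSync, writeFileSync, chmodSync, statSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Hypothetical payload; fields mirror StoredCredentials.
const creds = { token: 'tok123', mcpdUrl: 'http://localhost:3100', user: 'alice@example.com' };

const dir = mkdtempSync(join(tmpdir(), 'mcpctl-demo-'));
const path = join(dir, 'credentials');

// Write, then restrict to owner read/write so other local users cannot read the token.
writeFileSync(path, JSON.stringify(creds, null, 2) + '\n', 'utf-8');
chmodSync(path, 0o600);

console.log((statSync(path).mode & 0o777).toString(8)); // "600" on POSIX systems
console.log((JSON.parse(readFileSync(path, 'utf-8')) as typeof creds).user);
```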

View File

@@ -0,0 +1,2 @@
export { saveCredentials, loadCredentials, deleteCredentials } from './credentials.js';
export type { StoredCredentials, CredentialsDeps } from './credentials.js';

View File

@@ -0,0 +1,148 @@
import { Command } from 'commander';
import http from 'node:http';
import { loadConfig } from '../config/index.js';
import type { ConfigLoaderDeps } from '../config/index.js';
import { saveCredentials, loadCredentials, deleteCredentials } from '../auth/index.js';
import type { CredentialsDeps } from '../auth/index.js';
export interface PromptDeps {
input(message: string): Promise<string>;
password(message: string): Promise<string>;
}
export interface AuthCommandDeps {
configDeps: Partial<ConfigLoaderDeps>;
credentialsDeps: Partial<CredentialsDeps>;
prompt: PromptDeps;
log: (...args: string[]) => void;
loginRequest: (mcpdUrl: string, email: string, password: string) => Promise<LoginResponse>;
logoutRequest: (mcpdUrl: string, token: string) => Promise<void>;
}
interface LoginResponse {
token: string;
user: { email: string };
}
function defaultLoginRequest(mcpdUrl: string, email: string, password: string): Promise<LoginResponse> {
return new Promise((resolve, reject) => {
const url = new URL('/api/v1/auth/login', mcpdUrl);
const body = JSON.stringify({ email, password });
const opts: http.RequestOptions = {
hostname: url.hostname,
port: url.port,
path: url.pathname,
method: 'POST',
timeout: 10000,
headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(body) },
};
const req = http.request(opts, (res) => {
const chunks: Buffer[] = [];
res.on('data', (chunk: Buffer) => chunks.push(chunk));
res.on('end', () => {
const raw = Buffer.concat(chunks).toString('utf-8');
if (res.statusCode === 401) {
reject(new Error('Invalid credentials'));
return;
}
if ((res.statusCode ?? 0) >= 400) {
reject(new Error(`Login failed (${res.statusCode}): ${raw}`));
return;
}
resolve(JSON.parse(raw) as LoginResponse);
});
});
req.on('error', (err) => reject(new Error(`Cannot reach mcpd: ${err.message}`)));
req.on('timeout', () => { req.destroy(); reject(new Error('Login request timed out')); });
req.write(body);
req.end();
});
}
function defaultLogoutRequest(mcpdUrl: string, token: string): Promise<void> {
return new Promise((resolve) => {
const url = new URL('/api/v1/auth/logout', mcpdUrl);
const opts: http.RequestOptions = {
hostname: url.hostname,
port: url.port,
path: url.pathname,
method: 'POST',
timeout: 10000,
headers: { 'Authorization': `Bearer ${token}` },
};
const req = http.request(opts, (res) => {
res.resume();
res.on('end', () => resolve());
});
req.on('error', () => resolve()); // Don't fail logout on network errors
req.on('timeout', () => { req.destroy(); resolve(); });
req.end();
});
}
async function defaultInput(message: string): Promise<string> {
const { default: inquirer } = await import('inquirer');
const { answer } = await inquirer.prompt([{ type: 'input', name: 'answer', message }]);
return answer as string;
}
async function defaultPassword(message: string): Promise<string> {
const { default: inquirer } = await import('inquirer');
const { answer } = await inquirer.prompt([{ type: 'password', name: 'answer', message }]);
return answer as string;
}
const defaultDeps: AuthCommandDeps = {
configDeps: {},
credentialsDeps: {},
prompt: { input: defaultInput, password: defaultPassword },
log: (...args) => console.log(...args),
loginRequest: defaultLoginRequest,
logoutRequest: defaultLogoutRequest,
};
export function createLoginCommand(deps?: Partial<AuthCommandDeps>): Command {
const { configDeps, credentialsDeps, prompt, log, loginRequest } = { ...defaultDeps, ...deps };
return new Command('login')
.description('Authenticate with mcpd')
.option('--mcpd-url <url>', 'mcpd URL to authenticate against')
.action(async (opts: { mcpdUrl?: string }) => {
const config = loadConfig(configDeps);
const mcpdUrl = opts.mcpdUrl ?? config.mcpdUrl;
const email = await prompt.input('Email:');
const password = await prompt.password('Password:');
try {
const result = await loginRequest(mcpdUrl, email, password);
saveCredentials({
token: result.token,
mcpdUrl,
user: result.user.email,
}, credentialsDeps);
log(`Logged in as ${result.user.email}`);
} catch (err) {
log(`Login failed: ${(err as Error).message}`);
process.exitCode = 1;
}
});
}
export function createLogoutCommand(deps?: Partial<AuthCommandDeps>): Command {
const { credentialsDeps, log, logoutRequest } = { ...defaultDeps, ...deps };
return new Command('logout')
.description('Log out and remove stored credentials')
.action(async () => {
const creds = loadCredentials(credentialsDeps);
if (!creds) {
log('Not logged in');
return;
}
await logoutRequest(creds.mcpdUrl, creds.token);
deleteCredentials(credentialsDeps);
log('Logged out successfully');
});
}

View File

@@ -41,6 +41,9 @@ export function createConfigCommand(deps?: Partial<ConfigCommandDeps>): Command
updates[key] = parseInt(value, 10);
} else if (key === 'registries') {
updates[key] = value.split(',').map((s) => s.trim());
} else if (key === 'daemonUrl') {
// Backward compat: map daemonUrl to mcplocalUrl
updates['mcplocalUrl'] = value;
} else {
updates[key] = value;
}

View File

@@ -2,16 +2,19 @@ import { Command } from 'commander';
import http from 'node:http';
import { loadConfig } from '../config/index.js';
import type { ConfigLoaderDeps } from '../config/index.js';
import { loadCredentials } from '../auth/index.js';
import type { CredentialsDeps } from '../auth/index.js';
import { formatJson, formatYaml } from '../formatters/index.js';
import { APP_VERSION } from '@mcpctl/shared';
export interface StatusCommandDeps {
configDeps: Partial<ConfigLoaderDeps>;
credentialsDeps: Partial<CredentialsDeps>;
log: (...args: string[]) => void;
checkHealth: (url: string) => Promise<boolean>;
}
function defaultCheckHealth(url: string): Promise<boolean> {
return new Promise((resolve) => {
const req = http.get(`${url}/health`, { timeout: 3000 }, (res) => {
resolve(res.statusCode !== undefined && res.statusCode >= 200 && res.statusCode < 400);
@@ -27,24 +30,33 @@ function defaultCheckDaemon(url: string): Promise<boolean> {
const defaultDeps: StatusCommandDeps = {
configDeps: {},
credentialsDeps: {},
log: (...args) => console.log(...args),
checkHealth: defaultCheckHealth,
};
export function createStatusCommand(deps?: Partial<StatusCommandDeps>): Command {
const { configDeps, credentialsDeps, log, checkHealth } = { ...defaultDeps, ...deps };
return new Command('status')
.description('Show mcpctl status and connectivity')
.option('-o, --output <format>', 'output format (table, json, yaml)', 'table')
.action(async (opts: { output: string }) => {
const config = loadConfig(configDeps);
const creds = loadCredentials(credentialsDeps);
const [mcplocalReachable, mcpdReachable] = await Promise.all([
checkHealth(config.mcplocalUrl),
checkHealth(config.mcpdUrl),
]);
const status = {
version: APP_VERSION,
mcplocalUrl: config.mcplocalUrl,
mcplocalReachable,
mcpdUrl: config.mcpdUrl,
mcpdReachable,
auth: creds ? { user: creds.user } : null,
registries: config.registries,
outputFormat: config.outputFormat,
};
@@ -55,7 +67,9 @@ export function createStatusCommand(deps?: Partial<StatusCommandDeps>): Command
log(formatYaml(status));
} else {
log(`mcpctl v${status.version}`);
log(`mcplocal: ${status.mcplocalUrl} (${mcplocalReachable ? 'connected' : 'unreachable'})`);
log(`mcpd: ${status.mcpdUrl} (${mcpdReachable ? 'connected' : 'unreachable'})`);
log(`Auth: ${creds ? `logged in as ${creds.user}` : 'not logged in'}`);
log(`Registries: ${status.registries.join(', ')}`);
log(`Output: ${status.outputFormat}`);
}

View File

@@ -1,8 +1,12 @@
import { z } from 'zod';
export const McpctlConfigSchema = z.object({
/** mcplocal daemon endpoint (local LLM pre-processing proxy) */
mcplocalUrl: z.string().default('http://localhost:3200'),
/** mcpd daemon endpoint (remote instance manager) */
mcpdUrl: z.string().default('http://localhost:3100'),
/** @deprecated Use mcplocalUrl instead. Kept for backward compatibility. */
daemonUrl: z.string().optional(),
/** Active registries for search */
registries: z.array(z.enum(['official', 'glama', 'smithery'])).default(['official', 'glama', 'smithery']),
/** Cache TTL in milliseconds */
@@ -15,6 +19,13 @@ export const McpctlConfigSchema = z.object({
outputFormat: z.enum(['table', 'json', 'yaml']).default('table'),
/** Smithery API key */
smitheryApiKey: z.string().optional(),
}).transform((cfg) => {
// Backward compatibility: if old daemonUrl is set but mcplocalUrl wasn't explicitly changed,
// use daemonUrl as mcplocalUrl
if (cfg.daemonUrl && cfg.mcplocalUrl === 'http://localhost:3200') {
return { ...cfg, mcplocalUrl: cfg.daemonUrl };
}
return cfg;
});
export type McpctlConfig = z.infer<typeof McpctlConfigSchema>;
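The transform's precedence rule can be distilled zod-free (the helper and interface names here are illustrative): a legacy `daemonUrl` only wins while `mcplocalUrl` still holds its default, so an explicitly configured `mcplocalUrl` is never clobbered.

```typescript
// Zod-free sketch of the backward-compat transform above.
const MCPLOCAL_DEFAULT = 'http://localhost:3200';

interface RawConfig {
  mcplocalUrl: string;
  daemonUrl?: string;
}

function resolveMcplocalUrl(cfg: RawConfig): string {
  // Legacy daemonUrl applies only when mcplocalUrl was left at its default.
  if (cfg.daemonUrl && cfg.mcplocalUrl === MCPLOCAL_DEFAULT) {
    return cfg.daemonUrl;
  }
  return cfg.mcplocalUrl;
}

console.log(resolveMcplocalUrl({ mcplocalUrl: MCPLOCAL_DEFAULT, daemonUrl: 'http://legacy:3000' }));   // http://legacy:3000
console.log(resolveMcplocalUrl({ mcplocalUrl: 'http://explicit:9000', daemonUrl: 'http://legacy:3000' })); // http://explicit:9000
```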

View File

@@ -11,8 +11,10 @@ import { createSetupCommand } from './commands/setup.js';
import { createClaudeCommand } from './commands/claude.js';
import { createProjectCommand } from './commands/project.js';
import { createBackupCommand, createRestoreCommand } from './commands/backup.js';
import { createLoginCommand, createLogoutCommand } from './commands/auth.js';
import { ApiClient } from './api-client.js';
import { loadConfig } from './config/index.js';
import { loadCredentials } from './auth/index.js';
export function createProgram(): Command {
const program = new Command()
@@ -20,15 +22,28 @@ export function createProgram(): Command {
.description('Manage MCP servers like kubectl manages containers')
.version(APP_VERSION, '-v, --version')
.option('-o, --output <format>', 'output format (table, json, yaml)', 'table')
.option('--daemon-url <url>', 'mcplocal daemon URL')
.option('--direct', 'bypass mcplocal and connect directly to mcpd');
program.addCommand(createConfigCommand());
program.addCommand(createStatusCommand());
program.addCommand(createLoginCommand());
program.addCommand(createLogoutCommand());
// Resolve target URL: --direct goes to mcpd, default goes to mcplocal
const config = loadConfig();
const creds = loadCredentials();
const opts = program.opts();
let baseUrl: string;
if (opts.daemonUrl) {
baseUrl = opts.daemonUrl as string;
} else if (opts.direct) {
baseUrl = config.mcpdUrl;
} else {
baseUrl = config.mcplocalUrl;
}
const client = new ApiClient({ baseUrl, token: creds?.token ?? undefined });
const fetchResource = async (resource: string, id?: string): Promise<unknown[]> => {
if (id) {

View File

@@ -74,4 +74,27 @@ describe('ApiClient', () => {
const client = new ApiClient({ baseUrl: 'http://localhost:1' });
await expect(client.get('/anything')).rejects.toThrow();
});
it('sends Authorization header when token provided', async () => {
// We need a separate server to check the header
let receivedAuth = '';
const authServer = http.createServer((req, res) => {
receivedAuth = req.headers['authorization'] ?? '';
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ ok: true }));
});
const authPort = await new Promise<number>((resolve) => {
authServer.listen(0, () => {
const addr = authServer.address();
if (addr && typeof addr === 'object') resolve(addr.port);
});
});
try {
const client = new ApiClient({ baseUrl: `http://localhost:${authPort}`, token: 'my-token' });
await client.get('/test');
expect(receivedAuth).toBe('Bearer my-token');
} finally {
authServer.close();
}
});
});

View File

@@ -0,0 +1,59 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, rmSync, statSync, existsSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { saveCredentials, loadCredentials, deleteCredentials } from '../../src/auth/index.js';
let tempDir: string;
beforeEach(() => {
tempDir = mkdtempSync(join(tmpdir(), 'mcpctl-auth-test-'));
});
afterEach(() => {
rmSync(tempDir, { recursive: true, force: true });
});
describe('saveCredentials', () => {
it('saves credentials file', () => {
saveCredentials({ token: 'tok123', mcpdUrl: 'http://x:3100', user: 'alice@test.com' }, { configDir: tempDir });
expect(existsSync(join(tempDir, 'credentials'))).toBe(true);
});
it('sets 0600 permissions', () => {
saveCredentials({ token: 'tok123', mcpdUrl: 'http://x:3100', user: 'alice@test.com' }, { configDir: tempDir });
const stat = statSync(join(tempDir, 'credentials'));
expect(stat.mode & 0o777).toBe(0o600);
});
it('creates config dir if missing', () => {
const nested = join(tempDir, 'sub', 'dir');
saveCredentials({ token: 'tok', mcpdUrl: 'http://x:3100', user: 'bob' }, { configDir: nested });
expect(existsSync(join(nested, 'credentials'))).toBe(true);
});
});
describe('loadCredentials', () => {
it('returns null when no credentials file', () => {
expect(loadCredentials({ configDir: tempDir })).toBeNull();
});
it('round-trips credentials', () => {
const creds = { token: 'tok456', mcpdUrl: 'http://remote:3100', user: 'charlie@test.com', expiresAt: '2099-01-01' };
saveCredentials(creds, { configDir: tempDir });
const loaded = loadCredentials({ configDir: tempDir });
expect(loaded).toEqual(creds);
});
});
describe('deleteCredentials', () => {
it('returns false when no credentials file', () => {
expect(deleteCredentials({ configDir: tempDir })).toBe(false);
});
it('deletes credentials file', () => {
saveCredentials({ token: 'tok', mcpdUrl: 'http://x:3100', user: 'u' }, { configDir: tempDir });
expect(deleteCredentials({ configDir: tempDir })).toBe(true);
expect(existsSync(join(tempDir, 'credentials'))).toBe(false);
});
});

View File

@@ -0,0 +1,144 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { createLoginCommand, createLogoutCommand } from '../../src/commands/auth.js';
import { saveCredentials, loadCredentials } from '../../src/auth/index.js';
import { saveConfig, DEFAULT_CONFIG } from '../../src/config/index.js';
let tempDir: string;
let output: string[];
function log(...args: string[]) {
output.push(args.join(' '));
}
beforeEach(() => {
tempDir = mkdtempSync(join(tmpdir(), 'mcpctl-auth-cmd-test-'));
output = [];
});
afterEach(() => {
rmSync(tempDir, { recursive: true, force: true });
});
describe('login command', () => {
it('stores credentials on successful login', async () => {
const cmd = createLoginCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
prompt: {
input: async () => 'alice@test.com',
password: async () => 'secret123',
},
log,
loginRequest: async (_url, email, _password) => ({
token: 'session-token-123',
user: { email },
}),
logoutRequest: async () => {},
});
await cmd.parseAsync([], { from: 'user' });
expect(output[0]).toContain('Logged in as alice@test.com');
const creds = loadCredentials({ configDir: tempDir });
expect(creds).not.toBeNull();
expect(creds!.token).toBe('session-token-123');
expect(creds!.user).toBe('alice@test.com');
});
it('shows error on failed login', async () => {
const cmd = createLoginCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
prompt: {
input: async () => 'alice@test.com',
password: async () => 'wrong',
},
log,
loginRequest: async () => { throw new Error('Invalid credentials'); },
logoutRequest: async () => {},
});
await cmd.parseAsync([], { from: 'user' });
expect(output[0]).toContain('Login failed');
expect(output[0]).toContain('Invalid credentials');
const creds = loadCredentials({ configDir: tempDir });
expect(creds).toBeNull();
});
it('uses mcpdUrl from config', async () => {
saveConfig({ ...DEFAULT_CONFIG, mcpdUrl: 'http://custom:3100' }, { configDir: tempDir });
let capturedUrl = '';
const cmd = createLoginCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
prompt: {
input: async () => 'user@test.com',
password: async () => 'pass',
},
log,
loginRequest: async (url, email) => {
capturedUrl = url;
return { token: 'tok', user: { email } };
},
logoutRequest: async () => {},
});
await cmd.parseAsync([], { from: 'user' });
expect(capturedUrl).toBe('http://custom:3100');
});
it('allows --mcpd-url flag override', async () => {
let capturedUrl = '';
const cmd = createLoginCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
prompt: {
input: async () => 'user@test.com',
password: async () => 'pass',
},
log,
loginRequest: async (url, email) => {
capturedUrl = url;
return { token: 'tok', user: { email } };
},
logoutRequest: async () => {},
});
await cmd.parseAsync(['--mcpd-url', 'http://override:3100'], { from: 'user' });
expect(capturedUrl).toBe('http://override:3100');
});
});
describe('logout command', () => {
it('removes credentials on logout', async () => {
saveCredentials({ token: 'tok', mcpdUrl: 'http://x:3100', user: 'alice' }, { configDir: tempDir });
let logoutCalled = false;
const cmd = createLogoutCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
prompt: { input: async () => '', password: async () => '' },
log,
loginRequest: async () => ({ token: '', user: { email: '' } }),
logoutRequest: async () => { logoutCalled = true; },
});
await cmd.parseAsync([], { from: 'user' });
expect(output[0]).toContain('Logged out successfully');
expect(logoutCalled).toBe(true);
const creds = loadCredentials({ configDir: tempDir });
expect(creds).toBeNull();
});
it('shows not logged in when no credentials', async () => {
const cmd = createLogoutCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
prompt: { input: async () => '', password: async () => '' },
log,
loginRequest: async () => ({ token: '', user: { email: '' } }),
logoutRequest: async () => {},
});
await cmd.parseAsync([], { from: 'user' });
expect(output[0]).toContain('Not logged in');
});
});

View File

@@ -34,23 +34,38 @@ describe('config view', () => {
await cmd.parseAsync(['view'], { from: 'user' });
expect(output).toHaveLength(1);
const parsed = JSON.parse(output[0]) as Record<string, unknown>;
expect(parsed['mcplocalUrl']).toBe('http://localhost:3200');
expect(parsed['mcpdUrl']).toBe('http://localhost:3100');
});
it('outputs config as YAML with --output yaml', async () => {
const cmd = makeCommand();
await cmd.parseAsync(['view', '-o', 'yaml'], { from: 'user' });
expect(output[0]).toContain('mcplocalUrl:');
});
});
describe('config set', () => {
it('sets mcplocalUrl', async () => {
const cmd = makeCommand();
await cmd.parseAsync(['set', 'mcplocalUrl', 'http://new:9000'], { from: 'user' });
expect(output[0]).toContain('mcplocalUrl');
const config = loadConfig({ configDir: tempDir });
expect(config.mcplocalUrl).toBe('http://new:9000');
});
it('sets mcpdUrl', async () => {
const cmd = makeCommand();
await cmd.parseAsync(['set', 'mcpdUrl', 'http://remote:3100'], { from: 'user' });
const config = loadConfig({ configDir: tempDir });
expect(config.mcpdUrl).toBe('http://remote:3100');
});
it('maps daemonUrl to mcplocalUrl for backward compat', async () => {
const cmd = makeCommand();
await cmd.parseAsync(['set', 'daemonUrl', 'http://legacy:3000'], { from: 'user' });
const config = loadConfig({ configDir: tempDir });
expect(config.mcplocalUrl).toBe('http://legacy:3000');
}); });
it('sets cacheTTLMs as integer', async () => {
@@ -87,13 +102,13 @@ describe('config path', () => {
describe('config reset', () => {
it('resets to defaults', async () => {
// First set a custom value
saveConfig({ ...DEFAULT_CONFIG, mcplocalUrl: 'http://custom' }, { configDir: tempDir });
const cmd = makeCommand();
await cmd.parseAsync(['reset'], { from: 'user' });
expect(output[0]).toContain('reset');
const config = loadConfig({ configDir: tempDir });
expect(config.mcplocalUrl).toBe(DEFAULT_CONFIG.mcplocalUrl);
});
});

View File

@@ -4,6 +4,7 @@ import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { createStatusCommand } from '../../src/commands/status.js';
import { saveConfig, DEFAULT_CONFIG } from '../../src/config/index.js';
import { saveCredentials } from '../../src/auth/index.js';
let tempDir: string;
let output: string[];
@@ -25,67 +26,101 @@ describe('status command', () => {
it('shows status in table format', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
await cmd.parseAsync([], { from: 'user' });
const out = output.join('\n');
expect(out).toContain('mcpctl v');
expect(out).toContain('mcplocal:');
expect(out).toContain('mcpd:');
expect(out).toContain('connected');
});
it('shows unreachable when daemons are down', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => false,
});
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('unreachable');
});
it('shows not logged in when no credentials', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('not logged in');
});
it('shows logged in user when credentials exist', async () => {
saveCredentials({ token: 'tok', mcpdUrl: 'http://x:3100', user: 'alice@example.com' }, { configDir: tempDir });
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('logged in as alice@example.com');
});
it('shows status in JSON format', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
await cmd.parseAsync(['-o', 'json'], { from: 'user' });
const parsed = JSON.parse(output[0]) as Record<string, unknown>;
expect(parsed['version']).toBe('0.1.0');
expect(parsed['mcplocalReachable']).toBe(true);
expect(parsed['mcpdReachable']).toBe(true);
});
it('shows status in YAML format', async () => {
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => false,
});
await cmd.parseAsync(['-o', 'yaml'], { from: 'user' });
expect(output[0]).toContain('mcplocalReachable: false');
});
it('checks correct URLs from config', async () => {
saveConfig({ ...DEFAULT_CONFIG, mcplocalUrl: 'http://local:3200', mcpdUrl: 'http://remote:3100' }, { configDir: tempDir });
const checkedUrls: string[] = [];
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async (url) => {
checkedUrls.push(url);
return false;
},
});
await cmd.parseAsync([], { from: 'user' });
expect(checkedUrls).toContain('http://local:3200');
expect(checkedUrls).toContain('http://remote:3100');
});
it('shows registries from config', async () => {
saveConfig({ ...DEFAULT_CONFIG, registries: ['official'] }, { configDir: tempDir });
const cmd = createStatusCommand({
configDeps: { configDir: tempDir },
credentialsDeps: { configDir: tempDir },
log,
checkHealth: async () => true,
});
await cmd.parseAsync([], { from: 'user' });
expect(output.join('\n')).toContain('official');


@@ -28,18 +28,25 @@ describe('loadConfig', () => {
});
it('loads config from file', () => {
saveConfig({ ...DEFAULT_CONFIG, mcplocalUrl: 'http://custom:5000' }, { configDir: tempDir });
const config = loadConfig({ configDir: tempDir });
expect(config.mcplocalUrl).toBe('http://custom:5000');
});
it('applies defaults for missing fields', () => {
const { writeFileSync } = require('node:fs') as typeof import('node:fs');
writeFileSync(join(tempDir, 'config.json'), '{"mcplocalUrl":"http://x:1"}');
const config = loadConfig({ configDir: tempDir });
expect(config.mcplocalUrl).toBe('http://x:1');
expect(config.registries).toEqual(['official', 'glama', 'smithery']);
});
it('backward compat: daemonUrl maps to mcplocalUrl', () => {
const { writeFileSync } = require('node:fs') as typeof import('node:fs');
writeFileSync(join(tempDir, 'config.json'), '{"daemonUrl":"http://old:3000"}');
const config = loadConfig({ configDir: tempDir });
expect(config.mcplocalUrl).toBe('http://old:3000');
});
});
describe('saveConfig', () => {
@@ -57,7 +64,7 @@ describe('saveConfig', () => {
it('round-trips configuration', () => {
const custom = {
...DEFAULT_CONFIG,
mcplocalUrl: 'http://custom:9000',
registries: ['official' as const],
outputFormat: 'json' as const,
};
@@ -70,14 +77,14 @@ describe('saveConfig', () => {
describe('mergeConfig', () => {
it('merges overrides into existing config', () => {
saveConfig(DEFAULT_CONFIG, { configDir: tempDir });
const merged = mergeConfig({ mcplocalUrl: 'http://new:1234' }, { configDir: tempDir });
expect(merged.mcplocalUrl).toBe('http://new:1234');
expect(merged.registries).toEqual(DEFAULT_CONFIG.registries);
});
it('works when no config file exists', () => {
const merged = mergeConfig({ outputFormat: 'yaml' }, { configDir: tempDir });
expect(merged.outputFormat).toBe('yaml');
expect(merged.mcplocalUrl).toBe('http://localhost:3200');
});
});


@@ -4,7 +4,8 @@ import { McpctlConfigSchema, DEFAULT_CONFIG } from '../../src/config/schema.js';
describe('McpctlConfigSchema', () => {
it('provides sensible defaults from empty object', () => {
const config = McpctlConfigSchema.parse({});
expect(config.mcplocalUrl).toBe('http://localhost:3200');
expect(config.mcpdUrl).toBe('http://localhost:3100');
expect(config.registries).toEqual(['official', 'glama', 'smithery']);
expect(config.cacheTTLMs).toBe(3_600_000);
expect(config.outputFormat).toBe('table');
@@ -15,7 +16,8 @@ describe('McpctlConfigSchema', () => {
it('validates a full config', () => {
const config = McpctlConfigSchema.parse({
mcplocalUrl: 'http://local:3200',
mcpdUrl: 'http://custom:4000',
registries: ['official'],
cacheTTLMs: 60_000,
httpProxy: 'http://proxy:8080',
@@ -23,11 +25,26 @@ describe('McpctlConfigSchema', () => {
outputFormat: 'json',
smitheryApiKey: 'sk-test',
});
expect(config.mcplocalUrl).toBe('http://local:3200');
expect(config.mcpdUrl).toBe('http://custom:4000');
expect(config.registries).toEqual(['official']);
expect(config.outputFormat).toBe('json');
});
it('backward compat: maps daemonUrl to mcplocalUrl', () => {
const config = McpctlConfigSchema.parse({ daemonUrl: 'http://legacy:3000' });
expect(config.mcplocalUrl).toBe('http://legacy:3000');
expect(config.mcpdUrl).toBe('http://localhost:3100');
});
it('mcplocalUrl takes precedence over daemonUrl', () => {
const config = McpctlConfigSchema.parse({
daemonUrl: 'http://legacy:3000',
mcplocalUrl: 'http://explicit:3200',
});
expect(config.mcplocalUrl).toBe('http://explicit:3200');
});
it('rejects invalid registry names', () => {
expect(() => McpctlConfigSchema.parse({ registries: ['invalid'] })).toThrow();
});


@@ -12,6 +12,8 @@ describe('CLI command registration (e2e)', () => {
expect(commandNames).toContain('config');
expect(commandNames).toContain('status');
expect(commandNames).toContain('login');
expect(commandNames).toContain('logout');
expect(commandNames).toContain('get');
expect(commandNames).toContain('describe');
expect(commandNames).toContain('instance');


@@ -10,13 +10,14 @@ datasource db {
// ── Users ──
model User {
id String @id @default(cuid())
email String @unique
name String?
passwordHash String
role Role @default(USER)
version Int @default(1)
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
sessions Session[]
auditLogs AuditLog[]


@@ -20,11 +20,13 @@
"@mcpctl/db": "workspace:*",
"@mcpctl/shared": "workspace:*",
"@prisma/client": "^6.0.0",
"bcrypt": "^5.1.1",
"dockerode": "^4.0.9",
"fastify": "^5.0.0",
"zod": "^3.24.0"
},
"devDependencies": {
"@types/bcrypt": "^5.0.2",
"@types/dockerode": "^4.0.1",
"@types/node": "^25.3.0"
}


@@ -21,6 +21,8 @@ import {
HealthAggregator,
BackupService,
RestoreService,
AuthService,
McpProxyService,
} from './services/index.js';
import {
registerMcpServerRoutes,
@@ -30,6 +32,8 @@ import {
registerAuditLogRoutes,
registerHealthMonitoringRoutes,
registerBackupRoutes,
registerAuthRoutes,
registerMcpProxyRoutes,
} from './routes/index.js';
async function main(): Promise<void> {
@@ -64,6 +68,8 @@ async function main(): Promise<void> {
const healthAggregator = new HealthAggregator(metricsCollector, orchestrator);
const backupService = new BackupService(serverRepo, profileRepo, projectRepo);
const restoreService = new RestoreService(serverRepo, profileRepo, projectRepo);
const authService = new AuthService(prisma);
const mcpProxyService = new McpProxyService(instanceRepo);
// Server
const app = await createServer(config, {
@@ -87,6 +93,12 @@ async function main(): Promise<void> {
registerAuditLogRoutes(app, auditLogService);
registerHealthMonitoringRoutes(app, { healthAggregator, metricsCollector });
registerBackupRoutes(app, { backupService, restoreService });
registerAuthRoutes(app, { authService });
registerMcpProxyRoutes(app, {
mcpProxyService,
auditLogService,
authDeps: { findSession: (token) => authService.findSession(token) },
});
// Start
await app.listen({ port: config.port, host: config.host });


@@ -0,0 +1,31 @@
import type { FastifyInstance } from 'fastify';
import type { AuthService } from '../services/auth.service.js';
import { createAuthMiddleware } from '../middleware/auth.js';
export interface AuthRouteDeps {
authService: AuthService;
}
export function registerAuthRoutes(app: FastifyInstance, deps: AuthRouteDeps): void {
const authMiddleware = createAuthMiddleware({
findSession: (token) => deps.authService.findSession(token),
});
// POST /api/v1/auth/login — no auth required
app.post<{
Body: { email: string; password: string };
}>('/api/v1/auth/login', async (request) => {
const { email, password } = request.body;
const result = await deps.authService.login(email, password);
return result;
});
// POST /api/v1/auth/logout — auth required
app.post('/api/v1/auth/logout', { preHandler: [authMiddleware] }, async (request) => {
const header = request.headers.authorization;
// Auth middleware already validated the header; extract the token
const token = header!.slice(7);
await deps.authService.logout(token);
return { success: true };
});
}
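The logout handler above relies on the auth middleware having already validated the `Bearer ` prefix before `header!.slice(7)` runs. The extraction step in isolation, as a hedged sketch (`extractBearerToken` is an illustrative helper, not code from this PR):

```typescript
// Hypothetical helper mirroring the logout handler's token extraction:
// 'Bearer '.length === 7, so slice(7) yields the raw token.
function extractBearerToken(header: string | undefined): string | null {
  if (header === undefined || !header.startsWith('Bearer ')) {
    return null;
  }
  return header.slice(7);
}
```

Guarding explicitly like this keeps the non-null assertion (`header!`) confined to code paths the middleware has already vetted.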


@@ -9,3 +9,7 @@ export { registerHealthMonitoringRoutes } from './health-monitoring.js';
export type { HealthMonitoringDeps } from './health-monitoring.js';
export { registerBackupRoutes } from './backup.js';
export type { BackupDeps } from './backup.js';
export { registerAuthRoutes } from './auth.js';
export type { AuthRouteDeps } from './auth.js';
export { registerMcpProxyRoutes } from './mcp-proxy.js';
export type { McpProxyRouteDeps } from './mcp-proxy.js';


@@ -0,0 +1,37 @@
import type { FastifyInstance } from 'fastify';
import type { McpProxyService } from '../services/mcp-proxy-service.js';
import type { AuditLogService } from '../services/audit-log.service.js';
import { createAuthMiddleware, type AuthDeps } from '../middleware/auth.js';
export interface McpProxyRouteDeps {
mcpProxyService: McpProxyService;
auditLogService: AuditLogService;
authDeps: AuthDeps;
}
export function registerMcpProxyRoutes(app: FastifyInstance, deps: McpProxyRouteDeps): void {
const authMiddleware = createAuthMiddleware(deps.authDeps);
app.post<{
Body: {
serverId: string;
method: string;
params?: Record<string, unknown>;
};
}>('/api/v1/mcp/proxy', { preHandler: [authMiddleware] }, async (request) => {
const { serverId, method, params } = request.body;
const result = await deps.mcpProxyService.execute({ serverId, method, params });
// Log to audit with userId (set by auth middleware)
await deps.auditLogService.create({
userId: request.userId!,
action: 'MCP_PROXY',
resource: 'mcp-server',
resourceId: serverId,
details: { method, hasParams: params !== undefined },
});
return result;
});
}


@@ -0,0 +1,66 @@
import { randomUUID } from 'node:crypto';
import type { PrismaClient } from '@prisma/client';
import bcrypt from 'bcrypt';
/** 30 days in milliseconds */
const SESSION_TTL_MS = 30 * 24 * 60 * 60 * 1000;
export interface LoginResult {
token: string;
expiresAt: Date;
user: { id: string; email: string; role: string };
}
export class AuthenticationError extends Error {
readonly statusCode = 401;
constructor(message: string) {
super(message);
this.name = 'AuthenticationError';
}
}
export class AuthService {
constructor(private readonly prisma: PrismaClient) {}
async login(email: string, password: string): Promise<LoginResult> {
const user = await this.prisma.user.findUnique({ where: { email } });
if (user === null) {
throw new AuthenticationError('Invalid email or password');
}
const valid = await bcrypt.compare(password, user.passwordHash);
if (!valid) {
throw new AuthenticationError('Invalid email or password');
}
const token = randomUUID();
const expiresAt = new Date(Date.now() + SESSION_TTL_MS);
await this.prisma.session.create({
data: {
token,
userId: user.id,
expiresAt,
},
});
return {
token,
expiresAt,
user: { id: user.id, email: user.email, role: user.role },
};
}
async logout(token: string): Promise<void> {
// Delete the session by token; ignore if already deleted
await this.prisma.session.deleteMany({ where: { token } });
}
async findSession(token: string): Promise<{ userId: string; expiresAt: Date } | null> {
const session = await this.prisma.session.findUnique({ where: { token } });
if (session === null) {
return null;
}
return { userId: session.userId, expiresAt: session.expiresAt };
}
}
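Sessions created by `login` expire `SESSION_TTL_MS` after creation, so whether a token is still usable reduces to a timestamp comparison. A minimal sketch (`isSessionValid` is an illustration, not part of the PR):

```typescript
/** 30 days in milliseconds, matching SESSION_TTL_MS in AuthService. */
const SESSION_TTL_MS = 30 * 24 * 60 * 60 * 1000;

// A session is valid while its expiry still lies in the future.
function isSessionValid(expiresAt: Date, now: number = Date.now()): boolean {
  return expiresAt.getTime() > now;
}
```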


@@ -19,3 +19,7 @@ export { BackupService } from './backup/index.js';
export type { BackupBundle, BackupOptions } from './backup/index.js';
export { RestoreService } from './backup/index.js';
export type { RestoreOptions, RestoreResult, ConflictStrategy } from './backup/index.js';
export { AuthService, AuthenticationError } from './auth.service.js';
export type { LoginResult } from './auth.service.js';
export { McpProxyService } from './mcp-proxy-service.js';
export type { McpProxyRequest, McpProxyResponse } from './mcp-proxy-service.js';


@@ -0,0 +1,76 @@
import type { McpInstance } from '@prisma/client';
import type { IMcpInstanceRepository } from '../repositories/interfaces.js';
import { NotFoundError } from './mcp-server.service.js';
import { InvalidStateError } from './instance.service.js';
export interface McpProxyRequest {
serverId: string;
method: string;
params?: Record<string, unknown> | undefined;
}
export interface McpProxyResponse {
jsonrpc: '2.0';
id: number;
result?: unknown;
error?: { code: number; message: string; data?: unknown };
}
export class McpProxyService {
constructor(private readonly instanceRepo: IMcpInstanceRepository) {}
async execute(request: McpProxyRequest): Promise<McpProxyResponse> {
// Find a running instance for this server
const instances = await this.instanceRepo.findAll(request.serverId);
const running = instances.find((i) => i.status === 'RUNNING');
if (!running) {
throw new NotFoundError(`No running instance found for server '${request.serverId}'`);
}
if (running.port === null || running.port === undefined) {
throw new InvalidStateError(
`Running instance '${running.id}' for server '${request.serverId}' has no port assigned`,
);
}
return this.sendJsonRpc(running, request.method, request.params);
}
private async sendJsonRpc(
instance: McpInstance,
method: string,
params?: Record<string, unknown>,
): Promise<McpProxyResponse> {
const url = `http://localhost:${instance.port}`;
const body: Record<string, unknown> = {
jsonrpc: '2.0',
id: 1,
method,
};
if (params !== undefined) {
body.params = params;
}
const response = await fetch(url, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body),
});
if (!response.ok) {
return {
jsonrpc: '2.0',
id: 1,
error: {
code: -32000,
message: `MCP server returned HTTP ${response.status}: ${response.statusText}`,
},
};
}
const result = (await response.json()) as McpProxyResponse;
return result;
}
}
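Note that `sendJsonRpc` always uses request id `1` and omits the `params` key entirely when the caller passed none. The envelope construction in isolation (helper name is illustrative):

```typescript
// Build the JSON-RPC 2.0 envelope as sendJsonRpc does: fixed id, and the
// params key is only present when params were actually supplied.
function buildJsonRpcBody(
  method: string,
  params?: Record<string, unknown>,
): Record<string, unknown> {
  const body: Record<string, unknown> = { jsonrpc: '2.0', id: 1, method };
  if (params !== undefined) {
    body.params = params;
  }
  return body;
}
```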


@@ -1,5 +1,5 @@
{
"name": "@mcpctl/mcplocal",
"version": "0.1.0",
"private": true,
"type": "module",
@@ -14,8 +14,10 @@
"test:run": "vitest run"
},
"dependencies": {
"@fastify/cors": "^10.0.0",
"@mcpctl/shared": "workspace:*",
"@modelcontextprotocol/sdk": "^1.0.0",
"fastify": "^5.0.0"
},
"devDependencies": {
"@types/node": "^25.3.0"


@@ -0,0 +1,39 @@
import type { McpdClient } from './http/mcpd-client.js';
import type { McpRouter } from './router.js';
import { McpdUpstream } from './upstream/mcpd.js';
interface McpdServer {
id: string;
name: string;
transport: string;
status?: string;
}
/**
* Discovers MCP servers from mcpd and registers them as upstreams in the router.
* Called periodically or on demand to keep the router in sync with mcpd.
*/
export async function refreshUpstreams(router: McpRouter, mcpdClient: McpdClient): Promise<string[]> {
const servers = await mcpdClient.get<McpdServer[]>('/api/v1/servers');
const registered: string[] = [];
// Remove stale upstreams
const currentNames = new Set(router.getUpstreamNames());
const serverNames = new Set(servers.map((s) => s.name));
for (const name of currentNames) {
if (!serverNames.has(name)) {
router.removeUpstream(name);
}
}
// Add/update upstreams for each server
for (const server of servers) {
if (!currentNames.has(server.name)) {
const upstream = new McpdUpstream(server.id, server.name, mcpdClient);
router.addUpstream(upstream);
}
registered.push(server.name);
}
return registered;
}
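The reconcile loop above is a set difference in both directions: names in the router that mcpd no longer reports are removed, and reported names not yet in the router are added. The same logic in miniature (assuming plain string lists rather than the real `McpRouter`):

```typescript
// Two-way set difference behind refreshUpstreams' reconcile step.
function diffUpstreams(
  current: string[],
  reported: string[],
): { toRemove: string[]; toAdd: string[] } {
  const currentSet = new Set(current);
  const reportedSet = new Set(reported);
  return {
    toRemove: current.filter((name) => !reportedSet.has(name)),
    toAdd: reported.filter((name) => !currentSet.has(name)),
  };
}
```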


@@ -0,0 +1,8 @@
export { TieredHealthMonitor } from './tiered.js';
export type {
TieredHealthStatus,
TieredHealthMonitorDeps,
McplocalHealth,
McpdHealth,
InstanceHealth,
} from './tiered.js';


@@ -0,0 +1,98 @@
import type { McpdClient } from '../http/mcpd-client.js';
import type { ProviderRegistry } from '../providers/registry.js';
export interface McplocalHealth {
status: 'healthy' | 'degraded';
uptime: number;
llmProvider: string | null;
}
export interface McpdHealth {
status: 'connected' | 'disconnected';
url: string;
}
export interface InstanceHealth {
name: string;
status: string;
}
export interface TieredHealthStatus {
mcplocal: McplocalHealth;
mcpd: McpdHealth;
instances: InstanceHealth[];
}
export interface TieredHealthMonitorDeps {
mcpdClient: McpdClient | null;
providerRegistry: ProviderRegistry;
mcpdUrl: string;
}
/**
* Monitors health across all tiers: mcplocal itself, the mcpd daemon, and MCP server instances.
* Aggregates status from multiple sources into a single TieredHealthStatus.
*/
export class TieredHealthMonitor {
private readonly mcpdClient: McpdClient | null;
private readonly providerRegistry: ProviderRegistry;
private readonly mcpdUrl: string;
private readonly startTime: number;
constructor(deps: TieredHealthMonitorDeps) {
this.mcpdClient = deps.mcpdClient;
this.providerRegistry = deps.providerRegistry;
this.mcpdUrl = deps.mcpdUrl;
this.startTime = Date.now();
}
async checkHealth(): Promise<TieredHealthStatus> {
const [mcpdHealth, instances] = await Promise.all([
this.checkMcpd(),
this.fetchInstances(),
]);
const mcplocalHealth = this.checkMcplocal();
return {
mcplocal: mcplocalHealth,
mcpd: mcpdHealth,
instances,
};
}
private checkMcplocal(): McplocalHealth {
const activeProvider = this.providerRegistry.getActive();
return {
status: 'healthy',
uptime: (Date.now() - this.startTime) / 1000,
llmProvider: activeProvider?.name ?? null,
};
}
private async checkMcpd(): Promise<McpdHealth> {
if (this.mcpdClient === null) {
return { status: 'disconnected', url: this.mcpdUrl };
}
try {
await this.mcpdClient.get<unknown>('/health');
return { status: 'connected', url: this.mcpdUrl };
} catch {
return { status: 'disconnected', url: this.mcpdUrl };
}
}
private async fetchInstances(): Promise<InstanceHealth[]> {
if (this.mcpdClient === null) {
return [];
}
try {
const response = await this.mcpdClient.get<{ instances: InstanceHealth[] }>('/instances');
return response.instances;
} catch {
return [];
}
}
}


@@ -0,0 +1,32 @@
/** Configuration for the mcplocal HTTP server. */
export interface HttpConfig {
/** Port for the HTTP server (default: 3200) */
httpPort: number;
/** Host to bind to (default: 127.0.0.1) */
httpHost: string;
/** URL of the mcpd daemon (default: http://localhost:3100) */
mcpdUrl: string;
/** Bearer token for authenticating with mcpd */
mcpdToken: string;
/** Log level (default: info) */
logLevel: 'fatal' | 'error' | 'warn' | 'info' | 'debug' | 'trace';
}
const DEFAULT_HTTP_PORT = 3200;
const DEFAULT_HTTP_HOST = '127.0.0.1';
const DEFAULT_MCPD_URL = 'http://localhost:3100';
const DEFAULT_MCPD_TOKEN = '';
const DEFAULT_LOG_LEVEL = 'info';
export function loadHttpConfig(env: Record<string, string | undefined> = process.env): HttpConfig {
const portStr = env['MCPLOCAL_HTTP_PORT'];
const port = portStr !== undefined ? parseInt(portStr, 10) : DEFAULT_HTTP_PORT;
return {
httpPort: Number.isFinite(port) ? port : DEFAULT_HTTP_PORT,
httpHost: env['MCPLOCAL_HTTP_HOST'] ?? DEFAULT_HTTP_HOST,
mcpdUrl: env['MCPLOCAL_MCPD_URL'] ?? DEFAULT_MCPD_URL,
mcpdToken: env['MCPLOCAL_MCPD_TOKEN'] ?? DEFAULT_MCPD_TOKEN,
logLevel: (env['MCPLOCAL_LOG_LEVEL'] as HttpConfig['logLevel'] | undefined) ?? DEFAULT_LOG_LEVEL,
};
}
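`loadHttpConfig` guards against a non-numeric `MCPLOCAL_HTTP_PORT` by checking `Number.isFinite` after `parseInt`, so garbage input falls back to the default rather than producing `NaN`. The port logic in isolation (helper name is illustrative):

```typescript
const DEFAULT_HTTP_PORT = 3200;

// Same parse-then-validate fallback loadHttpConfig applies to MCPLOCAL_HTTP_PORT.
function parsePort(portStr: string | undefined): number {
  const port = portStr !== undefined ? parseInt(portStr, 10) : DEFAULT_HTTP_PORT;
  return Number.isFinite(port) ? port : DEFAULT_HTTP_PORT;
}
```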


@@ -0,0 +1,6 @@
export { createHttpServer } from './server.js';
export type { HttpServerDeps } from './server.js';
export { loadHttpConfig } from './config.js';
export type { HttpConfig } from './config.js';
export { McpdClient, AuthenticationError, ConnectionError } from './mcpd-client.js';
export { registerProxyRoutes } from './routes/proxy.js';


@@ -0,0 +1,105 @@
/**
* HTTP client for communicating with the mcpd daemon.
* Wraps fetch calls with auth headers and error handling.
*/
/** Thrown when mcpd returns a 401 Unauthorized response. */
export class AuthenticationError extends Error {
constructor(message = 'Authentication failed: invalid or expired token') {
super(message);
this.name = 'AuthenticationError';
}
}
/** Thrown when mcpd is unreachable (connection refused, DNS failure, etc.). */
export class ConnectionError extends Error {
constructor(url: string, cause?: unknown) {
const msg = `Cannot connect to mcpd at ${url}`;
super(cause instanceof Error ? `${msg}: ${cause.message}` : msg);
this.name = 'ConnectionError';
}
}
export class McpdClient {
private readonly baseUrl: string;
private readonly token: string;
constructor(baseUrl: string, token: string) {
// Strip trailing slash for consistent URL joining
this.baseUrl = baseUrl.replace(/\/+$/, '');
this.token = token;
}
async get<T>(path: string): Promise<T> {
return this.request<T>('GET', path);
}
async post<T>(path: string, body?: unknown): Promise<T> {
return this.request<T>('POST', path, body);
}
async put<T>(path: string, body?: unknown): Promise<T> {
return this.request<T>('PUT', path, body);
}
async delete(path: string): Promise<void> {
await this.request<unknown>('DELETE', path);
}
/**
* Forward a raw request to mcpd. Returns the status code and body
* so the proxy route can relay them directly.
*/
async forward(
method: string,
path: string,
query: string,
body: unknown | undefined,
): Promise<{ status: number; body: unknown }> {
const url = `${this.baseUrl}${path}${query ? `?${query}` : ''}`;
const headers: Record<string, string> = {
'Authorization': `Bearer ${this.token}`,
'Accept': 'application/json',
};
const init: RequestInit = { method, headers };
if (body !== undefined && body !== null && method !== 'GET' && method !== 'HEAD') {
headers['Content-Type'] = 'application/json';
init.body = JSON.stringify(body);
}
let res: Response;
try {
res = await fetch(url, init);
} catch (err: unknown) {
throw new ConnectionError(this.baseUrl, err);
}
if (res.status === 401) {
throw new AuthenticationError();
}
const text = await res.text();
let parsed: unknown;
try {
parsed = JSON.parse(text);
} catch {
parsed = text;
}
return { status: res.status, body: parsed };
}
private async request<T>(method: string, path: string, body?: unknown): Promise<T> {
const result = await this.forward(method, path, '', body);
if (result.status >= 400) {
const detail = typeof result.body === 'object' && result.body !== null
? JSON.stringify(result.body)
: String(result.body);
throw new Error(`mcpd returned ${String(result.status)}: ${detail}`);
}
return result.body as T;
}
}
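Because the constructor strips trailing slashes from `baseUrl`, `forward` can join `${this.baseUrl}${path}` without ever doubling a `/`. The URL assembly in isolation (hypothetical helper):

```typescript
// URL assembly as McpdClient.forward performs it: normalize the base once,
// then concatenate path and optional querystring.
function buildUrl(baseUrl: string, path: string, query: string): string {
  const base = baseUrl.replace(/\/+$/, '');
  return `${base}${path}${query ? `?${query}` : ''}`;
}
```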


@@ -0,0 +1,38 @@
/**
* Catch-all proxy route that forwards /api/v1/* requests to mcpd.
*/
import type { FastifyInstance } from 'fastify';
import { AuthenticationError, ConnectionError } from '../mcpd-client.js';
import type { McpdClient } from '../mcpd-client.js';
export function registerProxyRoutes(app: FastifyInstance, client: McpdClient): void {
app.all('/api/v1/*', async (request, reply) => {
const path = (request.url.split('?')[0]) ?? '/';
const querystring = request.url.includes('?')
? request.url.slice(request.url.indexOf('?') + 1)
: '';
const body = request.method !== 'GET' && request.method !== 'HEAD'
? (request.body as unknown)
: undefined;
try {
const result = await client.forward(request.method, path, querystring, body);
return reply.code(result.status).send(result.body);
} catch (err: unknown) {
if (err instanceof AuthenticationError) {
return reply.code(401).send({
error: 'unauthorized',
message: 'Authentication with mcpd failed. Run `mcpctl login` to refresh your token.',
});
}
if (err instanceof ConnectionError) {
return reply.code(503).send({
error: 'service_unavailable',
message: 'Cannot reach mcpd daemon. Is it running?',
});
}
throw err;
}
});
}
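The route splits `request.url` by hand into a path and a raw querystring before forwarding. The same split as a standalone function (name is illustrative):

```typescript
// Split a path-relative URL into path and raw querystring, as the
// catch-all proxy route does before forwarding to mcpd.
function splitUrl(url: string): { path: string; query: string } {
  const path = url.split('?')[0] ?? '/';
  const query = url.includes('?') ? url.slice(url.indexOf('?') + 1) : '';
  return { path, query };
}
```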


@@ -0,0 +1,85 @@
import Fastify from 'fastify';
import type { FastifyInstance } from 'fastify';
import cors from '@fastify/cors';
import { APP_VERSION } from '@mcpctl/shared';
import type { HttpConfig } from './config.js';
import { McpdClient } from './mcpd-client.js';
import { registerProxyRoutes } from './routes/proxy.js';
import type { McpRouter } from '../router.js';
import type { HealthMonitor } from '../health.js';
import type { TieredHealthMonitor } from '../health/tiered.js';
export interface HttpServerDeps {
router: McpRouter;
healthMonitor?: HealthMonitor | undefined;
tieredHealthMonitor?: TieredHealthMonitor | undefined;
}
export async function createHttpServer(
config: HttpConfig,
deps: HttpServerDeps,
): Promise<FastifyInstance> {
const app = Fastify({
logger: {
level: config.logLevel,
},
});
await app.register(cors, {
origin: true,
methods: ['GET', 'POST', 'PUT', 'DELETE', 'PATCH'],
});
// Health endpoint
app.get('/health', async (_request, reply) => {
const upstreams = deps.router.getUpstreamNames();
const healthStatuses = deps.healthMonitor
? deps.healthMonitor.getAllStatuses()
: undefined;
// Include tiered summary if available
let tieredSummary: { mcpd: string; llmProvider: string | null } | undefined;
if (deps.tieredHealthMonitor) {
const tiered = await deps.tieredHealthMonitor.checkHealth();
tieredSummary = {
mcpd: tiered.mcpd.status,
llmProvider: tiered.mcplocal.llmProvider,
};
}
reply.code(200).send({
status: 'healthy',
version: APP_VERSION,
uptime: process.uptime(),
timestamp: new Date().toISOString(),
upstreams: upstreams.length,
mcpdUrl: config.mcpdUrl,
...(healthStatuses !== undefined ? { health: healthStatuses } : {}),
...(tieredSummary !== undefined ? { tiered: tieredSummary } : {}),
});
});
// Detailed tiered health endpoint
app.get('/health/detailed', async (_request, reply) => {
if (!deps.tieredHealthMonitor) {
reply.code(503).send({
error: 'Tiered health monitor not configured',
});
return;
}
const status = await deps.tieredHealthMonitor.checkHealth();
reply.code(200).send(status);
});
// Liveness probe
app.get('/healthz', async (_request, reply) => {
reply.code(200).send({ status: 'ok' });
});
// Proxy management routes to mcpd
const mcpdClient = new McpdClient(config.mcpdUrl, config.mcpdToken);
registerProxyRoutes(app, mcpdClient);
return app;
}

View File

@@ -4,10 +4,15 @@ export { StdioProxyServer } from './server.js';
export { StdioUpstream, HttpUpstream } from './upstream/index.js';
export { HealthMonitor } from './health.js';
export type { HealthState, HealthStatus, HealthMonitorOptions } from './health.js';
export { TieredHealthMonitor } from './health/index.js';
export type { TieredHealthStatus, TieredHealthMonitorDeps, McplocalHealth, McpdHealth, InstanceHealth } from './health/index.js';
export { main } from './main.js';
export type { MainResult } from './main.js';
export { ProviderRegistry } from './providers/index.js';
export type { LlmProvider, CompletionOptions, CompletionResult, ChatMessage } from './providers/index.js';
export { OpenAiProvider, AnthropicProvider, OllamaProvider, GeminiCliProvider, DeepSeekProvider } from './providers/index.js';
export { createHttpServer, loadHttpConfig, McpdClient, AuthenticationError, ConnectionError, registerProxyRoutes } from './http/index.js';
export type { HttpConfig, HttpServerDeps } from './http/index.js';
export type {
  JsonRpcRequest,
  JsonRpcResponse,

View File

@@ -0,0 +1,96 @@
/**
 * LRU cache for filter decisions.
 *
 * Caches, per tool name, whether that tool's responses should be
 * filtered by the LLM pipeline. Avoids redundant LLM calls for repeated
 * queries against the same tool.
 */
export interface FilterCacheConfig {
/** Maximum number of entries in the cache (default 256) */
maxEntries: number;
/** TTL in milliseconds for cache entries (default 3_600_000 = 1 hour) */
ttlMs: number;
}
export const DEFAULT_FILTER_CACHE_CONFIG: FilterCacheConfig = {
maxEntries: 256,
ttlMs: 3_600_000,
};
interface CacheEntry {
shouldFilter: boolean;
createdAt: number;
}
/**
* Simple LRU cache for filter decisions keyed by tool name.
*
* Uses a Map to maintain insertion order for LRU eviction.
* No external dependencies.
*/
export class FilterCache {
private cache = new Map<string, CacheEntry>();
private readonly config: FilterCacheConfig;
constructor(config: Partial<FilterCacheConfig> = {}) {
this.config = { ...DEFAULT_FILTER_CACHE_CONFIG, ...config };
}
/**
* Look up a cached filter decision.
*
* @param toolName - The MCP tool name.
* @returns `true`/`false` if a cached decision exists, or `null` if no valid entry.
*/
shouldFilter(toolName: string): boolean | null {
const entry = this.cache.get(toolName);
if (!entry) return null;
// Check TTL expiration
if (Date.now() - entry.createdAt > this.config.ttlMs) {
this.cache.delete(toolName);
return null;
}
// Move to end for LRU freshness
this.cache.delete(toolName);
this.cache.set(toolName, entry);
return entry.shouldFilter;
}
/**
* Record a filter decision in the cache.
*
* @param toolName - The MCP tool name.
* @param shouldFilter - Whether the response should be filtered.
*/
recordDecision(toolName: string, shouldFilter: boolean): void {
// If already present, remove to refresh position
this.cache.delete(toolName);
// Evict oldest entry if at capacity
if (this.cache.size >= this.config.maxEntries) {
const oldest = this.cache.keys().next();
if (!oldest.done) {
this.cache.delete(oldest.value);
}
}
this.cache.set(toolName, {
shouldFilter,
createdAt: Date.now(),
});
}
/** Clear all cached entries. */
clear(): void {
this.cache.clear();
}
/** Number of entries currently in the cache. */
get size(): number {
return this.cache.size;
}
}
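The eviction semantics above (Map insertion order as the LRU, a timestamp per entry for TTL) can be exercised with a minimal standalone sketch; the class is re-declared inline so the snippet runs on its own, and the tool names are illustrative:

```typescript
// Minimal re-sketch of the decision cache: Map insertion order gives us
// LRU ordering, a createdAt timestamp per entry gives us TTL.
class DecisionCache {
  private cache = new Map<string, { shouldFilter: boolean; createdAt: number }>();
  constructor(private maxEntries = 256, private ttlMs = 3_600_000) {}

  shouldFilter(tool: string): boolean | null {
    const entry = this.cache.get(tool);
    if (!entry) return null;
    if (Date.now() - entry.createdAt > this.ttlMs) {
      this.cache.delete(tool); // expired
      return null;
    }
    // Re-insert to mark the entry as most recently used.
    this.cache.delete(tool);
    this.cache.set(tool, entry);
    return entry.shouldFilter;
  }

  recordDecision(tool: string, shouldFilter: boolean): void {
    this.cache.delete(tool);
    if (this.cache.size >= this.maxEntries) {
      // The oldest entry is first in Map iteration order.
      const oldest = this.cache.keys().next();
      if (!oldest.done) this.cache.delete(oldest.value);
    }
    this.cache.set(tool, { shouldFilter, createdAt: Date.now() });
  }

  get size(): number { return this.cache.size; }
}

// With capacity 2: touching 'slack/search' makes it most recently used,
// so inserting a third tool evicts 'jira/list' instead.
const cache = new DecisionCache(2);
cache.recordDecision('slack/search', true);
cache.recordDecision('jira/list', false);
cache.shouldFilter('slack/search');
cache.recordDecision('gh/issues', true); // evicts 'jira/list'
```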

View File

@@ -0,0 +1,8 @@
export { LlmProcessor, DEFAULT_PROCESSOR_CONFIG } from './processor.js';
export type { LlmProcessorConfig, ProcessedRequest, FilteredResponse } from './processor.js';
export { RESPONSE_FILTER_SYSTEM_PROMPT, REQUEST_OPTIMIZATION_SYSTEM_PROMPT } from './prompts.js';
export { estimateTokens } from './token-counter.js';
export { FilterCache, DEFAULT_FILTER_CACHE_CONFIG } from './filter-cache.js';
export type { FilterCacheConfig } from './filter-cache.js';
export { FilterMetrics } from './metrics.js';
export type { FilterMetricsSnapshot } from './metrics.js';

View File

@@ -0,0 +1,83 @@
/**
* Metrics tracking for the LLM filter pipeline.
*
* Records token savings, cache efficiency, and filter latency to enable
* observability of the smart context optimization layer.
*/
export interface FilterMetricsSnapshot {
/** Total estimated tokens that entered the filter pipeline */
totalTokensProcessed: number;
/** Estimated tokens saved by filtering */
tokensSaved: number;
/** Number of cache hits (filter decision reused) */
cacheHits: number;
/** Number of cache misses (required fresh decision) */
cacheMisses: number;
/** Number of filter operations performed */
filterCount: number;
/** Average filter latency in milliseconds (0 if no operations) */
averageFilterLatencyMs: number;
}
/**
 * Accumulates metrics for the LLM filter pipeline.
 *
 * Designed for single-threaded Node.js usage, so no synchronization is
 * needed. Call `getStats()` to retrieve a snapshot of current metrics.
 */
export class FilterMetrics {
private totalTokensProcessed = 0;
private tokensSaved = 0;
private cacheHits = 0;
private cacheMisses = 0;
private filterCount = 0;
private totalFilterLatencyMs = 0;
/**
* Record a single filter operation.
*
* @param originalTokens - Estimated tokens before filtering.
* @param filteredTokens - Estimated tokens after filtering.
* @param latencyMs - Time taken for the filter operation in ms.
*/
recordFilter(originalTokens: number, filteredTokens: number, latencyMs: number): void {
this.totalTokensProcessed += originalTokens;
this.tokensSaved += Math.max(0, originalTokens - filteredTokens);
this.filterCount++;
this.totalFilterLatencyMs += latencyMs;
}
/** Record a cache hit. */
recordCacheHit(): void {
this.cacheHits++;
}
/** Record a cache miss. */
recordCacheMiss(): void {
this.cacheMisses++;
}
/** Return a snapshot of all accumulated metrics. */
getStats(): FilterMetricsSnapshot {
return {
totalTokensProcessed: this.totalTokensProcessed,
tokensSaved: this.tokensSaved,
cacheHits: this.cacheHits,
cacheMisses: this.cacheMisses,
filterCount: this.filterCount,
averageFilterLatencyMs:
this.filterCount > 0 ? this.totalFilterLatencyMs / this.filterCount : 0,
};
}
/** Reset all metrics to zero. */
reset(): void {
this.totalTokensProcessed = 0;
this.tokensSaved = 0;
this.cacheHits = 0;
this.cacheMisses = 0;
this.filterCount = 0;
this.totalFilterLatencyMs = 0;
}
}
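The `Math.max(0, …)` guard in `recordFilter` matters when a filter pass actually grows the payload; a minimal inline sketch of the snapshot arithmetic (sample numbers are fabricated):

```typescript
// Inline re-sketch of the accumulator arithmetic behind getStats().
let totalTokens = 0, saved = 0, count = 0, latency = 0;
function recordFilter(orig: number, filtered: number, ms: number): void {
  totalTokens += orig;
  saved += Math.max(0, orig - filtered); // never negative, even if filtering grew the payload
  count++;
  latency += ms;
}

recordFilter(1000, 300, 40); // large response, well compressed
recordFilter(400, 450, 20);  // filter made it larger: counts as 0 saved

const snapshot = {
  totalTokensProcessed: totalTokens,
  tokensSaved: saved,
  averageFilterLatencyMs: count > 0 ? latency / count : 0,
};
```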

View File

@@ -0,0 +1,231 @@
import type { ProviderRegistry } from '../providers/registry.js';
import type { JsonRpcResponse } from '../types.js';
import { RESPONSE_FILTER_SYSTEM_PROMPT, REQUEST_OPTIMIZATION_SYSTEM_PROMPT } from './prompts.js';
import { estimateTokens } from './token-counter.js';
import { FilterCache } from './filter-cache.js';
import type { FilterCacheConfig } from './filter-cache.js';
import { FilterMetrics } from './metrics.js';
export interface LlmProcessorConfig {
/** Enable request preprocessing */
enablePreprocessing: boolean;
/** Enable response filtering */
enableFiltering: boolean;
/** Tool name patterns to skip (matched against namespaced name) */
excludeTools: string[];
/** Max tokens for LLM calls */
maxTokens: number;
/** Token threshold below which responses skip LLM filtering (default 250 tokens ~ 1000 chars) */
tokenThreshold: number;
/** Filter cache configuration (optional; omit to use defaults) */
filterCache?: FilterCacheConfig | undefined;
}
export const DEFAULT_PROCESSOR_CONFIG: LlmProcessorConfig = {
enablePreprocessing: false,
enableFiltering: true,
excludeTools: [],
maxTokens: 1024,
tokenThreshold: 250,
};
export interface ProcessedRequest {
optimized: boolean;
params: Record<string, unknown>;
}
export interface FilteredResponse {
filtered: boolean;
result: unknown;
originalSize: number;
filteredSize: number;
}
/**
* LLM pre-processing pipeline. Intercepts MCP tool calls and uses a local
* LLM to optimize requests and filter responses, reducing token usage for
* the upstream Claude model.
*
* Includes smart context optimization:
* - Token-based thresholds to skip filtering small responses
* - LRU cache for filter decisions on repeated tool calls
* - Metrics tracking for observability
*/
export class LlmProcessor {
private readonly filterCache: FilterCache;
private readonly metrics: FilterMetrics;
constructor(
private providers: ProviderRegistry,
private config: LlmProcessorConfig = DEFAULT_PROCESSOR_CONFIG,
) {
this.filterCache = new FilterCache(config.filterCache);
this.metrics = new FilterMetrics();
}
/** Methods that should never be preprocessed (protocol-level or simple CRUD) */
private static readonly BYPASS_METHODS = new Set([
'initialize',
'tools/list',
'resources/list',
'prompts/list',
'prompts/get',
'resources/subscribe',
'resources/unsubscribe',
]);
/** Simple operations that don't benefit from preprocessing */
private static readonly SIMPLE_OPERATIONS = new Set([
'create', 'delete', 'remove', 'subscribe', 'unsubscribe',
]);
shouldProcess(method: string, toolName?: string): boolean {
if (LlmProcessor.BYPASS_METHODS.has(method)) return false;
if (!toolName) return false;
// Check exclude list
if (this.config.excludeTools.some((pattern) => toolName.includes(pattern))) {
return false;
}
// Skip simple CRUD operations
const baseName = toolName.includes('/') ? toolName.split('/').pop()! : toolName;
for (const op of LlmProcessor.SIMPLE_OPERATIONS) {
if (baseName.startsWith(op)) return false;
}
return true;
}
/**
* Optimize request parameters using the active LLM provider.
* Falls back to original params if LLM is unavailable or fails.
*/
async preprocessRequest(toolName: string, params: Record<string, unknown>): Promise<ProcessedRequest> {
if (!this.config.enablePreprocessing) {
return { optimized: false, params };
}
const provider = this.providers.getActive();
if (!provider) {
return { optimized: false, params };
}
try {
const result = await provider.complete({
messages: [
{ role: 'system', content: REQUEST_OPTIMIZATION_SYSTEM_PROMPT },
{ role: 'user', content: `Tool: ${toolName}\nParameters: ${JSON.stringify(params)}` },
],
maxTokens: this.config.maxTokens,
temperature: 0,
});
const optimized = JSON.parse(result.content) as Record<string, unknown>;
return { optimized: true, params: optimized };
} catch {
// LLM failed, fall through to original params
return { optimized: false, params };
}
}
/**
* Filter/summarize a tool response using the active LLM provider.
* Falls back to original response if LLM is unavailable or fails.
*
* Uses token-based thresholds and an LRU filter cache to skip unnecessary
* LLM calls. Records metrics for every filter operation.
*/
async filterResponse(toolName: string, response: JsonRpcResponse): Promise<FilteredResponse> {
if (!this.config.enableFiltering) {
const raw = JSON.stringify(response.result);
return { filtered: false, result: response.result, originalSize: raw.length, filteredSize: raw.length };
}
const provider = this.providers.getActive();
if (!provider) {
const raw = JSON.stringify(response.result);
return { filtered: false, result: response.result, originalSize: raw.length, filteredSize: raw.length };
}
// Don't filter error responses
if (response.error) {
return { filtered: false, result: response.result, originalSize: 0, filteredSize: 0 };
}
const raw = JSON.stringify(response.result);
const tokens = estimateTokens(raw);
// Skip small responses below the token threshold
if (tokens < this.config.tokenThreshold) {
return { filtered: false, result: response.result, originalSize: raw.length, filteredSize: raw.length };
}
// Check filter cache for a prior decision on this tool
const cachedDecision = this.filterCache.shouldFilter(toolName);
if (cachedDecision !== null) {
this.metrics.recordCacheHit();
if (!cachedDecision) {
// Previously decided not to filter this tool's responses
return { filtered: false, result: response.result, originalSize: raw.length, filteredSize: raw.length };
}
} else {
this.metrics.recordCacheMiss();
}
const startTime = performance.now();
try {
const result = await provider.complete({
messages: [
{ role: 'system', content: RESPONSE_FILTER_SYSTEM_PROMPT },
{ role: 'user', content: `Tool: ${toolName}\nResponse (${raw.length} chars):\n${raw}` },
],
maxTokens: this.config.maxTokens,
temperature: 0,
});
const filtered = JSON.parse(result.content) as unknown;
const filteredStr = JSON.stringify(filtered);
const filteredTokens = estimateTokens(filteredStr);
const latencyMs = performance.now() - startTime;
this.metrics.recordFilter(tokens, filteredTokens, latencyMs);
// Cache the decision: if filtering actually saved tokens, remember to filter
const didSave = filteredStr.length < raw.length;
this.filterCache.recordDecision(toolName, didSave);
return {
filtered: true,
result: filtered,
originalSize: raw.length,
filteredSize: filteredStr.length,
};
} catch {
const latencyMs = performance.now() - startTime;
this.metrics.recordFilter(tokens, tokens, latencyMs);
// LLM failed — cache as "don't filter" to avoid repeated failures
this.filterCache.recordDecision(toolName, false);
// LLM failed, return original
return { filtered: false, result: response.result, originalSize: raw.length, filteredSize: raw.length };
}
}
/** Return a snapshot of filter pipeline metrics. */
getMetrics(): ReturnType<FilterMetrics['getStats']> {
return this.metrics.getStats();
}
/** Reset all metrics. */
resetMetrics(): void {
this.metrics.reset();
}
/** Clear the filter decision cache. */
clearFilterCache(): void {
this.filterCache.clear();
}
}
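The `shouldProcess` gate above reads as a pure function of method name, tool name, and exclude patterns. A condensed standalone sketch (the bypass and simple-operation sets are trimmed for brevity; tool names are illustrative):

```typescript
// Condensed sketch of the shouldProcess gate: bypass protocol methods,
// honor exclude patterns, skip simple CRUD-prefixed tools.
const BYPASS = new Set(['initialize', 'tools/list', 'resources/list', 'prompts/list']);
const SIMPLE = ['create', 'delete', 'remove', 'subscribe', 'unsubscribe'];

function shouldProcess(method: string, toolName: string | undefined, excludeTools: string[]): boolean {
  if (BYPASS.has(method)) return false;
  if (!toolName) return false;
  if (excludeTools.some((p) => toolName.includes(p))) return false;
  // Strip the server namespace before checking CRUD prefixes.
  const base = toolName.includes('/') ? toolName.split('/').pop()! : toolName;
  return !SIMPLE.some((op) => base.startsWith(op));
}
```

Only the last case below reaches the LLM pipeline: protocol methods, CRUD-prefixed tools, and excluded patterns are all routed directly.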

View File

@@ -0,0 +1,21 @@
/**
* System prompts for the LLM pre-processing pipeline.
*/
export const RESPONSE_FILTER_SYSTEM_PROMPT = `You are a data filtering assistant. Your job is to extract only the relevant information from MCP tool responses.
Rules:
- Remove redundant or verbose fields that aren't useful to the user's query
- Keep essential identifiers, names, statuses, and key metrics
- Preserve error messages and warnings in full
- If the response is already concise, return it unchanged
- Output valid JSON only, no markdown or explanations
- If you cannot parse the input, return it unchanged`;
export const REQUEST_OPTIMIZATION_SYSTEM_PROMPT = `You are a query optimization assistant. Your job is to optimize MCP tool call parameters.
Rules:
- Add appropriate filters or limits if the query is too broad
- Keep the original intent of the request
- Output valid JSON with the optimized parameters only, no markdown or explanations
- If no optimization is needed, return the original parameters unchanged`;

View File

@@ -0,0 +1,18 @@
/**
* Simple token estimation utility.
*
* Uses a heuristic of ~4 characters per token, which is a reasonable
* approximation for English text and JSON payloads. For more accurate
* counting, a tokenizer like tiktoken could be used instead.
*/
/**
* Estimate the number of tokens in a text string.
*
* @param text - The input text to estimate tokens for.
* @returns Estimated token count (minimum 0).
*/
export function estimateTokens(text: string): number {
if (text.length === 0) return 0;
return Math.ceil(text.length / 4);
}
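With this heuristic, the processor's default `tokenThreshold` of 250 corresponds to roughly 1000 characters of serialized JSON. The function is re-declared here so the snippet is standalone:

```typescript
// Same ~4 chars/token heuristic as above: ceil(length / 4).
function estimateTokens(text: string): number {
  if (text.length === 0) return 0;
  return Math.ceil(text.length / 4);
}

// A 1000-char payload sits exactly at the default 250-token threshold,
// so anything shorter skips the LLM filter entirely.
const tokens = estimateTokens('x'.repeat(1000)); // 250
```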

View File

@@ -1,14 +1,26 @@
#!/usr/bin/env node
import { readFileSync } from 'node:fs';
import type { FastifyInstance } from 'fastify';
import type { ProxyConfig, UpstreamConfig } from './types.js';
import { McpRouter } from './router.js';
import { StdioProxyServer } from './server.js';
import { StdioUpstream } from './upstream/stdio.js';
import { HttpUpstream } from './upstream/http.js';
import { createHttpServer } from './http/server.js';
import { loadHttpConfig } from './http/config.js';
import type { HttpConfig } from './http/config.js';
interface ParsedArgs {
  configPath: string | undefined;
  upstreams: string[];
  noHttp: boolean;
}
function parseArgs(argv: string[]): ParsedArgs {
  let configPath: string | undefined;
  const upstreams: string[] = [];
  let noHttp = false;
  for (let i = 2; i < argv.length; i++) {
    const arg = argv[i];
    if (arg === '--config' && i + 1 < argv.length) {
@@ -19,9 +31,11 @@ function parseArgs(argv: string[]): { configPath: string | undefined; upstreams:
      upstreams.push(argv[++i]!);
    } else if (arg?.startsWith('--upstream=')) {
      upstreams.push(arg.slice('--upstream='.length));
    } else if (arg === '--no-http') {
      noHttp = true;
    }
  }
  return { configPath, upstreams, noHttp };
}
function loadConfig(configPath: string): ProxyConfig {
@@ -36,8 +50,16 @@ function createUpstream(config: UpstreamConfig) {
  return new HttpUpstream(config);
}
export interface MainResult {
  router: McpRouter;
  server: StdioProxyServer;
  httpServer: FastifyInstance | undefined;
  httpConfig: HttpConfig;
}
export async function main(argv: string[] = process.argv): Promise<MainResult> {
  const args = parseArgs(argv);
  const httpConfig = loadHttpConfig();
  let upstreamConfigs: UpstreamConfig[] = [];
@@ -85,10 +107,29 @@ export async function main(argv: string[] = process.argv): Promise<{ router: Mcp
    router.addUpstream(upstream);
  }
  // Start stdio proxy server
  const server = new StdioProxyServer(router);
  server.start();
  process.stderr.write(`mcpctl-proxy started with ${upstreamConfigs.length} upstream(s)\n`);
  // Start HTTP server unless disabled
  let httpServer: FastifyInstance | undefined;
  if (!args.noHttp) {
    httpServer = await createHttpServer(httpConfig, { router });
    await httpServer.listen({ port: httpConfig.httpPort, host: httpConfig.httpHost });
    process.stderr.write(`mcpctl-proxy HTTP server listening on ${httpConfig.httpHost}:${httpConfig.httpPort}\n`);
  }
  // Graceful shutdown
  let shuttingDown = false;
  const shutdown = async () => {
    if (shuttingDown) return;
    shuttingDown = true;
    server.stop();
    if (httpServer) {
      await httpServer.close();
    }
    await router.closeAll();
    process.exit(0);
  };
@@ -96,10 +137,7 @@ export async function main(argv: string[] = process.argv): Promise<{ router: Mcp
  process.on('SIGTERM', () => void shutdown());
  process.on('SIGINT', () => void shutdown());
  return { router, server, httpServer, httpConfig };
}
// Run when executed directly
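The flag grammar above can be sketched as a standalone parser. The `--config` value consumption is elided by the hunk boundary, so its body here is an assumption, and `stdio:echo` is an illustrative upstream spec, not one from this PR:

```typescript
// Standalone sketch of the parseArgs flag grammar: --config <path>,
// --upstream <spec> or --upstream=<spec>, and the --no-http toggle.
interface ParsedArgs { configPath: string | undefined; upstreams: string[]; noHttp: boolean }

function parseArgs(argv: string[]): ParsedArgs {
  let configPath: string | undefined;
  const upstreams: string[] = [];
  let noHttp = false;
  // Skip the first two argv entries (node binary, script path).
  for (let i = 2; i < argv.length; i++) {
    const arg = argv[i];
    if (arg === '--config' && i + 1 < argv.length) {
      configPath = argv[++i]!; // assumed body of the hunk-elided branch
    } else if (arg === '--upstream' && i + 1 < argv.length) {
      upstreams.push(argv[++i]!);
    } else if (arg?.startsWith('--upstream=')) {
      upstreams.push(arg.slice('--upstream='.length));
    } else if (arg === '--no-http') {
      noHttp = true;
    }
  }
  return { configPath, upstreams, noHttp };
}

const parsed = parseArgs(['node', 'mcplocal', '--config', 'proxy.json', '--upstream=stdio:echo', '--no-http']);
```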

View File

@@ -0,0 +1,191 @@
import https from 'node:https';
import type { LlmProvider, CompletionOptions, CompletionResult, ChatMessage, ToolCall } from './types.js';
export interface DeepSeekConfig {
apiKey: string;
baseUrl?: string;
defaultModel?: string;
}
interface DeepSeekMessage {
role: string;
content: string | null;
tool_calls?: Array<{
id: string;
type: 'function';
function: { name: string; arguments: string };
}>;
tool_call_id?: string;
name?: string;
}
/**
* DeepSeek provider using the OpenAI-compatible chat completions API.
* Endpoint: https://api.deepseek.com/v1/chat/completions
*/
export class DeepSeekProvider implements LlmProvider {
readonly name = 'deepseek';
private apiKey: string;
private baseUrl: string;
private defaultModel: string;
constructor(config: DeepSeekConfig) {
this.apiKey = config.apiKey;
this.baseUrl = (config.baseUrl ?? 'https://api.deepseek.com').replace(/\/$/, '');
this.defaultModel = config.defaultModel ?? 'deepseek-chat';
}
async complete(options: CompletionOptions): Promise<CompletionResult> {
const model = options.model ?? this.defaultModel;
const body: Record<string, unknown> = {
model,
messages: options.messages.map(toDeepSeekMessage),
};
if (options.temperature !== undefined) body.temperature = options.temperature;
if (options.maxTokens !== undefined) body.max_tokens = options.maxTokens;
if (options.tools && options.tools.length > 0) {
body.tools = options.tools.map((t) => ({
type: 'function',
function: {
name: t.name,
description: t.description,
parameters: t.inputSchema,
},
}));
}
const response = await this.request('/v1/chat/completions', body);
return parseResponse(response);
}
async listModels(): Promise<string[]> {
// DeepSeek doesn't have a public models listing endpoint;
// return well-known models.
return [
'deepseek-chat',
'deepseek-reasoner',
];
}
async isAvailable(): Promise<boolean> {
if (!this.apiKey) return false;
try {
// Send a minimal request to verify the API key
await this.complete({
messages: [{ role: 'user', content: 'hi' }],
maxTokens: 1,
});
return true;
} catch {
return false;
}
}
private request(path: string, body: unknown, method = 'POST'): Promise<unknown> {
return new Promise((resolve, reject) => {
const url = new URL(path, this.baseUrl);
const payload = body !== undefined ? JSON.stringify(body) : undefined;
const opts = {
hostname: url.hostname,
port: url.port || 443,
path: url.pathname,
method,
timeout: 120000,
headers: {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
...(payload ? { 'Content-Length': Buffer.byteLength(payload) } : {}),
},
};
const req = https.request(opts, (res) => {
const chunks: Buffer[] = [];
res.on('data', (chunk: Buffer) => chunks.push(chunk));
res.on('end', () => {
const raw = Buffer.concat(chunks).toString('utf-8');
          // Handle rate limiting
          if (res.statusCode === 429) {
            const retryAfter = res.headers['retry-after'];
            reject(new Error(`DeepSeek rate limit exceeded${retryAfter ? `. Retry after ${retryAfter}s` : ''}`));
            return;
          }
          // Surface other HTTP error statuses so callers (e.g. isAvailable) observe the failure
          if (res.statusCode !== undefined && res.statusCode >= 400) {
            reject(new Error(`DeepSeek ${path} returned HTTP ${res.statusCode}: ${raw.slice(0, 200)}`));
            return;
          }
try {
resolve(JSON.parse(raw));
} catch {
reject(new Error(`Invalid JSON response from DeepSeek ${path}: ${raw.slice(0, 200)}`));
}
});
});
req.on('error', reject);
req.on('timeout', () => {
req.destroy();
reject(new Error('DeepSeek request timed out'));
});
if (payload) req.write(payload);
req.end();
});
}
}
function toDeepSeekMessage(msg: ChatMessage): DeepSeekMessage {
const result: DeepSeekMessage = {
role: msg.role,
content: msg.content,
};
if (msg.toolCallId !== undefined) result.tool_call_id = msg.toolCallId;
if (msg.name !== undefined) result.name = msg.name;
return result;
}
function parseResponse(raw: unknown): CompletionResult {
const data = raw as {
choices?: Array<{
message?: {
content?: string | null;
tool_calls?: Array<{
id: string;
function: { name: string; arguments: string };
}>;
};
finish_reason?: string;
}>;
usage?: {
prompt_tokens?: number;
completion_tokens?: number;
total_tokens?: number;
};
};
const choice = data.choices?.[0];
const toolCalls: ToolCall[] = (choice?.message?.tool_calls ?? []).map((tc) => ({
id: tc.id,
name: tc.function.name,
arguments: safeParse(tc.function.arguments),
}));
const finishReason = choice?.finish_reason === 'tool_calls' ? 'tool_calls' as const
: choice?.finish_reason === 'length' ? 'length' as const
: 'stop' as const;
return {
content: choice?.message?.content ?? '',
toolCalls,
usage: {
promptTokens: data.usage?.prompt_tokens ?? 0,
completionTokens: data.usage?.completion_tokens ?? 0,
totalTokens: data.usage?.total_tokens ?? 0,
},
finishReason,
};
}
function safeParse(json: string): Record<string, unknown> {
try {
return JSON.parse(json) as Record<string, unknown>;
} catch {
return {};
}
}
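The response normalization above boils down to two defaults: empty string for missing content, and `'stop'` for any unrecognized finish reason. A condensed standalone version of that mapping, fed a fabricated sample reply in the usual `/v1/chat/completions` shape:

```typescript
// Condensed sketch of the OpenAI-compatible response mapping.
interface Result { content: string; finishReason: 'stop' | 'length' | 'tool_calls' }

function mapFinish(reason: string | undefined): Result['finishReason'] {
  return reason === 'tool_calls' ? 'tool_calls' : reason === 'length' ? 'length' : 'stop';
}

function parse(data: { choices?: Array<{ message?: { content?: string | null }; finish_reason?: string }> }): Result {
  const choice = data.choices?.[0];
  // Missing content defaults to '', missing finish_reason defaults to 'stop'.
  return { content: choice?.message?.content ?? '', finishReason: mapFinish(choice?.finish_reason) };
}

const result = parse({ choices: [{ message: { content: 'hello' }, finish_reason: 'length' }] });
```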

View File

@@ -0,0 +1,113 @@
import { spawn, execFile } from 'node:child_process';
import { promisify } from 'node:util';
import type { LlmProvider, CompletionOptions, CompletionResult } from './types.js';
const execFileAsync = promisify(execFile);
export interface GeminiCliConfig {
binaryPath?: string;
defaultModel?: string;
timeoutMs?: number;
}
/**
* Gemini CLI provider. Spawns the `gemini` binary in non-interactive mode
* using the -p (--prompt) flag and captures stdout.
*
* Note: This provider does not support tool calls since the CLI interface
* only returns text output. toolCalls will always be an empty array.
*/
export class GeminiCliProvider implements LlmProvider {
readonly name = 'gemini-cli';
private binaryPath: string;
private defaultModel: string;
private timeoutMs: number;
constructor(config?: GeminiCliConfig) {
this.binaryPath = config?.binaryPath ?? 'gemini';
this.defaultModel = config?.defaultModel ?? 'gemini-2.5-flash';
this.timeoutMs = config?.timeoutMs ?? 30000;
}
async complete(options: CompletionOptions): Promise<CompletionResult> {
const model = options.model ?? this.defaultModel;
// Build prompt from messages
const prompt = options.messages
.map((m) => {
if (m.role === 'system') return `System: ${m.content}`;
if (m.role === 'user') return m.content;
if (m.role === 'assistant') return `Assistant: ${m.content}`;
return m.content;
})
.join('\n\n');
const args = ['-p', prompt, '-m', model, '-o', 'text'];
const content = await this.spawn(args);
return {
content: content.trim(),
toolCalls: [],
usage: {
promptTokens: 0,
completionTokens: 0,
totalTokens: 0,
},
finishReason: 'stop',
};
}
async listModels(): Promise<string[]> {
// The Gemini CLI does not expose a model listing command;
// return well-known models.
return [
'gemini-2.5-flash',
'gemini-2.5-pro',
'gemini-2.0-flash',
];
}
async isAvailable(): Promise<boolean> {
try {
await execFileAsync(this.binaryPath, ['--version'], { timeout: 5000 });
return true;
} catch {
return false;
}
}
private spawn(args: string[]): Promise<string> {
return new Promise((resolve, reject) => {
const child = spawn(this.binaryPath, args, {
stdio: ['ignore', 'pipe', 'pipe'],
timeout: this.timeoutMs,
});
const stdoutChunks: Buffer[] = [];
const stderrChunks: Buffer[] = [];
child.stdout.on('data', (chunk: Buffer) => stdoutChunks.push(chunk));
child.stderr.on('data', (chunk: Buffer) => stderrChunks.push(chunk));
child.on('error', (err) => {
if ((err as NodeJS.ErrnoException).code === 'ENOENT') {
reject(new Error(`Gemini CLI binary not found at '${this.binaryPath}'. Install with: npm install -g @google/gemini-cli`));
} else {
reject(err);
}
});
child.on('close', (code) => {
const stdout = Buffer.concat(stdoutChunks).toString('utf-8');
if (code === 0) {
resolve(stdout);
} else {
const stderr = Buffer.concat(stderrChunks).toString('utf-8');
reject(new Error(`Gemini CLI exited with code ${code}: ${stderr.slice(0, 500)}`));
}
});
});
}
}
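Since the CLI takes a single prompt string, the chat history is flattened with role prefixes before being passed to `gemini -p`. A standalone sketch of that flattening (sample messages are fabricated):

```typescript
// Standalone sketch of how chat messages are flattened into the single
// prompt string passed via the -p flag.
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

function buildPrompt(messages: Msg[]): string {
  return messages
    .map((m) => {
      if (m.role === 'system') return `System: ${m.content}`;
      if (m.role === 'assistant') return `Assistant: ${m.content}`;
      return m.content; // user messages pass through unprefixed
    })
    .join('\n\n');
}

const prompt = buildPrompt([
  { role: 'system', content: 'Filter JSON.' },
  { role: 'user', content: '{"a":1}' },
]);
// -> 'System: Filter JSON.\n\n{"a":1}'
```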

View File

@@ -5,4 +5,8 @@ export { AnthropicProvider } from './anthropic.js';
export type { AnthropicConfig } from './anthropic.js';
export { OllamaProvider } from './ollama.js';
export type { OllamaConfig } from './ollama.js';
export { GeminiCliProvider } from './gemini-cli.js';
export type { GeminiCliConfig } from './gemini-cli.js';
export { DeepSeekProvider } from './deepseek.js';
export type { DeepSeekConfig } from './deepseek.js';
export { ProviderRegistry } from './registry.js';

View File

@@ -1,4 +1,5 @@
import type { UpstreamConnection, JsonRpcRequest, JsonRpcResponse, JsonRpcNotification } from './types.js';
import type { LlmProcessor } from './llm/processor.js';
/**
 * Routes MCP requests to the appropriate upstream server.
@@ -15,6 +16,11 @@ export class McpRouter {
  private resourceToServer = new Map<string, string>();
  private promptToServer = new Map<string, string>();
  private notificationHandler: ((notification: JsonRpcNotification) => void) | null = null;
  private llmProcessor: LlmProcessor | null = null;
  setLlmProcessor(processor: LlmProcessor): void {
    this.llmProcessor = processor;
  }
  addUpstream(connection: UpstreamConnection): void {
    this.upstreams.set(connection.name, connection);
@@ -247,7 +253,7 @@
      }
      case 'tools/call':
        return this.routeToolCall(request);
      case 'resources/list': {
        const resources = await this.discoverResources();
@@ -286,6 +292,37 @@
    }
  }
/**
* Route a tools/call request, optionally applying LLM pre/post-processing.
*/
private async routeToolCall(request: JsonRpcRequest): Promise<JsonRpcResponse> {
const params = request.params as Record<string, unknown> | undefined;
const toolName = params?.['name'] as string | undefined;
// If no processor or tool shouldn't be processed, route directly
if (!this.llmProcessor || !toolName || !this.llmProcessor.shouldProcess('tools/call', toolName)) {
return this.routeNamespacedCall(request, 'name', this.toolToServer);
}
// Preprocess request params
const toolParams = (params?.['arguments'] ?? {}) as Record<string, unknown>;
const processed = await this.llmProcessor.preprocessRequest(toolName, toolParams);
const processedRequest: JsonRpcRequest = processed.optimized
? { ...request, params: { ...params, arguments: processed.params } }
: request;
// Route to upstream
const response = await this.routeNamespacedCall(processedRequest, 'name', this.toolToServer);
// Filter response
if (response.error) return response;
const filtered = await this.llmProcessor.filterResponse(toolName, response);
if (filtered.filtered) {
return { ...response, result: filtered.result };
}
return response;
}
  getUpstreamNames(): string[] {
    return [...this.upstreams.keys()];
  }

View File

@@ -1,2 +1,3 @@
export { StdioUpstream } from './stdio.js';
export { HttpUpstream } from './http.js';
export { McpdUpstream } from './mcpd.js';

View File

@@ -0,0 +1,68 @@
import type { UpstreamConnection, JsonRpcRequest, JsonRpcResponse } from '../types.js';
import type { McpdClient } from '../http/mcpd-client.js';
interface McpdProxyRequest {
serverId: string;
method: string;
params?: Record<string, unknown> | undefined;
}
interface McpdProxyResponse {
result?: unknown;
error?: { code: number; message: string; data?: unknown };
}
/**
* An upstream that routes MCP requests through mcpd's /api/v1/mcp/proxy endpoint.
* mcpd holds the credentials and manages the actual MCP server connections.
*/
export class McpdUpstream implements UpstreamConnection {
readonly name: string;
private alive = true;
constructor(
private serverId: string,
serverName: string,
private mcpdClient: McpdClient,
) {
this.name = serverName;
}
async send(request: JsonRpcRequest): Promise<JsonRpcResponse> {
if (!this.alive) {
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32603, message: `Upstream '${this.name}' is closed` },
};
}
const proxyRequest: McpdProxyRequest = {
serverId: this.serverId,
method: request.method,
params: request.params,
};
try {
const result = await this.mcpdClient.post<McpdProxyResponse>('/api/v1/mcp/proxy', proxyRequest);
if (result.error) {
return { jsonrpc: '2.0', id: request.id, error: result.error };
}
return { jsonrpc: '2.0', id: request.id, result: result.result };
} catch (err) {
return {
jsonrpc: '2.0',
id: request.id,
error: { code: -32603, message: `mcpd proxy error: ${(err as Error).message}` },
};
}
}
async close(): Promise<void> {
this.alive = false;
}
isAlive(): boolean {
return this.alive;
}
}

View File

@@ -0,0 +1,68 @@
import { describe, it, expect, vi } from 'vitest';
import { refreshUpstreams } from '../src/discovery.js';
import { McpRouter } from '../src/router.js';
function mockMcpdClient(servers: Array<{ id: string; name: string; transport: string }>) {
return {
baseUrl: 'http://test:3100',
token: 'test-token',
get: vi.fn(async () => servers),
post: vi.fn(async () => ({ result: {} })),
put: vi.fn(),
delete: vi.fn(),
forward: vi.fn(),
};
}
describe('refreshUpstreams', () => {
it('registers mcpd servers as upstreams', async () => {
const router = new McpRouter();
const client = mockMcpdClient([
{ id: 'srv-1', name: 'slack', transport: 'stdio' },
{ id: 'srv-2', name: 'github', transport: 'stdio' },
]);
const registered = await refreshUpstreams(router, client as any);
expect(registered).toEqual(['slack', 'github']);
expect(router.getUpstreamNames()).toContain('slack');
expect(router.getUpstreamNames()).toContain('github');
});
it('removes stale upstreams', async () => {
const router = new McpRouter();
// First refresh: 2 servers
const client1 = mockMcpdClient([
{ id: 'srv-1', name: 'slack', transport: 'stdio' },
{ id: 'srv-2', name: 'github', transport: 'stdio' },
]);
await refreshUpstreams(router, client1 as any);
expect(router.getUpstreamNames()).toHaveLength(2);
// Second refresh: only 1 server
const client2 = mockMcpdClient([
{ id: 'srv-1', name: 'slack', transport: 'stdio' },
]);
await refreshUpstreams(router, client2 as any);
expect(router.getUpstreamNames()).toEqual(['slack']);
});
it('does not duplicate existing upstreams', async () => {
const router = new McpRouter();
const client = mockMcpdClient([
{ id: 'srv-1', name: 'slack', transport: 'stdio' },
]);
await refreshUpstreams(router, client as any);
await refreshUpstreams(router, client as any);
expect(router.getUpstreamNames()).toEqual(['slack']);
});
it('handles empty server list', async () => {
const router = new McpRouter();
const client = mockMcpdClient([]);
const registered = await refreshUpstreams(router, client as any);
expect(registered).toEqual([]);
expect(router.getUpstreamNames()).toHaveLength(0);
});
});

View File

@@ -0,0 +1,112 @@
import { describe, it, expect, vi, afterEach } from 'vitest';
import { FilterCache, DEFAULT_FILTER_CACHE_CONFIG } from '../src/llm/filter-cache.js';
describe('FilterCache', () => {
afterEach(() => {
vi.restoreAllMocks();
});
it('returns null for unknown tool names', () => {
const cache = new FilterCache();
expect(cache.shouldFilter('unknown/tool')).toBeNull();
});
it('stores and retrieves filter decisions', () => {
const cache = new FilterCache();
cache.recordDecision('slack/search', true);
expect(cache.shouldFilter('slack/search')).toBe(true);
cache.recordDecision('github/list_repos', false);
expect(cache.shouldFilter('github/list_repos')).toBe(false);
});
it('updates existing entries on re-record', () => {
const cache = new FilterCache();
cache.recordDecision('slack/search', true);
expect(cache.shouldFilter('slack/search')).toBe(true);
cache.recordDecision('slack/search', false);
expect(cache.shouldFilter('slack/search')).toBe(false);
});
it('evicts oldest entry when at capacity', () => {
const cache = new FilterCache({ maxEntries: 3 });
cache.recordDecision('tool-a', true);
cache.recordDecision('tool-b', false);
cache.recordDecision('tool-c', true);
expect(cache.size).toBe(3);
// Adding a 4th should evict 'tool-a' (oldest)
cache.recordDecision('tool-d', false);
expect(cache.size).toBe(3);
expect(cache.shouldFilter('tool-a')).toBeNull();
expect(cache.shouldFilter('tool-b')).toBe(false);
expect(cache.shouldFilter('tool-d')).toBe(false);
});
it('refreshes LRU position on access', () => {
const cache = new FilterCache({ maxEntries: 3 });
cache.recordDecision('tool-a', true);
cache.recordDecision('tool-b', false);
cache.recordDecision('tool-c', true);
// Access tool-a to refresh it
cache.shouldFilter('tool-a');
// Now add tool-d — tool-b should be evicted (oldest unreferenced)
cache.recordDecision('tool-d', false);
expect(cache.shouldFilter('tool-a')).toBe(true);
expect(cache.shouldFilter('tool-b')).toBeNull();
});
it('expires entries after TTL', () => {
const now = Date.now();
vi.spyOn(Date, 'now').mockReturnValue(now);
const cache = new FilterCache({ ttlMs: 1000 });
cache.recordDecision('slack/search', true);
expect(cache.shouldFilter('slack/search')).toBe(true);
// Advance time past TTL
vi.spyOn(Date, 'now').mockReturnValue(now + 1001);
expect(cache.shouldFilter('slack/search')).toBeNull();
// Entry should be removed
expect(cache.size).toBe(0);
});
it('does not expire entries within TTL', () => {
const now = Date.now();
vi.spyOn(Date, 'now').mockReturnValue(now);
const cache = new FilterCache({ ttlMs: 1000 });
cache.recordDecision('slack/search', true);
// Advance time within TTL
vi.spyOn(Date, 'now').mockReturnValue(now + 999);
expect(cache.shouldFilter('slack/search')).toBe(true);
});
it('clears all entries', () => {
const cache = new FilterCache();
cache.recordDecision('tool-a', true);
cache.recordDecision('tool-b', false);
expect(cache.size).toBe(2);
cache.clear();
expect(cache.size).toBe(0);
expect(cache.shouldFilter('tool-a')).toBeNull();
});
it('uses default config values', () => {
const cache = new FilterCache();
// Should support the default number of entries without issue
for (let i = 0; i < DEFAULT_FILTER_CACHE_CONFIG.maxEntries; i++) {
cache.recordDecision(`tool-${i}`, true);
}
expect(cache.size).toBe(DEFAULT_FILTER_CACHE_CONFIG.maxEntries);
// One more should trigger eviction
cache.recordDecision('extra-tool', true);
expect(cache.size).toBe(DEFAULT_FILTER_CACHE_CONFIG.maxEntries);
});
});

File diff suppressed because it is too large

View File

@@ -0,0 +1,283 @@
import { describe, it, expect, vi } from 'vitest';
import { LlmProcessor, DEFAULT_PROCESSOR_CONFIG } from '../src/llm/processor.js';
import { ProviderRegistry } from '../src/providers/registry.js';
import type { LlmProvider, CompletionResult } from '../src/providers/types.js';
function mockProvider(responses: string[]): LlmProvider {
let callIndex = 0;
return {
name: 'mock',
async complete(): Promise<CompletionResult> {
const content = responses[callIndex] ?? '{}';
callIndex++;
return {
content,
toolCalls: [],
usage: { promptTokens: 10, completionTokens: 5, totalTokens: 15 },
finishReason: 'stop',
};
},
async listModels() { return ['mock-1']; },
async isAvailable() { return true; },
};
}
function makeRegistry(provider?: LlmProvider): ProviderRegistry {
const registry = new ProviderRegistry();
if (provider) {
registry.register(provider);
}
return registry;
}
describe('LlmProcessor.shouldProcess', () => {
it('bypasses protocol-level methods', () => {
const proc = new LlmProcessor(makeRegistry());
expect(proc.shouldProcess('initialize')).toBe(false);
expect(proc.shouldProcess('tools/list')).toBe(false);
expect(proc.shouldProcess('resources/list')).toBe(false);
expect(proc.shouldProcess('prompts/list')).toBe(false);
});
it('returns false when no tool name', () => {
const proc = new LlmProcessor(makeRegistry());
expect(proc.shouldProcess('tools/call')).toBe(false);
});
it('returns true for normal tool calls', () => {
const proc = new LlmProcessor(makeRegistry());
expect(proc.shouldProcess('tools/call', 'slack/search_messages')).toBe(true);
});
it('skips excluded tools', () => {
const proc = new LlmProcessor(makeRegistry(), {
...DEFAULT_PROCESSOR_CONFIG,
excludeTools: ['slack'],
});
expect(proc.shouldProcess('tools/call', 'slack/search_messages')).toBe(false);
expect(proc.shouldProcess('tools/call', 'github/search')).toBe(true);
});
it('skips simple CRUD operations', () => {
const proc = new LlmProcessor(makeRegistry());
expect(proc.shouldProcess('tools/call', 'slack/create_channel')).toBe(false);
expect(proc.shouldProcess('tools/call', 'slack/delete_message')).toBe(false);
expect(proc.shouldProcess('tools/call', 'slack/remove_user')).toBe(false);
});
});
describe('LlmProcessor.preprocessRequest', () => {
it('returns original params when preprocessing disabled', async () => {
const proc = new LlmProcessor(makeRegistry(mockProvider(['{}'])), {
...DEFAULT_PROCESSOR_CONFIG,
enablePreprocessing: false,
});
const result = await proc.preprocessRequest('slack/search', { query: 'test' });
expect(result.optimized).toBe(false);
expect(result.params).toEqual({ query: 'test' });
});
it('returns original params when no provider', async () => {
const proc = new LlmProcessor(makeRegistry(), {
...DEFAULT_PROCESSOR_CONFIG,
enablePreprocessing: true,
});
const result = await proc.preprocessRequest('slack/search', { query: 'test' });
expect(result.optimized).toBe(false);
});
it('optimizes params with LLM', async () => {
const provider = mockProvider([JSON.stringify({ query: 'test', limit: 10 })]);
const proc = new LlmProcessor(makeRegistry(provider), {
...DEFAULT_PROCESSOR_CONFIG,
enablePreprocessing: true,
});
const result = await proc.preprocessRequest('slack/search', { query: 'test' });
expect(result.optimized).toBe(true);
expect(result.params).toEqual({ query: 'test', limit: 10 });
});
it('falls back on LLM error', async () => {
const badProvider: LlmProvider = {
name: 'bad',
async complete() { throw new Error('LLM down'); },
async listModels() { return []; },
async isAvailable() { return false; },
};
const proc = new LlmProcessor(makeRegistry(badProvider), {
...DEFAULT_PROCESSOR_CONFIG,
enablePreprocessing: true,
});
const result = await proc.preprocessRequest('slack/search', { query: 'test' });
expect(result.optimized).toBe(false);
expect(result.params).toEqual({ query: 'test' });
});
});
describe('LlmProcessor.filterResponse', () => {
it('returns original when filtering disabled', async () => {
const proc = new LlmProcessor(makeRegistry(mockProvider([])), {
...DEFAULT_PROCESSOR_CONFIG,
enableFiltering: false,
});
const response = { jsonrpc: '2.0' as const, id: '1', result: { data: 'big' } };
const result = await proc.filterResponse('slack/search', response);
expect(result.filtered).toBe(false);
});
it('returns original when no provider', async () => {
const proc = new LlmProcessor(makeRegistry());
const response = { jsonrpc: '2.0' as const, id: '1', result: { data: 'x'.repeat(600) } };
const result = await proc.filterResponse('slack/search', response);
expect(result.filtered).toBe(false);
});
it('skips small responses below token threshold', async () => {
const proc = new LlmProcessor(makeRegistry(mockProvider([])));
// With default tokenThreshold=250, any response < 1000 chars (~250 tokens) is skipped
const response = { jsonrpc: '2.0' as const, id: '1', result: { data: 'small' } };
const result = await proc.filterResponse('slack/search', response);
expect(result.filtered).toBe(false);
});
it('skips error responses', async () => {
const proc = new LlmProcessor(makeRegistry(mockProvider([])));
const response = { jsonrpc: '2.0' as const, id: '1', error: { code: -1, message: 'fail' } };
const result = await proc.filterResponse('slack/search', response);
expect(result.filtered).toBe(false);
});
it('filters large responses with LLM', async () => {
const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, name: `item-${i}`, extra: 'x'.repeat(20) })) };
const filteredData = { items: [{ id: 0, name: 'item-0' }, { id: 1, name: 'item-1' }] };
const provider = mockProvider([JSON.stringify(filteredData)]);
const proc = new LlmProcessor(makeRegistry(provider));
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
const result = await proc.filterResponse('slack/search', response);
expect(result.filtered).toBe(true);
expect(result.filteredSize).toBeLessThan(result.originalSize);
});
it('falls back on LLM error', async () => {
const badProvider: LlmProvider = {
name: 'bad',
async complete() { throw new Error('LLM down'); },
async listModels() { return []; },
async isAvailable() { return false; },
};
const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, extra: 'x'.repeat(20) })) };
const proc = new LlmProcessor(makeRegistry(badProvider));
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
const result = await proc.filterResponse('slack/search', response);
expect(result.filtered).toBe(false);
expect(result.result).toEqual(largeData);
});
it('respects custom tokenThreshold', async () => {
// Set a very high threshold so that even "big" responses are skipped
const proc = new LlmProcessor(makeRegistry(mockProvider([])), {
...DEFAULT_PROCESSOR_CONFIG,
tokenThreshold: 10_000,
});
const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, name: `item-${i}` })) };
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
const result = await proc.filterResponse('slack/search', response);
expect(result.filtered).toBe(false);
});
it('uses filter cache to skip repeated filtering', async () => {
    // The LLM returns a payload of the same size, so filtering yields no savings
    // and the cache records shouldFilter=false for this tool.
    const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, extra: 'x'.repeat(20) })) };
    const notSmaller = JSON.stringify(largeData);
    const provider = mockProvider([notSmaller]);
const proc = new LlmProcessor(makeRegistry(provider));
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
// First call goes to LLM
await proc.filterResponse('slack/search', response);
// Second call should hit cache (shouldFilter=false) and skip LLM
const result2 = await proc.filterResponse('slack/search', response);
expect(result2.filtered).toBe(false);
const metrics = proc.getMetrics();
expect(metrics.cacheHits).toBeGreaterThanOrEqual(1);
});
it('records metrics on filter operations', async () => {
const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, name: `item-${i}`, extra: 'x'.repeat(20) })) };
const filteredData = { items: [{ id: 0, name: 'item-0' }] };
const provider = mockProvider([JSON.stringify(filteredData)]);
const proc = new LlmProcessor(makeRegistry(provider));
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
await proc.filterResponse('slack/search', response);
const metrics = proc.getMetrics();
expect(metrics.filterCount).toBe(1);
expect(metrics.totalTokensProcessed).toBeGreaterThan(0);
expect(metrics.tokensSaved).toBeGreaterThan(0);
expect(metrics.cacheMisses).toBe(1);
});
it('records metrics even on LLM failure', async () => {
const badProvider: LlmProvider = {
name: 'bad',
async complete() { throw new Error('LLM down'); },
async listModels() { return []; },
async isAvailable() { return false; },
};
const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, extra: 'x'.repeat(20) })) };
const proc = new LlmProcessor(makeRegistry(badProvider));
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
await proc.filterResponse('slack/search', response);
const metrics = proc.getMetrics();
expect(metrics.filterCount).toBe(1);
expect(metrics.totalTokensProcessed).toBeGreaterThan(0);
// No tokens saved because filter failed
expect(metrics.tokensSaved).toBe(0);
});
});
describe('LlmProcessor metrics and cache management', () => {
it('exposes metrics via getMetrics()', () => {
const proc = new LlmProcessor(makeRegistry());
const metrics = proc.getMetrics();
expect(metrics.totalTokensProcessed).toBe(0);
expect(metrics.filterCount).toBe(0);
});
it('resets metrics', async () => {
const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, extra: 'x'.repeat(20) })) };
const provider = mockProvider([JSON.stringify({ summary: 'ok' })]);
const proc = new LlmProcessor(makeRegistry(provider));
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
await proc.filterResponse('slack/search', response);
expect(proc.getMetrics().filterCount).toBe(1);
proc.resetMetrics();
expect(proc.getMetrics().filterCount).toBe(0);
});
it('clears filter cache', async () => {
const largeData = { items: Array.from({ length: 50 }, (_, i) => ({ id: i, extra: 'x'.repeat(20) })) };
const filteredData = { items: [{ id: 0 }] };
// Two responses needed: first call filters, second call after cache clear also filters
const provider = mockProvider([JSON.stringify(filteredData), JSON.stringify(filteredData)]);
const proc = new LlmProcessor(makeRegistry(provider));
const response = { jsonrpc: '2.0' as const, id: '1', result: largeData };
await proc.filterResponse('slack/search', response);
proc.clearFilterCache();
// After clearing cache, should get a cache miss again
proc.resetMetrics();
await proc.filterResponse('slack/search', response);
expect(proc.getMetrics().cacheMisses).toBe(1);
});
});

View File

@@ -0,0 +1,110 @@
import { describe, it, expect, vi } from 'vitest';
import { McpdUpstream } from '../src/upstream/mcpd.js';
import type { JsonRpcRequest } from '../src/types.js';
function mockMcpdClient(responses: Map<string, unknown> = new Map()) {
return {
baseUrl: 'http://test:3100',
token: 'test-token',
get: vi.fn(),
post: vi.fn(async (_path: string, body: unknown) => {
const req = body as { serverId: string; method: string };
const key = `${req.serverId}:${req.method}`;
if (responses.has(key)) {
return responses.get(key);
}
return { result: { ok: true } };
}),
put: vi.fn(),
delete: vi.fn(),
forward: vi.fn(),
};
}
describe('McpdUpstream', () => {
it('sends tool calls via mcpd proxy', async () => {
const client = mockMcpdClient(new Map([
['srv-1:tools/call', { result: { content: [{ type: 'text', text: 'hello' }] } }],
]));
const upstream = new McpdUpstream('srv-1', 'slack', client as any);
const request: JsonRpcRequest = {
jsonrpc: '2.0',
id: '1',
method: 'tools/call',
params: { name: 'search', arguments: { query: 'test' } },
};
const response = await upstream.send(request);
expect(response.result).toEqual({ content: [{ type: 'text', text: 'hello' }] });
expect(client.post).toHaveBeenCalledWith('/api/v1/mcp/proxy', {
serverId: 'srv-1',
method: 'tools/call',
params: { name: 'search', arguments: { query: 'test' } },
});
});
it('sends tools/list via mcpd proxy', async () => {
const client = mockMcpdClient(new Map([
['srv-1:tools/list', { result: { tools: [{ name: 'search', description: 'Search' }] } }],
]));
const upstream = new McpdUpstream('srv-1', 'slack', client as any);
const request: JsonRpcRequest = {
jsonrpc: '2.0',
id: '2',
method: 'tools/list',
};
const response = await upstream.send(request);
expect(response.result).toEqual({ tools: [{ name: 'search', description: 'Search' }] });
});
it('returns error when mcpd fails', async () => {
const client = mockMcpdClient();
client.post.mockRejectedValue(new Error('connection refused'));
const upstream = new McpdUpstream('srv-1', 'slack', client as any);
const request: JsonRpcRequest = { jsonrpc: '2.0', id: '3', method: 'tools/list' };
const response = await upstream.send(request);
expect(response.error).toBeDefined();
expect(response.error!.message).toContain('mcpd proxy error');
});
it('returns error when upstream is closed', async () => {
const client = mockMcpdClient();
const upstream = new McpdUpstream('srv-1', 'slack', client as any);
await upstream.close();
const request: JsonRpcRequest = { jsonrpc: '2.0', id: '4', method: 'tools/list' };
const response = await upstream.send(request);
expect(response.error).toBeDefined();
expect(response.error!.message).toContain('closed');
});
it('reports alive status correctly', async () => {
const client = mockMcpdClient();
const upstream = new McpdUpstream('srv-1', 'slack', client as any);
expect(upstream.isAlive()).toBe(true);
await upstream.close();
expect(upstream.isAlive()).toBe(false);
});
it('relays error responses from mcpd', async () => {
const client = mockMcpdClient(new Map([
['srv-1:tools/call', { error: { code: -32601, message: 'Tool not found' } }],
]));
const upstream = new McpdUpstream('srv-1', 'slack', client as any);
const request: JsonRpcRequest = {
jsonrpc: '2.0',
id: '5',
method: 'tools/call',
params: { name: 'nonexistent' },
};
const response = await upstream.send(request);
expect(response.error).toEqual({ code: -32601, message: 'Tool not found' });
});
});

View File

@@ -0,0 +1,93 @@
import { describe, it, expect } from 'vitest';
import { FilterMetrics } from '../src/llm/metrics.js';
describe('FilterMetrics', () => {
it('starts with zeroed stats', () => {
const m = new FilterMetrics();
const stats = m.getStats();
expect(stats.totalTokensProcessed).toBe(0);
expect(stats.tokensSaved).toBe(0);
expect(stats.cacheHits).toBe(0);
expect(stats.cacheMisses).toBe(0);
expect(stats.filterCount).toBe(0);
expect(stats.averageFilterLatencyMs).toBe(0);
});
it('records filter operations and accumulates tokens', () => {
const m = new FilterMetrics();
m.recordFilter(500, 200, 50);
m.recordFilter(300, 100, 30);
const stats = m.getStats();
expect(stats.totalTokensProcessed).toBe(800);
expect(stats.tokensSaved).toBe(500); // (500-200) + (300-100)
expect(stats.filterCount).toBe(2);
expect(stats.averageFilterLatencyMs).toBe(40); // (50+30)/2
});
it('does not allow negative token savings', () => {
const m = new FilterMetrics();
// Filtered output is larger than original (edge case)
m.recordFilter(100, 200, 10);
const stats = m.getStats();
expect(stats.totalTokensProcessed).toBe(100);
expect(stats.tokensSaved).toBe(0); // clamped to 0
});
it('records cache hits and misses independently', () => {
const m = new FilterMetrics();
m.recordCacheHit();
m.recordCacheHit();
m.recordCacheMiss();
const stats = m.getStats();
expect(stats.cacheHits).toBe(2);
expect(stats.cacheMisses).toBe(1);
});
it('computes average latency correctly', () => {
const m = new FilterMetrics();
m.recordFilter(100, 50, 10);
m.recordFilter(100, 50, 20);
m.recordFilter(100, 50, 30);
expect(m.getStats().averageFilterLatencyMs).toBe(20);
});
it('returns 0 average latency when no filter operations', () => {
const m = new FilterMetrics();
// Only cache operations, no filter calls
m.recordCacheHit();
expect(m.getStats().averageFilterLatencyMs).toBe(0);
});
it('resets all metrics to zero', () => {
const m = new FilterMetrics();
m.recordFilter(500, 200, 50);
m.recordCacheHit();
m.recordCacheMiss();
m.reset();
const stats = m.getStats();
expect(stats.totalTokensProcessed).toBe(0);
expect(stats.tokensSaved).toBe(0);
expect(stats.cacheHits).toBe(0);
expect(stats.cacheMisses).toBe(0);
expect(stats.filterCount).toBe(0);
expect(stats.averageFilterLatencyMs).toBe(0);
});
it('returns independent snapshots', () => {
const m = new FilterMetrics();
m.recordFilter(100, 50, 10);
const snap1 = m.getStats();
m.recordFilter(200, 100, 20);
const snap2 = m.getStats();
// snap1 should not have been mutated
expect(snap1.totalTokensProcessed).toBe(100);
expect(snap2.totalTokensProcessed).toBe(300);
});
});
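
The tests above fully constrain the metrics accumulator's behavior. As a reference, a minimal `FilterMetrics` that those tests would accept could look like the following sketch (hypothetical, since the actual `src/llm/metrics.ts` is suppressed in this diff):

```typescript
// Minimal FilterMetrics consistent with the tests above (sketch only).
interface FilterStats {
  totalTokensProcessed: number;
  tokensSaved: number;
  cacheHits: number;
  cacheMisses: number;
  filterCount: number;
  averageFilterLatencyMs: number;
}

export class FilterMetrics {
  private totalTokensProcessed = 0;
  private tokensSaved = 0;
  private cacheHits = 0;
  private cacheMisses = 0;
  private filterCount = 0;
  private totalLatencyMs = 0;

  recordFilter(originalTokens: number, filteredTokens: number, latencyMs: number): void {
    this.totalTokensProcessed += originalTokens;
    // Clamp to 0 so a filter that grows the payload never reports negative savings.
    this.tokensSaved += Math.max(0, originalTokens - filteredTokens);
    this.filterCount += 1;
    this.totalLatencyMs += latencyMs;
  }

  recordCacheHit(): void { this.cacheHits += 1; }
  recordCacheMiss(): void { this.cacheMisses += 1; }

  getStats(): FilterStats {
    // Return a fresh object so earlier snapshots are never mutated.
    return {
      totalTokensProcessed: this.totalTokensProcessed,
      tokensSaved: this.tokensSaved,
      cacheHits: this.cacheHits,
      cacheMisses: this.cacheMisses,
      filterCount: this.filterCount,
      averageFilterLatencyMs: this.filterCount === 0 ? 0 : this.totalLatencyMs / this.filterCount,
    };
  }

  reset(): void {
    this.totalTokensProcessed = 0;
    this.tokensSaved = 0;
    this.cacheHits = 0;
    this.cacheMisses = 0;
    this.filterCount = 0;
    this.totalLatencyMs = 0;
  }
}
```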

View File

@@ -0,0 +1,304 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { TieredHealthMonitor } from '../src/health/tiered.js';
import type { TieredHealthMonitorDeps } from '../src/health/tiered.js';
import type { McpdClient } from '../src/http/mcpd-client.js';
import { ProviderRegistry } from '../src/providers/registry.js';
import type { LlmProvider } from '../src/providers/types.js';
function mockMcpdClient(overrides?: {
getResult?: unknown;
getFails?: boolean;
instancesResult?: { instances: Array<{ name: string; status: string }> };
instancesFails?: boolean;
}): McpdClient {
const client = {
get: vi.fn(async (path: string) => {
if (path === '/health') {
if (overrides?.getFails) {
throw new Error('Connection refused');
}
return overrides?.getResult ?? { status: 'ok' };
}
if (path === '/instances') {
if (overrides?.instancesFails) {
throw new Error('Connection refused');
}
return overrides?.instancesResult ?? { instances: [] };
}
return {};
}),
post: vi.fn(),
put: vi.fn(),
delete: vi.fn(),
forward: vi.fn(),
} as unknown as McpdClient;
return client;
}
function mockLlmProvider(name: string): LlmProvider {
return {
name,
complete: vi.fn(),
listModels: vi.fn(async () => []),
isAvailable: vi.fn(async () => true),
};
}
describe('TieredHealthMonitor', () => {
let providerRegistry: ProviderRegistry;
beforeEach(() => {
providerRegistry = new ProviderRegistry();
});
describe('mcplocal health', () => {
it('reports healthy status with uptime', async () => {
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcplocal.status).toBe('healthy');
expect(result.mcplocal.uptime).toBeGreaterThanOrEqual(0);
});
it('reports null llmProvider when none registered', async () => {
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcplocal.llmProvider).toBeNull();
});
it('reports active llmProvider name when one is registered', async () => {
const provider = mockLlmProvider('openai');
providerRegistry.register(provider);
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcplocal.llmProvider).toBe('openai');
});
it('reports the currently active provider when multiple registered', async () => {
providerRegistry.register(mockLlmProvider('openai'));
providerRegistry.register(mockLlmProvider('anthropic'));
providerRegistry.setActive('anthropic');
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcplocal.llmProvider).toBe('anthropic');
});
});
describe('mcpd health', () => {
it('reports connected when mcpd /health responds successfully', async () => {
const client = mockMcpdClient();
const monitor = new TieredHealthMonitor({
mcpdClient: client,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcpd.status).toBe('connected');
expect(result.mcpd.url).toBe('http://localhost:3100');
});
it('reports disconnected when mcpd /health throws', async () => {
const client = mockMcpdClient({ getFails: true });
const monitor = new TieredHealthMonitor({
mcpdClient: client,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcpd.status).toBe('disconnected');
expect(result.mcpd.url).toBe('http://localhost:3100');
});
it('reports disconnected when mcpdClient is null', async () => {
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcpd.status).toBe('disconnected');
expect(result.mcpd.url).toBe('http://localhost:3100');
});
it('includes the configured mcpd URL in the response', async () => {
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://custom-host:9999',
});
const result = await monitor.checkHealth();
expect(result.mcpd.url).toBe('http://custom-host:9999');
});
});
describe('instances', () => {
it('returns instances from mcpd /instances endpoint', async () => {
const client = mockMcpdClient({
instancesResult: {
instances: [
{ name: 'slack', status: 'running' },
{ name: 'github', status: 'stopped' },
],
},
});
const monitor = new TieredHealthMonitor({
mcpdClient: client,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.instances).toHaveLength(2);
expect(result.instances[0]).toEqual({ name: 'slack', status: 'running' });
expect(result.instances[1]).toEqual({ name: 'github', status: 'stopped' });
});
it('returns empty array when mcpdClient is null', async () => {
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.instances).toEqual([]);
});
it('returns empty array when /instances request fails', async () => {
const client = mockMcpdClient({ instancesFails: true });
const monitor = new TieredHealthMonitor({
mcpdClient: client,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.instances).toEqual([]);
});
it('returns empty array when mcpd has no instances', async () => {
const client = mockMcpdClient({
instancesResult: { instances: [] },
});
const monitor = new TieredHealthMonitor({
mcpdClient: client,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.instances).toEqual([]);
});
});
describe('full integration', () => {
it('returns complete tiered status with all sections', async () => {
providerRegistry.register(mockLlmProvider('openai'));
const client = mockMcpdClient({
instancesResult: {
instances: [
{ name: 'slack', status: 'running' },
],
},
});
const monitor = new TieredHealthMonitor({
mcpdClient: client,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
// Verify structure
expect(result).toHaveProperty('mcplocal');
expect(result).toHaveProperty('mcpd');
expect(result).toHaveProperty('instances');
// mcplocal
expect(result.mcplocal.status).toBe('healthy');
expect(typeof result.mcplocal.uptime).toBe('number');
expect(result.mcplocal.llmProvider).toBe('openai');
// mcpd
expect(result.mcpd.status).toBe('connected');
// instances
expect(result.instances).toHaveLength(1);
expect(result.instances[0]?.name).toBe('slack');
});
it('handles degraded scenario: no mcpd, no provider', async () => {
const monitor = new TieredHealthMonitor({
mcpdClient: null,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcplocal.status).toBe('healthy');
expect(result.mcplocal.llmProvider).toBeNull();
expect(result.mcpd.status).toBe('disconnected');
expect(result.instances).toEqual([]);
});
it('handles mcpd connected but instances endpoint failing', async () => {
const client = mockMcpdClient({ instancesFails: true });
const monitor = new TieredHealthMonitor({
mcpdClient: client,
providerRegistry,
mcpdUrl: 'http://localhost:3100',
});
const result = await monitor.checkHealth();
expect(result.mcpd.status).toBe('connected');
expect(result.instances).toEqual([]);
});
});
});

View File

@@ -0,0 +1,45 @@
import { describe, it, expect } from 'vitest';
import { estimateTokens } from '../src/llm/token-counter.js';
describe('estimateTokens', () => {
it('returns 0 for empty string', () => {
expect(estimateTokens('')).toBe(0);
});
it('returns 1 for strings of 1-4 characters', () => {
expect(estimateTokens('a')).toBe(1);
expect(estimateTokens('ab')).toBe(1);
expect(estimateTokens('abc')).toBe(1);
expect(estimateTokens('abcd')).toBe(1);
});
it('returns 2 for strings of 5-8 characters', () => {
expect(estimateTokens('abcde')).toBe(2);
expect(estimateTokens('abcdefgh')).toBe(2);
});
it('estimates roughly 4 chars per token for longer text', () => {
const text = 'a'.repeat(1000);
expect(estimateTokens(text)).toBe(250);
});
it('rounds up partial tokens', () => {
// 7 chars / 4 = 1.75 -> ceil = 2
expect(estimateTokens('abcdefg')).toBe(2);
// 9 chars / 4 = 2.25 -> ceil = 3
expect(estimateTokens('abcdefghi')).toBe(3);
});
it('handles JSON payloads', () => {
const json = JSON.stringify({ key: 'value', nested: { a: 1, b: [1, 2, 3] } });
const expected = Math.ceil(json.length / 4);
expect(estimateTokens(json)).toBe(expected);
});
it('handles unicode text', () => {
// Note: estimation is by string length (code units), not bytes
const text = '\u{1F600}'.repeat(10); // emoji
const expected = Math.ceil(text.length / 4);
expect(estimateTokens(text)).toBe(expected);
});
});
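
These tests pin down a simple chars/4 heuristic. A minimal implementation satisfying them (hypothetical, since `src/llm/token-counter.ts` itself is not shown in this diff) could be:

```typescript
// Chars/4 token estimator consistent with the tests above: 0 for the empty
// string, otherwise ceil(length / 4). Length is measured in UTF-16 code units,
// not bytes, matching the unicode test case.
export function estimateTokens(text: string): number {
  if (text.length === 0) return 0;
  return Math.ceil(text.length / 4);
}
```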

View File

@@ -2,7 +2,7 @@ import { defineProject } from 'vitest/config';
export default defineProject({
  test: {
-    name: 'local-proxy',
+    name: 'mcplocal',
    include: ['tests/**/*.test.ts'],
  },
});

View File

@@ -5,6 +5,6 @@
{ "path": "src/db" }, { "path": "src/db" },
{ "path": "src/cli" }, { "path": "src/cli" },
{ "path": "src/mcpd" }, { "path": "src/mcpd" },
{ "path": "src/local-proxy" } { "path": "src/mcplocal" }
] ]
} }

View File

@@ -5,5 +5,5 @@ export default defineWorkspace([
  'src/db',
  'src/cli',
  'src/mcpd',
-  'src/local-proxy',
+  'src/mcplocal',
]);