first commit
410  .taskmaster/tasks/task_001.md  Normal file
@@ -0,0 +1,410 @@
# Task ID: 1

**Title:** Initialize Project Structure and Core Dependencies

**Status:** pending

**Dependencies:** None

**Priority:** high

**Description:** Set up the monorepo structure for mcpctl with the CLI client, mcpd server, and shared libraries. Configure TypeScript, ESLint, and build tooling.

**Details:**

Create a monorepo using pnpm workspaces or npm workspaces with the following structure:

```
mcpctl/
├── src/
│   ├── cli/          # mcpctl CLI tool
│   ├── mcpd/         # Backend daemon server
│   ├── shared/       # Shared types, utilities, constants
│   └── local-proxy/  # Local LLM proxy component
├── docker/
│   └── docker-compose.yml
├── package.json
├── tsconfig.base.json
└── pnpm-workspace.yaml
```

Dependencies to install:

- TypeScript 5.x
- Commander.js for the CLI
- Express/Fastify for the mcpd HTTP server
- Zod for schema validation
- Winston/Pino for logging
- Prisma or Drizzle for the database ORM

Create a base tsconfig.json with strict mode, an ES2022 target, and module resolution settings. Set up a shared ESLint config with TypeScript rules.

**Test Strategy:**

Verify the project builds successfully with `pnpm build`. Ensure all packages compile without errors. Test that workspace linking works correctly between packages.

## Subtasks

### 1.1. Initialize pnpm workspace monorepo with future-proof directory structure

**Status:** pending
**Dependencies:** None

Create the complete monorepo directory structure using pnpm workspaces so that it accommodates all 18 planned tasks without requiring future refactoring.

**Details:**

Create the root package.json with the pnpm workspaces configuration. Create pnpm-workspace.yaml defining all workspace packages. Initialize the following directory structure:

```
mcpctl/
├── src/
│   ├── cli/              # mcpctl CLI tool (Tasks 7-10)
│   │   ├── src/
│   │   ├── tests/
│   │   └── package.json
│   ├── mcpd/             # Backend daemon server (Tasks 3-6, 14, 16)
│   │   ├── src/
│   │   ├── tests/
│   │   └── package.json
│   ├── shared/           # Shared types, utils, constants, validation
│   │   ├── src/
│   │   │   ├── types/       # TypeScript interfaces/types
│   │   │   ├── utils/       # Utility functions
│   │   │   ├── constants/   # Shared constants
│   │   │   ├── validation/  # Zod schemas
│   │   │   └── index.ts     # Barrel export
│   │   ├── tests/
│   │   └── package.json
│   ├── local-proxy/      # Local LLM proxy (Tasks 11-13)
│   │   ├── src/
│   │   ├── tests/
│   │   └── package.json
│   └── db/               # Database package (Task 2)
│       ├── src/
│       ├── prisma/          # Schema and migrations
│       ├── seed/            # Seed data
│       ├── tests/
│       └── package.json
├── docker/
│   └── docker-compose.yml   # Local dev services (postgres)
├── tests/
│   ├── e2e/                 # End-to-end tests (Task 18)
│   └── integration/         # Integration tests
├── docs/                    # Documentation (Task 18)
├── package.json             # Root workspace config
├── pnpm-workspace.yaml
└── turbo.json               # Optional: Turborepo for build orchestration
```

Each package should have:

- An empty src/index.ts ready for the barrel export pattern
- An empty tests/ directory
- A package.json with the correct workspace dependencies (@mcpctl/shared, @mcpctl/db)

Use dependency injection patterns from the start by creating interfaces in shared/src/types/ for key services.
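That dependency-injection setup can be sketched as a pair of artifacts: a service contract in shared/src/types/ and a consumer that receives it rather than constructing it. Everything named below (Logger, makeGreeter) is an illustrative assumption, not part of the task spec:

```typescript
// Hypothetical service contract for shared/src/types/ — tests can inject a fake.
interface Logger {
  info(message: string): void;
  error(message: string): void;
}

// A consumer receives its dependencies as parameters instead of importing
// a concrete implementation, which is what makes it unit-testable.
function makeGreeter(logger: Logger) {
  return {
    greet(name: string): string {
      const msg = `hello, ${name}`;
      logger.info(msg);
      return msg;
    },
  };
}
```

In a test, a fake `Logger` that collects messages into an array replaces the real pino-backed implementation.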
<info added on 2026-02-21T02:33:52.473Z>
CRITICAL STRUCTURAL CHANGE: The monorepo workspace packages directory has been renamed from `packages/` to `src/`. All path references in this subtask must use `src/` instead of `packages/`.

Updated directory structure to implement:

```
mcpctl/
├── src/                  # All application source code (pnpm workspace packages)
│   ├── cli/              # @mcpctl/cli - CLI tool (Tasks 7-10)
│   │   ├── src/
│   │   ├── tests/
│   │   └── package.json
│   ├── mcpd/             # @mcpctl/mcpd - Backend daemon (Tasks 3-6, 14, 16)
│   │   ├── src/
│   │   ├── tests/
│   │   └── package.json
│   ├── shared/           # @mcpctl/shared - Shared types, utils, constants, validation
│   │   ├── src/
│   │   │   ├── types/       # TypeScript interfaces/types
│   │   │   ├── utils/       # Utility functions
│   │   │   ├── constants/   # Shared constants
│   │   │   ├── validation/  # Zod schemas
│   │   │   └── index.ts     # Barrel export
│   │   ├── tests/
│   │   └── package.json
│   ├── local-proxy/      # @mcpctl/local-proxy - LLM proxy (Tasks 11-13)
│   │   ├── src/
│   │   ├── tests/
│   │   └── package.json
│   └── db/               # @mcpctl/db - Database/Prisma (Task 2)
│       ├── src/
│       ├── prisma/          # Schema and migrations
│       ├── seed/            # Seed data
│       ├── tests/
│       └── package.json
├── deploy/               # Deployment configs (docker-compose, k8s manifests)
│   ├── docker-compose.yml
│   ├── docker-compose.dev.yml
│   └── Dockerfile.*
├── docs/                 # Documentation (Task 18)
├── tests/                # E2E and integration tests
│   ├── e2e/
│   └── integration/
├── package.json          # Root workspace config
├── pnpm-workspace.yaml   # Points to src/*
├── tsconfig.base.json
├── eslint.config.js
├── vitest.workspace.ts
└── turbo.json            # Optional: Turborepo for build orchestration
```

The pnpm-workspace.yaml should contain: `packages: ["src/*"]`

Key differences from the previous structure:

- `packages/` renamed to `src/` for cleaner separation of app source from project management files
- `docker/` renamed to `deploy/` with additional files (docker-compose.dev.yml, Dockerfile.*)
- Added root config files: eslint.config.js, vitest.workspace.ts
- All workspace package references in pnpm-workspace.yaml use the `src/*` pattern
</info added on 2026-02-21T02:33:52.473Z>
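For concreteness, the workspace file described above is tiny; a sketch consistent with the `src/*` layout:

```yaml
# pnpm-workspace.yaml — every directory directly under src/ is a workspace package
packages:
  - "src/*"
```

With this in place, pnpm's `workspace:*` protocol can be used for the @mcpctl/* cross-package dependencies, keeping internal links local during development.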
### 1.2. Configure TypeScript with strict mode and project references

**Status:** pending
**Dependencies:** 1.1

Set up TypeScript configuration with strict mode, ES2022 target, and proper project references for monorepo build orchestration.

**Details:**

Create root tsconfig.base.json with shared compiler options:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "lib": ["ES2022"],
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true,
    "composite": true,
    "incremental": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "exactOptionalPropertyTypes": true,
    "noUncheckedIndexedAccess": true
  }
}
```

Create package-specific tsconfig.json in each package that extends the base and sets appropriate paths:

- cli/tsconfig.json: outDir: dist, references to shared and db
- mcpd/tsconfig.json: outDir: dist, references to shared and db
- shared/tsconfig.json: outDir: dist (no references, it's the base)
- local-proxy/tsconfig.json: references to shared
- db/tsconfig.json: references to shared

Create tsconfig.json at the root with project references to all packages for unified builds.

Install TypeScript 5.x as a devDependency in the root package.json.
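The root-level file described above is a "solution-style" tsconfig: it compiles nothing itself and only points at the composite packages. A sketch, assuming the `src/*` layout (built with `tsc --build`):

```json
{
  "files": [],
  "references": [
    { "path": "src/shared" },
    { "path": "src/db" },
    { "path": "src/local-proxy" },
    { "path": "src/cli" },
    { "path": "src/mcpd" }
  ]
}
```

The listed order does not matter; `tsc --build` derives the build order from each package's own `references`, which is why `composite: true` is set in the base config.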
### 1.3. Set up Vitest testing framework with workspace configuration

**Status:** pending
**Dependencies:** 1.2

Configure Vitest as the test framework across all packages with proper workspace setup, coverage reporting, and test-driven development infrastructure.

**Details:**

Install Vitest and related packages at the root level:

- vitest
- @vitest/coverage-v8
- @vitest/ui (optional, for visual test running)

Create root vitest.config.ts:

```typescript
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
      exclude: ['**/node_modules/**', '**/dist/**', '**/*.config.*']
    },
    include: ['src/*/tests/**/*.test.ts', 'tests/**/*.test.ts'],
    testTimeout: 10000
  }
});
```

Create vitest.workspace.ts for workspace-aware testing:

```typescript
import { defineWorkspace } from 'vitest/config';

export default defineWorkspace([
  'src/cli',
  'src/mcpd',
  'src/shared',
  'src/local-proxy',
  'src/db'
]);
```

Create per-package vitest.config.ts files that extend the root config.

Add npm scripts to the root package.json:

- "test": "vitest"
- "test:run": "vitest run"
- "test:coverage": "vitest run --coverage"
- "test:ui": "vitest --ui"

Create an initial test file in src/shared/tests/index.test.ts to verify the setup works:

```typescript
import { describe, it, expect } from 'vitest';

describe('shared package', () => {
  it('should be configured correctly', () => {
    expect(true).toBe(true);
  });
});
```
### 1.4. Configure ESLint with TypeScript rules and docker-compose for local development

**Status:** pending
**Dependencies:** 1.2

Set up a shared ESLint configuration with TypeScript-aware rules, Prettier integration, and docker-compose.yml for a local PostgreSQL database.

**Details:**

Install ESLint and plugins at the root:

- eslint
- @typescript-eslint/parser
- @typescript-eslint/eslint-plugin
- eslint-config-prettier
- eslint-plugin-import

Create eslint.config.js (flat config, ESLint 9+):

```javascript
import tseslint from '@typescript-eslint/eslint-plugin';
import tsparser from '@typescript-eslint/parser';

export default [
  {
    files: ['src/*/src/**/*.ts'],
    languageOptions: {
      parser: tsparser,
      parserOptions: {
        project: ['./src/*/tsconfig.json'],
        tsconfigRootDir: import.meta.dirname
      }
    },
    plugins: { '@typescript-eslint': tseslint },
    rules: {
      '@typescript-eslint/explicit-function-return-type': 'error',
      '@typescript-eslint/no-explicit-any': 'error',
      '@typescript-eslint/no-unused-vars': 'error',
      '@typescript-eslint/strict-boolean-expressions': 'error',
      'no-console': ['warn', { allow: ['warn', 'error'] }]
    }
  }
];
```

Create deploy/docker-compose.yml for local development:

```yaml
version: '3.8'
services:
  postgres:
    image: postgres:16-alpine
    container_name: mcpctl-postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: mcpctl
      POSTGRES_PASSWORD: mcpctl_dev
      POSTGRES_DB: mcpctl
    volumes:
      - mcpctl-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U mcpctl"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  mcpctl-pgdata:
```

Add scripts to the root package.json:

- "lint": "eslint src/*/src/**/*.ts"
- "lint:fix": "eslint src/*/src/**/*.ts --fix"
- "db:up": "docker-compose -f deploy/docker-compose.yml up -d"
- "db:down": "docker-compose -f deploy/docker-compose.yml down"

Create .env.example at the root with a DATABASE_URL template:

```
DATABASE_URL="postgresql://mcpctl:mcpctl_dev@localhost:5432/mcpctl"
```
### 1.5. Install core dependencies and perform security/architecture review

**Status:** pending
**Dependencies:** 1.1, 1.3, 1.4

Install all required production dependencies across packages, run a security audit, and validate that the directory structure supports all 18 planned tasks.

**Details:**

Install dependencies per package:

**src/cli/package.json:**
- commander (CLI framework)
- chalk (colored output)
- js-yaml (YAML parsing)
- inquirer (interactive prompts)

**src/mcpd/package.json:**
- fastify (HTTP server)
- @fastify/cors, @fastify/helmet, @fastify/rate-limit (middleware)
- zod (schema validation) - also add to shared
- pino (logging, built into Fastify)

**src/shared/package.json:**
- zod (shared validation schemas)

**src/db/package.json:**
- prisma (ORM)
- @prisma/client

**src/local-proxy/package.json:**
- @modelcontextprotocol/sdk (MCP protocol)

**Root devDependencies:**
- typescript
- vitest, @vitest/coverage-v8
- eslint and plugins (already specified)
- tsx (for running TypeScript directly)
- rimraf (cross-platform rm -rf for clean scripts)
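The per-package manifests above link siblings through pnpm's workspace protocol. A minimal, illustrative src/mcpd/package.json — the version ranges are placeholders, not pinned by this task:

```json
{
  "name": "@mcpctl/mcpd",
  "version": "0.1.0",
  "private": true,
  "type": "module",
  "main": "dist/index.js",
  "dependencies": {
    "@mcpctl/shared": "workspace:*",
    "@mcpctl/db": "workspace:*",
    "fastify": "^4.0.0",
    "zod": "^3.0.0"
  },
  "scripts": {
    "build": "tsc --build",
    "test": "vitest run"
  }
}
```

`workspace:*` always resolves to the local workspace copy during development; pnpm rewrites it to a concrete version on publish.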
**Security Review Checklist:**

1. Run `pnpm audit` and verify no high/critical vulnerabilities
2. Verify .gitignore excludes: .env, node_modules, dist, *.log
3. Verify .env.example has no real secrets, only templates
4. Ensure no API keys or secrets in any committed files
5. Document security audit results in SECURITY_AUDIT.md

**Architecture Review Checklist:**

1. Verify structure supports Task 2 (db package with prisma/)
2. Verify structure supports Tasks 3-6 (mcpd with src/routes/, src/services/)
3. Verify structure supports Tasks 7-10 (cli with src/commands/)
4. Verify structure supports Tasks 11-13 (local-proxy with src/providers/)
5. Verify tests/ directories exist at package and root level
6. Verify dependency injection interfaces are defined in shared/src/types/
7. Verify barrel exports in shared/src/index.ts
8. Document architecture decisions in ARCHITECTURE.md
155  .taskmaster/tasks/task_002.md  Normal file
@@ -0,0 +1,155 @@
# Task ID: 2

**Title:** Design and Implement Database Schema

**Status:** pending

**Dependencies:** 1

**Priority:** high

**Description:** Create the database schema for storing MCP server configurations, projects, profiles, user sessions, and audit logs. Use PostgreSQL for production readiness.

**Details:**

Design the PostgreSQL schema using Prisma ORM:

```prisma
model User {
  id        String     @id @default(uuid())
  email     String     @unique
  name      String?
  sessions  Session[]
  auditLogs AuditLog[]
  createdAt DateTime   @default(now())
}

model McpServer {
  id          String        @id @default(uuid())
  name        String        @unique
  type        String        // e.g., 'slack', 'jira', 'terraform'
  command     String        // npx command or docker image
  args        Json          // command arguments
  envTemplate Json          // required env vars template
  setupGuide  String?       // markdown guide for setup
  profiles    McpProfile[]
  instances   McpInstance[]
}

model McpProfile {
  id          String              @id @default(uuid())
  name        String
  serverId    String
  server      McpServer           @relation(fields: [serverId], references: [id])
  config      Json                // profile-specific config (read-only, limited endpoints, etc.)
  filterRules Json?               // pre-filtering rules
  projects    ProjectMcpProfile[]
}

model Project {
  id          String              @id @default(uuid())
  name        String              @unique
  description String?
  profiles    ProjectMcpProfile[]
  createdAt   DateTime            @default(now())
}

model ProjectMcpProfile {
  projectId String
  profileId String
  project   Project    @relation(fields: [projectId], references: [id])
  profile   McpProfile @relation(fields: [profileId], references: [id])

  @@id([projectId, profileId])
}

model McpInstance {
  id          String    @id @default(uuid())
  serverId    String
  server      McpServer @relation(fields: [serverId], references: [id])
  containerId String?
  status      String    // running, stopped, error
  config      Json
  createdAt   DateTime  @default(now())
}

model AuditLog {
  id        String   @id @default(uuid())
  userId    String?
  user      User?    @relation(fields: [userId], references: [id])
  action    String
  resource  String
  details   Json
  timestamp DateTime @default(now())
}

model Session {
  id        String   @id @default(uuid())
  userId    String
  user      User     @relation(fields: [userId], references: [id])
  token     String   @unique
  expiresAt DateTime
}
```

Create migrations and seed data for common MCP servers (slack, jira, github, terraform).

**Test Strategy:**

Run Prisma migrations against a test database. Verify all relations work correctly with seed data. Test CRUD operations for each model using the Prisma client.
## Subtasks

### 2.1. Set up Prisma ORM and PostgreSQL test infrastructure with docker-compose

**Status:** pending
**Dependencies:** None

Initialize Prisma in the db package with PostgreSQL configuration, create docker-compose.yml for local development with a separate test database, and set up test database setup/teardown scripts.

**Details:**

Create the src/db/prisma directory structure. Install the Prisma dependencies (@prisma/client, prisma as a devDependency). Configure deploy/docker-compose.yml with two PostgreSQL services: mcpctl-postgres (port 5432) for development and mcpctl-postgres-test (port 5433) for testing. Create src/db/src/test-utils.ts with setupTestDb() and teardownTestDb() functions that handle database connection, schema push, and cleanup. Create .env and .env.test with DATABASE_URL pointing to the respective databases. Initialize prisma/schema.prisma with the PostgreSQL provider and basic generator config. Write Vitest tests for the test utilities to verify they can connect, push the schema, and clean up correctly.

### 2.2. Write TDD tests for all Prisma models before implementing schema

**Status:** pending
**Dependencies:** 2.1

Create comprehensive Vitest test suites for all 8 models (User, McpServer, McpProfile, Project, ProjectMcpProfile, McpInstance, AuditLog, Session) testing CRUD operations, relations, constraints, and edge cases.

**Details:**

Create the src/db/tests/models directory with separate test files: user.test.ts, mcp-server.test.ts, mcp-profile.test.ts, project.test.ts, mcp-instance.test.ts, audit-log.test.ts, session.test.ts. Each test file should include: (1) CRUD operations (create, read, update, delete), (2) unique constraint violations (email for User, name for McpServer/Project), (3) relation tests (User->Sessions, McpServer->McpProfile->Projects, etc.), (4) cascade delete behavior, (5) JSON field validation for the args, envTemplate, config, filterRules, and details fields, (6) default value tests (uuid, timestamps), (7) edge cases such as null optional fields. Tests will initially fail (TDD red phase) until the schema is implemented.

### 2.3. Implement Prisma schema with all models and security considerations

**Status:** pending
**Dependencies:** 2.2

Create the complete Prisma schema with all 8 models, proper relations, indexes for audit queries, and security-conscious field design for credentials encryption at rest.

**Details:**

Implement src/db/prisma/schema.prisma with:

- User: id uuid, email unique, name optional, createdAt; relations to Session and AuditLog
- McpServer: id uuid, name unique, type, command, args Json, envTemplate Json with @@map for encrypted storage notes, setupGuide optional, relations
- McpProfile: id uuid, name, serverId FK, config Json, filterRules Json optional; relations to server and projects
- Project: id uuid, name unique, description optional, createdAt; relation to profiles
- ProjectMcpProfile: composite PK projectId+profileId, relations
- McpInstance: id uuid, serverId FK, containerId optional, status enum-like string, config Json, metadata Json for future K8s support, createdAt, updatedAt
- AuditLog: id uuid, userId optional FK, action, resource, details Json, timestamp; indexes on userId, timestamp, and action for query performance
- Session: id uuid, userId FK, token unique with index, expiresAt, createdAt

Add @@index annotations for frequently queried fields. Document in comments that envTemplate and config fields containing secrets must be encrypted at the application layer.

### 2.4. Create seed data functions with unit tests for common MCP servers

**Status:** pending
**Dependencies:** 2.3

Implement seed functions for common MCP server configurations (Slack, Jira, GitHub, Terraform) with comprehensive unit tests for each seed function.

**Details:**

Create the src/db/seed directory with: index.ts (main seed runner), mcp-servers.ts (server definitions), seed-mcp-servers.ts (seeding function), and seed-default-profiles.ts (default profiles per server). Define server configurations: Slack (npx @modelcontextprotocol/server-slack, SLACK_BOT_TOKEN, SLACK_TEAM_ID env template with setup guide), Jira (npx @anthropic/mcp-server-jira, JIRA_URL, JIRA_EMAIL, JIRA_API_TOKEN), GitHub (npx @modelcontextprotocol/server-github, GITHUB_TOKEN), Terraform (npx terraform-docs-mcp). Create the src/db/tests/seed directory with tests: seed-mcp-servers.test.ts and seed-default-profiles.test.ts. Tests should verify: (1) each server is created with correct data, (2) idempotency (running twice doesn't create duplicates), (3) default profiles are linked correctly, (4) the envTemplate JSON structure is valid.
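The idempotency requirement in (2) amounts to seeding via upsert keyed on the unique server name. A runnable sketch with an in-memory store standing in for the database table (with Prisma this would be `prisma.mcpServer.upsert({ where: { name }, ... })`; all names below are illustrative):

```typescript
interface McpServerSeed {
  name: string; // unique key, mirrors McpServer.name @unique
  type: string;
  command: string;
}

// In-memory stand-in for the McpServer table, keyed by the unique name.
function seedMcpServers(store: Map<string, McpServerSeed>, seeds: McpServerSeed[]): number {
  for (const seed of seeds) {
    store.set(seed.name, seed); // upsert: create or overwrite by unique name
  }
  return store.size;
}

const seeds: McpServerSeed[] = [
  { name: 'slack', type: 'slack', command: 'npx @modelcontextprotocol/server-slack' },
  { name: 'github', type: 'github', command: 'npx @modelcontextprotocol/server-github' },
];

const store = new Map<string, McpServerSeed>();
seedMcpServers(store, seeds);
seedMcpServers(store, seeds); // running twice must not create duplicates
```

Keying on the unique column is what makes the seed safe to re-run in CI and local setup.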
### 2.5. Create database migrations and perform security/architecture review

**Status:** pending
**Dependencies:** 2.3, 2.4

Generate the initial Prisma migration, create migration helper utilities with tests, and conduct a comprehensive security and architecture review documenting the findings.

**Details:**

Run `npx prisma migrate dev --name init` to create the initial migration in src/db/prisma/migrations. Create src/db/src/migration-helpers.ts with the utilities resetDatabase(), applyMigrations(), and rollbackMigration(), each with proper error handling. Write unit tests in src/db/tests/migration-helpers.test.ts.

Conduct a security review and document it in src/db/SECURITY_REVIEW.md:

1. PII handling - email in User is the only PII; add a note about GDPR considerations
2. Credentials handling - envTemplate and config fields contain secrets; document the encryption-at-rest requirement at the application layer
3. Audit log indexes verified for query performance
4. Cascade delete behavior reviewed (Session deletes with User, but AuditLog userId is set to null)
5. Validation that no sensitive data is stored in plain text

Conduct an architecture review and document it in src/db/ARCHITECTURE.md:

1. The schema supports all 18 tasks
2. The McpInstance.metadata Json field is ready for K8s pod metadata
3. AuditLog.details is flexible enough for various action types
4. Future migration considerations for adding fields without breaking data
205  .taskmaster/tasks/task_003.md  Normal file
@@ -0,0 +1,205 @@
# Task ID: 3

**Title:** Implement mcpd Core Server Framework

**Status:** pending

**Dependencies:** 1, 2

**Priority:** high

**Description:** Build the mcpd daemon server with Express/Fastify, including middleware for authentication, logging, and error handling. Design for horizontal scalability.

**Details:**

Create the mcpd server in `src/mcpd/src/`:

```typescript
// server.ts
import Fastify from 'fastify';
import cors from '@fastify/cors';
import helmet from '@fastify/helmet';
import rateLimit from '@fastify/rate-limit';
import { PrismaClient } from '@prisma/client';

const app = Fastify({ logger: true });
const prisma = new PrismaClient();

// Middleware (ESM imports rather than require(), which is unavailable in ES modules)
app.register(cors);
app.register(helmet);
app.register(rateLimit, { max: 100, timeWindow: '1 minute' });

// Health check for load balancers
app.get('/health', async () => ({ status: 'ok', timestamp: new Date().toISOString() }));

// Auth middleware
app.addHook('preHandler', async (request, reply) => {
  if (request.url === '/health') return;
  const token = request.headers.authorization?.replace('Bearer ', '');
  if (!token) return reply.status(401).send({ error: 'Unauthorized' });
  // Validate token against the Session table
});

// Audit logging middleware
// (request.user relies on a FastifyRequest type augmentation added by the auth middleware)
app.addHook('onResponse', async (request, reply) => {
  await prisma.auditLog.create({
    data: {
      action: request.method,
      resource: request.url,
      details: { statusCode: reply.statusCode },
      userId: request.user?.id
    }
  });
});
```

Design principles:

- Stateless: all state in PostgreSQL, no in-memory session storage
- Scalable: can run multiple instances behind a load balancer
- Configurable via environment variables
- Graceful shutdown handling

**Test Strategy:**

Unit test the middleware functions. Integration test the health endpoint. Load test with multiple concurrent requests. Verify statelessness by running two instances and alternating requests.
## Subtasks

### 3.1. Set up mcpd package structure with clean architecture layers and TDD infrastructure

**Status:** pending
**Dependencies:** None

Create the src/mcpd directory structure following clean architecture principles, with separate layers for routes, controllers, services, and repositories, along with the Vitest test configuration.

**Details:**

Create the src/mcpd/src/ directory structure with the following layers:

- routes/ - HTTP route definitions (thin layer, delegates to controllers)
- controllers/ - request/response handling, input validation
- services/ - business logic, orchestrates repositories
- repositories/ - data access layer, Prisma abstraction
- middleware/ - auth, logging, error handling, rate limiting
- config/ - environment configuration with Zod validation
- types/ - TypeScript interfaces for dependency injection
- utils/ - utility functions (graceful shutdown, health checks)

Create src/mcpd/tests/ with a matching structure:

- unit/ (routes, controllers, services, repositories, middleware)
- integration/ (API endpoint tests)
- fixtures/ (mock data, Prisma mock setup)

Set up vitest.config.ts extending the root config with mcpd-specific settings. Create test-utils.ts with a Prisma mock factory and Fastify test helpers. Install dependencies: fastify, @fastify/cors, @fastify/helmet, @fastify/rate-limit, zod, pino. DevDependencies: vitest, @vitest/coverage-v8, supertest.
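The repository layer described above is what makes the "Prisma mock factory" practical: services depend on an interface, so unit tests can substitute an in-memory fake with no database. A sketch under that assumption (all names are illustrative):

```typescript
interface Project {
  id: string;
  name: string;
}

// Contract the service layer depends on; the real implementation wraps Prisma.
interface ProjectRepository {
  create(name: string): Project;
  findByName(name: string): Project | undefined;
}

// In-memory fake for unit tests — no database required.
function makeInMemoryProjectRepository(): ProjectRepository {
  const rows = new Map<string, Project>();
  let nextId = 1;
  return {
    create(name: string): Project {
      const project = { id: String(nextId++), name };
      rows.set(name, project);
      return project;
    },
    findByName(name: string): Project | undefined {
      return rows.get(name);
    },
  };
}
```

Integration tests then exercise the Prisma-backed implementation against the docker-compose test database, while unit tests stay fast and deterministic.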
### 3.2. Implement Fastify server core with health endpoint and database connectivity verification

**Status:** pending
**Dependencies:** 3.1

Create the core Fastify server with a health check endpoint that verifies PostgreSQL database connectivity, environment configuration validation, and server lifecycle management.

**Details:**

Create src/mcpd/src/server.ts with a Fastify instance factory function createServer(config: ServerConfig) for testability via dependency injection. Implement:

- config/env.ts: Zod schema for environment variables (DATABASE_URL, PORT, NODE_ENV, LOG_LEVEL)
- config/index.ts: loadConfig() function that validates env with Zod
- utils/health.ts: checkDatabaseConnectivity(prisma) function
- routes/health.ts: GET /health endpoint returning { status: 'ok' | 'degraded', timestamp: ISO8601, db: 'connected' | 'disconnected' }

Server requirements:

- Fastify with the pino logger enabled (configurable log level)
- Health endpoint bypasses the auth middleware
- Health endpoint checks actual DB connectivity via prisma.$queryRaw
- Server does NOT start if DATABASE_URL is missing (fail fast)
- Export createServer() and startServer() separately for testing

Write TDD tests FIRST in tests/unit/routes/health.test.ts and tests/unit/config/env.test.ts before implementing.
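The health response shape above can be built by a pure function, which keeps the route handler trivially unit-testable; a sketch (the helper name and injected-clock parameter are assumptions):

```typescript
interface HealthPayload {
  status: 'ok' | 'degraded';
  timestamp: string; // ISO 8601
  db: 'connected' | 'disconnected';
}

// Pure: the route handler calls checkDatabaseConnectivity() and passes the
// boolean result in; the clock is injectable so tests are deterministic.
function healthPayload(dbConnected: boolean, now: Date = new Date()): HealthPayload {
  return {
    status: dbConnected ? 'ok' : 'degraded',
    timestamp: now.toISOString(),
    db: dbConnected ? 'connected' : 'disconnected',
  };
}
```

The route then reduces to one `await` plus one call to `healthPayload`, and the TDD test for the shape needs no running server.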
### 3.3. Implement authentication middleware with JWT validation and session management

**Status:** pending
**Dependencies:** 3.2

Create an authentication preHandler hook that validates Bearer tokens against the Session table in PostgreSQL, with proper error responses and request decoration for downstream handlers.

**Details:**

Create src/mcpd/src/middleware/auth.ts with:

- authMiddleware(prisma: PrismaClient) factory function (dependency injection)
- Fastify preHandler hook implementation
- Extract the Bearer token from the Authorization header
- Validate that the token exists and its format is correct
- Query the Session table: find by token, check expiresAt > now()
- Query the User by session.userId for request decoration
- Decorate the request with user: { id, email, name } via fastify.decorateRequest
- Return 401 Unauthorized with { error: 'Unauthorized', code: 'TOKEN_REQUIRED' } for a missing token
- Return 401 with { error: 'Unauthorized', code: 'TOKEN_EXPIRED' } for an expired session
- Return 401 with { error: 'Unauthorized', code: 'TOKEN_INVALID' } for an invalid token

Create types/fastify.d.ts with a FastifyRequest augmentation for the user property.

Write unit tests in tests/unit/middleware/auth.test.ts with a mocked Prisma client before implementation.
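The hook above splits naturally into two pure steps — token extraction and failure classification — with the Prisma lookup between them. A runnable sketch of those pure steps (function names are assumptions; the real middleware would fetch `session` via the injected Prisma client):

```typescript
type AuthFailure = 'TOKEN_REQUIRED' | 'TOKEN_EXPIRED' | 'TOKEN_INVALID';

interface SessionRecord {
  token: string;
  userId: string;
  expiresAt: Date;
}

// Pure parsing step: pull the token out of an Authorization header.
function extractBearerToken(header: string | undefined): string | null {
  if (header === undefined || !header.startsWith('Bearer ')) return null;
  const token = header.slice('Bearer '.length).trim();
  return token.length > 0 ? token : null;
}

// Pure classification step: decide which 401 code applies.
// `session` is the result of the Session-table lookup (null when not found).
function classifyAuth(
  token: string | null,
  session: SessionRecord | null,
  now: Date
): AuthFailure | 'OK' {
  if (token === null) return 'TOKEN_REQUIRED';
  if (session === null) return 'TOKEN_INVALID';
  if (session.expiresAt <= now) return 'TOKEN_EXPIRED';
  return 'OK';
}
```

Keeping these steps pure means the mocked-Prisma unit tests only have to fake one `findUnique` call; the branching logic is covered without any Fastify plumbing.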
### 3.4. Implement security middleware stack with CORS, Helmet, rate limiting, and input sanitization

**Status:** pending
**Dependencies:** 3.2

Configure and register security middleware including a CORS policy, Helmet security headers, and rate limiting, and create input sanitization utilities to prevent injection attacks.

**Details:**

Create src/mcpd/src/middleware/security.ts with:

- registerSecurityPlugins(app: FastifyInstance, config: SecurityConfig) function
- CORS configuration: configurable origins (default: same-origin for production, * for development), credentials support, allowed methods/headers
- Helmet configuration: contentSecurityPolicy, hsts (enabled in production), noSniff, frameguard
- Rate limiting: 100 requests per minute by default, configurable via env, with stricter limits for auth endpoints

Create src/mcpd/src/utils/sanitize.ts:

- sanitizeInput(input: unknown): returns the sanitized value
- stripHtmlTags() and escapeHtml() for XSS prevention
- Validate that JSON input doesn't exceed size limits

Create src/mcpd/src/middleware/validate.ts:

- createValidationMiddleware(schema: ZodSchema) factory
- Validates request.body against the Zod schema
- Returns 400 Bad Request with the Zod errors formatted

Document security decisions in src/mcpd/SECURITY.md with a rationale for each configuration choice.
|
||||
|
||||
### 3.5. Implement error handling, audit logging middleware, and graceful shutdown with comprehensive tests
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 3.2, 3.3, 3.4
|
||||
|
||||
Create global error handler, audit logging onResponse hook that records all operations to database, and graceful shutdown handling with connection draining and proper signal handling.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/middleware/error-handler.ts:
|
||||
- Global Fastify error handler via setErrorHandler
|
||||
- Handle Zod validation errors -> 400 Bad Request
|
||||
- Handle Prisma errors (P2002 unique, P2025 not found) -> appropriate HTTP codes
|
||||
- Handle custom application errors with error codes
|
||||
- Log errors with pino, include stack trace in development only
|
||||
- Never expose internal errors to clients in production
|
||||
|
||||
Create src/mcpd/src/middleware/audit.ts:
|
||||
- auditMiddleware(prisma: PrismaClient, auditLogger: AuditLogger) factory
|
||||
- Fastify onResponse hook
|
||||
- Create AuditLog record with: userId (from request.user), action (HTTP method), resource (URL), details ({ statusCode, responseTime, ip })
|
||||
- Skip audit logging for /health endpoint
|
||||
- Async write - don't block response
|
||||
- Handle audit write failures gracefully (log warning, don't fail request)
|
||||
|
||||
Create src/mcpd/src/utils/shutdown.ts:
|
||||
- setupGracefulShutdown(app: FastifyInstance, prisma: PrismaClient) function
|
||||
- Handle SIGTERM, SIGINT signals
|
||||
- Stop accepting new connections
|
||||
- Wait for in-flight requests (configurable timeout, default 30s)
|
||||
- Disconnect Prisma client
|
||||
- Exit with appropriate code
|
||||
|
||||
Create services/audit-logger.ts interface that Task 14 will implement.
|
||||
111
.taskmaster/tasks/task_004.md
Normal file
111
.taskmaster/tasks/task_004.md
Normal file
@@ -0,0 +1,111 @@
|
||||
# Task ID: 4
|
||||
|
||||
**Title:** Implement MCP Server Registry and Profile Management
|
||||
|
||||
**Status:** pending
|
||||
|
||||
**Dependencies:** 3
|
||||
|
||||
**Priority:** high
|
||||
|
||||
**Description:** Create APIs for registering MCP servers, managing profiles with different permission levels, and storing configuration templates.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create REST API endpoints in mcpd:
|
||||
|
||||
```typescript
|
||||
// routes/mcp-servers.ts
|
||||
app.post('/api/mcp-servers', async (req) => {
|
||||
const { name, type, command, args, envTemplate, setupGuide } = req.body;
|
||||
return prisma.mcpServer.create({ data: { name, type, command, args, envTemplate, setupGuide } });
|
||||
});
|
||||
|
||||
app.get('/api/mcp-servers', async () => {
|
||||
return prisma.mcpServer.findMany({ include: { profiles: true } });
|
||||
});
|
||||
|
||||
app.get('/api/mcp-servers/:id', async (req) => {
|
||||
return prisma.mcpServer.findUnique({ where: { id: req.params.id }, include: { profiles: true, instances: true } });
|
||||
});
|
||||
|
||||
// Profile management
|
||||
app.post('/api/mcp-servers/:serverId/profiles', async (req) => {
|
||||
const { name, config, filterRules } = req.body;
|
||||
return prisma.mcpProfile.create({
|
||||
data: { name, serverId: req.params.serverId, config, filterRules }
|
||||
});
|
||||
});
|
||||
|
||||
// Example profile configs:
|
||||
// Read-only Jira: { permissions: ['read'], allowedEndpoints: ['/issues/*', '/projects/*'] }
|
||||
// Full Slack: { permissions: ['read', 'write'], channels: ['*'] }
|
||||
// Limited Terraform: { permissions: ['read'], modules: ['aws_*', 'kubernetes_*'] }
|
||||
```
|
||||
|
||||
Create seed data with pre-configured MCP server definitions:
|
||||
- Slack MCP with OAuth setup guide
|
||||
- Jira MCP with API token guide
|
||||
- GitHub MCP with PAT guide
|
||||
- Terraform docs MCP
|
||||
|
||||
**Test Strategy:**
|
||||
|
||||
Test CRUD operations for servers and profiles. Verify profile inheritance works. Test that invalid configurations are rejected by Zod validation.
|
||||
|
||||
## Subtasks
|
||||
|
||||
### 4.1. Create Zod validation schemas with comprehensive TDD test coverage
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** None
|
||||
|
||||
Define and test Zod schemas for MCP server registration, profile management, and configuration templates before implementing any routes or services.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/validation/mcp-server.schema.ts with schemas: CreateMcpServerSchema (name: string non-empty, type: enum ['slack', 'jira', 'github', 'terraform', 'custom'], command: string, args: array of strings, envTemplate: record with nested schema for { description: string, required: boolean, secret: boolean, setupUrl?: string }, setupGuide?: string). Create UpdateMcpServerSchema as partial of create. Create CreateMcpProfileSchema (name: string, serverId: uuid, config: record with permissions array ['read', 'write'], filterRules?: record). Create src/mcpd/tests/unit/validation/mcp-server.schema.test.ts with TDD tests BEFORE implementation: (1) Test valid server creation passes, (2) Test empty name fails, (3) Test invalid type fails, (4) Test envTemplate validates nested structure, (5) Test profile config validates permissions array only contains 'read'/'write', (6) Test UUID format validation for serverId, (7) Test sanitization of XSS attempts in setupGuide field, (8) Test envTemplate values cannot contain shell injection patterns. Security: Add custom Zod refinements to reject dangerous patterns in envTemplate values like backticks, $(), etc.
|
||||
|
||||
### 4.2. Implement repository pattern for MCP server and profile data access
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 4.1
|
||||
|
||||
Create injectable repository classes for McpServer and McpProfile data access with Prisma, following dependency injection patterns for testability.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/repositories/interfaces.ts with IMcpServerRepository and IMcpProfileRepository interfaces defining all CRUD methods. Create src/mcpd/src/repositories/mcp-server.repository.ts implementing IMcpServerRepository with methods: create(data: CreateMcpServerInput), findById(id: string, include?: { profiles?: boolean, instances?: boolean }), findByName(name: string), findAll(include?: { profiles?: boolean }), update(id: string, data: UpdateMcpServerInput), delete(id: string). Create src/mcpd/src/repositories/mcp-profile.repository.ts with methods: create(data: CreateMcpProfileInput), findById(id: string), findByServerId(serverId: string), findAll(), update(id: string, data: UpdateMcpProfileInput), delete(id: string), validateProfilePermissions(profileId: string, requestedPermissions: string[]) to check profile cannot escalate beyond server's allowed permissions. Write TDD tests in src/mcpd/tests/unit/repositories/ before implementation using Prisma mock factory from Task 3's test utilities. Architecture note: These repositories will be used by Task 10 (setup wizard) and Task 15 (profiles library).
|
||||
|
||||
### 4.3. Implement MCP server service layer with business logic and authorization
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 4.1, 4.2
|
||||
|
||||
Create McpServerService and McpProfileService with business logic, authorization checks, and validation orchestration using injected repositories.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/services/mcp-server.service.ts with constructor accepting IMcpServerRepository (DI). Methods: createServer(userId: string, data: CreateMcpServerInput) - validate with Zod schema, check user has 'admin' or 'server:create' permission, call repository; getServer(userId: string, id: string) - check read permission, include profiles if authorized; listServers(userId: string, filters?: ServerFilters); updateServer(userId: string, id: string, data) - check 'server:update' permission; deleteServer(userId: string, id: string) - check 'server:delete', verify no active instances. Create src/mcpd/src/services/mcp-profile.service.ts with methods: createProfile(userId: string, serverId: string, data) - validate profile permissions don't exceed server's capabilities, check 'profile:create' permission; updateProfile(); deleteProfile() - check no active instances using this profile. Security: Implement permission hierarchy where profile.config.permissions must be subset of server's allowed permissions. Create src/mcpd/src/services/authorization.ts with checkPermission(userId: string, resource: string, action: string) helper. Write TDD tests mocking repositories.
|
||||
|
||||
### 4.4. Implement REST API routes for MCP servers and profiles with request validation
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 4.3
|
||||
|
||||
Create Fastify route handlers for MCP server and profile CRUD operations using the service layer, with Zod request validation middleware.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/routes/mcp-servers.ts with routes: POST /api/mcp-servers (create server, requires auth + admin), GET /api/mcp-servers (list all, requires auth), GET /api/mcp-servers/:id (get by ID with profiles/instances, requires auth), PUT /api/mcp-servers/:id (update, requires auth + admin), DELETE /api/mcp-servers/:id (delete, requires auth + admin). Create src/mcpd/src/routes/mcp-profiles.ts with routes: POST /api/mcp-servers/:serverId/profiles (create profile for server), GET /api/mcp-servers/:serverId/profiles (list profiles for server), GET /api/profiles/:id (get profile by ID), PUT /api/profiles/:id (update profile), DELETE /api/profiles/:id (delete profile). Each route handler: (1) Uses Zod schema via validation middleware from Task 3, (2) Calls appropriate service method, (3) Returns consistent response format { success: boolean, data?: T, error?: { code: string, message: string } }, (4) Uses request.user from auth middleware. Register routes in server.ts. Write integration tests using Fastify's inject() method.
|
||||
|
||||
### 4.5. Create seed data for pre-configured MCP servers and perform security review
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 4.4
|
||||
|
||||
Implement seed data for Slack, Jira, GitHub, and Terraform MCP servers with default profiles, plus comprehensive security review of all implemented code.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/seed/mcp-servers.seed.ts with seedMcpServers() function using the McpServerService to create: (1) Slack MCP - command: 'npx', args: ['-y', '@modelcontextprotocol/server-slack'], envTemplate with SLACK_BOT_TOKEN (secret, setupUrl to api.slack.com), SLACK_TEAM_ID, setupGuide markdown with OAuth setup steps, default profiles: 'slack-read-only' (permissions: ['read']), 'slack-full' (permissions: ['read', 'write']); (2) Jira MCP - envTemplate with JIRA_URL, JIRA_EMAIL, JIRA_API_TOKEN (secret), setupGuide for API token creation; (3) GitHub MCP - envTemplate with GITHUB_TOKEN (secret, setupUrl to github.com/settings/tokens); (4) Terraform Docs MCP - no env required, read-only profile. Create src/mcpd/src/seed/index.ts that runs all seeders. Security Review - create SECURITY_REVIEW.md documenting: (1) All Zod schemas reviewed for injection prevention, (2) Authorization checked on every route, (3) envTemplate sanitization prevents shell injection, (4) Profile permission escalation prevented, (5) Secrets marked appropriately in envTemplate, (6) No sensitive data in logs or error responses. Run 'pnpm lint' and 'pnpm test:coverage' ensuring >80% coverage.
|
||||
126
.taskmaster/tasks/task_005.md
Normal file
126
.taskmaster/tasks/task_005.md
Normal file
@@ -0,0 +1,126 @@
|
||||
# Task ID: 5
|
||||
|
||||
**Title:** Implement Project Management APIs
|
||||
|
||||
**Status:** pending
|
||||
|
||||
**Dependencies:** 4
|
||||
|
||||
**Priority:** high
|
||||
|
||||
**Description:** Create APIs for managing MCP projects that group multiple MCP profiles together for easy assignment to Claude sessions.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create project management endpoints:
|
||||
|
||||
```typescript
|
||||
// routes/projects.ts
|
||||
app.post('/api/projects', async (req) => {
|
||||
const { name, description, profileIds } = req.body;
|
||||
const project = await prisma.project.create({
|
||||
data: {
|
||||
name,
|
||||
description,
|
||||
profiles: {
|
||||
create: profileIds.map(profileId => ({ profileId }))
|
||||
}
|
||||
},
|
||||
include: { profiles: { include: { profile: { include: { server: true } } } } }
|
||||
});
|
||||
return project;
|
||||
});
|
||||
|
||||
app.get('/api/projects', async () => {
|
||||
return prisma.project.findMany({
|
||||
include: { profiles: { include: { profile: { include: { server: true } } } } }
|
||||
});
|
||||
});
|
||||
|
||||
app.get('/api/projects/:name', async (req) => {
|
||||
return prisma.project.findUnique({
|
||||
where: { name: req.params.name },
|
||||
include: { profiles: { include: { profile: { include: { server: true } } } } }
|
||||
});
|
||||
});
|
||||
|
||||
app.put('/api/projects/:id/profiles', async (req) => {
|
||||
const { profileIds } = req.body;
|
||||
// Update project profiles
|
||||
await prisma.projectMcpProfile.deleteMany({ where: { projectId: req.params.id } });
|
||||
await prisma.projectMcpProfile.createMany({
|
||||
data: profileIds.map(profileId => ({ projectId: req.params.id, profileId }))
|
||||
});
|
||||
});
|
||||
|
||||
// Generate .mcp.json format for Claude
|
||||
app.get('/api/projects/:name/mcp-config', async (req) => {
|
||||
const project = await prisma.project.findUnique({
|
||||
where: { name: req.params.name },
|
||||
include: { profiles: { include: { profile: { include: { server: true } } } } }
|
||||
});
|
||||
// Transform to .mcp.json format
|
||||
return generateMcpConfig(project);
|
||||
});
|
||||
```
|
||||
|
||||
**Test Strategy:**
|
||||
|
||||
Test project CRUD operations. Verify profile associations work correctly. Test MCP config generation produces valid .mcp.json format.
|
||||
|
||||
## Subtasks
|
||||
|
||||
### 5.1. Write TDD tests for project Zod validation schemas and generateMcpConfig function
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** None
|
||||
|
||||
Create comprehensive Vitest test suites for project validation schemas and the critical generateMcpConfig function BEFORE implementing any code, following TDD red phase.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/tests/unit/validation/project.schema.test.ts with tests for: (1) CreateProjectSchema validates name (non-empty string, max 64 chars, alphanumeric-dash only), description (optional string, max 500 chars), profileIds (array of valid UUIDs, can be empty); (2) UpdateProjectSchema as partial; (3) UpdateProjectProfilesSchema validates profileIds array. Create src/mcpd/tests/unit/services/generate-mcp-config.test.ts with tests for generateMcpConfig function: (1) Returns valid .mcp.json structure with mcpServers object, (2) Each server entry has command, args, and env keys, (3) SECURITY: env values for secret fields are EXCLUDED or masked (critical requirement from context), (4) Server names are correctly derived from profile.server.name, (5) Empty project returns empty mcpServers object, (6) Multiple profiles from same server are handled correctly (no duplicates or merged appropriately). Security test: Verify generateMcpConfig strips SLACK_BOT_TOKEN, JIRA_API_TOKEN, GITHUB_TOKEN and any field marked secret:true in envTemplate.
|
||||
|
||||
### 5.2. Implement project repository and generateMcpConfig service with security filtering
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 5.1
|
||||
|
||||
Create the project repository following the repository pattern from Task 4, plus the generateMcpConfig function that transforms project data to .mcp.json format while stripping sensitive credentials.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/repositories/project.repository.ts implementing IProjectRepository interface with methods: create(data: CreateProjectInput), findById(id: string, include?: { profiles?: { include?: { profile?: { include?: { server?: boolean } } } } }), findByName(name: string, include?: same), findAll(include?: same), update(id: string, data: UpdateProjectInput), delete(id: string), updateProfiles(projectId: string, profileIds: string[]) - handles delete-all-then-create pattern from task details. Create src/mcpd/src/services/mcp-config-generator.ts with generateMcpConfig(project: ProjectWithProfiles): McpJsonConfig function. Implementation: (1) Iterate project.profiles, (2) For each profile, get server.command, server.args, (3) Build env object from profile.config BUT filter out any key where server.envTemplate[key].secret === true, (4) Return { mcpServers: { [server.name]: { command, args, env } } }. SECURITY CRITICAL: The env object must NEVER include secret values - these are populated locally by the CLI (Task 9). Add JSDoc comment explaining this security design. Create TypeScript type McpJsonConfig matching .mcp.json schema structure.
|
||||
|
||||
### 5.3. Implement project service layer with authorization and profile validation
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 5.2
|
||||
|
||||
Create ProjectService with business logic including authorization checks, profile existence validation, and orchestration of repository and mcp-config-generator.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/services/project.service.ts with constructor accepting IProjectRepository and IMcpProfileRepository (DI from Task 4). Methods: createProject(userId: string, data: CreateProjectInput) - validate with Zod schema, check 'project:create' permission, verify all profileIds exist via profile repository, call project repository; getProject(userId: string, nameOrId: string) - check read permission, return project with nested profiles; listProjects(userId: string) - filter based on permissions; updateProject(userId: string, id: string, data: UpdateProjectInput) - check 'project:update' permission; deleteProject(userId: string, id: string) - check 'project:delete' permission; updateProjectProfiles(userId: string, projectId: string, profileIds: string[]) - validate all profiles exist AND user has permission to use each profile (prevents adding profiles user cannot access); getMcpConfig(userId: string, projectName: string) - get project, verify read permission, call generateMcpConfig. Write TDD tests mocking repositories. Note: This service will be consumed by Task 9 (mcpctl claude add-mcp-project).
|
||||
|
||||
### 5.4. Implement REST API routes for project CRUD and mcp-config endpoint
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 5.3
|
||||
|
||||
Create Fastify route handlers for all project management endpoints including the critical /api/projects/:name/mcp-config endpoint used by the CLI.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/routes/projects.ts with routes: POST /api/projects (create project with optional profileIds array), GET /api/projects (list all projects user can access), GET /api/projects/:name (get project by name with full profile/server hierarchy), PUT /api/projects/:id (update project name/description), DELETE /api/projects/:id (delete project), PUT /api/projects/:id/profiles (replace all profiles - uses delete-then-create pattern per task details), GET /api/projects/:name/mcp-config (generate .mcp.json format output - CRITICAL endpoint for Task 9 CLI integration). Each route: (1) Uses Zod schema validation middleware, (2) Calls ProjectService method, (3) Returns consistent response format from Task 4 pattern. Register routes in server.ts with /api prefix. The mcp-config endpoint response format must be stable as Task 9 depends on it: { mcpServers: { [name: string]: { command: string, args: string[], env: Record<string, string> } } }. Add OpenAPI/Swagger JSDoc annotations for mcp-config endpoint documenting the exact response format.
|
||||
|
||||
### 5.5. Create integration tests and security review for project APIs
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 5.4
|
||||
|
||||
Write comprehensive integration tests simulating the full workflow from project creation through mcp-config generation, plus security review documenting credential handling.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/tests/integration/projects.test.ts with end-to-end scenarios: (1) Full workflow test: create MCP server (from Task 4 seed), create profile with credentials, create project referencing profile, call mcp-config endpoint, verify output is valid and EXCLUDES secrets; (2) Multi-profile project: create project with Slack + Jira profiles, verify mcp-config merges correctly; (3) Profile update atomicity: update project profiles, verify old profiles removed and new ones added in single transaction; (4) Authorization flow: verify user A cannot add user B's profiles to their project; (5) Concurrent access: simultaneous project updates don't corrupt data. Create src/mcpd/docs/SECURITY_REVIEW.md section for Task 5 documenting: (1) generateMcpConfig deliberately excludes secret env vars, (2) CLI (Task 9) is responsible for injecting secrets locally from user's credential store, (3) Profile permission checks prevent unauthorized profile usage, (4) Response format designed to be safe for transmission over network. Run 'pnpm test:coverage' targeting >85% coverage for project-related files.
|
||||
181
.taskmaster/tasks/task_006.md
Normal file
181
.taskmaster/tasks/task_006.md
Normal file
@@ -0,0 +1,181 @@
|
||||
# Task ID: 6
|
||||
|
||||
**Title:** Implement Docker Container Management for MCP Servers
|
||||
|
||||
**Status:** pending
|
||||
|
||||
**Dependencies:** 3, 4
|
||||
|
||||
**Priority:** high
|
||||
|
||||
**Description:** Create the container orchestration layer for running MCP servers as Docker containers, with support for docker-compose deployment.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create Docker management module:
|
||||
|
||||
```typescript
|
||||
// services/container-manager.ts
|
||||
import Docker from 'dockerode';
|
||||
|
||||
export class ContainerManager {
|
||||
private docker: Docker;
|
||||
|
||||
constructor() {
|
||||
this.docker = new Docker({ socketPath: '/var/run/docker.sock' });
|
||||
}
|
||||
|
||||
async startMcpServer(server: McpServer, config: McpProfile['config']): Promise<string> {
|
||||
const container = await this.docker.createContainer({
|
||||
Image: server.image || 'node:20-alpine',
|
||||
Cmd: this.buildCommand(server, config),
|
||||
Env: this.buildEnvVars(server, config),
|
||||
Labels: {
|
||||
'mcpctl.server': server.name,
|
||||
'mcpctl.managed': 'true'
|
||||
},
|
||||
HostConfig: {
|
||||
NetworkMode: 'mcpctl-network',
|
||||
RestartPolicy: { Name: 'unless-stopped' }
|
||||
}
|
||||
});
|
||||
await container.start();
|
||||
return container.id;
|
||||
}
|
||||
|
||||
async stopMcpServer(containerId: string): Promise<void> {
|
||||
const container = this.docker.getContainer(containerId);
|
||||
await container.stop();
|
||||
await container.remove();
|
||||
}
|
||||
|
||||
async getMcpServerStatus(containerId: string): Promise<'running' | 'stopped' | 'error'> {
|
||||
try {
|
||||
const container = this.docker.getContainer(containerId);
|
||||
const info = await container.inspect();
|
||||
return info.State.Running ? 'running' : 'stopped';
|
||||
} catch {
|
||||
return 'error';
|
||||
}
|
||||
}
|
||||
|
||||
async listManagedContainers(): Promise<Docker.ContainerInfo[]> {
|
||||
return this.docker.listContainers({
|
||||
filters: { label: ['mcpctl.managed=true'] }
|
||||
});
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Create docker-compose.yml template:
|
||||
```yaml
|
||||
version: '3.8'
|
||||
services:
|
||||
mcpd:
|
||||
build: ./src/mcpd
|
||||
ports:
|
||||
- "3000:3000"
|
||||
environment:
|
||||
- DATABASE_URL=postgresql://...
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock
|
||||
networks:
|
||||
- mcpctl-network
|
||||
|
||||
postgres:
|
||||
image: postgres:15
|
||||
volumes:
|
||||
- pgdata:/var/lib/postgresql/data
|
||||
networks:
|
||||
- mcpctl-network
|
||||
|
||||
networks:
|
||||
mcpctl-network:
|
||||
driver: bridge
|
||||
|
||||
volumes:
|
||||
pgdata:
|
||||
```
|
||||
|
||||
**Test Strategy:**
|
||||
|
||||
Test container creation, start, stop, and removal. Test status checking. Integration test with actual Docker daemon. Verify network isolation works correctly.
|
||||
|
||||
## Subtasks
|
||||
|
||||
### 6.1. Define McpOrchestrator interface and write TDD tests for ContainerManager
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** None
|
||||
|
||||
Define the McpOrchestrator abstraction interface that both DockerOrchestrator (this task) and KubernetesOrchestrator (task 17) will implement. Write comprehensive Vitest unit tests for all ContainerManager methods BEFORE implementation using dockerode mocks.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/services/orchestrator.ts with the McpOrchestrator interface including: startServer(), stopServer(), getStatus(), getLogs(), listInstances(). Then create src/mcpd/src/services/docker/__tests__/container-manager.test.ts with TDD tests covering: (1) constructor connects to Docker socket, (2) startMcpServer() creates container with correct labels, env vars, and network config, (3) stopMcpServer() stops and removes container, (4) getMcpServerStatus() returns 'running', 'stopped', or 'error' states, (5) listManagedContainers() filters by mcpctl.managed label, (6) buildCommand() generates correct command array from server config, (7) buildEnvVars() maps profile config to environment variables. Use vi.mock('dockerode') to mock all Docker operations. Tests should initially fail (TDD red phase).
|
||||
|
||||
### 6.2. Implement ContainerManager class with DockerOrchestrator strategy pattern
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 6.1
|
||||
|
||||
Implement the ContainerManager class as a DockerOrchestrator implementation using dockerode, with all methods passing the TDD tests from subtask 1.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/services/docker/container-manager.ts implementing McpOrchestrator interface. Constructor accepts optional Docker socket path (default: /var/run/docker.sock). Implement startMcpServer(): create container with Image (server.image || 'node:20-alpine'), Cmd from buildCommand(), Env from buildEnvVars(), Labels (mcpctl.server, mcpctl.managed, mcpctl.profile), HostConfig with NetworkMode 'mcpctl-network' and RestartPolicy 'unless-stopped'. Implement stopMcpServer(): stop() then remove() the container. Implement getMcpServerStatus(): inspect() container and return state. Implement listManagedContainers(): listContainers() with label filter. Implement buildCommand(): parse server.command template with config substitutions. Implement buildEnvVars(): merge server.envTemplate with profile.config values. Add resource limits to HostConfig (Memory: 512MB default, NanoCPUs: 1e9 default) - these are overridable via server config. All TDD tests from subtask 1 should now pass.
|
||||
|
||||
### 6.3. Create docker-compose.yml template with mcpd, PostgreSQL, and test MCP server
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** None
|
||||
|
||||
Create the production-ready docker-compose.yml template for local development with mcpd service, PostgreSQL database, a test MCP server container, and proper networking configuration.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create deploy/docker-compose.yml with services: (1) mcpd - build from src/mcpd, expose port 3000, DATABASE_URL env var, mount /var/run/docker.sock (read-only), depends_on postgres with healthcheck, deploy resources limits (memory: 512M), restart: unless-stopped. (2) postgres - postgres:15-alpine image, POSTGRES_USER/PASSWORD/DB env vars, healthcheck with pg_isready, volume for pgdata, deploy resources limits (memory: 256M). (3) test-mcp-server - simple echo server image (node:20-alpine with npx @modelcontextprotocol/server-memory), labels for mcpctl.managed and mcpctl.server, same network. Create mcpctl-network as bridge driver. Create named volumes: pgdata. Add .env.example with required environment variables. Ensure all containers have resource limits and no --privileged flag. Add docker-compose.test.yml override for CI testing with ephemeral volumes.
|
||||
|
||||
### 6.4. Write integration tests with real Docker daemon
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 6.2, 6.3
|
||||
|
||||
Create integration test suite that tests ContainerManager against a real Docker daemon, verifying actual container lifecycle operations work correctly.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/services/docker/__tests__/container-manager.integration.test.ts. Use vitest with longer timeout (30s). Before all: ensure mcpctl-network exists (create if not). After each: cleanup any test containers. Test cases: (1) startMcpServer() creates a real container with test MCP server image, verify container is running with docker inspect, (2) getMcpServerStatus() returns 'running' for active container, (3) stopMcpServer() removes container and getMcpServerStatus() returns 'error', (4) listManagedContainers() returns only containers with mcpctl.managed label, (5) test container networking - two MCP server containers can communicate on mcpctl-network. Use node:20-alpine with simple sleep command as test image. Add CI skip condition (describe.skipIf(!process.env.DOCKER_HOST)) for environments without Docker. Tag tests with '@integration' for selective running.
|
||||
|
||||
### 6.5. Implement container network isolation and resource management
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 6.2
|
||||
|
||||
Add network segmentation utilities and resource management capabilities to ensure proper isolation between MCP server containers and prevent resource exhaustion.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/mcpd/src/services/docker/network-manager.ts with: ensureNetworkExists() - creates mcpctl-network if not present with bridge driver, getNetworkInfo() - returns network details, connectContainer() - adds container to network, disconnectContainer() - removes from network. Add to ContainerManager: getContainerStats() - returns CPU/memory usage via container.stats(), setResourceLimits() - updates container resources. Implement container isolation: each MCP server profile can specify allowed networks, default deny all external network access, only allow container-to-container on mcpctl-network. Add ResourceConfig type with memory (bytes), cpuShares, cpuPeriod, pidsLimit. Write unit tests for network-manager with mocked dockerode. Integration test: start two containers, verify they can reach each other on mcpctl-network but not external network.
### 6.6. Conduct security review of Docker socket access and container configuration

**Status:** pending
**Dependencies:** 6.2, 6.3, 6.5

Perform a comprehensive security review of all Docker-related code, documenting the risks of Docker socket access and implementing security controls for container isolation.

**Details:**

Create src/mcpd/docs/DOCKER_SECURITY_REVIEW.md documenting: (1) Docker socket access risks - socket access grants root-equivalent privileges; mitigations implemented (read-only mount where possible, no container creation with --privileged, no host network mode, no host PID namespace). (2) Container escape prevention - no --privileged containers, no SYS_ADMIN capability, seccomp profile enabled (default), AppArmor profile enabled (default), drop all capabilities except required ones. (3) Image source validation - add a validateImageSource() function that checks each image against an allowlist, rejects images from untrusted registries, and warns on :latest tags. (4) Resource limits - all containers MUST have memory and CPU limits, plus a pids-limit to prevent fork bombs. (5) Network segmentation - MCP servers isolated to mcpctl-network, no external network access by default. (6) Secrets handling - environment variables with credentials are passed at runtime, not build time; no secrets in image layers. Add security tests that verify: no --privileged, capabilities are dropped, and resource limits are set.
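The image-source check in item (3) can be a pure function, which makes it trivial to unit test. The allowlist format below (trusted "registry/namespace" prefixes) and the return shape are assumptions for illustration:

```typescript
// validate-image-source.ts -- sketch of the allowlist check described in (3).
// Allowlist entries are assumed to be trusted "registry/namespace" prefixes.
export interface ImageCheck {
  allowed: boolean;
  warnings: string[];
  reason?: string;
}

export function validateImageSource(image: string, allowlist: string[]): ImageCheck {
  const warnings: string[] = [];
  // Warn on mutable tags: explicit ":latest" or no tag at all.
  // (Naive tag parse: does not handle registries with ports.)
  const tag = image.includes(':') ? image.split(':').pop()! : 'latest';
  if (tag === 'latest') {
    warnings.push(`image "${image}" uses the mutable :latest tag`);
  }
  // Reject images whose registry/namespace prefix is not allowlisted.
  const trusted = allowlist.some(
    (prefix) => image.startsWith(prefix + '/') || image.startsWith(prefix + ':'),
  );
  if (!trusted) {
    return { allowed: false, warnings, reason: `registry not in allowlist: ${image}` };
  }
  return { allowed: true, warnings };
}
```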
### 6.7. Implement container logs streaming and health monitoring

**Status:** pending
**Dependencies:** 6.2

Add log streaming capabilities and health monitoring to ContainerManager to support instance lifecycle management (Task 16) and provide observability into running MCP servers.

**Details:**

Extend ContainerManager with: getLogs(containerId, options: LogOptions): AsyncIterator<string> - streams logs from the container using dockerode container.logs() with the follow option; LogOptions includes timestamps, tail line count, and a since timestamp. getHealthStatus(containerId) - returns the health check result if the container has a HEALTHCHECK, otherwise infers health from the running state. attachToContainer(containerId) - returns a bidirectional stream for stdio. Add event subscriptions: onContainerStart, onContainerStop, and onContainerDie callbacks using the Docker events API. Create src/mcpd/src/services/docker/container-events.ts with a ContainerEventEmitter class that listens to Docker daemon events and emits typed events. Write unit tests mocking dockerode stream responses. Integration test: start a container, tail its logs, and verify the log output matches the container's stdout. Test that event subscriptions receive container lifecycle events.
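The AsyncIterator shape of getLogs() can be built on a small chunk-to-line transformer like the sketch below; because it is pure, it is testable without Docker. Wiring it to the chunks emitted by dockerode's container.logs() stream is left to the implementation:

```typescript
// log-lines.ts -- splits an async stream of raw log chunks into complete
// lines, the per-line shape getLogs() is described as yielding above.
export async function* toLogLines(chunks: AsyncIterable<string>): AsyncGenerator<string> {
  let buffer = '';
  for await (const chunk of chunks) {
    buffer += chunk;
    let idx: number;
    // Emit every complete line currently in the buffer.
    while ((idx = buffer.indexOf('\n')) >= 0) {
      yield buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
    }
  }
  if (buffer.length > 0) yield buffer; // trailing partial line
}
```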
311
.taskmaster/tasks/task_007.md
Normal file
@@ -0,0 +1,311 @@
# Task ID: 7

**Title:** Build mcpctl CLI Core Framework

**Status:** pending

**Dependencies:** 1

**Priority:** high

**Description:** Create the CLI tool foundation using Commander.js with kubectl-inspired command structure, configuration management, and server communication.

**Details:**

Create CLI in `src/cli/src/`:

```typescript
// index.ts
import { Command } from 'commander';
import { loadConfig, saveConfig } from './config';

const program = new Command();

program
  .name('mcpctl')
  .description('kubectl-like CLI for managing MCP servers')
  .version('0.1.0');

// Config management
program
  .command('config')
  .description('Manage mcpctl configuration')
  .addCommand(
    new Command('set-server')
      .argument('<url>', 'mcpd server URL')
      .action((url) => {
        const config = loadConfig();
        config.serverUrl = url;
        saveConfig(config);
        console.log(`Server set to ${url}`);
      })
  )
  .addCommand(
    new Command('view')
      .action(() => console.log(loadConfig()))
  );

// API client
class McpctlClient {
  constructor(private serverUrl: string, private token?: string) {}

  async get(path: string) {
    const res = await fetch(`${this.serverUrl}${path}`, {
      headers: this.token ? { Authorization: `Bearer ${this.token}` } : {}
    });
    if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
    return res.json();
  }

  async post(path: string, data: unknown) {
    const res = await fetch(`${this.serverUrl}${path}`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        ...(this.token ? { Authorization: `Bearer ${this.token}` } : {})
      },
      body: JSON.stringify(data)
    });
    if (!res.ok) throw new Error(`POST ${path} failed: ${res.status}`);
    return res.json();
  }
}
```

Config file at `~/.mcpctl/config.json`:
```json
{
  "serverUrl": "http://localhost:3000",
  "token": "..."
}
```

**Test Strategy:**

Test CLI argument parsing. Test configuration persistence. Mock API calls and verify request formatting. Test error handling for network failures.
## Subtasks

### 7.1. Set up CLI package structure with TDD infrastructure and command registry pattern

**Status:** pending
**Dependencies:** None

Create the src/cli directory structure with a Commander.js foundation, Vitest test configuration, and an extensible command registry pattern designed to scale to all planned commands (get, describe, apply, setup, project, claude, audit, start, stop, logs).

**Details:**

Create src/cli/src/ with the following structure:

- commands/ - Command modules (empty initially, registry pattern)
- config/ - Configuration loading and validation
- client/ - API client for mcpd communication
- formatters/ - Output formatters (table, json, yaml)
- utils/ - Utility functions
- types/ - TypeScript interfaces
- index.ts - Main entry point with Commander setup

Create src/cli/tests/ with matching structure:
- unit/ (commands, config, client, formatters)
- integration/ (CLI end-to-end tests)
- fixtures/ (mock data, mock server)

Implement the command registry pattern in src/commands/registry.ts:
```typescript
export interface CommandModule {
  name: string;
  register(program: Command): void;
}
export class CommandRegistry {
  register(module: CommandModule): void;
  registerAll(program: Command): void;
}
```

Set up vitest.config.ts extending the root config. Install dependencies: commander, chalk, js-yaml, inquirer, and zod for validation. DevDependencies: vitest, @vitest/coverage-v8, and msw for API mocking.

Write initial TDD tests before implementation:
- tests/unit/commands/registry.test.ts - Test the registry adds commands correctly
- tests/unit/index.test.ts - Test the CLI entry point parses version and help
### 7.2. Implement secure configuration management with encrypted credential storage

**Status:** pending
**Dependencies:** 7.1

Create a configuration loader/saver with ~/.mcpctl/config.json for settings and encrypted token storage in ~/.mcpctl/credentials. Include proxy settings, custom CA certificate support, and Zod validation for enterprise environments.

**Details:**

Create src/cli/src/config/index.ts with:

- loadConfig(): McpctlConfig - Load from ~/.mcpctl/config.json with Zod validation
- saveConfig(config: McpctlConfig): void - Save config atomically (write to a temp file, then rename)
- getConfigPath(): string - Platform-aware config directory
- initConfig(): void - Create the config directory and an initial config if they do not exist

Create src/cli/src/config/credentials.ts with SECURE credential storage:
- loadCredentials(): Credentials - Load encrypted credentials from ~/.mcpctl/credentials
- saveCredentials(creds: Credentials): void - Encrypt and save credentials
- Use the platform keychain when available (keytar package); fall back to an encrypted file
- NEVER store tokens in plain text or in config.json
- NEVER log tokens or include them in error messages

Create the McpctlConfig schema with Zod:
```typescript
const ConfigSchema = z.object({
  serverUrl: z.string().url().default('http://localhost:3000'),
  proxy: z.object({
    http: z.string().url().optional(),
    https: z.string().url().optional(),
    noProxy: z.array(z.string()).optional()
  }).optional(),
  tls: z.object({
    caFile: z.string().optional(), // Custom CA certificate path
    insecureSkipVerify: z.boolean().default(false) // For dev only
  }).optional(),
  output: z.object({
    format: z.enum(['table', 'json', 'yaml']).default('table'),
    color: z.boolean().default(true)
  }).optional()
});
```

Secure token handling:
- loadToken(): string | undefined - Get the token from the credentials store
- saveToken(token: string): void - Encrypt and save
- clearToken(): void - Securely delete the token

Write TDD tests BEFORE implementation in tests/unit/config/.
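The atomic saveConfig() described above (write to temp, then rename) can be sketched with only the Node standard library; McpctlConfig is reduced to serverUrl here for brevity, and the optional configPath parameter exists purely so tests can point at a temp directory:

```typescript
// config/index.ts -- sketch of the atomic save: write the new config to a
// temp file in the same directory, then rename over the original, so readers
// never observe a half-written config.json.
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

export function getConfigPath(): string {
  return path.join(os.homedir(), '.mcpctl', 'config.json');
}

export function saveConfig(config: { serverUrl: string }, configPath = getConfigPath()): void {
  const dir = path.dirname(configPath);
  fs.mkdirSync(dir, { recursive: true });
  const tmp = path.join(dir, `.config.json.tmp-${process.pid}`);
  fs.writeFileSync(tmp, JSON.stringify(config, null, 2), { mode: 0o600 });
  fs.renameSync(tmp, configPath); // atomic on POSIX when on the same filesystem
}

export function loadConfig(configPath = getConfigPath()): { serverUrl: string } {
  return JSON.parse(fs.readFileSync(configPath, 'utf8'));
}
```

The rename step is what makes the write atomic on POSIX filesystems; the temp file must live in the same directory as the target so the rename cannot cross filesystems.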
### 7.3. Implement McpctlClient API client with enterprise networking support

**Status:** pending
**Dependencies:** 7.2

Create the HTTP API client for communicating with the mcpd server, with proper error handling, retry logic, proxy support, custom CA certificates, and request/response interceptors for authentication.

**Details:**

Create src/cli/src/client/index.ts with the McpctlClient class:

```typescript
export class McpctlClient {
  constructor(config: ClientConfig) // DI for testability

  // HTTP methods with proper typing
  async get<T>(path: string): Promise<T>
  async post<T>(path: string, data: unknown): Promise<T>
  async put<T>(path: string, data: unknown): Promise<T>
  async delete<T>(path: string): Promise<T>

  // Health check for connection testing
  async healthCheck(): Promise<boolean>
}
```

Implement networking features:
- Proxy support: use HTTP_PROXY/HTTPS_PROXY env vars plus config.proxy settings
- Custom CA: support config.tls.caFile for enterprise CAs
- Retry logic: exponential backoff for transient failures (503, network errors)
- Timeout: configurable request timeout (default 30s)
- Request interceptor: add the Authorization header from the credentials store
- Response interceptor: handle 401 (clear the cached token, prompt re-auth)

Create src/cli/src/client/errors.ts:
- McpctlClientError base class
- NetworkError for connection failures
- AuthenticationError for 401
- NotFoundError for 404
- ServerError for 5xx

IMPORTANT: Never log request bodies that might contain secrets. Redact the Authorization header in debug logs.

Create a mock server in tests/fixtures/mock-server.ts using msw (Mock Service Worker) for offline testing. Write TDD tests before implementation.
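The retry behaviour above (exponential backoff on transient failures) is independent of the HTTP layer, so it can be a standalone helper. The delay schedule and the default-everything-is-transient predicate below are illustrative choices, not the final policy:

```typescript
// retry.ts -- exponential backoff for transient failures, as described above.
export interface RetryOptions {
  retries: number;       // additional attempts after the first
  baseDelayMs: number;   // delay doubles on each failed attempt
  isTransient?: (err: unknown) => boolean;
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

export async function withRetry<T>(fn: () => Promise<T>, opts: RetryOptions): Promise<T> {
  const transient = opts.isTransient ?? (() => true);
  let lastErr: unknown;
  for (let attempt = 0; attempt <= opts.retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Give up on non-transient errors or once the budget is spent.
      if (attempt === opts.retries || !transient(err)) throw err;
      await sleep(opts.baseDelayMs * 2 ** attempt); // e.g. 100ms, 200ms, 400ms, ...
    }
  }
  throw lastErr; // unreachable; satisfies the compiler
}
```

McpctlClient.get() would then wrap its fetch call in `withRetry(...)`, passing an isTransient predicate that matches 503s and network errors.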
### 7.4. Implement config command group with output formatters for SRE integration

**Status:** pending
**Dependencies:** 7.2, 7.3

Create the config command group (set-server, view, set-token, clear-token) and a multi-format output system (table, json, yaml) with an --output flag designed for SRE tooling integration (jq, grep, monitoring pipelines).

**Details:**

Create src/cli/src/commands/config.ts implementing CommandModule:

```typescript
// config set-server <url>
// config view
// config set-token (interactive, secure input)
// config clear-token
// config set <key> <value> (generic setter for proxy, tls, etc.)
// config get <key>
```

Create src/cli/src/formatters/index.ts:
```typescript
export function formatOutput(data: unknown, format: OutputFormat): string
export function printTable(data: Record<string, unknown>[], columns: ColumnDef[]): void
export function printJson(data: unknown): void // Pretty printed, sorted keys
export function printYaml(data: unknown): void // Clean YAML output
```

SRE-friendly output requirements:
- JSON output must be valid and parseable by jq
- YAML output must be valid and parseable by yq
- Table output should be grep-friendly (consistent column widths)
- All formats support --no-color for CI/scripting
- Add a --quiet flag to suppress non-essential output
- Exit codes: 0 success, 1 error, 2 invalid arguments

Add a global --output/-o flag to the main program:
```typescript
program.option('-o, --output <format>', 'Output format (table, json, yaml)', 'table');
```

Register the config command via CommandRegistry. Write TDD tests before implementation.
### 7.5. Create mock mcpd server and comprehensive security/architecture review

**Status:** pending
**Dependencies:** 7.1, 7.2, 7.3, 7.4

Build a mock mcpd server for offline CLI testing, write integration tests verifying the CLI works against a local docker-compose mcpd, and perform a comprehensive security review of credential handling, CLI history protection, and token security.

**Details:**

Create src/cli/tests/fixtures/mock-mcpd-server.ts:
- Full mock of mcpd API endpoints using msw or express
- Realistic response data for servers, profiles, and projects
- Configurable error scenarios (timeout, 500, 401)
- Startup/shutdown helpers for the test lifecycle

Create src/cli/tests/integration/cli.test.ts:
- Full CLI integration tests using execSync against the built CLI
- Test against the mock server in CI, and against real docker-compose in local dev
- Test the full workflow: config -> connect -> list resources

SECURITY REVIEW - create src/cli/SECURITY_REVIEW.md:

1. Credential Storage Security:
- Verify credentials are encrypted at rest (not plain JSON)
- Verify keychain integration on macOS/Windows
- Verify file permissions are 600 on the credential file

2. CLI History Protection:
- Document that tokens should NEVER be passed as CLI arguments
- set-token uses stdin or a prompt, not --token=xxx
- Verify no sensitive data lands in bash history

3. Token Handling:
- Verify tokens are never logged (search the codebase for console.log patterns)
- Verify error messages don't leak tokens
- Verify tokens are redacted in debug output

4. Network Security:
- Document TLS verification (not disabled by default)
- Document proxy credential handling
- Verify no credentials are sent over non-HTTPS in production

Run a security audit with 'pnpm audit --audit-level=high' and document the findings.

Run 'pnpm lint' and 'pnpm test:coverage', ensuring >80% coverage for the CLI package.
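The token-handling checks in item 3 imply a small redaction helper that all debug logging routes through; the exact set of sensitive header names below is an assumption for illustration:

```typescript
// redact.ts -- sketch of the debug-log redaction the review checks for:
// sensitive header values are replaced before anything reaches the logger.
const SENSITIVE_HEADERS = new Set(['authorization', 'proxy-authorization', 'x-api-key']);

export function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    out[name] = SENSITIVE_HEADERS.has(name.toLowerCase()) ? '[REDACTED]' : value;
  }
  return out;
}
```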
341
.taskmaster/tasks/task_008.md
Normal file
@@ -0,0 +1,341 @@
# Task ID: 8

**Title:** Implement mcpctl Server Management Commands

**Status:** pending

**Dependencies:** 7, 4

**Priority:** high

**Description:** Add kubectl-style commands for listing, describing, and managing MCP servers (get, describe, apply, delete).

**Details:**

Add server management commands:

```typescript
// commands/servers.ts
program
  .command('get')
  .description('Display resources')
  .argument('<resource>', 'Resource type (servers, projects, profiles, instances)')
  .option('-o, --output <format>', 'Output format (json, yaml, table)', 'table')
  .action(async (resource, options) => {
    const client = getClient();
    const data = await client.get(`/api/${resource}`);
    formatOutput(data, options.output);
  });

program
  .command('describe')
  .description('Show detailed information')
  .argument('<resource>', 'Resource type')
  .argument('<name>', 'Resource name or ID')
  .action(async (resource, name) => {
    const client = getClient();
    const data = await client.get(`/api/${resource}/${name}`);
    console.log(yaml.dump(data));
  });

program
  .command('apply')
  .description('Apply configuration from file')
  .option('-f, --file <path>', 'Path to config file')
  .action(async (options) => {
    const config = yaml.load(fs.readFileSync(options.file, 'utf8'));
    const client = getClient();
    // Determine resource type and apply
    const result = await client.post(`/api/${config.kind.toLowerCase()}s`, config.spec);
    console.log(`${config.kind} "${result.name}" created/updated`);
  });

// Resource definition format (kubectl-style)
// server.yaml:
// kind: McpServer
// spec:
//   name: slack
//   type: slack
//   command: npx
//   args: ["@anthropic/mcp-server-slack"]
//   envTemplate:
//     SLACK_TOKEN: "required"
```

Output formatters:
```typescript
function formatOutput(data: any[], format: string) {
  if (format === 'json') return console.log(JSON.stringify(data, null, 2));
  if (format === 'yaml') return console.log(yaml.dump(data));
  // Table format
  console.table(data.map(d => ({ NAME: d.name, TYPE: d.type, STATUS: d.status })));
}
```

**Test Strategy:**

Test each command with mock API responses. Test output formatting for all formats. Test apply command with various YAML configurations.
## Subtasks

### 8.1. Write TDD test suites for output formatters and resource type validation

**Status:** pending
**Dependencies:** None

Create comprehensive Vitest test suites for the output formatting system (JSON, YAML, table formats) and resource type validation BEFORE implementing the actual formatters. Tests must cover all output modes, the --no-headers option, exit codes, field selection, and filtering capabilities.

**Details:**

Create the src/cli/tests/unit/formatters directory with the following test files:

1. formatters/output-formatter.test.ts:
- Test JSON output produces valid, jq-parseable JSON with proper indentation
- Test YAML output produces valid yaml.dump() output
- Test table output produces an awk/grep-parseable format with consistent column widths
- Test the --no-headers option removes the header row from table output
- Test the --field flag filters output to only the specified fields (e.g., --field name,status)
- Test formatOutput() handles empty arrays gracefully
- Test formatOutput() handles a single object vs an array correctly

2. formatters/resource-types.test.ts:
- Test valid resource types (servers, projects, profiles, instances) are accepted
- Test invalid resource types throw an appropriate error with a helpful message
- Test resource type normalization (singular to plural: server -> servers)
- Test case-insensitive resource matching

3. Create src/cli/tests/fixtures/mock-resources.ts with sample data:
- mockServers: Array of McpServer objects with name, type, status fields
- mockProfiles: Array of McpProfile objects
- mockProjects: Array of Project objects
- mockInstances: Array of McpInstance objects with running/stopped status

4. Create exit-codes.test.ts:
- Test exit code 0 for successful operations
- Test exit code 1 for general errors
- Test exit code 2 for resource not found
- Test exit code 3 for connection/network errors

All tests should initially fail (TDD red phase). Use Vitest mocks for console.log/console.table to capture output.
### 8.2. Write TDD test suites for get, describe, apply, and delete commands

**Status:** pending
**Dependencies:** 8.1

Create comprehensive Vitest test suites for all four server management commands BEFORE implementation. Tests must mock API responses and verify correct CLI argument parsing, option handling, error states, and output generation.

**Details:**

Create the src/cli/tests/unit/commands directory with the following test files:

1. commands/get.test.ts:
- Test 'mcpctl get servers' calls GET /api/servers
- Test 'mcpctl get profiles' calls GET /api/profiles
- Test 'mcpctl get projects' calls GET /api/projects
- Test 'mcpctl get instances' calls GET /api/instances
- Test '-o json' outputs JSON format
- Test '-o yaml' outputs YAML format
- Test '-o table' (default) outputs table format
- Test '--no-headers' removes the table header
- Test '--field name,status' filters columns
- Test an invalid resource type shows an error and exits with code 2
- Test a network error exits with code 3

2. commands/describe.test.ts:
- Test 'mcpctl describe server slack' calls GET /api/servers/slack
- Test output is always YAML format for the detailed view
- Test a 404 response shows a 'Resource not found' message
- Test the output includes all resource fields

3. commands/apply.test.ts:
- Test '-f server.yaml' reads the file and sends a POST/PUT request
- Test it validates the 'kind' field in the YAML (McpServer, McpProfile, Project)
- Test it validates the required 'spec' field exists
- Test it creates a new resource when the name doesn't exist (POST)
- Test it updates an existing resource when the name exists (PUT)
- Test SECURITY: rejects file paths with directory traversal (../, etc.)
- Test SECURITY: validates the YAML doesn't contain shell injection patterns
- Test SECURITY: limits file size to prevent DoS
- Test it handles malformed YAML with a clear error message

4. commands/delete.test.ts:
- Test 'mcpctl delete server slack' calls DELETE /api/servers/slack
- Test it prompts for confirmation unless --force is passed
- Test --force skips confirmation
- Test a 404 shows an appropriate 'not found' message

Create the src/cli/tests/fixtures/yaml-configs/ directory with sample YAML files for testing the apply command.
### 8.3. Implement output formatters with reusable architecture and SRE-friendly features

**Status:** pending
**Dependencies:** 8.1

Implement the output formatting system with JSON, YAML, and table formats. Include a --no-headers option for scripting, parseable exit codes, and a --field flag for field selection. Design for reusability across all CLI commands.

**Details:**

Create the src/cli/src/formatters directory with the following modules:

1. formatters/output-formatter.ts:
```typescript
export type OutputFormat = 'json' | 'yaml' | 'table';

export interface FormatOptions {
  format: OutputFormat;
  noHeaders?: boolean;
  fields?: string[];
}

export function formatOutput<T extends Record<string, unknown>>(data: T | T[], options: FormatOptions): string;
export function printOutput<T extends Record<string, unknown>>(data: T | T[], options: FormatOptions): void;
```

2. formatters/json-formatter.ts:
- Produce properly indented JSON (2 spaces)
- Ensure jq pipeline compatibility
- Support field filtering before output

3. formatters/yaml-formatter.ts:
- Use js-yaml for YAML output
- Ensure kubectl-compatible YAML formatting
- Support field filtering

4. formatters/table-formatter.ts:
- Fixed-width columns for awk/grep parseability
- Tab-separated values for reliable parsing
- UPPERCASE header row (NAME, TYPE, STATUS)
- --no-headers support for scripting
- Auto-truncate long values with an ellipsis

5. formatters/field-selector.ts:
- Parse the --field flag (comma-separated field names)
- Support nested fields with dot notation (spec.name)
- Validate that fields exist in the data schema

6. Create exit-codes.ts:
```typescript
export const EXIT_CODES = {
  SUCCESS: 0,
  ERROR: 1,
  NOT_FOUND: 2,
  NETWORK_ERROR: 3,
  VALIDATION_ERROR: 4
} as const;
```

7. Create resource-types.ts:
- Define a valid resource types enum
- Singular to plural normalization
- Validation function with helpful error messages

Ensure all formatters are pure functions for easy unit testing. Export via the barrel file formatters/index.ts.
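A pure table formatter along the lines of item 4 might look like this sketch. It uses space-padded fixed-width columns (tab separation, also mentioned above, is a possible alternative), UPPERCASE headers derived from field names, and a noHeaders switch; the `<none>` placeholder for missing values is an assumption:

```typescript
// table-formatter.ts -- pure-function sketch of the table output in item 4.
export function formatTable(
  rows: Record<string, unknown>[],
  fields: string[],
  opts: { noHeaders?: boolean } = {},
): string {
  const headers = fields.map((f) => f.toUpperCase());
  const cells = rows.map((row) => fields.map((f) => String(row[f] ?? '<none>')));
  // Column width = widest cell (or header) in that column.
  const widths = headers.map((h, i) =>
    Math.max(h.length, ...cells.map((r) => r[i].length)),
  );
  const render = (r: string[]) =>
    r.map((c, i) => c.padEnd(widths[i])).join('  ').trimEnd();
  const lines = cells.map(render);
  if (!opts.noHeaders) lines.unshift(render(headers));
  return lines.join('\n');
}
```

Because the function is pure, the unit tests from subtask 8.1 can assert on the returned string directly instead of mocking console output.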
### 8.4. Implement get, describe, apply, and delete commands with security hardening

**Status:** pending
**Dependencies:** 8.2, 8.3

Implement all four server management commands using Commander.js. The apply command must include comprehensive security validation for file paths and YAML content to prevent injection attacks and path traversal vulnerabilities.

**Details:**

Create src/cli/src/commands/resources.ts with all four commands:

1. 'get' command implementation:
- Register as 'mcpctl get <resource> [options]'
- Options: -o/--output (json|yaml|table), --no-headers, --field <fields>
- Call getClient().get(`/api/${resource}`) from the cli-client
- Pass the result to formatOutput() with the options
- Handle errors with appropriate exit codes

2. 'describe' command implementation:
- Register as 'mcpctl describe <resource> <name>'
- Call getClient().get(`/api/${resource}/${name}`)
- Output always in YAML format with full details
- Handle 404 with the NOT_FOUND exit code

3. 'apply' command implementation with SECURITY HARDENING:
- Register as 'mcpctl apply -f <file>'
- SECURITY: Validate the file path
  - Reject paths containing '..' (directory traversal)
  - Reject absolute paths outside allowed directories
  - Validate the file extension is .yaml or .yml
  - Check file size < 1MB to prevent DoS
- SECURITY: Validate the YAML content
  - Parse with yaml.load() using a safe schema
  - Validate 'kind' is in the allowed list (McpServer, McpProfile, Project)
  - Validate the 'spec' object has the expected structure
  - Reject YAML with embedded functions or anchors if not needed
- Determine create vs update by checking if the resource exists
- POST for create, PUT for update
- Output a success message with the resource name

4. 'delete' command implementation:
- Register as 'mcpctl delete <resource> <name>'
- Prompt for confirmation using inquirer
- Support a --force flag to skip confirmation
- Call DELETE /api/${resource}/${name}
- Output a success/failure message

5. Register all commands in the command registry from Task 7.

Create src/cli/src/utils/path-validator.ts for reusable path validation.
Create src/cli/src/utils/yaml-validator.ts for YAML security checks.
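The path-validator checks listed for apply can be sketched as below. The 1MB limit and the .yaml/.yml extension rule come from the text; the allowed-base-directory parameter (e.g. the current working directory) is an assumption for illustration:

```typescript
// path-validator.ts -- sketch of the apply-command path checks listed above.
import * as path from 'node:path';

const MAX_FILE_BYTES = 1024 * 1024; // 1MB, per the DoS limit above

export function validateApplyPath(
  filePath: string,
  allowedBase: string,   // e.g. process.cwd(); an assumption for illustration
  fileSizeBytes: number,
): { ok: boolean; reason?: string } {
  if (!/\.ya?ml$/i.test(filePath)) {
    return { ok: false, reason: 'file must end in .yaml or .yml' };
  }
  if (fileSizeBytes > MAX_FILE_BYTES) {
    return { ok: false, reason: 'file exceeds 1MB limit' };
  }
  // Resolve and require the result to stay inside the allowed base directory;
  // this rejects '../' traversal and absolute paths that escape it.
  const resolved = path.resolve(allowedBase, filePath);
  if (!resolved.startsWith(path.resolve(allowedBase) + path.sep)) {
    return { ok: false, reason: 'path escapes the allowed directory' };
  }
  return { ok: true };
}
```

Resolving first and then prefix-checking catches traversal that a naive '..' substring check would miss (e.g. symlink-free normalized forms like `a/../../etc`).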
### 8.5. Create integration tests with mock API server and comprehensive security review

**Status:** pending
**Dependencies:** 8.4

Build a complete integration test suite running all commands against a mock mcpd API server. Perform and document a comprehensive security review of the apply command and path handling. Ensure all SRE and data engineer requirements are met.

**Details:**

Create the src/cli/tests/integration directory with:

1. integration/mock-mcpd-server.ts:
- Express server mocking all required endpoints:
  - GET/POST /api/servers, GET/PUT/DELETE /api/servers/:name
  - GET/POST /api/profiles, GET/PUT/DELETE /api/profiles/:name
  - GET/POST /api/projects, GET/PUT/DELETE /api/projects/:name
  - GET /api/instances
- Configurable responses for testing error scenarios
- Realistic latency simulation

2. integration/commands.test.ts:
- Full command execution using execSync against the built CLI
- Test get/describe/apply/delete for all resource types
- Test all output formats work correctly
- Test error handling and exit codes
- Test the --no-headers and --field flags

3. integration/sre-compatibility.test.ts:
- Test output is grep-friendly: 'mcpctl get servers | grep running'
- Test output is awk-friendly: 'mcpctl get servers | awk "{print $1}"'
- Test JSON is jq-friendly: 'mcpctl get servers -o json | jq .[]'
- Test exit codes work with shell scripts: 'mcpctl get servers || echo failed'
- Test --no-headers works for scripting

4. integration/data-engineer.test.ts:
- Test the --field flag for selecting specific columns
- Test filtering capabilities for data pipeline inspection
- Test describe provides full resource details

5. Update src/cli/SECURITY_REVIEW.md:
- Document apply command security measures:
  - Path traversal prevention with test evidence
  - File size limits
  - YAML injection prevention
- Document how credentials are NOT logged
- Document safe handling of user-supplied input
- Include security test results and findings

6. Verify all requirements from the task context:
- TDD: All unit and integration tests pass
- LOCAL DEV: The mock server works offline
- SECURITY: Document YAML injection risks and mitigations
- ARCHITECTURE: Formatters are reusable across commands
- SRE: Output is parseable, exit codes are documented
- DATA ENGINEER: Field selection and filtering work
319
.taskmaster/tasks/task_009.md
Normal file
@@ -0,0 +1,319 @@
# Task ID: 9

**Title:** Implement mcpctl Project Commands

**Status:** pending

**Dependencies:** 7, 5

**Priority:** high

**Description:** Add commands for managing MCP projects and the critical 'claude add-mcp-project' command for integrating with Claude sessions.

**Details:**

Add project commands:

```typescript
// commands/projects.ts
const projectCmd = program
  .command('project')
  .description('Manage MCP projects');

projectCmd
  .command('create')
  .argument('<name>', 'Project name')
  .option('--profiles <profiles...>', 'Profile names to include')
  .action(async (name, options) => {
    const client = getClient();
    const profiles = await client.get('/api/profiles');
    const profileIds = profiles
      .filter(p => options.profiles?.includes(p.name))
      .map(p => p.id);
    const project = await client.post('/api/projects', { name, profileIds });
    console.log(`Project "${project.name}" created`);
  });

projectCmd
  .command('add-profile')
  .argument('<project>', 'Project name')
  .argument('<profile>', 'Profile name to add')
  .action(async (project, profile) => {
    // Add profile to project
  });

// Critical command: claude add-mcp-project
const claudeCmd = program
  .command('claude')
  .description('Claude integration commands');

claudeCmd
  .command('add-mcp-project')
  .argument('<project>', 'Project name')
  .option('--path <path>', 'Path to .mcp.json', '.mcp.json')
  .action(async (projectName, options) => {
    const client = getClient();
    const mcpConfig = await client.get(`/api/projects/${projectName}/mcp-config`);

    // Read existing .mcp.json or create new
    let existing: { mcpServers?: Record<string, unknown> } = {};
    if (fs.existsSync(options.path)) {
      existing = JSON.parse(fs.readFileSync(options.path, 'utf8'));
    }

    // Merge project MCPs into the existing config, preserving other entries
    const merged = {
      ...existing,
      mcpServers: {
        ...existing.mcpServers,
        ...mcpConfig.mcpServers
      }
    };

    fs.writeFileSync(options.path, JSON.stringify(merged, null, 2));
    console.log(`Added project "${projectName}" to ${options.path}`);
    console.log('MCPs added:', Object.keys(mcpConfig.mcpServers).join(', '));
  });

claudeCmd
  .command('remove-mcp-project')
  .argument('<project>', 'Project name')
  .action(async (projectName) => {
    // Remove project MCPs from .mcp.json
  });
```

**Test Strategy:**

Test project creation with and without profiles. Test that claude add-mcp-project creates a valid .mcp.json. Test that merging with an existing .mcp.json preserves other entries.

## Subtasks

### 9.1. Write TDD tests for project command Zod schemas and CLI argument parsing

**Status:** pending
**Dependencies:** None

Create comprehensive Vitest test suites for the project command validation schemas, CLI argument parsing for the project create/add-profile/remove-profile/status commands, and the claude command group structure BEFORE implementing any commands.

**Details:**

Create src/cli/tests/unit/commands/project.test.ts with TDD tests for:

1. Project command validation schemas:
   - CreateProjectSchema: name (alphanumeric plus dash/underscore, 3-64 chars), --profiles array (optional, profile names)
   - AddProfileSchema: project name (required), profile name (required)
   - Test that invalid project names are rejected (spaces, special chars, empty)
   - Test that profile names are validated against the expected format

2. CLI argument parsing tests:
   - Test that 'mcpctl project create weekly_reports' parses correctly
   - Test that 'mcpctl project create weekly_reports --profiles slack-ro jira-ro' captures the profile array
   - Test that 'mcpctl project add-profile weekly_reports slack-full' captures both arguments
   - Test that 'mcpctl project remove-profile' validates required arguments
   - Test that 'mcpctl project status <name>' parses the project name
   - Test that '--help' on project subcommands shows usage

3. Claude command group structure tests:
   - Test that 'mcpctl claude' shows available subcommands
   - Test that 'mcpctl claude add-mcp-project' is recognized
   - Test that 'mcpctl claude remove-mcp-project' is recognized
   - Verify the command group architecture is extensible for future Claude integration features

Create src/cli/tests/fixtures/mock-profiles.ts with sample profile data (slack-ro, slack-full, jira-ro, jira-full, github-ro). All tests should initially fail (TDD red phase).
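The name rule above can be pinned down with a small validator sketch. This is a hypothetical stand-in for what CreateProjectSchema would enforce with Zod; the exact pattern (alphanumeric plus dash/underscore, 3-64 chars, matching examples such as `weekly_reports`) is an assumption, not the project's final schema.

```typescript
// Hypothetical sketch of the rule CreateProjectSchema would enforce with Zod.
// The pattern is an assumption inferred from examples like "weekly_reports".
const PROJECT_NAME_RE = /^[A-Za-z0-9][A-Za-z0-9_-]{2,63}$/;

export function isValidProjectName(name: string): boolean {
  return PROJECT_NAME_RE.test(name);
}

console.log(isValidProjectName('weekly_reports')); // → true
console.log(isValidProjectName('a'));              // → false (too short)
console.log(isValidProjectName('bad name'));       // → false (contains a space)
```

The TDD tests in 9.1 would assert exactly these accept/reject cases before any command code exists.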
### 9.2. Write TDD tests for claude add-mcp-project with .mcp.json security validation

**Status:** pending
**Dependencies:** 9.1

Create comprehensive Vitest test suites for the critical claude add-mcp-project command, focusing on .mcp.json manipulation, merge behavior with existing configs, path validation, and SECURITY: ensuring secrets are NEVER written to .mcp.json.

**Details:**

Create src/cli/tests/unit/commands/claude.test.ts with TDD tests:

1. Basic functionality tests:
   - Test that 'mcpctl claude add-mcp-project weekly_reports' calls GET /api/projects/weekly_reports/mcp-config
   - Test that .mcp.json is created when the file doesn't exist
   - Test that valid JSON with an mcpServers object is written
   - Test that the output includes the list of added MCP server names
   - Test that '--path custom.mcp.json' writes to the specified path

2. Merge behavior tests:
   - Test that merging with an existing .mcp.json preserves other entries
   - Test that existing mcpServers entries belonging to other projects are preserved (no data loss)
   - Test that an empty existing .mcp.json is handled gracefully
   - Test that a malformed existing .mcp.json fails with a clear error

3. SECURITY tests (critical):
   - Test that .mcp.json output NEVER contains secret env values (SLACK_BOT_TOKEN, JIRA_API_TOKEN, GITHUB_TOKEN)
   - Test that the env object only contains non-secret placeholder or reference values
   - Test that path traversal is rejected: --path '../../../etc/passwd' fails
   - Test that --path validates the parent directory exists
   - Test that command injection patterns in the project name are rejected

4. Error handling tests:
   - Test that a 404 from the API shows a 'Project not found' message
   - Test that a network error shows a connection error
   - Test that a write permission error is handled gracefully

5. Remove command tests:
   - Test that 'mcpctl claude remove-mcp-project weekly_reports' removes the project's servers from .mcp.json
   - Test that other unrelated mcpServers entries are preserved

Create src/cli/tests/fixtures/sample-mcp-json.ts with various .mcp.json states for testing.
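The path-validation behaviour the security tests describe could look like the sketch below. The function name and error messages are illustrative assumptions; only the requirements (reject traversal outside the working directory, require a .json target) come from the test list above.

```typescript
import path from 'node:path';

// Illustrative sketch: resolve the --path argument and reject anything that
// escapes the working directory or doesn't target a .json file.
export function validateMcpJsonPath(p: string, cwd: string = process.cwd()): string {
  const resolved = path.resolve(cwd, p);
  const rel = path.relative(cwd, resolved);
  if (rel.startsWith('..') || path.isAbsolute(rel)) {
    throw new Error(`--path escapes the working directory: ${p}`);
  }
  if (path.extname(resolved) !== '.json') {
    throw new Error(`--path must point at a .json file: ${p}`);
  }
  return resolved;
}
```

With this shape, `--path '../../../etc/passwd'` throws before any file I/O happens, which is exactly what the TDD tests pin down.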
### 9.3. Implement project command group with CRUD operations and profile management

**Status:** pending
**Dependencies:** 9.1

Implement the project subcommand group (create, add-profile, remove-profile, list, describe) using Commander.js with full TDD tests passing. Include a project status command showing MCP server health for SRE dashboards.

**Details:**

Create src/cli/src/commands/project.ts implementing CommandModule:

1. Command registration:

```typescript
const projectCmd = program.command('project').description('Manage MCP projects');
```

2. 'project create' command:
   - Arguments: <name> (required)
   - Options: --profiles <profiles...> (profile names to include)
   - Implementation: Fetch /api/profiles to resolve names to IDs, POST /api/projects
   - Validation: Project name format validation via Zod schema
   - Output: 'Project "name" created with N profiles'

3. 'project add-profile' command:
   - Arguments: <project> <profile> (both required)
   - Implementation: GET the current project, add the profile ID, PUT /api/projects/:id/profiles
   - Handle profile not found with a clear error message

4. 'project remove-profile' command:
   - Arguments: <project> <profile>
   - Implementation: Remove the profile from the project's profile list

5. 'project list' command:
   - Output: Table format showing NAME, PROFILES, CREATED columns
   - Support -o json/yaml output formats

6. 'project describe <name>' command:
   - Show full project details including all profiles and their servers

7. 'project status <name>' command (SRE requirement):
   - Show the project with all MCP servers and their health status
   - Display: SERVER_NAME, PROFILE, STATUS (running/stopped/error), LAST_HEALTH_CHECK
   - Support -o json for monitoring pipeline integration
   - Exit code 0 if all healthy, 1 if any unhealthy (for alerting)

8. Support tags/labels for data engineer categorization:
   - Add a --tag <key=value> option to the create command
   - Add a --filter-tag <key=value> option to the list command
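The exit-code rule for 'project status' is small enough to state as code. This is a minimal sketch; the ServerStatus shape is an assumption based on the STATUS column values listed above.

```typescript
// Sketch of the alerting contract: exit 0 only when every server is healthy,
// so `mcpctl project status <name> || page-oncall` works in shell scripts.
type ServerStatus = { name: string; status: 'running' | 'stopped' | 'error' };

export function statusExitCode(servers: ServerStatus[]): number {
  return servers.every(s => s.status === 'running') ? 0 : 1;
}
```

Note that an empty server list is vacuously healthy under this rule; whether that should instead be an error is a design decision the tests would need to fix.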
### 9.4. Implement claude command group with secure add-mcp-project and remove-mcp-project

**Status:** pending
**Dependencies:** 9.2, 9.3

Implement the extensible claude subcommand group with the critical add-mcp-project command that safely writes .mcp.json without secrets, supporting both direct mcpd URLs and service discovery patterns for networking team requirements.

**Details:**

Create src/cli/src/commands/claude.ts implementing CommandModule:

1. Extensible command group architecture:

```typescript
const claudeCmd = program.command('claude').description('Claude integration commands');
// Designed for future: claude sync, claude validate, claude diagnose
```

2. 'claude add-mcp-project' implementation:
   - Arguments: <project> (project name)
   - Options: --path <path> (default: .mcp.json), --dry-run (show what would be written)
   - Implementation:
     a. Validate --path: reject traversal (../), validate the extension (.json)
     b. GET /api/projects/<project>/mcp-config from mcpd
     c. SECURITY: Verify the response contains NO secret values (double-check even though the API shouldn't return them)
     d. Read the existing .mcp.json if it exists, parse the JSON
     e. Merge: existing.mcpServers + new mcpServers (new overwrites conflicts)
     f. Write atomically (temp file + rename)
   - Output: List of added MCP server names

3. SECURITY implementation in mcp-json-writer.ts:
   - Create a sanitizeMcpConfig() function that strips any env values matching secret patterns
   - Log a warning if the API returns unexpected secret-looking values
   - Never write plain-text credentials to the filesystem

4. Service discovery support (networking team requirement):
   - Support mcpServers entries pointing to mcpd via:
     a. Direct URL: env.MCPD_URL = 'http://nas:3000'
     b. Service discovery: env.MCPD_SERVICE = 'mcpd.local'
   - Document both patterns in the command help

5. 'claude remove-mcp-project' implementation:
   - Read .mcp.json and identify the servers added by this project (tracked via metadata)
   - Remove only those servers, preserve the others
   - Add __mcpctl_source metadata to track which project added each server

6. Create utils/mcp-json-utils.ts:
   - readMcpJson(path): safely read and parse
   - writeMcpJson(path, config): atomic write with backup
   - mergeMcpServers(existing, incoming): merge logic
   - validateMcpJson(config): structure validation
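The merge logic plus the __mcpctl_source tracking from step 5 can be sketched together. Only the `__mcpctl_source` key comes from the source; the exact metadata placement and the helper signatures are assumptions.

```typescript
// Sketch: tag each incoming server with the project that added it, merge with
// new-wins-on-conflict semantics (step 2e), and remove only this project's
// servers on teardown.
type McpServers = Record<string, Record<string, unknown>>;

export function mergeMcpServers(existing: McpServers, incoming: McpServers, project: string): McpServers {
  const tagged: McpServers = {};
  for (const [name, server] of Object.entries(incoming)) {
    tagged[name] = { ...server, __mcpctl_source: project };
  }
  return { ...existing, ...tagged };
}

export function removeProjectServers(servers: McpServers, project: string): McpServers {
  return Object.fromEntries(
    Object.entries(servers).filter(([, s]) => s.__mcpctl_source !== project)
  );
}
```

Because removal filters on the metadata rather than on server names, unrelated entries that happen to share a name prefix are untouched, which is the "preserve others" guarantee in the tests.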
### 9.5. Create integration tests and comprehensive security review documentation

**Status:** pending
**Dependencies:** 9.3, 9.4

Build a complete integration test suite exercising the project and claude commands against a mock mcpd server, perform a security review of the .mcp.json manipulation, and document all security considerations, including injection risks and credential handling.

**Details:**

Create src/cli/tests/integration/project-commands.test.ts:

1. Full workflow integration tests:
   - Start a mock mcpd server with realistic responses
   - Create a project with profiles via the CLI
   - Add profiles to the project
   - Run 'claude add-mcp-project' and verify the .mcp.json output
   - Verify that merging preserves existing entries
   - Remove the project and verify cleanup

2. SRE integration tests:
   - Test that 'project status' output is grep-friendly
   - Test that exit codes work with shell scripts
   - Test that JSON output is parseable by jq
   - Test integration with monitoring (mock Prometheus metrics endpoint)

3. Data engineer integration tests:
   - Test a project with tags (--tag team=data, --tag category=analytics)
   - Test that filtering by tags works
   - Test BigQuery/Snowflake-style profile groupings

4. Create src/cli/docs/SECURITY_REVIEW.md documenting:
   - .mcp.json manipulation security:
     a. Path traversal prevention with test evidence
     b. Atomic file writes to prevent corruption
     c. NEVER writing secrets (enforced at multiple layers)
   - JSON injection prevention:
     a. Input validation on project/profile names
     b. Safe JSON serialization (no eval)
   - Credential flow documentation:
     a. Secrets stored server-side only
     b. .mcp.json contains references, not values
     c. CLI prompts for secrets locally when needed
   - File permission recommendations (chmod 600)

5. Mock mcpd server enhancements:
   - Add the /api/projects/:name/mcp-config endpoint
   - Return a realistic MCP config structure
   - Test error scenarios (404, 500, timeout)

6. Run a full security audit:
   - 'pnpm audit' for dependencies
   - grep for console.log of sensitive data
   - Verify no hardcoded credentials
   - Document findings in SECURITY_REVIEW.md
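The "output NEVER contains secret values" check lends itself to a reusable scan helper the integration tests can run over every .mcp.json the CLI writes. The token prefixes below are illustrative examples of well-known formats, not an exhaustive list from the source.

```typescript
// Sketch: walk a parsed .mcp.json and report paths whose string values look
// like real tokens. Placeholder references like '${SLACK_BOT_TOKEN}' pass.
const SECRET_PATTERNS = [/^xox[baprs]-/, /^ghp_/, /^sk-/];

export function findSecretLikeValues(config: unknown): string[] {
  const hits: string[] = [];
  const walk = (v: unknown, p: string) => {
    if (typeof v === 'string') {
      if (SECRET_PATTERNS.some(re => re.test(v))) hits.push(p);
    } else if (v && typeof v === 'object') {
      for (const [k, child] of Object.entries(v)) walk(child, p ? `${p}.${k}` : k);
    }
  };
  walk(config, '');
  return hits;
}
```

A pattern scan like this is a safety net, not a guarantee; the primary control remains that mcpd never returns secret values in the first place.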
327
.taskmaster/tasks/task_010.md
Normal file
@@ -0,0 +1,327 @@
# Task ID: 10

**Title:** Implement Interactive MCP Server Setup Wizard

**Status:** pending

**Dependencies:** 7, 4

**Priority:** medium

**Description:** Create an interactive setup wizard that guides users through MCP server configuration, including OAuth flows and API token generation.

**Details:**

Create the interactive setup wizard:
```typescript
// commands/setup.ts
import inquirer from 'inquirer';
import open from 'open';

program
  .command('setup')
  .argument('<server-type>', 'MCP server type (slack, jira, github, etc.)')
  .action(async (serverType) => {
    const client = getClient();
    const serverDef = await client.get(`/api/mcp-servers/types/${serverType}`);

    console.log(`\n🚀 Setting up ${serverDef.name} MCP Server\n`);

    // Show setup guide
    if (serverDef.setupGuide) {
      console.log(serverDef.setupGuide);
    }

    // Collect required credentials
    const answers: Record<string, string> = {};
    for (const [key, info] of Object.entries(serverDef.envTemplate)) {
      if (info.oauth) {
        // Handle OAuth flow
        console.log(`\n📱 Opening browser for ${key} authentication...`);
        const authUrl = `${client.serverUrl}/auth/${serverType}/start`;
        await open(authUrl);

        const { token } = await inquirer.prompt([{
          type: 'input',
          name: 'token',
          message: 'Paste the token from the browser:'
        }]);
        answers[key] = token;
      } else if (info.url) {
        // Guide the user to the token generation page
        console.log(`\n🔗 Opening ${info.description}...`);
        await open(info.url);
        console.log('Generate an API token with the following permissions:');
        console.log(info.permissions?.join(', '));

        const { value } = await inquirer.prompt([{
          type: 'password',
          name: 'value',
          message: `Enter your ${key}:`
        }]);
        answers[key] = value;
      } else {
        const { value } = await inquirer.prompt([{
          type: info.secret ? 'password' : 'input',
          name: 'value',
          message: `Enter ${key}:`,
          default: info.default
        }]);
        answers[key] = value;
      }
    }

    // Create a profile with the collected credentials
    const { profileName } = await inquirer.prompt([{
      type: 'input',
      name: 'profileName',
      message: 'Name for this profile:',
      default: `${serverType}-default`
    }]);

    await client.post(`/api/mcp-servers/${serverDef.id}/profiles`, {
      name: profileName,
      config: answers
    });

    console.log(`\n✅ Profile "${profileName}" created successfully!`);
    console.log(`Use: mcpctl project add-profile <project> ${profileName}`);
  });
```

Server-side setup definitions:

```typescript
const slackSetup = {
  envTemplate: {
    SLACK_BOT_TOKEN: {
      description: 'Slack Bot Token',
      url: 'https://api.slack.com/apps',
      permissions: ['channels:read', 'chat:write', 'users:read'],
      secret: true
    },
    SLACK_TEAM_ID: {
      description: 'Slack Team ID',
      secret: false
    }
  }
};
```
**Test Strategy:**

Test the wizard flow with mocked inquirer responses. Test OAuth URL generation. Test profile creation with the collected credentials. Integration test with an actual Slack/Jira setup.

## Subtasks

### 10.1. Write TDD tests for wizard step components and credential collection flow

**Status:** pending
**Dependencies:** None

Create comprehensive Vitest test suites for all wizard step functions BEFORE implementation, including tests for OAuth flows, API token collection, service account JSON upload, and inquirer prompt mocking for deterministic testing.

**Details:**

Create src/cli/tests/unit/commands/setup/wizard-steps.test.ts with TDD tests using vi.mock('inquirer') for deterministic prompt testing. Test cases:

1. collectCredential() with the OAuth type opens the browser and waits for the callback token
2. collectCredential() with the API token type shows URL guidance and accepts password input
3. collectCredential() with the service account type accepts a file path and validates the JSON structure (for BigQuery)
4. collectCredential() with the connection string type validates the format (for Snowflake)
5. showSetupGuide() renders markdown correctly to the terminal
6. validateCredential() calls the mcpd API to verify a token before storage
7. createProfile() posts to the /api/mcp-servers/:id/profiles endpoint

Create src/cli/tests/unit/commands/setup/index.test.ts testing the full wizard flow: parse the server type argument, fetch the server definition, iterate the envTemplate, collect all credentials, create the profile. Write mock fixtures for server definitions (Slack OAuth, Jira API token, GitHub PAT, BigQuery service account, Snowflake OAuth + connection string, dbt Cloud API token). All tests should fail initially (TDD red phase).
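Besides mocking inquirer globally, deterministic prompt testing can be achieved by passing the prompt function in as a dependency. The sketch below shows that seam; the function names are illustrative, not from the source.

```typescript
// Sketch: a wizard step takes a PromptFn, so tests can inject a scripted stub
// instead of mocking the inquirer module.
type PromptFn = (message: string) => Promise<string>;

export function scriptedPrompt(answers: string[]): PromptFn {
  let i = 0;
  return async () => {
    if (i >= answers.length) throw new Error('unexpected extra prompt');
    return answers[i++];
  };
}

export async function collectValue(key: string, prompt: PromptFn): Promise<[string, string]> {
  return [key, await prompt(`Enter ${key}:`)];
}
```

A scripted stub also makes "unexpected prompt" failures explicit, which helps catch wizard steps that ask for more input than the fixture anticipated.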
### 10.2. Implement composable wizard step functions with auth strategy pattern

**Status:** pending
**Dependencies:** 10.1

Create reusable, testable wizard step functions following the strategy pattern for the different authentication types (OAuth, API token, service account JSON, connection string, multi-step flows) so they can be composed for complex data platform MCP setups.

**Details:**

Create src/cli/src/commands/setup/auth-strategies.ts with the authentication strategy interface and implementations:

```typescript
interface AuthStrategy {
  name: string;
  collect(envKey: string, info: EnvTemplateInfo, options: CollectOptions): Promise<string>;
  validate?(value: string): Promise<boolean>;
}

class OAuthStrategy implements AuthStrategy            // Opens browser, waits for callback
class ApiTokenStrategy implements AuthStrategy         // Shows URL, accepts password input
class ServiceAccountStrategy implements AuthStrategy   // File path input, JSON validation
class ConnectionStringStrategy implements AuthStrategy // Format validation (user:pass@host:port/db)
class MultiStepStrategy implements AuthStrategy        // Composes multiple sub-strategies
```

Create src/cli/src/commands/setup/wizard-steps.ts with composable functions:

- showSetupGuide(guide: string): void - Render markdown to the terminal with chalk
- selectAuthStrategy(info: EnvTemplateInfo): AuthStrategy - Factory based on envTemplate metadata
- collectCredentials(envTemplate: EnvTemplate, strategies: AuthStrategy[]): Promise<Record<string, string>>
- validateAllCredentials(credentials: Record<string, string>, server: McpServer): Promise<ValidationResult>
- createProfile(serverId: string, profileName: string, config: Record<string, string>): Promise<Profile>

Data engineer MCP support:

- BigQuery: ServiceAccountStrategy expecting a JSON key file with 'type': 'service_account'
- Snowflake: MultiStepStrategy combining ConnectionStringStrategy + OAuthStrategy
- dbt Cloud: ApiTokenStrategy with a project selection step

All functions must pass the TDD tests from subtask 10.1.
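The selectAuthStrategy() factory reduces to a dispatch on the envTemplate metadata. The sketch below returns strategy names rather than instances to stay self-contained; the flag names (oauth, url, serviceAccount, connectionString, secret) are assumptions extrapolated from the examples in this task, and the real factory would return AuthStrategy objects.

```typescript
// Hypothetical dispatch order for selectAuthStrategy(), most specific first.
interface EnvTemplateInfo {
  oauth?: boolean;
  url?: string;
  serviceAccount?: boolean;
  connectionString?: boolean;
  secret?: boolean;
}

export function selectAuthStrategy(info: EnvTemplateInfo): string {
  if (info.oauth) return 'oauth';
  if (info.serviceAccount) return 'service-account';
  if (info.connectionString) return 'connection-string';
  if (info.url) return 'api-token';
  return info.secret ? 'password-input' : 'plain-input';
}
```

The ordering matters: a Snowflake entry may carry both connection-string and OAuth metadata, and MultiStepStrategy would be chosen before either individual branch in the real factory.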
### 10.3. Implement setup command with --non-interactive flag for CI/scripting

**Status:** pending
**Dependencies:** 10.2

Create the main 'mcpctl setup <server-type>' command that orchestrates the wizard flow, with a --non-interactive flag for CI/automation that accepts credentials via environment variables or stdin JSON.

**Details:**

Create src/cli/src/commands/setup/index.ts implementing CommandModule:

```typescript
program
  .command('setup')
  .argument('<server-type>', 'MCP server type (slack, jira, github, bigquery, snowflake, dbt)')
  .option('--non-interactive', 'Run without prompts, use env vars or stdin')
  .option('--profile-name <name>', 'Name for the created profile')
  .option('--stdin', 'Read credentials JSON from stdin')
  .option('--dry-run', 'Validate without creating profile')
  .action(async (serverType, options) => { ... })
```

Interactive flow:

1. Fetch the server definition from mcpd: GET /api/mcp-servers/types/:type
2. Display the setup guide with showSetupGuide()
3. For each envTemplate entry, use selectAuthStrategy() and collect()
4. Validate all credentials with validateAllCredentials()
5. Prompt for the profile name (default: ${serverType}-default)
6. Create the profile via the mcpd API
7. Print a success message with a 'mcpctl project add-profile' hint

Non-interactive flow:

- --stdin: Read JSON from stdin with the structure { "SLACK_BOT_TOKEN": "xoxb-...", ... }
- Env vars: Check for each envTemplate key in process.env
- Fail with a clear error if a required credential is missing
- Validate all credentials before creating the profile
- --dry-run: Skip profile creation, just validate

Offline/local dev support:

- When mcpd is unreachable, offer cached server definitions
- Support a --mcpd-url override for local development

Register via CommandRegistry. Write integration tests.
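The env-var branch of the non-interactive flow can be sketched as a pure resolution function, which keeps the "fail with a clear error" behaviour testable without touching process.env. The function name is illustrative.

```typescript
// Sketch: resolve each required envTemplate key from an environment map and
// report every missing key in one error, rather than failing one at a time.
export function resolveCredentials(
  requiredKeys: string[],
  env: Record<string, string | undefined>
): Record<string, string> {
  const out: Record<string, string> = {};
  const missing: string[] = [];
  for (const key of requiredKeys) {
    const value = env[key];
    if (value === undefined || value === '') missing.push(key);
    else out[key] = value;
  }
  if (missing.length > 0) {
    throw new Error(`missing required credentials: ${missing.join(', ')}`);
  }
  return out;
}
```

Collecting all missing keys before failing gives CI users one actionable error instead of a retry loop, and the same function can back the --dry-run validation path.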
### 10.4. Implement OAuth browser flow with proxy and enterprise SSO support

**Status:** pending
**Dependencies:** 10.2

Create a secure OAuth flow handler that opens the browser for authentication, handles callback tokens, supports HTTP/HTTPS proxies and custom CA certificates for enterprise SSO, and handles redirect URLs safely.

**Details:**

Create src/cli/src/commands/setup/oauth-handler.ts:

```typescript
export class OAuthHandler {
  constructor(private config: OAuthConfig) {}

  async startOAuthFlow(serverType: string): Promise<string> {
    // 1. Generate state token for CSRF protection
    // 2. Build auth URL with state and redirect_uri
    // 3. Start local callback server on random port
    // 4. Open browser with 'open' package
    // 5. Wait for callback with token or timeout
    // 6. Validate state matches
    // 7. Return access token
  }
}
```

Enterprise networking support:

- Load proxy settings from the config (Task 7) and environment (HTTP_PROXY, HTTPS_PROXY, NO_PROXY)
- Support custom CA certificates for enterprise SSO (config.tls.caFile)
- Use https.Agent with proxy-agent for HTTPS requests through the proxy
- Handle proxy authentication (Proxy-Authorization header)

Callback server:

- Start on localhost:0 (random available port)
- Time out after 5 minutes with a clear error message
- CSRF protection via the state parameter
- Redirect to a success page after the token is received
- Shut down immediately after the callback

Security considerations:

- The state token must be cryptographically random (crypto.randomBytes)
- Validate that redirect_uri matches the expected pattern
- Don't log access tokens
- Clear the token from memory after passing it to the credential store

Create src/cli/tests/unit/commands/setup/oauth-handler.test.ts with a mocked browser and HTTP server.
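Steps 1 and 6 of the flow above (generate a random state, validate it on the callback) can be isolated into two small helpers. This is a minimal sketch using Node's crypto module; the constant-time comparison is a standard hardening choice, not something the source mandates.

```typescript
import { randomBytes, timingSafeEqual } from 'node:crypto';

// 32 random bytes, hex-encoded, as the CSRF state parameter.
export function newStateToken(): string {
  return randomBytes(32).toString('hex');
}

// Constant-time comparison of the expected state against the callback's state.
export function stateMatches(expected: string, received: string): boolean {
  const a = Buffer.from(expected);
  const b = Buffer.from(received);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The length check must come first because timingSafeEqual throws on buffers of different lengths.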
### 10.5. Implement secure credential storage and comprehensive security review

**Status:** pending
**Dependencies:** 10.2, 10.3, 10.4

Create secure credential storage for wizard-collected tokens using the system keychain or encrypted file storage, validate tokens before storage, and conduct a comprehensive security review of all OAuth handling, credential storage, and browser redirect safety.

**Details:**

Create src/cli/src/commands/setup/credential-store.ts:

```typescript
export class WizardCredentialStore {
  // Store credentials securely for later profile creation
  async storeCredential(key: string, value: string, options: StoreOptions): Promise<void>

  // Validate credential with mcpd before storing
  async validateAndStore(serverType: string, key: string, value: string): Promise<ValidationResult>

  // Retrieve for profile creation (one-time use)
  async retrieveAndClear(key: string): Promise<string>
}
```

Secure storage implementation:

- Primary: System keychain via the 'keytar' package (macOS Keychain, Windows Credential Vault, Linux Secret Service)
- Fallback: Encrypted file at ~/.mcpctl/wizard-credentials (AES-256-GCM)
- Encryption key derived from machine-specific data + user password
- Credentials cleared after profile creation (one-time use)

API token validation before storage:

- POST /api/mcp-servers/:type/validate-credentials with the credentials
- Slack: Test the token with the auth.test API
- Jira: Test with /rest/api/3/myself
- GitHub: Test with the /user API
- BigQuery: Test the service account with projects.list
- Snowflake: Test the connection with a simple query
- dbt: Test with /api/v2/accounts

SECURITY REVIEW - create src/cli/docs/SETUP_WIZARD_SECURITY_REVIEW.md documenting:

1. OAuth Token Handling:
   - State parameter uses crypto.randomBytes(32)
   - Tokens never logged or written to non-encrypted storage
   - Browser redirect validates the callback URL pattern
   - Local callback server binds to localhost only

2. Credential Storage Security:
   - Keychain used when available, encrypted file fallback
   - File permissions 600 on credential storage
   - Credentials cleared after a single use
   - No credentials in CLI history (no --token=xxx args)

3. API Token Validation:
   - All tokens validated before storage
   - Validation errors don't leak the token in the error message
   - Failed validation clears the token from memory

4. Network Security:
   - HTTPS required for OAuth (except the localhost callback)
   - Proxy credentials handled securely
   - Custom CA for enterprise SSO supported

5. Browser Redirect Safety:
   - Only the localhost:port/callback pattern accepted
   - State token prevents CSRF
   - Success page doesn't display the token

Run 'pnpm audit --audit-level=high' and document the findings.
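The AES-256-GCM file fallback can be sketched with Node's crypto module. The source leaves the key derivation ("machine-specific data + user password") unspecified, so this sketch reduces it to a passphrase plus a random scrypt salt; the blob layout (salt, IV, auth tag, ciphertext) is an illustrative choice.

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from 'node:crypto';

// Sketch of the encrypted-file fallback: AES-256-GCM with an scrypt-derived
// key. Blob layout: 16-byte salt | 12-byte IV | 16-byte auth tag | ciphertext.
export function encrypt(plaintext: string, passphrase: string): string {
  const salt = randomBytes(16);
  const key = scryptSync(passphrase, salt, 32);
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return Buffer.concat([salt, iv, cipher.getAuthTag(), ct]).toString('base64');
}

export function decrypt(blob: string, passphrase: string): string {
  const buf = Buffer.from(blob, 'base64');
  const salt = buf.subarray(0, 16);
  const iv = buf.subarray(16, 28);
  const tag = buf.subarray(28, 44);
  const ct = buf.subarray(44);
  const key = scryptSync(passphrase, salt, 32);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString('utf8');
}
```

GCM's auth tag makes tampering and wrong-passphrase attempts fail loudly at `final()`, which is why it is preferred here over unauthenticated CBC.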
191
.taskmaster/tasks/task_011.md
Normal file
@@ -0,0 +1,191 @@
# Task ID: 11

**Title:** Design Local LLM Proxy Architecture

**Status:** pending

**Dependencies:** 1, 3

**Priority:** high

**Description:** Design the local proxy component that intercepts MCP requests, uses local LLMs to pre-filter data, and communicates with mcpd.

**Details:**

Create the local-proxy package architecture:
```typescript
// src/local-proxy/src/index.ts

// The local proxy acts as an MCP server that Claude connects to.
// It intercepts requests, uses a local LLM for filtering, then forwards to mcpd.

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

export class McpctlLocalProxy {
  private server: Server;
  private llmProvider: LLMProvider;
  private mcpdClient: McpdClient;

  constructor(config: ProxyConfig) {
    this.server = new Server({
      name: 'mcpctl-proxy',
      version: '1.0.0'
    }, {
      capabilities: { tools: {} }
    });

    this.llmProvider = createLLMProvider(config.llm);
    this.mcpdClient = new McpdClient(config.mcpdUrl);

    this.setupHandlers();
  }

  private setupHandlers() {
    // List available tools from all configured MCP servers
    this.server.setRequestHandler(ListToolsRequestSchema, async () => {
      const tools = await this.mcpdClient.listAvailableTools();
      return { tools };
    });

    // Handle tool calls with pre-filtering
    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;

      // Step 1: Use the local LLM to interpret the request
      const refinedQuery = await this.llmProvider.refineQuery({
        tool: name,
        originalArgs: args,
        context: request.params._context // What Claude is looking for
      });

      // Step 2: Forward to mcpd with the refined query
      const rawResult = await this.mcpdClient.callTool(name, refinedQuery);

      // Step 3: Use the local LLM to filter/summarize the response
      const filteredResult = await this.llmProvider.filterResponse({
        tool: name,
        query: refinedQuery,
        response: rawResult,
        maxTokens: 2000 // Keep the context window small for Claude
      });

      return { content: [{ type: 'text', text: filteredResult }] };
    });
  }

  async start() {
    const transport = new StdioServerTransport();
    await this.server.connect(transport);
  }
}

// LLM Provider interface
interface LLMProvider {
  refineQuery(params: RefineParams): Promise<any>;
  filterResponse(params: FilterParams): Promise<string>;
}
```

Architecture flow:

```
Claude <--stdio--> mcpctl-proxy <--HTTP--> mcpd <---> MCP servers (containers)
                        |
                        v
               Local LLM (Ollama/Gemini/vLLM)
```
**Test Strategy:**
|
||||
|
||||
Unit test request/response transformation. Mock LLM provider and verify refinement logic. Integration test with actual local LLM. Test error handling when LLM is unavailable.
|
||||
|
||||
## Subtasks
|
||||
|
||||
### 11.1. Create local-proxy package structure with TDD infrastructure and mock LLM provider

**Status:** pending
**Dependencies:** None

Initialize the src/local-proxy directory with clean architecture layers, Vitest configuration, and a comprehensive mock LLM provider for testing without GPU requirements.

**Details:**

Create src/local-proxy/ with directory structure: src/{handlers,providers,services,middleware,types,utils}. Set up package.json with @modelcontextprotocol/sdk, vitest, and shared workspace dependencies. Configure vitest.config.ts with coverage requirements (>90%). Implement MockLLMProvider class that returns deterministic responses for testing - this is critical for CI/CD pipelines without GPU. Create test fixtures with sample MCP requests/responses for Slack, Jira, and database query scenarios. Include test utilities: createMockMcpRequest(), createMockLLMResponse(), createTestProxyInstance(). The mock provider must support configurable latency simulation and error injection for chaos testing.

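The deterministic mock described above might be sketched roughly as follows. `MockLLMProvider`, the option names, and the parameter shapes are assumptions for illustration, not the final API:

```typescript
// Illustrative param shapes (assumed, not the final shared types).
interface RefineParams { tool: string; originalArgs: unknown; context: string; }
interface FilterParams { query: unknown; response: unknown; maxTokens: number; }

class MockLLMProvider {
  private calls = 0;

  // latencyMs simulates inference time; failEvery injects errors for chaos tests.
  constructor(private opts: { latencyMs?: number; failEvery?: number } = {}) {}

  // Deterministic: output depends only on the input, so CI runs are stable.
  async refineQuery(params: RefineParams): Promise<unknown> {
    await this.simulate();
    return { tool: params.tool, refined: true, context: params.context };
  }

  async filterResponse(params: FilterParams): Promise<string> {
    await this.simulate();
    return JSON.stringify(params.query).slice(0, params.maxTokens);
  }

  private async simulate(): Promise<void> {
    this.calls++;
    if (this.opts.failEvery && this.calls % this.opts.failEvery === 0) {
      throw new Error('MockLLMProvider: injected failure');
    }
    if (this.opts.latencyMs) {
      await new Promise((r) => setTimeout(r, this.opts.latencyMs));
    }
  }
}
```

Because the mock's output is a pure function of its input, snapshot tests stay stable across runs.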
### 11.2. Design and implement LLMProvider interface with pluggable adapter architecture

**Status:** pending
**Dependencies:** 11.1

Create the abstract LLMProvider interface and adapter factory pattern that allows swapping LLM backends (Ollama, Gemini, vLLM, DeepSeek) without changing proxy logic.

**Details:**

Define LLMProvider interface in src/types/llm.ts with methods: refineQuery(params: RefineParams): Promise<RefinedQuery>, filterResponse(params: FilterParams): Promise<FilteredResponse>, healthCheck(): Promise<boolean>, getMetrics(): ProviderMetrics. Create LLMProviderFactory that accepts provider configuration and returns appropriate implementation. Design for composability - allow chaining providers (e.g., Ollama for refinement, Gemini for filtering). Include connection pooling interface for providers that support it. Create abstract BaseLLMProvider class with common retry logic, timeout handling, and metrics collection. Define clear error types: LLMUnavailableError, LLMTimeoutError, LLMRateLimitError, PromptInjectionDetectedError.

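The shared retry logic in BaseLLMProvider could look something like this minimal sketch; `withRetry` and the injectable `sleep` parameter are illustrative assumptions (the error type name comes from the list above):

```typescript
class LLMUnavailableError extends Error {}

// Retry with exponential backoff; `sleep` is injectable so tests run instantly.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms))
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Backoff doubles each attempt: 100ms, 200ms, 400ms, ...
      if (i < attempts - 1) await sleep(baseDelayMs * 2 ** i);
    }
  }
  throw new LLMUnavailableError(`all ${attempts} attempts failed: ${lastErr}`);
}
```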
### 11.3. Implement MCP SDK server handlers with request/response transformation and validation

**Status:** pending
**Dependencies:** 11.1, 11.2

Create the core McpctlLocalProxy class using @modelcontextprotocol/sdk with handlers for tools/list and tools/call, including MCP protocol message validation to prevent malformed requests.

**Details:**

Implement McpctlLocalProxy in src/index.ts following the architecture from task details. Create setRequestHandler for 'tools/list' that fetches available tools from mcpd and caches them with TTL. Create setRequestHandler for 'tools/call' with three-phase processing: (1) refineQuery phase using LLM, (2) forward to mcpd phase, (3) filterResponse phase using LLM. Implement MCP protocol validation middleware using Zod schemas - validate all incoming JSON-RPC messages against MCP specification before processing. Create McpdClient class in src/services/mcpd-client.ts with HTTP client for mcpd communication, including connection pooling and health checks. Handle stdio transport initialization with proper cleanup on SIGTERM/SIGINT.

### 11.4. Implement security layer with prompt injection prevention and data isolation

**Status:** pending
**Dependencies:** 11.2, 11.3

Create security middleware that validates all inputs, prevents prompt injection in LLM queries, ensures no data leakage between users, and sanitizes all MCP protocol messages.

**Details:**

Create src/middleware/security.ts with: (1) PromptInjectionValidator that scans user inputs for common injection patterns before sending to LLM - detect and reject inputs containing 'ignore previous', 'system:', role-switching attempts. (2) InputSanitizer that validates and sanitizes all tool arguments against expected schemas. (3) ResponseSanitizer that removes potentially sensitive data patterns (API keys, passwords, PII) from LLM-filtered responses before returning to Claude. (4) RequestIsolation middleware ensuring each request has its own context with no shared mutable state - critical for multi-tenant scenarios. Create SECURITY_AUDIT.md documenting all security controls and their test coverage. Implement allowlist-based argument validation for known MCP tools.

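A first cut of the injection pattern check might look like this. The pattern list is a starting point for illustration, not a complete defense:

```typescript
// Heuristic patterns for the injection attempts named above.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore\s+(all\s+)?previous/i,  // "ignore previous instructions"
  /^\s*system\s*:/im,             // attempted system-role line
  /^\s*assistant\s*:/im,          // attempted role switch
  /you\s+are\s+now/i,             // role reassignment
];

function detectPromptInjection(input: string): boolean {
  return INJECTION_PATTERNS.some((p) => p.test(input));
}
```

A production validator would combine this with the heuristic scoring and allowlist validation described above; regexes alone are easy to evade.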
### 11.5. Implement configurable filtering strategies with per-profile aggressiveness settings

**Status:** pending
**Dependencies:** 11.2, 11.3

Create composable filtering strategy system that allows data scientists to configure filtering aggressiveness per MCP server type, supporting different needs for raw SQL vs pre-aggregated dashboards.

**Details:**

Design FilterStrategy interface in src/services/filter-engine.ts with methods: shouldFilter(response: McpResponse): boolean, filter(response: McpResponse, config: FilterConfig): FilteredResponse, getAggressiveness(): number. Implement AggressiveFilter for raw SQL results (summarize, limit rows, remove redundant columns), MinimalFilter for pre-aggregated data (pass-through with size limits only), and AdaptiveFilter that adjusts based on response characteristics. Create FilterConfig type with per-profile settings stored in mcpd: { profileId: string, strategy: 'aggressive' | 'minimal' | 'adaptive', maxTokens: number, preserveFields: string[], summaryPrompt?: string }. Implement FilterStrategyComposer that chains multiple strategies. Support runtime strategy switching without proxy restart.

### 11.6. Implement chunking and streaming for large data responses with pagination support

**Status:** pending
**Dependencies:** 11.3, 11.5

Design pagination and streaming strategy for handling large data responses (100k+ rows from database MCPs) that cannot be simply filtered, supporting cursor-based pagination in the proxy layer.

**Details:**

Create src/services/pagination.ts with PaginationManager class handling: (1) Detection of large responses that require chunking (configurable threshold, default 10K rows), (2) Cursor-based pagination with stable cursors stored in proxy memory with TTL, (3) Response streaming using async iterators for progressive delivery, (4) Chunk size optimization based on estimated token count. Implement PagedResponse type with { data: any[], cursor?: string, hasMore: boolean, totalEstimate?: number, chunkIndex: number }. Create ChunkingStrategy interface for different data types - TabularChunker for SQL results, JSONChunker for nested objects, TextChunker for large text responses. Add pagination metadata to MCP tool responses so Claude can request next pages. Handle cursor expiration gracefully with re-query capability.

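The stable-cursor-with-TTL idea can be sketched as an in-memory store with an injectable clock (names and shapes are assumptions for illustration):

```typescript
interface CursorState { query: unknown; offset: number; createdAt: number; }

class CursorStore {
  private cursors = new Map<string, CursorState>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  create(query: unknown, offset: number): string {
    const id = `cur_${Math.random().toString(36).slice(2)}`;
    this.cursors.set(id, { query, offset, createdAt: this.now() });
    return id;
  }

  // Returns undefined for unknown or expired cursors, so the caller can
  // fall back to re-running the query (graceful expiration, as above).
  resolve(id: string): CursorState | undefined {
    const state = this.cursors.get(id);
    if (!state) return undefined;
    if (this.now() - state.createdAt > this.ttlMs) {
      this.cursors.delete(id);
      return undefined;
    }
    return state;
  }
}
```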
### 11.7. Implement observability with metrics endpoint and structured logging for SRE monitoring

**Status:** pending
**Dependencies:** 11.2, 11.3, 11.5

Create comprehensive metrics collection and exposure system with /metrics endpoint (Prometheus format) and structured JSON logging for monitoring proxy health, performance, and LLM efficiency.

**Details:**

Create src/services/metrics.ts with MetricsCollector class tracking: requests_total (counter), request_duration_seconds (histogram), llm_inference_duration_seconds (histogram), filter_reduction_ratio (gauge - original_size/filtered_size), active_connections (gauge), error_total by error_type (counter), tokens_saved_total (counter). Implement /metrics HTTP endpoint on configurable port (separate from stdio MCP transport) serving Prometheus exposition format. Create structured logger in src/utils/logger.ts outputting JSON with fields: timestamp, level, requestId, toolName, phase (refine/forward/filter), duration_ms, input_tokens, output_tokens, reduction_percent. Add request tracing with correlation IDs propagated to mcpd. Include health check endpoint /health with component status (llm: ok/degraded, mcpd: ok/disconnected).

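For the counters above, the Prometheus text exposition format is just name/value lines with `# TYPE` comments; a minimal sketch (a real deployment would more likely use a client library such as prom-client):

```typescript
// Render plain counters in Prometheus text exposition format.
function renderMetrics(counters: Record<string, number>): string {
  return Object.entries(counters)
    .map(([name, value]) => `# TYPE ${name} counter\n${name} ${value}`)
    .join('\n');
}
```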
### 11.8. Create integration tests and local development environment with docker-compose

**Status:** pending
**Dependencies:** 11.1, 11.2, 11.3, 11.4, 11.5, 11.6, 11.7

Build comprehensive integration test suite testing the complete proxy flow against local mcpd and local Ollama, plus docker-compose setup for easy local development without external dependencies.

**Details:**

Create deploy/docker-compose.proxy.yml with services: ollama (with pre-pulled model), mcpd (from src/mcpd), postgres (for mcpd), and local-proxy. Add scripts/setup-local-dev.sh that pulls Ollama models, starts services, and verifies connectivity. Create integration test suite in tests/integration/ testing: (1) Full request flow from Claude-style request through proxy to mcpd and back, (2) LLM refinement actually modifies queries appropriately, (3) Response filtering reduces token count measurably, (4) Pagination works for large responses, (5) Error handling when Ollama is unavailable (falls back gracefully), (6) Metrics are recorded correctly during real requests. Create performance benchmark suite measuring latency overhead vs direct mcpd access. Document local development setup in LOCAL_DEV.md.

153
.taskmaster/tasks/task_012.md
Normal file
@@ -0,0 +1,153 @@

# Task ID: 12

**Title:** Implement Local LLM Provider Integrations

**Status:** pending

**Dependencies:** 11

**Priority:** medium

**Description:** Create adapters for different local LLM providers: Ollama, Gemini CLI, vLLM, and DeepSeek API for request refinement and response filtering.

**Details:**

Create LLM provider implementations:

```typescript
// providers/ollama.ts
export class OllamaProvider implements LLMProvider {
  constructor(private config: { host: string; model: string }) {}

  async refineQuery(params: RefineParams): Promise<any> {
    const prompt = `You are helping refine a data request.
Tool: ${params.tool}
Original request: ${JSON.stringify(params.originalArgs)}
Context (what the user wants): ${params.context}

Refine this query to be more specific. Output JSON only.`;

    const response = await fetch(`${this.config.host}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // stream: false makes Ollama return one JSON object instead of NDJSON chunks
      body: JSON.stringify({ model: this.config.model, prompt, format: 'json', stream: false })
    });
    return JSON.parse((await response.json()).response);
  }

  async filterResponse(params: FilterParams): Promise<string> {
    const prompt = `Filter this data to only include relevant information.
Query: ${JSON.stringify(params.query)}
Data: ${JSON.stringify(params.response).slice(0, 10000)}

Extract only the relevant parts. Be concise. Max ${params.maxTokens} tokens.`;

    const response = await fetch(`${this.config.host}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: this.config.model, prompt, stream: false })
    });
    return (await response.json()).response;
  }
}

// providers/gemini-cli.ts
export class GeminiCliProvider implements LLMProvider {
  async refineQuery(params: RefineParams): Promise<any> {
    // Pass the prompt via stdin rather than shell interpolation
    // to prevent command injection (see subtask 12.2)
    const result = await execa('gemini', ['-m', 'gemini-2.0-flash'], {
      input: this.buildPrompt(params)
    });
    return JSON.parse(result.stdout);
  }
}

// providers/deepseek.ts
export class DeepSeekProvider implements LLMProvider {
  constructor(private apiKey: string) {}

  async refineQuery(params: RefineParams): Promise<any> {
    const response = await fetch('https://api.deepseek.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model: 'deepseek-chat',
        messages: [{ role: 'user', content: this.buildPrompt(params) }]
      })
    });
    return JSON.parse((await response.json()).choices[0].message.content);
  }
}

// Factory
export function createLLMProvider(config: LLMConfig): LLMProvider {
  switch (config.type) {
    case 'ollama': return new OllamaProvider(config);
    case 'gemini-cli': return new GeminiCliProvider();
    case 'deepseek': return new DeepSeekProvider(config.apiKey);
    case 'vllm': return new VLLMProvider(config);
    default: throw new Error(`Unknown LLM provider: ${config.type}`);
  }
}
```

**Test Strategy:**

Unit test each provider with mocked API responses. Integration test with local Ollama instance. Test fallback behavior when provider is unavailable. Benchmark token usage reduction.

## Subtasks

### 12.1. Implement OllamaProvider with TDD, health checks, and circuit breaker pattern

**Status:** pending
**Dependencies:** None

Create the Ollama LLM provider implementation with full TDD approach, including health check endpoint monitoring, circuit breaker for fault tolerance, and mock mode for testing without a running Ollama instance.

**Details:**

Create src/providers/ollama.ts implementing LLMProvider interface from Task 11. Write Vitest tests BEFORE implementation covering: (1) refineQuery() sends correct POST to /api/generate with model and format:json, (2) filterResponse() handles large responses by truncating input to 10K chars, (3) healthCheck() calls /api/tags endpoint and returns true if model exists, (4) Circuit breaker opens after 3 consecutive failures within 30s, trips for 60s, then half-opens, (5) Timeout handling with AbortController after configurable duration (default 30s), (6) Mock mode returns deterministic responses when OLLAMA_MOCK=true for CI/CD. Implement connection pooling using undici Agent. Add structured logging for SRE monitoring with fields: model, prompt_tokens, completion_tokens, latency_ms, error_type. Security: Sanitize all prompt inputs using PromptSanitizer from Task 11.4, validate JSON responses with Zod schema before parsing. Rate limiting: configurable requests-per-minute with token bucket algorithm.

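The circuit-breaker behaviour described in point (4) - 3 failures within 30s open the circuit for 60s, then half-open - can be sketched with an injectable clock so tests need no real waiting. Names and the exact state machine are illustrative assumptions:

```typescript
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private failures: number[] = []; // timestamps of recent failures
  private openedAt = 0;
  private state: BreakerState = 'closed';

  constructor(
    private maxFailures = 3,
    private windowMs = 30_000,
    private cooldownMs = 60_000,
    private now: () => number = Date.now
  ) {}

  canRequest(): boolean {
    if (this.state === 'open') {
      if (this.now() - this.openedAt >= this.cooldownMs) {
        this.state = 'half-open'; // allow one probe request through
        return true;
      }
      return false;
    }
    return true;
  }

  recordSuccess(): void {
    this.failures = [];
    this.state = 'closed';
  }

  recordFailure(): void {
    const t = this.now();
    // Only failures inside the rolling window count toward tripping.
    this.failures = this.failures.filter((f) => t - f < this.windowMs);
    this.failures.push(t);
    if (this.state === 'half-open' || this.failures.length >= this.maxFailures) {
      this.state = 'open';
      this.openedAt = t;
    }
  }
}
```

The provider would check `canRequest()` before each fetch and report the outcome via `recordSuccess()`/`recordFailure()`.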
### 12.2. Implement GeminiCliProvider and DeepSeekProvider with security hardening

**Status:** pending
**Dependencies:** 12.1

Create Gemini CLI provider using subprocess execution with shell injection prevention, and DeepSeek API provider with secure API key handling, both following TDD methodology.

**Details:**

Create src/providers/gemini-cli.ts: Use execa (not child_process.exec) to prevent shell injection - pass prompt via stdin pipe, not command line arguments. Implement buildPrompt() with template literals and JSON.stringify for safe interpolation. Add timeout handling (default 60s for CLI). Parse stdout as JSON with Zod validation. Health check: verify 'gemini' binary exists using which command. Create src/providers/deepseek.ts: Implement OpenAI-compatible API client with fetch. API key from config (never log or include in prompts). Implement retry with exponential backoff for 429/5xx responses. Circuit breaker for API unavailability. Both providers: Implement LLMProvider interface methods refineQuery() and filterResponse(). Add mock modes for testing. Security review: (1) No credentials in logged prompts, (2) Validate all API responses before parsing, (3) Sanitize user inputs in prompts using shared PromptSanitizer.

### 12.3. Implement VLLMProvider with OpenAI-compatible API and batch inference support

**Status:** pending
**Dependencies:** 12.1

Create vLLM provider supporting the OpenAI-compatible API endpoint, with batch inference optimization for processing multiple requests efficiently, and configurable model selection.

**Details:**

Create src/providers/vllm.ts implementing LLMProvider interface. vLLM exposes OpenAI-compatible endpoint at /v1/completions or /v1/chat/completions. Support both completion and chat modes via config. Implement batch inference: when multiple refineQuery/filterResponse calls arrive within batching window (default 50ms), combine into single API call with multiple prompts for better GPU utilization. Configuration: { host: string, model: string, maxTokens: number, temperature: number, batchWindowMs: number }. Health check: call /health or /v1/models endpoint. Implement request queuing with configurable max queue size. Circuit breaker pattern matching OllamaProvider. Add metrics collection: batch_size_histogram, queue_depth_gauge, inference_time_per_request. Security: Same prompt sanitization as other providers. Mock mode for CI/CD testing.

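The batching window can be sketched as a queue that flushes once per window; `BatchQueue` and the `sendBatch` callback are illustrative names under the assumptions above, not the final API:

```typescript
class BatchQueue<TIn, TOut> {
  private pending: { input: TIn; resolve: (v: TOut) => void; reject: (e: unknown) => void }[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private windowMs: number,
    private sendBatch: (inputs: TIn[]) => Promise<TOut[]>
  ) {}

  // Calls arriving while the timer is pending join the same batch.
  enqueue(input: TIn): Promise<TOut> {
    return new Promise<TOut>((resolve, reject) => {
      this.pending.push({ input, resolve, reject });
      if (!this.timer) {
        this.timer = setTimeout(() => this.flush(), this.windowMs);
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.pending;
    this.pending = [];
    this.timer = null;
    try {
      // One backend call for the whole batch; results map back by index.
      const outputs = await this.sendBatch(batch.map((p) => p.input));
      batch.forEach((p, i) => p.resolve(outputs[i]));
    } catch (err) {
      batch.forEach((p) => p.reject(err));
    }
  }
}
```

A production version would also enforce the max queue size and record the batch-size histogram mentioned above.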
### 12.4. Implement LLM provider factory with configuration validation and provider benchmarking utilities

**Status:** pending
**Dependencies:** 12.1, 12.2, 12.3

Create the factory function and configuration system for instantiating LLM providers, plus benchmarking utilities for data scientists to compare provider performance, quality, and cost.

**Details:**

Create src/providers/factory.ts with createLLMProvider(config: LLMConfig): LLMProvider function. LLMConfig Zod schema: { type: 'ollama'|'gemini-cli'|'deepseek'|'vllm', ...provider-specific fields }. Validate config at construction time with descriptive errors. Create src/utils/benchmark.ts with ProviderBenchmark class: Methods: runBenchmark(provider, testCases): BenchmarkResult, compareBenchmarks(results[]): ComparisonReport. BenchmarkResult type: { provider: string, testCases: { input, output, latencyMs, inputTokens, outputTokens, qualityScore? }[], avgLatency, p95Latency, totalTokens, estimatedCost? }. Include standard test cases for filtering accuracy: database rows, Slack messages, Jira tickets with known 'correct' filtered outputs. Quality scoring: compare filtered output against golden reference using semantic similarity (optional LLM-as-judge). Export results as JSON and markdown table for documentation. Add CLI command: mcpctl benchmark-providers --providers ollama,deepseek --test-suite standard.

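The p95Latency field in BenchmarkResult needs a percentile function; a nearest-rank sketch (which percentile definition to use is an implementation choice, not specified above):

```typescript
// Nearest-rank percentile: p in [0, 100], samples need not be sorted.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}
```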
### 12.5. Implement security review layer and comprehensive integration tests for all providers

**Status:** pending
**Dependencies:** 12.1, 12.2, 12.3, 12.4

Create security middleware for prompt injection prevention across all providers, implement rate limiting, add comprehensive integration tests verifying provider interoperability, and document security controls.

**Details:**

Create src/providers/security.ts with: (1) PromptSanitizer class - detect and neutralize injection patterns: 'ignore previous', 'system:', 'assistant:', embedded JSON/XML that could hijack prompts. Use regex + heuristic scoring. (2) ResponseValidator - validate LLM outputs match expected schema, detect and reject responses that contain prompt leakage or injection artifacts. (3) RateLimiter - token bucket per provider with configurable limits, shared across provider instances. (4) AuditLogger - log all LLM interactions for security review: timestamp, provider, sanitized_prompt (no PII), response_length, flagged_patterns. Create tests/integration/providers.test.ts: Test all 4 providers with same test suite verifying interface compliance. Create SECURITY_AUDIT.md documenting: all security controls, threat model (prompt injection, data exfiltration, DoS), test coverage, and manual review checklist. Add to CI: security-focused test suite that must pass before merge.

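The token-bucket limiter in point (3) can be sketched with an injectable clock so its behaviour is testable without real time passing (names are illustrative):

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    private now: () => number = Date.now
  ) {
    this.tokens = capacity;
    this.lastRefill = this.now();
  }

  // Refill lazily based on elapsed time, then try to spend `cost` tokens.
  tryAcquire(cost = 1): boolean {
    const t = this.now();
    const elapsed = (t - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    this.lastRefill = t;
    if (this.tokens >= cost) {
      this.tokens -= cost;
      return true;
    }
    return false;
  }
}
```

Sharing one bucket instance across provider instances gives the per-provider limit described above.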
195
.taskmaster/tasks/task_013.md
Normal file
@@ -0,0 +1,195 @@

# Task ID: 13

**Title:** Implement MCP Request/Response Filtering Logic

**Status:** pending

**Dependencies:** 11, 12

**Priority:** medium

**Description:** Create the intelligent filtering system that analyzes Claude's intent and filters MCP responses to minimize token usage while maximizing relevance.

**Details:**

Create filtering logic:

```typescript
// services/filter-engine.ts
export class FilterEngine {
  constructor(private llm: LLMProvider) {}

  // Analyze Claude's request to understand intent
  async analyzeIntent(request: ToolCallRequest): Promise<IntentAnalysis> {
    const prompt = `Analyze this MCP tool call to understand the user's intent:
Tool: ${request.toolName}
Arguments: ${JSON.stringify(request.arguments)}

Output JSON:
{
  "intent": "description of what user wants",
  "keywords": ["relevant", "keywords"],
  "filters": { "date_range": "...", "categories": [...] },
  "maxResults": number
}`;

    // analyze() is an extension beyond the base LLMProvider interface
    return this.llm.analyze(prompt);
  }

  // Filter response based on intent
  async filterResponse(
    response: any,
    intent: IntentAnalysis,
    tool: ToolDefinition
  ): Promise<FilteredResponse> {
    // Strategy 1: Structural filtering (if response is array)
    if (Array.isArray(response)) {
      const filtered = await this.filterArray(response, intent);
      return { data: filtered, reduction: 1 - filtered.length / response.length };
    }

    // Strategy 2: Field selection (for objects; exclude null, which is
    // also typeof 'object')
    if (response !== null && typeof response === 'object') {
      const relevant = await this.selectRelevantFields(response, intent);
      return { data: relevant, reduction: this.calculateReduction(response, relevant) };
    }

    // Strategy 3: Text summarization (for large text responses)
    if (typeof response === 'string' && response.length > 5000) {
      const summary = await this.summarize(response, intent);
      return { data: summary, reduction: 1 - summary.length / response.length };
    }

    return { data: response, reduction: 0 };
  }

  private async filterArray(items: any[], intent: IntentAnalysis): Promise<any[]> {
    // Score each item for relevance
    const scored = await Promise.all(
      items.map(async (item) => ({
        item,
        score: await this.scoreRelevance(item, intent)
      }))
    );

    // Return top N most relevant
    return scored
      .sort((a, b) => b.score - a.score)
      .slice(0, intent.maxResults || 10)
      .map(s => s.item);
  }

  private async scoreRelevance(item: any, intent: IntentAnalysis): Promise<number> {
    const itemStr = JSON.stringify(item).toLowerCase();
    let score = 0;

    // Keyword matching
    for (const keyword of intent.keywords) {
      if (itemStr.includes(keyword.toLowerCase())) score += 1;
    }

    // Use LLM for deeper analysis if needed
    if (score === 0) {
      // scoreRelevance() is likewise an extension of LLMProvider
      score = await this.llm.scoreRelevance(item, intent.intent);
    }

    return score;
  }
}
```

Example filtering for Slack messages:

```typescript
// User asks: "Get Slack messages about security from my team"
const intent = {
  intent: 'Find security-related team messages',
  keywords: ['security', 'vulnerability', 'patch', 'CVE'],
  filters: { channels: ['team-*', 'security-*'] },
  maxResults: 20
};

// Filter 1000 messages down to 20 most relevant
```

**Test Strategy:**

Test intent analysis with various queries. Test filtering reduces data size significantly. Benchmark relevance accuracy. Test with real Slack/Jira data samples.

## Subtasks

### 13.1. Create FilterEngine core infrastructure with TDD and MockLLMProvider

**Status:** pending
**Dependencies:** None

Set up the services/filter-engine.ts file structure with TypeScript interfaces, Vitest test infrastructure, and MockLLMProvider for local testing without external API dependencies.

**Details:**

Create src/services/filter-engine.ts with core types and interfaces. Define IntentAnalysis interface: { intent: string, keywords: string[], filters: Record<string, any>, maxResults: number, confidence: number }. Define FilteredResponse interface: { data: any, reduction: number, metadata: FilterMetadata }. Define FilterMetadata for explainability: { originalItemCount: number, filteredItemCount: number, removedItems: RemovedItemExplanation[], filterStrategy: string, scoringLatencyMs: number }. Define RemovedItemExplanation: { item: any, reason: string, score: number, threshold: number }. Create MockLLMProvider in tests/mocks/mock-llm-provider.ts that returns deterministic responses based on input patterns - essential for CI/CD without GPU. Configure Vitest with coverage requirements (>90%). Create test fixtures in tests/fixtures/ with sample MCP requests/responses for Slack, Jira, database queries. Include createMockToolCallRequest(), createMockIntentAnalysis(), createTestFilterEngine() test utilities.

### 13.2. Implement analyzeIntent method with keyword extraction and configurable parameters

**Status:** pending
**Dependencies:** 13.1

Create the intent analysis system that interprets Claude's MCP tool calls to extract user intent, relevant keywords, filters, and maximum results using LLM-based analysis with configurable prompts.

**Details:**

Implement FilterEngine.analyzeIntent(request: ToolCallRequest): Promise<IntentAnalysis> method. Create IntentAnalyzer class in src/services/intent-analyzer.ts with configurable prompt templates per MCP tool type. Design prompt engineering for reliable JSON output: include examples, schema definition, and output format instructions. Implement keyword extraction with stemming/normalization for better matching. Add confidence scoring to intent analysis (0-1 scale) for downstream filtering decisions. Support tool-specific intent patterns: Slack (channels, date ranges, users), Jira (project, status, assignee), Database (tables, columns, aggregations). Create IntentAnalysisConfig: { promptTemplate: string, maxKeywords: number, includeNegativeKeywords: boolean, confidenceThreshold: number }. Allow data scientists to configure weights and thresholds per MCP type via JSON config file. Implement caching of intent analysis for identical requests (LRU cache with TTL). Add metrics: intent_analysis_latency_ms histogram.

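The LRU-with-TTL cache for repeated intent analyses can be built on a `Map`'s insertion order; a minimal sketch with an injectable clock (`LruTtlCache` is an illustrative name):

```typescript
class LruTtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private maxSize: number,
    private ttlMs: number,
    private now: () => number = Date.now
  ) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.entries.delete(key);
      return undefined;
    }
    // Re-insert so this key becomes most-recently-used.
    this.entries.delete(key);
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, { value, expiresAt: this.now() + this.ttlMs });
    if (this.entries.size > this.maxSize) {
      // Least-recently-used entry is the first key in insertion order.
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }
}
```

The cache key would be a stable hash of the tool name plus serialized arguments.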
### 13.3. Implement array filtering strategy with relevance scoring and explainability

**Status:** pending
**Dependencies:** 13.1, 13.2

Create the structural filtering strategy for array responses with intelligent relevance scoring, keyword matching, LLM-based deep analysis, and detailed explainability for why items were removed.

**Details:**

Implement FilterEngine.filterArray(items: any[], intent: IntentAnalysis): Promise<FilteredArrayResult> in src/services/filter-strategies/array-filter.ts. Create RelevanceScorer class with configurable scoring: (1) Keyword matching score with configurable weights per keyword, (2) Field importance weights (title > description > metadata), (3) LLM-based semantic scoring for items with zero keyword matches, (4) Composite scoring with normalization. Implement explainability: for each removed item, record { item, reason: 'keyword_score_below_threshold' | 'llm_relevance_low' | 'exceeded_max_results', score, threshold }. Return scored items sorted by relevance with top N based on intent.maxResults. Handle nested arrays recursively. Add A/B testing support: FilterArrayConfig.abTestId allows comparing scoring algorithms. Expose metrics: items_before, items_after, reduction_ratio, avg_score, scoring_latency_ms. Implement batch scoring optimization: score multiple items in single LLM call when possible.

### 13.4. Implement object field selection and text summarization strategies

**Status:** pending
**Dependencies:** 13.1, 13.2

Create filtering strategies for object responses (field selection based on relevance) and large text responses (intelligent summarization) with configurable thresholds and explainability.

**Details:**

Create src/services/filter-strategies/object-filter.ts with selectRelevantFields(obj: object, intent: IntentAnalysis): Promise<FilteredObjectResult>. Implement field relevance scoring: (1) Field name keyword matching, (2) Field value relevance to intent, (3) Configurable always-include fields per object type (e.g., 'id', 'timestamp'). Create FieldSelectionConfig: { preserveFields: string[], maxDepth: number, maxFields: number }. Track removed fields in explainability metadata. Create src/services/filter-strategies/text-filter.ts with summarize(text: string, intent: IntentAnalysis): Promise<SummarizedTextResult>. Implement intelligent summarization: (1) Detect text type (log file, documentation, code), (2) Apply appropriate summarization strategy, (3) Preserve critical information based on intent keywords. Summarization threshold: 5000 chars (configurable). Calculate reduction ratio: 1 - summary.length / original.length. Add metrics: fields_removed, text_reduction_ratio, summarization_latency_ms.

### 13.5. Implement streaming-compatible large dataset filtering with memory efficiency

**Status:** pending
**Dependencies:** 13.1, 13.3

Create filtering logic that integrates with Task 11's chunking/streaming system to handle 100K+ item datasets without loading all data into memory, using incremental scoring and progressive filtering.

**Details:**

Create src/services/filter-strategies/streaming-filter.ts integrating with PaginationManager from Task 11.6. Implement StreamingFilterEngine with methods: (1) createFilterStream(dataStream: AsyncIterable<any[]>, intent): AsyncIterable<FilteredChunk>, (2) processChunk(chunk: any[], runningState: FilterState): Promise<FilteredChunk>. Design FilterState to maintain: running top-N items with scores, min score threshold (dynamically adjusted), chunk index, total items processed. Implement progressive threshold adjustment: as more items are seen, raise threshold to maintain O(maxResults) memory. Use heap data structure for efficient top-N maintenance. Create ChunkedFilterResult: { chunk: any[], chunkIndex: number, runningReduction: number, isComplete: boolean }. Memory budget: configurable max memory for filter state (default 50MB). Add backpressure handling for slow downstream consumers. Expose metrics: chunks_processed, peak_memory_bytes, progressive_threshold.

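The heap-based top-N maintenance described above keeps memory at O(maxResults) regardless of how many chunks stream through; a sketch (`TopN` is an illustrative name, and the progressive threshold falls out of the heap root):

```typescript
class TopN<T> {
  private heap: { score: number; item: T }[] = []; // min-heap on score

  constructor(private n: number) {}

  // Current minimum score needed to enter the result set - this is the
  // dynamically rising threshold from the plan above.
  get threshold(): number {
    return this.heap.length < this.n ? -Infinity : this.heap[0].score;
  }

  offer(item: T, score: number): void {
    if (this.heap.length < this.n) {
      this.heap.push({ score, item });
      this.bubbleUp(this.heap.length - 1);
    } else if (score > this.heap[0].score) {
      this.heap[0] = { score, item }; // replace current worst keeper
      this.bubbleDown(0);
    }
  }

  items(): T[] {
    return [...this.heap].sort((a, b) => b.score - a.score).map((e) => e.item);
  }

  private bubbleUp(i: number): void {
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.heap[parent].score <= this.heap[i].score) break;
      [this.heap[parent], this.heap[i]] = [this.heap[i], this.heap[parent]];
      i = parent;
    }
  }

  private bubbleDown(i: number): void {
    for (;;) {
      const l = 2 * i + 1, r = 2 * i + 2;
      let min = i;
      if (l < this.heap.length && this.heap[l].score < this.heap[min].score) min = l;
      if (r < this.heap.length && this.heap[r].score < this.heap[min].score) min = r;
      if (min === i) break;
      [this.heap[min], this.heap[i]] = [this.heap[i], this.heap[min]];
      i = min;
    }
  }
}
```

Each incoming chunk is scored and offered item by item; items scoring below `threshold` can be skipped without an LLM call.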
### 13.6. Implement security layer preventing data leakage in filtered responses

**Status:** pending
**Dependencies:** 13.1, 13.3, 13.4

Create security middleware that sanitizes filtered responses to prevent accidental exposure of PII, credentials, or sensitive data, with configurable detection patterns and audit logging.

**Details:**

Create src/services/filter-security.ts with ResponseSanitizer class. Implement sensitive data detection: (1) Regex patterns for API keys, passwords, tokens (AWS, GitHub, Slack, etc.), (2) PII patterns: email, phone, SSN, credit card, IP addresses, (3) Custom patterns configurable per MCP type. Create SanitizationConfig: { redactPatterns: RegExp[], piiDetection: boolean, auditSensitiveAccess: boolean, allowlist: string[] }. Implement redaction strategies: full replacement with [REDACTED], partial masking (show last 4 chars), or removal. Create FilterSecurityAudit log entry when sensitive data detected: { timestamp, toolName, patternMatched, fieldPath, actionTaken }. Integrate with FilterEngine.filterResponse() as final step before returning. Prevent filtered items from 'leaking back' via explainability metadata - sanitize removed item summaries too. Add metrics: sensitive_data_detected_count, redactions_applied, audit_log_entries.

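Pattern-based redaction might start like this; the patterns shown are examples of the configurable list, not a complete catalogue, and real key formats vary:

```typescript
const REDACT_PATTERNS: { name: string; pattern: RegExp }[] = [
  { name: 'email', pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: 'aws_access_key', pattern: /AKIA[0-9A-Z]{16}/g },        // AWS access key ID shape
  { name: 'github_token', pattern: /ghp_[A-Za-z0-9]{36}/g },       // classic GitHub PAT shape
];

function redact(text: string): { text: string; matched: string[] } {
  const matched: string[] = [];
  let out = text;
  for (const { name, pattern } of REDACT_PATTERNS) {
    pattern.lastIndex = 0; // reset shared global-regex state between calls
    if (pattern.test(out)) {
      matched.push(name);
      pattern.lastIndex = 0;
      out = out.replace(pattern, '[REDACTED]');
    }
  }
  return { text: out, matched };
}
```

The `matched` list feeds the FilterSecurityAudit entry (patternMatched) and the sensitive_data_detected_count metric.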
### 13.7. Implement A/B testing framework and SRE metrics for filter performance monitoring

**Status:** pending
**Dependencies:** 13.1, 13.2, 13.3, 13.4, 13.5, 13.6

Create comprehensive A/B testing infrastructure for comparing filter strategies, plus Prometheus-compatible metrics exposure for SRE monitoring of filter performance and effectiveness.

**Details:**

Create src/services/filter-metrics.ts with FilterMetricsCollector exposing Prometheus metrics: filter_requests_total (counter by tool, strategy), filter_duration_seconds (histogram), items_before_filter (histogram), items_after_filter (histogram), reduction_ratio (histogram), scoring_latency_seconds (histogram by strategy), sensitive_data_detections_total (counter). Create src/services/ab-testing.ts with ABTestingFramework class. Methods: assignExperiment(requestId): ExperimentAssignment, recordOutcome(requestId, metrics): void, getExperimentResults(experimentId): ABTestResults. ExperimentConfig: { id, strategies: FilterStrategy[], trafficSplit: number[], startDate, endDate }. Persist experiment assignments and outcomes for analysis. Create ABTestResults: { experimentId, strategyResults: { strategy, avgReduction, avgLatency, sampleSize }[], statisticalSignificance }. Integrate with FilterEngine: check experiment assignment, use assigned strategy, record outcome metrics. Add /metrics HTTP endpoint serving Prometheus exposition format. Create Grafana dashboard JSON template for filter monitoring.
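One way to make `assignExperiment` both deterministic and proportional to `trafficSplit` is to hash the request ID into [0, 1) and pick the cumulative bucket. The hashing scheme below is an assumption, not mandated by the task spec:

```typescript
import { createHash } from 'node:crypto';

// Deterministic experiment assignment: same requestId always lands in the
// same bucket, and bucket sizes follow the configured traffic split.
interface ExperimentConfig {
  id: string;
  strategies: string[];
  trafficSplit: number[]; // fractions summing to 1, e.g. [0.5, 0.5]
}

export function assignStrategy(cfg: ExperimentConfig, requestId: string): string {
  const digest = createHash('sha256').update(`${cfg.id}:${requestId}`).digest();
  const u = digest.readUInt32BE(0) / 0x100000000; // uniform-ish in [0, 1)
  let cumulative = 0;
  for (let i = 0; i < cfg.strategies.length; i++) {
    cumulative += cfg.trafficSplit[i];
    if (u < cumulative) return cfg.strategies[i];
  }
  return cfg.strategies[cfg.strategies.length - 1]; // guard against float rounding
}
```

Determinism matters here: persisted assignments and replayed requests must agree on the strategy, or the recorded outcomes are not comparable.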
172
.taskmaster/tasks/task_014.md
Normal file
@@ -0,0 +1,172 @@
# Task ID: 14

**Title:** Implement Audit Logging System

**Status:** pending

**Dependencies:** 3, 6

**Priority:** medium

**Description:** Create comprehensive audit logging for all MCP operations including who ran what, when, and what data was accessed.

**Details:**

Create audit logging system:

```typescript
// services/audit-logger.ts
export class AuditLogger {
  constructor(private prisma: PrismaClient) {}

  async logMcpCall(params: {
    userId: string;
    sessionId: string;
    serverId: string;
    tool: string;
    arguments: any;
    responseSize: number;
    filteredSize: number;
    duration: number;
    success: boolean;
    error?: string;
  }) {
    await this.prisma.auditLog.create({
      data: {
        userId: params.userId,
        action: 'mcp_call',
        resource: `${params.serverId}:${params.tool}`,
        details: {
          sessionId: params.sessionId,
          arguments: params.arguments,
          responseSize: params.responseSize,
          filteredSize: params.filteredSize,
          // guard against division by zero for empty responses
          reductionPercent: params.responseSize > 0
            ? Math.round((1 - params.filteredSize / params.responseSize) * 100)
            : 0,
          duration: params.duration,
          success: params.success,
          error: params.error
        }
      }
    });
  }

  async logServerAction(params: {
    userId: string;
    action: 'start' | 'stop' | 'configure';
    serverId: string;
    details?: any;
  }) {
    await this.prisma.auditLog.create({
      data: {
        userId: params.userId,
        action: `server_${params.action}`,
        resource: params.serverId,
        details: params.details
      }
    });
  }

  async getAuditTrail(filters: {
    userId?: string;
    serverId?: string;
    action?: string;
    from?: Date;
    to?: Date;
    limit?: number;
  }) {
    return this.prisma.auditLog.findMany({
      where: {
        userId: filters.userId,
        resource: filters.serverId ? { contains: filters.serverId } : undefined,
        action: filters.action,
        timestamp: {
          gte: filters.from,
          lte: filters.to
        }
      },
      orderBy: { timestamp: 'desc' },
      take: filters.limit || 100,
      include: { user: true }
    });
  }
}

// CLI command for audit
program
  .command('audit')
  .description('View audit logs')
  .option('--user <userId>', 'Filter by user')
  .option('--server <serverId>', 'Filter by MCP server')
  .option('--since <date>', 'Show logs since date')
  .option('--limit <n>', 'Limit results', '50')
  .action(async (options) => {
    const logs = await client.get('/api/audit', options);
    console.table(logs.map(l => ({
      TIME: l.timestamp,
      USER: l.user?.email,
      ACTION: l.action,
      RESOURCE: l.resource
    })));
  });
```
**Test Strategy:**

Test audit log creation for all operation types. Test query filtering works correctly. Test log retention/cleanup. Verify sensitive data is not logged.

## Subtasks

### 14.1. Design audit log schema and write TDD tests for AuditLogger methods

**Status:** pending
**Dependencies:** None

Define the AuditLog Prisma schema with SIEM-compatible structure, correlation IDs, and date partitioning support. Write comprehensive Vitest tests for all AuditLogger methods BEFORE implementation.

**Details:**

Create src/mcpd/tests/unit/services/audit-logger.test.ts with TDD tests covering: (1) logMcpCall() creates audit record with correct fields including correlationId, sessionId, serverId, tool, sanitized arguments, responseSize, filteredSize, duration, success/error status; (2) logServerAction() logs start/stop/configure actions with serverId and details; (3) getAuditTrail() supports filtering by userId, serverId, action, date range, and limit; (4) Sensitive data scrubbing - verify arguments containing password, token, secret, apiKey, credentials patterns are redacted; (5) Structured JSON format compatible with Splunk/ELK (include timestamp in ISO8601, log level, correlation_id, user_id, action, resource fields). Create src/db/prisma/audit-log-schema.prisma addition with: correlationId (uuid), action (indexed), resource, details (Json), timestamp (indexed, for partitioning), userId (optional FK), responseTimeMs, success (boolean). Add @@index([timestamp, action]) and @@index([userId, timestamp]) for query performance per SRE requirements.
### 14.2. Implement AuditLogger service with async buffered writes and sensitive data scrubbing

**Status:** pending
**Dependencies:** 14.1

Implement the AuditLogger class with a high-throughput async write buffer, batch inserts to prevent performance impact, and comprehensive sensitive data scrubbing to prevent credential leakage.

**Details:**

Create src/mcpd/src/services/audit-logger.ts implementing: Constructor accepting PrismaClient with configurable buffer settings (bufferSize: 100 default, flushIntervalMs: 1000 default). Implement logMcpCall() that adds entry to in-memory buffer, triggers flush when buffer full. Implement logServerAction() similarly. Implement private flushBuffer() using prisma.auditLog.createMany() for batch inserts. Implement scrubSensitiveData(obj: unknown): unknown that recursively traverses objects and redacts values for keys matching patterns: /password/i, /token/i, /secret/i, /apiKey/i, /credentials/i, /authorization/i - replace with '[REDACTED]'. Add correlationId generation using crypto.randomUUID(). Implement getAuditTrail() with Prisma query supporting all filter parameters from task spec. Add graceful shutdown: flush remaining buffer before process exit. Performance consideration: Use setImmediate/process.nextTick for non-blocking buffer operations. Add JSDoc documenting the async nature and security guarantees.
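The recursive scrubber described above can be sketched directly from the spec's key patterns. The depth guard is an added assumption to protect against pathological or cyclic inputs:

```typescript
// Recursively redact values whose keys match the sensitive patterns from the
// task spec. Depth limit is a defensive assumption, not part of the spec.
const SENSITIVE_KEY = /password|token|secret|apiKey|credentials|authorization/i;

export function scrubSensitiveData(obj: unknown, depth = 0): unknown {
  if (depth > 32 || obj === null || typeof obj !== 'object') return obj;
  if (Array.isArray(obj)) return obj.map((v) => scrubSensitiveData(v, depth + 1));
  return Object.fromEntries(
    Object.entries(obj as Record<string, unknown>).map(([k, v]) => [
      k,
      SENSITIVE_KEY.test(k) ? '[REDACTED]' : scrubSensitiveData(v, depth + 1),
    ])
  );
}
```

Scrubbing happens before the entry enters the write buffer, so secrets never reach the database even if a flush fails and entries are retried.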
### 14.3. Implement audit query API with aggregation support for data analysts

**Status:** pending
**Dependencies:** 14.2

Create REST API endpoints for querying audit logs with aggregation capabilities (requests per user/server/time window) and export functionality (CSV/JSON) for data analyst dashboards and usage reports.

**Details:**

Create src/mcpd/src/routes/audit.ts with endpoints: GET /api/audit - paginated audit log query with filters (userId, serverId, action, from, to, limit, offset); GET /api/audit/aggregations - aggregation queries returning counts grouped by user, server, action, or time window (hourly/daily/weekly); GET /api/audit/export - export audit data as CSV or JSON file download with same filter support. Implement AuditQueryService in src/mcpd/src/services/audit-query.service.ts with methods: queryAuditLogs(filters: AuditFilters): Promise<PaginatedResult<AuditLog>>; getAggregations(groupBy: 'user' | 'server' | 'action' | 'hour' | 'day', filters: AuditFilters): Promise<AggregationResult[]>; exportToCsv(filters: AuditFilters): Promise<ReadableStream>; exportToJson(filters: AuditFilters): Promise<ReadableStream>. Use Prisma groupBy for aggregations. For CSV export, use streaming to handle large datasets without memory issues. Add rate limiting on export endpoint to prevent DoS. Write Zod schemas for all query parameters.
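The streaming CSV export can be sketched with a pull-based `Readable` that fetches one page per read, so large result sets never sit fully in memory. `fetchPage` is a stand-in for a cursor-paginated Prisma query, and the naive quoting here is an assumption (a real exporter needs full CSV escaping):

```typescript
import { Readable } from 'node:stream';

type Row = Record<string, string | number>;

// Pull-based CSV stream: header first, then one page of rows per read.
export function csvStream(
  columns: string[],
  fetchPage: (cursor: number) => Promise<Row[]>
): Readable {
  let cursor = 0;
  let headerSent = false;
  return new Readable({
    async read() {
      if (!headerSent) {
        headerSent = true;
        this.push(columns.join(',') + '\n');
        return;
      }
      const rows = await fetchPage(cursor);
      if (rows.length === 0) return this.push(null); // end of stream
      cursor += rows.length;
      for (const row of rows) {
        // naive quoting via JSON.stringify; real code must escape quotes/commas fully
        this.push(columns.map((c) => JSON.stringify(row[c] ?? '')).join(',') + '\n');
      }
    },
  });
}
```

Because the stream only fetches when the consumer reads, backpressure from a slow HTTP client naturally throttles the database queries.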
### 14.4. Implement CLI audit command with SRE-friendly output formats

**Status:** pending
**Dependencies:** 14.3

Create the mcpctl audit CLI command with filters, multiple output formats (table/json/yaml for SIEM integration), and tail-like streaming capability for real-time log monitoring.

**Details:**

Create src/cli/src/commands/audit.ts implementing CommandModule with: 'mcpctl audit' - list recent audit logs; 'mcpctl audit --user <userId>' - filter by user; 'mcpctl audit --server <serverId>' - filter by MCP server; 'mcpctl audit --since <date>' - logs since date (supports ISO8601, relative like '1h', '24h', '7d'); 'mcpctl audit --action <action>' - filter by action type; 'mcpctl audit --limit <n>' - limit results (default 50); 'mcpctl audit --output json' - JSON output for jq piping; 'mcpctl audit --output yaml' - YAML output; 'mcpctl audit --follow' - stream new logs in real-time (WebSocket or polling); 'mcpctl audit export --format csv --since 7d > audit.csv' - export to file. Table output format: TIME | USER | ACTION | RESOURCE (aligned columns). JSON output must be valid JSON array parseable by jq. Add --no-color flag for CI environments. Use chalk for colored output (green=success, red=error actions).
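Parsing the `--since` values described above (ISO8601 plus relative forms like '1h', '24h', '7d') can be sketched as follows; the helper name and the `now` parameter (for testability) are assumptions:

```typescript
// Accepts ISO8601 timestamps and relative offsets like '30s', '5m', '1h', '7d'.
export function parseSince(input: string, now: Date = new Date()): Date {
  const rel = /^(\d+)([smhd])$/.exec(input.trim());
  if (rel) {
    const n = Number(rel[1]);
    const unitMs = { s: 1_000, m: 60_000, h: 3_600_000, d: 86_400_000 }[
      rel[2] as 's' | 'm' | 'h' | 'd'
    ];
    return new Date(now.getTime() - n * unitMs);
  }
  const ts = Date.parse(input);
  if (Number.isNaN(ts)) throw new Error(`invalid --since value: ${input}`);
  return new Date(ts);
}
```

The command handler would convert `--since` to an absolute `from` date before calling GET /api/audit, keeping the API itself free of relative-date parsing.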
### 14.5. Implement log streaming to external SIEM systems and retention policy

**Status:** pending
**Dependencies:** 14.2

Add support for streaming audit logs to external systems (Splunk HEC, ELK/Elasticsearch, generic webhook) and implement a configurable log retention policy with automatic cleanup for compliance and storage management.

**Details:**

Create src/mcpd/src/services/audit-streaming.ts with AuditStreamingService supporting multiple destinations: SplunkHecDestination (HTTP Event Collector with HEC token auth), ElasticsearchDestination (bulk API with index templates), WebhookDestination (generic HTTP POST with configurable auth). Each destination implements the AuditDestination interface: send(logs: AuditLog[]): Promise<void>; healthCheck(): Promise<boolean>. Implement retry logic with exponential backoff for failed sends. Create src/mcpd/src/services/audit-retention.ts with AuditRetentionService: configure(policy: RetentionPolicy): void where policy includes retentionDays (default 90), archiveEnabled (boolean), archiveDestination (S3 path or local path). Implement cleanup job using node-cron: DELETE FROM audit_logs WHERE timestamp < NOW() - retentionDays (with batched deletes to avoid long locks). Add archiveBeforeDelete option that exports to configured destination before deletion. Add configuration in .env: AUDIT_RETENTION_DAYS, AUDIT_SPLUNK_HEC_URL, AUDIT_SPLUNK_HEC_TOKEN, AUDIT_ELASTICSEARCH_URL. Write unit tests mocking external services.
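The retry-with-exponential-backoff for failed SIEM sends can be sketched generically; the attempt counts, base delay, and jitter factor are illustrative defaults, not values from the spec:

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Retry fn with exponentially growing, jittered delays; rethrows the last
// error once attempts are exhausted.
export async function withBackoff<T>(
  fn: () => Promise<T>,
  { attempts = 5, baseMs = 200, maxMs = 10_000 } = {}
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const delay = Math.min(maxMs, baseMs * 2 ** i) * (0.5 + Math.random() / 2); // jitter
      await sleep(delay);
    }
  }
  throw lastErr;
}
```

A destination's `send()` would be wrapped in `withBackoff`, and a destination whose `healthCheck()` keeps failing could be temporarily skipped rather than retried on every batch.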
297
.taskmaster/tasks/task_015.md
Normal file
@@ -0,0 +1,297 @@
# Task ID: 15

**Title:** Create MCP Server Profiles Library

**Status:** pending

**Dependencies:** 4, 10

**Priority:** medium

**Description:** Build a library of pre-configured MCP server profiles for popular tools (Slack, Jira, GitHub, Terraform, etc.) with setup guides and permission templates.

**Details:**

Create comprehensive server definitions:

```typescript
// seed/mcp-servers.ts
export const mcpServerDefinitions = [
  {
    name: 'slack',
    type: 'slack',
    displayName: 'Slack',
    description: 'Access Slack channels, messages, and users',
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-slack'],
    envTemplate: {
      SLACK_BOT_TOKEN: {
        description: 'Slack Bot OAuth Token',
        required: true,
        secret: true,
        setupUrl: 'https://api.slack.com/apps',
        setupGuide: `## Slack MCP Setup\n\n1. Go to https://api.slack.com/apps\n2. Create new app or select existing\n3. Go to OAuth & Permissions\n4. Add scopes: channels:read, channels:history, users:read\n5. Install to workspace\n6. Copy Bot User OAuth Token`
      },
      SLACK_TEAM_ID: { description: 'Slack Team/Workspace ID', required: true }
    },
    defaultProfiles: [
      { name: 'read-only', config: { permissions: ['read'] } },
      { name: 'full-access', config: { permissions: ['read', 'write'] } }
    ]
  },
  {
    name: 'jira',
    type: 'jira',
    displayName: 'Jira',
    description: 'Access Jira issues, projects, and workflows',
    command: 'npx',
    args: ['-y', '@anthropic/mcp-server-jira'],
    envTemplate: {
      JIRA_URL: { description: 'Jira instance URL', required: true },
      JIRA_EMAIL: { description: 'Jira account email', required: true },
      JIRA_API_TOKEN: {
        description: 'Jira API Token',
        required: true,
        secret: true,
        setupUrl: 'https://id.atlassian.com/manage-profile/security/api-tokens',
        setupGuide: `## Jira API Token Setup\n\n1. Go to https://id.atlassian.com/manage-profile/security/api-tokens\n2. Click Create API token\n3. Give it a label (e.g., "mcpctl")\n4. Copy the token`
      }
    },
    defaultProfiles: [
      { name: 'read-only', config: { permissions: ['read'], projects: ['*'] } },
      { name: 'project-limited', config: { permissions: ['read', 'write'], projects: [] } }
    ]
  },
  {
    name: 'github',
    type: 'github',
    displayName: 'GitHub',
    description: 'Access GitHub repositories, issues, and PRs',
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-github'],
    envTemplate: {
      GITHUB_TOKEN: {
        description: 'GitHub Personal Access Token',
        required: true,
        secret: true,
        setupUrl: 'https://github.com/settings/tokens',
        setupGuide: `## GitHub PAT Setup\n\n1. Go to https://github.com/settings/tokens\n2. Generate new token (classic)\n3. Select scopes: repo, read:user\n4. Copy token`
      }
    }
  },
  {
    name: 'terraform-docs',
    type: 'terraform',
    displayName: 'Terraform Documentation',
    description: 'Access Terraform provider documentation',
    command: 'npx',
    args: ['-y', 'terraform-docs-mcp'],
    envTemplate: {},
    defaultProfiles: [
      { name: 'aws-only', config: { providers: ['aws'] } },
      { name: 'all-providers', config: { providers: ['*'] } }
    ]
  }
];
```
**Test Strategy:**

Verify all server definitions have required fields. Test setup guides render correctly. Test default profiles work with actual MCP servers.

## Subtasks

### 15.1. Define TypeScript types and write TDD tests for MCP server profile schemas

**Status:** pending
**Dependencies:** None

Create comprehensive TypeScript interfaces and Zod validation schemas for MCP server profile definitions, including tests for all validation rules before implementation.

**Details:**

Create src/shared/src/types/mcp-profiles.ts with TypeScript interfaces:

1. **Core Types**:
   - `McpServerDefinition` - Main server definition with name, type, displayName, description, command, args, envTemplate, defaultProfiles, networkRequirements
   - `EnvTemplateVariable` - Environment variable with description, required, secret, setupUrl, setupGuide, pattern (for validation)
   - `DefaultProfile` - Profile configuration with name, config object, minimumScopes array
   - `NetworkRequirement` - endpoints, ports, protocols for firewall documentation

2. **Zod Schemas** in src/shared/src/schemas/mcp-profiles.schema.ts:
   - Validate command is 'npx' or 'docker' or an absolute path
   - Validate envTemplate has at least one required variable for auth types
   - Validate secret fields don't appear in args array
   - Validate setupGuide is valid markdown with required sections
   - Validate minimumScopes for each profile type

3. **TDD Tests** in src/shared/src/__tests__/mcp-profiles.test.ts:
   - Test valid definitions pass schema validation
   - Test missing required fields fail validation
   - Test invalid command types are rejected
   - Test secret variable exposure in args is detected
   - Test setupGuide markdown structure validation
   - Test profile permission escalation detection
   - Test networkRequirements field validation

Export all types from src/shared/src/index.ts for use by other packages.
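The "secret fields don't appear in args" rule can be sketched as a plain predicate; in 15.1 this logic would live inside a Zod refinement on the McpServerDefinition schema. The interface shapes here are trimmed-down assumptions:

```typescript
// Trimmed-down shapes for illustration; the real types carry more fields.
interface EnvTemplateVariable {
  description: string;
  required?: boolean;
  secret?: boolean;
}
interface ServerDef {
  command: string;
  args: string[];
  envTemplate: Record<string, EnvTemplateVariable>;
}

// Returns the names of secret env variables that leak into the args array
// (e.g. passing a token on the command line, visible in `ps` output).
export function findSecretLeaks(def: ServerDef): string[] {
  return Object.entries(def.envTemplate)
    .filter(([, v]) => v.secret)
    .map(([name]) => name)
    .filter((name) => def.args.some((arg) => arg.includes(name)));
}
```

A schema refinement would fail validation whenever this list is non-empty, which is exactly what the "secret variable exposure in args is detected" test asserts.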
### 15.2. Implement DevOps/SaaS MCP server profiles (Slack, Jira, GitHub, Terraform)

**Status:** pending
**Dependencies:** 15.1

Create pre-configured MCP server profile definitions for common DevOps and SaaS tools with complete setup guides, minimum required scopes, and network requirements documentation.

**Details:**

Create src/mcpd/src/seed/mcp-servers/devops.ts with server definitions:

1. **Slack Profile**:
   - Command: npx -y @modelcontextprotocol/server-slack
   - Required scopes: channels:read, channels:history, users:read (READ), plus channels:write, chat:write (WRITE)
   - Network: api.slack.com:443/HTTPS, files.slack.com:443/HTTPS
   - Profiles: read-only (minimum), full-access (with write scopes)
   - Setup guide with step-by-step Slack app creation

2. **Jira Profile**:
   - Command: npx -y @anthropic/mcp-server-jira
   - Required scopes: read:jira-work, read:jira-user (READ), write:jira-work (WRITE)
   - Network: *.atlassian.net:443/HTTPS
   - Profiles: read-only, project-limited (with project filter config)
   - Setup guide for API token generation

3. **GitHub Profile**:
   - Command: npx -y @modelcontextprotocol/server-github
   - Required scopes: repo:read, read:user (READ), repo:write, workflow (WRITE)
   - Network: api.github.com:443/HTTPS, github.com:443/HTTPS
   - Profiles: read-only, contributor, admin
   - Setup guide for PAT creation with fine-grained tokens

4. **Terraform Docs Profile**:
   - Command: npx -y terraform-docs-mcp
   - No auth required (public docs)
   - Network: registry.terraform.io:443/HTTPS
   - Profiles: aws-only, azure-only, gcp-only, all-providers

Include mock validation endpoints for local testing in src/mcpd/src/seed/mcp-servers/__mocks__/devops-validators.ts
### 15.3. Implement Data Platform MCP server profiles (BigQuery, Snowflake, dbt Cloud, Databricks, Airflow)

**Status:** pending
**Dependencies:** 15.1

Create MCP server profile definitions for critical data platform tools with service account authentication patterns, connection string templates, and BI integration support.

**Details:**

Create src/mcpd/src/seed/mcp-servers/data-platform.ts with server definitions:

1. **BigQuery Profile**:
   - Command: npx -y @anthropic/mcp-server-bigquery (or community equivalent)
   - Auth: Service account JSON file upload
   - envTemplate: GOOGLE_APPLICATION_CREDENTIALS (path to JSON), BQ_PROJECT_ID
   - Network: bigquery.googleapis.com:443/HTTPS, storage.googleapis.com:443/HTTPS
   - Profiles: viewer (roles/bigquery.dataViewer), analyst (roles/bigquery.user), admin

2. **Snowflake Profile**:
   - Auth: Multi-step OAuth or key-pair authentication
   - envTemplate: SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER, SNOWFLAKE_WAREHOUSE, SNOWFLAKE_PRIVATE_KEY or SNOWFLAKE_PASSWORD
   - Connection string pattern: snowflake://<user>@<account>/<warehouse>
   - Network: <account>.snowflakecomputing.com:443/HTTPS
   - Profiles: reader, analyst, developer

3. **dbt Cloud Profile**:
   - Command: npx -y @dbt-labs/mcp-server-dbt (or community)
   - envTemplate: DBT_CLOUD_TOKEN, DBT_CLOUD_ACCOUNT_ID, DBT_CLOUD_PROJECT_ID
   - Network: cloud.getdbt.com:443/HTTPS
   - Profiles: viewer, developer, admin

4. **Databricks Profile**:
   - envTemplate: DATABRICKS_HOST, DATABRICKS_TOKEN, DATABRICKS_CLUSTER_ID (optional)
   - Network: <workspace>.azuredatabricks.net:443/HTTPS or <workspace>.cloud.databricks.com:443/HTTPS
   - Profiles: workspace-reader, job-runner, admin

5. **Apache Airflow Profile**:
   - envTemplate: AIRFLOW_URL, AIRFLOW_USERNAME, AIRFLOW_PASSWORD (basic) or AIRFLOW_API_KEY
   - Network: <airflow-host>:8080/HTTP or :443/HTTPS
   - Profiles: viewer, operator, admin

Include connection string builder utilities and validators for each platform.
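One of the connection string builders can be sketched for the Snowflake pattern above; the function name and URL-encoding choice are assumptions:

```typescript
// Builds snowflake://<user>@<account>/<warehouse> from env-template values,
// URL-encoding each component defensively.
export function snowflakeConnectionString(opts: {
  account: string;
  user: string;
  warehouse: string;
}): string {
  const enc = encodeURIComponent;
  return `snowflake://${enc(opts.user)}@${enc(opts.account)}/${enc(opts.warehouse)}`;
}
```

A matching validator would parse the string back and check the account identifier against the `<account>.snowflakecomputing.com` network requirement.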
### 15.4. Implement BI/Analytics tool MCP profiles (Tableau, Looker, Metabase, Grafana)

**Status:** pending
**Dependencies:** 15.1

Create MCP server profile definitions for BI and analytics visualization tools commonly used by data analysts for report automation and dashboard access.

**Details:**

Create src/mcpd/src/seed/mcp-servers/analytics.ts with server definitions:

1. **Tableau Profile**:
   - Auth: Personal Access Token (PAT) or connected app JWT
   - envTemplate: TABLEAU_SERVER_URL, TABLEAU_SITE_ID, TABLEAU_TOKEN_NAME, TABLEAU_TOKEN_SECRET
   - Network: <tableau-server>:443/HTTPS (Tableau Cloud: online.tableau.com)
   - Profiles: viewer (read dashboards), explorer (create workbooks), creator (full access)
   - Setup guide for PAT generation in Tableau account settings

2. **Looker Profile**:
   - Auth: API3 client credentials
   - envTemplate: LOOKER_BASE_URL, LOOKER_CLIENT_ID, LOOKER_CLIENT_SECRET
   - Network: <instance>.cloud.looker.com:443/HTTPS
   - Profiles: viewer, developer, admin
   - Setup guide for API3 key creation

3. **Metabase Profile**:
   - Auth: Session token or API key
   - envTemplate: METABASE_URL, METABASE_USERNAME, METABASE_PASSWORD or METABASE_API_KEY
   - Network: <metabase-host>:3000/HTTP or :443/HTTPS
   - Profiles: viewer, analyst, admin
   - Note: Self-hosted vs Cloud configuration differences

4. **Grafana Profile**:
   - Auth: API key or service account token
   - envTemplate: GRAFANA_URL, GRAFANA_API_KEY or GRAFANA_SERVICE_ACCOUNT_TOKEN
   - Network: <grafana-host>:3000/HTTP or :443/HTTPS
   - Profiles: viewer, editor, admin
   - Setup guide for service account token creation

All profiles should include query/export permissions appropriate for analyst workflows (read dashboards, export data, schedule reports where supported).
### 15.5. Create profile registry, validation service, and network requirements documentation generator

**Status:** pending
**Dependencies:** 15.2, 15.3, 15.4

Build the central profile registry that indexes all MCP server definitions, provides validation services, and generates network requirements documentation for firewall planning.

**Details:**

Create src/mcpd/src/services/mcp-profile-registry.ts:

1. **McpProfileRegistry Class**:
   - `getAllDefinitions()` - Returns all registered MCP server definitions
   - `getDefinitionByName(name: string)` - Lookup by server name
   - `getDefinitionsByCategory(category: 'devops' | 'data-platform' | 'analytics')` - Filter by category
   - `searchDefinitions(query: string)` - Search by name, description, or tags
   - `validateDefinition(def: McpServerDefinition)` - Validate against Zod schema
   - `registerCustomDefinition(def: McpServerDefinition)` - Add user-defined servers

2. **ProfileValidationService** in src/mcpd/src/services/profile-validation.ts:
   - `validateCredentials(serverName: string, env: Record<string, string>)` - Test credentials with mock endpoints
   - `checkMinimumScopes(serverName: string, profile: string)` - Verify profile has required scopes
   - `detectPermissionEscalation(base: string[], requested: string[])` - Security check for scope expansion

3. **NetworkDocsGenerator** in src/mcpd/src/services/network-docs-generator.ts:
   - `generateFirewallRules(serverNames: string[])` - Output firewall rules in various formats (iptables, AWS SG, Azure NSG)
   - `generateNetworkDiagram(projectName: string)` - Mermaid diagram of network flows
   - `exportToCSV()` - Export all endpoints/ports/protocols for firewall team

4. **Seed Database Integration**:
   - Update src/mcpd/src/seed/index.ts to load all profile definitions
   - Create `seedMcpServerLibrary()` function that populates database from profile registry
   - Support incremental updates when new profiles are added

Export registry and services from src/mcpd/src/index.ts
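The iptables branch of `generateFirewallRules` can be sketched as below. The `NetworkRequirement` shape mirrors the 15.1 types, and the rule syntax is intentionally simplified (hostnames in `-d` are resolved once at rule load time, so real output may prefer resolved CIDRs):

```typescript
interface NetworkRequirement {
  host: string;
  port: number;
  protocol: 'tcp' | 'udp';
}

// One egress ACCEPT rule per requirement, tagged with an mcpctl comment so
// generated rules can be found and replaced on regeneration.
export function toIptablesRules(serverName: string, reqs: NetworkRequirement[]): string[] {
  return reqs.map(
    (r) =>
      `-A OUTPUT -p ${r.protocol} -d ${r.host} --dport ${r.port} -j ACCEPT ` +
      `-m comment --comment "mcpctl:${serverName}"`
  );
}
```

The AWS SG and Azure NSG formats would be separate emitters over the same `NetworkRequirement[]` input, which is the point of keeping requirements declarative in the profile definitions.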
168
.taskmaster/tasks/task_016.md
Normal file
@@ -0,0 +1,168 @@
# Task ID: 16

**Title:** Implement Instance Lifecycle Management

**Status:** pending

**Dependencies:** 6, 8

**Priority:** medium

**Description:** Create APIs and commands for managing MCP server instance lifecycle: start, stop, restart, status, and health monitoring.

**Details:**

Create instance management:

```typescript
// routes/instances.ts
app.post('/api/instances', async (req) => {
  const { profileId } = req.body;
  const profile = await prisma.mcpProfile.findUnique({
    where: { id: profileId },
    include: { server: true }
  });

  const containerManager = new ContainerManager();
  const containerId = await containerManager.startMcpServer(profile.server, profile.config);

  const instance = await prisma.mcpInstance.create({
    data: {
      serverId: profile.serverId,
      containerId,
      status: 'running',
      config: profile.config
    }
  });

  await auditLogger.logServerAction({
    userId: req.user.id,
    action: 'start',
    serverId: profile.server.name,
    details: { instanceId: instance.id, containerId }
  });

  return instance;
});

app.delete('/api/instances/:id', async (req) => {
  const instance = await prisma.mcpInstance.findUnique({ where: { id: req.params.id } });
  const containerManager = new ContainerManager();
  await containerManager.stopMcpServer(instance.containerId);
  await prisma.mcpInstance.delete({ where: { id: req.params.id } });
});

app.post('/api/instances/:id/restart', async (req) => {
  const instance = await prisma.mcpInstance.findUnique({
    where: { id: req.params.id },
    include: { server: true }
  });
  const containerManager = new ContainerManager();
  await containerManager.stopMcpServer(instance.containerId);
  const newContainerId = await containerManager.startMcpServer(instance.server, instance.config);
  return prisma.mcpInstance.update({
    where: { id: req.params.id },
    data: { containerId: newContainerId, status: 'running' }
  });
});

// Health monitoring
app.get('/api/instances/:id/health', async (req) => {
  const instance = await prisma.mcpInstance.findUnique({ where: { id: req.params.id } });
  const containerManager = new ContainerManager();
  const status = await containerManager.getMcpServerStatus(instance.containerId);
  const logs = await containerManager.getContainerLogs(instance.containerId, { tail: 50 });
  return { status, logs, lastChecked: new Date() };
});

// CLI commands
program
  .command('start')
  .argument('<profile>', 'Profile ID')
  .action(async (profileId) => {
    // body key matches the { profileId } the POST handler destructures
    const instance = await client.post('/api/instances', { profileId });
    console.log(`Started instance ${instance.id}`);
  });

program
  .command('stop')
  .argument('<instance-id>', 'Instance ID')
  .action(async (id) => {
    await client.delete(`/api/instances/${id}`);
    console.log(`Stopped instance ${id}`);
  });

program
  .command('logs')
  .argument('<instance-id>', 'Instance ID')
  .option('-f, --follow', 'Follow logs')
  .action(async (id, options) => {
    if (options.follow) {
      // Stream logs
    } else {
      const { logs } = await client.get(`/api/instances/${id}/health`);
      console.log(logs);
    }
  });
```
**Test Strategy:**

Test instance start/stop/restart lifecycle. Test health monitoring updates status correctly. Test logs streaming. Integration test with real Docker containers.

## Subtasks

### 16.1. Write TDD test suites for Instance Lifecycle API endpoints

**Status:** pending
**Dependencies:** None

Create comprehensive Vitest test suites for all instance lifecycle endpoints (POST /api/instances, DELETE /api/instances/:id, POST /api/instances/:id/restart, GET /api/instances/:id/health, GET /api/instances/:id/logs) BEFORE implementation using mocked ContainerManager and Prisma.

**Details:**

Write comprehensive Vitest tests following TDD methodology for all instance lifecycle API endpoints. Tests must cover: (1) POST /api/instances - successful instance creation from profile, invalid profileId handling, ContainerManager.startMcpServer mock expectations, audit logging verification; (2) DELETE /api/instances/:id - successful stop and cleanup, non-existent instance handling, containerId validation to prevent targeting unmanaged containers; (3) POST /api/instances/:id/restart - graceful shutdown with drainTimeout for data pipelines, proper sequencing of stop/start, config preservation; (4) GET /api/instances/:id/health - Prometheus-compatible metrics format, liveness/readiness probe responses, alerting threshold configuration (unhealthy for N minutes), JSON health object structure; (5) GET /api/instances/:id/logs - pagination with cursor, log injection prevention (sanitize ANSI codes and control characters), tail parameter validation. Use msw or vitest-fetch-mock for request mocking. All tests should fail initially (TDD red phase). Include security tests: validate containerId format (UUIDs only), reject path traversal in instance IDs, verify only containers with mcpctl labels can be controlled.
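The log-injection defence those tests assert can be sketched as a one-line-at-a-time sanitizer; the exact character classes kept (newline and tab survive) are an assumption:

```typescript
// Strip ANSI escape sequences, then remaining control characters and null
// bytes, keeping \n and \t so multi-line log output stays readable.
const ANSI = /\x1b\[[0-9;]*[A-Za-z]/g;
const CONTROL = /[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]/g;

export function sanitizeLogLine(line: string): string {
  return line.replace(ANSI, '').replace(CONTROL, '');
}
```

Applying this at the API boundary means a malicious MCP server cannot smuggle cursor-movement or color codes into an operator's terminal via `mcpctl logs`.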
|
||||
|
||||
### 16.2. Write TDD test suites for CLI instance management commands

**Status:** pending
**Dependencies:** None

Create Vitest test suites for the CLI commands (start, stop, restart, logs, status) BEFORE implementation, testing argument parsing, API client calls, output formatting, and WebSocket/SSE log streaming.

**Details:**

Write comprehensive Vitest tests for all CLI commands following TDD methodology:

1. 'mcpctl start <profile>' - test profile name validation, the API call to POST /api/instances, success/error output formatting, instance ID display
2. 'mcpctl stop <instance-id>' - test instance ID format validation, the API call to DELETE /api/instances/:id, graceful shutdown with the --drain-timeout flag for data pipeline instances, confirmation prompt (--yes to skip)
3. 'mcpctl restart <instance-id>' - test restart with optional --drain-timeout, the API call to POST /api/instances/:id/restart
4. 'mcpctl logs <instance-id>' - test the -f/--follow flag for streaming, the --tail N option, the --since timestamp option, the WebSocket connection for live streaming, graceful disconnect handling
5. 'mcpctl status <instance-id>' - test health status display, readiness/liveness indicators, uptime calculation, JSON output format

Test network boundary scenarios: WebSocket reconnection on disconnect, SSE fallback when WebSocket is unavailable, proxy-friendly streaming options. Include exit code tests for scripting compatibility.

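The reconnection behavior described above can be driven by a small backoff schedule. A minimal sketch, assuming a 500 ms base delay and a 30 s cap (both values are assumptions, not part of the task spec):

```typescript
// Exponential backoff with a cap, as exercised by the WebSocket
// disconnect/reconnect tests: 500ms, 1s, 2s, 4s, ... up to 30s.
function reconnectDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  const delay = baseMs * 2 ** attempt;
  return Math.min(delay, maxMs);
}
```

A test can assert the schedule deterministically without opening any sockets.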
### 16.3. Implement Instance Lifecycle API endpoints with security and audit logging

**Status:** pending
**Dependencies:** 16.1

Implement all instance lifecycle API endpoints (create, stop, restart, health, logs) so that the TDD tests from subtask 16.1 pass, with security validation, graceful shutdown support, and comprehensive audit logging integration.

**Details:**

Implement routes/instances.ts with all lifecycle endpoints:

1. POST /api/instances - validate that the profileId exists, call ContainerManager.startMcpServer with the profile config, create a McpInstance record in Prisma, emit an audit log via auditLogger.logServerAction({action: 'start', ...})
2. DELETE /api/instances/:id - validate that the instance exists and the containerId format is a UUID, verify the container has mcpctl management labels before stopping, call ContainerManager.stopMcpServer with a configurable drainTimeout for graceful shutdown of data pipelines, delete the McpInstance record, emit an audit log
3. POST /api/instances/:id/restart - implement atomic restart as stop-then-start, preserve config across the restart, support a drainTimeout query parameter for a graceful drain before restarting
4. GET /api/instances/:id/health - call ContainerManager.getMcpServerStatus and getHealthStatus, return a structured health object with {status, lastChecked, readiness, liveness, consecutiveFailures, alertThreshold} in a format compatible with Prometheus/Grafana alerting
5. GET /api/instances/:id/logs - call ContainerManager.getContainerLogs with cursor-based pagination, sanitize log output to prevent log injection (strip ANSI escape sequences, null bytes, control characters), support an ELK/Loki-compatible structured JSON format

Implement security middleware that validates all containerIds are managed by mcpctl.

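The structured health object named in endpoint (4) could be assembled by a helper like this. The thresholding semantics (readiness flips once consecutive failures reach the alert threshold; liveness requires zero failures) are an assumption for illustration, not taken from the task spec:

```typescript
type HealthStatus = 'healthy' | 'unhealthy';

interface HealthReport {
  status: HealthStatus;
  lastChecked: string;        // ISO timestamp
  readiness: boolean;         // ready to serve traffic
  liveness: boolean;          // process alive and responding
  consecutiveFailures: number;
  alertThreshold: number;     // failures before alerting fires
}

// Builds the JSON health object returned by GET /api/instances/:id/health.
function buildHealthReport(
  consecutiveFailures: number,
  alertThreshold: number,
  now: Date
): HealthReport {
  const unhealthy = consecutiveFailures >= alertThreshold;
  return {
    status: unhealthy ? 'unhealthy' : 'healthy',
    lastChecked: now.toISOString(),
    readiness: !unhealthy,
    liveness: consecutiveFailures === 0,
    consecutiveFailures,
    alertThreshold,
  };
}
```

Keeping this pure makes the "unhealthy for N consecutive checks" behavior trivially unit-testable before wiring it to ContainerManager.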
### 16.4. Implement CLI commands for instance lifecycle with streaming log support

**Status:** pending
**Dependencies:** 16.2, 16.3

Implement the CLI commands (start, stop, restart, logs, status) so that the TDD tests from subtask 16.2 pass, including WebSocket/SSE log streaming that works across network boundaries.

**Details:**

Implement commands/instances.ts with all CLI commands:

1. 'start <profile>' - call client.post('/api/instances', {profileName: profile}), display the instance ID and status, exit code 0 on success
2. 'stop <instance-id>' - prompt for confirmation unless the --yes flag is set, support --drain-timeout <seconds> for graceful data pipeline shutdown, call client.delete(`/api/instances/${id}`), display a stop confirmation
3. 'restart <instance-id>' - support the --drain-timeout flag, call client.post(`/api/instances/${id}/restart`), display the new container ID
4. 'logs <instance-id>' - implement dual transport: WebSocket primary with SSE fallback for proxy-friendly environments; -f/--follow opens a WebSocket connection to /api/instances/:id/logs/stream; support a --tail N parameter (default 50) and a --since timestamp filter; handle reconnection on disconnect with exponential backoff; gracefully handle Ctrl+C
5. 'status <instance-id>' - call GET /api/instances/:id/health, display formatted health info with readiness/liveness indicators, support -o json output

Implement a WebSocket client that works through corporate proxies (use HTTP upgrade with proper headers). For non-streaming logs, paginate through the cursor-based API.

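For the scripting-compatibility exit-code tests mentioned in subtask 16.2, a possible status-to-exit-code mapping might look like the following. The specific codes here are assumptions for illustration; the task spec only requires that exit codes be stable enough to script against:

```typescript
// Hypothetical mapping from an API error to a process exit code.
// Code values are illustrative, not defined by the task spec.
function exitCodeFor(error?: { status?: number }): number {
  if (!error) return 0;               // success
  if (error.status === 404) return 3; // instance not found
  if (error.status === 401) return 4; // authentication failure
  return 1;                           // generic failure
}
```

Whatever codes are chosen, documenting them in the CLI reference lets shell scripts branch on `$?` reliably.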
### 16.5. Create integration tests and docker-compose environment for instance lifecycle

**Status:** pending
**Dependencies:** 16.3, 16.4

Build a comprehensive integration test suite that exercises the complete instance lifecycle against real Docker containers, including health monitoring with alerting thresholds and log streaming across network boundaries.

**Details:**

Create the integration test suite in tests/integration/instance-lifecycle.test.ts:

1. Full lifecycle test - create an instance from a profile, verify the container is running with 'docker ps', check the health endpoint returns a running status, stream logs in follow mode, restart the instance (verify the old container stopped and a new container is running), stop with a drain timeout, verify the container is removed
2. Health monitoring tests - configure an alerting threshold (e.g., 3 consecutive failures), simulate an unhealthy container, verify the health endpoint returns the correct consecutiveFailures count, test the readiness probe (container ready to serve) and the liveness probe (container process alive), verify Prometheus-format metrics are exportable at /metrics
3. Log streaming integration - test that WebSocket streaming receives live container output, test the SSE fallback when WebSocket is unavailable, test that the log format is ELK/Loki compatible (JSON with timestamp, level, and message fields), verify log injection prevention (send malicious log content, verify sanitized output)
4. Data pipeline graceful shutdown - create a long-running instance simulating data processing, send stop with a drain timeout, verify the container receives SIGTERM and has a grace period before SIGKILL
5. Network boundary tests - configure a proxy simulation and verify log streaming works through the proxy

Update docker-compose.yml to include a test-mcp-server with configurable logging behavior.

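One possible shape for the test-mcp-server service mentioned above. The service name comes from the task; the build path, environment variables, and label key are illustrative assumptions:

```yaml
  test-mcp-server:
    build: ./tests/fixtures/test-mcp-server
    environment:
      LOG_FORMAT: json        # emit ELK/Loki-compatible structured logs
      LOG_RATE_MS: "100"      # configurable log emission interval
      FAIL_HEALTH_AFTER: "0"  # >0 simulates an unhealthy container
    labels:
      mcpctl.io/managed: "true"  # only labeled containers may be controlled
```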
443
.taskmaster/tasks/task_017.md
Normal file
# Task ID: 17

**Title:** Implement Kubernetes Support Architecture

**Status:** pending

**Dependencies:** 6, 16

**Priority:** low

**Description:** Design and implement the abstraction layer for Kubernetes deployment support, preparing for future pod scheduling of MCP instances.

**Details:**

Create an orchestrator abstraction:

```typescript
// services/orchestrator.ts
export interface McpOrchestrator {
  startServer(server: McpServer, config: any): Promise<string>;
  stopServer(instanceId: string): Promise<void>;
  getStatus(instanceId: string): Promise<InstanceStatus>;
  getLogs(instanceId: string, options: LogOptions): Promise<string>;
  listInstances(filters?: InstanceFilters): Promise<Instance[]>;
}

// Docker implementation (current)
export class DockerOrchestrator implements McpOrchestrator {
  private docker: Docker;
  // ... existing Docker implementation
}

// Kubernetes implementation (future-ready)
export class KubernetesOrchestrator implements McpOrchestrator {
  private k8sClient: KubernetesClient;

  constructor(config: K8sConfig) {
    this.k8sClient = new KubernetesClient(config);
  }

  async startServer(server: McpServer, config: any): Promise<string> {
    const pod = {
      apiVersion: 'v1',
      kind: 'Pod',
      metadata: {
        name: `mcp-${server.name}-${Date.now()}`,
        labels: {
          'mcpctl.io/server': server.name,
          'mcpctl.io/managed': 'true'
        }
      },
      spec: {
        containers: [{
          name: 'mcp-server',
          image: server.image || 'node:20-alpine',
          command: this.buildCommand(server),
          env: this.buildEnvVars(config),
          resources: {
            requests: { memory: '128Mi', cpu: '100m' },
            limits: { memory: '512Mi', cpu: '500m' }
          }
        }],
        restartPolicy: 'Always'
      }
    };

    const created = await this.k8sClient.createPod(pod);
    return created.metadata.name;
  }

  // ... other K8s implementations
}

// Factory based on configuration
export function createOrchestrator(config: OrchestratorConfig): McpOrchestrator {
  switch (config.type) {
    case 'docker': return new DockerOrchestrator(config.docker);
    case 'kubernetes': return new KubernetesOrchestrator(config.kubernetes);
    default: throw new Error(`Unknown orchestrator: ${config.type}`);
  }
}
```

Configuration:

```yaml
orchestrator:
  type: docker # or 'kubernetes'
  docker:
    socketPath: /var/run/docker.sock
  kubernetes:
    namespace: mcpctl
    kubeconfig: /path/to/kubeconfig
```

**Test Strategy:**

Unit test orchestrator interface compliance for both implementations. Integration-test the Docker implementation. Mock the Kubernetes API for the K8s implementation tests.

## Subtasks

### 17.1. Define K8s-specific interfaces and write TDD tests for KubernetesOrchestrator

**Status:** pending
**Dependencies:** None

Extend the McpOrchestrator interface (from Task 6) with Kubernetes-specific types and write comprehensive Vitest unit tests for all KubernetesOrchestrator methods BEFORE implementation, using a mocked @kubernetes/client-node.

**Details:**

Create src/shared/src/types/kubernetes.ts with K8s-specific types:

```typescript
import { McpOrchestrator, McpServer, InstanceStatus, LogOptions, InstanceFilters } from './orchestrator';

export interface K8sConfig {
  namespace: string;
  kubeconfig?: string; // Path to kubeconfig file
  inCluster?: boolean; // Use in-cluster config
  context?: string;    // Specific kubeconfig context
}

export interface K8sPodMetadata {
  name: string;
  namespace: string;
  labels: Record<string, string>;
  annotations: Record<string, string>;
  uid: string;
}

export interface K8sResourceRequirements {
  requests: { memory: string; cpu: string };
  limits: { memory: string; cpu: string };
}

export interface K8sSecurityContext {
  runAsNonRoot: boolean;
  runAsUser: number;
  readOnlyRootFilesystem: boolean;
  allowPrivilegeEscalation: boolean;
  capabilities: { drop: string[] };
}
```

Create src/mcpd/tests/unit/services/kubernetes-orchestrator.test.ts with comprehensive TDD tests:

1. Constructor tests: verify kubeconfig loading (file path vs. in-cluster), namespace validation, error handling for missing config
2. startServer() tests: verify the generated Pod spec includes the security context, resource limits, labels, command building, env vars
3. stopServer() tests: verify graceful pod termination, waiting for completion, error handling for non-existent pods
4. getStatus() tests: verify status mapping from K8s pod phases (Pending, Running, Succeeded, Failed, Unknown) to InstanceStatus
5. getLogs() tests: verify log options (tail, follow, since, timestamps) are mapped correctly to the K8s log API
6. listInstances() tests: verify label selector filtering works, pagination handling for large deployments

Mock the @kubernetes/client-node CoreV1Api using vitest.mock() with proper type definitions. All tests should fail initially (TDD red phase).

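The phase-to-status mapping exercised by the getStatus() tests (item 4 above) might look like this. The InstanceStatus string values shown are assumed for illustration; the real enum lives in the shared orchestrator types from Task 6:

```typescript
// Assumed status values; substitute the shared InstanceStatus type.
type InstanceStatus = 'pending' | 'running' | 'stopped' | 'failed' | 'unknown';

// Maps a K8s pod phase (spec-defined: Pending, Running, Succeeded,
// Failed, Unknown) onto the orchestrator's instance status.
function mapPodPhase(phase: string | undefined): InstanceStatus {
  switch (phase) {
    case 'Pending':   return 'pending';
    case 'Running':   return 'running';
    case 'Succeeded': return 'stopped';
    case 'Failed':    return 'failed';
    default:          return 'unknown'; // includes 'Unknown' and missing phase
  }
}
```

Because the mapping is pure, the TDD suite can cover every phase without touching a cluster.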
### 17.2. Implement KubernetesOrchestrator class with Pod security contexts and resource management

**Status:** pending
**Dependencies:** 17.1

Implement the KubernetesOrchestrator class using @kubernetes/client-node, with all methods passing the TDD tests from subtask 17.1, including SRE-approved pod security contexts, resource requests/limits, and proper label conventions.

**Details:**

Install @kubernetes/client-node in src/mcpd. Create src/mcpd/src/services/kubernetes-orchestrator.ts:

```typescript
import * as k8s from '@kubernetes/client-node';
import { McpOrchestrator, McpServer, InstanceStatus, LogOptions, InstanceFilters, Instance } from '@mcpctl/shared';
import { K8sConfig, K8sSecurityContext, K8sResourceRequirements } from '@mcpctl/shared';

export class KubernetesOrchestrator implements McpOrchestrator {
  private coreApi: k8s.CoreV1Api;
  private namespace: string;

  constructor(config: K8sConfig) {
    const kc = new k8s.KubeConfig();
    if (config.inCluster) {
      kc.loadFromCluster();
    } else if (config.kubeconfig) {
      kc.loadFromFile(config.kubeconfig);
    } else {
      kc.loadFromDefault();
    }
    if (config.context) kc.setCurrentContext(config.context);
    this.coreApi = kc.makeApiClient(k8s.CoreV1Api);
    this.namespace = config.namespace;
  }

  async startServer(server: McpServer, config: any): Promise<string> {
    const podName = `mcp-${server.name}-${Date.now()}`;
    const pod: k8s.V1Pod = {
      apiVersion: 'v1',
      kind: 'Pod',
      metadata: {
        name: podName,
        namespace: this.namespace,
        labels: {
          'mcpctl.io/server': server.name,
          'mcpctl.io/managed': 'true',
          'app.kubernetes.io/name': `mcp-${server.name}`,
          'app.kubernetes.io/component': 'mcp-server',
          'app.kubernetes.io/managed-by': 'mcpctl'
        },
        annotations: {
          'mcpctl.io/created-at': new Date().toISOString()
        }
      },
      spec: {
        containers: [{
          name: 'mcp-server',
          image: server.image || 'node:20-alpine',
          command: this.buildCommand(server),
          env: this.buildEnvVars(config),
          resources: this.getResourceRequirements(config),
          securityContext: this.getSecurityContext()
        }],
        securityContext: { runAsNonRoot: true, runAsUser: 1000, fsGroup: 1000 },
        restartPolicy: 'Always',
        serviceAccountName: config.serviceAccount || 'default'
      }
    };
    const created = await this.coreApi.createNamespacedPod(this.namespace, pod);
    return created.body.metadata!.name!;
  }

  private getSecurityContext(): k8s.V1SecurityContext {
    return {
      runAsNonRoot: true,
      runAsUser: 1000,
      readOnlyRootFilesystem: true,
      allowPrivilegeEscalation: false,
      capabilities: { drop: ['ALL'] }
    };
  }

  private getResourceRequirements(config: any): k8s.V1ResourceRequirements {
    return {
      requests: { memory: config.memoryRequest || '128Mi', cpu: config.cpuRequest || '100m' },
      limits: { memory: config.memoryLimit || '512Mi', cpu: config.cpuLimit || '500m' }
    };
  }

  // ... implement stopServer, getStatus, getLogs, listInstances
}
```

Implement all remaining methods with proper error handling and K8s API error translation.

### 17.3. Implement createOrchestrator factory function and configuration schema

**Status:** pending
**Dependencies:** 17.2

Create the orchestrator factory function that instantiates DockerOrchestrator or KubernetesOrchestrator based on configuration, with Zod schema validation and configuration file support.

**Details:**

Create src/mcpd/src/services/orchestrator-factory.ts:

```typescript
import { z } from 'zod';
import { McpOrchestrator } from '@mcpctl/shared';
import { DockerOrchestrator } from './container-manager'; // From Task 6
import { KubernetesOrchestrator } from './kubernetes-orchestrator';

const DockerConfigSchema = z.object({
  socketPath: z.string().default('/var/run/docker.sock'),
  host: z.string().optional(),
  port: z.number().optional(),
  network: z.string().default('mcpctl-network')
});

const KubernetesConfigSchema = z.object({
  namespace: z.string().default('mcpctl'),
  kubeconfig: z.string().optional(),
  inCluster: z.boolean().default(false),
  context: z.string().optional()
});

const OrchestratorConfigSchema = z.discriminatedUnion('type', [
  z.object({ type: z.literal('docker'), docker: DockerConfigSchema }),
  z.object({ type: z.literal('kubernetes'), kubernetes: KubernetesConfigSchema })
]);

export type OrchestratorConfig = z.infer<typeof OrchestratorConfigSchema>;

export function createOrchestrator(config: OrchestratorConfig): McpOrchestrator {
  const validated = OrchestratorConfigSchema.parse(config);
  switch (validated.type) {
    case 'docker':
      return new DockerOrchestrator(validated.docker);
    case 'kubernetes':
      return new KubernetesOrchestrator(validated.kubernetes);
    default:
      throw new Error('Unknown orchestrator type');
  }
}
```

Create src/mcpd/src/config/orchestrator.ts for loading config from environment variables and config files (supporting both YAML and JSON). Write TDD tests in src/mcpd/tests/unit/services/orchestrator-factory.test.ts BEFORE implementation:

1. Test the factory creates a DockerOrchestrator when type='docker'
2. Test the factory creates a KubernetesOrchestrator when type='kubernetes'
3. Test the factory throws on an invalid type
4. Test Zod validation rejects invalid configs
5. Test default values are applied correctly
6. Test config loading from the MCPCTL_ORCHESTRATOR_TYPE env var

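A minimal sketch of the env-var loading covered by test 6 above. Defaulting to 'docker' when the variable is unset is an assumption for illustration:

```typescript
// Reads MCPCTL_ORCHESTRATOR_TYPE and validates it against the two
// orchestrator types the factory supports. The config-file loader would
// supply the nested docker/kubernetes settings.
function orchestratorTypeFromEnv(
  env: Record<string, string | undefined>
): 'docker' | 'kubernetes' {
  const type = env.MCPCTL_ORCHESTRATOR_TYPE ?? 'docker'; // assumed default
  if (type !== 'docker' && type !== 'kubernetes') {
    throw new Error(`Unknown orchestrator type: ${type}`);
  }
  return type;
}
```

Passing the environment as a parameter (rather than reading process.env directly) keeps the loader unit-testable.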
### 17.4. Implement K8s NetworkPolicy and PersistentVolumeClaim builders for MCP server isolation

**Status:** pending
**Dependencies:** 17.2

Create resource builders for Kubernetes NetworkPolicy (network isolation between MCP servers) and PersistentVolumeClaim (for stateful data MCPs such as caching or GPU providers), with proper annotations for observability.

**Details:**

Create src/mcpd/src/services/k8s-resources.ts with resource builder functions:

```typescript
import * as k8s from '@kubernetes/client-node';

export interface NetworkPolicyConfig {
  serverName: string;
  namespace: string;
  allowEgress?: string[];  // CIDR blocks or service names to allow
  allowIngress?: string[]; // Pod labels allowed to connect
}

export function buildNetworkPolicy(config: NetworkPolicyConfig): k8s.V1NetworkPolicy {
  return {
    apiVersion: 'networking.k8s.io/v1',
    kind: 'NetworkPolicy',
    metadata: {
      name: `mcp-${config.serverName}-netpol`,
      namespace: config.namespace,
      labels: { 'mcpctl.io/server': config.serverName, 'mcpctl.io/managed': 'true' }
    },
    spec: {
      podSelector: { matchLabels: { 'mcpctl.io/server': config.serverName } },
      policyTypes: ['Ingress', 'Egress'],
      ingress: [{
        from: [{ podSelector: { matchLabels: { 'mcpctl.io/component': 'local-proxy' } } }]
      }],
      egress: config.allowEgress?.map(cidr => ({
        to: [{ ipBlock: { cidr } }]
      })) || [{ to: [{ ipBlock: { cidr: '0.0.0.0/0' } }] }] // Default: allow all egress
    }
  };
}

export interface PVCConfig {
  serverName: string;
  namespace: string;
  storageSize: string; // e.g., '1Gi'
  storageClass?: string;
  accessModes?: string[];
}

export function buildPVC(config: PVCConfig): k8s.V1PersistentVolumeClaim {
  return {
    apiVersion: 'v1',
    kind: 'PersistentVolumeClaim',
    metadata: {
      name: `mcp-${config.serverName}-data`,
      namespace: config.namespace,
      labels: { 'mcpctl.io/server': config.serverName, 'mcpctl.io/managed': 'true' },
      annotations: {
        'mcpctl.io/purpose': 'mcp-server-cache',
        'mcpctl.io/created-at': new Date().toISOString()
      }
    },
    spec: {
      accessModes: config.accessModes || ['ReadWriteOnce'],
      storageClassName: config.storageClass,
      resources: { requests: { storage: config.storageSize } }
    }
  };
}

export function buildGpuAffinityRules(gpuType: string): k8s.V1Affinity {
  return {
    nodeAffinity: {
      requiredDuringSchedulingIgnoredDuringExecution: {
        nodeSelectorTerms: [{
          matchExpressions: [{
            key: 'nvidia.com/gpu.product',
            operator: 'In',
            values: [gpuType]
          }]
        }]
      }
    }
  };
}
```

Write TDD tests in src/mcpd/tests/unit/services/k8s-resources.test.ts verifying all resource builders generate valid K8s manifests.

### 17.5. Create integration tests with kind/k3d and document K8s deployment architecture

**Status:** pending
**Dependencies:** 17.2, 17.3, 17.4

Build an integration test suite using kind or k3d for local K8s cluster testing, and create comprehensive SRE documentation covering the deployment architecture, resource recommendations, and network requirements.

**Details:**

Create the src/mcpd/tests/integration/kubernetes/ directory with integration tests:

1. Create the setup script src/mcpd/tests/integration/kubernetes/setup-kind.ts:

```typescript
import { execSync } from 'child_process';

export async function setupKindCluster(): Promise<void> {
  execSync('kind create cluster --name mcpctl-test --config tests/integration/kubernetes/kind-config.yaml', { stdio: 'inherit' });
}

export async function teardownKindCluster(): Promise<void> {
  execSync('kind delete cluster --name mcpctl-test', { stdio: 'inherit' });
}
```

2. Create kind-config.yaml with proper resource limits
3. Create kubernetes-orchestrator.integration.test.ts testing:
   - Pod creation and deletion lifecycle
   - Status monitoring through pod phases
   - Log retrieval from running pods
   - NetworkPolicy enforcement (blocked endpoints are unreachable)
   - PVC mounting for stateful MCPs
4. Create src/mcpd/docs/KUBERNETES_DEPLOYMENT.md documenting:
   - Architecture overview: the mcpctl namespace, resource types, label conventions
   - Security: Pod security standards (restricted), NetworkPolicies, ServiceAccounts
   - SRE recommendations: HPA configurations, PDB templates, monitoring with Prometheus labels
   - Resource sizing guide: Small (128Mi/100m), Medium (512Mi/500m), Large (2Gi/1000m)
   - Network requirements: required egress rules per MCP server type, ingress from local-proxy
   - Troubleshooting: common issues, kubectl commands, log access
   - GPU support: node affinity, NVIDIA device plugin requirements
5. Create example manifests in src/mcpd/examples/k8s/: namespace.yaml, rbac.yaml, networkpolicy.yaml, sample-mcp-pod.yaml

Integration tests should skip gracefully when kind is not available (CI compatibility).

582
.taskmaster/tasks/task_018.md
Normal file
# Task ID: 18

**Title:** Create End-to-End Integration and Documentation

**Status:** pending

**Dependencies:** 9, 13, 14, 15, 16

**Priority:** medium

**Description:** Build comprehensive integration tests, usage documentation, and example workflows for the complete mcpctl system.

**Details:**

Create E2E tests and documentation:

```typescript
// tests/e2e/full-workflow.test.ts
describe('mcpctl E2E', () => {
  test('complete workflow: setup to Claude usage', async () => {
    // 1. Start mcpd server
    const mcpd = await startMcpd();

    // 2. Setup MCP server via CLI
    await exec('mcpctl setup slack --non-interactive --token=test-token');

    // 3. Create project
    await exec('mcpctl project create weekly_reports --profiles slack-read-only jira-read-only');

    // 4. Add to Claude config
    await exec('mcpctl claude add-mcp-project weekly_reports');
    const mcpJson = JSON.parse(fs.readFileSync('.mcp.json', 'utf8'));
    expect(mcpJson.mcpServers['mcpctl-proxy']).toBeDefined();

    // 5. Start local proxy
    const proxy = await startLocalProxy();

    // 6. Simulate Claude request through proxy
    const response = await proxy.callTool('slack_get_messages', {
      channel: 'team',
      _context: 'Find security-related messages'
    });

    // 7. Verify response is filtered
    expect(response.content.length).toBeLessThan(originalData.length);

    // 8. Verify audit log
    const audit = await exec('mcpctl audit --limit 1');
    expect(audit).toContain('mcp_call');
  });
});
```

Documentation structure:

```
docs/
├── getting-started.md
├── installation.md
├── configuration.md
├── cli-reference.md
├── mcp-servers/
│   ├── slack.md
│   ├── jira.md
│   └── github.md
├── architecture.md
├── local-llm-setup.md
├── deployment/
│   ├── docker-compose.md
│   └── kubernetes.md
└── examples/
    ├── weekly-reports.md
    └── terraform-docs.md
```

Example workflow documentation:

````markdown
# Weekly Reports Workflow

## Setup

```bash
# Install mcpctl
npm install -g mcpctl

# Configure server
mcpctl config set-server http://your-nas:3000

# Setup MCPs
mcpctl setup slack
mcpctl setup jira

# Create project
mcpctl project create weekly_reports --profiles slack-team jira-myproject

# Add to Claude
mcpctl claude add-mcp-project weekly_reports
```

## Usage with Claude

In your Claude session:

> "Write me a weekly report. Get all messages from Slack related to my team and security, and all Jira tickets I worked on this week."

The local proxy will filter thousands of messages down to only the relevant ones.
````

**Test Strategy:**

Run the full E2E test suite. Test that all documented workflows work as described. Validate documentation accuracy with a fresh setup. Test error scenarios and recovery.

## Subtasks

### 18.1. Build E2E Test Infrastructure with Docker Compose Local Environment

**Status:** pending
**Dependencies:** None

Create the complete E2E test infrastructure using docker-compose, running mcpd, PostgreSQL, mock MCP servers, and the local LLM proxy entirely locally, without external dependencies.

**Details:**

Create the tests/e2e directory structure with:

**docker-compose.e2e.yml:**

```yaml
version: '3.8'
services:
  postgres-e2e:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: mcpctl_test
      POSTGRES_PASSWORD: test_password
      POSTGRES_DB: mcpctl_test
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U mcpctl_test']
  mcpd:
    build: ../../src/mcpd
    depends_on:
      postgres-e2e:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://mcpctl_test:test_password@postgres-e2e:5432/mcpctl_test
  mock-slack-mcp:
    build: ./mocks/slack-mcp
    ports: ['9001:9001']
  mock-jira-mcp:
    build: ./mocks/jira-mcp
    ports: ['9002:9002']
  ollama:
    image: ollama/ollama:latest
    volumes: ['ollama-data:/root/.ollama']
  local-proxy:
    build: ../../src/local-proxy
    depends_on: [mcpd, ollama]
volumes:
  ollama-data:
```

**tests/e2e/setup.ts:**

- startE2EEnvironment(): start all containers and wait for health
- stopE2EEnvironment(): stop and clean up containers
- resetDatabase(): truncate all tables between tests
- getMcpdClient(): return a configured API client for mcpd
- getProxyClient(): return a configured MCP client for local-proxy

**tests/e2e/mocks/slack-mcp/**: Dockerfile and a Node.js mock implementing the MCP protocol, returning configurable test data (1000+ messages for filtering tests)

**tests/e2e/mocks/jira-mcp/**: a similar mock for Jira with test tickets

**tests/e2e/fixtures/**: test data files (slack-messages.json, jira-tickets.json) with realistic but synthetic data

**tests/e2e/vitest.config.ts:**

```typescript
export default defineConfig({
  test: {
    globalSetup: './setup.ts',
    testTimeout: 120000,
    hookTimeout: 60000,
    setupFiles: ['./test-utils.ts']
  }
});
```

Add scripts to the root package.json:

- "test:e2e": "vitest run --config tests/e2e/vitest.config.ts"
- "test:e2e:up": "docker-compose -f tests/e2e/docker-compose.e2e.yml up -d"
- "test:e2e:down": "docker-compose -f tests/e2e/docker-compose.e2e.yml down -v"

### 18.2. Implement Full Workflow E2E Tests with Security Validation

**Status:** pending
**Dependencies:** 18.1

Create comprehensive E2E test suites covering the complete user workflow from CLI setup through proxy usage, plus security-focused tests verifying no credential leakage, proper auth flows, and permission boundary enforcement.

**Details:**

Create the tests/e2e/workflows directory with test files:

**tests/e2e/workflows/full-workflow.test.ts:**

```typescript
describe('mcpctl E2E: Complete Workflow', () => {
  test('setup to Claude usage', async () => {
    // 1. mcpd server is started via docker-compose
    expect(await getMcpdHealth()).toBe('ok');

    // 2. Setup MCP server via CLI
    const setupResult = await exec('mcpctl setup slack --non-interactive --token=xoxb-test-token');
    expect(setupResult.exitCode).toBe(0);

    // 3. Create project with profiles
    await exec('mcpctl project create weekly_reports --profiles slack-read-only jira-read-only');
    const project = await getMcpdClient().getProject('weekly_reports');
    expect(project.profiles).toHaveLength(2);

    // 4. Add to Claude config
    await exec('mcpctl claude add-mcp-project weekly_reports');
    const mcpJson = JSON.parse(fs.readFileSync('.mcp.json', 'utf8'));
    expect(mcpJson.mcpServers['mcpctl-proxy']).toBeDefined();
    expect(mcpJson.mcpServers['mcpctl-proxy'].env.SLACK_BOT_TOKEN).toBeUndefined(); // No secrets!

    // 5. Simulate proxy request with context filtering
    const response = await getProxyClient().callTool('slack_get_messages', {
      channel: 'team',
      _context: 'Find security-related messages'
    });
    expect(response.content.length).toBeLessThan(1000); // Filtered from 1000+ test messages

    // 6. Verify audit log
    const audit = await exec('mcpctl audit --limit 1 --format json');
    expect(JSON.parse(audit.stdout)[0].action).toBe('mcp_call');
  });
});
```

**tests/e2e/security/credential-leakage.test.ts:**

```typescript
describe('Security: No Credential Leakage', () => {
  test('.mcp.json never contains actual secrets', async () => {
    await exec('mcpctl setup slack --token=xoxb-real-token');
    await exec('mcpctl claude add-mcp-project test_project');
    const content = fs.readFileSync('.mcp.json', 'utf8');
    expect(content).not.toContain('xoxb-');
    expect(content).not.toMatch(/[A-Za-z0-9]{32,}/);
  });

  test('audit logs scrub sensitive data', async () => {
    await exec('mcpctl setup jira --token=secret-api-token');
    const logs = await prisma.auditLog.findMany({ where: { action: 'mcp_server_setup' } });
    logs.forEach(log => {
      expect(JSON.stringify(log.details)).not.toContain('secret-api-token');
    });
  });

  test('CLI history does not contain tokens', async () => {
    // Verify --token values are masked in any logged commands
  });
});
```

**tests/e2e/security/auth-flows.test.ts:**

```typescript
describe('Security: Authentication Flows', () => {
  test('API rejects requests without a valid token', async () => {
    const response = await fetch(`${MCPD_URL}/api/projects`, {
      headers: { 'Authorization': 'Bearer invalid-token' }
    });
    expect(response.status).toBe(401);
  });

  test('expired sessions are rejected', async () => {
    const expiredSession = await createExpiredSession();
    const response = await authenticatedFetch('/api/projects', expiredSession.token);
    expect(response.status).toBe(401);
  });
});
```

**tests/e2e/security/permission-boundaries.test.ts:**
|
||||
```typescript
|
||||
describe('Security: Permission Boundaries', () => {
|
||||
test('read-only profile cannot call write operations', async () => {
|
||||
await exec('mcpctl project create readonly_test --profiles slack-read-only');
|
||||
const response = await getProxyClient().callTool('slack_post_message', {
|
||||
channel: 'general',
|
||||
text: 'test'
|
||||
});
|
||||
expect(response.error).toContain('permission denied');
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**tests/e2e/workflows/error-recovery.test.ts:** Test scenarios for network failures, container restarts, database disconnections with proper recovery

### 18.3. Create User and Technical Documentation Suite

**Status:** pending
**Dependencies:** None

Build comprehensive documentation covering getting started, installation, configuration, CLI reference, architecture overview, and local LLM setup guides with proper markdown structure.

**Details:**

Create docs/ directory structure:

**docs/getting-started.md:**
- Quick 5-minute setup guide
- Prerequisites (Node.js, Docker, pnpm)
- Install mcpctl globally: `npm install -g mcpctl`
- Start mcpd: `docker-compose up -d` or `mcpctl daemon start`
- Configure first MCP server: `mcpctl setup slack`
- Create first project: `mcpctl project create my_assistant --profiles slack-read-only`
- Add to Claude: `mcpctl claude add-mcp-project my_assistant`
- Verify with `mcpctl status`

**docs/installation.md:**
- NPM global install: `npm install -g mcpctl`
- Docker deployment: Using provided docker-compose.yml
- Kubernetes deployment: Helm chart reference (link to deployment/kubernetes.md)
- Building from source: Clone, pnpm install, pnpm build
- Verifying installation: `mcpctl version`, `mcpctl doctor`

**docs/configuration.md:**
- Environment variables reference (DATABASE_URL, MCPD_URL, LOG_LEVEL, etc.)
- Configuration file locations (~/.mcpctl/config.yaml, .mcpctl.yaml)
- Per-project configuration (.mcpctl.yaml in project root)
- Secrets management (keyring integration, environment variables, --token flags)
- Example configurations for different environments

**docs/cli-reference.md:**
- Complete command reference with examples
- `mcpctl setup <server>` - Configure MCP server
- `mcpctl project create|list|delete|status` - Project management
- `mcpctl profile list|describe|apply` - Profile management
- `mcpctl claude add-mcp-project|remove-mcp-project` - Claude integration
- `mcpctl instance start|stop|restart|logs|status` - Instance lifecycle
- `mcpctl audit [--limit N] [--format json|table]` - Audit log queries
- `mcpctl config get|set` - Configuration management
- Global flags: --server, --format, --verbose, --quiet

**docs/architecture.md:**
- High-level system diagram (ASCII or Mermaid)
- Component descriptions: CLI, mcpd, local-proxy, database
- Data flow: Claude -> .mcp.json -> local-proxy -> mcpd -> MCP servers
- Security model: Token validation, audit logging, credential isolation
- Scalability: Stateless mcpd, PostgreSQL HA, horizontal scaling

**docs/local-llm-setup.md:**
- Ollama installation and configuration
- Model recommendations for filtering (llama3.2, qwen2.5)
- Gemini CLI setup as alternative
- vLLM for high-throughput deployments
- DeepSeek API configuration
- Performance tuning and benchmarks

**docs/mcp-servers/:**
- slack.md: Slack MCP setup, required scopes, profile examples
- jira.md: Jira Cloud/Server setup, API token creation
- github.md: GitHub token scopes, repository access
- terraform.md: Terraform docs MCP configuration
- Each includes: Prerequisites, Setup steps, Available profiles, Troubleshooting

**docs/deployment/:**
- docker-compose.md: Production docker-compose configuration
- kubernetes.md: Helm chart installation, values.yaml reference

### 18.4. Create SRE Runbooks and Network Topology Documentation

**Status:** pending
**Dependencies:** 18.3

Write operational runbooks for common SRE scenarios including restart procedures, credential rotation, scaling, and diagnostics, plus network topology documentation for enterprise deployments with proxy, firewall, and DNS considerations.

**Details:**

Create docs/operations/ directory:

**docs/operations/runbooks/:**

**restart-failed-instance.md:**
```markdown
# Runbook: Restart Failed MCP Instance

## Symptoms
- `mcpctl instance status <name>` shows 'error' or 'stopped'
- Audit logs show repeated connection failures
- Claude reports MCP tool unavailable

## Diagnosis
1. Check instance status: `mcpctl instance status <name> --verbose`
2. View recent logs: `mcpctl instance logs <name> --tail 100`
3. Check container health: `docker inspect mcpctl-<name> | jq '.[0].State'`

## Resolution Steps
1. Stop the instance: `mcpctl instance stop <name>`
2. Check for resource exhaustion: `docker stats --no-stream`
3. Restart: `mcpctl instance start <name>`
4. Verify health: `mcpctl instance status <name> --wait-healthy`
5. Test connectivity: `mcpctl instance test <name>`

## Escalation
- If repeated failures: Check network connectivity to external APIs
- If OOM: Increase container memory limits in profile configuration
```

**rotate-credentials.md:** Steps for rotating Slack tokens, Jira API keys, GitHub PATs without downtime

**scale-up.md:** Adding mcpd instances, database read replicas, load balancer configuration

**diagnose-connectivity.md:** Network troubleshooting between proxy, mcpd, and MCP servers

**backup-restore.md:** PostgreSQL backup procedures, disaster recovery

**security-incident.md:** Credential exposure response, audit log analysis, revocation procedures

**docs/operations/network-topology.md:**
```markdown
# Network Topology and Enterprise Deployment

## Architecture Diagram
[Mermaid diagram showing: Claude Desktop -> local-proxy (localhost) -> Corporate Proxy -> mcpd (internal network) -> MCP Servers (Slack API, Jira API, etc.)]

## Network Requirements

### Local Proxy (runs on developer machine)
- Listens on localhost:9229 (configurable)
- Outbound: HTTPS to mcpd server (configurable URL)
- No direct internet access required

### mcpd Server (internal deployment)
- Inbound: HTTPS from corporate network (developer machines)
- Outbound: HTTPS to MCP server APIs (Slack, Jira, GitHub)
- PostgreSQL: Port 5432 to database server

### Firewall Rules
| Source | Destination | Port | Protocol | Purpose |
|--------|-------------|------|----------|---------|
| Developer workstations | mcpd | 443 | HTTPS | API access |
| mcpd | PostgreSQL | 5432 | TCP | Database |
| mcpd | api.slack.com | 443 | HTTPS | Slack MCP |
| mcpd | *.atlassian.net | 443 | HTTPS | Jira MCP |
| mcpd | api.github.com | 443 | HTTPS | GitHub MCP |

### Proxy Configuration
- If corporate proxy required: Set HTTP_PROXY/HTTPS_PROXY for mcpd container
- No-proxy list: Database server, internal services
- SSL inspection: May require custom CA certificate injection

### DNS Configuration
- mcpd server should be resolvable: mcpd.internal.company.com
- Or use IP address in mcpctl config: `mcpctl config set-server https://10.0.0.50:443`

### TLS/Certificate Requirements
- mcpd should use valid TLS certificate (Let's Encrypt or internal CA)
- Certificate SANs should include all access hostnames
- For self-signed: Export CA and configure in mcpctl: `mcpctl config set-ca /path/to/ca.pem`
```

**docs/operations/troubleshooting-network.md:**
- Common issues: Connection refused, certificate errors, proxy authentication
- Diagnostic commands: `mcpctl doctor`, `mcpctl test-connection`
- tcpdump/Wireshark guidance for packet inspection
- Proxy debugging with curl equivalents

### 18.5. Implement Data Team Example Workflows with Automated Validation

**Status:** pending
**Dependencies:** 18.1, 18.2, 18.3

Create example workflow documentation for data analysts and engineers including weekly report generation, data pipeline monitoring, and documentation querying, with automated E2E tests validating each workflow works as documented.

**Details:**

Create docs/examples/ directory with workflow documentation:

**docs/examples/weekly-reports.md:**
````markdown
# Weekly Reports Workflow

## Use Case
Generate weekly team reports by aggregating Slack discussions and Jira ticket updates.

## Setup
```bash
# Install mcpctl (if not already installed)
npm install -g mcpctl

# Configure mcpd server connection
mcpctl config set-server http://your-nas:3000

# Setup MCP servers with appropriate tokens
mcpctl setup slack --token $SLACK_BOT_TOKEN
mcpctl setup jira --url https://company.atlassian.net --token $JIRA_API_TOKEN

# Create project with read-only profiles
mcpctl project create weekly_reports --profiles slack-team jira-myproject

# Add to Claude Desktop
mcpctl claude add-mcp-project weekly_reports
```

## Usage with Claude
In your Claude session, say:
> "Write me a weekly report for the security team. Get all Slack messages from #security-team mentioning incidents or vulnerabilities this week, and all Jira tickets I worked on with status changes."

The local proxy will:
1. Intercept the Slack API request
2. Use local LLM to identify relevant messages (filtering 1000s to ~50)
3. Return only pertinent data to Claude
4. Log the operation for audit compliance

## Expected Output
- Weekly summary with categorized Slack discussions
- Jira ticket status updates with time spent
- Action items extracted from conversations
````

**docs/examples/data-pipeline-monitoring.md:**
- Setup for monitoring Airflow/dbt pipelines via Slack alerts
- Integration with Jira for incident tracking
- Example Claude prompts for pipeline health checks

**docs/examples/documentation-querying.md:**
- Setup Terraform docs MCP for infrastructure documentation
- GitHub MCP for code documentation querying
- Example: "Find all S3 buckets with public access in our Terraform configs"

**tests/e2e/examples/ directory with automated validation:**

**tests/e2e/examples/weekly-reports.test.ts:**
```typescript
describe('Example Workflow: Weekly Reports', () => {
  test('follows documented setup steps', async () => {
    // Parse setup commands from docs/examples/weekly-reports.md
    const setupCommands = extractCodeBlocks('docs/examples/weekly-reports.md', 'bash');

    for (const cmd of setupCommands) {
      // Skip comments and variable-dependent commands for test
      if (cmd.startsWith('#') || cmd.includes('$SLACK')) continue;

      // Execute with test tokens
      const result = await exec(cmd.replace('$SLACK_BOT_TOKEN', 'test-token'));
      expect(result.exitCode).toBe(0);
    }
  });

  test('proxy filters messages as described', async () => {
    // Setup as documented
    await exec('mcpctl setup slack --non-interactive --token=test-token');
    await exec('mcpctl project create weekly_reports --profiles slack-read-only');

    // Simulate Claude request matching documented usage
    const response = await getProxyClient().callTool('slack_search_messages', {
      query: 'security incidents vulnerabilities',
      _context: 'Find security-related messages for weekly report'
    });

    // Verify filtering works as documented
    expect(response.messages.length).toBeLessThan(100); // Filtered from 1000+
    expect(response.messages.every(m =>
      m.text.toLowerCase().includes('security') ||
      m.text.toLowerCase().includes('incident') ||
      m.text.toLowerCase().includes('vulnerability')
    )).toBe(true);
  });

  test('audit log records operation', async () => {
    const auditResult = await exec('mcpctl audit --limit 1 --format json');
    const lastAudit = JSON.parse(auditResult.stdout)[0];
    expect(lastAudit.action).toBe('mcp_call');
    expect(lastAudit.resource).toContain('slack');
  });
});
```

**tests/e2e/examples/data-pipeline-monitoring.test.ts:** Similar validation for pipeline monitoring workflow

**tests/e2e/examples/documentation-querying.test.ts:** Validation for Terraform/GitHub docs workflow

Each test file:
1. Parses the corresponding markdown file for setup commands
2. Executes commands (with test credentials) to verify they work
3. Simulates the documented Claude usage pattern
4. Verifies expected outcomes match documentation claims
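The `extractCodeBlocks` helper used in the tests above is referenced but not specified in this task; a minimal sketch (assuming fenced blocks tagged with a language, as in the example docs) might look like:

```typescript
import { readFileSync } from 'fs';

// Extract the contents of fenced code blocks with a given language tag
// from a markdown file, split into individual non-empty lines (commands).
export function extractCodeBlocks(mdPath: string, lang: string): string[] {
  const md = readFileSync(mdPath, 'utf8');
  const fence = new RegExp('```' + lang + '\\n([\\s\\S]*?)```', 'g');
  const commands: string[] = [];
  for (const match of md.matchAll(fence)) {
    for (const line of match[1].split('\n')) {
      const trimmed = line.trim();
      if (trimmed.length > 0) commands.push(trimmed);
    }
  }
  return commands;
}
```

Returning individual trimmed lines (including `#` comment lines) matches how the test loop above filters and executes them one at a time.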
98
.taskmaster/tasks/task_019.md
Normal file
@@ -0,0 +1,98 @@

# Task ID: 19

**Title:** Implement Local LLM Pre-filtering Proxy

**Status:** cancelled

**Dependencies:** None

**Priority:** high

**Description:** Build the local proxy component that intercepts Claude's MCP requests, uses local LLMs (Gemini CLI, Ollama, vLLM, or DeepSeek API) to interpret questions, fetch relevant data from mcpd, and filter/refine responses to minimize context window usage before returning to Claude.

**Details:**

Create src/local-proxy/src/ with the following architecture:

**Core Components:**

1. **MCP Protocol Handler** (mcp-handler.ts):
   - Implement MCP server interface using @modelcontextprotocol/sdk
   - Register as the MCP endpoint Claude connects to
   - Parse incoming tool calls and extract the semantic intent

2. **LLM Provider Abstraction** (providers/):
   ```typescript
   interface LLMProvider {
     name: string;
     interpretQuery(query: string, context: McpToolCall): Promise<InterpretedQuery>;
     filterResponse(data: unknown, originalQuery: string, maxTokens: number): Promise<FilteredResponse>;
   }
   ```
   Implement providers:
   - gemini-cli.ts: Shell out to `gemini` CLI binary
   - ollama.ts: HTTP client to local Ollama server (localhost:11434)
   - vllm.ts: OpenAI-compatible API client
   - deepseek.ts: DeepSeek API client
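As a sketch of one provider, an Ollama implementation could call the local `/api/generate` endpoint; the model name, prompt shape, and truncation limit here are illustrative assumptions, not part of the task spec:

```typescript
// ollama.ts -- hypothetical sketch of an LLMProvider backed by a local Ollama
// server, using its /api/generate endpoint with { model, prompt, stream }.

interface FilteredResponse { data: unknown; filtered: boolean; }

export class OllamaProvider {
  name = 'ollama';

  constructor(
    private endpoint = 'http://localhost:11434',
    private model = 'llama3.2',
  ) {}

  // Build the filtering prompt; kept separate so it is unit-testable offline.
  buildFilterPrompt(originalQuery: string, dataStr: string): string {
    return `Given this query: "${originalQuery}"\n` +
      `Extract ONLY the relevant information from this data.\n` +
      `Return a JSON array of relevant items, max 10 items.\n` +
      `Data: ${dataStr.slice(0, 50000)}`; // truncate LLM input
  }

  async filterResponse(data: unknown, originalQuery: string, maxTokens: number): Promise<FilteredResponse> {
    const prompt = this.buildFilterPrompt(originalQuery, JSON.stringify(data));
    const res = await fetch(`${this.endpoint}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: this.model, prompt, stream: false, options: { num_predict: maxTokens } }),
    });
    const body = await res.json() as { response: string };
    // A production version would guard against non-JSON LLM output here.
    return { data: JSON.parse(body.response), filtered: true };
  }
}
```

The `num_predict` option is Ollama's cap on generated tokens, which maps naturally onto the `maxTokens` parameter of the `LLMProvider` interface.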

3. **Query Interpreter** (interpreter.ts):
   - Takes Claude's raw MCP request (e.g., 'get_slack_messages')
   - Uses local LLM to understand semantic intent: "Find messages related to security and linux servers from my team"
   - Generates optimized query parameters for mcpd

4. **Response Filter** (filter.ts):
   - Receives raw data from mcpd (potentially thousands of Slack messages, large Terraform docs)
   - Uses local LLM to extract ONLY relevant information matching original query
   - Implements token counting to stay within configured limits
   - Returns compressed, relevant subset of data

5. **mcpd Client** (mcpd-client.ts):
   - HTTP client to communicate with mcpd server
   - Handles authentication (forwards Claude session token)
   - Supports all MCP operations exposed by mcpd

**Configuration:**
```typescript
interface ProxyConfig {
  mcpdUrl: string; // e.g., 'http://mcpd.local:3000'
  llmProvider: 'gemini-cli' | 'ollama' | 'vllm' | 'deepseek';
  llmConfig: {
    model?: string; // e.g., 'llama3.2', 'gemini-pro'
    endpoint?: string; // for vllm/deepseek
    maxTokensPerFilter: number; // target output size
  };
  filteringEnabled: boolean; // can be disabled for passthrough
}
```

**Flow:**
1. Claude calls local-proxy MCP server
2. Proxy interprets query semantics via local LLM
3. Proxy calls mcpd with optimized query
4. mcpd returns raw MCP data
5. Proxy filters response via local LLM
6. Claude receives minimal, relevant context

**Pseudo-code for filter.ts:**
```typescript
async function filterResponse(
  rawData: unknown,
  originalQuery: string,
  provider: LLMProvider
): Promise<FilteredResponse> {
  const dataStr = JSON.stringify(rawData);
  if (dataStr.length < 4000) return { data: rawData, filtered: false };

  // Providers build a prompt along these lines internally:
  //   `Given this query: "${originalQuery}"
  //    Extract ONLY the relevant information from this data.
  //    Return a JSON array of relevant items, max 10 items.
  //    Data: ${dataStr.slice(0, 50000)}` // Truncate for LLM input

  const filtered = await provider.filterResponse(dataStr, originalQuery, 2000);
  return { data: filtered, filtered: true, originalSize: dataStr.length };
}
```

**Test Strategy:**

Unit tests for each LLM provider with mocked HTTP/CLI responses. Integration tests with actual Ollama instance (docker-compose service). Test query interpretation produces valid mcpd parameters. Test filtering reduces data size while preserving relevant content. Load test with large payloads (10MB JSON) to verify memory handling. Test fallback behavior when LLM provider is unavailable. Test passthrough mode when filtering is disabled.
85
.taskmaster/tasks/task_020.md
Normal file
@@ -0,0 +1,85 @@

# Task ID: 20

**Title:** Implement MCP Project Management with Claude Code Integration

**Status:** cancelled

**Dependencies:** None

**Priority:** high

**Description:** Build the `mcpctl claude add-mcp-project <project-name>` command that configures Claude Code sessions to use specific MCP server profiles, generating and managing .mcp.json files automatically.

**Details:**

Extend src/cli/src/commands/ with Claude Code integration:

**New Commands:**

1. **mcpctl claude add-mcp-project <name>** (claude/add-mcp-project.ts):
   - Fetches project definition from mcpd API
   - Generates .mcp.json file pointing to local-proxy
   - Configures local-proxy to route to the project's MCP profiles
   - Example output:
   ```json
   {
     "mcpServers": {
       "weekly_reports": {
         "command": "npx",
         "args": ["-y", "@mcpctl/local-proxy", "--project", "weekly_reports", "--mcpd", "http://mcpd.local:3000"],
         "env": {}
       }
     }
   }
   ```

2. **mcpctl claude remove-mcp-project <name>** (claude/remove-mcp-project.ts):
   - Removes project from .mcp.json
   - Cleans up local-proxy config

3. **mcpctl claude list-projects** (claude/list-projects.ts):
   - Shows configured projects in current directory's .mcp.json
   - Shows available projects from mcpd

4. **mcpctl project create <name>** (project/create.ts):
   - Creates new project on mcpd
   - Interactive profile selection

5. **mcpctl project add-profile <project> <profile>** (project/add-profile.ts):
   - Links existing profile to project

**MCP.json Management** (lib/mcp-json.ts):
```typescript
interface McpJsonManager {
  findMcpJson(startDir: string): string | null; // Search up directory tree
  readMcpJson(path: string): McpJsonConfig;
  writeMcpJson(path: string, config: McpJsonConfig): void;
  addProject(config: McpJsonConfig, project: ProjectConfig): McpJsonConfig;
  removeProject(config: McpJsonConfig, projectName: string): McpJsonConfig;
}
```
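The `findMcpJson` directory traversal could be implemented along these lines (a sketch; the interface only fixes the signature):

```typescript
import { existsSync } from 'fs';
import { dirname, join, resolve } from 'path';

// Walk up from startDir toward the filesystem root, returning the path of
// the first .mcp.json found, or null if none exists on the way up.
export function findMcpJson(startDir: string): string | null {
  let dir = resolve(startDir);
  while (true) {
    const candidate = join(dir, '.mcp.json');
    if (existsSync(candidate)) return candidate;
    const parent = dirname(dir);
    if (parent === dir) return null; // reached filesystem root
    dir = parent;
  }
}
```

Comparing `dirname(dir)` with `dir` is the portable way to detect the root, since `dirname('/') === '/'` (and likewise `dirname('C:\\') === 'C:\\'` on Windows).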

**mcpd API Extensions** (src/mcpd/src/routes/projects.ts):
- GET /projects - List all projects
- GET /projects/:name - Get project details with profiles
- POST /projects - Create project
- PUT /projects/:name/profiles - Update project profiles
- GET /projects/:name/claude-config - Get Claude-ready config

**Workflow Example:**
```bash
# On mcpd server (admin sets up projects)
mcpctl project create weekly_reports
mcpctl project add-profile weekly_reports slack-readonly
mcpctl project add-profile weekly_reports jira-readonly

# On developer machine
cd ~/my-workspace
mcpctl claude add-mcp-project weekly_reports
# Creates/updates .mcp.json with weekly_reports config
# Now Claude Code in this directory can use slack and jira MCPs
```

**Test Strategy:**

Unit test MCP.json parsing and manipulation with various file states (missing, empty, existing projects). Test findMcpJson directory traversal. Integration test with mcpd API: create project, add profiles, fetch Claude config. E2E test: run `mcpctl claude add-mcp-project`, verify .mcp.json created, start Claude Code (mock), verify MCP connection works. Test error handling: project not found, profile not found, conflicting project names. Test update behavior when project already exists in .mcp.json.
||||
127
.taskmaster/tasks/task_021.md
Normal file
@@ -0,0 +1,127 @@

# Task ID: 21

**Title:** Implement Guided MCP Server Setup Wizard with Credential Flow

**Status:** cancelled

**Dependencies:** None

**Priority:** medium

**Description:** Build an interactive setup wizard that guides users through MCP server configuration, including browser-based OAuth flows, API token generation pages, and step-by-step credential setup with secure storage.

**Details:**

Create src/cli/src/commands/setup/ with guided setup flows:

**Setup Wizard Architecture:**

1. **Setup Command** (setup.ts):
   ```bash
   mcpctl setup <server-type>  # e.g., mcpctl setup slack
   ```
   - Fetches server definition from mcpd (envTemplate, setupGuide)
   - Runs appropriate setup flow based on server type

2. **Setup Flows** (flows/):
   - oauth-flow.ts: For OAuth-based services (Slack, GitHub)
   - api-key-flow.ts: For API key services (Jira, OpenAI)
   - custom-flow.ts: For services with unique setup

3. **OAuth Flow Handler** (flows/oauth-flow.ts):
   ```typescript
   async function runOAuthFlow(serverType: string, config: OAuthConfig): Promise<Credentials> {
     // 1. Start local HTTP server to receive OAuth callback
     const callbackServer = await startCallbackServer(config.callbackPort);

     // 2. Open browser to OAuth authorization URL
     const authUrl = buildOAuthUrl(config);
     console.log(`Opening browser to authorize ${serverType}...`);
     await open(authUrl); // Uses 'open' package

     // 3. Wait for callback with auth code
     const authCode = await callbackServer.waitForCode();

     // 4. Exchange code for tokens
     const tokens = await exchangeCodeForTokens(authCode, config);

     // 5. Securely store tokens via mcpd
     await mcpdClient.storeCredentials(serverType, tokens);

     return tokens;
   }
   ```
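The `startCallbackServer` helper referenced in step 1 is not defined in this task; a minimal sketch using Node's built-in `http` module (the `/callback` path and one-shot behavior are assumptions) could be:

```typescript
import { createServer, Server } from 'http';

interface CallbackServer {
  waitForCode(): Promise<string>;
  close(): void;
}

// Start a localhost HTTP server that resolves with the ?code= query
// parameter of the first request to /callback (the OAuth redirect target).
export function startCallbackServer(port: number): Promise<CallbackServer> {
  let resolveCode!: (code: string) => void;
  const codePromise = new Promise<string>(res => { resolveCode = res; });

  const server: Server = createServer((req, res) => {
    const url = new URL(req.url ?? '/', `http://localhost:${port}`);
    const code = url.searchParams.get('code');
    if (url.pathname === '/callback' && code) {
      res.end('Authorization received. You can close this tab.');
      resolveCode(code);
    } else {
      res.statusCode = 404;
      res.end();
    }
  });

  return new Promise(resolve => {
    server.listen(port, '127.0.0.1', () =>
      resolve({ waitForCode: () => codePromise, close: () => server.close() }),
    );
  });
}
```

Binding to 127.0.0.1 keeps the callback endpoint off the network; a real implementation would also validate the OAuth `state` parameter and time out if no callback arrives.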

4. **API Key Flow Handler** (flows/api-key-flow.ts):
   ```typescript
   async function runApiKeyFlow(serverType: string, config: ApiKeyConfig): Promise<Credentials> {
     // 1. Display setup instructions
     console.log(chalk.bold(`\nSetting up ${serverType}...\n`));
     console.log(config.setupGuide); // Markdown rendered to terminal

     // 2. Open browser to API key generation page
     if (config.apiKeyUrl) {
       const shouldOpen = await confirm('Open browser to generate API key?');
       if (shouldOpen) await open(config.apiKeyUrl);
     }

     // 3. Prompt for required credentials
     const credentials: Record<string, string> = {};
     for (const envVar of config.requiredEnvVars) {
       credentials[envVar.name] = await password({
         message: `Enter ${envVar.description}:`,
         mask: '*'
       });
     }

     // 4. Validate credentials (test API call)
     const valid = await validateCredentials(serverType, credentials);
     if (!valid) throw new Error('Invalid credentials');

     // 5. Store securely via mcpd
     await mcpdClient.storeCredentials(serverType, credentials);

     return credentials;
   }
   ```

5. **Credential Storage** (src/mcpd/src/services/credentials.ts):
   - Encrypt credentials at rest using AES-256-GCM
   - Master key from environment (MCPCTL_MASTER_KEY) or Vault integration
   - Store encrypted credentials in database (new McpServer.encryptedCredentials field)
   - Never log or expose credentials in API responses
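The encrypt-at-rest scheme described above could be sketched with Node's `crypto` module; the key derivation from MCPCTL_MASTER_KEY, the fixed salt, and the `iv.tag.ciphertext` payload layout are assumptions for illustration:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from 'crypto';

// Derive a 32-byte AES-256 key from the master secret. A real implementation
// would use a per-deployment random salt stored alongside the data.
const key = scryptSync(process.env.MCPCTL_MASTER_KEY ?? 'dev-only-master-key', 'mcpctl-salt', 32);

// Encrypt a credentials object; the result bundles IV, auth tag, and ciphertext.
export function encryptCredentials(credentials: Record<string, string>): string {
  const iv = randomBytes(12); // 96-bit IV, as recommended for GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(JSON.stringify(credentials), 'utf8'), cipher.final()]);
  return [iv, cipher.getAuthTag(), ciphertext].map(b => b.toString('base64')).join('.');
}

export function decryptCredentials(payload: string): Record<string, string> {
  const [iv, tag, ciphertext] = payload.split('.').map(part => Buffer.from(part, 'base64'));
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag); // tampering makes decipher.final() throw
  return JSON.parse(Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8'));
}
```

GCM's auth tag gives integrity for free: a modified ciphertext or tag fails authentication on decrypt instead of silently yielding garbage.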

**Server-Specific Setup Guides (seed data):**

- **Slack:**
  - Guide: "1. Go to api.slack.com/apps, 2. Create app, 3. Add OAuth scopes..."
  - OAuth flow with workspace authorization
  - Scopes: channels:read, users:read, chat:write

- **Jira:**
  - Guide: "1. Go to id.atlassian.com/manage-profile/security/api-tokens"
  - API key flow with URL, email, token
  - Test: GET /rest/api/3/myself

- **GitHub:**
  - Guide: "1. Go to github.com/settings/tokens"
  - API key flow OR GitHub App OAuth
  - Scopes: repo, read:org

- **Terraform Docs:**
  - No credentials needed
  - Setup verifies terraform CLI installed

**Profile Creation After Setup:**
```bash
mcpctl setup slack
# After successful setup:
# "Slack configured! Create a profile for this server?"
# > Profile name: slack-readonly
# > Read-only mode? Yes
# Profile 'slack-readonly' created and linked to Slack server.
```

**Test Strategy:**

Unit test each flow handler with mocked external services. Test OAuth callback server starts and receives codes correctly. Test API key validation with mock API responses. Integration test with actual services using test accounts (Slack test workspace, GitHub test token). Test credential encryption/decryption roundtrip. Test setup guide rendering (markdown to terminal). E2E test: run `mcpctl setup slack`, mock browser open, simulate OAuth callback, verify credentials stored and profile created. Test error recovery: invalid credentials, timeout waiting for callback, network failures. Security test: verify credentials never logged, encrypted at rest, not in API responses.
||||
271
.taskmaster/tasks/task_022.md
Normal file
271
.taskmaster/tasks/task_022.md
Normal file
@@ -0,0 +1,271 @@
|
||||
# Task ID: 22
|
||||
|
||||
**Title:** Implement MCP Registry Client
|
||||
|
||||
**Status:** pending
|
||||
|
||||
**Dependencies:** None
|
||||
|
||||
**Priority:** high
|
||||
|
||||
**Description:** Build a multi-source registry client that queries the Official MCP Registry, Glama.ai, and Smithery.ai APIs to search, discover, and retrieve MCP server metadata with deduplication, ranking, and caching.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/cli/src/registry/ directory with the following structure:
|
||||
|
||||
```
|
||||
registry/
|
||||
├── client.ts # Main RegistryClient facade
|
||||
├── sources/
|
||||
│ ├── base.ts # Abstract RegistrySource interface
|
||||
│ ├── official.ts # Official MCP Registry (registry.modelcontextprotocol.io)
|
||||
│ ├── glama.ts # Glama.ai registry
|
||||
│ └── smithery.ts # Smithery.ai registry
|
||||
├── types.ts # RegistryServer, SearchOptions, etc.
|
||||
├── cache.ts # TTL-based result caching
|
||||
├── dedup.ts # Deduplication logic
|
||||
├── ranking.ts # Result ranking algorithm
|
||||
└── index.ts # Barrel export
|
||||
```
|
||||
|
||||
**Strategy Pattern Implementation:**
```typescript
// types.ts
export interface EnvVar {
  name: string;
  description: string;
  isSecret: boolean;
  setupUrl?: string;
}

export interface RegistryServer {
  name: string;
  description: string;
  packages: {
    npm?: string;
    pypi?: string;
    docker?: string;
  };
  envTemplate: EnvVar[];
  transport: 'stdio' | 'sse' | 'websocket';
  repositoryUrl?: string;
  popularityScore: number;
  verified: boolean;
  sourceRegistry: 'official' | 'glama' | 'smithery';
  lastUpdated?: Date;
}

export interface SearchOptions {
  query: string;
  limit?: number;
  registries?: ('official' | 'glama' | 'smithery')[];
  verified?: boolean;
  transport?: 'stdio' | 'sse';
  category?: string;
}

// base.ts
export abstract class RegistrySource {
  abstract name: string;
  abstract search(query: string, limit: number): Promise<RegistryServer[]>;
  protected abstract normalizeResult(raw: unknown): RegistryServer;
}
```

**Official MCP Registry Source (GET /v0/servers):**
- Base URL: https://registry.modelcontextprotocol.io/v0/servers
- Query params: ?search=<query>&limit=100&cursor=<cursor>
- No authentication required
- Pagination via cursor
- Response includes: name, description, npm package, env vars, transport

**Glama.ai Source:**
- Base URL: https://glama.ai/api/mcp/v1/servers
- No authentication required
- Cursor-based pagination
- Response includes env var JSON schemas

**Smithery.ai Source:**
- Base URL: https://registry.smithery.ai/servers
- Query params: ?q=<query>
- Requires free API key from config (optional, graceful fallback)
- Has verified badges, usage analytics

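
The cursor-based pagination shared by the official and Glama endpoints can be sketched as a drain loop. This is an illustrative sketch, not the project's implementation: the `Page` shape and injected `fetchPage` callback are hypothetical stand-ins, chosen so the loop can be exercised without network I/O.

```typescript
// Hypothetical page shape for a cursor-paginated registry endpoint
type Page = { servers: { name: string }[]; nextCursor?: string };

// Drain all pages by following nextCursor until it is absent,
// capped at maxPages as a safety bound against buggy cursors.
async function fetchAllServers(
  fetchPage: (cursor?: string) => Promise<Page>,
  maxPages = 10
): Promise<{ name: string }[]> {
  const all: { name: string }[] = [];
  let cursor: string | undefined;
  for (let i = 0; i < maxPages; i++) {
    const page = await fetchPage(cursor);
    all.push(...page.servers);
    if (!page.nextCursor) break; // last page reached
    cursor = page.nextCursor;    // advance the cursor
  }
  return all;
}

// Fake two-page registry for demonstration
const pages: Record<string, Page> = {
  start: { servers: [{ name: 'a' }], nextCursor: 'p2' },
  p2: { servers: [{ name: 'b' }] },
};
fetchAllServers(async (c) => pages[c ?? 'start']).then(r =>
  console.log(r.map(s => s.name).join(',')) // a,b
);
```

In the real sources the fetcher would wrap `fetch` against the base URLs above, with the retry and proxy handling described later.
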
**Caching Implementation:**
```typescript
// cache.ts
import { createHash } from 'crypto';

export class RegistryCache {
  private cache = new Map<string, { data: RegistryServer[]; expires: number }>();
  private defaultTTL: number;

  constructor(ttlMs = 3600000) { // 1 hour default
    this.defaultTTL = ttlMs;
  }

  private getKey(query: string, options: SearchOptions): string {
    return createHash('sha256').update(JSON.stringify({ query, options })).digest('hex');
  }

  get(query: string, options: SearchOptions): RegistryServer[] | null {
    const key = this.getKey(query, options);
    const entry = this.cache.get(key);
    if (entry && entry.expires > Date.now()) {
      return entry.data;
    }
    this.cache.delete(key);
    return null;
  }

  set(query: string, options: SearchOptions, data: RegistryServer[]): void {
    const key = this.getKey(query, options);
    this.cache.set(key, { data, expires: Date.now() + this.defaultTTL });
  }

  getHitRatio(): { hits: number; misses: number; ratio: number } { /* metrics */ }
}
```

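
The TTL behavior above can be demonstrated standalone with an injectable clock, so expiry is observable without sleeping. This simplified `TtlCache` mirrors the get/set logic of `RegistryCache` but is a sketch, not the project's class.

```typescript
// Simplified TTL cache with an injectable clock for deterministic testing
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | null {
    const entry = this.store.get(key);
    if (entry && entry.expires > this.now()) return entry.value;
    this.store.delete(key); // expired or missing: drop any stale entry
    return null;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: this.now() + this.ttlMs });
  }
}

// Demonstrate expiry by advancing a fake clock past the TTL
let t = 0;
const cache = new TtlCache<string>(1000, () => t);
cache.set('q', 'results');
console.log(cache.get('q')); // results
t = 2000;                    // advance past the 1s TTL
console.log(cache.get('q')); // null
```

The same fake-clock technique is what the Vitest TTL-expiration tests listed below would use, rather than real timers.
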
**Deduplication Logic:**
- Match by npm package name first (exact match)
- Fall back to GitHub repository URL comparison
- Keep the result with highest popularity score
- Merge envTemplate data from multiple sources

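
The merge rules above can be sketched as follows. The `Srv` shape is a pared-down stand-in for RegistryServer, and the key falls back to the server name when no npm package exists; the real dedup.ts would also compare repository URLs.

```typescript
interface Srv {
  name: string;
  npm?: string;
  popularityScore: number;
  envTemplate: { name: string }[];
}

// Group by npm package name, keep the entry with the highest popularity
// score, and union env vars by name so no source's metadata is lost.
function dedupe(results: Srv[]): Srv[] {
  const byKey = new Map<string, Srv>();
  for (const r of results) {
    const key = r.npm ?? r.name;
    const prev = byKey.get(key);
    if (!prev) { byKey.set(key, { ...r }); continue; }
    const winner = r.popularityScore > prev.popularityScore ? r : prev;
    const loser = winner === r ? prev : r;
    const seen = new Set(winner.envTemplate.map(e => e.name));
    const merged = [
      ...winner.envTemplate,
      ...loser.envTemplate.filter(e => !seen.has(e.name)),
    ];
    byKey.set(key, { ...winner, envTemplate: merged });
  }
  return [...byKey.values()];
}
```
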
**Ranking Algorithm:**
1. Relevance score (text match quality) - weight: 40%
2. Popularity/usage count (Smithery analytics) - weight: 30%
3. Verified status - weight: 20%
4. Recency (last updated) - weight: 10%

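
The 40/30/20/10 weighting can be sketched as a scoring function. This is a toy illustration: the substring relevance check and linear recency decay are stand-ins for the fuzzy matching and recency handling a real ranking.ts would use.

```typescript
interface Candidate {
  name: string;
  popularityScore: number; // 0-100
  verified: boolean;
  lastUpdated?: Date;
}

// Weighted score on a 0-100 scale per the algorithm above.
function rankScore(c: Candidate, query: string, now = Date.now()): number {
  const relevance = c.name.includes(query) ? 100 : 0; // toy text match
  const ageDays = c.lastUpdated
    ? (now - c.lastUpdated.getTime()) / 86_400_000
    : 365; // unknown freshness treated as a year old
  const recency = Math.max(0, 100 - ageDays / 3.65); // decays to 0 over ~1 year
  return (
    0.4 * relevance +
    0.3 * c.popularityScore +
    0.2 * (c.verified ? 100 : 0) +
    0.1 * recency
  );
}

// A verified exact match outranks a more popular non-match
const hit = rankScore(
  { name: 'slack-mcp', popularityScore: 80, verified: true, lastUpdated: new Date() },
  'slack'
);
const miss = rankScore({ name: 'db-tools', popularityScore: 90, verified: false }, 'slack');
console.log(hit > miss); // true
```
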
**Rate Limiting & Retry:**
```typescript
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1000
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      const delay = baseDelay * Math.pow(2, attempt) + Math.random() * 1000;
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Unreachable');
}
```

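
A usage sketch for the withRetry helper above, re-declared here (with shortened delays) so the snippet runs standalone:

```typescript
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 10 // shortened from 1000ms for the demo
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      // Exponential backoff with jitter
      const delay = baseDelay * Math.pow(2, attempt) + Math.random() * 10;
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Unreachable');
}

// A flaky call that fails twice, then succeeds on the third attempt
let attempts = 0;
withRetry(async () => {
  attempts++;
  if (attempts < 3) throw new Error('transient failure');
  return 'ok';
}).then(result => console.log(result, attempts)); // ok 3
```

Each registry source would wrap its HTTP calls in this helper so transient rate-limit responses are retried rather than surfaced.
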
**Security Requirements:**
- Validate all API responses with Zod schemas
- Sanitize descriptions to prevent terminal escape sequence injection
- Never log API keys (Smithery key)
- Support HTTP_PROXY/HTTPS_PROXY environment variables
- Support NODE_EXTRA_CA_CERTS for custom CA certificates

**SRE Metrics (expose via shared metrics module):**
- registry_query_latency_ms (histogram by source)
- registry_cache_hit_ratio (gauge)
- registry_error_count (counter by source, error_type)

**Test Strategy:**

TDD approach - write tests BEFORE implementation:

1. **Unit tests for each registry source:**
   - Mock HTTP responses for official, glama, smithery APIs
   - Test normalization of raw API responses to RegistryServer type
   - Test pagination handling (cursor-based)
   - Test error handling (network errors, invalid responses, rate limits)

2. **Cache tests:**
   - Test cache hit returns data without API call
   - Test cache miss triggers API call
   - Test TTL expiration correctly invalidates entries
   - Test cache key generation is deterministic
   - Test hit ratio metrics accuracy

3. **Deduplication tests:**
   - Test npm package name matching
   - Test GitHub URL matching with different formats (https vs git@)
   - Test keeping highest popularity score
   - Test envTemplate merging from multiple sources

4. **Ranking tests:**
   - Test relevance scoring for exact vs partial matches
   - Test popularity weight contribution
   - Test verified boost
   - Test overall ranking order

5. **Integration tests:**
   - Test full search flow with mocked HTTP
   - Test parallel queries to all registries
   - Test graceful degradation when one registry fails

6. **Security tests:**
   - Test Zod validation rejects malformed responses
   - Test terminal escape sequence sanitization
   - Test no API keys in error messages or logs

Run: `pnpm --filter @mcpctl/cli test:run -- --coverage registry/`

## Subtasks

### 22.1. Define Registry Types, Zod Schemas, and Base Abstract Source Interface

**Status:** pending
**Dependencies:** None

Create the foundational types, validation schemas, and abstract base class for all registry sources, following TDD and strategy pattern principles.

**Details:**

Create the src/cli/src/registry/ directory structure. Implement types.ts with the RegistryServer, SearchOptions, and EnvVar interfaces. Define Zod schemas for validating all API responses (OfficialRegistryResponseSchema, GlamaResponseSchema, SmitheryResponseSchema) to enforce security validation. Create base.ts with the abstract RegistrySource class, including a name property, a search() method, and a protected normalizeResult() method. Include a terminal escape sequence sanitization utility in types.ts. Write comprehensive Vitest tests BEFORE implementation: test type guards, Zod schema validation with valid/invalid inputs, and sanitization of malicious strings containing ANSI escape codes. Add category tags, including data platform categories (bigquery, snowflake, dbt). Export everything via an index.ts barrel file.

### 22.2. Implement Individual Registry Sources with HTTP Client and Proxy Support

**Status:** pending
**Dependencies:** 22.1

Implement the three concrete registry source classes (OfficialRegistrySource, GlamaRegistrySource, SmitheryRegistrySource) with proper HTTP handling, proxy support, and response normalization.

**Details:**

Create sources/official.ts for https://registry.modelcontextprotocol.io/v0/servers - implement cursor-based pagination and normalize responses to the RegistryServer type. Create sources/glama.ts for https://glama.ai/api/mcp/v1/servers - handle JSON schema env vars and cursor pagination. Create sources/smithery.ts for https://registry.smithery.ai/servers - optional API key from config, graceful fallback if unauthorized, and handling for verified badges and analytics. Implement a shared HTTP client utility supporting the HTTP_PROXY/HTTPS_PROXY environment variables and NODE_EXTRA_CA_CERTS for custom CA certificates. Add exponential backoff retry logic with jitter (the withRetry function). Never log API keys in error messages or debug output. Use structured logging with appropriate log levels. Write tests BEFORE implementation using mock HTTP responses.

### 22.3. Implement TTL-Based Caching with Metrics and Hit Ratio Tracking

**Status:** pending
**Dependencies:** 22.1

Build the RegistryCache class with TTL-based expiration, SHA-256 cache keys, hit/miss metrics, and integration with the SRE metrics module.

**Details:**

Create cache.ts with the RegistryCache class. Use a SHA-256 hash of the query+options JSON for cache keys. Implement TTL-based expiration with a configurable defaultTTL (default 1 hour). Track hits/misses with a getHitRatio() method returning { hits, misses, ratio }. Integrate with the shared metrics module to expose the registry_cache_hit_ratio gauge. Implement cache.clear() for testing and manual invalidation. Add cache size limits with LRU eviction if needed. Ensure thread-safety for concurrent access patterns. Write comprehensive Vitest tests BEFORE implementation covering cache behavior.

### 22.4. Implement Deduplication Logic and Ranking Algorithm

**Status:** pending
**Dependencies:** 22.1

Create the deduplication module to merge results from multiple registries, and the ranking algorithm to sort results by relevance, popularity, verification, and recency.

**Details:**

Create dedup.ts with a deduplicateResults(results: RegistryServer[]): RegistryServer[] function. Match duplicates by npm package name (exact match) first, then fall back to GitHub repositoryUrl comparison. Keep the result with the highest popularityScore when merging duplicates. Merge envTemplate arrays from multiple sources, deduplicating by env var name. Create ranking.ts with a rankResults(results: RegistryServer[], query: string): RegistryServer[] function. Implement weighted scoring: text match relevance 40%, popularity/usage 30%, verified status 20%, recency 10%. Text relevance uses fuzzy matching on name and description. Write tests BEFORE implementation with sample datasets.

### 22.5. Build Main RegistryClient Facade with Parallel Queries and SRE Metrics

**Status:** pending
**Dependencies:** 22.1, 22.2, 22.3, 22.4

Create the main RegistryClient facade class that orchestrates parallel queries across all sources; applies caching, deduplication, and ranking; and exposes SRE metrics for observability.

**Details:**

Create client.ts with the RegistryClient class implementing the facade pattern. The constructor accepts optional config for enabling/disabling specific registries, the cache TTL, and the Smithery API key. Implement search(options: SearchOptions): Promise<RegistryServer[]> that queries all enabled registries in parallel using Promise.allSettled, then applies caching, deduplication, and ranking. Expose SRE metrics via the shared metrics module: a registry_query_latency_ms histogram labeled by source, and a registry_error_count counter labeled by source and error_type. Use structured logging for all operations. Handle partial failures gracefully (return results from successful sources). Create an index.ts barrel export for a clean public API. Include comprehensive JSDoc documentation.

596 .taskmaster/tasks/task_023.md Normal file
@@ -0,0 +1,596 @@

# Task ID: 23

**Title:** Implement mcpctl discover Command

**Status:** pending

**Dependencies:** 22

**Priority:** medium

**Description:** Create the `mcpctl discover` CLI command that lets users search for MCP servers across all configured registries with filtering, multiple output formats, and an interactive browsing mode.

**Details:**

Create src/cli/src/commands/discover.ts:

```typescript
import { Command } from 'commander';
import { RegistryClient } from '../registry/client';
import { formatTable, formatJson, formatYaml } from '../utils/output';
import inquirer from 'inquirer';

export function createDiscoverCommand(): Command {
  const cmd = new Command('discover')
    .description('Search for MCP servers across registries')
    .argument('<query>', 'Search query (e.g., "slack", "database", "terraform")')
    .option('--category <category>', 'Filter by category (devops, data-platform, analytics, security)')
    .option('--verified', 'Only show verified servers')
    .option('--transport <type>', 'Filter by transport (stdio, sse)')
    .option('--registry <source>', 'Search specific registry (official, glama, smithery, all)', 'all')
    .option('--limit <n>', 'Maximum results to show', '20')
    .option('--output <format>', 'Output format (table, json, yaml)', 'table')
    .option('--interactive', 'Interactive browsing mode')
    .action(async (query, options) => {
      await discoverAction(query, options);
    });
  return cmd;
}
```

**Table Output Format:**
```
┌───────────────┬────────────────────────────────┬───────────────────────┬───────────┬──────────┬────────────┐
│ NAME          │ DESCRIPTION                    │ PACKAGE               │ TRANSPORT │ VERIFIED │ POPULARITY │
├───────────────┼────────────────────────────────┼───────────────────────┼───────────┼──────────┼────────────┤
│ slack-mcp     │ Slack workspace integration... │ @anthropic/slack-mcp  │ stdio     │ ✓        │ ★★★★☆      │
│ slack-tools   │ Send messages, manage chan...  │ slack-mcp-server      │ stdio     │          │ ★★★☆☆      │
└───────────────┴────────────────────────────────┴───────────────────────┴───────────┴──────────┴────────────┘

Run 'mcpctl install <name>' to set up a server
```

**Implementation Details:**

```typescript
// discover-action.ts
import chalk from 'chalk';
import Table from 'cli-table3';

const CATEGORIES = ['devops', 'data-platform', 'analytics', 'security', 'productivity', 'development'] as const;

interface DiscoverOptions {
  category?: string;
  verified?: boolean;
  transport?: 'stdio' | 'sse';
  registry?: 'official' | 'glama' | 'smithery' | 'all';
  limit?: string;
  output?: 'table' | 'json' | 'yaml';
  interactive?: boolean;
}

export async function discoverAction(query: string, options: DiscoverOptions): Promise<void> {
  const client = new RegistryClient();

  const searchOptions = {
    query,
    limit: parseInt(options.limit ?? '20', 10),
    registries: options.registry === 'all'
      ? ['official', 'glama', 'smithery']
      : [options.registry],
    verified: options.verified,
    transport: options.transport,
    category: options.category,
  };

  const results = await client.search(searchOptions);

  if (results.length === 0) {
    console.log(chalk.yellow('No MCP servers found matching your query.'));
    console.log(chalk.dim('Try a different search term or remove filters.'));
    process.exit(2); // Exit code 2 = no results
  }

  if (options.interactive) {
    await interactiveMode(results);
    return;
  }

  switch (options.output) {
    case 'json':
      console.log(JSON.stringify(results, null, 2));
      break;
    case 'yaml':
      console.log(formatYaml(results));
      break;
    default:
      printTable(results);
      console.log(chalk.cyan("\nRun 'mcpctl install <name>' to set up a server"));
  }
}

function printTable(servers: RegistryServer[]): void {
  const table = new Table({
    head: ['NAME', 'DESCRIPTION', 'PACKAGE', 'TRANSPORT', 'VERIFIED', 'POPULARITY'],
    colWidths: [18, 35, 25, 10, 9, 12],
    wordWrap: true,
  });

  for (const server of servers) {
    table.push([
      server.name,
      truncate(server.description, 32),
      server.packages.npm ?? server.packages.pypi ?? '-',
      server.transport,
      server.verified ? chalk.green('✓') : '',
      popularityStars(server.popularityScore),
    ]);
  }

  console.log(table.toString());
}

function popularityStars(score: number): string {
  const stars = Math.round(score / 20); // 0-100 -> 0-5 stars
  return '★'.repeat(stars) + '☆'.repeat(5 - stars);
}
```

**Interactive Mode with Inquirer:**
```typescript
async function interactiveMode(servers: RegistryServer[]): Promise<void> {
  const { selected } = await inquirer.prompt([
    {
      type: 'list',
      name: 'selected',
      message: 'Select an MCP server to install:',
      choices: servers.map(s => ({
        name: `${s.name} - ${truncate(s.description, 50)} ${s.verified ? '✓' : ''}`,
        value: s.name,
      })),
      pageSize: 15,
    },
  ]);

  const { confirm } = await inquirer.prompt([
    {
      type: 'confirm',
      name: 'confirm',
      message: `Install ${selected}?`,
      default: true,
    },
  ]);

  if (confirm) {
    // Trigger install command
    const installCmd = await import('./install');
    await installCmd.installAction(selected, {});
  }
}
```

**Exit Codes for Scripting:**
- 0: Success, results found
- 1: Error (network, API, etc.)
- 2: No results found

**Category Inference for Data Analyst Tools:**
Include categories relevant to BI/analytics:
- 'data-platform': BigQuery, Snowflake, Databricks, dbt
- 'analytics': Tableau, Looker, Metabase
- 'database': PostgreSQL, MySQL, MongoDB tools

**Test Strategy:**

TDD approach - write tests BEFORE implementation:

1. **Command parsing tests:**
   - Test all option combinations parse correctly
   - Test query argument is required
   - Test invalid transport value rejected
   - Test invalid registry value rejected
   - Test limit parsed as integer

2. **Output formatting tests:**
   - Test table format with varying description lengths
   - Test table truncation at specified width
   - Test JSON output is valid JSON array
   - Test YAML output is valid YAML
   - Test popularity score to stars conversion (0-100 -> 0-5 stars)
   - Test verified badge displays correctly

3. **Interactive mode tests (mock inquirer):**
   - Test server list displayed as choices
   - Test selection triggers install confirmation
   - Test cancel does not trigger install
   - Test pagination with >15 results

4. **Exit code tests:**
   - Test exit(0) when results found
   - Test exit(1) on registry client error
   - Test exit(2) when no results match

5. **Integration tests:**
   - Test full command execution with mocked RegistryClient
   - Test --verified filter reduces results
   - Test --category filter applies correctly
   - Test --registry limits to single source

6. **Filter combination tests:**
   - Test verified + transport + category combined
   - Test filters with no matches returns empty

Run: `pnpm --filter @mcpctl/cli test:run -- --coverage commands/discover`

## Subtasks

### 23.1. Write TDD Test Suites for Command Parsing, Option Validation, and Exit Codes

**Status:** pending
**Dependencies:** None

Create comprehensive Vitest test suites for the discover command's argument parsing, option validation, and exit code behavior BEFORE implementation, following the project's TDD approach.

**Details:**

Create src/cli/tests/unit/commands/discover.test.ts with the following test categories:

**Command Parsing Tests:**
- Test 'mcpctl discover' without a query argument shows an error and exits with code 1 (usage errors fall under the documented "error" exit code; code 2 is reserved for "no results")
- Test 'mcpctl discover slack' parses query correctly as 'slack'
- Test 'mcpctl discover "database tools"' handles quoted multi-word queries
- Test query argument is accessible in action handler

**Option Validation Tests:**
- Test --category accepts valid values: 'devops', 'data-platform', 'analytics', 'security', 'productivity', 'development'
- Test --category with invalid value shows error listing valid options
- Test --verified flag sets verified=true in options
- Test --transport accepts 'stdio' and 'sse' only, rejects invalid values
- Test --registry accepts 'official', 'glama', 'smithery', 'all' (default), rejects others
- Test --limit parses as integer (e.g., '20' -> 20)
- Test --limit with non-numeric value shows validation error
- Test --output accepts 'table', 'json', 'yaml', rejects others
- Test --interactive flag sets interactive=true

**Default Values Tests:**
- Test --registry defaults to 'all' when not specified
- Test --limit defaults to '20' when not specified
- Test --output defaults to 'table' when not specified

**Exit Code Tests:**
- Test exit code 0 when results are found
- Test exit code 1 on RegistryClient errors (network, API failures)
- Test exit code 2 when no results match query/filters

**Filter Combination Tests:**
- Test --verified + --category + --transport combined correctly
- Test all filters with empty results returns exit code 2

Create src/cli/tests/fixtures/mock-registry-client.ts with a MockRegistryClient class that returns configurable results or throws configurable errors for testing. Use Vitest mock functions to capture calls and verify correct option passing.

All tests should initially fail (TDD red phase), as the discover command doesn't exist yet.

### 23.2. Write TDD Test Suites for Output Formatters with Security Sanitization

**Status:** pending
**Dependencies:** 23.1

Create comprehensive Vitest test suites for all three output formats (table, JSON, YAML), popularity star rendering, description truncation, and critical security tests for terminal escape sequence sanitization.

**Details:**

Create src/cli/tests/unit/commands/discover-output.test.ts with the following test categories:

**Table Output Tests:**
- Test table header contains: NAME, DESCRIPTION, PACKAGE, TRANSPORT, VERIFIED, POPULARITY
- Test table column widths match spec: 18, 35, 25, 10, 9, 12
- Test word wrapping works for long descriptions
- Test description truncation at 32 characters with ellipsis
- Test verified=true shows green checkmark (chalk.green('✓'))
- Test verified=false shows empty string
- Test footer shows "Run 'mcpctl install <name>' to set up a server"
- Test empty results array shows yellow 'No MCP servers found' message

**Popularity Stars Tests (popularityStars function):**
- Test score 0 returns '☆☆☆☆☆' (0 filled stars)
- Test score 20 returns '★☆☆☆☆' (1 filled star)
- Test score 50 returns '★★★☆☆' (2.5 rounds up to 3 stars - verify rounding)
- Test score 100 returns '★★★★★' (5 filled stars)
- Test intermediate values: 10->1, 30->2, 60->3, 80->4

**JSON Output Tests:**
- Test JSON output is valid JSON (passes JSON.parse())
- Test JSON output is pretty-printed with 2-space indentation
- Test JSON array contains all RegistryServer fields
- Test JSON is jq-parseable: 'echo output | jq .[]' works
- Test --output json does NOT print footer message

**YAML Output Tests:**
- Test YAML output is valid YAML (passes yaml.load())
- Test YAML output uses formatYaml utility from utils/output
- Test --output yaml does NOT print footer message

**SECURITY - Terminal Escape Sequence Sanitization Tests:**
- Test description containing ANSI codes '\x1b[31mRED\x1b[0m' is sanitized
- Test description containing '\033[1mBOLD\033[0m' is sanitized
- Test name containing escape sequences is sanitized
- Test package name containing escape sequences is sanitized
- Test sanitization removes all ESC-initiated sequences (\x1b[ and its octal spelling \033[)
- Test sanitization preserves normal text content
- Test prevents cursor movement codes (\x1b[2J screen clear, etc.)

**Truncate Function Tests:**
- Test truncate('short', 32) returns 'short' unchanged
- Test truncate on a string of exactly 32 characters returns it unchanged
- Test truncate('this is a very long description that exceeds limit', 32) returns 'this is a very long descripti...' (first 29 characters + '...')

Create src/cli/tests/fixtures/mock-servers.ts with sample RegistryServer objects covering edge cases: very long descriptions, special characters, potential injection strings, and missing optional fields (packages.pypi undefined).

### 23.3. Implement discover Command Definition and Action Handler with Sanitization

**Status:** pending
**Dependencies:** 23.1, 23.2

Implement the discover command using Commander.js following the project's command registration pattern, with the discoverAction handler that orchestrates RegistryClient calls, applies filters, handles errors, and sets correct exit codes.

**Details:**

Create src/cli/src/commands/discover.ts implementing the CommandModule interface from the project's command registry pattern:

```typescript
// discover.ts
import { Command } from 'commander';
import { RegistryClient } from '../registry/client';
import { sanitizeTerminalOutput } from '../utils/sanitize';
import { DiscoverOptions, CATEGORIES } from './discover-types';
import { printResults } from './discover-output';

const VALID_TRANSPORTS = ['stdio', 'sse'] as const;
const VALID_REGISTRIES = ['official', 'glama', 'smithery', 'all'] as const;
const VALID_OUTPUT_FORMATS = ['table', 'json', 'yaml'] as const;

export function createDiscoverCommand(): Command {
  const cmd = new Command('discover')
    .description('Search for MCP servers across registries')
    .argument('<query>', 'Search query (e.g., "slack", "database", "terraform")')
    .option('--category <category>', `Filter by category (${CATEGORIES.join(', ')})`)
    .option('--verified', 'Only show verified servers')
    .option('--transport <type>', 'Filter by transport (stdio, sse)')
    .option('--registry <source>', 'Search specific registry (official, glama, smithery, all)', 'all')
    .option('--limit <n>', 'Maximum results to show', '20')
    .option('--output <format>', 'Output format (table, json, yaml)', 'table')
    .option('--interactive', 'Interactive browsing mode')
    .action(async (query, options) => {
      await discoverAction(query, options);
    });
  return cmd;
}
```

Create src/cli/src/commands/discover-action.ts:
```typescript
export async function discoverAction(query: string, options: DiscoverOptions): Promise<void> {
  // 1. Validate options (transport, registry, output, category)
  // 2. Parse limit as an integer, with validation
  // 3. Build SearchOptions for RegistryClient
  // 4. Call client.search() wrapped in try/catch
  // 5. Handle empty results -> exit code 2
  // 6. Handle network/API errors -> exit code 1 with structured logging
  // 7. Sanitize all string fields in results (prevent terminal injection)
  // 8. Delegate to printResults() or interactiveMode() based on options
}
```

Create src/cli/src/utils/sanitize.ts:
```typescript
export function sanitizeTerminalOutput(text: string): string {
  // Strip ANSI/CSI escape sequences (\x1b[ ... letter). Note that \033 is
  // the octal spelling of the same ESC byte as \x1b, so one pattern covers
  // both notations. Then strip remaining control characters while
  // preserving legitimate text content.
  return text
    .replace(/\x1b\[[0-9;]*[a-zA-Z]/g, '')
    .replace(/[\x00-\x08\x0B\x0C\x0E-\x1F]/g, '');
}

export function sanitizeServerResult(server: RegistryServer): RegistryServer {
  return {
    ...server,
    name: sanitizeTerminalOutput(server.name),
    description: sanitizeTerminalOutput(server.description),
    // sanitize other user-facing string fields
  };
}
```

Create src/cli/src/commands/discover-types.ts with the TypeScript interfaces and constants.

Register the discover command via the CommandRegistry, following the existing patterns in src/cli/src/commands/.

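
The sanitizer's behavior on hostile input can be exercised standalone. This copy re-declares the function so the snippet runs without the CLI package; since `\033` denotes the same ESC byte as `\x1b`, one pattern handles both spellings.

```typescript
function sanitizeTerminalOutput(text: string): string {
  return text
    .replace(/\x1b\[[0-9;]*[a-zA-Z]/g, '')         // ANSI/CSI sequences
    .replace(/[\x00-\x08\x0B\x0C\x0E-\x1F]/g, ''); // other control chars
}

// Color injection and screen-clear attempts are neutralized
console.log(sanitizeTerminalOutput('\x1b[31mRED\x1b[0m text')); // RED text
console.log(sanitizeTerminalOutput('\x1b[2Jwiped'));            // wiped
```
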
### 23.4. Implement Output Formatters: Table with cli-table3, JSON, and YAML
|
||||
|
||||
**Status:** pending
|
||||
**Dependencies:** 23.2, 23.3
|
||||
|
||||
Implement the three output format handlers (table, JSON, YAML) including the popularity stars renderer, description truncation, verified badge display, and footer message. Table uses cli-table3 with specified column widths.
|
||||
|
||||
**Details:**
|
||||
|
||||
Create src/cli/src/commands/discover-output.ts:
|
||||
|
||||
```typescript
|
||||
import chalk from 'chalk';
|
||||
import Table from 'cli-table3';
|
||||
import { RegistryServer } from '../registry/types';
|
||||
import { formatYaml } from '../utils/output';
|
||||
|
||||
export function printResults(servers: RegistryServer[], format: 'table' | 'json' | 'yaml'): void {
|
||||
switch (format) {
|
||||
case 'json':
|
||||
printJsonOutput(servers);
|
||||
break;
|
||||
case 'yaml':
|
||||
printYamlOutput(servers);
|
||||
break;
|
||||
default:
|
||||
printTableOutput(servers);
|
||||
console.log(chalk.cyan("\nRun 'mcpctl install <name>' to set up a server"));
|
||||
}
|
||||
}
|
||||
|
||||
function printTableOutput(servers: RegistryServer[]): void {
|
||||
const table = new Table({
|
||||
head: ['NAME', 'DESCRIPTION', 'PACKAGE', 'TRANSPORT', 'VERIFIED', 'POPULARITY'],
|
||||
colWidths: [18, 35, 25, 10, 9, 12],
|
||||
wordWrap: true,
|
||||
style: { head: ['cyan'] }
|
||||
});
|
||||
|
||||
for (const server of servers) {
|
||||
table.push([
|
||||
server.name,
|
||||
truncate(server.description, 32),
|
||||
getPackageName(server.packages),
|
||||
server.transport,
|
||||
server.verified ? chalk.green('✓') : '',
|
||||
popularityStars(server.popularityScore),
|
||||
]);
|
||||
}
|
||||
|
||||
console.log(table.toString());
|
||||
}
|
||||
|
||||
function printJsonOutput(servers: RegistryServer[]): void {
|
||||
console.log(JSON.stringify(servers, null, 2));
|
||||
}
|
||||
|
||||
function printYamlOutput(servers: RegistryServer[]): void {
|
||||
console.log(formatYaml(servers));
|
||||
}
|
||||
|
||||
export function truncate(text: string, maxLength: number): string {
|
||||
if (text.length <= maxLength) return text;
|
||||
return text.slice(0, maxLength - 3) + '...';
|
||||
}
|
||||
|
||||
export function popularityStars(score: number): string {
|
||||
const stars = Math.round(score / 20); // 0-100 -> 0-5 stars
|
||||
return '★'.repeat(stars) + '☆'.repeat(5 - stars);
|
||||
}
|
||||
|
||||
function getPackageName(packages: RegistryServer['packages']): string {
|
||||
return packages.npm ?? packages.pypi ?? packages.docker ?? '-';
|
||||
}
|
||||
```

Create src/cli/src/commands/discover-no-results.ts for handling empty results:

```typescript
import chalk from 'chalk';

export function printNoResults(): void {
  console.log(chalk.yellow('No MCP servers found matching your query.'));
  console.log(chalk.dim('Try a different search term or remove filters.'));
}
```

Ensure the formatYaml utility exists in src/cli/src/utils/output.ts (create it if Task 7 has not already done so). Install the cli-table3 dependency: `pnpm --filter @mcpctl/cli add cli-table3`.
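If formatYaml does not exist yet, a minimal sketch is below. It assumes each server serializes as a flat object; a real implementation would more likely wrap a YAML library such as the `yaml` package, but for flat objects the JSON scalar syntax emitted here is also valid YAML.

```typescript
// Hypothetical minimal formatYaml for src/cli/src/utils/output.ts.
// Assumption: items are flat objects; nested values fall back to JSON
// inline syntax, which YAML also accepts.
export function formatYaml(items: Record<string, unknown>[]): string {
  return items
    .map(item =>
      Object.entries(item)
        .map(([key, value], i) => `${i === 0 ? '- ' : '  '}${key}: ${JSON.stringify(value)}`)
        .join('\n')
    )
    .join('\n');
}
```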

**Data Analyst/BI Category Support:**

Ensure the CATEGORIES constant includes categories relevant to data analysts:

- 'data-platform': BigQuery, Snowflake, Databricks, dbt
- 'analytics': Tableau, Looker, Metabase, Power BI
- 'database': PostgreSQL, MySQL, MongoDB connectors
- 'visualization': Grafana, Superset integrations

This supports the Data Analyst persona requirement from the task context.
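The list above can be captured as a constant; a sketch follows. The constant name matches the text, but the exact shape of the project's real CATEGORIES (which likely contains further categories) is an assumption.

```typescript
// Sketch of the data-analyst additions to CATEGORIES; the project's real
// constant likely includes more entries than shown here.
export const CATEGORIES = [
  'data-platform',  // BigQuery, Snowflake, Databricks, dbt
  'analytics',      // Tableau, Looker, Metabase, Power BI
  'database',       // PostgreSQL, MySQL, MongoDB connectors
  'visualization',  // Grafana, Superset integrations
] as const;

export type Category = (typeof CATEGORIES)[number];
```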

### 23.5. Implement Interactive Mode with Inquirer and Install Integration

**Status:** pending
**Dependencies:** 23.3, 23.4

Implement the interactive browsing mode using Inquirer.js that allows users to scroll through results, select a server, confirm installation, and trigger the install command. Include graceful handling of user cancellation.

**Details:**

Create src/cli/src/commands/discover-interactive.ts:

```typescript
import inquirer from 'inquirer';
import chalk from 'chalk';
import { RegistryServer } from '../registry/types';
import { truncate } from './discover-output';

export async function interactiveMode(servers: RegistryServer[]): Promise<void> {
  // Step 1: Display server selection list
  const { selected } = await inquirer.prompt([
    {
      type: 'list',
      name: 'selected',
      message: 'Select an MCP server to install:',
      choices: servers.map(s => ({
        name: formatChoice(s),
        value: s.name,
      })),
      pageSize: 15, // Show 15 items before scrolling
    },
  ]);

  // Step 2: Show server details and confirm installation
  const selectedServer = servers.find(s => s.name === selected);
  if (selectedServer) {
    console.log(chalk.dim('\nSelected server details:'));
    console.log(chalk.dim(`  Description: ${selectedServer.description}`));
    console.log(chalk.dim(`  Package: ${selectedServer.packages.npm ?? selectedServer.packages.pypi ?? '-'}`));
    console.log(chalk.dim(`  Transport: ${selectedServer.transport}`));
  }

  const { confirm } = await inquirer.prompt([
    {
      type: 'confirm',
      name: 'confirm',
      message: `Install ${selected}?`,
      default: true,
    },
  ]);

  if (confirm) {
    // Dynamically import the install command to avoid circular dependencies
    const { installAction } = await import('./install');
    await installAction([selected], {}); // Pass as an array per the install command spec
  } else {
    console.log(chalk.dim('Installation cancelled.'));
  }
}

function formatChoice(server: RegistryServer): string {
  const verifiedBadge = server.verified ? chalk.green(' ✓') : '';
  const description = truncate(server.description, 50);
  return `${server.name} - ${description}${verifiedBadge}`;
}
```

Create src/cli/tests/unit/commands/discover-interactive.test.ts with mocked inquirer tests:

- Test server list displayed as scrollable choices
- Test selection triggers install confirmation prompt
- Test confirm=true triggers installAction with correct server name
- Test confirm=false outputs 'Installation cancelled' and exits gracefully
- Test pagination works with >15 results (pageSize check)
- Test Ctrl+C cancellation is handled gracefully (inquirer throws on SIGINT)
- Test formatChoice includes verified badge for verified servers
- Test formatChoice truncates long descriptions correctly

Update src/cli/src/commands/discover-action.ts to call interactiveMode when options.interactive is true:

```typescript
if (options.interactive) {
  await interactiveMode(sanitizedResults);
  return;
}
```

**Error Handling:**

- Wrap inquirer prompts in try/catch to handle Ctrl+C gracefully
- Exit with code 0 on user cancellation (not an error)
- Log a structured message on cancellation for SRE observability
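The cancellation handling above can be sketched as a small predicate plus a wrapper. The `ExitPromptError` name is what newer inquirer releases reject with on Ctrl+C and is an assumption here; both helper names are illustrative.

```typescript
// Assumption: on Ctrl+C, inquirer rejects with an error named 'ExitPromptError'
// (newer @inquirer releases); adjust the check for the inquirer version in use.
export function isPromptCancellation(err: unknown): boolean {
  return err instanceof Error && err.name === 'ExitPromptError';
}

// Wraps an interactive flow so user cancellation maps to exit code 0.
export async function runInteractiveSafely(run: () => Promise<void>): Promise<number> {
  try {
    await run();
    return 0;
  } catch (err) {
    if (isPromptCancellation(err)) {
      // Structured message for SRE observability
      console.log(JSON.stringify({ event: 'discover_interactive_cancelled' }));
      return 0; // user cancellation is not an error
    }
    throw err;
  }
}
```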

**Integration with Install Command:**

- The install command (Task 24) may not exist yet; create a stub if needed
- src/cli/src/commands/install.ts stub: `export async function installAction(servers: string[], options: {}): Promise<void> { console.log('Install not implemented yet'); }`
---

<!-- New file: .taskmaster/tasks/task_024.md (698 lines) -->

# Task ID: 24

**Title:** Implement mcpctl install with LLM-Assisted Auto-Configuration

**Status:** pending

**Dependencies:** 22, 23

**Priority:** medium

**Description:** Create the `mcpctl install <server-name>` command that uses a local LLM to automatically read MCP server documentation, generate envTemplate/setup guides/profiles, and walk users through configuration with validation.

**Details:**

Create src/cli/src/commands/install.ts:

```typescript
import { Command } from 'commander';
import { RegistryClient } from '../registry/client';
import { LLMProvider } from '../llm/provider';
import { SetupWizard } from '../setup/wizard';
import { McpdClient } from '../api/mcpd-client';

export function createInstallCommand(): Command {
  const cmd = new Command('install')
    .description('Install and configure an MCP server')
    .argument('<servers...>', 'Server name(s) from registry')
    .option('--non-interactive', 'Use env vars for credentials, no prompts')
    .option('--profile-name <name>', 'Name for the created profile')
    .option('--project <name>', 'Auto-add to this project')
    .option('--dry-run', 'Show configuration without applying')
    .option('--skip-llm', 'Only use registry metadata, no LLM analysis')
    .action(async (servers, options) => {
      await installAction(servers, options);
    });
  return cmd;
}
```

**Installation Flow:**

```typescript
// install-action.ts
import chalk from 'chalk';

export async function installAction(
  serverNames: string[],
  options: InstallOptions
): Promise<void> {
  const registry = new RegistryClient();
  const mcpd = new McpdClient();
  const llm = await getLLMProvider(); // From Task 12 config

  for (const serverName of serverNames) {
    console.log(chalk.blue(`\nInstalling ${serverName}...`));

    // Step 1: Fetch server metadata from registry
    const serverMeta = await registry.getServer(serverName);
    if (!serverMeta) {
      console.error(chalk.red(`Server '${serverName}' not found in registries`));
      continue;
    }

    // Step 2: Check if envTemplate is complete
    let envTemplate = serverMeta.envTemplate;
    let setupGuide = serverMeta.setupGuide;
    let defaultProfiles: ProfileConfig[] = [];

    const needsLLMAnalysis = (
      !options.skipLlm &&
      (!envTemplate || envTemplate.length === 0 || hasIncompleteEnvVars(envTemplate))
    );

    // Step 3: LLM-assisted configuration generation
    if (needsLLMAnalysis && serverMeta.repositoryUrl) {
      console.log(chalk.dim('Analyzing server documentation with LLM...'));

      const readme = await fetchReadme(serverMeta.repositoryUrl);
      const llmResult = await analyzeWithLLM(llm, readme, serverMeta);

      // Merge LLM results with registry data
      envTemplate = mergeEnvTemplates(envTemplate, llmResult.envTemplate);
      setupGuide = llmResult.setupGuide || setupGuide;
      defaultProfiles = llmResult.profiles || [];
    }

    if (options.dryRun) {
      printDryRun(serverMeta, envTemplate, setupGuide, defaultProfiles);
      continue;
    }

    // Step 4: Register MCP server in mcpd
    // (throw is a statement, not an expression, so the missing-package case
    // is handled before the registerServer call rather than inside the ternary)
    const command = serverMeta.packages.npm
      ? `npx -y ${serverMeta.packages.npm}`
      : serverMeta.packages.docker
        ? `docker run ${serverMeta.packages.docker}`
        : null;
    if (!command) {
      console.error(chalk.red(`No package source available for ${serverMeta.name}`));
      continue;
    }
    const registeredServer = await mcpd.registerServer({
      name: serverMeta.name,
      command,
      envTemplate,
      transport: serverMeta.transport,
    });

    // Step 5: Run setup wizard to collect credentials
    const wizard = new SetupWizard(envTemplate, { nonInteractive: options.nonInteractive });
    const credentials = await wizard.run();

    // Step 6: Create profile
    const profileName = options.profileName || `${serverMeta.name}-default`;
    const profile = await mcpd.createProfile({
      name: profileName,
      serverId: registeredServer.id,
      config: credentials,
    });

    // Step 7: Optionally add to project
    if (options.project) {
      await mcpd.addProfileToProject(options.project, profile.id);
      console.log(chalk.green(`Added to project '${options.project}'`));
    }

    console.log(chalk.green(`✓ ${serverMeta.name} installed successfully`));
    console.log(chalk.dim(`  Profile: ${profileName}`));
  }
}
```
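mergeEnvTemplates is called in the flow above but not defined. A minimal sketch, matching the merge tests later in this task (registry entries win conflicts, entries deduplicated by env var name):

```typescript
// Sketch: registry entries take precedence over LLM-derived ones; dedupe by name.
export function mergeEnvTemplates<T extends { name: string }>(
  fromRegistry: T[] | undefined,
  fromLlm: T[] | undefined
): T[] {
  const byName = new Map<string, T>();
  for (const entry of fromLlm ?? []) byName.set(entry.name, entry);
  for (const entry of fromRegistry ?? []) byName.set(entry.name, entry); // registry wins
  return [...byName.values()];
}
```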

**LLM Analysis Implementation:**

```typescript
// llm-analyzer.ts
import { z } from 'zod';
import chalk from 'chalk';

const LLMAnalysisSchema = z.object({
  envTemplate: z.array(z.object({
    name: z.string(),
    description: z.string(),
    isSecret: z.boolean(),
    setupUrl: z.string().url().optional(),
    defaultValue: z.string().optional(),
  })),
  setupGuide: z.string().optional(),
  profiles: z.array(z.object({
    name: z.string(),
    description: z.string(),
    permissions: z.array(z.string()),
  })).optional(),
});

const ANALYSIS_PROMPT = `
Analyze this MCP server README and extract configuration information.

README:
{readme}

Extract and return JSON with:
1. envTemplate: Array of required environment variables with:
   - name: The env var name (e.g., SLACK_BOT_TOKEN)
   - description: What this variable is for and where to get it
   - isSecret: true if this is a secret/token/password
   - setupUrl: URL to docs for obtaining this credential (if mentioned)

2. setupGuide: Step-by-step setup instructions in markdown

3. profiles: Suggested permission profiles (e.g., read-only, admin, limited)

Return ONLY valid JSON matching this exact schema. No markdown formatting.
`;

export async function analyzeWithLLM(
  llm: LLMProvider,
  readme: string,
  serverMeta: RegistryServer
): Promise<z.infer<typeof LLMAnalysisSchema>> {
  // Sanitize README to prevent prompt injection
  const sanitizedReadme = sanitizeForLLM(readme);

  const prompt = ANALYSIS_PROMPT.replace('{readme}', sanitizedReadme);

  const response = await llm.complete(prompt, {
    maxTokens: 2000,
    temperature: 0.1, // Low temperature for structured output
  });

  // Extract JSON from response (handle markdown code blocks)
  const jsonStr = extractJSON(response);

  // Validate with Zod
  const parsed = LLMAnalysisSchema.safeParse(JSON.parse(jsonStr));
  if (!parsed.success) {
    console.warn(chalk.yellow('LLM output validation failed, using registry data only'));
    return { envTemplate: [], setupGuide: undefined, profiles: [] };
  }

  return parsed.data;
}

function sanitizeForLLM(text: string): string {
  // Remove potential prompt injection patterns. A fuller implementation would
  // first carve out fenced code blocks so these replacements cannot touch
  // legitimate code samples (see subtask 24.4).
  return text
    .replace(/\{\{.*?\}\}/g, '') // Remove template syntax
    .replace(/\[\/?INST\]/gi, '') // Remove common injection patterns
    .replace(/\[\/?SYSTEM\]/gi, '')
    .slice(0, 50000); // Limit length
}
```
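extractJSON is called above but not shown. A minimal sketch that matches the response-parsing tests later in this task (a fenced json code block is preferred; otherwise the first raw `{...}` span is taken):

```typescript
// Sketch of extractJSON: prefer a fenced code block, then fall back to the
// outermost {...} span in the raw response.
export function extractJSON(response: string): string {
  const fenced = response.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (fenced) return fenced[1].trim();
  const start = response.indexOf('{');
  const end = response.lastIndexOf('}');
  if (start === -1 || end <= start) {
    throw new Error('No JSON object found in LLM response');
  }
  return response.slice(start, end + 1);
}
```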

**GitHub README Fetching:**

```typescript
// github.ts
export async function fetchReadme(repoUrl: string): Promise<string> {
  const { owner, repo } = parseGitHubUrl(repoUrl);

  // Try common README locations
  const paths = ['README.md', 'readme.md', 'README.rst', 'README'];

  for (const path of paths) {
    try {
      const response = await fetch(
        `https://raw.githubusercontent.com/${owner}/${repo}/main/${path}`
      );
      if (response.ok) {
        return await response.text();
      }
      // Try master branch
      const masterResponse = await fetch(
        `https://raw.githubusercontent.com/${owner}/${repo}/master/${path}`
      );
      if (masterResponse.ok) {
        return await masterResponse.text();
      }
    } catch {
      continue;
    }
  }

  throw new Error(`Could not fetch README from ${repoUrl}`);
}
```
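parseGitHubUrl is used above but not defined. A minimal sketch covering the https and git@ forms exercised by the URL-parsing tests later in this task:

```typescript
// Sketch: handles https://github.com/owner/repo(.git) and
// git@github.com:owner/repo(.git), stripping the .git suffix.
export function parseGitHubUrl(repoUrl: string): { owner: string; repo: string } {
  const m = repoUrl.match(/github\.com[/:]([^/]+)\/([^/]+?)(?:\.git)?\/?$/);
  if (!m) {
    throw new Error(`Invalid GitHub repository URL: ${repoUrl}`);
  }
  return { owner: m[1], repo: m[2] };
}
```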

**Security Considerations:**

- Sanitize LLM outputs before use (prevent prompt injection from malicious READMEs)
- Validate generated envTemplate with the Zod schema
- Never auto-execute commands suggested by the LLM without explicit user approval
- Log LLM interactions for audit (without sensitive data)
- Rate limit LLM calls to prevent abuse

**Data Platform Auth Pattern Recognition:**

The LLM should understand complex auth patterns commonly found in data tools:

- Service account JSON (GCP BigQuery, Vertex AI)
- Connection strings (Snowflake, Databricks)
- OAuth flows (dbt Cloud, Tableau)
- IAM roles (AWS Redshift, Athena)
- API keys with scopes (Fivetran, Airbyte)
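One way to teach the model these patterns is a hint block appended to the analysis prompt. A sketch; the constant name and wording are illustrative, not part of the spec:

```typescript
// Hypothetical hint block to append to ANALYSIS_PROMPT so the model maps
// README prose onto well-known data-platform credential patterns.
export const PLATFORM_AUTH_HINTS = [
  'Common data-platform auth patterns to recognize:',
  '- GOOGLE_APPLICATION_CREDENTIALS: path to a GCP service account JSON file',
  '- SNOWFLAKE_ACCOUNT / SNOWFLAKE_USER: parts of a Snowflake connection',
  '- DATABRICKS_HOST / DATABRICKS_TOKEN: Databricks workspace credentials',
  '- AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, or an IAM role ARN',
  '- OAuth client id/secret pairs (dbt Cloud, Tableau)',
  '- Scoped API keys (Fivetran, Airbyte)',
].join('\n');
```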

**Test Strategy:**

TDD approach - write tests BEFORE implementation:

1. **Command parsing tests:**
   - Test single server argument
   - Test multiple servers (batch install)
   - Test all options parse correctly
   - Test --non-interactive and --dry-run flags

2. **Registry fetch tests:**
   - Test successful server lookup
   - Test server-not-found handling
   - Test registry error handling

3. **LLM prompt generation tests:**
   - Test prompt template populated correctly
   - Test README truncation at 50k chars
   - Test sanitization removes injection patterns
   - Test code blocks preserved in sanitization

4. **LLM response parsing tests:**
   - Test valid JSON extraction from plain response
   - Test JSON extraction from markdown code blocks
   - Test Zod validation accepts valid schema
   - Test Zod validation rejects invalid schema
   - Test graceful fallback on validation failure

5. **GitHub README fetch tests:**
   - Test main branch fetch
   - Test master branch fallback
   - Test different README filename handling
   - Test repository URL parsing (https, git@)
   - Test fetch failure handling

6. **envTemplate merge tests:**
   - Test LLM results merged with registry data
   - Test LLM results don't override existing registry data
   - Test deduplication by env var name

7. **Full install flow tests:**
   - Test complete flow with mocked dependencies
   - Test dry-run shows config without applying
   - Test skip-llm uses registry data only
   - Test non-interactive uses env vars
   - Test batch install processes all servers

8. **Security tests:**
   - Test prompt injection patterns sanitized
   - Test malformed LLM output rejected
   - Test no command auto-execution

9. **Data platform tests:**
   - Test recognition of service account JSON patterns
   - Test recognition of connection string patterns
   - Test OAuth flow detection

Run: `pnpm --filter @mcpctl/cli test:run -- --coverage commands/install`

## Subtasks

### 24.1. Write TDD Test Suites for Install Command Parsing, GitHub README Fetching, and Core Types

**Status:** pending
**Dependencies:** None

Create comprehensive Vitest test suites for the install command's CLI parsing, the GitHub README fetching module with proxy support, and foundational types/Zod schemas BEFORE implementation, following the project's strict TDD approach.

**Details:**

Create the src/cli/tests/unit/commands/install/ directory with test files. Write tests for:

1. **Command Parsing Tests** (install.test.ts):
   - Test single server argument parsing
   - Test multiple servers (batch install): `mcpctl install slack jira github`
   - Test all options parse correctly: --non-interactive, --profile-name, --project, --dry-run, --skip-llm
   - Test required argument validation (exits with code 2 if no server specified)
   - Test option combinations are mutually compatible

2. **GitHub README Fetching Tests** (github-fetcher.test.ts):
   - Test parseGitHubUrl() extracts owner/repo from various URL formats (https://github.com/owner/repo, git@github.com:owner/repo.git)
   - Test fetchReadme() tries multiple paths: README.md, readme.md, README.rst, README
   - Test branch fallback: main -> master
   - Test HTTP_PROXY/HTTPS_PROXY environment variable support using undici ProxyAgent
   - Test custom CA certificate support (NODE_EXTRA_CA_CERTS)
   - Test GitHub rate limit handling (403 with X-RateLimit-Remaining: 0) with exponential backoff
   - Test timeout handling (30s default) with AbortController
   - Create test fixtures: mock README responses in src/cli/tests/fixtures/readmes/

3. **Type and Schema Tests** (types.test.ts):
   - Test InstallOptions Zod schema validates all fields
   - Test EnvTemplateEntry schema requires name, description, isSecret
   - Test LLMAnalysisResult schema validates envTemplate array, setupGuide string, profiles array
   - Test ProfileConfig schema validates name, description, permissions array

4. **Mock Infrastructure**:
   - Create MockRegistryClient in src/cli/tests/mocks/ that implements the RegistryClient interface
   - Create MockLLMProvider that returns deterministic responses for testing
   - Create MockMcpdClient for testing server registration and profile creation
   - Use msw (Mock Service Worker) for GitHub API mocking

All tests must fail initially (red phase) with 'module not found' or 'function not implemented' errors.

### 24.2. Write TDD Test Suites for LLM Analysis with Security Sanitization and Data Platform Auth Recognition

**Status:** pending
**Dependencies:** 24.1

Create comprehensive Vitest test suites for the LLM-based README analysis module, focusing on prompt injection prevention, output validation with Zod, and recognition of complex data platform authentication patterns (BigQuery service accounts, Snowflake connection strings, dbt OAuth).

**Details:**

Create src/cli/tests/unit/llm/analyzer.test.ts with comprehensive test coverage:

1. **Prompt Sanitization Tests** (SECURITY CRITICAL):
   - Test sanitizeForLLM() removes prompt injection patterns: [INST], [/SYSTEM], </s>, <|endoftext|>, <<SYS>>
   - Test removal of template syntax: {{variable}}, ${command}
   - Test preservation of legitimate code blocks (```typescript...```)
   - Test length truncation at 50000 characters
   - Test handling of Unicode edge cases and zero-width characters
   - Create malicious README fixtures in src/cli/tests/fixtures/readmes/malicious/

2. **LLM Output Validation Tests**:
   - Test extractJSON() handles markdown code blocks: ```json...```
   - Test extractJSON() handles raw JSON without code blocks
   - Test LLMAnalysisSchema Zod validation catches missing required fields
   - Test validation rejects envTemplate entries without the isSecret field
   - Test graceful fallback returns an empty result on validation failure (warn, don't crash)
   - Test malformed JSON handling (truncated, invalid syntax)

3. **Data Platform Auth Pattern Recognition Tests** (Principal Data Engineer focus):
   - Test BigQuery: recognizes GOOGLE_APPLICATION_CREDENTIALS, service account JSON patterns
   - Test Snowflake: recognizes SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER, connection string format
   - Test dbt Cloud: recognizes DBT_API_KEY, DBT_ACCOUNT_ID, project selection patterns
   - Test Databricks: recognizes DATABRICKS_HOST, DATABRICKS_TOKEN, cluster configuration
   - Test AWS data services: recognizes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, IAM role ARN
   - Test Fivetran/Airbyte: recognizes API key with scopes pattern
   - Create test fixtures: src/cli/tests/fixtures/readmes/data-platforms/ with realistic READMEs

4. **LLM Provider Integration Tests**:
   - Test analyzeWithLLM() uses the injected LLMProvider (dependency injection)
   - Test temperature set to 0.1 for structured output
   - Test maxTokens set appropriately (2000)
   - Test timeout handling (60s default)
   - Test circuit breaker triggers on consecutive failures

5. **Profile Generation Tests**:
   - Test default profiles extracted: read-only, admin, limited access
   - Test permissions array parsing
   - Test profile descriptions are sanitized

6. **Structured Logging Tests** (SRE focus):
   - Test LLM interactions are logged with requestId, duration_ms, input_tokens
   - Test sensitive data (API keys, tokens) is NEVER logged
   - Test README content is not logged in full (truncated in logs)

### 24.3. Implement GitHub README Fetcher with Proxy Support and Rate Limit Handling

**Status:** pending
**Dependencies:** 24.1

Implement the GitHub README fetching module with enterprise networking support (HTTP/HTTPS proxy, custom CA certs), intelligent branch detection, rate limit handling with exponential backoff, and proper error handling for network failures.

**Details:**

Create src/cli/src/install/github-fetcher.ts:

```typescript
import { ProxyAgent } from 'undici';
import { createHash } from 'crypto';

export interface GitHubFetcherConfig {
  proxyUrl?: string;        // From HTTP_PROXY/HTTPS_PROXY
  caFile?: string;          // Custom CA certificate path
  timeout?: number;         // Default 30000ms
  maxRetries?: number;      // Default 3
  rateLimitWaitMs?: number; // Default 60000ms
}

export interface ParsedGitHubUrl {
  owner: string;
  repo: string;
}

export class GitHubReadmeFetcher {
  private config: Required<GitHubFetcherConfig>;
  private agent?: ProxyAgent;

  constructor(config: Partial<GitHubFetcherConfig> = {}) { ... }

  parseGitHubUrl(repoUrl: string): ParsedGitHubUrl { ... }

  async fetchReadme(repoUrl: string): Promise<string> { ... }

  private async fetchWithRetry(url: string, attempt: number): Promise<Response> { ... }

  private handleRateLimit(response: Response): Promise<void> { ... }
}
```

**Implementation Requirements:**

1. **URL Parsing**:
   - Support HTTPS: `https://github.com/owner/repo`
   - Support HTTPS with .git: `https://github.com/owner/repo.git`
   - Support SSH: `git@github.com:owner/repo.git`
   - Extract owner and repo, strip the .git suffix
   - Throw a descriptive error for invalid URLs

2. **README Fetching**:
   - Try paths in order: README.md, readme.md, README.rst, README, Readme.md
   - Try branches in order: main, master, HEAD
   - Use raw.githubusercontent.com for content fetching
   - Return the first successful fetch

3. **Proxy Support**:
   - Detect HTTP_PROXY, HTTPS_PROXY, NO_PROXY environment variables
   - Create an undici ProxyAgent when a proxy is configured
   - Pass the agent to the fetch() dispatcher option
   - Support custom CAs via NODE_EXTRA_CA_CERTS or config.caFile

4. **Rate Limit Handling**:
   - Check the X-RateLimit-Remaining header
   - On 403 with rate limit exceeded, wait until X-RateLimit-Reset
   - Log rate limit events for SRE visibility
   - Implement exponential backoff: 1s, 2s, 4s (max 3 retries)

5. **Error Handling**:
   - Throw ReadmeNotFoundError if all paths fail
   - Throw NetworkError on connection failures
   - Throw RateLimitError if exhausted after retries
   - Include the repository URL in all error messages

6. **SRE Metrics**:
   - Export metrics: github_fetch_duration_seconds, github_rate_limit_remaining gauge
   - Log structured events: { event: 'github_fetch', repo, branch, path, duration_ms }
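The backoff schedule in requirement 4 can be made explicit. A small sketch of the delay computation fetchWithRetry might use; the helper names are assumptions:

```typescript
// Exponential backoff per the spec above: 1s, 2s, 4s for up to maxRetries attempts.
export function retryDelaysMs(maxRetries = 3): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) => 1000 * 2 ** attempt);
}

// Sleep helper fetchWithRetry would await between attempts.
export function backoff(attempt: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, 1000 * 2 ** attempt));
}
```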

### 24.4. Implement LLM-Based README Analyzer with Secure Prompt Construction and Zod Validation

**Status:** pending
**Dependencies:** 24.1, 24.2, 24.3

Implement the LLM analysis module that processes MCP server READMEs to extract environment templates, setup guides, and suggested profiles using the pluggable LLMProvider interface with robust input sanitization and output validation.

**Details:**

Create src/cli/src/install/llm-analyzer.ts:

```typescript
import { z } from 'zod';
import type { LLMProvider } from '../llm/provider';
import type { RegistryServer } from '../registry/types';

export const EnvTemplateEntrySchema = z.object({
  name: z.string().min(1),
  description: z.string().min(10),
  isSecret: z.boolean(),
  setupUrl: z.string().url().optional(),
  defaultValue: z.string().optional(),
  validation: z.enum(['required', 'optional', 'conditional']).optional(),
});

export const ProfileConfigSchema = z.object({
  name: z.string().min(1),
  description: z.string(),
  permissions: z.array(z.string()),
});

export const LLMAnalysisSchema = z.object({
  envTemplate: z.array(EnvTemplateEntrySchema),
  setupGuide: z.string().optional(),
  profiles: z.array(ProfileConfigSchema).optional(),
});

export type LLMAnalysisResult = z.infer<typeof LLMAnalysisSchema>;

export class LLMReadmeAnalyzer {
  constructor(
    private llmProvider: LLMProvider,
    private logger: StructuredLogger
  ) {}

  async analyze(
    readme: string,
    serverMeta: RegistryServer
  ): Promise<LLMAnalysisResult> { ... }

  sanitizeForLLM(text: string): string { ... }

  private buildPrompt(readme: string, serverMeta: RegistryServer): string { ... }

  private extractJSON(response: string): string { ... }

  private validateAndParse(jsonStr: string): LLMAnalysisResult { ... }
}
```

**Implementation Requirements:**

1. **Input Sanitization** (SECURITY CRITICAL):
   - Remove prompt injection patterns: `[INST]`, `[/INST]`, `<<SYS>>`, `<</SYS>>`, `</s>`, `<|endoftext|>`, `<|im_start|>`, `<|im_end|>`
   - Remove template syntax: `{{...}}`, `${...}`, `<%...%>`
   - Preserve code blocks (```...```) for context
   - Truncate to 50000 characters with a warning log
   - Normalize Unicode (NFKC) to prevent homoglyph attacks
   - Log sanitization actions for audit

2. **Prompt Construction**:
   - Use a structured prompt template requesting JSON output
   - Include serverMeta context (name, type, existing envTemplate if partial)
   - Request specific fields: envTemplate, setupGuide, profiles
   - Include examples of data platform auth patterns in the prompt
   - Set temperature=0.1 for deterministic output
   - Set maxTokens=2000

3. **Response Processing**:
   - Extract JSON from markdown code blocks if present
   - Handle raw JSON without code blocks
   - Validate with the Zod schema
   - Return an empty result on validation failure (graceful degradation)
   - Log validation errors for debugging

4. **Data Platform Auth Recognition**:
   - Include prompt context about common patterns:
     - GCP: Service account JSON files, GOOGLE_APPLICATION_CREDENTIALS
     - AWS: Access keys, IAM roles, STS assume role
     - Azure: Service principals, managed identity
     - Snowflake: Account URL, OAuth, key-pair auth
     - Databricks: Personal access tokens, OAuth M2M
     - dbt Cloud: API tokens with account/project scoping

5. **Error Handling**:
   - Wrap LLM calls in try-catch
   - Return a fallback result on LLM timeout
   - Circuit breaker integration via LLMProvider
   - Never propagate sensitive data in errors

6. **Structured Logging** (SRE):
   - Log: requestId, llmProvider, promptLength, responseLength, duration_ms
   - NEVER log: full README content, API keys, tokens
   - Log validation failures with field paths
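A sketch of the sanitizer implementing requirement 1. The patterns are those listed above; fenced-code-block carve-out and audit logging are omitted for brevity, so a production version would apply the template-syntax replacements only outside code blocks:

```typescript
// Sketch of sanitizeForLLM per requirement 1 (code-block preservation and
// audit logging omitted).
export function sanitizeForLLM(text: string): string {
  return text
    .normalize('NFKC')                                   // defeat homoglyph tricks
    .replace(/[\u200B-\u200D\uFEFF]/g, '')               // strip zero-width characters
    .replace(/<\|(?:endoftext|im_start|im_end)\|>/g, '') // model control tokens
    .replace(/\[\/?(?:INST|SYSTEM)\]/gi, '')             // instruction markers
    .replace(/<<\/?SYS>>|<\/s>/g, '')                    // system/stop markers
    .replace(/\{\{[\s\S]*?\}\}|\$\{[\s\S]*?\}|<%[\s\S]*?%>/g, '') // template syntax
    .slice(0, 50_000);                                   // length cap
}
```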

### 24.5. Implement Install Command Handler with Full Installation Flow, SetupWizard, and mcpd Integration

**Status:** pending
**Dependencies:** 24.1, 24.2, 24.3, 24.4

Implement the main install command and action handler that orchestrates the full installation flow: registry lookup, LLM analysis (optional), server registration with mcpd, interactive credential collection via SetupWizard, profile creation, and optional project assignment.

**Details:**

Create src/cli/src/commands/install.ts and src/cli/src/install/install-action.ts:

```typescript
// install.ts
import { Command } from 'commander';
import { installAction } from '../install/install-action';

export function createInstallCommand(): Command {
  return new Command('install')
    .description('Install and configure an MCP server')
    .argument('<servers...>', 'Server name(s) from registry')
    .option('--non-interactive', 'Use env vars for credentials, no prompts')
    .option('--profile-name <name>', 'Name for the created profile')
    .option('--project <name>', 'Auto-add to this project')
    .option('--dry-run', 'Show configuration without applying')
    .option('--skip-llm', 'Only use registry metadata, no LLM analysis')
    .action(installAction);
}

// install-action.ts
export interface InstallOptions {
  nonInteractive?: boolean;
  profileName?: string;
  project?: string;
  dryRun?: boolean;
  skipLlm?: boolean;
}

export async function installAction(
  serverNames: string[],
  options: InstallOptions
): Promise<void> { ... }
```
|
||||
|
||||
**Implementation Requirements:**
|
||||
|
||||
1. **Command Registration**:
|
||||
- Register in src/cli/src/commands/index.ts command registry
|
||||
- Follow existing Commander.js patterns from discover command (Task 23)
|
||||
- Set exit codes: 0 success, 1 partial success, 2 complete failure
|
||||
|
||||
2. **Installation Flow** (per server):
|
||||
- Step 1: Fetch server metadata from RegistryClient (from Task 22)
|
||||
- Step 2: Check if envTemplate is complete or needs LLM analysis
|
||||
- Step 3: If needed and --skip-llm not set, fetch README and analyze with LLM
|
||||
- Step 4: Merge LLM results with registry metadata (registry takes precedence for conflicts)
|
||||
- Step 5: If --dry-run, print configuration and exit
|
||||
- Step 6: Register MCP server with mcpd via McpdClient
|
||||
- Step 7: Run SetupWizard to collect credentials (or use env vars if --non-interactive)
|
||||
- Step 8: Create profile with collected credentials
|
||||
- Step 9: If --project specified, add profile to project
|
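The per-server flow above can be sketched as a small orchestrator that takes its collaborators via dependency injection. This is a minimal sketch under stated assumptions: the `ServerMetadata` shape, the inline client interfaces, and the `<server>-default` profile-naming convention are illustrative stand-ins, not the real Task 22/Task 3 clients.

```typescript
// Sketch of the per-server install flow (steps 1–9), with hypothetical
// RegistryClient/McpdClient shapes. LLM analysis and project assignment
// are elided to keep the orchestration visible.
interface ServerMetadata {
  name: string;
  command: string;
  envTemplate: Record<string, string>;
}

interface InstallDeps {
  registry: { getServer(name: string): Promise<ServerMetadata> };
  mcpd: { registerServer(meta: ServerMetadata): Promise<void> };
  collectCredentials(template: Record<string, string>): Promise<Record<string, string>>;
}

async function installServer(
  name: string,
  deps: InstallDeps,
  opts: { dryRun?: boolean } = {}
): Promise<{ installed: boolean; profile?: string }> {
  // Step 1: registry lookup
  const meta = await deps.registry.getServer(name);

  // Step 5: --dry-run prints the resolved configuration, no side effects
  if (opts.dryRun) {
    console.log(JSON.stringify(meta, null, 2));
    return { installed: false };
  }

  // Step 6: register the server with mcpd
  await deps.mcpd.registerServer(meta);

  // Steps 7–8: collect credentials, then create a profile from them
  const credentials = await deps.collectCredentials(meta.envTemplate);
  const profile = `${meta.name}-default`;
  void credentials; // profile persistence elided in this sketch
  return { installed: true, profile };
}
```

Because every collaborator arrives through `InstallDeps`, the same function runs unchanged against mocks in tests and real clients in production.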

3. **Dependency Injection**:
   - Accept RegistryClient, LLMProvider, McpdClient via constructor or factory
   - Enable testing with mock implementations
   - Use the getLLMProvider() factory from Task 12 configuration

4. **SetupWizard Integration** (from Task 10):
   - Pass envTemplate to SetupWizard
   - Handle nonInteractive mode (read from environment)
   - Validate credentials before storing
   - Support OAuth flows via browser for applicable servers

5. **Dry Run Mode**:
   - Print server metadata (name, command, transport)
   - Print envTemplate with descriptions
   - Print setupGuide if available
   - Print suggested profiles
   - Use chalk for formatted output
   - Exit without side effects

6. **Batch Install**:
   - Process servers sequentially (to avoid mcpd race conditions)
   - Continue on individual server failures (log a warning)
   - Report a summary at the end: X installed, Y failed
   - Return the appropriate exit code
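The batch-install requirements above reduce to a small sequential loop. A minimal sketch, assuming a hypothetical `installOne` stand-in for the per-server installer:

```typescript
// Sequential batch loop: continue past individual failures, report a
// summary, and derive the exit code (0 all ok, 1 partial, 2 total failure).
async function installAll(
  servers: string[],
  installOne: (name: string) => Promise<void>
): Promise<number> {
  let ok = 0;
  let failed = 0;
  for (const name of servers) {
    // Sequential on purpose: parallel installs could race inside mcpd.
    try {
      await installOne(name);
      ok++;
    } catch (err) {
      console.warn(`warning: failed to install ${name}: ${(err as Error).message}`);
      failed++;
    }
  }
  console.log(`${ok} installed, ${failed} failed`);
  if (failed === 0) return 0; // full success
  return ok > 0 ? 1 : 2;      // partial vs complete failure
}
```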

7. **Error Handling**:
   - Catch RegistryNotFoundError and suggest 'mcpctl discover'
   - Catch McpdConnectionError and print the mcpd health check URL
   - Catch SetupWizardCancelledError gracefully
   - Never expose credentials in error messages

8. **Structured Logging** (SRE):
   - Log: serverName, registrySource, llmAnalysisUsed, installDuration_ms, success
   - Emit metrics: install_total (counter), install_duration_seconds (histogram)

9. **Output Messages**:
   - Use chalk.blue for progress
   - Use chalk.green + checkmark for success
   - Use chalk.red for errors
   - Print the profile name and usage instructions on success
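The error-handling requirements can be centralized in one mapping from error type to user-facing message. This is a hedged sketch: the error classes are declared inline for illustration (the real ones would live in the registry and mcpd client packages), and the default mcpd URL is an assumed placeholder, not a documented value.

```typescript
// Map install errors to user-facing messages (requirement 7). No
// credential values are ever interpolated into the output.
class RegistryNotFoundError extends Error {}
class McpdConnectionError extends Error {}
class SetupWizardCancelledError extends Error {}

function installErrorMessage(err: unknown, mcpdUrl = "http://localhost:3000"): string {
  if (err instanceof RegistryNotFoundError) {
    return "Server not found in the registry. Try 'mcpctl discover' to search for it.";
  }
  if (err instanceof McpdConnectionError) {
    return `Cannot reach mcpd. Check its health endpoint: ${mcpdUrl}/health`;
  }
  if (err instanceof SetupWizardCancelledError) {
    return "Setup cancelled. No changes were made.";
  }
  // Fall back to the error's own message; callers must ensure
  // credentials are never embedded in thrown messages.
  return err instanceof Error ? err.message : String(err);
}
```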
701
.taskmaster/tasks/tasks.json
Normal file
@@ -0,0 +1,701 @@
{
  "master": {
    "tasks": [
      {
        "id": 1,
        "title": "Initialize Project Structure and Core Dependencies",
        "description": "Set up the monorepo structure for mcpctl with CLI client, mcpd server, and shared libraries. Configure TypeScript, ESLint, and build tooling.",
        "details": "Create a monorepo using pnpm workspaces or npm workspaces with the following structure:\n\n```\nmcpctl/\n├── src/\n│ ├── cli/ # mcpctl CLI tool\n│ ├── mcpd/ # Backend daemon server\n│ ├── shared/ # Shared types, utilities, constants\n│ └── local-proxy/ # Local LLM proxy component\n├── deploy/\n│ └── docker-compose.yml\n├── package.json\n├── tsconfig.base.json\n└── pnpm-workspace.yaml\n```\n\nDependencies to install:\n- TypeScript 5.x\n- Commander.js for CLI\n- Express/Fastify for mcpd HTTP server\n- Zod for schema validation\n- Winston/Pino for logging\n- Prisma or Drizzle for database ORM\n\nCreate base tsconfig.json with strict mode, ES2022 target, and module resolution settings. Set up shared ESLint config with TypeScript rules.",
        "testStrategy": "Verify project builds successfully with `pnpm build`. Ensure all packages compile without errors. Test workspace linking works correctly between packages.",
        "priority": "high",
        "dependencies": [],
        "status": "done",
        "subtasks": [
          {
            "id": 1,
            "title": "Initialize pnpm workspace monorepo with future-proof directory structure",
            "description": "Create the complete monorepo directory structure using pnpm workspaces that accommodates all 18 planned tasks without requiring future refactoring.",
            "dependencies": [],
            "details": "Create root package.json with pnpm workspaces configuration. Create pnpm-workspace.yaml defining all workspace packages. Initialize the following directory structure:\n\n```\nmcpctl/\n├── src/\n│ ├── cli/ # mcpctl CLI tool (Task 7-10)\n│ │ ├── src/\n│ │ ├── tests/\n│ │ └── package.json\n│ ├── mcpd/ # Backend daemon server (Task 3-6, 14, 16)\n│ │ ├── src/\n│ │ ├── tests/\n│ │ └── package.json\n│ ├── shared/ # Shared types, utils, constants, validation\n│ │ ├── src/\n│ │ │ ├── types/ # TypeScript interfaces/types\n│ │ │ ├── utils/ # Utility functions\n│ │ │ ├── constants/# Shared constants\n│ │ │ ├── validation/ # Zod schemas\n│ │ │ └── index.ts # Barrel export\n│ │ ├── tests/\n│ │ └── package.json\n│ ├── local-proxy/ # Local LLM proxy (Task 11-13)\n│ │ ├── src/\n│ │ ├── tests/\n│ │ └── package.json\n│ └── db/ # Database package (Task 2)\n│ ├── src/\n│ ├── prisma/ # Schema and migrations\n│ ├── seed/ # Seed data\n│ ├── tests/\n│ └── package.json\n├── deploy/\n│ └── docker-compose.yml # Local dev services (postgres)\n├── tests/\n│ ├── e2e/ # End-to-end tests (Task 18)\n│ └── integration/ # Integration tests\n├── docs/ # Documentation (Task 18)\n├── package.json # Root workspace config\n├── pnpm-workspace.yaml\n└── turbo.json # Optional: Turborepo for build orchestration\n```\n\nThe pnpm-workspace.yaml should contain: `packages: [\"src/*\"]`",
            "status": "done",
            "testStrategy": "Write Vitest tests that verify: (1) All expected directories exist, (2) All package.json files are valid JSON with correct workspace protocol dependencies, (3) pnpm-workspace.yaml correctly includes all packages, (4) Running 'pnpm install' succeeds and creates correct node_modules symlinks between packages. Run 'pnpm ls' to verify workspace linking."
          },
          {
            "id": 2,
            "title": "Configure TypeScript with strict mode and project references",
            "description": "Set up TypeScript configuration with strict mode, ES2022 target, and proper project references for monorepo build orchestration.",
            "dependencies": [1],
            "details": "Create root tsconfig.base.json with shared compiler options. Create package-specific tsconfig.json in each package that extends the base and sets appropriate paths.",
            "status": "done",
            "testStrategy": "Write Vitest tests that verify tsconfig.base.json exists and has strict: true, each package tsconfig.json extends base correctly."
          },
          {
            "id": 3,
            "title": "Set up Vitest testing framework with workspace configuration",
            "description": "Configure Vitest as the test framework across all packages with proper workspace setup, coverage reporting, and test-driven development infrastructure.",
            "dependencies": [2],
            "details": "Install Vitest and related packages at root level. Create root vitest.config.ts and vitest.workspace.ts for workspace-aware testing pointing to src/cli, src/mcpd, src/shared, src/local-proxy, src/db.",
            "status": "done",
            "testStrategy": "Run 'pnpm test:run' and verify Vitest discovers and runs tests, coverage report is generated."
          },
          {
            "id": 4,
            "title": "Configure ESLint with TypeScript rules and docker-compose for local development",
            "description": "Set up shared ESLint configuration with TypeScript-aware rules, Prettier integration, and docker-compose.yml for local PostgreSQL database.",
            "dependencies": [2],
            "details": "Install ESLint and plugins at root. Create eslint.config.js (flat config, ESLint 9+). Create deploy/docker-compose.yml for local development with PostgreSQL service.",
            "status": "done",
            "testStrategy": "Write Vitest tests that verify eslint.config.js exists and exports valid config, deploy/docker-compose.yml is valid YAML and defines postgres service."
          },
          {
            "id": 5,
            "title": "Install core dependencies and perform security/architecture review",
            "description": "Install all required production dependencies across packages, run security audit, and validate the directory structure supports all 18 planned tasks.",
            "dependencies": [1, 3, 4],
            "details": "Install dependencies per package in src/cli, src/mcpd, src/shared, src/db, src/local-proxy. Perform security and architecture review.",
            "status": "done",
            "testStrategy": "Verify each package.json has required dependencies, run pnpm audit, verify .gitignore contains required patterns."
          }
        ]
      },
      {
        "id": 2,
        "title": "Design and Implement Database Schema",
        "description": "Create the database schema for storing MCP server configurations, projects, profiles, user sessions, and audit logs. Use PostgreSQL for production readiness.",
        "details": "Design PostgreSQL schema using Prisma ORM with models: User, McpServer, McpProfile, Project, ProjectMcpProfile, McpInstance, AuditLog, Session. Create migrations and seed data for common MCP servers (slack, jira, github, terraform).",
        "testStrategy": "Run Prisma migrations against test database. Verify all relations work correctly with seed data. Test CRUD operations for each model using Prisma client.",
        "priority": "high",
        "dependencies": ["1"],
        "status": "pending",
        "subtasks": [
          {
            "id": 1,
            "title": "Set up Prisma ORM and PostgreSQL test infrastructure with docker-compose",
            "description": "Initialize Prisma in the db package with PostgreSQL configuration, create docker-compose.yml for local development with separate test database.",
            "dependencies": [],
            "details": "Create src/db/prisma directory structure. Install Prisma dependencies. Configure deploy/docker-compose.yml with two PostgreSQL services: mcpctl-postgres (port 5432) for development and mcpctl-postgres-test (port 5433) for testing.",
            "status": "pending",
            "testStrategy": "Write Vitest tests that verify docker-compose creates both postgres services, setupTestDb() successfully connects and pushes schema."
          },
          {
            "id": 2,
            "title": "Write TDD tests for all Prisma models before implementing schema",
            "description": "Create comprehensive Vitest test suites for all 8 models testing CRUD operations, relations, constraints, and edge cases.",
            "dependencies": [1],
            "details": "Create src/db/tests/models directory with separate test files for each model. Tests will initially fail (TDD red phase) until schema is implemented.",
            "status": "pending",
            "testStrategy": "Tests are expected to fail initially (TDD red phase). After schema implementation, all tests should pass."
          },
          {
            "id": 3,
            "title": "Implement Prisma schema with all models and security considerations",
            "description": "Create the complete Prisma schema with all 8 models, proper relations, indexes for audit queries, and security-conscious field design.",
            "dependencies": [2],
            "details": "Implement src/db/prisma/schema.prisma with all models. Add version Int field and updatedAt DateTime for git-based backup support.",
            "status": "pending",
            "testStrategy": "Run TDD tests from subtask 2 - all should now pass (TDD green phase). Verify npx prisma validate passes."
          },
          {
            "id": 4,
            "title": "Create seed data functions with unit tests for common MCP servers",
            "description": "Implement seed functions for common MCP server configurations (Slack, Jira, GitHub, Terraform) with comprehensive unit tests.",
            "dependencies": [3],
            "details": "Create src/db/seed directory with server definitions and seeding functions for Slack, Jira, GitHub, Terraform MCP servers.",
            "status": "pending",
            "testStrategy": "Write unit tests BEFORE implementing seed functions (TDD). Verify seedMcpServers() creates exactly 4 servers with idempotent behavior."
          },
          {
            "id": 5,
            "title": "Create database migrations and perform security/architecture review",
            "description": "Generate initial Prisma migration, create migration helper utilities with tests, and conduct comprehensive security and architecture review.",
            "dependencies": [3, 4],
            "details": "Run npx prisma migrate dev --name init. Create src/db/src/migration-helpers.ts. Document security and architecture findings.",
            "status": "pending",
            "testStrategy": "Verify migration files exist, migration helper tests pass, SECURITY_REVIEW.md covers all security checkpoints."
          }
        ]
      },
      {
        "id": 3,
        "title": "Implement mcpd Core Server Framework",
        "description": "Build the mcpd daemon server with Express/Fastify, including middleware for authentication, logging, and error handling. Design for horizontal scalability.",
        "details": "Create mcpd server in src/mcpd/src/ with Fastify, health check endpoint, auth middleware, and audit logging. Design for statelessness and scalability.",
        "testStrategy": "Unit test middleware functions. Integration test health endpoint. Load test with multiple concurrent requests. Verify statelessness by running two instances.",
        "priority": "high",
        "dependencies": ["1", "2"],
        "status": "pending",
        "subtasks": [
          {
            "id": 1,
            "title": "Set up mcpd package structure with clean architecture layers and TDD infrastructure",
            "description": "Create the src/mcpd directory structure following clean architecture principles with separate layers for routes, controllers, services, and repositories.",
            "dependencies": [],
            "details": "Create src/mcpd/src/ with routes/, controllers/, services/, repositories/, middleware/, config/, types/, utils/ directories.",
            "status": "pending",
            "testStrategy": "Write initial Vitest tests that verify all required directories exist, package.json has required dependencies."
          },
          {
            "id": 2,
            "title": "Implement Fastify server core with health endpoint and database connectivity verification",
            "description": "Create the core Fastify server with health check endpoint that verifies PostgreSQL database connectivity.",
            "dependencies": [1],
            "details": "Create src/mcpd/src/server.ts with Fastify instance factory function. Implement config validation with Zod and health endpoint.",
            "status": "pending",
            "testStrategy": "TDD approach - write tests first for config validation, health endpoint returns correct structure."
          },
          {
            "id": 3,
            "title": "Implement authentication middleware with JWT validation and session management",
            "description": "Create authentication preHandler hook that validates Bearer tokens against the Session table in PostgreSQL.",
            "dependencies": [2],
            "details": "Create src/mcpd/src/middleware/auth.ts with authMiddleware factory function using dependency injection.",
            "status": "pending",
            "testStrategy": "TDD - write all tests before implementation for 401 responses, token validation, request decoration."
          },
          {
            "id": 4,
            "title": "Implement security middleware stack with CORS, Helmet, rate limiting, and input sanitization",
            "description": "Configure and register security middleware including CORS policy, Helmet security headers, rate limiting.",
            "dependencies": [2],
            "details": "Create src/mcpd/src/middleware/security.ts with registerSecurityPlugins function. Create sanitization and validation utilities.",
            "status": "pending",
            "testStrategy": "TDD tests for CORS headers, Helmet security headers, rate limiting returns 429, input validation."
          },
          {
            "id": 5,
            "title": "Implement error handling, audit logging middleware, and graceful shutdown",
            "description": "Create global error handler, audit logging onResponse hook, and graceful shutdown handling with connection draining.",
            "dependencies": [2, 3, 4],
            "details": "Create error-handler.ts, audit.ts middleware, and shutdown.ts utilities in src/mcpd/src/.",
            "status": "pending",
            "testStrategy": "TDD for all components: error handler HTTP codes, audit middleware creates records, graceful shutdown handles SIGTERM."
          }
        ]
      },
      {
        "id": 4,
        "title": "Implement MCP Server Registry and Profile Management",
        "description": "Create APIs for registering MCP servers, managing profiles with different permission levels, and storing configuration templates.",
        "details": "Create REST API endpoints in mcpd for MCP server and profile CRUD operations with seed data for common servers.",
        "testStrategy": "Test CRUD operations for servers and profiles. Verify profile inheritance works. Test that invalid configurations are rejected by Zod validation.",
        "priority": "high",
        "dependencies": ["3"],
        "status": "pending",
        "subtasks": [
          {
            "id": 1,
            "title": "Create Zod validation schemas with comprehensive TDD test coverage",
            "description": "Define and test Zod schemas for MCP server registration, profile management, and configuration templates before implementing any routes.",
            "dependencies": [],
            "details": "Create src/mcpd/src/validation/mcp-server.schema.ts with CreateMcpServerSchema, UpdateMcpServerSchema, CreateMcpProfileSchema.",
            "status": "pending",
            "testStrategy": "TDD approach - write all tests first, then implement schemas. Tests verify valid inputs pass, invalid inputs fail."
          },
          {
            "id": 2,
            "title": "Implement repository pattern for MCP server and profile data access",
            "description": "Create injectable repository classes for McpServer and McpProfile data access with Prisma, following dependency injection patterns.",
            "dependencies": [1],
            "details": "Create src/mcpd/src/repositories/interfaces.ts with IMcpServerRepository and IMcpProfileRepository interfaces.",
            "status": "pending",
            "testStrategy": "TDD - write tests before implementation with mocked PrismaClient. Verify all repository methods are covered."
          },
          {
            "id": 3,
            "title": "Implement MCP server service layer with business logic and authorization",
            "description": "Create McpServerService and McpProfileService with business logic, authorization checks, and validation orchestration.",
            "dependencies": [1, 2],
            "details": "Create src/mcpd/src/services/mcp-server.service.ts and mcp-profile.service.ts with DI and authorization checks.",
            "status": "pending",
            "testStrategy": "TDD - write tests first mocking repositories and authorization. Verify authorization checks are called for every method."
          },
          {
            "id": 4,
            "title": "Implement REST API routes for MCP servers and profiles with request validation",
            "description": "Create Fastify route handlers for MCP server and profile CRUD operations using the service layer.",
            "dependencies": [3],
            "details": "Create src/mcpd/src/routes/mcp-servers.ts and mcp-profiles.ts with all CRUD endpoints.",
            "status": "pending",
            "testStrategy": "Write integration tests before implementation using Fastify.inject(). Test with docker-compose postgres."
          },
          {
            "id": 5,
            "title": "Create seed data for pre-configured MCP servers and perform security review",
            "description": "Implement seed data for Slack, Jira, GitHub, and Terraform MCP servers with default profiles, plus security review.",
            "dependencies": [4],
            "details": "Create src/mcpd/src/seed/mcp-servers.seed.ts with seedMcpServers() function and SECURITY_REVIEW.md.",
            "status": "pending",
            "testStrategy": "Write unit tests for seed functions. Security tests for injection prevention, authorization checks."
          }
        ]
      },
      {
        "id": 5,
        "title": "Implement Project Management APIs",
        "description": "Create APIs for managing MCP projects that group multiple MCP profiles together for easy assignment to Claude sessions.",
        "details": "Create project management endpoints with generateMcpConfig function for .mcp.json format output.",
        "testStrategy": "Test project CRUD operations. Verify profile associations work correctly. Test MCP config generation produces valid .mcp.json format.",
        "priority": "high",
        "dependencies": ["4"],
        "status": "pending",
        "subtasks": [
          {
            "id": 1,
            "title": "Write TDD tests for project Zod validation schemas and generateMcpConfig function",
            "description": "Create comprehensive Vitest test suites for project validation schemas and generateMcpConfig function BEFORE implementing any code.",
            "dependencies": [],
            "details": "Create tests for CreateProjectSchema, UpdateProjectSchema, UpdateProjectProfilesSchema, and generateMcpConfig with security tests.",
            "status": "pending",
            "testStrategy": "TDD red phase - all tests should fail initially. Verify generateMcpConfig security tests check secret env vars are excluded."
          },
          {
            "id": 2,
            "title": "Implement project repository and generateMcpConfig service with security filtering",
            "description": "Create the project repository and generateMcpConfig function that strips sensitive credentials from output.",
            "dependencies": [1],
            "details": "Create src/mcpd/src/repositories/project.repository.ts and src/mcpd/src/services/mcp-config-generator.ts.",
            "status": "pending",
            "testStrategy": "Run TDD tests from subtask 1. Verify output must NOT contain secret values."
          },
          {
            "id": 3,
            "title": "Implement project service layer with authorization and profile validation",
            "description": "Create ProjectService with business logic including authorization checks and profile existence validation.",
            "dependencies": [2],
            "details": "Create src/mcpd/src/services/project.service.ts with DI accepting IProjectRepository and IMcpProfileRepository.",
            "status": "pending",
            "testStrategy": "TDD - write tests before implementation. Verify authorization and profile validation."
          },
          {
            "id": 4,
            "title": "Implement REST API routes for project CRUD and mcp-config endpoint",
            "description": "Create Fastify route handlers for all project management endpoints including /api/projects/:name/mcp-config.",
            "dependencies": [3],
            "details": "Create src/mcpd/src/routes/projects.ts with all CRUD routes and mcp-config endpoint.",
            "status": "pending",
            "testStrategy": "Integration tests using Fastify.inject(). Verify mcp-config returns valid structure WITHOUT secret env vars."
          },
          {
            "id": 5,
            "title": "Create integration tests and security review for project APIs",
            "description": "Write comprehensive integration tests and security review documenting credential handling.",
            "dependencies": [4],
            "details": "Create src/mcpd/tests/integration/projects.test.ts with end-to-end scenarios and SECURITY_REVIEW.md section.",
            "status": "pending",
            "testStrategy": "Run full integration test suite. Verify coverage >85% for project-related files."
          }
        ]
      },
      {
        "id": 6,
        "title": "Implement Docker Container Management for MCP Servers",
        "description": "Create the container orchestration layer for running MCP servers as Docker containers, with support for docker-compose deployment.",
        "details": "Create Docker management module with ContainerManager class using dockerode. Create deploy/docker-compose.yml template.",
        "testStrategy": "Test container creation, start, stop, and removal. Integration test with actual Docker daemon. Verify network isolation.",
        "priority": "high",
        "dependencies": ["3", "4"],
        "status": "pending",
        "subtasks": [
          {
            "id": 1,
            "title": "Define McpOrchestrator interface and write TDD tests for ContainerManager",
            "description": "Define the McpOrchestrator abstraction interface for Docker and Kubernetes orchestrators. Write comprehensive unit tests.",
            "dependencies": [],
            "details": "Create src/mcpd/src/services/orchestrator.ts interface and TDD tests for ContainerManager methods.",
            "status": "pending",
            "testStrategy": "Run tests to verify they exist and fail with expected errors. Coverage target: 100% of interface methods."
          },
          {
            "id": 2,
            "title": "Implement ContainerManager class with DockerOrchestrator strategy pattern",
            "description": "Implement the ContainerManager class as a DockerOrchestrator implementation using dockerode.",
            "dependencies": [1],
            "details": "Create src/mcpd/src/services/docker/container-manager.ts implementing McpOrchestrator interface.",
            "status": "pending",
            "testStrategy": "Run unit tests from subtask 1. Verify TypeScript compilation and resource limits."
          },
          {
            "id": 3,
            "title": "Create docker-compose.yml template with mcpd, PostgreSQL, and test MCP server",
            "description": "Create the production-ready deploy/docker-compose.yml template for local development.",
            "dependencies": [],
            "details": "Create deploy/docker-compose.yml with mcpd, postgres, and test-mcp-server services with proper networking.",
            "status": "pending",
            "testStrategy": "Validate with docker-compose config. Run docker-compose up -d and verify all services start."
          },
          {
            "id": 4,
            "title": "Write integration tests with real Docker daemon",
            "description": "Create integration test suite that tests ContainerManager against a real Docker daemon.",
            "dependencies": [2, 3],
            "details": "Create src/mcpd/src/services/docker/__tests__/container-manager.integration.test.ts.",
            "status": "pending",
            "testStrategy": "Run integration tests with pnpm --filter @mcpctl/mcpd test:integration. Verify containers are created/destroyed."
          },
          {
            "id": 5,
            "title": "Implement container network isolation and resource management",
            "description": "Add network segmentation utilities and resource management capabilities for container isolation.",
            "dependencies": [2],
            "details": "Create src/mcpd/src/services/docker/network-manager.ts with network isolation and resource management.",
            "status": "pending",
            "testStrategy": "Unit tests for network creation. Integration test: verify container network isolation."
          },
          {
            "id": 6,
            "title": "Conduct security review of Docker socket access and container configuration",
            "description": "Perform comprehensive security review of all Docker-related code with security controls documentation.",
            "dependencies": [2, 3, 5],
            "details": "Create src/mcpd/docs/DOCKER_SECURITY_REVIEW.md documenting risks and mitigations.",
            "status": "pending",
            "testStrategy": "Review DOCKER_SECURITY_REVIEW.md covers all 6 security areas. Run security unit tests."
          },
          {
            "id": 7,
            "title": "Implement container logs streaming and health monitoring",
            "description": "Add log streaming capabilities and health monitoring to ContainerManager for observability.",
            "dependencies": [2],
            "details": "Extend ContainerManager with getLogs, getHealthStatus, attachToContainer, and event subscriptions.",
            "status": "pending",
            "testStrategy": "Unit tests for getLogs. Integration test: run container, tail logs, verify output."
          }
        ]
      },
      {
        "id": 7,
        "title": "Build mcpctl CLI Core Framework",
        "description": "Create the CLI tool foundation using Commander.js with kubectl-inspired command structure, configuration management, and server communication.",
        "details": "Create CLI in src/cli/src/ with Commander.js, configuration management at ~/.mcpctl/config.json, and API client for mcpd.",
        "testStrategy": "Test CLI argument parsing. Test configuration persistence. Mock API calls and verify request formatting.",
        "priority": "high",
        "dependencies": ["1"],
        "status": "pending",
        "subtasks": [
          {
            "id": 1,
            "title": "Set up CLI package structure with TDD infrastructure and command registry pattern",
            "description": "Create src/cli directory structure with Commander.js foundation, Vitest test configuration, and extensible command registry.",
            "dependencies": [],
            "details": "Create src/cli/src/ with commands/, config/, client/, formatters/, utils/, types/ directories and registry pattern.",
            "status": "pending",
            "testStrategy": "TDD approach - write tests first. Tests verify CLI shows version, help, CommandRegistry works."
          },
          {
            "id": 2,
            "title": "Implement secure configuration management with encrypted credential storage",
            "description": "Create configuration loader/saver with ~/.mcpctl/config.json and encrypted credentials storage.",
            "dependencies": [1],
            "details": "Implement config management with proxy settings, custom CA certificates support, and Zod validation.",
            "status": "pending",
            "testStrategy": "TDD tests for config loading, saving, validation, and credential encryption."
          }
        ]
      },
      {
        "id": 8,
        "title": "Implement mcpctl get and describe Commands",
        "description": "Create kubectl-style get and describe commands for viewing MCP servers, profiles, projects, and instances.",
        "details": "Implement get command with table/json/yaml output formats and describe command for detailed views.",
        "testStrategy": "Test output formatting for each resource type. Test filtering and sorting options.",
        "priority": "medium",
        "dependencies": ["7"],
        "status": "pending",
        "subtasks": null
      },
      {
        "id": 9,
        "title": "Implement mcpctl apply and setup Commands",
        "description": "Create apply command for declarative configuration and setup wizard for interactive MCP server configuration.",
        "details": "Implement apply command for YAML/JSON config files and interactive setup wizard with credential prompts.",
        "testStrategy": "Test YAML/JSON parsing. Test interactive prompts with mock inputs. Verify credentials are stored securely.",
        "priority": "medium",
        "dependencies": ["7", "4"],
        "status": "pending",
        "subtasks": null
      },
      {
        "id": 10,
        "title": "Implement mcpctl claude and project Commands",
        "description": "Create commands for managing Claude MCP configuration and project assignments.",
        "details": "Implement claude command for managing .mcp.json files and project command for project management.",
        "testStrategy": "Test .mcp.json file generation. Test project switching. Verify file permissions are correct.",
        "priority": "medium",
        "dependencies": ["7", "5"],
        "status": "pending",
        "subtasks": null
      },
      {
        "id": 11,
        "title": "Design Local LLM Proxy Architecture",
        "description": "Design the architecture for the local LLM proxy that enables Claude to use MCP servers through a local intermediary.",
        "details": "Design proxy architecture in src/local-proxy/ with MCP protocol handling and request/response transformation.",
        "testStrategy": "Architecture review. Document security considerations. Create proof-of-concept for MCP protocol handling.",
        "priority": "medium",
        "dependencies": ["1"],
        "status": "pending",
        "subtasks": null
      },
      {
        "id": 12,
        "title": "Implement Local LLM Proxy Core",
        "description": "Build the core local proxy server that handles MCP protocol communication between Claude and MCP servers.",
        "details": "Implement proxy server in src/local-proxy/src/ with MCP SDK integration and request routing.",
        "testStrategy": "Test MCP protocol parsing. Test request routing. Integration test with actual MCP server.",
        "priority": "medium",
        "dependencies": ["11"],
        "status": "pending",
        "subtasks": null
      },
      {
        "id": 13,
        "title": "Implement LLM Provider Strategy Pattern",
        "description": "Create pluggable LLM provider support with strategy pattern for different providers (OpenAI, Anthropic, local models).",
        "details": "Implement provider strategy pattern in src/local-proxy/src/providers/ with adapters for different LLM APIs.",
        "testStrategy": "Test each provider adapter. Test provider switching. Mock API responses for testing.",
        "priority": "low",
        "dependencies": ["12"],
        "status": "pending",
        "subtasks": null
      },
      {
        "id": 14,
        "title": "Implement Audit Logging and Compliance",
|
||||
"description": "Create comprehensive audit logging system for tracking all MCP operations for compliance and debugging.",
|
||||
"details": "Implement audit logging in src/mcpd/src/services/ with structured logging, retention policies, and query APIs.",
|
||||
"testStrategy": "Test audit log creation. Test query APIs. Verify log retention works correctly.",
|
||||
"priority": "medium",
|
||||
"dependencies": [
|
||||
"3"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
},
|
||||
{
|
||||
"id": 15,
|
||||
"title": "Create MCP Profiles Library",
|
||||
"description": "Build a library of pre-configured MCP profiles for common use cases with best practices baked in.",
|
||||
"details": "Create profile library in src/shared/src/profiles/ with templates for common MCP server configurations.",
|
||||
"testStrategy": "Test profile templates are valid. Test profile application. Document each profile's use case.",
|
||||
"priority": "low",
|
||||
"dependencies": [
|
||||
"4"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
},
|
||||
{
|
||||
"id": 16,
|
||||
"title": "Implement MCP Instance Lifecycle Management",
|
||||
"description": "Create APIs and CLI commands for managing the full lifecycle of MCP server instances.",
|
||||
"details": "Implement instance lifecycle management in src/mcpd/src/services/ with start, stop, restart, logs commands.",
|
||||
"testStrategy": "Test instance state transitions. Test concurrent instance management. Integration test with Docker.",
|
||||
"priority": "medium",
|
||||
"dependencies": [
|
||||
"6"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
},
|
||||
{
|
||||
"id": 17,
|
||||
"title": "Add Kubernetes Deployment Support",
|
||||
"description": "Extend the orchestration layer to support Kubernetes deployments for production environments.",
|
||||
"details": "Implement KubernetesOrchestrator in src/mcpd/src/services/k8s/ implementing McpOrchestrator interface.",
|
||||
"testStrategy": "Test Kubernetes manifest generation. Test with kind/minikube. Verify resource limits and security contexts.",
|
||||
"priority": "low",
|
||||
"dependencies": [
|
||||
"6"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
},
|
||||
{
|
||||
"id": 18,
|
||||
"title": "Documentation and Testing",
|
||||
"description": "Create comprehensive documentation and end-to-end test suite for the entire mcpctl system.",
|
||||
"details": "Create documentation in docs/ and e2e tests in tests/e2e/ covering all major workflows.",
|
||||
"testStrategy": "Review documentation for completeness. Run e2e test suite. Test installation instructions.",
|
||||
"priority": "medium",
|
||||
"dependencies": [
|
||||
"7",
|
||||
"8",
|
||||
"9",
|
||||
"10"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
},
|
||||
{
|
||||
"id": 19,
|
||||
"title": "CANCELLED - Auth middleware",
|
||||
"description": "Merged into Task 3 subtasks",
|
||||
"details": null,
|
||||
"testStrategy": null,
|
||||
"priority": null,
|
||||
"dependencies": [],
|
||||
"status": "cancelled",
|
||||
"subtasks": null,
|
||||
"updatedAt": "2026-02-21T02:21:03.958Z"
|
||||
},
|
||||
{
|
||||
"id": 20,
|
||||
"title": "CANCELLED - Duplicate project management",
|
||||
"description": "Merged into Task 5",
|
||||
"details": null,
|
||||
"testStrategy": null,
|
||||
"priority": null,
|
||||
"dependencies": [],
|
||||
"status": "cancelled",
|
||||
"subtasks": null,
|
||||
"updatedAt": "2026-02-21T02:21:03.966Z"
|
||||
},
|
||||
{
|
||||
"id": 21,
|
||||
"title": "CANCELLED - Duplicate audit logging",
|
||||
"description": "Merged into Task 14",
|
||||
"details": null,
|
||||
"testStrategy": null,
|
||||
"priority": null,
|
||||
"dependencies": [],
|
||||
"status": "cancelled",
|
||||
"subtasks": null,
|
||||
"updatedAt": "2026-02-21T02:21:03.972Z"
|
||||
},
|
||||
{
|
||||
"id": 22,
|
||||
"title": "Implement Health Monitoring Dashboard",
|
||||
"description": "Create a monitoring dashboard for tracking MCP server health, resource usage, and system metrics.",
|
||||
"details": "Implement health monitoring endpoints in src/mcpd/src/routes/ and optional web dashboard.",
|
||||
"testStrategy": "Test health check endpoints. Test metrics collection. Verify dashboard displays correct data.",
|
||||
"priority": "low",
|
||||
"dependencies": [
|
||||
"6",
|
||||
"14"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
},
|
||||
{
|
||||
"id": 23,
|
||||
"title": "Implement Backup and Restore",
|
||||
"description": "Create backup and restore functionality for mcpctl configuration and state.",
|
||||
"details": "Implement git-based backup in src/mcpd/src/services/backup/ with encrypted secrets and restore capability.",
|
||||
"testStrategy": "Test backup creation. Test restore from backup. Verify secrets are encrypted.",
|
||||
"priority": "low",
|
||||
"dependencies": [
|
||||
"2",
|
||||
"5"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
},
|
||||
{
|
||||
"id": 24,
|
||||
"title": "CI/CD Pipeline Setup",
|
||||
"description": "Set up continuous integration and deployment pipelines for the mcpctl project.",
|
||||
"details": "Create GitHub Actions workflows in .github/workflows/ for testing, building, and releasing.",
|
||||
"testStrategy": "Test CI pipeline runs successfully. Test release automation. Verify artifacts are published.",
|
||||
"priority": "medium",
|
||||
"dependencies": [
|
||||
"1"
|
||||
],
|
||||
"status": "pending",
|
||||
"subtasks": null
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"created": "2026-02-21T02:23:17.813Z",
|
||||
"updated": "2026-02-21T02:23:17.813Z",
|
||||
"description": "Tasks for master context"
|
||||
}
|
||||
}
|
||||
}
|
||||