mcpctl Registry Integration & Auto-Discovery Features
=====================================================
Context: mcpctl already has Tasks 1-18, covering the core CLI, the mcpd server, the local LLM proxy, the profiles library, and lifecycle management. These three new tasks extend mcpctl with automatic MCP server discovery and LLM-assisted installation.
Research findings: Multiple public MCP server registries exist with open APIs:

- Official MCP Registry (registry.modelcontextprotocol.io) - 6,093 servers, no auth, OpenAPI spec, includes env var/package/transport metadata
- Glama.ai (glama.ai/api/mcp/v1/servers) - 17,585 servers, no auth, env var JSON schemas
- Smithery.ai (registry.smithery.ai) - 3,567 servers, free API key, semantic search, verified badges, usage analytics
- NPM registry - ~1,989 packages matching keyword:mcp-server
- PyPI - ~3,191 packages with "mcp" and "server" in the name
Dependencies: These tasks depend on Task 4 (Server Registry), Task 7 (CLI framework), Task 10 (Setup Wizard), and Task 15 (Profiles Library).
== Task 19: Implement MCP Registry Client ==
Build a multi-source registry client that queries the Official MCP Registry, Glama.ai, and Smithery.ai APIs to search, discover, and retrieve MCP server metadata.
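As a sketch of the primary-source requirement below, a cursor-paginated query against the Official MCP Registry might look like this. The response shape (`servers` plus `metadata.nextCursor`) and the injectable `fetchPage` helper are illustrative assumptions, not the registry's confirmed schema:

```typescript
// Sketch: cursor pagination over GET /v0/servers?search=...&limit=...&cursor=...
// The page shape below is an assumption for illustration.

interface RegistryPage {
  servers: Array<{ name: string; description: string }>;
  metadata?: { nextCursor?: string };
}

// Injectable fetcher: lets tests stub the network, and lets the real client
// route through an HTTP proxy or custom CA bundle per the requirements.
type FetchPage = (url: string) => Promise<RegistryPage>;

async function searchOfficialRegistry(
  search: string,
  fetchPage: FetchPage,
  base = "https://registry.modelcontextprotocol.io",
  limit = 100,
): Promise<RegistryPage["servers"]> {
  const results: RegistryPage["servers"] = [];
  let cursor: string | undefined;
  do {
    const params = new URLSearchParams({ search, limit: String(limit) });
    if (cursor) params.set("cursor", cursor);
    const page = await fetchPage(`${base}/v0/servers?${params}`);
    results.push(...page.servers);
    cursor = page.metadata?.nextCursor; // an absent cursor ends pagination
  } while (cursor);
  return results;
}
```

Making the fetcher a parameter keeps the pagination loop a pure unit for the TDD requirement: tests can feed canned pages without touching the network.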
Requirements:

- Primary source: Official MCP Registry REST API (GET /v0/servers?search=...&limit=100&cursor=...) - no auth required
- Secondary: Glama.ai API (glama.ai/api/mcp/v1/servers) - no auth, cursor pagination
- Tertiary: Smithery.ai API (registry.smithery.ai/servers?q=...) - free API key read from config
- Implement the registry client with a strategy pattern, one strategy per source
- Merge and deduplicate results across registries (match by npm package name or GitHub repo URL)
- Rank results by relevance score, usage/popularity (from Smithery), verified status, and last-updated date
- Cache results locally with a configurable TTL (default 1 hour)
- Handle rate limits gracefully with exponential backoff
- Return a normalized RegistryServer type with: name, description, packages (npm/pypi/docker), envTemplate (env vars with isSecret and description), transport type, repository URL, popularity score, verified status
- TDD: Write Vitest tests for every client method, the cache, and the deduplication logic BEFORE implementation
- Security: Validate all API responses, sanitize descriptions (strip ANSI/control sequences so malicious metadata cannot inject escape codes into terminal output), never log API keys
- SRE: Expose metrics for registry query latency, cache hit ratio, and error rates
- Networking: Support HTTP proxies and custom CA certificates for enterprise environments
- Data Engineer: Include data platform MCP servers in search results (BigQuery, Snowflake, dbt, etc.)
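The merge/deduplication requirement above could be sketched as follows. The RegistryServer fields mirror the normalized type described in the requirements, but the exact shape and merge policy (prefer npm package name as identity, fall back to repository URL) are illustrative assumptions:

```typescript
// Sketch: normalized RegistryServer type plus cross-registry deduplication.
// Field names are illustrative, not a finalized schema.

interface EnvVar { name: string; description: string; isSecret: boolean }

interface RegistryServer {
  name: string;
  description: string;
  packages: { npm?: string; pypi?: string; docker?: string };
  envTemplate: EnvVar[];
  transport: "stdio" | "sse";
  repository?: string;
  popularity: number;
  verified: boolean;
  source: "official" | "glama" | "smithery";
}

function dedupeServers(servers: RegistryServer[]): RegistryServer[] {
  const byKey = new Map<string, RegistryServer>();
  for (const s of servers) {
    // Identity: npm package name, else repo URL, else "source/name" so
    // unmatched entries are never silently dropped.
    const key =
      s.packages.npm ?? s.repository?.toLowerCase() ?? `${s.source}/${s.name}`;
    const existing = byKey.get(key);
    if (!existing) {
      byKey.set(key, s);
    } else {
      // Merge duplicates: combine trust/popularity signals, keep the
      // richer envTemplate.
      byKey.set(key, {
        ...existing,
        verified: existing.verified || s.verified,
        popularity: Math.max(existing.popularity, s.popularity),
        envTemplate: existing.envTemplate.length ? existing.envTemplate : s.envTemplate,
      });
    }
  }
  return [...byKey.values()];
}
```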
== Task 20: Implement mcpctl discover Command ==
Create the `mcpctl discover` CLI command that lets users search for MCP servers across all configured registries with rich filtering and display.
Requirements:

- Command: `mcpctl discover <query>` - free-text search (e.g., "slack", "database query tool", "terraform")
- Options: --category <category> (devops, data-platform, analytics, etc.), --verified (only verified servers), --transport <stdio|sse>, --registry <official|glama|smithery|all>, --limit <n>, --output <table|json|yaml>
- Table output columns: NAME, DESCRIPTION (truncated), PACKAGE, TRANSPORT, VERIFIED, POPULARITY
- Show an install hint after results: "Run 'mcpctl install <name>' to set up this server"
- Support interactive mode: `mcpctl discover --interactive` - uses inquirer to browse results, select a server, and immediately trigger install
- Use the registry client from Task 19
- TDD: Write tests for command parsing, output formatting, and interactive mode BEFORE implementation
- SRE: Exit codes for scripting (0=found results, 1=error, 2=no results)
- Data Analyst: Include filtering by tags/categories relevant to BI tools
- Every function must have unit tests
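Two of the requirements above reduce to small pure helpers that are easy to unit-test first, per the TDD requirement. A sketch (the column width and ellipsis choice are assumptions):

```typescript
// Sketch: DESCRIPTION truncation for table output, and the scripting
// exit-code contract (0 = results, 1 = error, 2 = no results).

function truncate(text: string, width = 60): string {
  // Reserve one cell for the ellipsis so the column stays fixed-width.
  return text.length <= width ? text : text.slice(0, width - 1) + "…";
}

function exitCodeFor(outcome: { error?: Error; resultCount?: number }): number {
  if (outcome.error) return 1;                    // query or registry failure
  return (outcome.resultCount ?? 0) > 0 ? 0 : 2;  // 2 = clean "no results"
}
```

Distinguishing "no results" (2) from "error" (1) lets shell scripts branch on `$?` without parsing output.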
== Task 21: Implement mcpctl install with LLM-Assisted Auto-Configuration ==
Create the `mcpctl install <server-name>` command that uses a local LLM (Claude Code, Ollama, or another configured provider) to read the MCP server's documentation, generate an envTemplate, setup guide, and profiles, and walk the user through configuration.
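The LLM analysis step (3b-3d in the requirements below) could be sketched with an injectable LLM client. The `LlmClient` interface, the reply-as-JSON convention, and the field names are assumptions; real output must still pass the Zod validation required in step 3e:

```typescript
// Sketch: send a README to a configured LLM and parse the reply into a
// candidate envTemplate. LlmClient and the reply shape are assumptions.

interface LlmClient { complete(prompt: string): Promise<string> }

interface GeneratedTemplate {
  envVars: Array<{ name: string; description: string; isSecret: boolean; setupUrl?: string }>;
  setupGuide: string[];
}

const EXTRACTION_PROMPT =
  "Analyze this MCP server README and extract: required environment " +
  "variables (name, description, isSecret, setupUrl), recommended profiles " +
  "(name, permissions), and a step-by-step setup guide. Reply as JSON.";

async function generateEnvTemplate(
  readme: string,
  llm: LlmClient,
): Promise<GeneratedTemplate> {
  const reply = await llm.complete(`${EXTRACTION_PROMPT}\n\n---\n${readme}`);
  try {
    // Never trust raw LLM output: the parsed value must still pass the
    // schema check (step 3e) before anything is registered with mcpd.
    return JSON.parse(reply) as GeneratedTemplate;
  } catch {
    throw new Error("LLM reply was not valid JSON; aborting install");
  }
}
```

Injecting the client keeps the function testable with a stubbed LLM and provider-agnostic across Ollama, Claude Code, or other Task 12 providers.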
Requirements:

- Command: `mcpctl install <server-name>`, where server-name comes from discover results or a direct registry reference
- Step 1: Fetch server metadata from the registry (Task 19 client)
- Step 2: If the envTemplate is already complete in the registry metadata, use it directly
- Step 3: If the envTemplate is incomplete or missing, use the LLM to auto-generate it:
  a. Fetch the server's README.md from its GitHub repository URL (from registry metadata)
  b. Send the README to the local LLM (Claude Code session, Ollama, or the configured provider from Task 12)
  c. LLM prompt: "Analyze this MCP server README and extract: required environment variables (name, description, isSecret, setupUrl), recommended profiles (name, permissions), and a step-by-step setup guide"
  d. Parse the LLM response into a structured envTemplate + setupGuide + defaultProfiles
  e. Validate the LLM output against a Zod schema before using it
- Step 4: Register the MCP server in mcpd (POST /api/mcp-servers) with the generated envTemplate
- Step 5: Run the setup wizard (Task 10) to collect credentials from the user
- Step 6: Create a profile and optionally add it to a project
- Options: --non-interactive (use env vars for credentials), --profile-name <name>, --project <name> (auto-add to project), --dry-run (show what would be configured without doing it), --skip-llm (only use registry metadata, no LLM analysis)
- LLM provider selection: Use the configured LLM provider from Task 12 (Ollama, Gemini CLI, DeepSeek, etc.) or the current Claude Code session as the LLM
- Support batch install: `mcpctl install slack jira github` - install multiple servers in one run
- TDD: Write Vitest tests for LLM prompt generation, response parsing, schema validation, and the full install flow BEFORE implementation
- Security: Sanitize LLM outputs (prevent prompt injection from malicious READMEs), validate the generated envTemplate, and never auto-execute suggested commands without user approval
- Principal Data Engineer: The LLM should understand complex data platform auth patterns (service accounts, OAuth, connection strings) from README analysis
- Every function must have unit tests
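Step 3e's validation gate can be sketched as follows. The task specifies a Zod schema; to keep this example dependency-free, the same checks are shown as a hand-rolled type guard, and the field set is an illustrative assumption:

```typescript
// Sketch: reject malformed LLM output before it reaches mcpd. In the real
// implementation these checks would live in a Zod schema per the task.

interface LlmEnvVar { name: string; description: string; isSecret: boolean; setupUrl?: string }

function isValidEnvVar(v: unknown): v is LlmEnvVar {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.name === "string" &&
    /^[A-Z][A-Z0-9_]*$/.test(o.name) &&  // conventional env var names only
    typeof o.description === "string" &&
    typeof o.isSecret === "boolean" &&
    (o.setupUrl === undefined || typeof o.setupUrl === "string")
  );
}

function validateLlmOutput(raw: unknown): { envVars: LlmEnvVar[] } {
  const envVars = (raw as { envVars?: unknown } | null)?.envVars;
  if (!Array.isArray(envVars) || !envVars.every(isValidEnvVar)) {
    // Reject rather than repair: malformed output may indicate prompt
    // injection from a malicious README.
    throw new Error("LLM output failed envTemplate validation");
  }
  return { envVars };
}
```

Failing closed here is what makes the README-to-LLM pipeline safe: nothing the LLM emits is registered or executed unless it matches the expected shape.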