mcpctl/.taskmaster/docs/prd.txt

2026-02-21 03:10:39 +00:00
mcpctl:
We like kubectl, so we want similar syntax to manage MCP servers
What are we managing?
We manage mcpd, a backend application that we will build (we want to deploy it with Docker, via docker compose)
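As a sketch, the mcpd deployment could look like the fragment below. The service names, image, ports, and credentials are hypothetical placeholders, not a decided layout; the separate DB service reflects the "stateless outside of DB" requirement:

```yaml
# Hypothetical docker-compose.yml for mcpd; all names/ports are placeholders.
services:
  mcpd:
    image: mcpd:latest            # our backend image (to be built)
    ports:
      - "8080:8080"               # mcpctl talks to this API
    environment:
      DATABASE_URL: postgres://mcpd:secret@db:5432/mcpd
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: mcpd
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: mcpd
```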
What will it do?
It will allow us to easily manage (run), control, and audit MCP servers
What does "run" mean?
Today it means running containers on a Synology NAS with Portainer using docker-compose, but in the future it might mean scheduling pods with MCP instances in Kubernetes
It should allow me to create "MCP projects" that we can expose to Claude sessions. Similarly to taskmaster, we want to be able to run "mcpctl claude add-mcp-project weekly_reports",
where "weekly_reports" contains, for example, the Slack and Jira MCPs
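A project could then be a small manifest that mcpd stores and serves to Claude sessions. The format and field names below are purely hypothetical, just to pin the idea down:

```yaml
# Hypothetical project definition for "weekly_reports"; fields are illustrative.
project: weekly_reports
mcps:
  - name: slack
    profile: default
  - name: jira
    profile: default
```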
We want an architecture that will let us audit which user runs what. That is for later, but we want to keep it in mind while designing the architecture
It must be stateless (all state outside the process, in the DB), work with multiple instances, and scale nicely
Abstract goal:
Make it easy to run MCP servers. mcpctl is also a helper for configuration (no need to think about all the pesky settings for each MCP; we will maintain profiles for them). Say a user wants the Jira MCP: we want to take them by the hand, ask them to log in, redirect their browser to the page that generates API tokens, and tell them what to do wherever we can't do it ourselves
We want to be the main go to manager for MCPs
What about profiles and projects?
Some projects might include an MCP that we want read-only, or with a limited set of endpoints
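A profile restricting an MCP might look like the fragment below. The field names and tool names are illustrative assumptions, not a defined schema:

```yaml
# Hypothetical read-only Jira profile; field and tool names are illustrative.
profile: jira-readonly
mcp: jira
permissions: read-only
allowed_tools:            # expose only a limited subset of endpoints/tools
  - get_issue
  - search_issues
```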
Additional core features?
Prefiltering what an MCP server returns, before handing it over to the Claude instance, perhaps using a local Gemini instance (via the gemini CLI) to do the first pass of filtering
Let's say I ask: "write me a weekly report; to do so, get from Slack all messages related to my team, me, or security and Linux servers." Instead of wasting Claude Code tokens on such mechanical filtering, it will use a local LLM (vLLM/Ollama), the gemini binary, or DeepSeek API tokens to find the relevant messages (without processing them), so Claude only gets the relevant information
Or, when Claude uses (through this layer) an MCP server that serves Terraform documentation, we don't want the whole thing, only the information related to the query
We want Claude not to just pull data like an API client, but to tell our MCP layer what it wants, so the layer can look for it specifically
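The prefiltering idea above can be sketched in a few lines of Python. The function name and shape are hypothetical, and the local-LLM call is stubbed with a simple keyword match so the sketch is runnable; in the real system the relevance function would be the local Gemini/Ollama/DeepSeek call:

```python
# Sketch of the prefiltering layer (names are illustrative, not a real API).
# The local-model relevance check is stubbed with a keyword match.

def prefilter(query_terms, messages, relevance_fn=None):
    """Return only the messages the local filter deems relevant,
    so Claude never sees the full raw MCP payload."""
    relevance_fn = relevance_fn or (
        lambda msg: any(t.lower() in msg.lower() for t in query_terms))
    return [m for m in messages if relevance_fn(m)]

messages = [
    "deploy finished on linux server web01",
    "lunch menu for friday",
    "security alert: failed ssh logins",
]
# keeps the two infra-related messages, drops the lunch one
print(prefilter(["security", "linux"], messages))
```

The design point is that `relevance_fn` is pluggable: the cheap local model decides relevance, and only the surviving messages consume Claude's context window.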
Design:
mcp servers - run by mcpd (the server side of our tool) on a centralized server; they may contain credentials, so we deploy them centrally and they stay unavailable to local users
local - uses another LLM (the gemini CLI, or the others mentioned earlier) to do the preprocessing
claude - gets the final result
claude asks -> local gemini/other interprets the question and makes requests to mcpd to fetch data from the MCP servers it deploys and manages -> MCP servers deliver data -> local gemini/other processes and refines the returned data, delivering to Claude the smallest but most comprehensive info in the smallest context window -> claude gets the response without ever interacting with the MCP servers directly
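The flow above can be sketched as a chain of stubbed components. Every name here is hypothetical and every component is a stand-in; the point is the topology: Claude only talks to the local preprocessor, never to mcpd or the MCP servers directly:

```python
# Sketch of the request flow; all components are stubs with illustrative names.

def mcp_server(request):
    # Runs inside mcpd on the backend; returns raw, unfiltered data.
    return ["raw item 1 about terraform", "unrelated raw item 2"]

def mcpd(request):
    # Centralized server holding credentials; the only thing that
    # talks to the MCP servers it deploys and manages.
    return mcp_server(request)

def local_preprocessor(question):
    # Local gemini/ollama/deepseek layer: fetch via mcpd, then refine.
    raw = mcpd({"query": question})
    return [item for item in raw if "terraform" in item]

def claude(question):
    # Claude receives only the small, already-filtered context.
    context = local_preprocessor(question)
    return f"answer based on {len(context)} filtered item(s)"

print(claude("how do I configure a terraform backend?"))
# → answer based on 1 filtered item(s)
```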