Smoke (tests/smoke/llm-pool.smoke.test.ts): two in-process registrars
publish virtual Llms with distinct names but a shared poolName, then:
1. /api/v1/llms/<name>/members surfaces both with the correct
effective pool key, size, activeCount, and per-member kind/status.
2. Chat through an agent pinned to one pool member dispatches across
the pool — verified by running 12 calls and asserting at least
one response from each backend (the random-shuffle selection
would have to hit only-A or only-B in 12 fair coin flips, ~1/2048).
3. Failover: stop one publisher, the surviving member still serves
chat. /members shows the stopped row as inactive immediately
(unbindSession runs synchronously on SSE close).
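A hypothetical sketch of the backend-coverage assertion from point 2 — `chatOnce` is a placeholder for however the smoke test drives one chat turn and identifies the answering backend, not the real helper:

```ts
// One non-streaming chat turn through the pooled agent; returns an identifier
// for whichever backend member produced the reply (placeholder declaration).
declare function chatOnce(agent: string): Promise<string>;

async function assertBothBackendsServe(): Promise<void> {
  const seen = new Set<string>();
  for (let i = 0; i < 12; i++) seen.add(await chatOnce('pool-agent'));
  // Uniform random selection over 2 members: P(all 12 identical) = 2 * 0.5 ** 12 ≈ 1/2048.
  if (seen.size < 2) throw new Error(`only one backend answered: ${[...seen]}`);
}
```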
docs/virtual-llms.md gets a full "LB pools (v4)" section with the
two-field schema model, dispatcher selection + failover semantics,
public + virtual declaration examples, list/describe rendering, the
"pin to specific instance" escape hatch, and an API surface entry
for /members. docs/agents.md cross-link extended.
Tests: full smoke 144/144 (was 141, +3 for the new pool smoke).
Stages 1-3 ship the complete v4 — public and virtual Llms can both
join pools, agents transparently load-balance across them, yaml
round-trip preserves poolName, and the existing single-Llm world
keeps working byte-identically when poolName is null.
Agents
An Agent is an LLM persona pinned to a specific Llm, with a system prompt,
a description that surfaces in MCP tools/list, optional attachment to a
Project, and LiteLLM-style sampling defaults. Conversations are persisted
as ChatThread + ChatMessage rows so REPL sessions resume across runs.
Two surfaces use an agent:
- Direct chat via `mcpctl chat <name>` (interactive REPL or one-shot `-m "msg"`). Streams over SSE; tool calls and tool results print to stderr in dim brackets. Slash-commands `/set`, `/system`, `/tools`, `/clear`, `/save`, `/quit` adjust runtime behavior.
- Virtual MCP server registered into every project session by mcplocal's agents plugin. The agent shows up as `agent-<name>` with one tool `chat`, whose description is the agent's own description. Other Claude sessions / MCP clients see the agent as just another tool in `tools/list` and can consult it.
Data model
Three Prisma models added to src/db/prisma/schema.prisma:
- Agent — `name` (unique), `description`, `systemPrompt`, `llmId` (FK Restrict — an Llm in active use cannot be deleted), `projectId` (FK SetNull — agents survive project deletion), `proxyModelName` (optional informational override), `defaultParams` (Json, LiteLLM-style), `extras` (Json, reserved for future LoRA / tool allowlists), `ownerId`, `version`, timestamps.
- ChatThread — `agentId`, `ownerId`, `title`, `lastTurnAt`, timestamps. Cascade delete on agent.
- ChatMessage — `threadId`, `turnIndex` (monotonic per thread, enforced by `@@unique([threadId, turnIndex])`), `role` (`'system' | 'user' | 'assistant' | 'tool'`), `content`, `toolCalls` (Json — the assistant turn's `[{id, name, arguments}]`), `toolCallId` (which call a tool turn answers), `status` (`'pending' | 'complete' | 'error'`), `createdAt`. Cascade delete on thread.
status stays pending while the orchestrator runs an in-flight assistant
or tool turn, then flips to complete once the round settles. On any
exception in the chat loop, every pending row in the thread is flipped to
error so the trail stays auditable.
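A minimal sketch of that recovery step, assuming the Prisma client generated from the schema above and string ids (the surrounding service code is not shown):

```ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// On any exception in the chat loop, flip every still-pending message in the
// thread to 'error' so the trail stays auditable (per the paragraph above).
async function markPendingAsError(threadId: string): Promise<void> {
  await prisma.chatMessage.updateMany({
    where: { threadId, status: 'pending' },
    data: { status: 'error' },
  });
}
```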
Chat parameters (LiteLLM-style passthrough)
Per-call resolution: request body → agent.defaultParams → adapter default.
Setting a key to null in the request explicitly clears a default.
| Key | Type | Notes |
|---|---|---|
| `temperature` | number | 0..2 |
| `top_p` | number | 0..1 |
| `top_k` | integer | Anthropic-only; OpenAI ignores |
| `max_tokens` | integer | adapter clamps to provider max |
| `stop` | string \| string[] | up to 4 sequences |
| `presence_penalty` | number | OpenAI |
| `frequency_penalty` | number | OpenAI |
| `seed` | integer | reproducibility (provider-dependent) |
| `response_format` | object | text \| json_object \| json_schema |
| `tool_choice` | enum/object | auto \| none \| required \| `{type:'function',function:{name}}` |
| `tools_allowlist` | string[] | restricts which project MCP tools the agent can call this turn |
| `systemOverride` | string | replaces agent.systemPrompt for this call |
| `systemAppend` | string | concatenated to system block (after project Prompts) |
| `messages` | array | full message history override; if set, message/threadId history is ignored |
| `extra` | object | provider-specific knobs (Anthropic metadata.user_id, vLLM repetition_penalty) — adapters cherry-pick |
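A sketch of that per-call resolution, treating the request params and `agent.defaultParams` as plain JSON objects; keys absent from both fall through to the adapter's own defaults (the function name and exact clearing behavior are assumptions):

```ts
type ChatParams = Record<string, unknown>;

// Request body wins over agent.defaultParams; anything left unset falls back to
// the adapter default. An explicit null in the request clears the agent default.
function resolveParams(request: ChatParams, agentDefaults: ChatParams): ChatParams {
  const resolved: ChatParams = { ...agentDefaults };
  for (const [key, value] of Object.entries(request)) {
    if (value === null) delete resolved[key]; // explicit null clears the default
    else resolved[key] = value;
  }
  return resolved;
}
```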
HTTP API (mcpd)
GET /api/v1/agents list (RBAC: view:agents)
GET /api/v1/agents/:idOrName describe (view:agents)
POST /api/v1/agents create (create:agents)
PUT /api/v1/agents/:idOrName update (edit:agents)
DELETE /api/v1/agents/:idOrName delete (delete:agents)
POST /api/v1/agents/:name/chat chat — non-streaming or SSE (run:agents:<name>)
POST /api/v1/agents/:name/threads create thread (run:agents:<name>)
GET /api/v1/agents/:name/threads list threads (run:agents:<name>)
GET /api/v1/threads/:id/messages replay history (view:agents)
GET /api/v1/projects/:p/agents project-scoped list (view:projects:<p>)
The chat endpoint reuses the SSE pattern from llm-infer.ts exactly: same
headers (text/event-stream, X-Accel-Buffering: no), same data: …\n\n
framing, same [DONE] terminator. SSE chunk types:
- `{type:'text', delta}` — assistant text increments
- `{type:'tool_call', toolName, args}` — model decided to call a tool
- `{type:'tool_result', toolName, ok}` — tool dispatch outcome
- `{type:'final', threadId, turnIndex}` — terminal turn
- `{type:'error', message}` — fatal error in the loop
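A minimal streaming client against that framing, using Node 18+ global `fetch`; the base URL, request body shape, and `stream` flag are assumptions for illustration:

```ts
// Reads `data: …\n\n` frames until the `[DONE]` terminator, printing text deltas.
async function streamChat(agent: string, message: string): Promise<void> {
  const res = await fetch(`http://localhost:8080/api/v1/agents/${agent}/chat`, {
    method: 'POST',
    headers: { 'content-type': 'application/json', accept: 'text/event-stream' },
    body: JSON.stringify({ message, stream: true }), // body shape is hypothetical
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buf = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    let end: number;
    while ((end = buf.indexOf('\n\n')) !== -1) {
      const frame = buf.slice(0, end).trim();
      buf = buf.slice(end + 2);
      if (!frame.startsWith('data:')) continue;
      const payload = frame.slice('data:'.length).trim();
      if (payload === '[DONE]') return;
      const chunk = JSON.parse(payload);
      if (chunk.type === 'text') process.stdout.write(chunk.delta);
      else if (chunk.type === 'error') throw new Error(chunk.message);
    }
  }
}
```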
Tool-use loop
When the agent's project has MCP servers attached, mcpd's ChatService lists
each server's tools (via mcp-proxy.service.ts — same path real MCP traffic
uses) and presents them to the model namespaced as <server>__<tool>. On a
tool_calls response the loop dispatches each call back through the same
proxy, persists the assistant + tool turns linked by toolCallId, and loops
(cap = 12 iterations) until the model returns terminal text.
Persistence is non-transactional across the loop because tool calls can take minutes; long-held DB transactions would starve other writers.
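The loop above, reduced to its control flow; the interfaces stand in for mcpd's actual ChatService and proxy types and are illustrative only:

```ts
interface ToolCall { id: string; name: string; arguments: string }
interface ModelTurn { text?: string; toolCalls?: ToolCall[] }

// callModel asks the Llm for its next turn; dispatchTool routes a namespaced
// <server>__<tool> call back through the MCP proxy. Both are placeholders here.
async function runToolLoop(
  callModel: (history: unknown[]) => Promise<ModelTurn>,
  dispatchTool: (name: string, args: string) => Promise<string>,
  history: unknown[],
): Promise<string> {
  for (let i = 0; i < 12; i++) {                          // cap = 12 iterations
    const turn = await callModel(history);
    if (!turn.toolCalls?.length) return turn.text ?? '';  // terminal text ends the loop
    history.push({ role: 'assistant', tool_calls: turn.toolCalls });
    for (const call of turn.toolCalls) {
      const result = await dispatchTool(call.name, call.arguments);
      history.push({ role: 'tool', tool_call_id: call.id, content: result });
    }
  }
  throw new Error('tool loop exceeded 12 iterations without terminal text');
}
```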
RBAC
Agents are their own resource (agents), independent of project bindings.
Recommended:
- `view:agents` — list / describe
- `create:agents` / `edit:agents` / `delete:agents` — CRUD
- `run:agents:<name>` — drive a chat turn or manage its threads
Project-attached agents do not implicitly inherit project RBAC. If a project
member should be able to chat with the project's agents, grant them
run:agents:<each-name> (or wildcard run:agents) explicitly.
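An illustrative check for that grant shape, including the wildcard form; how mcpd actually evaluates grants may differ:

```ts
// True if the caller may chat with `agentName`: either the per-agent grant
// run:agents:<name> or the wildcard run:agents.
function canRunAgent(grants: string[], agentName: string): boolean {
  return grants.includes(`run:agents:${agentName}`) || grants.includes('run:agents');
}
```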
YAML round-trip
`get agent foo -o yaml | mcpctl apply -f -` is a no-op. The apply schema
also accepts shorthand:
apiVersion: mcpctl.io/v1
kind: agent
metadata: { name: deployer }
spec:
  description: "I help you deploy code"
  llm: qwen3-thinking      # shorthand for `{ name: qwen3-thinking }`
  project: mcpctl-dev      # shorthand for `{ name: mcpctl-dev }`
  systemPrompt: |
    You are a deployment assistant for mcpctl. Always check fulldeploy.sh
    and the k8s context before suggesting actions.
  defaultParams:
    temperature: 0.2
    max_tokens: 4096
    top_p: 0.9
    stop: ["</deploy>"]
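A sketch of how that shorthand can be normalized when the manifest is applied; `normalizeRef` is a hypothetical helper, not mcpd's actual apply code:

```ts
type Ref = string | { name: string };

// `llm: qwen3-thinking` and `llm: { name: qwen3-thinking }` resolve identically.
function normalizeRef(ref: Ref): { name: string } {
  return typeof ref === 'string' ? { name: ref } : ref;
}
```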
Wiring against your in-cluster qwen3-thinking
The kubernetes-deployment repo provisions LiteLLM in the nvidia-nim
namespace (http://litellm.nvidia-nim.svc.cluster.local:4000/v1 in-cluster,
https://llm.ad.itaz.eu/v1 external) and a virtual key reserved for mcpctl
in the Pulumi secret secrets:litellmMcpctlGatewayToken. Pulling it once:
cd /path/to/kubernetes-deployment
LITELLM_TOKEN=$(pulumi config get --stack homelab secrets:litellmMcpctlGatewayToken)
# fallback if Pulumi isn't authed locally:
# LITELLM_TOKEN=$(kubectl --context worker0-k8s0 -n nvidia-nim get secret litellm-secrets \
# -o jsonpath='{.data.LITELLM_MCPCTL_GATEWAY_TOKEN}' | base64 -d)
cd /path/to/mcpctl
mcpctl create secret litellm-key --data "API_KEY=${LITELLM_TOKEN}"
mcpctl create llm qwen3-thinking \
--type openai --model qwen3-thinking \
--url http://litellm.nvidia-nim.svc.cluster.local:4000/v1 \
--api-key-ref litellm-key/API_KEY \
--description "Qwen3-30B-A3B-Thinking-FP8 via in-cluster vLLM behind LiteLLM"
mcpctl create agent reviewer \
--llm qwen3-thinking \
--description "I review what you're shipping; ask after each major change." \
--default-temperature 0.2 --default-max-tokens 4096
mcpctl chat reviewer
Troubleshooting
- Namespace collision in mcplocal: if a project has an upstream MCP server literally named `agent-<x>`, the agents plugin detects the collision in `onSessionCreate`, skips that agent's registration, and emits a `ctx.log.warn` line. Document the `agent-` prefix as reserved on real server names.
- Llm-in-use blocks delete: `Agent.llm` is `onDelete: Restrict`. Detach every agent (or delete them) before deleting the underlying Llm.
- Stale `pending` rows: a crash mid-loop leaves `pending` ChatMessage rows. The next request recovers — `markPendingAsError` flips them on the next failure path, and `loadHistory` filters out `error` rows when rebuilding context for the next turn.
- `proxyModelName` is informational only for agents. The agent's own internal tool loop runs server-side in mcpd and bypasses mcplocal's proxymodel pipeline entirely. Don't try to plumb it.
- Anthropic + tools: the Anthropic adapter currently drops `tool`-role messages and doesn't translate OpenAI `tool_calls` to Anthropic `tool_use` / `tool_result` blocks. Use an OpenAI-compatible provider (LiteLLM, vLLM, OpenAI) for agents that need tool calling until that translation lands.
See also
- personalities.md — named overlays of prompts on top of an agent. Same agent, different prompt bundles, picked per-turn via `--personality <name>` or `agent.defaultPersonality`.
- virtual-llms.md — local LLMs (e.g. `vllm-local`) publishing themselves into `mcpctl get llms` so anyone can chat with them via `mcpctl chat-llm <name>`. Inference is relayed through the publishing mcplocal — mcpd never holds the local URL or key. v3 extends the same publishing model to virtual agents declared in mcplocal config — they show up in `mcpctl get agent` with `KIND=virtual / STATUS=active` and become chat-able via `mcpctl chat <name>` like any other agent. v4 adds pools: Llms sharing a `poolName` stack into one load-balanced pool that the chat dispatcher transparently widens to at request time, with random selection + sequential failover on transport errors.
- chat.md — `mcpctl chat` flow and LiteLLM-style flags.