Agents

An Agent is an LLM persona pinned to a specific Llm, with a system prompt, a description that surfaces in MCP tools/list, optional attachment to a Project, and LiteLLM-style sampling defaults. Conversations are persisted as ChatThread + ChatMessage rows so REPL sessions resume across runs.

Two surfaces use an agent:

  1. Direct chat via mcpctl chat <name> (interactive REPL or one-shot -m "msg"). Streams over SSE; tool calls and tool results print to stderr in dim brackets. Slash-commands /set, /system, /tools, /clear, /save, /quit adjust runtime behavior.

  2. Virtual MCP server registered into every project session by mcplocal's agents plugin. The agent shows up as agent-<name> with one tool chat, whose description is the agent's own description. Other Claude sessions / MCP clients see the agent as just another tool in tools/list and can consult it.
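
The first surface is scriptable as well as interactive; -m sends a single message and exits, which makes it easy to drive from a shell script or CI job. This sketch assumes an agent named reviewer already exists (it is created in the wiring example further down):

# assumes the 'reviewer' agent from the wiring example below
mcpctl chat reviewer

# one-shot, non-interactive
mcpctl chat reviewer -m "Give me a quick read on the change I just described."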

Data model

Three Prisma models added to src/db/prisma/schema.prisma:

  • Agent: name (unique), description, systemPrompt, llmId (FK Restrict — an Llm in active use cannot be deleted), projectId (FK SetNull — agents survive project deletion), proxyModelName (optional informational override), defaultParams (Json, LiteLLM-style), extras (Json, reserved for future LoRA / tool allowlists), ownerId, version, timestamps.

  • ChatThread: agentId, ownerId, title, lastTurnAt, timestamps. Cascade delete on agent.

  • ChatMessage: threadId, turnIndex (monotonic per thread, enforced by @@unique([threadId, turnIndex])), role ('system' | 'user' | 'assistant' | 'tool'), content, toolCalls (Json — assistant turn's [{id,name,arguments}]), toolCallId (which call a tool turn answers), status ('pending' | 'complete' | 'error'), createdAt. Cascade delete on thread.

status stays pending while the orchestrator runs an in-flight assistant or tool turn, then flips to complete once the round settles. On any exception in the chat loop, every pending row in the thread is flipped to error so the trail stays auditable.

Chat parameters (LiteLLM-style passthrough)

Per-call resolution: request body → agent.defaultParams → adapter default. Setting a key to null in the request explicitly clears a default.

Key                Type               Notes
temperature        number             0..2
top_p              number             0..1
top_k              integer            Anthropic-only; OpenAI ignores
max_tokens         integer            adapter clamps to provider max
stop               string | string[]  up to 4 sequences
presence_penalty   number             OpenAI
frequency_penalty  number             OpenAI
seed               integer            reproducibility (provider-dependent)
response_format    object             text | json_object | json_schema
tool_choice        enum/object        auto|none|required|{type:'function',function:{name}}
tools_allowlist    string[]           restricts which project MCP tools the agent can call this turn
systemOverride     string             replaces agent.systemPrompt for this call
systemAppend       string             concatenated to system block (after project Prompts)
messages           array              full message history override; if set, message/threadId history is ignored
extra              object             provider-specific knobs (Anthropic metadata.user_id, vLLM repetition_penalty) — adapters cherry-pick
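
A minimal request sketch tying the resolution order above to concrete values. The path and parameter keys come from the tables in this document; the bearer-token header, the $MCPD_URL / $TOKEN variables, and the singular message field are assumptions for illustration, not confirmed API shapes. Here temperature overrides the agent default for this call, max_tokens: null clears the agent default so the adapter default applies, and tool_choice stays on auto:

# $MCPD_URL / $TOKEN are placeholders; the auth scheme and the "message" body
# field are illustrative assumptions — only the path and parameter keys are documented.
curl -sS -X POST "$MCPD_URL/api/v1/agents/reviewer/chat" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "message": "Review the change I just pushed.",
        "temperature": 0.7,
        "max_tokens": null,
        "tool_choice": "auto"
      }'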

HTTP API (mcpd)

GET    /api/v1/agents                  list (RBAC: view:agents)
GET    /api/v1/agents/:idOrName        describe (view:agents)
POST   /api/v1/agents                  create (create:agents)
PUT    /api/v1/agents/:idOrName        update (edit:agents)
DELETE /api/v1/agents/:idOrName        delete (delete:agents)
POST   /api/v1/agents/:name/chat       chat — non-streaming or SSE (run:agents:<name>)
POST   /api/v1/agents/:name/threads    create thread (run:agents:<name>)
GET    /api/v1/agents/:name/threads    list threads (run:agents:<name>)
GET    /api/v1/threads/:id/messages    replay history (view:agents)
GET    /api/v1/projects/:p/agents      project-scoped list (view:projects:<p>)
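
The read-side calls follow the same pattern. These sketches assume bearer-token auth and placeholder $MCPD_URL / $TOKEN / THREAD_ID values; the paths are the ones listed above:

# placeholders: $MCPD_URL, $TOKEN, THREAD_ID; auth scheme assumed, paths documented above
curl -sS -H "Authorization: Bearer $TOKEN" "$MCPD_URL/api/v1/agents"
curl -sS -H "Authorization: Bearer $TOKEN" "$MCPD_URL/api/v1/agents/reviewer/threads"
curl -sS -H "Authorization: Bearer $TOKEN" "$MCPD_URL/api/v1/threads/THREAD_ID/messages"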

The chat endpoint reuses the SSE pattern from llm-infer.ts exactly: same headers (text/event-stream, X-Accel-Buffering: no), same data: …\n\n framing, same [DONE] terminator. SSE chunk types:

  • {type:'text', delta} — assistant text increments
  • {type:'tool_call', toolName, args} — model decided to call a tool
  • {type:'tool_result', toolName, ok} — tool dispatch outcome
  • {type:'final', threadId, turnIndex} — terminal turn
  • {type:'error', message} — fatal error in the loop
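
A small consumer sketch for the streaming mode. How a client opts into SSE rather than the non-streaming response isn't spelled out here; this sketch assumes an Accept: text/event-stream header does it, so treat that header (and the auth/URL placeholders) as assumptions to verify against mcpd:

# Accept header is an assumed way to request SSE; $MCPD_URL / $TOKEN are placeholders.
curl -N -sS -X POST "$MCPD_URL/api/v1/agents/reviewer/chat" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: text/event-stream" \
  -H "Content-Type: application/json" \
  -d '{"message": "What changed in the last deploy?"}' \
| while read -r line; do
    case "$line" in
      "data: [DONE]") break ;;                    # terminator documented above
      data:*) printf '%s\n' "${line#data: }" ;;   # one JSON chunk per event (text / tool_call / tool_result / final / error)
    esac
  done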

Tool-use loop

When the agent's project has MCP servers attached, mcpd's ChatService lists each server's tools (via mcp-proxy.service.ts — same path real MCP traffic uses) and presents them to the model namespaced as <server>__<tool>. On a tool_calls response the loop dispatches each call back through the same proxy, persists the assistant + tool turns linked by toolCallId, and loops (cap = 12 iterations) until the model returns terminal text.

Persistence is non-transactional across the loop because tool calls can take minutes; long-held DB transactions would starve other writers.

RBAC

Agents are their own resource (agents), independent of project bindings. Recommended:

  • view:agents — list / describe
  • create:agents / edit:agents / delete:agents — CRUD
  • run:agents:<name> — drive a chat turn or manage its threads

Project-attached agents do not implicitly inherit project RBAC. If a project member should be able to chat with the project's agents, grant them run:agents:<each-name> (or wildcard run:agents) explicitly.

YAML round-trip

mcpctl get agent foo -o yaml | mcpctl apply -f - is a no-op. The apply schema also accepts shorthand:

apiVersion: mcpctl.io/v1
kind: agent
metadata: { name: deployer }
spec:
  description: "I help you deploy code"
  llm: qwen3-thinking          # shorthand for `{ name: qwen3-thinking }`
  project: mcpctl-dev          # shorthand for `{ name: mcpctl-dev }`
  systemPrompt: |
    You are a deployment assistant for mcpctl. Always check fulldeploy.sh
    and the k8s context before suggesting actions.
  defaultParams:
    temperature: 0.2
    max_tokens: 4096
    top_p: 0.9
    stop: ["</deploy>"]

Wiring against your in-cluster qwen3-thinking

The kubernetes-deployment repo provisions LiteLLM in the nvidia-nim namespace (http://litellm.nvidia-nim.svc.cluster.local:4000/v1 in-cluster, https://llm.ad.itaz.eu/v1 external) and a virtual key reserved for mcpctl in the Pulumi secret secrets:litellmMcpctlGatewayToken. Pulling it once:

cd /path/to/kubernetes-deployment
LITELLM_TOKEN=$(pulumi config get --stack homelab secrets:litellmMcpctlGatewayToken)

# fallback if Pulumi isn't authed locally:
# LITELLM_TOKEN=$(kubectl --context worker0-k8s0 -n nvidia-nim get secret litellm-secrets \
#   -o jsonpath='{.data.LITELLM_MCPCTL_GATEWAY_TOKEN}' | base64 -d)

cd /path/to/mcpctl
mcpctl create secret litellm-key --data "API_KEY=${LITELLM_TOKEN}"
mcpctl create llm qwen3-thinking \
    --type openai --model qwen3-thinking \
    --url http://litellm.nvidia-nim.svc.cluster.local:4000/v1 \
    --api-key-ref litellm-key/API_KEY \
    --description "Qwen3-30B-A3B-Thinking-FP8 via in-cluster vLLM behind LiteLLM"
mcpctl create agent reviewer \
    --llm qwen3-thinking \
    --description "I review what you're shipping; ask after each major change." \
    --default-temperature 0.2 --default-max-tokens 4096
mcpctl chat reviewer
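
If the REPL misbehaves, re-reading both objects is the quickest check that the create calls round-tripped as intended (same mcpctl get forms used elsewhere in these docs):

mcpctl get llm
mcpctl get agent reviewer -o yaml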

Troubleshooting

  • Namespace collision in mcplocal: if a project has an upstream MCP server literally named agent-<x>, the agents plugin detects the collision in onSessionCreate, skips that agent's registration, and emits a ctx.log.warn line. Document the agent- prefix as reserved on real server names.

  • Llm-in-use blocks delete: Agent.llm is onDelete: Restrict. Detach every agent (or delete them) before deleting the underlying Llm.

  • Stale pending rows: a crash mid-loop leaves pending ChatMessage rows. The next request recovers — markPendingAsError flips them on the next failure path, and loadHistory filters out error rows when rebuilding context for the next turn.

  • proxyModelName is informational only for agents. The agent's own internal tool loop runs server-side in mcpd and bypasses mcplocal's proxymodel pipeline entirely. Don't try to plumb it.

  • Anthropic + tools: the Anthropic adapter currently drops tool role messages and doesn't translate OpenAI tool_calls to Anthropic tool_use / tool_result blocks. Use an OpenAI-compatible provider (LiteLLM, vLLM, OpenAI) for agents that need tool calling until that translation lands.

See also

  • personalities.md — named overlays of prompts on top of an agent. Same agent, different prompt bundles, picked per-turn via --personality <name> or agent.defaultPersonality.
  • virtual-llms.md — local LLMs (e.g. vllm-local) publishing themselves into mcpctl get llm so anyone can chat with them via mcpctl chat-llm <name>. Inference is relayed through the publishing mcplocal — mcpd never holds the local URL or key. v3 extends the same publishing model to virtual agents declared in mcplocal config — they show up in mcpctl get agent with KIND=virtual / STATUS=active and become chat-able via mcpctl chat <name> like any other agent. v4 adds pools: Llms sharing a poolName stack into one load-balanced pool that the chat dispatcher transparently widens to at request time, with random selection + sequential failover on transport errors.
  • chat.md — mcpctl chat flow and LiteLLM-style flags.