# `mcpctl chat`

Open an interactive chat session with an `Agent`, or send a single message in one shot. See [agents.md](agents.md) for what an Agent is and how to create one.

## Modes

```bash
mcpctl chat <agent>                      # interactive REPL, new thread
mcpctl chat <agent> --thread <id>        # interactive REPL, resume thread
mcpctl chat <agent> -m "hi"              # one-shot, prints reply, no REPL
mcpctl chat <agent> -m "hi" --no-stream  # one-shot, single JSON response (no SSE)
```

Streaming is on by default. Text deltas land on stdout as they arrive; tool calls and tool results print to stderr in dim brackets so the chat output stays clean.

## Per-call flags

All optional. They override the agent's `defaultParams` for this session only — use the in-REPL `/save` slash-command to persist the current set back to the agent.

```bash
--system <text>         # replace agent.systemPrompt for this session
--system-file <path>    # read --system text from a file
--system-append <text>  # append to the agent system block (after project Prompts)
--temperature <float>   # 0..2
--top-p <float>         # 0..1
--top-k <int>           # integer; Anthropic-only, OpenAI ignores
--max-tokens <int>      # cap on assistant tokens
--seed <int>            # reproducibility (provider-dependent)
--stop <seq>            # stop sequence (repeatable, up to 4)
--allow-tool <name>     # repeat to allowlist project MCP tools
--extra <key>=<value>   # provider-specific knob (repeatable)
--no-stream             # disable SSE; single JSON response
```

`--extra` is the LiteLLM-style escape hatch: pass anything the underlying adapter understands. Numeric values are auto-parsed (`--extra repetition_penalty=1.1`); strings stay strings.
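Putting a few of these together — the repeatable flags stack, and `--extra` pairs pass straight through to the provider adapter. (The agent name `reviewer` and tool name `grafana_query` below are illustrative, not part of any default install.)

```bash
# Session-only overrides: two stop sequences, one allowlisted MCP tool,
# and a provider-specific knob. Nothing is written back to the agent
# unless you /save inside the REPL.
mcpctl chat reviewer \
  --temperature 0.2 \
  --stop "###" --stop "END" \
  --allow-tool grafana_query \
  --extra repetition_penalty=1.1
```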
## In-REPL slash-commands

```
/set KEY VALUE   adjust an override for the rest of the session (temperature,
                 top-p, top-k, max-tokens, seed, stop, or any provider-specific
                 knob — unknown keys go into `extra`)
/system          set systemAppend for this turn onward (empty = clear)
/tools           list MCP servers the agent can call as tools
/clear           start a fresh thread (same agent)
/save            PATCH agent.defaultParams = current overrides
                 (systemOverride / systemAppend are NOT persisted)
/quit, /exit     leave the REPL (Ctrl-D works too)
```

## Threads

Threads persist server-side. To resume:

```bash
mcpctl get threads --agent reviewer
mcpctl chat reviewer --thread <id>
```

`mcpctl get thread <id>` reads the message log:

```bash
mcpctl get thread c0abc… -o yaml
```

## Examples

**Quick gut-check on a deploy:**

```bash
$ mcpctl chat reviewer -m "is fulldeploy.sh safe to run on the current branch?"
Yes — I checked: tests are green on commit 727e7d6 and there's no in-flight
migration. The k8s context is worker0-k8s0 (production); confirm that's
intended before running.
(thread: cm9k…)
```

**Resuming with overrides:**

```bash
$ mcpctl chat deployer --thread cm9k… --temperature 0.0 --max-tokens 256
> walk me through what changed since the last deploy
…
```

**Pinning sampling defaults to the agent:**

```bash
$ mcpctl chat deployer --temperature 0.0 --max-tokens 8000
> /save
(saved current overrides as agent.defaultParams)
> /quit
```

## Troubleshooting

- **No agents appear in `tools/list`** — check the agent has a project attach (`mcpctl describe agent <name>`). The mcplocal plugin only exposes agents on their attached project's session.
- **Tool calls fail with `Project not found`** — the agent has no project attach. Either attach it (`mcpctl edit agent <name>` and set the project field), or expect text-only chat.
- **Anthropic agents can't call tools** — known limitation; the Anthropic adapter doesn't translate OpenAI tool format yet. Use LiteLLM or a direct OpenAI-compatible provider for tool-using agents until the translator ships.
- **`mcpctl chat <name>` returns 404** — the agent name doesn't resolve. Run `mcpctl get agents` to confirm spelling.
- **REPL feels stuck** — agent tool calls can take minutes (e.g. running a Grafana query). Watch stderr for `[tool_call: …]` / `[tool_result: …]` brackets; those tell you the loop is alive.
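Most of the failures above come down to name resolution or a missing project attach, so a quick sanity pass with the read commands already covered usually narrows it down (the `reviewer` name is illustrative):

```bash
mcpctl get agents                    # does the agent name resolve at all?
mcpctl describe agent reviewer       # is a project attached?
mcpctl get threads --agent reviewer  # any existing threads to resume?
```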