feat(cli+docs): mcpctl get agent KIND/STATUS columns + virtual-agent smoke + docs (v3 Stage 4)
Some checks failed
CI/CD / lint (pull_request) Successful in 55s
CI/CD / test (pull_request) Successful in 1m10s
CI/CD / typecheck (pull_request) Successful in 2m30s
CI/CD / build (pull_request) Successful in 2m36s
CI/CD / smoke (pull_request) Failing after 5m56s
CI/CD / publish (pull_request) Has been skipped

CLI: `mcpctl get agent` table view gains KIND and STATUS columns
mirroring the `get llm` shape from v1. Public agents render as
`public/active` (the AgentRow defaults) and virtual ones surface their
true lifecycle state, so `mcpctl get agent` becomes a single-pane view
for both manually-created and mcplocal-published personas.

Smoke: tests/smoke/virtual-agent.smoke.test.ts mirrors virtual-llm's
in-process registrar pattern — publishes a fake provider + agent in
one round-trip, confirms mcpd surfaces the agent kind=virtual /
status=active under /api/v1/agents, then disconnects and verifies the
paired Llm and Agent both flip to inactive (deletion is GC-driven, not
disconnect-driven, so the rows must still exist post-stop). Heartbeat-
stale and 4 h sweep paths are covered by the unit suite to keep smoke
duration in check.

Docs: docs/virtual-llms.md gets a "Virtual agents (v3)" section with a
config sample, lifecycle notes, listing example, and the cluster-wide
name-uniqueness caveat. The API surface block now mentions the new
`agents[]` field on _provider-register, the join-by-session heartbeat
behavior, and the `GET /api/v1/agents` lifecycle fields. docs/agents.md
gains a one-paragraph note pointing to the v3 publishing path.

Tests: full smoke suite 141/141 (was 139, +2 new), unit suites
unchanged (mcpd 860/860, mcplocal 723/723).
This commit is contained in:
Michal
2026-04-27 18:47:03 +01:00
parent 610808b9e7
commit 1998b733b2
4 changed files with 314 additions and 6 deletions

View File

@@ -204,5 +204,9 @@ mcpctl chat reviewer
- [virtual-llms.md](./virtual-llms.md) — local LLMs (e.g. `vllm-local`)
publishing themselves into `mcpctl get llm` so anyone can chat with
them via `mcpctl chat-llm <name>`. Inference is relayed through the
publishing mcplocal — mcpd never holds the local URL or key.
publishing mcplocal — mcpd never holds the local URL or key. **v3**
extends the same publishing model to **virtual agents** declared in
mcplocal config — they show up in `mcpctl get agent` with
`KIND=virtual / STATUS=active` and become chat-able via
`mcpctl chat <name>` like any other agent.
- [chat.md](./chat.md) — `mcpctl chat` flow and LiteLLM-style flags.

View File

@@ -199,10 +199,87 @@ provider doesn't come up within `maxWaitSeconds`), every queued infer
is rejected with a clear error and the row stays `hibernating`;
the next request gets a fresh wake attempt.
## Virtual agents (v3)
Virtual agents extend the same publishing model to **agents** — named
LLM personas with their own system prompt and sampling defaults. mcplocal
declares them in its config alongside its providers, and the existing
`_provider-register` endpoint atomically publishes both Llms and Agents
in one round-trip. They show up under `mcpctl get agent` next to
manually-created public agents and become chat-able via
`mcpctl chat <agent>` — no special command.
### Declaring a virtual agent in mcplocal config
```jsonc
// ~/.mcpctl/config.json
{
"llm": {
"providers": [
{ "name": "vllm-local", "type": "vllm", "model": "Qwen/Qwen2.5-7B-Instruct-AWQ", "publish": true }
]
},
"agents": [
{
"name": "local-coder",
"llm": "vllm-local",
"description": "Local coding assistant on the workstation GPU",
"systemPrompt": "You are a senior engineer. Be terse.",
"defaultParams": { "temperature": 0.2 }
}
]
}
```
`llm` references a published provider's name from the same config. Agents
pinned to a name that isn't being published are still forwarded to mcpd —
the server validates `llmName` and 404s with a clear message if it's
genuinely missing, which lets you point at a *public* Llm if you want.
### Lifecycle
Same shape as virtual Llms — 30 s heartbeat from mcplocal, 90 s
heartbeat-stale → status flips to `inactive`, 4 h inactive → row deleted
by mcpd's GC sweep. Heartbeats cover both Llms and Agents owned by the
session.
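As a reading aid, the thresholds above can be sketched as a pure function (constant and field names are illustrative, not mcpd's actual service code):
```typescript
// Illustrative sketch of the lifecycle thresholds: 30 s heartbeat cadence,
// 90 s without a beat flips the row to inactive, 4 h inactive deletes it.
// Names are assumptions for the sketch, not mcpd's real AgentService API.
const STALE_AFTER_MS = 90_000;
const GC_AFTER_MS = 4 * 60 * 60 * 1000;

type Lifecycle = 'active' | 'inactive' | 'deleted';

function lifecycleAt(
  now: number,
  lastHeartbeatAt: number,
  inactiveSince: number | null,
): Lifecycle {
  if (inactiveSince !== null && now - inactiveSince >= GC_AFTER_MS) return 'deleted';
  if (now - lastHeartbeatAt >= STALE_AFTER_MS) return 'inactive';
  return 'active';
}
```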
The GC orders agent deletes **before** their pinned virtual Llm so the
`Agent.llmId onDelete: Restrict` FK doesn't block the sweep.
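A minimal sketch of that ordering constraint, with hypothetical names (the real sweep lives in mcpd's GC):
```typescript
// The sweep must delete Agent rows before the virtual Llm they pin,
// because `Agent.llmId` is `onDelete: Restrict`. Sorting stale rows
// agents-first sketches that dependency; it is not mcpd's actual code.
interface StaleRow { id: string; table: 'agent' | 'llm' }

function sweepOrder(stale: StaleRow[]): StaleRow[] {
  // Agents first, then Llms; mirrors the FK dependency direction.
  return [...stale].sort(
    (a, b) => (a.table === 'agent' ? 0 : 1) - (b.table === 'agent' ? 0 : 1),
  );
}
```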
### Listing
```sh
$ mcpctl get agents
NAME KIND STATUS LLM PROJECT DESCRIPTION
local-coder virtual active vllm-local - Local coding assistant on…
reviewer public active qwen3-thinking mcpctl-development I review what you're shipping…
```
The `KIND` and `STATUS` columns are the v3 additions. Round-tripping
through `mcpctl get agent X -o yaml | mcpctl apply -f -` strips those
runtime fields cleanly so a virtual agent can be re-declared as a public
one (or vice versa) without manual editing.
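A hedged sketch of what the round-trip relies on: the runtime-only fields are dropped before the declaration is re-applied (helper name hypothetical, not mcpctl's implementation):
```typescript
// kind/status are runtime fields owned by mcpd; a declaration must not
// carry them. Dropping them yields a row that `apply` can re-declare.
function stripRuntimeFields<T extends { kind?: string; status?: string }>(row: T) {
  const { kind, status, ...declaration } = row;
  return declaration;
}
```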
### Chatting
```sh
$ mcpctl chat local-coder
> hello?
… streams through mcpd → SSE → mcplocal's vllm-local provider …
```
Same command as for public agents. Works because chat.service has a
`kind=virtual` branch that hands off to `VirtualLlmService.enqueueInferTask`
when the agent's pinned Llm is virtual.
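Sketched in isolation, with assumed names (the real branch lives in chat.service):
```typescript
// A virtual Llm has no URL mcpd can call directly, so the request becomes
// a queued task relayed over the publisher's SSE channel instead of a
// direct infer call. Function and parameter names are illustrative only.
interface PinnedLlm { id: string; kind: 'public' | 'virtual' }

function routeChat(
  llm: PinnedLlm,
  inferDirect: (llmId: string) => string,
  enqueueInferTask: (llmId: string) => string,
): string {
  return llm.kind === 'virtual' ? enqueueInferTask(llm.id) : inferDirect(llm.id);
}
```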
### Cluster-wide name uniqueness
`Agent.name` is unique cluster-wide. Two mcplocals trying to publish the
same agent name collide on the second register with HTTP 409. Per-publisher
namespacing is a v4+ concern — same constraint as virtual Llms in v1.
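The collision reduces to a uniqueness check; this sketch is illustrative only (the real constraint is a unique index on `Agent.name`):
```typescript
// The second publisher of an already-taken agent name gets HTTP 409;
// names are cluster-global, with no per-publisher namespace until v4+.
function registerName(taken: Set<string>, name: string): 200 | 409 {
  if (taken.has(name)) return 409;
  taken.add(name);
  return 200;
}
```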
## Roadmap (later stages)
- **v3 — Virtual agents**: mcplocal publishes its local agent configs
(model + system prompt + sampling defaults) into mcpd's `Agent` table.
- **v4 — LB pool by model**: agents can target a model name instead of
a specific Llm; mcpd picks the healthiest pool member per request.
- **v5 — Task queue**: persisted requests for hibernating/saturated
@@ -211,18 +288,23 @@ the next request gets a fresh wake attempt.
## API surface (v1)
```
POST /api/v1/llms/_provider-register → returns { providerSessionId, llms[] }
POST /api/v1/llms/_provider-register → returns { providerSessionId, llms[], agents[] }
v3: body accepts an optional `agents[]` array
alongside `providers[]`. Atomic publish; older
clients (providers-only) keep working.
GET /api/v1/llms/_provider-stream → SSE channel; requires x-mcpctl-provider-session header
POST /api/v1/llms/_provider-heartbeat → { providerSessionId }
POST /api/v1/llms/_provider-heartbeat → { providerSessionId } — bumps both Llms and Agents
owned by the session
POST /api/v1/llms/_provider-task/:id/result
→ one of:
{ error: "msg" }
{ chunk: { data, done? } }
{ status, body }
GET /api/v1/llms → list (now includes kind, status, lastHeartbeatAt, inactiveSince)
GET /api/v1/llms → list (includes kind, status, lastHeartbeatAt, inactiveSince)
POST /api/v1/llms/<virtual>/infer → routes through the SSE relay
DELETE /api/v1/llms/<virtual> → delete unconditionally (also runs GC's job)
GET /api/v1/agents → list (v3: includes kind, status, lastHeartbeatAt, inactiveSince)
```
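For illustration, a combined v3 register body might look like the following; field names are inferred from the config sample and response shape above, so treat this as a sketch rather than the wire schema:
```jsonc
// POST /api/v1/llms/_provider-register — illustrative body, not the
// authoritative schema. Older clients may omit `agents` entirely.
{
  "providers": [
    { "name": "vllm-local", "type": "vllm", "model": "Qwen/Qwen2.5-7B-Instruct-AWQ" }
  ],
  "agents": [
    {
      "name": "local-coder",
      "llmName": "vllm-local",
      "description": "Local coding assistant on the workstation GPU",
      "systemPrompt": "You are a senior engineer. Be terse.",
      "defaultParams": { "temperature": 0.2 }
    }
  ]
}
```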
RBAC piggybacks on `view/edit/create:llms` — no new resource. Publishing

View File

@@ -155,10 +155,17 @@ interface AgentRow {
description: string;
llm: { id: string; name: string };
project: { id: string; name: string } | null;
// v3: lifecycle fields. Public agents have kind=public/status=active and
// these never change — virtuals get them set/updated by mcpd's
// AgentService as the publishing mcplocal heartbeats and disconnects.
kind?: 'public' | 'virtual';
status?: 'active' | 'inactive';
}
const agentColumns: Column<AgentRow>[] = [
{ header: 'NAME', key: 'name' },
{ header: 'KIND', key: (r) => r.kind ?? 'public', width: 8 },
{ header: 'STATUS', key: (r) => r.status ?? 'active', width: 10 },
{ header: 'LLM', key: (r) => r.llm.name, width: 24 },
{ header: 'PROJECT', key: (r) => r.project?.name ?? '-', width: 20 },
{ header: 'DESCRIPTION', key: (r) => truncate(r.description, 50) || '-', width: 50 },

View File

@@ -0,0 +1,215 @@
/**
* Smoke tests: v3 virtual agents — register a virtual Llm + a virtual
* Agent through the same `_provider-register` payload, then verify mcpd
* surfaces the agent as kind=virtual / status=active. Mirrors
* virtual-llm.smoke.test.ts's in-process registrar pattern so we don't
* need to mutate ~/.mcpctl/config.json or bounce systemd's mcplocal.
*
* Heartbeat-stale → inactive (90 s) and 4 h auto-deletion are covered by
* the unit suite (mcpd virtual-agent-service.test.ts); waiting > 90 s in
* smoke would balloon the suite duration.
*/
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import http from 'node:http';
import https from 'node:https';
import { mkdtempSync, rmSync, readFileSync, existsSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import {
VirtualLlmRegistrar,
type RegistrarPublishedProvider,
type RegistrarPublishedAgent,
} from '../../src/providers/registrar.js';
import type { LlmProvider, CompletionResult } from '../../src/providers/types.js';
const MCPD_URL = process.env.MCPD_URL ?? 'https://mcpctl.ad.itaz.eu';
const SUFFIX = Date.now().toString(36);
const PROVIDER_NAME = `smoke-vagent-llm-${SUFFIX}`;
const AGENT_NAME = `smoke-vagent-${SUFFIX}`;
function makeFakeProvider(name: string, content: string): LlmProvider {
return {
name,
async complete(): Promise<CompletionResult> {
return {
content,
toolCalls: [],
usage: { promptTokens: 1, completionTokens: 4, totalTokens: 5 },
finishReason: 'stop',
};
},
async listModels() { return []; },
async isAvailable() { return true; },
};
}
function healthz(url: string, timeoutMs = 5000): Promise<boolean> {
return new Promise((resolve) => {
const parsed = new URL(`${url.replace(/\/$/, '')}/healthz`);
const driver = parsed.protocol === 'https:' ? https : http;
const req = driver.get({
hostname: parsed.hostname,
port: parsed.port || (parsed.protocol === 'https:' ? 443 : 80),
path: parsed.pathname,
timeout: timeoutMs,
}, (res) => { resolve((res.statusCode ?? 500) < 500); res.resume(); });
req.on('error', () => resolve(false));
req.on('timeout', () => { req.destroy(); resolve(false); });
});
}
function readToken(): string | null {
try {
const path = join(process.env.HOME ?? '', '.mcpctl', 'credentials');
if (!existsSync(path)) return null;
const parsed = JSON.parse(readFileSync(path, 'utf-8')) as { token?: string };
return parsed.token ?? null;
} catch {
return null;
}
}
interface HttpResponse { status: number; body: string }
function httpRequest(method: string, urlStr: string, body: unknown): Promise<HttpResponse> {
return new Promise((resolve, reject) => {
const tokenRaw = readToken();
const parsed = new URL(urlStr);
const driver = parsed.protocol === 'https:' ? https : http;
const headers: Record<string, string> = {
Accept: 'application/json',
...(body !== undefined ? { 'Content-Type': 'application/json' } : {}),
...(tokenRaw !== null ? { Authorization: `Bearer ${tokenRaw}` } : {}),
};
const req = driver.request({
hostname: parsed.hostname,
port: parsed.port || (parsed.protocol === 'https:' ? 443 : 80),
path: parsed.pathname + parsed.search,
method,
headers,
timeout: 30_000,
}, (res) => {
const chunks: Buffer[] = [];
res.on('data', (c: Buffer) => chunks.push(c));
res.on('end', () => {
resolve({ status: res.statusCode ?? 0, body: Buffer.concat(chunks).toString('utf-8') });
});
});
req.on('error', reject);
req.on('timeout', () => { req.destroy(); reject(new Error(`httpRequest timeout: ${method} ${urlStr}`)); });
if (body !== undefined) req.write(JSON.stringify(body));
req.end();
});
}
interface AgentRow { id: string; name: string; kind?: string; status?: string; llm?: { name: string }; description?: string }
let mcpdUp = false;
let registrar: VirtualLlmRegistrar | null = null;
let tempDir: string;
describe('virtual-agent smoke (v3)', () => {
beforeAll(async () => {
mcpdUp = await healthz(MCPD_URL);
if (!mcpdUp) {
// eslint-disable-next-line no-console
console.warn(`\n ○ virtual-agent smoke: skipped — ${MCPD_URL}/healthz unreachable.\n`);
return;
}
if (readToken() === null) {
mcpdUp = false;
// eslint-disable-next-line no-console
console.warn('\n ○ virtual-agent smoke: skipped — no ~/.mcpctl/credentials.\n');
return;
}
tempDir = mkdtempSync(join(tmpdir(), 'mcpctl-virtual-agent-smoke-'));
}, 20_000);
afterAll(async () => {
if (registrar !== null) registrar.stop();
if (tempDir !== undefined) rmSync(tempDir, { recursive: true, force: true });
// Defensive cleanup: agent first (Agent.llmId has a Restrict FK), then Llm.
if (mcpdUp) {
const agents = await httpRequest('GET', `${MCPD_URL}/api/v1/agents`, undefined);
if (agents.status === 200) {
const rows = JSON.parse(agents.body) as Array<{ id: string; name: string }>;
const row = rows.find((r) => r.name === AGENT_NAME);
if (row !== undefined) {
await httpRequest('DELETE', `${MCPD_URL}/api/v1/agents/${row.id}`, undefined);
}
}
const llms = await httpRequest('GET', `${MCPD_URL}/api/v1/llms`, undefined);
if (llms.status === 200) {
const rows = JSON.parse(llms.body) as Array<{ id: string; name: string }>;
const row = rows.find((r) => r.name === PROVIDER_NAME);
if (row !== undefined) {
await httpRequest('DELETE', `${MCPD_URL}/api/v1/llms/${row.id}`, undefined);
}
}
}
});
it('registrar publishes provider + agent in one round-trip and mcpd lists the agent kind=virtual / status=active', async () => {
if (!mcpdUp) return;
const token = readToken();
if (token === null) return;
const published: RegistrarPublishedProvider[] = [
{ provider: makeFakeProvider(PROVIDER_NAME, 'hi from virtual agent'), type: 'openai', model: 'fake-vagent', tier: 'fast' },
];
const publishedAgents: RegistrarPublishedAgent[] = [
{
name: AGENT_NAME,
llmName: PROVIDER_NAME,
description: 'v3 virtual agent smoke',
systemPrompt: 'You are a smoke test. Reply READY.',
defaultParams: { temperature: 0 },
},
];
registrar = new VirtualLlmRegistrar({
mcpdUrl: MCPD_URL,
token,
publishedProviders: published,
publishedAgents,
sessionFilePath: join(tempDir, 'session'),
log: { info: () => {}, warn: () => {}, error: () => {} },
heartbeatIntervalMs: 60_000,
});
await registrar.start();
expect(registrar.getSessionId()).not.toBeNull();
// Give the SSE handshake + atomic register a moment to settle.
await new Promise((r) => setTimeout(r, 400));
const res = await httpRequest('GET', `${MCPD_URL}/api/v1/agents`, undefined);
expect(res.status).toBe(200);
const rows = JSON.parse(res.body) as AgentRow[];
const row = rows.find((r) => r.name === AGENT_NAME);
expect(row, `${AGENT_NAME} must be present`).toBeDefined();
expect(row!.kind).toBe('virtual');
expect(row!.status).toBe('active');
expect(row!.llm?.name).toBe(PROVIDER_NAME);
expect(row!.description).toBe('v3 virtual agent smoke');
}, 30_000);
it('publisher disconnect flips the agent to status=inactive (paired with its Llm)', async () => {
if (!mcpdUp) return;
if (registrar !== null) {
registrar.stop();
registrar = null;
}
// unbindSession runs synchronously on the SSE close handler; mcpd
// flips both the Llm and any agents owned by the session to
// inactive. A short wait covers the request round-trip.
await new Promise((r) => setTimeout(r, 400));
const agents = await httpRequest('GET', `${MCPD_URL}/api/v1/agents`, undefined);
expect(agents.status).toBe(200);
const agentRow = (JSON.parse(agents.body) as AgentRow[]).find((r) => r.name === AGENT_NAME);
expect(agentRow, `${AGENT_NAME} must still exist (deletion is GC-driven, not disconnect-driven)`).toBeDefined();
expect(agentRow!.status).toBe('inactive');
const llms = await httpRequest('GET', `${MCPD_URL}/api/v1/llms`, undefined);
const llmRow = (JSON.parse(llms.body) as Array<{ name: string; status: string }>).find((r) => r.name === PROVIDER_NAME);
expect(llmRow!.status).toBe('inactive');
}, 30_000);
});