Compare commits


2 Commits

Author SHA1 Message Date
Michal
23f53a0798 feat(mcpd): inference proxy — POST /api/v1/llms/:name/infer
Why: the point of the Llm resource (Phase 1) is that credentials never leave
the server. This lands the proxy: clients POST OpenAI chat/completions to
mcpd, mcpd attaches the provider API key server-side, and the response
streams back as OpenAI-format SSE.

Design:
- Wire format client-side is always OpenAI chat/completions — every existing
  SDK speaks it. Adapters translate on the provider side.
- `openai | vllm | deepseek | ollama` → pure passthrough (they already speak
  OpenAI). `anthropic` → translator to/from Anthropic Messages API
  (system-string extraction, content-block flattening, SSE event remap).
- Plain fetch; no @anthropic-ai/sdk dep. Consistent with the OpenBao driver
  shape and keeps the proxy layer thin.
- `gemini-cli` intentionally rejected — subprocess providers need extra
  lifecycle plumbing; deferred to a follow-up.
- Streaming: adapters yield `StreamingChunk`s; the route frames them as
  `data: <json>\n\n` + terminal `data: [DONE]\n\n` so any OpenAI client
  works unchanged.
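
A minimal client-side sketch of that round trip (the daemon URL, bearer token
and llm name are hypothetical; the wire shapes are the ones this commit
implements):

  // No provider key appears client-side; mcpd attaches it server-side.
  const res = await fetch(`${MCPD_URL}/api/v1/llms/claude-fast/infer`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${TOKEN}` },
    body: JSON.stringify({
      model: '',            // empty → mcpd substitutes the Llm row's model
      stream: true,
      messages: [{ role: 'user', content: 'hello' }],
    }),
  });
  // Response body is standard OpenAI SSE: one `data: <chat.completion.chunk
  // json>` per chunk, then a terminal `data: [DONE]`.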

RBAC:
- New URL special-case in mapUrlToPermission: `POST /api/v1/llms/:name/infer`
  → `run:llms:<name>`, not the default create:llms (see the sketch after this
  list). Users need an explicit `{role: 'run', resource: 'llms', [name: X]}`
  binding to call infer.
- Possession of `edit:llms` does NOT imply `run` — keeps catalogue
  management separate from spend.
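
A sketch of how the special case resolves versus the default mapping (the llm
name is hypothetical):

  mapUrlToPermission('POST', '/api/v1/llms/claude-fast/infer')
  // → { kind: 'resource', resource: 'llms', action: 'run', resourceName: 'claude-fast' }
  mapUrlToPermission('POST', '/api/v1/llms')
  // → default resource mapping, create:llms (catalogue write, not spend)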

Audit: route emits an `llm_inference_call` event per request (llm name,
model, user/tokenSha, streaming, duration, status). main.ts wires it to the
structured logger for now; hook is in place for a richer audit sink later.
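
A representative event as the logger receives it (field values illustrative):

  {
    event: 'llm_inference_call',
    llm: 'claude-fast', model: 'claude-3-5-sonnet-20241022', type: 'anthropic',
    userId: 'u-123', tokenSha: undefined,
    streaming: true, durationMs: 1843, status: 200,
  }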

Tests:
- 11 adapter tests (passthrough POST shape + default URLs + no-auth ollama +
  SSE forwarding; anthropic translate request/response + non-2xx wrap + SSE
  event translation; registry dispatch + caching + unsupported-provider).
- 7 route tests (404, 400, non-streaming dispatch + audit, apiKey failure,
  null apiKeyRef path, streaming SSE output, 502 on adapter error).
- Full suite 1830/1830 (+18 from Phase 1's 1812). TypeScript clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 22:43:55 +01:00
Michal
6ff90a8228 feat(mcpd): Llm resource — CRUD + CLI + apply
Why: every client that wants an LLM (the agent, HTTP-mode mcplocal, Claude
Code's STDIO mcplocal) today has to know the provider URL + key, and each
user's ~/.mcpctl/config.json carries them. Centralising the catalogue on the
server is the prerequisite for Phase 2 (mcpd proxies inference so credentials
never leave the cluster).

This phase adds the `Llm` resource and its CRUD surface — no proxy yet, no
client pivot yet. Just enough to register what you have.

Schema:
- New `Llm` model: name/type/model/url/tier/description + {apiKeySecretId,
  apiKeySecretKey} FK pair. Reverse `llms` relation on Secret.
- Provider types: anthropic | openai | deepseek | vllm | ollama | gemini-cli.
- Tiers: fast | heavy.

mcpd:
- LlmRepository + LlmService + Zod validation schema + /api/v1/llms routes.
- API surface exposes `apiKeyRef: {name, key}` — the service translates to/from
  the FK pair so clients never deal in cuids (example after this list).
- `resolveApiKey(llmName)` reads through SecretService (which itself dispatches
  to the right SecretBackend). That's the hook Phase 2's inference proxy uses.
- RBAC: added `'llms'` to RBAC_RESOURCES + resource alias. Standard
  view/create/edit/delete semantics.
- Wired into main.ts (repo, service, routes).
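
What a row looks like through the API (values illustrative); clients only ever
see the `apiKeyRef` name/key pair, never the secret value or the FK columns:

  {
    id: 'cl…', name: 'claude-fast', type: 'anthropic',
    model: 'claude-3-5-sonnet-20241022', url: '', tier: 'fast', description: '',
    apiKeyRef: { name: 'anthropic-key', key: 'token' },
    extraConfig: {}, version: 1, createdAt: '…', updatedAt: '…',
  }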

CLI:
- `mcpctl create llm <name> --type X --model Y --tier fast|heavy --api-key-ref SECRET/KEY [--url ...] [--extra k=v ...]`
- `mcpctl get|describe|delete llm` — standard resource verbs.
- `mcpctl apply -f` with `kind: llm` (single- or multi-doc yaml/json); see the
  sketch after this list. Applied after secrets, before servers — apiKeyRef
  must resolve to an existing Secret.
- Shell completions regenerated.
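
For reference, a minimal spec and the defaults LlmSpecSchema fills in (names
and secret hypothetical):

  LlmSpecSchema.parse({
    name: 'claude-fast',
    type: 'anthropic',
    model: 'claude-3-5-sonnet-20241022',
    apiKeyRef: { name: 'anthropic-key', key: 'token' },
  })
  // → adds defaults: tier 'fast', description '', extraConfig {}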

Tests: 11 service unit tests + 9 route tests (happy path, 404s, 409, validation).
Full suite 1812/1812 (+20 from the 1792 Phase 0 baseline). TypeScript clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:28:43 +01:00
24 changed files with 2088 additions and 12 deletions

View File

@@ -8,8 +8,8 @@ _mcpctl() {
local commands="status login logout config get describe delete logs create edit apply patch backup approve console cache test migrate"
local project_commands="get describe delete logs create edit attach-server detach-server"
local global_opts="-v --version --daemon-url --direct -p --project -h --help"
local resources="servers instances secrets secretbackends templates projects users groups rbac prompts promptrequests serverattachments proxymodels all"
local resource_aliases="servers instances secrets secretbackends templates projects users groups rbac prompts promptrequests serverattachments proxymodels all server srv instance inst secret sec secretbackend sb template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm"
local resources="servers instances secrets secretbackends llms templates projects users groups rbac prompts promptrequests serverattachments proxymodels all"
local resource_aliases="servers instances secrets secretbackends llms templates projects users groups rbac prompts promptrequests serverattachments proxymodels all server srv instance inst secret sec secretbackend sb llm template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm"
# Check if --project/-p was given
local has_project=false
@@ -175,7 +175,7 @@ _mcpctl() {
create)
local create_sub=$(_mcpctl_get_subcmd $subcmd_pos)
if [[ -z "$create_sub" ]]; then
COMPREPLY=($(compgen -W "server secret secretbackend project user group rbac mcptoken prompt serverattachment promptrequest help" -- "$cur"))
COMPREPLY=($(compgen -W "server secret llm secretbackend project user group rbac mcptoken prompt serverattachment promptrequest help" -- "$cur"))
else
case "$create_sub" in
server)
@@ -184,6 +184,9 @@ _mcpctl() {
secret)
COMPREPLY=($(compgen -W "--data --force -h --help" -- "$cur"))
;;
llm)
COMPREPLY=($(compgen -W "--type --model --url --tier --description --api-key-ref --extra --force -h --help" -- "$cur"))
;;
secretbackend)
COMPREPLY=($(compgen -W "--type --description --default --url --namespace --mount --path-prefix --token-secret --config --force -h --help" -- "$cur"))
;;

View File

@@ -31,10 +31,10 @@ function __mcpctl_has_project
end
# Resource type detection
set -l resources servers instances secrets secretbackends templates projects users groups rbac prompts promptrequests serverattachments proxymodels all
set -l resources servers instances secrets secretbackends llms templates projects users groups rbac prompts promptrequests serverattachments proxymodels all
function __mcpctl_needs_resource_type
set -l resource_aliases servers instances secrets secretbackends templates projects users groups rbac prompts promptrequests serverattachments proxymodels all server srv instance inst secret sec secretbackend sb template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm
set -l resource_aliases servers instances secrets secretbackends llms templates projects users groups rbac prompts promptrequests serverattachments proxymodels all server srv instance inst secret sec secretbackend sb llm template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm
set -l tokens (commandline -opc)
set -l found_cmd false
for tok in $tokens
@@ -60,6 +60,7 @@ function __mcpctl_resolve_resource
case instance inst instances; echo instances
case secret sec secrets; echo secrets
case secretbackend sb secretbackends; echo secretbackends
case llm llms; echo llms
case template tpl templates; echo templates
case project proj projects; echo projects
case user users; echo users
@@ -75,7 +76,7 @@ function __mcpctl_resolve_resource
end
function __mcpctl_get_resource_type
set -l resource_aliases servers instances secrets secretbackends templates projects users groups rbac prompts promptrequests serverattachments proxymodels all server srv instance inst secret sec secretbackend sb template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm
set -l resource_aliases servers instances secrets secretbackends llms templates projects users groups rbac prompts promptrequests serverattachments proxymodels all server srv instance inst secret sec secretbackend sb llm template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm
set -l tokens (commandline -opc)
set -l found_cmd false
for tok in $tokens
@@ -224,7 +225,7 @@ complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a describe -d 'Show detailed information about a resource'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a delete -d 'Delete a resource (server, instance, secret, project, user, group, rbac)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a logs -d 'Get logs from an MCP server instance'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a create -d 'Create a resource (server, secret, secretbackend, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a create -d 'Create a resource (server, secret, secretbackend, llm, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a edit -d 'Edit a resource in your default editor (server, project)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a apply -d 'Apply declarative configuration from a YAML or JSON file'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a patch -d 'Patch a resource field (e.g. mcpctl patch project myproj llmProvider=none)'
@@ -240,7 +241,7 @@ complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a describe -d 'Show detailed information about a resource'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a delete -d 'Delete a resource (server, instance, secret, project, user, group, rbac)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a logs -d 'Get logs from an MCP server instance'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a create -d 'Create a resource (server, secret, secretbackend, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a create -d 'Create a resource (server, secret, secretbackend, llm, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a edit -d 'Edit a resource in your default editor (server, project)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a attach-server -d 'Attach a server to a project (requires --project)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a detach-server -d 'Detach a server from a project (requires --project)'
@@ -283,9 +284,10 @@ complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -l stdout
complete -c mcpctl -n "__mcpctl_subcmd_active config impersonate" -l quit -d 'Stop impersonating and return to original identity'
# create subcommands
set -l create_cmds server secret secretbackend project user group rbac mcptoken prompt serverattachment promptrequest
set -l create_cmds server secret llm secretbackend project user group rbac mcptoken prompt serverattachment promptrequest
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a server -d 'Create an MCP server definition'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a secret -d 'Create a secret'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a llm -d 'Register a server-managed LLM (anthropic, openai, vllm, ollama, deepseek, gemini-cli)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a secretbackend -d 'Create a secret backend (plaintext, openbao)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a project -d 'Create a project'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a user -d 'Create a user'
@@ -316,6 +318,16 @@ complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l force -d 'Update
complete -c mcpctl -n "__mcpctl_subcmd_active create secret" -l data -d 'Secret data KEY=value (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secret" -l force -d 'Update if already exists'
# create llm options
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l type -d 'Provider type (anthropic, openai, deepseek, vllm, ollama, gemini-cli)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l model -d 'Model identifier (e.g. claude-3-5-sonnet-20241022)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l url -d 'Endpoint URL (empty = provider default)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l tier -d 'Tier: fast or heavy' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l description -d 'Description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l api-key-ref -d 'API key reference in SECRET/KEY form (e.g. anthropic-key/token)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l extra -d 'Extra config key=value (repeat)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l force -d 'Update if already exists'
# create secretbackend options
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l type -d 'Backend type (plaintext, openbao)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l description -d 'Description' -x

View File

@@ -184,7 +184,7 @@ async function extractTree(): Promise<CmdInfo> {
// ============================================================
const CANONICAL_RESOURCES = [
'servers', 'instances', 'secrets', 'secretbackends', 'templates', 'projects',
'servers', 'instances', 'secrets', 'secretbackends', 'llms', 'templates', 'projects',
'users', 'groups', 'rbac', 'prompts', 'promptrequests',
'serverattachments', 'proxymodels', 'all',
];
@@ -194,6 +194,7 @@ const ALIAS_ENTRIES: [string, string][] = [
['instance', 'instances'], ['inst', 'instances'],
['secret', 'secrets'], ['sec', 'secrets'],
['secretbackend', 'secretbackends'], ['sb', 'secretbackends'],
['llm', 'llms'], ['llms', 'llms'],
['template', 'templates'], ['tpl', 'templates'],
['project', 'projects'], ['proj', 'projects'],
['user', 'users'],

View File

@@ -49,6 +49,20 @@ const SecretBackendSpecSchema = z.object({
config: z.record(z.unknown()).default({}),
});
const LlmSpecSchema = z.object({
name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/),
type: z.enum(['anthropic', 'openai', 'deepseek', 'vllm', 'ollama', 'gemini-cli']),
model: z.string().min(1),
url: z.string().url().optional(),
tier: z.enum(['fast', 'heavy']).default('fast'),
description: z.string().max(500).default(''),
apiKeyRef: z.object({
name: z.string().min(1),
key: z.string().min(1),
}).nullable().optional(),
extraConfig: z.record(z.unknown()).default({}),
});
const TemplateEnvEntrySchema = z.object({
name: z.string().min(1),
description: z.string().optional(),
@@ -152,6 +166,7 @@ const McpTokenSpecSchema = z.object({
const ApplyConfigSchema = z.object({
secretbackends: z.array(SecretBackendSpecSchema).default([]),
secrets: z.array(SecretSpecSchema).default([]),
llms: z.array(LlmSpecSchema).default([]),
servers: z.array(ServerSpecSchema).default([]),
users: z.array(UserSpecSchema).default([]),
groups: z.array(GroupSpecSchema).default([]),
@@ -194,6 +209,7 @@ export function createApplyCommand(deps: ApplyCommandDeps): Command {
log('Dry run - would apply:');
if (config.secretbackends.length > 0) log(` ${config.secretbackends.length} secretbackend(s)`);
if (config.secrets.length > 0) log(` ${config.secrets.length} secret(s)`);
if (config.llms.length > 0) log(` ${config.llms.length} llm(s)`);
if (config.servers.length > 0) log(` ${config.servers.length} server(s)`);
if (config.users.length > 0) log(` ${config.users.length} user(s)`);
if (config.groups.length > 0) log(` ${config.groups.length} group(s)`);
@@ -240,6 +256,7 @@ const KIND_TO_RESOURCE: Record<string, string> = {
serverattachment: 'serverattachments',
mcptoken: 'mcptokens',
secretbackend: 'secretbackends',
llm: 'llms',
};
/**
@@ -376,6 +393,25 @@ async function applyConfig(client: ApiClient, config: ApplyConfig, log: (...args
}
}
// Apply LLMs (after secrets — apiKeyRef resolves to an existing Secret)
for (const llm of config.llms) {
try {
const existing = await cachedFindByName('llms', llm.name);
if (existing) {
// Exclude type on update — type is immutable.
const { name: _n, type: _t, ...updateBody } = llm;
await withRetry(() => client.put(`/api/v1/llms/${existing.id}`, updateBody));
log(`Updated llm: ${llm.name}`);
} else {
await withRetry(() => client.post('/api/v1/llms', llm));
invalidateCache('llms');
log(`Created llm: ${llm.name}`);
}
} catch (err) {
log(`Error applying llm '${llm.name}': ${err instanceof Error ? err.message : err}`);
}
}
// Apply servers
for (const server of config.servers) {
try {

View File

@@ -88,7 +88,7 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
const { client, log } = deps;
const cmd = new Command('create')
.description('Create a resource (server, secret, secretbackend, project, user, group, rbac, serverattachment, prompt)');
.description('Create a resource (server, secret, secretbackend, llm, project, user, group, rbac, serverattachment, prompt)');
// --- create server ---
cmd.command('server')
@@ -252,6 +252,61 @@ export function createCreateCommand(deps: CreateCommandDeps): Command {
}
});
// --- create llm ---
cmd.command('llm')
.description('Register a server-managed LLM (anthropic, openai, vllm, ollama, deepseek, gemini-cli)')
.argument('<name>', 'LLM name (lowercase alphanumeric with hyphens)')
.requiredOption('--type <type>', 'Provider type (anthropic, openai, deepseek, vllm, ollama, gemini-cli)')
.requiredOption('--model <model>', 'Model identifier (e.g. claude-3-5-sonnet-20241022)')
.option('--url <url>', 'Endpoint URL (empty = provider default)')
.option('--tier <tier>', 'Tier: fast or heavy', 'fast')
.option('--description <text>', 'Description')
.option('--api-key-ref <ref>', 'API key reference in SECRET/KEY form (e.g. anthropic-key/token)')
.option('--extra <entry>', 'Extra config key=value (repeat)', collect, [])
.option('--force', 'Update if already exists')
.action(async (name: string, opts) => {
const body: Record<string, unknown> = {
name,
type: opts.type,
model: opts.model,
tier: opts.tier,
};
if (opts.url) body.url = opts.url;
if (opts.description !== undefined) body.description = opts.description;
if (opts.apiKeyRef) {
const slashIdx = (opts.apiKeyRef as string).indexOf('/');
if (slashIdx < 1) throw new Error(`Invalid --api-key-ref '${opts.apiKeyRef as string}'. Expected SECRET_NAME/KEY_NAME`);
body.apiKeyRef = {
name: (opts.apiKeyRef as string).slice(0, slashIdx),
key: (opts.apiKeyRef as string).slice(slashIdx + 1),
};
}
if (opts.extra && (opts.extra as string[]).length > 0) {
const extra: Record<string, unknown> = {};
for (const entry of opts.extra as string[]) {
const eqIdx = entry.indexOf('=');
if (eqIdx === -1) throw new Error(`Invalid --extra '${entry}'. Expected key=value`);
extra[entry.slice(0, eqIdx)] = entry.slice(eqIdx + 1);
}
body.extraConfig = extra;
}
try {
const row = await client.post<{ id: string; name: string }>('/api/v1/llms', body);
log(`llm '${row.name}' created (id: ${row.id})`);
} catch (err) {
if (err instanceof ApiError && err.status === 409 && opts.force) {
const existing = (await client.get<Array<{ id: string; name: string }>>('/api/v1/llms')).find((l) => l.name === name);
if (!existing) throw err;
const { name: _n, type: _t, ...updateBody } = body;
await client.put(`/api/v1/llms/${existing.id}`, updateBody);
log(`llm '${name}' updated (id: ${existing.id})`);
} else {
throw err;
}
}
});
// --- create secretbackend ---
cmd.command('secretbackend')
.alias('sb')

View File

@@ -218,6 +218,49 @@ function formatSecretDetail(secret: Record<string, unknown>, showValues: boolean
return lines.join('\n');
}
function formatLlmDetail(llm: Record<string, unknown>): string {
const lines: string[] = [];
lines.push(`=== LLM: ${llm.name} ===`);
lines.push(`${pad('Name:')}${llm.name}`);
lines.push(`${pad('Type:')}${llm.type}`);
lines.push(`${pad('Model:')}${llm.model}`);
lines.push(`${pad('Tier:')}${llm.tier ?? 'fast'}`);
if (llm.url) lines.push(`${pad('URL:')}${llm.url}`);
if (llm.description) lines.push(`${pad('Description:')}${llm.description}`);
const ref = llm.apiKeyRef as { name: string; key: string } | null | undefined;
lines.push('');
lines.push('API Key:');
if (ref) {
lines.push(` ${pad('Secret:', 12)}${ref.name}`);
lines.push(` ${pad('Key:', 12)}${ref.key}`);
} else {
lines.push(' (none)');
}
const extra = llm.extraConfig as Record<string, unknown> | undefined;
if (extra && Object.keys(extra).length > 0) {
lines.push('');
lines.push('Extra Config:');
const keyW = Math.max(6, ...Object.keys(extra).map((k) => k.length)) + 2;
for (const [k, v] of Object.entries(extra)) {
let display: string;
if (v === null || v === undefined) display = '-';
else if (typeof v === 'object') display = JSON.stringify(v);
else display = String(v);
lines.push(` ${k.padEnd(keyW)}${display}`);
}
}
lines.push('');
lines.push('Metadata:');
lines.push(` ${pad('ID:', 12)}${llm.id}`);
if (llm.createdAt) lines.push(` ${pad('Created:', 12)}${llm.createdAt}`);
if (llm.updatedAt) lines.push(` ${pad('Updated:', 12)}${llm.updatedAt}`);
return lines.join('\n');
}
function formatSecretBackendDetail(backend: Record<string, unknown>): string {
const lines: string[] = [];
lines.push(`=== SecretBackend: ${backend.name} ===`);
@@ -840,6 +883,9 @@ export function createDescribeCommand(deps: DescribeCommandDeps): Command {
case 'secretbackends':
deps.log(formatSecretBackendDetail(item));
break;
case 'llms':
deps.log(formatLlmDetail(item));
break;
case 'projects': {
const projectPrompts = await deps.client
.get<Array<{ name: string; priority: number; linkTarget: string | null }>>(`/api/v1/prompts?projectId=${item.id as string}`)

View File

@@ -119,6 +119,26 @@ const rbacColumns: Column<RbacRow>[] = [
{ header: 'ID', key: 'id' },
];
interface LlmRow {
id: string;
name: string;
type: string;
model: string;
tier: string;
url: string;
description: string;
apiKeyRef: { name: string; key: string } | null;
}
const llmColumns: Column<LlmRow>[] = [
{ header: 'NAME', key: 'name' },
{ header: 'TYPE', key: 'type', width: 12 },
{ header: 'MODEL', key: 'model', width: 28 },
{ header: 'TIER', key: 'tier', width: 8 },
{ header: 'KEY', key: (r) => r.apiKeyRef ? `secret://${r.apiKeyRef.name}/${r.apiKeyRef.key}` : '-', width: 34 },
{ header: 'ID', key: 'id' },
];
interface SecretBackendRow {
id: string;
name: string;
@@ -284,6 +304,8 @@ function getColumnsForResource(resource: string): Column<Record<string, unknown>
return mcpTokenColumns as unknown as Column<Record<string, unknown>>[];
case 'secretbackends':
return secretBackendColumns as unknown as Column<Record<string, unknown>>[];
case 'llms':
return llmColumns as unknown as Column<Record<string, unknown>>[];
default:
return [
{ header: 'ID', key: 'id' as keyof Record<string, unknown> },
@@ -307,6 +329,7 @@ const RESOURCE_KIND: Record<string, string> = {
serverattachments: 'serverattachment',
mcptokens: 'mcptoken',
secretbackends: 'secretbackend',
llms: 'llm',
};
/**

View File

@@ -34,6 +34,8 @@ export const RESOURCE_ALIASES: Record<string, string> = {
secretbackend: 'secretbackends',
secretbackends: 'secretbackends',
sb: 'secretbackends',
llm: 'llms',
llms: 'llms',
all: 'all',
};

View File

@@ -150,11 +150,42 @@ model Secret {
updatedAt DateTime @updatedAt
backend SecretBackend @relation(fields: [backendId], references: [id])
llms Llm[]
@@index([name])
@@index([backendId])
}
// ── LLMs ──
//
// Server-managed LLM providers. Clients (agent, HTTP-mode mcplocal) send
// OpenAI-format requests to `mcpd /api/v1/llms/:name/infer` — mcpd attaches the
// provider API key server-side so credentials never leave the cluster.
// Credentials are stored by reference: `apiKeySecret` points at a Secret, and
// `apiKeySecretKey` names the key within that secret's data.
model Llm {
id String @id @default(cuid())
name String @unique
type String // anthropic | openai | deepseek | vllm | ollama | gemini-cli
model String // e.g. claude-3-5-sonnet-20241022
url String @default("") // endpoint (empty for provider default)
tier String @default("fast") // fast | heavy
description String @default("")
apiKeySecretId String? // FK to Secret
apiKeySecretKey String? // key inside the Secret's data
extraConfig Json @default("{}") // per-type extras
version Int @default(1)
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
apiKeySecret Secret? @relation(fields: [apiKeySecretId], references: [id], onDelete: SetNull)
@@index([name])
@@index([tier])
@@index([apiKeySecretId])
}
// ── Groups ──
model Group {

View File

@@ -26,6 +26,11 @@ import { SecretMigrateService } from './services/secret-migrate.service.js';
import { bootstrapSecretBackends } from './bootstrap/secret-backends.js';
import { registerSecretBackendRoutes } from './routes/secret-backends.js';
import { registerSecretMigrateRoutes } from './routes/secret-migrate.js';
import { LlmRepository } from './repositories/llm.repository.js';
import { LlmService } from './services/llm.service.js';
import { LlmAdapterRegistry } from './services/llm/dispatcher.js';
import { registerLlmRoutes } from './routes/llms.js';
import { registerLlmInferRoutes } from './routes/llm-infer.js';
import { PromptRepository } from './repositories/prompt.repository.js';
import { PromptRequestRepository } from './repositories/prompt-request.repository.js';
import { bootstrapSystemProject } from './bootstrap/system-project.js';
@@ -102,6 +107,12 @@ function mapUrlToPermission(method: string, url: string): PermissionCheck {
// /api/v1/secrets/migrate is a bulk cross-backend operation — treat as op, not a plain secret write.
if (url.startsWith('/api/v1/secrets/migrate')) return { kind: 'operation', operation: 'migrate-secrets' };
// /api/v1/llms/:name/infer → `run:llms:<name>` (not the default create:llms).
const inferMatch = url.match(/^\/api\/v1\/llms\/([^/?]+)\/infer/);
if (inferMatch?.[1]) {
return { kind: 'resource', resource: 'llms', action: 'run', resourceName: inferMatch[1] };
}
const resourceMap: Record<string, string | undefined> = {
'servers': 'servers',
'instances': 'instances',
@@ -117,6 +128,7 @@ function mapUrlToPermission(method: string, url: string): PermissionCheck {
'prompts': 'prompts',
'promptrequests': 'promptrequests',
'mcptokens': 'mcptokens',
'llms': 'llms',
};
const resource = resourceMap[segment];
@@ -271,6 +283,7 @@ async function main(): Promise<void> {
const serverRepo = new McpServerRepository(prisma);
const secretRepo = new SecretRepository(prisma);
const secretBackendRepo = new SecretBackendRepository(prisma);
const llmRepo = new LlmRepository(prisma);
const instanceRepo = new McpInstanceRepository(prisma);
const projectRepo = new ProjectRepository(prisma);
const auditLogRepo = new AuditLogRepository(prisma);
@@ -294,6 +307,7 @@ async function main(): Promise<void> {
projects: projectRepo,
groups: groupRepo,
mcptokens: mcpTokenRepo,
llms: llmRepo,
};
// Migrate legacy 'admin' role → granular roles
@@ -327,6 +341,8 @@ async function main(): Promise<void> {
});
const secretService = new SecretService(secretRepo, secretBackendService);
const secretMigrateService = new SecretMigrateService(secretRepo, secretBackendService);
const llmService = new LlmService(llmRepo, secretService);
const llmAdapters = new LlmAdapterRegistry();
const instanceService = new InstanceService(instanceRepo, serverRepo, orchestrator, secretService);
serverService.setInstanceService(instanceService);
const projectService = new ProjectService(projectRepo, serverRepo);
@@ -467,6 +483,24 @@ async function main(): Promise<void> {
registerSecretRoutes(app, secretService);
registerSecretBackendRoutes(app, secretBackendService);
registerSecretMigrateRoutes(app, secretMigrateService);
registerLlmRoutes(app, llmService);
registerLlmInferRoutes(app, {
llmService,
adapters: llmAdapters,
onInferenceEvent: (event) => {
app.log.info({
event: 'llm_inference_call',
llm: event.llmName,
model: event.model,
type: event.type,
userId: event.userId,
tokenSha: event.tokenSha,
streaming: event.streaming,
durationMs: event.durationMs,
status: event.status,
});
},
});
registerInstanceRoutes(app, instanceService);
registerProjectRoutes(app, projectService);
registerAuditLogRoutes(app, auditLogService);

View File

@@ -0,0 +1,89 @@
import type { PrismaClient, Llm, Prisma } from '@prisma/client';
export interface CreateLlmInput {
name: string;
type: string;
model: string;
url?: string;
tier?: string;
description?: string;
apiKeySecretId?: string | null;
apiKeySecretKey?: string | null;
extraConfig?: Record<string, unknown>;
}
export interface UpdateLlmInput {
model?: string;
url?: string;
tier?: string;
description?: string;
apiKeySecretId?: string | null;
apiKeySecretKey?: string | null;
extraConfig?: Record<string, unknown>;
}
export interface ILlmRepository {
findAll(): Promise<Llm[]>;
findById(id: string): Promise<Llm | null>;
findByName(name: string): Promise<Llm | null>;
findByTier(tier: string): Promise<Llm[]>;
create(data: CreateLlmInput): Promise<Llm>;
update(id: string, data: UpdateLlmInput): Promise<Llm>;
delete(id: string): Promise<void>;
}
export class LlmRepository implements ILlmRepository {
constructor(private readonly prisma: PrismaClient) {}
async findAll(): Promise<Llm[]> {
return this.prisma.llm.findMany({ orderBy: { name: 'asc' } });
}
async findById(id: string): Promise<Llm | null> {
return this.prisma.llm.findUnique({ where: { id } });
}
async findByName(name: string): Promise<Llm | null> {
return this.prisma.llm.findUnique({ where: { name } });
}
async findByTier(tier: string): Promise<Llm[]> {
return this.prisma.llm.findMany({ where: { tier }, orderBy: { name: 'asc' } });
}
async create(data: CreateLlmInput): Promise<Llm> {
return this.prisma.llm.create({
data: {
name: data.name,
type: data.type,
model: data.model,
url: data.url ?? '',
tier: data.tier ?? 'fast',
description: data.description ?? '',
apiKeySecretId: data.apiKeySecretId ?? null,
apiKeySecretKey: data.apiKeySecretKey ?? null,
extraConfig: (data.extraConfig ?? {}) as Prisma.InputJsonValue,
},
});
}
async update(id: string, data: UpdateLlmInput): Promise<Llm> {
const updateData: Prisma.LlmUpdateInput = {};
if (data.model !== undefined) updateData.model = data.model;
if (data.url !== undefined) updateData.url = data.url;
if (data.tier !== undefined) updateData.tier = data.tier;
if (data.description !== undefined) updateData.description = data.description;
if (data.apiKeySecretId !== undefined) {
updateData.apiKeySecret = data.apiKeySecretId === null
? { disconnect: true }
: { connect: { id: data.apiKeySecretId } };
}
if (data.apiKeySecretKey !== undefined) updateData.apiKeySecretKey = data.apiKeySecretKey;
if (data.extraConfig !== undefined) updateData.extraConfig = data.extraConfig as Prisma.InputJsonValue;
return this.prisma.llm.update({ where: { id }, data: updateData });
}
async delete(id: string): Promise<void> {
await this.prisma.llm.delete({ where: { id } });
}
}

View File

@@ -0,0 +1,145 @@
/**
* POST /api/v1/llms/:name/infer
*
* OpenAI-compatible chat completions endpoint. The RBAC check runs in the
* global hook — this URL maps to `run:llms:<name>`, not the default
* `create:llms`. See `main.ts:mapUrlToPermission`.
*
* Non-streaming: resolves the Llm, dispatches to the right provider adapter,
* returns the OpenAI chat.completion JSON.
*
* Streaming (`stream: true`): pipes adapter-emitted chunks back as
* `text/event-stream`. Adapters translate provider-native SSE into OpenAI
* `chat.completion.chunk`s so clients can use any OpenAI SDK unchanged.
*/
import type { FastifyInstance, FastifyReply } from 'fastify';
import type { LlmService } from '../services/llm.service.js';
import type { LlmAdapterRegistry } from '../services/llm/dispatcher.js';
import { NotFoundError } from '../services/mcp-server.service.js';
import type { OpenAiChatRequest, InferContext } from '../services/llm/types.js';
export interface LlmInferDeps {
llmService: LlmService;
adapters: LlmAdapterRegistry;
/** Optional hook to emit audit events — consumer may ignore. */
onInferenceEvent?: (event: InferenceAuditEvent) => void;
}
export interface InferenceAuditEvent {
kind: 'llm_inference_call';
llmName: string;
model: string;
type: string;
userId?: string | undefined;
tokenSha?: string | undefined;
streaming: boolean;
durationMs: number;
status: number;
}
export function registerLlmInferRoutes(
app: FastifyInstance,
deps: LlmInferDeps,
): void {
app.post<{ Params: { name: string }; Body: OpenAiChatRequest }>(
'/api/v1/llms/:name/infer',
async (request, reply) => {
const started = Date.now();
let llm;
try {
llm = await deps.llmService.getByName(request.params.name);
} catch (err) {
if (err instanceof NotFoundError) {
reply.code(404);
return { error: err.message };
}
throw err;
}
const body = (request.body ?? {}) as OpenAiChatRequest;
if (!body.messages || body.messages.length === 0) {
reply.code(400);
return { error: 'messages is required' };
}
// Resolve API key (may be empty string for providers that don't take one).
let apiKey = '';
if (llm.apiKeyRef !== null) {
try {
apiKey = await deps.llmService.resolveApiKey(llm.name);
} catch (err) {
reply.code(500);
return { error: `Failed to resolve API key: ${err instanceof Error ? err.message : String(err)}` };
}
}
const ctx: InferContext = {
body,
modelOverride: llm.model,
apiKey,
url: llm.url,
extraConfig: llm.extraConfig,
};
const adapter = deps.adapters.get(llm.type);
const streaming = body.stream === true;
const audit = (status: number): void => {
if (deps.onInferenceEvent === undefined) return;
deps.onInferenceEvent({
kind: 'llm_inference_call',
llmName: llm.name,
model: llm.model,
type: llm.type,
userId: request.userId,
tokenSha: request.mcpToken?.tokenSha,
streaming,
durationMs: Date.now() - started,
status,
});
};
if (!streaming) {
try {
const result = await adapter.infer(ctx);
reply.code(result.status);
audit(result.status);
return result.body;
} catch (err) {
audit(502);
reply.code(502);
return { error: err instanceof Error ? err.message : String(err) };
}
}
// Streaming path — set SSE headers and pipe chunks.
reply.raw.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
'X-Accel-Buffering': 'no',
});
try {
for await (const chunk of adapter.stream(ctx)) {
writeSseChunk(reply, chunk.data);
if (chunk.done === true) break;
}
audit(200);
} catch (err) {
const payload = JSON.stringify({
error: { message: err instanceof Error ? err.message : String(err) },
});
writeSseChunk(reply, payload);
writeSseChunk(reply, '[DONE]');
audit(502);
} finally {
reply.raw.end();
}
return reply;
},
);
}
function writeSseChunk(reply: FastifyReply, data: string): void {
reply.raw.write(`data: ${data}\n\n`);
}

View File

@@ -0,0 +1,64 @@
import type { FastifyInstance } from 'fastify';
import type { LlmService } from '../services/llm.service.js';
import { NotFoundError, ConflictError } from '../services/mcp-server.service.js';
export function registerLlmRoutes(
app: FastifyInstance,
service: LlmService,
): void {
app.get('/api/v1/llms', async () => {
return service.list();
});
app.get<{ Params: { id: string } }>('/api/v1/llms/:id', async (request, reply) => {
try {
return await service.getById(request.params.id);
} catch (err) {
if (err instanceof NotFoundError) {
reply.code(404);
return { error: err.message };
}
throw err;
}
});
app.post('/api/v1/llms', async (request, reply) => {
try {
const row = await service.create(request.body);
reply.code(201);
return row;
} catch (err) {
if (err instanceof ConflictError) {
reply.code(409);
return { error: err.message };
}
throw err;
}
});
app.put<{ Params: { id: string } }>('/api/v1/llms/:id', async (request, reply) => {
try {
return await service.update(request.params.id, request.body);
} catch (err) {
if (err instanceof NotFoundError) {
reply.code(404);
return { error: err.message };
}
throw err;
}
});
app.delete<{ Params: { id: string } }>('/api/v1/llms/:id', async (request, reply) => {
try {
await service.delete(request.params.id);
reply.code(204);
return null;
} catch (err) {
if (err instanceof NotFoundError) {
reply.code(404);
return { error: err.message };
}
throw err;
}
});
}

View File

@@ -0,0 +1,180 @@
/**
* LlmService — CRUD over `Llm` rows plus credential resolution.
*
* Credentials are stored by reference: the row carries `(apiKeySecretId,
* apiKeySecretKey)`. Callers that need the raw key (the inference proxy, once
* it lands in Phase 2) call `resolveApiKey()`, which reads through the
* SecretService (whose own backend dispatch transparently hits plaintext or
* OpenBao as configured).
*
* The CLI/API accepts `apiKeyRef: { name, key }` — the service translates
* that to the FK pair.
*/
import type { Llm } from '@prisma/client';
import type { ILlmRepository } from '../repositories/llm.repository.js';
import type { SecretService } from './secret.service.js';
import {
CreateLlmSchema,
UpdateLlmSchema,
type CreateLlmInput,
type ApiKeyRef,
} from '../validation/llm.schema.js';
import { NotFoundError, ConflictError } from './mcp-server.service.js';
/** Shape returned by API layer — merges DB row with a human-readable apiKeyRef. */
export interface LlmView {
id: string;
name: string;
type: string;
model: string;
url: string;
tier: string;
description: string;
apiKeyRef: ApiKeyRef | null;
extraConfig: Record<string, unknown>;
version: number;
createdAt: Date;
updatedAt: Date;
}
export class LlmService {
constructor(
private readonly repo: ILlmRepository,
private readonly secrets: SecretService,
) {}
async list(): Promise<LlmView[]> {
const rows = await this.repo.findAll();
return Promise.all(rows.map((r) => this.toView(r)));
}
async getById(id: string): Promise<LlmView> {
const row = await this.repo.findById(id);
if (row === null) throw new NotFoundError(`Llm not found: ${id}`);
return this.toView(row);
}
async getByName(name: string): Promise<LlmView> {
const row = await this.repo.findByName(name);
if (row === null) throw new NotFoundError(`Llm not found: ${name}`);
return this.toView(row);
}
async create(input: unknown): Promise<LlmView> {
const data = CreateLlmSchema.parse(input);
const existing = await this.repo.findByName(data.name);
if (existing !== null) throw new ConflictError(`Llm already exists: ${data.name}`);
const apiKeyFields = await this.resolveApiKeyRefToIds(data.apiKeyRef);
const row = await this.repo.create({
name: data.name,
type: data.type,
model: data.model,
url: data.url ?? '',
tier: data.tier,
description: data.description,
apiKeySecretId: apiKeyFields.id,
apiKeySecretKey: apiKeyFields.key,
extraConfig: data.extraConfig,
});
return this.toView(row);
}
async update(id: string, input: unknown): Promise<LlmView> {
const data = UpdateLlmSchema.parse(input);
await this.getById(id);
const updateFields: Parameters<ILlmRepository['update']>[1] = {};
if (data.model !== undefined) updateFields.model = data.model;
if (data.url !== undefined) updateFields.url = data.url;
if (data.tier !== undefined) updateFields.tier = data.tier;
if (data.description !== undefined) updateFields.description = data.description;
if (data.extraConfig !== undefined) updateFields.extraConfig = data.extraConfig;
// apiKeyRef: null → explicit unlink; object → replace; undefined → leave alone.
if (data.apiKeyRef !== undefined) {
if (data.apiKeyRef === null) {
updateFields.apiKeySecretId = null;
updateFields.apiKeySecretKey = null;
} else {
const resolved = await this.resolveApiKeyRefToIds(data.apiKeyRef);
updateFields.apiKeySecretId = resolved.id;
updateFields.apiKeySecretKey = resolved.key;
}
}
const row = await this.repo.update(id, updateFields);
return this.toView(row);
}
async delete(id: string): Promise<void> {
await this.getById(id);
await this.repo.delete(id);
}
/**
* Return the raw API key string for a given Llm. Called by the inference
* proxy in Phase 2. Throws NotFoundError if the Llm has no apiKeyRef, or the
* referenced secret/key doesn't exist.
*/
async resolveApiKey(llmName: string): Promise<string> {
const row = await this.repo.findByName(llmName);
if (row === null) throw new NotFoundError(`Llm not found: ${llmName}`);
if (row.apiKeySecretId === null || row.apiKeySecretKey === null) {
throw new NotFoundError(`Llm '${llmName}' has no apiKeyRef configured`);
}
const secret = await this.secrets.getById(row.apiKeySecretId);
const data = await this.secrets.resolveData(secret);
const value = data[row.apiKeySecretKey];
if (value === undefined) {
throw new NotFoundError(`Secret '${secret.name}' has no key '${row.apiKeySecretKey}'`);
}
return value;
}
private async resolveApiKeyRefToIds(ref: ApiKeyRef | undefined): Promise<{ id: string | null; key: string | null }> {
if (ref === undefined) return { id: null, key: null };
const secret = await this.secrets.getByName(ref.name);
return { id: secret.id, key: ref.key };
}
private async toView(row: Llm): Promise<LlmView> {
let apiKeyRef: ApiKeyRef | null = null;
if (row.apiKeySecretId !== null && row.apiKeySecretKey !== null) {
const secret = await this.secrets.getById(row.apiKeySecretId).catch(() => null);
if (secret !== null) {
apiKeyRef = { name: secret.name, key: row.apiKeySecretKey };
}
}
return {
id: row.id,
name: row.name,
type: row.type,
model: row.model,
url: row.url,
tier: row.tier,
description: row.description,
apiKeyRef,
extraConfig: row.extraConfig as Record<string, unknown>,
version: row.version,
createdAt: row.createdAt,
updatedAt: row.updatedAt,
};
}
// ── Backup/restore helpers ──
async upsertByName(input: CreateLlmInput): Promise<LlmView> {
const existing = await this.repo.findByName(input.name);
if (existing !== null) {
return this.update(existing.id, input);
}
return this.create(input);
}
async deleteByName(name: string): Promise<void> {
const row = await this.repo.findByName(name);
if (row === null) return;
await this.delete(row.id);
}
}

View File

@@ -0,0 +1,256 @@
/**
* Anthropic adapter — translates between OpenAI chat/completions format and
* the Anthropic Messages API (`POST /v1/messages`).
*
* Key differences we translate:
* - OpenAI `role: 'system'` messages become a top-level `system` string.
* - Anthropic returns `content: [{ type: 'text', text }]` — we join into
* OpenAI's `content: "…"` string.
* - Streaming: Anthropic emits a sequence of
* `message_start / content_block_{start,delta,stop} / message_delta /
* message_stop` events. We translate those to OpenAI
* `chat.completion.chunk` deltas.
*
* This adapter implements the subset needed for plain-text chat — tool-use
* translation is intentionally left out for this phase; agents that need tool
* calling should target an OpenAI-compatible provider until the translator
* covers it.
*/
import type {
LlmAdapter,
InferContext,
NonStreamingResult,
StreamingChunk,
AdapterDeps,
OpenAiMessage,
} from '../types.js';
const DEFAULT_ANTHROPIC_URL = 'https://api.anthropic.com';
const ANTHROPIC_VERSION = '2023-06-01';
interface AnthropicMessageResponse {
id: string;
model: string;
role: 'assistant';
content: Array<{ type: 'text'; text: string } | { type: string; [k: string]: unknown }>;
stop_reason?: string;
usage?: { input_tokens: number; output_tokens: number };
}
export class AnthropicAdapter implements LlmAdapter {
readonly kind = 'anthropic';
private readonly fetchImpl: typeof globalThis.fetch;
constructor(deps: AdapterDeps = {}) {
this.fetchImpl = deps.fetch ?? globalThis.fetch;
}
async infer(ctx: InferContext): Promise<NonStreamingResult> {
const url = (ctx.url !== '' ? ctx.url : DEFAULT_ANTHROPIC_URL).replace(/\/+$/, '');
const body = this.toAnthropicRequest(ctx, false);
const res = await this.fetchImpl(`${url}/v1/messages`, {
method: 'POST',
headers: this.headers(ctx),
body: JSON.stringify(body),
});
if (!res.ok) {
const text = await res.text().catch(() => '');
return {
status: res.status,
body: { error: { message: `anthropic: HTTP ${String(res.status)} ${text}` } },
};
}
const anth = await res.json() as AnthropicMessageResponse;
return { status: 200, body: this.toOpenAiResponse(anth) };
}
async *stream(ctx: InferContext): AsyncGenerator<StreamingChunk> {
const url = (ctx.url !== '' ? ctx.url : DEFAULT_ANTHROPIC_URL).replace(/\/+$/, '');
const body = this.toAnthropicRequest(ctx, true);
const res = await this.fetchImpl(`${url}/v1/messages`, {
method: 'POST',
headers: this.headers(ctx),
body: JSON.stringify(body),
});
if (!res.ok || res.body === null) {
const text = await res.text().catch(() => '');
throw new Error(`anthropic stream: HTTP ${String(res.status)} ${text}`);
}
const id = `chatcmpl-${cryptoNonce()}`;
const model = body.model;
const created = Math.floor(Date.now() / 1000);
// Parse Anthropic SSE. Each event is `event: <name>\ndata: <json>\n\n`.
const decoder = new TextDecoder();
let buf = '';
const reader = res.body.getReader();
let emittedFirst = false;
const baseChunk = (delta: Record<string, unknown>, finishReason?: string): string => {
const chunk = {
id,
object: 'chat.completion.chunk',
created,
model,
choices: [{
index: 0,
delta,
finish_reason: finishReason ?? null,
}],
};
return JSON.stringify(chunk);
};
try {
// eslint-disable-next-line no-constant-condition
while (true) {
const { value, done } = await reader.read();
if (done) break;
buf += decoder.decode(value, { stream: true });
let idx: number;
while ((idx = buf.indexOf('\n\n')) !== -1) {
const rawEvent = buf.slice(0, idx);
buf = buf.slice(idx + 2);
const parsed = parseSseEvent(rawEvent);
if (parsed === null) continue;
const { event, data } = parsed;
if (event === 'content_block_delta') {
const textDelta = (data as { delta?: { type?: string; text?: string } }).delta;
if (textDelta?.type === 'text_delta' && typeof textDelta.text === 'string') {
if (!emittedFirst) {
yield { data: baseChunk({ role: 'assistant', content: '' }) };
emittedFirst = true;
}
yield { data: baseChunk({ content: textDelta.text }) };
}
} else if (event === 'message_delta') {
const stopReason = (data as { delta?: { stop_reason?: string } }).delta?.stop_reason;
if (typeof stopReason === 'string') {
yield { data: baseChunk({}, mapStopReason(stopReason)) };
}
} else if (event === 'message_stop') {
yield { data: '[DONE]', done: true };
return;
} else if (event === 'error') {
throw new Error(`anthropic stream error: ${JSON.stringify(data)}`);
}
}
}
} finally {
reader.releaseLock();
}
// Anthropic closed without message_stop — give consumer a clean end.
yield { data: '[DONE]', done: true };
}
private headers(ctx: InferContext): Record<string, string> {
return {
'Content-Type': 'application/json',
'x-api-key': ctx.apiKey,
'anthropic-version': ANTHROPIC_VERSION,
};
}
/** Translate the OpenAI request to the Anthropic Messages shape. */
private toAnthropicRequest(ctx: InferContext, stream: boolean): {
model: string;
max_tokens: number;
messages: Array<{ role: 'user' | 'assistant'; content: string }>;
system?: string;
stream?: boolean;
temperature?: number;
top_p?: number;
stop_sequences?: string[];
} {
const { body } = ctx;
const systemParts: string[] = [];
const messages: Array<{ role: 'user' | 'assistant'; content: string }> = [];
for (const msg of body.messages) {
const text = normaliseContent(msg);
if (msg.role === 'system') {
systemParts.push(text);
} else if (msg.role === 'user' || msg.role === 'assistant') {
messages.push({ role: msg.role, content: text });
}
// `tool` role messages are dropped — tool translation is out of scope
// for this phase.
}
const out: ReturnType<typeof this.toAnthropicRequest> = {
model: body.model !== '' ? body.model : ctx.modelOverride,
max_tokens: typeof body.max_tokens === 'number' ? body.max_tokens : 1024,
messages,
};
if (systemParts.length > 0) out.system = systemParts.join('\n\n');
if (stream) out.stream = true;
if (typeof body.temperature === 'number') out.temperature = body.temperature;
if (typeof body.top_p === 'number') out.top_p = body.top_p;
if (body.stop !== undefined) {
out.stop_sequences = Array.isArray(body.stop) ? body.stop : [body.stop];
}
return out;
}
private toOpenAiResponse(anth: AnthropicMessageResponse): Record<string, unknown> {
const text = anth.content
.map((c) => (c.type === 'text' && typeof (c as { text?: unknown }).text === 'string'
? (c as { text: string }).text
: ''))
.join('');
return {
id: `chatcmpl-${anth.id}`,
object: 'chat.completion',
created: Math.floor(Date.now() / 1000),
model: anth.model,
choices: [{
index: 0,
message: { role: 'assistant', content: text },
finish_reason: mapStopReason(anth.stop_reason ?? 'end_turn'),
}],
usage: anth.usage ? {
prompt_tokens: anth.usage.input_tokens,
completion_tokens: anth.usage.output_tokens,
total_tokens: anth.usage.input_tokens + anth.usage.output_tokens,
} : undefined,
};
}
}
function normaliseContent(msg: OpenAiMessage): string {
if (typeof msg.content === 'string') return msg.content;
return msg.content
.map((part) => (typeof part.text === 'string' ? part.text : ''))
.join('');
}
function mapStopReason(r: string): string {
// Anthropic → OpenAI finish_reason
if (r === 'end_turn' || r === 'stop_sequence') return 'stop';
if (r === 'max_tokens') return 'length';
if (r === 'tool_use') return 'tool_calls';
return r;
}
function parseSseEvent(raw: string): { event: string; data: unknown } | null {
let event = '';
let dataLine = '';
for (const line of raw.split('\n')) {
if (line.startsWith('event:')) event = line.slice(6).trim();
else if (line.startsWith('data:')) dataLine += line.slice(5).trim();
}
if (dataLine === '') return null;
try {
return { event, data: JSON.parse(dataLine) as unknown };
} catch {
return null;
}
}
function cryptoNonce(): string {
// Not security-sensitive — just a short randomish id.
return Math.random().toString(36).slice(2, 10);
}

View File

@@ -0,0 +1,112 @@
/**
* OpenAI-passthrough adapter.
*
* Covers any provider that already speaks OpenAI chat/completions on the
* wire: `openai`, `vllm`, `deepseek`, `ollama` (with their openai-compatible
* endpoint enabled). The adapter forwards the request body verbatim and
* streams the response straight through — no wire translation.
*
* Defaults when `url` is empty:
* - openai → https://api.openai.com
* - deepseek → https://api.deepseek.com
* - vllm/ollama → must be configured; these have no canonical public URL.
*/
import type { LlmAdapter, InferContext, NonStreamingResult, StreamingChunk, AdapterDeps } from '../types.js';
const DEFAULT_URLS: Record<string, string> = {
openai: 'https://api.openai.com',
deepseek: 'https://api.deepseek.com',
};
export class OpenAiPassthroughAdapter implements LlmAdapter {
readonly kind: string;
private readonly fetchImpl: typeof globalThis.fetch;
constructor(kind: 'openai' | 'vllm' | 'deepseek' | 'ollama', deps: AdapterDeps = {}) {
this.kind = kind;
this.fetchImpl = deps.fetch ?? globalThis.fetch;
}
async infer(ctx: InferContext): Promise<NonStreamingResult> {
const url = this.endpointUrl(ctx.url);
const body = this.prepareBody(ctx, false);
const res = await this.fetchImpl(`${url}/v1/chat/completions`, {
method: 'POST',
headers: this.headers(ctx),
body: JSON.stringify(body),
});
const json = await res.json() as unknown;
return { status: res.status, body: json };
}
async *stream(ctx: InferContext): AsyncGenerator<StreamingChunk> {
const url = this.endpointUrl(ctx.url);
const body = this.prepareBody(ctx, true);
const res = await this.fetchImpl(`${url}/v1/chat/completions`, {
method: 'POST',
headers: this.headers(ctx),
body: JSON.stringify(body),
});
if (!res.ok || res.body === null) {
const text = await res.text().catch(() => '');
throw new Error(`${this.kind} stream: HTTP ${String(res.status)} ${text}`);
}
// Re-frame the provider's SSE stream into our `StreamingChunk` shape.
// OpenAI-compat providers already emit `data: {...}` + `data: [DONE]` —
// we just unwrap the `data: ` prefix, forward payloads, and emit a
// single terminal `done` chunk so the consumer always gets one.
const decoder = new TextDecoder();
let buf = '';
const reader = res.body.getReader();
try {
// eslint-disable-next-line no-constant-condition
while (true) {
const { value, done } = await reader.read();
if (done) break;
buf += decoder.decode(value, { stream: true });
let idx: number;
while ((idx = buf.indexOf('\n\n')) !== -1) {
const event = buf.slice(0, idx);
buf = buf.slice(idx + 2);
for (const line of event.split('\n')) {
if (!line.startsWith('data:')) continue;
const payload = line.slice(5).trim();
if (payload === '') continue;
if (payload === '[DONE]') {
yield { data: '[DONE]', done: true };
return;
}
yield { data: payload };
}
}
}
} finally {
reader.releaseLock();
}
// Provider closed without emitting [DONE] — give the consumer a clean end.
yield { data: '[DONE]', done: true };
}
private endpointUrl(url: string): string {
if (url !== '') return url.replace(/\/+$/, '');
const def = DEFAULT_URLS[this.kind];
if (def === undefined) {
throw new Error(`${this.kind}: url is required (no default endpoint for this provider)`);
}
return def;
}
private headers(ctx: InferContext): Record<string, string> {
const headers: Record<string, string> = { 'Content-Type': 'application/json' };
if (ctx.apiKey !== '') headers['Authorization'] = `Bearer ${ctx.apiKey}`;
return headers;
}
private prepareBody(ctx: InferContext, stream: boolean): Record<string, unknown> {
const out: Record<string, unknown> = { ...ctx.body };
if (out.model === undefined || out.model === '') out.model = ctx.modelOverride;
out.stream = stream;
return out;
}
}

View File

@@ -0,0 +1,52 @@
/**
* Adapter dispatcher for the inference proxy.
*
* `getAdapter(type)` returns the right adapter instance for an Llm's `type`
* column. Adapters are cached per-type — they carry no per-request state.
* The caller (the infer route) supplies the resolved API key + request body
* through `InferContext`, so a single adapter instance serves every Llm of
* that type.
*/
import type { LlmAdapter, AdapterDeps } from './types.js';
import { OpenAiPassthroughAdapter } from './adapters/openai-passthrough.js';
import { AnthropicAdapter } from './adapters/anthropic.js';
export class UnsupportedProviderError extends Error {
constructor(type: string) {
super(`Unsupported LLM provider: ${type}`);
this.name = 'UnsupportedProviderError';
}
}
export class LlmAdapterRegistry {
private readonly cache = new Map<string, LlmAdapter>();
constructor(private readonly deps: AdapterDeps = {}) {}
get(type: string): LlmAdapter {
const cached = this.cache.get(type);
if (cached !== undefined) return cached;
const adapter = this.build(type);
this.cache.set(type, adapter);
return adapter;
}
private build(type: string): LlmAdapter {
switch (type) {
case 'openai':
case 'vllm':
case 'deepseek':
case 'ollama':
return new OpenAiPassthroughAdapter(type, this.deps);
case 'anthropic':
return new AnthropicAdapter(this.deps);
case 'gemini-cli':
// Intentionally deferred — gemini-cli requires the binary on the mcpd
// pod filesystem and subprocess lifecycle management. Flagged as
// homelab-only in the plan; not landing in this phase.
throw new UnsupportedProviderError(`${type} (subprocess providers are not supported in the proxy yet)`);
default:
throw new UnsupportedProviderError(type);
}
}
}

View File

@@ -0,0 +1,70 @@
/**
* Shared types for the LLM inference proxy.
*
* The wire format on the mcpctl side is OpenAI's chat/completions v1 — it's
* the de-facto lingua franca and every client library already speaks it.
* Provider-specific adapters translate to/from that shape.
*/
export interface OpenAiMessage {
role: 'system' | 'user' | 'assistant' | 'tool';
content: string | Array<{ type: string; text?: string; [k: string]: unknown }>;
name?: string;
tool_call_id?: string;
tool_calls?: Array<{ id: string; type: 'function'; function: { name: string; arguments: string } }>;
}
export interface OpenAiChatRequest {
model: string;
messages: OpenAiMessage[];
stream?: boolean;
temperature?: number;
max_tokens?: number;
top_p?: number;
stop?: string | string[];
tools?: Array<{ type: 'function'; function: { name: string; description?: string; parameters?: Record<string, unknown> } }>;
tool_choice?: unknown;
// Passthrough: unknown extras forwarded as-is.
[k: string]: unknown;
}
export interface InferContext {
/** Normalised OpenAI-format body. Adapters read/transform from here. */
body: OpenAiChatRequest;
/** The Llm row's `model` field, used when the request body has an empty model. */
modelOverride: string;
/** The resolved API key, or empty string for providers that don't take one. */
apiKey: string;
/** Target URL from the Llm row (may be empty for provider-default). */
url: string;
/** Arbitrary config from the Llm row (e.g. vllm gpu settings). */
extraConfig: Record<string, unknown>;
}
export interface NonStreamingResult {
status: number;
/** OpenAI chat.completion response body. */
body: unknown;
}
export interface StreamingChunk {
/** Raw SSE data payload. Consumer emits `data: <payload>\n\n`. */
data: string;
/** Mark the end of stream — consumer emits `data: [DONE]\n\n`. */
done?: boolean;
}
export interface LlmAdapter {
readonly kind: string;
/** Non-streaming request. Returns the final chat.completion body. */
infer(ctx: InferContext): Promise<NonStreamingResult>;
/**
* Streaming request. Yields OpenAI-format SSE chunks. Adapters translate
* provider-native stream formats into OpenAI `chat.completion.chunk`s.
*/
stream(ctx: InferContext): AsyncGenerator<StreamingChunk>;
}
export interface AdapterDeps {
fetch?: typeof globalThis.fetch;
}
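
A hedged sketch of the consumer side of the StreamingChunk contract described above, with a `write` callback standing in for the real HTTP response object:

import type { LlmAdapter, InferContext } from './types.js';

// Frame each StreamingChunk as an SSE event: `data: <payload>\n\n`, ending
// with `data: [DONE]\n\n` when a chunk carries `done: true`. Adapters already
// emit a terminal done chunk, so the trailing write is only a safety net in
// this sketch.
async function pipeToSse(
  adapter: LlmAdapter,
  ctx: InferContext,
  write: (frame: string) => void,
): Promise<void> {
  for await (const chunk of adapter.stream(ctx)) {
    if (chunk.done === true) {
      write('data: [DONE]\n\n');
      return;
    }
    write(`data: ${chunk.data}\n\n`);
  }
  write('data: [DONE]\n\n');
}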


@@ -0,0 +1,39 @@
import { z } from 'zod';
export const LLM_TYPES = ['anthropic', 'openai', 'deepseek', 'vllm', 'ollama', 'gemini-cli'] as const;
export const LLM_TIERS = ['fast', 'heavy'] as const;
/**
* Reference to a key inside a Secret. `name` is the Secret resource name;
* `key` is the JSON key inside that secret's `data` map. mcpd resolves the
* pair through SecretService at inference time, so credentials never leave
* the server.
*/
export const ApiKeyRefSchema = z.object({
name: z.string().min(1),
key: z.string().min(1),
});
export const CreateLlmSchema = z.object({
name: z.string().min(1).max(100).regex(/^[a-z0-9-]+$/, 'Name must be lowercase alphanumeric with hyphens'),
type: z.enum(LLM_TYPES),
model: z.string().min(1),
url: z.string().url().optional(),
tier: z.enum(LLM_TIERS).default('fast'),
description: z.string().max(500).default(''),
apiKeyRef: ApiKeyRefSchema.optional(),
extraConfig: z.record(z.unknown()).default({}),
});
export const UpdateLlmSchema = z.object({
model: z.string().min(1).optional(),
url: z.string().url().or(z.literal('')).optional(),
tier: z.enum(LLM_TIERS).optional(),
description: z.string().max(500).optional(),
apiKeyRef: ApiKeyRefSchema.nullable().optional(),
extraConfig: z.record(z.unknown()).optional(),
});
export type CreateLlmInput = z.infer<typeof CreateLlmSchema>;
export type UpdateLlmInput = z.infer<typeof UpdateLlmSchema>;
export type ApiKeyRef = z.infer<typeof ApiKeyRefSchema>;
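
As a quick illustration, a payload shaped like the test fixtures further down parses cleanly through CreateLlmSchema, with the remaining fields filled from the schema defaults:

// Illustrative input; the values mirror the test fixtures rather than any
// real deployment.
const parsed = CreateLlmSchema.parse({
  name: 'claude',
  type: 'anthropic',
  model: 'claude-3-5-sonnet-20241022',
  tier: 'heavy',
  apiKeyRef: { name: 'anthropic-key', key: 'token' },
});
// parsed.description === '' and parsed.extraConfig deep-equals {} via the defaults;
// a type outside LLM_TYPES or a tier outside LLM_TIERS would throw a ZodError.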


@@ -1,7 +1,7 @@
import { z } from 'zod';
export const RBAC_ROLES = ['edit', 'view', 'create', 'delete', 'run', 'expose'] as const;
-export const RBAC_RESOURCES = ['*', 'servers', 'instances', 'secrets', 'secretbackends', 'projects', 'templates', 'users', 'groups', 'rbac', 'prompts', 'promptrequests', 'mcptokens'] as const;
+export const RBAC_RESOURCES = ['*', 'servers', 'instances', 'secrets', 'secretbackends', 'llms', 'projects', 'templates', 'users', 'groups', 'rbac', 'prompts', 'promptrequests', 'mcptokens'] as const;
/** Singular→plural map for resource names. */
const RESOURCE_ALIASES: Record<string, string> = {
@@ -16,6 +16,7 @@ const RESOURCE_ALIASES: Record<string, string> = {
promptrequest: 'promptrequests',
mcptoken: 'mcptokens',
secretbackend: 'secretbackends',
+llm: 'llms',
};
/** Normalize a resource name to its canonical plural form. */


@@ -0,0 +1,210 @@
import { describe, it, expect, vi } from 'vitest';
import { OpenAiPassthroughAdapter } from '../src/services/llm/adapters/openai-passthrough.js';
import { AnthropicAdapter } from '../src/services/llm/adapters/anthropic.js';
import { LlmAdapterRegistry, UnsupportedProviderError } from '../src/services/llm/dispatcher.js';
import type { InferContext } from '../src/services/llm/types.js';
function mockFetch(responses: Array<{ match: RegExp; status: number; body?: unknown; text?: string }>): ReturnType<typeof vi.fn> {
return vi.fn(async (input: string | URL, _init?: RequestInit) => {
const url = String(input);
const match = responses.find((r) => r.match.test(url));
if (!match) throw new Error(`unexpected fetch: ${url}`);
const body = match.body !== undefined ? JSON.stringify(match.body) : (match.text ?? '');
return new Response(body, { status: match.status, headers: { 'Content-Type': 'application/json' } });
});
}
function makeCtx(overrides: Partial<InferContext> = {}): InferContext {
return {
body: { model: '', messages: [{ role: 'user', content: 'hello' }] },
modelOverride: 'default-model',
apiKey: 'test-key',
url: '',
extraConfig: {},
...overrides,
};
}
// Helper to build a streaming Response from SSE lines.
function sseResponse(events: string[]): Response {
const body = events.join('\n\n') + '\n\n';
const stream = new ReadableStream<Uint8Array>({
start(controller) {
controller.enqueue(new TextEncoder().encode(body));
controller.close();
},
});
return new Response(stream, { status: 200, headers: { 'Content-Type': 'text/event-stream' } });
}
describe('OpenAiPassthroughAdapter', () => {
it('infer: POSTs to <url>/v1/chat/completions with Authorization + body', async () => {
const fetchFn = mockFetch([{
match: /\/v1\/chat\/completions$/,
status: 200,
body: { id: 'x', choices: [{ message: { role: 'assistant', content: 'hi' } }] },
}]);
const adapter = new OpenAiPassthroughAdapter('openai', { fetch: fetchFn as unknown as typeof fetch });
const ctx = makeCtx({ url: 'https://api.example.com' });
const res = await adapter.infer(ctx);
expect(res.status).toBe(200);
const [url, init] = fetchFn.mock.calls[0] as [string, RequestInit];
expect(url).toBe('https://api.example.com/v1/chat/completions');
expect(init.method).toBe('POST');
const headers = init.headers as Record<string, string>;
expect(headers['Authorization']).toBe('Bearer test-key');
const sent = JSON.parse(init.body as string) as { model: string; stream: boolean };
expect(sent.model).toBe('default-model'); // filled from modelOverride
expect(sent.stream).toBe(false);
});
it('infer: uses default URL for openai when url is empty', async () => {
const fetchFn = mockFetch([{ match: /api\.openai\.com/, status: 200, body: {} }]);
const adapter = new OpenAiPassthroughAdapter('openai', { fetch: fetchFn as unknown as typeof fetch });
await adapter.infer(makeCtx());
const [url] = fetchFn.mock.calls[0] as [string, RequestInit];
expect(url).toBe('https://api.openai.com/v1/chat/completions');
});
it('infer: throws for vllm when url is empty (no default)', async () => {
const adapter = new OpenAiPassthroughAdapter('vllm', { fetch: vi.fn() as unknown as typeof fetch });
await expect(adapter.infer(makeCtx())).rejects.toThrow(/no default endpoint/);
});
it('infer: omits Authorization when apiKey is empty', async () => {
const fetchFn = mockFetch([{ match: /ollama/, status: 200, body: {} }]);
const adapter = new OpenAiPassthroughAdapter('ollama', { fetch: fetchFn as unknown as typeof fetch });
await adapter.infer(makeCtx({ url: 'http://ollama:11434', apiKey: '' }));
const [, init] = fetchFn.mock.calls[0] as [string, RequestInit];
const headers = init.headers as Record<string, string>;
expect(headers['Authorization']).toBeUndefined();
});
it('stream: forwards SSE chunks and emits terminal [DONE]', async () => {
const fetchFn = vi.fn(async () => sseResponse([
'data: {"choices":[{"delta":{"content":"hi"}}]}',
'data: {"choices":[{"delta":{"content":"!"}}]}',
'data: [DONE]',
]));
const adapter = new OpenAiPassthroughAdapter('openai', { fetch: fetchFn as unknown as typeof fetch });
const ctx = makeCtx({ url: 'http://example', body: { model: '', messages: [], stream: true } });
const chunks: { data: string; done?: boolean }[] = [];
for await (const c of adapter.stream(ctx)) chunks.push(c);
expect(chunks).toHaveLength(3);
expect(chunks[2]?.done).toBe(true);
});
});
describe('AnthropicAdapter', () => {
it('infer: translates system+user messages, posts to /v1/messages', async () => {
const fetchFn = mockFetch([{
match: /\/v1\/messages$/,
status: 200,
body: {
id: 'msg_01', model: 'claude-3-5-sonnet-20241022', role: 'assistant',
content: [{ type: 'text', text: 'howdy' }],
stop_reason: 'end_turn',
usage: { input_tokens: 5, output_tokens: 2 },
},
}]);
const adapter = new AnthropicAdapter({ fetch: fetchFn as unknown as typeof fetch });
const ctx = makeCtx({
body: {
model: '', messages: [
{ role: 'system', content: 'be nice' },
{ role: 'user', content: 'hi' },
],
},
modelOverride: 'claude-3-5-sonnet-20241022',
});
const res = await adapter.infer(ctx);
expect(res.status).toBe(200);
const [url, init] = fetchFn.mock.calls[0] as [string, RequestInit];
expect(url).toBe('https://api.anthropic.com/v1/messages');
const headers = init.headers as Record<string, string>;
expect(headers['x-api-key']).toBe('test-key');
expect(headers['anthropic-version']).toBeDefined();
const sent = JSON.parse(init.body as string) as {
model: string; system: string; messages: Array<{ role: string; content: string }>; max_tokens: number;
};
expect(sent.model).toBe('claude-3-5-sonnet-20241022');
expect(sent.system).toBe('be nice');
expect(sent.messages).toEqual([{ role: 'user', content: 'hi' }]);
expect(sent.max_tokens).toBe(1024); // default
// Response shape: OpenAI chat.completion
const body = res.body as { choices: Array<{ message: { content: string }; finish_reason: string }>; usage: { total_tokens: number } };
expect(body.choices[0]!.message.content).toBe('howdy');
expect(body.choices[0]!.finish_reason).toBe('stop');
expect(body.usage.total_tokens).toBe(7);
});
it('infer: returns a synthetic error body on non-2xx', async () => {
const fetchFn = vi.fn(async () => new Response('boom', { status: 500 }));
const adapter = new AnthropicAdapter({ fetch: fetchFn as unknown as typeof fetch });
const res = await adapter.infer(makeCtx({ body: { model: '', messages: [{ role: 'user', content: 'x' }] } }));
expect(res.status).toBe(500);
const body = res.body as { error: { message: string } };
expect(body.error.message).toMatch(/HTTP 500/);
});
it('stream: translates anthropic event stream into OpenAI chunks', async () => {
const events = [
'event: message_start\ndata: {"type":"message_start","message":{"id":"m","content":[]}}',
'event: content_block_delta\ndata: {"type":"content_block_delta","delta":{"type":"text_delta","text":"he"}}',
'event: content_block_delta\ndata: {"type":"content_block_delta","delta":{"type":"text_delta","text":"llo"}}',
'event: message_delta\ndata: {"type":"message_delta","delta":{"stop_reason":"end_turn"}}',
'event: message_stop\ndata: {"type":"message_stop"}',
];
const fetchFn = vi.fn(async () => sseResponse(events));
const adapter = new AnthropicAdapter({ fetch: fetchFn as unknown as typeof fetch });
const ctx = makeCtx({ body: { model: '', messages: [{ role: 'user', content: 'hi' }], stream: true } });
const chunks: { data: string; done?: boolean }[] = [];
for await (const c of adapter.stream(ctx)) chunks.push(c);
// Expect: role-prime, two text deltas, finish-reason, [DONE]
expect(chunks[chunks.length - 1]?.data).toBe('[DONE]');
expect(chunks[chunks.length - 1]?.done).toBe(true);
// First chunk is the role-prime (role: assistant, content: '').
const first = JSON.parse(chunks[0]!.data) as { choices: [{ delta: { role: string; content: string } }] };
expect(first.choices[0]!.delta.role).toBe('assistant');
// Next two chunks carry the text.
const d1 = JSON.parse(chunks[1]!.data) as { choices: [{ delta: { content: string } }] };
const d2 = JSON.parse(chunks[2]!.data) as { choices: [{ delta: { content: string } }] };
expect(d1.choices[0]!.delta.content).toBe('he');
expect(d2.choices[0]!.delta.content).toBe('llo');
// Finish-reason chunk.
const stopped = JSON.parse(chunks[3]!.data) as { choices: [{ finish_reason: string }] };
expect(stopped.choices[0]!.finish_reason).toBe('stop');
});
});
describe('LlmAdapterRegistry', () => {
it('returns the right adapter kind for each type', () => {
const reg = new LlmAdapterRegistry();
expect(reg.get('openai').kind).toBe('openai');
expect(reg.get('vllm').kind).toBe('vllm');
expect(reg.get('deepseek').kind).toBe('deepseek');
expect(reg.get('ollama').kind).toBe('ollama');
expect(reg.get('anthropic').kind).toBe('anthropic');
});
it('caches adapters between calls', () => {
const reg = new LlmAdapterRegistry();
const a = reg.get('openai');
const b = reg.get('openai');
expect(a).toBe(b);
});
it('rejects unsupported providers (gemini-cli is deferred)', () => {
const reg = new LlmAdapterRegistry();
expect(() => reg.get('gemini-cli')).toThrow(UnsupportedProviderError);
expect(() => reg.get('bogus')).toThrow(UnsupportedProviderError);
});
});


@@ -0,0 +1,208 @@
import { describe, it, expect, vi, afterEach } from 'vitest';
import Fastify from 'fastify';
import type { FastifyInstance } from 'fastify';
import { registerLlmInferRoutes } from '../src/routes/llm-infer.js';
import { LlmAdapterRegistry } from '../src/services/llm/dispatcher.js';
import { errorHandler } from '../src/middleware/error-handler.js';
import type { LlmView } from '../src/services/llm.service.js';
import { NotFoundError } from '../src/services/mcp-server.service.js';
let app: FastifyInstance;
function makeLlmView(overrides: Partial<LlmView> = {}): LlmView {
return {
id: 'llm-1',
name: 'claude',
type: 'anthropic',
model: 'claude-3-5-sonnet-20241022',
url: '',
tier: 'heavy',
description: '',
apiKeyRef: { name: 'anthropic-key', key: 'token' },
extraConfig: {},
version: 1,
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
};
}
afterEach(async () => {
if (app) await app.close();
});
function sseResponse(events: string[]): Response {
const body = events.join('\n\n') + '\n\n';
const stream = new ReadableStream<Uint8Array>({
start(controller) {
controller.enqueue(new TextEncoder().encode(body));
controller.close();
},
});
return new Response(stream, { status: 200 });
}
interface LlmServiceLike {
getByName: (name: string) => Promise<LlmView>;
resolveApiKey: (name: string) => Promise<string>;
}
async function setupApp(
llmService: LlmServiceLike,
adapters: LlmAdapterRegistry,
onInferenceEvent?: Parameters<typeof registerLlmInferRoutes>[1]['onInferenceEvent'],
): Promise<FastifyInstance> {
app = Fastify({ logger: false });
app.setErrorHandler(errorHandler);
const deps: Parameters<typeof registerLlmInferRoutes>[1] = {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
llmService: llmService as any,
adapters,
};
if (onInferenceEvent !== undefined) deps.onInferenceEvent = onInferenceEvent;
registerLlmInferRoutes(app, deps);
await app.ready();
return app;
}
describe('POST /api/v1/llms/:name/infer', () => {
it('returns 404 when the Llm does not exist', async () => {
const svc: LlmServiceLike = {
getByName: async () => { throw new NotFoundError('Llm not found: missing'); },
resolveApiKey: async () => '',
};
await setupApp(svc, new LlmAdapterRegistry());
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms/missing/infer',
payload: { messages: [{ role: 'user', content: 'hi' }] },
});
expect(res.statusCode).toBe(404);
});
it('returns 400 when messages is missing', async () => {
const svc: LlmServiceLike = {
getByName: async () => makeLlmView({ apiKeyRef: null }),
resolveApiKey: async () => '',
};
await setupApp(svc, new LlmAdapterRegistry());
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms/claude/infer',
payload: {},
});
expect(res.statusCode).toBe(400);
});
it('dispatches non-streaming to the adapter and returns its JSON', async () => {
const fetchFn = vi.fn(async () => new Response(JSON.stringify({
id: 'msg_1', model: 'claude-3-5-sonnet-20241022', role: 'assistant',
content: [{ type: 'text', text: 'hello' }],
stop_reason: 'end_turn',
usage: { input_tokens: 1, output_tokens: 1 },
}), { status: 200 }));
const adapters = new LlmAdapterRegistry({ fetch: fetchFn as unknown as typeof fetch });
const svc: LlmServiceLike = {
getByName: async () => makeLlmView(),
resolveApiKey: async () => 'sk-ant-xyz',
};
const events: unknown[] = [];
await setupApp(svc, adapters, (e) => events.push(e));
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms/claude/infer',
payload: { messages: [{ role: 'user', content: 'hi' }] },
});
expect(res.statusCode).toBe(200);
const body = res.json<{ choices: Array<{ message: { content: string } }> }>();
expect(body.choices[0]!.message.content).toBe('hello');
// Audit event emitted
expect(events).toHaveLength(1);
expect((events[0] as { kind: string; llmName: string; status: number }).kind).toBe('llm_inference_call');
expect((events[0] as { llmName: string }).llmName).toBe('claude');
expect((events[0] as { streaming: boolean }).streaming).toBe(false);
expect((events[0] as { status: number }).status).toBe(200);
});
it('500s when apiKey resolution fails', async () => {
const adapters = new LlmAdapterRegistry();
const svc: LlmServiceLike = {
getByName: async () => makeLlmView(),
resolveApiKey: async () => { throw new Error('secret not found'); },
};
await setupApp(svc, adapters);
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms/claude/infer',
payload: { messages: [{ role: 'user', content: 'hi' }] },
});
expect(res.statusCode).toBe(500);
expect(res.json<{ error: string }>().error).toMatch(/secret not found/);
});
it('skips apiKey resolution when the Llm has no apiKeyRef', async () => {
const fetchFn = vi.fn(async () => new Response(JSON.stringify({ id: 'x', choices: [] }), { status: 200 }));
const adapters = new LlmAdapterRegistry({ fetch: fetchFn as unknown as typeof fetch });
const resolveSpy = vi.fn();
const svc: LlmServiceLike = {
getByName: async () => makeLlmView({ type: 'ollama', url: 'http://ollama:11434', apiKeyRef: null }),
resolveApiKey: resolveSpy as unknown as LlmServiceLike['resolveApiKey'],
};
await setupApp(svc, adapters);
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms/ollama-local/infer',
payload: { messages: [{ role: 'user', content: 'hi' }] },
});
expect(res.statusCode).toBe(200);
expect(resolveSpy).not.toHaveBeenCalled();
});
it('streams SSE chunks for stream: true', async () => {
const fetchFn = vi.fn(async () => sseResponse([
'event: content_block_delta\ndata: {"type":"content_block_delta","delta":{"type":"text_delta","text":"hi"}}',
'event: message_stop\ndata: {"type":"message_stop"}',
]));
const adapters = new LlmAdapterRegistry({ fetch: fetchFn as unknown as typeof fetch });
const svc: LlmServiceLike = {
getByName: async () => makeLlmView(),
resolveApiKey: async () => 'sk-ant-xyz',
};
const events: Array<{ streaming: boolean; status: number }> = [];
// eslint-disable-next-line @typescript-eslint/no-explicit-any
await setupApp(svc, adapters, ((e: any) => events.push(e)) as any);
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms/claude/infer',
payload: { messages: [{ role: 'user', content: 'hi' }], stream: true },
});
expect(res.statusCode).toBe(200);
expect(res.body).toContain('data:');
expect(res.body).toContain('[DONE]');
expect(events).toHaveLength(1);
expect(events[0]!.streaming).toBe(true);
});
it('502s on adapter errors (non-streaming)', async () => {
const fetchFn = vi.fn(async () => { throw new Error('upstream down'); });
const adapters = new LlmAdapterRegistry({ fetch: fetchFn as unknown as typeof fetch });
const svc: LlmServiceLike = {
getByName: async () => makeLlmView({ type: 'openai', url: 'http://example', apiKeyRef: null }),
resolveApiKey: async () => '',
};
await setupApp(svc, adapters);
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms/x/infer',
payload: { messages: [{ role: 'user', content: 'hi' }] },
});
expect(res.statusCode).toBe(502);
expect(res.json<{ error: string }>().error).toMatch(/upstream down/);
});
});


@@ -0,0 +1,175 @@
import { describe, it, expect, vi, afterEach } from 'vitest';
import Fastify from 'fastify';
import type { FastifyInstance } from 'fastify';
import { registerLlmRoutes } from '../src/routes/llms.js';
import { LlmService } from '../src/services/llm.service.js';
import { errorHandler } from '../src/middleware/error-handler.js';
import type { ILlmRepository } from '../src/repositories/llm.repository.js';
import type { Llm, Secret } from '@prisma/client';
let app: FastifyInstance;
function makeLlm(overrides: Partial<Llm> = {}): Llm {
return {
id: 'llm-1',
name: 'claude',
type: 'anthropic',
model: 'claude-3-5-sonnet-20241022',
url: '',
tier: 'heavy',
description: '',
apiKeySecretId: null,
apiKeySecretKey: null,
extraConfig: {},
version: 1,
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
};
}
function mockRepo(initial: Llm[] = []): ILlmRepository {
const rows = new Map(initial.map((r) => [r.id, r]));
return {
findAll: vi.fn(async () => [...rows.values()]),
findById: vi.fn(async (id: string) => rows.get(id) ?? null),
findByName: vi.fn(async (name: string) => {
for (const r of rows.values()) if (r.name === name) return r;
return null;
}),
findByTier: vi.fn(async () => []),
create: vi.fn(async (data) => {
const row = makeLlm({ id: 'new-id', name: data.name, type: data.type, model: data.model });
rows.set(row.id, row);
return row;
}),
update: vi.fn(async (id, data) => {
const existing = rows.get(id)!;
const next: Llm = {
...existing,
...(data.model !== undefined ? { model: data.model } : {}),
};
rows.set(id, next);
return next;
}),
delete: vi.fn(async (id) => { rows.delete(id); }),
};
}
function mockSecretService() {
const sec: Secret = {
id: 'sec-1', name: 'anthropic-key', backendId: 'b', data: {}, externalRef: '',
version: 1, createdAt: new Date(), updatedAt: new Date(),
};
return {
getById: vi.fn(async (id: string) => {
if (id === sec.id) return sec;
throw new Error('not found');
}),
getByName: vi.fn(async (name: string) => {
if (name === sec.name) return sec;
throw new Error('not found');
}),
resolveData: vi.fn(async () => ({ token: 'sk-ant-xyz' })),
};
}
afterEach(async () => {
if (app) await app.close();
});
async function createApp(repo: ILlmRepository): Promise<FastifyInstance> {
app = Fastify({ logger: false });
app.setErrorHandler(errorHandler);
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const service = new LlmService(repo, mockSecretService() as any);
registerLlmRoutes(app, service);
await app.ready();
return app;
}
describe('Llm Routes', () => {
it('GET /api/v1/llms returns a list', async () => {
await createApp(mockRepo([makeLlm()]));
const res = await app.inject({ method: 'GET', url: '/api/v1/llms' });
expect(res.statusCode).toBe(200);
const body = res.json<Array<{ name: string }>>();
expect(body).toHaveLength(1);
expect(body[0]!.name).toBe('claude');
});
it('GET /api/v1/llms/:id returns 404 when missing', async () => {
await createApp(mockRepo());
const res = await app.inject({ method: 'GET', url: '/api/v1/llms/missing' });
expect(res.statusCode).toBe(404);
});
it('POST /api/v1/llms creates and returns 201', async () => {
await createApp(mockRepo());
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms',
payload: {
name: 'ollama-local',
type: 'ollama',
model: 'llama3',
url: 'http://localhost:11434',
},
});
expect(res.statusCode).toBe(201);
expect(res.json<{ name: string }>().name).toBe('ollama-local');
});
it('POST /api/v1/llms rejects bad input with 400', async () => {
await createApp(mockRepo());
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms',
payload: { name: '', type: 'anthropic', model: 'x' },
});
expect(res.statusCode).toBe(400);
});
it('POST /api/v1/llms returns 409 when name exists', async () => {
await createApp(mockRepo([makeLlm({ name: 'claude' })]));
const res = await app.inject({
method: 'POST',
url: '/api/v1/llms',
payload: { name: 'claude', type: 'anthropic', model: 'x' },
});
expect(res.statusCode).toBe(409);
});
it('PUT /api/v1/llms/:id updates model', async () => {
await createApp(mockRepo([makeLlm({ id: 'llm-1' })]));
const res = await app.inject({
method: 'PUT',
url: '/api/v1/llms/llm-1',
payload: { model: 'claude-3-opus' },
});
expect(res.statusCode).toBe(200);
expect(res.json<{ model: string }>().model).toBe('claude-3-opus');
});
it('PUT /api/v1/llms/:id returns 404 when missing', async () => {
await createApp(mockRepo());
const res = await app.inject({
method: 'PUT',
url: '/api/v1/llms/missing',
payload: { model: 'x' },
});
expect(res.statusCode).toBe(404);
});
it('DELETE /api/v1/llms/:id returns 204', async () => {
await createApp(mockRepo([makeLlm({ id: 'llm-1' })]));
const res = await app.inject({ method: 'DELETE', url: '/api/v1/llms/llm-1' });
expect(res.statusCode).toBe(204);
});
it('DELETE /api/v1/llms/:id returns 404 when missing', async () => {
await createApp(mockRepo());
const res = await app.inject({ method: 'DELETE', url: '/api/v1/llms/missing' });
expect(res.statusCode).toBe(404);
});
});


@@ -0,0 +1,232 @@
import { describe, it, expect, vi } from 'vitest';
import { LlmService } from '../src/services/llm.service.js';
import type { ILlmRepository } from '../src/repositories/llm.repository.js';
import type { Llm, Secret } from '@prisma/client';
function makeLlm(overrides: Partial<Llm> = {}): Llm {
return {
id: 'llm-1',
name: 'claude',
type: 'anthropic',
model: 'claude-3-5-sonnet-20241022',
url: '',
tier: 'heavy',
description: '',
apiKeySecretId: null,
apiKeySecretKey: null,
extraConfig: {},
version: 1,
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
};
}
function makeSecret(overrides: Partial<Secret> = {}): Secret {
return {
id: 'sec-anthropic',
name: 'anthropic-key',
backendId: 'backend-plaintext',
data: {},
externalRef: '',
version: 1,
createdAt: new Date(),
updatedAt: new Date(),
...overrides,
};
}
function mockRepo(initial: Llm[] = []): ILlmRepository {
const rows = new Map<string, Llm>(initial.map((r) => [r.id, r]));
return {
findAll: vi.fn(async () => [...rows.values()]),
findById: vi.fn(async (id: string) => rows.get(id) ?? null),
findByName: vi.fn(async (name: string) => {
for (const r of rows.values()) if (r.name === name) return r;
return null;
}),
findByTier: vi.fn(async (tier: string) => [...rows.values()].filter((r) => r.tier === tier)),
create: vi.fn(async (data) => {
const row = makeLlm({
id: `llm-${String(rows.size + 1)}`,
name: data.name,
type: data.type,
model: data.model,
url: data.url ?? '',
tier: data.tier ?? 'fast',
description: data.description ?? '',
apiKeySecretId: data.apiKeySecretId ?? null,
apiKeySecretKey: data.apiKeySecretKey ?? null,
extraConfig: (data.extraConfig ?? {}) as Llm['extraConfig'],
});
rows.set(row.id, row);
return row;
}),
update: vi.fn(async (id, data) => {
const existing = rows.get(id);
if (!existing) throw new Error('not found');
const next: Llm = {
...existing,
...(data.model !== undefined ? { model: data.model } : {}),
...(data.url !== undefined ? { url: data.url } : {}),
...(data.tier !== undefined ? { tier: data.tier } : {}),
...(data.description !== undefined ? { description: data.description } : {}),
...(data.apiKeySecretId !== undefined ? { apiKeySecretId: data.apiKeySecretId } : {}),
...(data.apiKeySecretKey !== undefined ? { apiKeySecretKey: data.apiKeySecretKey } : {}),
...(data.extraConfig !== undefined ? { extraConfig: data.extraConfig as Llm['extraConfig'] } : {}),
};
rows.set(id, next);
return next;
}),
delete: vi.fn(async (id) => { rows.delete(id); }),
};
}
function mockSecrets(secretByName: Record<string, Secret>, resolved: Record<string, string> = {}): {
getById: ReturnType<typeof vi.fn>;
getByName: ReturnType<typeof vi.fn>;
resolveData: ReturnType<typeof vi.fn>;
} {
return {
getById: vi.fn(async (id: string) => {
for (const s of Object.values(secretByName)) if (s.id === id) return s;
throw new Error(`secret not found: ${id}`);
}),
getByName: vi.fn(async (name: string) => {
const s = secretByName[name];
if (!s) throw new Error(`secret not found: ${name}`);
return s;
}),
resolveData: vi.fn(async () => resolved),
};
}
describe('LlmService', () => {
it('create parses input and resolves apiKeyRef → secret id', async () => {
const repo = mockRepo();
const sec = makeSecret();
const secrets = mockSecrets({ 'anthropic-key': sec });
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, secrets as any);
const view = await svc.create({
name: 'claude',
type: 'anthropic',
model: 'claude-3-5-sonnet-20241022',
tier: 'heavy',
apiKeyRef: { name: 'anthropic-key', key: 'token' },
});
expect(view.name).toBe('claude');
expect(view.apiKeyRef).toEqual({ name: 'anthropic-key', key: 'token' });
expect(secrets.getByName).toHaveBeenCalledWith('anthropic-key');
expect(repo.create).toHaveBeenCalledWith(expect.objectContaining({
apiKeySecretId: sec.id,
apiKeySecretKey: 'token',
}));
});
it('create without apiKeyRef leaves FK columns null', async () => {
const repo = mockRepo();
const secrets = mockSecrets({});
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, secrets as any);
const view = await svc.create({
name: 'ollama-local',
type: 'ollama',
model: 'llama3',
url: 'http://localhost:11434',
tier: 'fast',
});
expect(view.apiKeyRef).toBeNull();
expect(secrets.getByName).not.toHaveBeenCalled();
});
it('create rejects duplicate name', async () => {
const repo = mockRepo([makeLlm({ name: 'claude' })]);
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, mockSecrets({}) as any);
await expect(svc.create({
name: 'claude', type: 'anthropic', model: 'x',
})).rejects.toThrow(/already exists/);
});
it('update with apiKeyRef null unlinks the secret', async () => {
const sec = makeSecret();
const repo = mockRepo([makeLlm({ apiKeySecretId: sec.id, apiKeySecretKey: 'token' })]);
const secrets = mockSecrets({ 'anthropic-key': sec });
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, secrets as any);
await svc.update('llm-1', { apiKeyRef: null });
expect(repo.update).toHaveBeenCalledWith('llm-1', expect.objectContaining({
apiKeySecretId: null,
apiKeySecretKey: null,
}));
});
it('resolveApiKey reads through SecretService', async () => {
const sec = makeSecret();
const repo = mockRepo([makeLlm({ apiKeySecretId: sec.id, apiKeySecretKey: 'token' })]);
const secrets = mockSecrets({ 'anthropic-key': sec }, { token: 'sk-ant-xyz' });
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, secrets as any);
const key = await svc.resolveApiKey('claude');
expect(key).toBe('sk-ant-xyz');
});
it('resolveApiKey throws when Llm has no apiKeyRef', async () => {
const repo = mockRepo([makeLlm()]);
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, mockSecrets({}) as any);
await expect(svc.resolveApiKey('claude')).rejects.toThrow(/no apiKeyRef/);
});
it('resolveApiKey throws when the secret key is missing', async () => {
const sec = makeSecret();
const repo = mockRepo([makeLlm({ apiKeySecretId: sec.id, apiKeySecretKey: 'missing-key' })]);
const secrets = mockSecrets({ 'anthropic-key': sec }, { token: 'x' });
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, secrets as any);
await expect(svc.resolveApiKey('claude')).rejects.toThrow(/no key 'missing-key'/);
});
it('list returns views with apiKeyRef rendered from secret name', async () => {
const sec = makeSecret();
const repo = mockRepo([makeLlm({ apiKeySecretId: sec.id, apiKeySecretKey: 'token' })]);
const secrets = mockSecrets({ 'anthropic-key': sec });
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, secrets as any);
const items = await svc.list();
expect(items).toHaveLength(1);
expect(items[0]!.apiKeyRef).toEqual({ name: 'anthropic-key', key: 'token' });
});
it('delete happy path', async () => {
const repo = mockRepo([makeLlm()]);
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, mockSecrets({}) as any);
await svc.delete('llm-1');
expect(repo.delete).toHaveBeenCalledWith('llm-1');
});
it('validation: rejects invalid type', async () => {
const repo = mockRepo();
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, mockSecrets({}) as any);
await expect(svc.create({ name: 'x', type: 'bogus', model: 'y' })).rejects.toThrow();
});
it('validation: rejects invalid tier', async () => {
const repo = mockRepo();
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const svc = new LlmService(repo, mockSecrets({}) as any);
await expect(svc.create({
name: 'x', type: 'openai', model: 'gpt-4', tier: 'warp-speed',
})).rejects.toThrow();
});
});