Compare commits

6 commits: `feat/skill` ... `fix/mcpd-i`

| Author | SHA1 | Date |
|---|---|---|
|  | 180e50a978 |  |
|  | 7ebc8b22d1 |  |
|  | d60ad52018 |  |
|  | e6cd73543a |  |
|  | 56735a5290 |  |
|  | e8c3803fac |  |
@@ -35,6 +35,9 @@ Key routing rules:
 - `project` — workspace grouping servers, prompts, agents
 - `llm` — server-managed LLM provider (api key + endpoint)
 - `agent` — LLM persona pinned to one Llm; project attach surfaces project Prompts as system context, project MCP servers as tools, and exposes the agent itself as an MCP virtual server (`agent-<name>/chat`). See `docs/agents.md`, `docs/chat.md`.
-- `prompt` / `promptrequest` — curated content / pending proposal
+- `prompt` / `promptrequest` — curated content / legacy pending proposal (use `proposal` for new work).
+- `skill` — Claude Code skill bundle (SKILL.md + files + typed metadata). Materialised onto disk by `mcpctl skills sync`. See `docs/skills.md`.
+- `proposal` — generic pending proposal queue, replaces `promptrequest`. Covers both prompts and skills. See `docs/proposals.md`. Triage via `mcpctl review`.
+- `revision` — append-only audit + diff log shared by prompts and skills. Auto-bumps semver on save. See `docs/revisions.md`.
 - `rbac` — access control bindings
 - `mcptoken` — bearer credentials for HTTP-mode mcplocal
docs/proposals.md (new file, 126 lines)
@@ -0,0 +1,126 @@
# Resource Proposals

A proposal is a pending change to a Prompt or Skill, submitted by
either a Claude Code session (via the `propose_prompt` / `propose_skill`
MCP tools) or a human (via the web UI / CLI). Reviewers triage the
queue and either approve — at which point the proposal becomes a real
prompt or skill — or reject with a note.

This is the path by which Claude **proposes back** to mcpd: things the
session learned that future sessions would benefit from. The
`propose-learnings` global skill (seeded by mcpd at startup) explains
the discipline to Claude.

## Model

`ResourceProposal` shares the schema's discriminator pattern with
`ResourceRevision` — single table, `resourceType` field disambiguates
prompts vs skills.

| Field | Purpose |
|----------------------|--------------------------------------------------------|
| `resourceType` | `'prompt'` \| `'skill'`. |
| `name` | Proposed resource name. |
| `body` | Proposed body (`{ content, priority?, metadata?, … }`). |
| `projectId` / `agentId` | Scope of the proposal (XOR; null/null = global). |
| `createdBySession` | mcplocal session that proposed (when from Claude). |
| `createdByUserId` | User who proposed (when via UI/CLI). |
| `status` | `'pending'` → `'approved'` \| `'rejected'`. |
| `reviewerNote` | Set on approval or rejection. |
| `approvedRevisionId` | Set when approved — points at the resulting revision. |

Two unique constraints — `(resourceType, name, projectId)` and
`(resourceType, name, agentId)` — mirror the Prompt / Skill scoping
rules. The same `?? ''` workaround for nullable-FK lookups applies.
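For orientation, the table above can be condensed into a TypeScript shape. This is an illustrative sketch only — the exact Prisma column types and optionality are assumptions, not the real schema:

```typescript
// Illustrative shape of a ResourceProposal row, derived from the field
// table above. Optionality and exact types are assumed for the sketch.
type ResourceType = 'prompt' | 'skill';
type ProposalStatus = 'pending' | 'approved' | 'rejected';

interface ResourceProposal {
  resourceType: ResourceType;
  name: string;
  body: { content: string; priority?: number; metadata?: unknown };
  projectId: string | null; // XOR with agentId; both null = global scope
  agentId: string | null;
  createdBySession: string | null;
  createdByUserId: string | null;
  status: ProposalStatus;
  reviewerNote: string | null;
  approvedRevisionId: string | null;
}

// A freshly submitted global prompt proposal from a Claude session:
const example: ResourceProposal = {
  resourceType: 'prompt',
  name: 'db-migration-gotchas',
  body: { content: 'Always run migrations with --dry-run first.' },
  projectId: null,
  agentId: null,
  createdBySession: 'sess_123',
  createdByUserId: null,
  status: 'pending',
  reviewerNote: null,
  approvedRevisionId: null,
};
```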
## Reviewer flow

### CLI

```bash
mcpctl review pending                        # list pending
mcpctl review next                           # show oldest pending
mcpctl review show <id>                      # full detail
mcpctl review diff <id>                      # before/after diff
mcpctl review approve <id>                   # POST /proposals/:id/approve
mcpctl review reject <id> --reason "explain" # reject with note
```

### Web UI

`/proposals` shows a Pending / Approved / Rejected tab view; the
sidebar nav badge polls every 30 s and shows the pending count in
amber. Click a row to see the full body, the diff against the current
resource (if any), and approve / reject controls.

### Approval is atomic

Approval runs in a single Prisma transaction:

1. Look up the pending proposal.
2. Dispatch by `resourceType` to the registered handler
   (`PromptService` or `SkillService` registers itself at construction).
3. Handler upserts the underlying resource — creating it if new, or
   updating + auto-bumping patch semver if it exists.
4. Handler records a `ResourceRevision` linking back to the proposal.
5. Proposal status flips to `approved`, `approvedRevisionId` set.

If any step fails, the transaction rolls back and the proposal stays
`pending`. There is no half-approved state.
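The five steps above can be sketched with an in-memory store standing in for the Prisma transaction. All names here (`approveProposal`, the `handlers` registry, the store shape) are hypothetical stand-ins, not the real service API:

```typescript
// In-memory sketch of the atomic approval flow. The "commit" only
// happens if every step before it succeeded; a throw anywhere earlier
// leaves the stored row untouched, mimicking the transaction rollback.
interface Proposal {
  id: string;
  resourceType: 'prompt' | 'skill';
  status: 'pending' | 'approved' | 'rejected';
  approvedRevisionId?: string;
}

type Handler = (p: Proposal) => { revisionId: string };

// Step 2's dispatch table: services register themselves by resourceType.
const handlers = new Map<string, Handler>();
handlers.set('prompt', (p) => ({ revisionId: `rev-for-${p.id}` }));

function approveProposal(store: Map<string, Proposal>, id: string): Proposal {
  const current = store.get(id);                                   // step 1
  if (!current || current.status !== 'pending') throw new Error('not pending');
  const handler = handlers.get(current.resourceType);              // step 2
  if (!handler) throw new Error(`no handler for ${current.resourceType}`);
  const { revisionId } = handler(current);       // steps 3-4: upsert + revision
  const approved: Proposal = {                                     // step 5
    ...current,
    status: 'approved',
    approvedRevisionId: revisionId,
  };
  store.set(id, approved); // "commit": only reached when all steps succeeded
  return approved;
}
```

Any error thrown before the final `store.set` leaves the stored proposal exactly as it was, which is the sketch's analogue of the rollback guarantee.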
## Claude side: `propose_prompt` and `propose_skill`

Both tools are registered by the `gate` plugin in mcplocal. They post
to `/api/v1/proposals` with the appropriate `resourceType`.

The `propose-learnings` global skill (seeded by mcpd) tells Claude
*when* to use them:

- `propose_prompt` for project-specific knowledge — gotchas,
  conventions, hidden constraints. Cheap to add, easy to reject.
- `propose_skill` for cross-cutting knowledge — debugging discipline,
  release hygiene, security review style. Larger blast radius; lean
  toward `propose_prompt` unless you have a clear cross-project reason.

The `gate-encouragement-propose` system prompt (priority 10, sits in
the gating bundle) is the trigger that makes Claude actually consider
proposing. Without that, the tools exist but Claude rarely engages.

## Backwards compat

PR-1 / PR-2 deferred the cutover from the prompt-only `PromptRequest`
table to `ResourceProposal`. Both run side-by-side today:

- mcplocal's `propose_prompt` still POSTs to the legacy
  `/api/v1/projects/:name/promptrequests` URL.
- mcplocal's `propose_skill` (newer) POSTs to `/api/v1/proposals`
  directly.
- The legacy `/api/v1/promptrequests*` routes remain in mcpd.
- `mcpctl approve promptrequest <name>` still works.

A focused follow-up PR will:

1. Migrate existing `PromptRequest` rows into `ResourceProposal`
   (resourceType=prompt).
2. Rename `PromptRequest` to `_PromptRequest_legacy`.
3. Update mcplocal's `propose_prompt` to use `/api/v1/proposals`.
4. Keep the legacy URL as a thin translation shim through one release.
5. Drop `_PromptRequest_legacy` after that.

This stays separate so the cutover is reviewable independently of
the larger Skills + Revisions + Proposals work.

## RBAC

Proposals piggyback on the `prompts` permission for now — anyone with
`view:prompts` can read the queue, anyone with `edit:prompts` can
approve or reject. Splitting out a dedicated `proposals` permission
(or a "reviewer" role) is straightforward if granularity becomes
useful.

## Audit emission

Proposal create / approve / reject events flow through the existing
audit pipeline. Approval events also reference the resulting
revision id, so you can join "proposal approved at T" against
"revision X created at T" without polling.
docs/revisions.md (new file, 130 lines)
@@ -0,0 +1,130 @@
# Resource Revisions

mcpctl keeps an append-only revision log for every Prompt and Skill —
so you can answer "who changed prompt X and when," diff between any
two versions, and restore an earlier state without losing the audit
chain.

## Model

`ResourceRevision` is a single shared table keyed by
`(resourceType, resourceId)` — the type discriminator allows the same
infrastructure to cover both prompts and skills (and any future
resource that wants version history).

| Field | Purpose |
|------------------|----------------------------------------------------------|
| `id` | cuid; the revision's stable identity. |
| `resourceType` | `'prompt'` \| `'skill'`. Validated app-layer. |
| `resourceId` | Soft FK — survives deletion of the underlying resource. |
| `semver` | Author-visible version (X.Y.Z). |
| `contentHash` | sha256 of the canonicalised body. Stable diff key. |
| `body` | Snapshot of the resource at this revision. |
| `authorUserId` | Who made the change (null for system writes). |
| `authorSessionId`| Session that proposed it (when applicable). |
| `note` | Free-text reviewer or author note. |
| `createdAt` | When the revision was recorded. |

The resource row itself (Prompt/Skill) keeps the inline `content` —
revisions are an audit log, not the source of truth. Hot read paths
(the gate plugin, `mcpctl skills sync`, prompt indexing) never need
to consult the revision log.

`Prompt.currentRevisionId` and `Skill.currentRevisionId` are soft
pointers to the latest revision so the UI can answer "which version is
live" in one query.

## Semver semantics

Auto-patch on every successful save where the body changed:

```
0.1.0 → save with content change → 0.1.1
0.1.1 → save with content change → 0.1.2
```

Authors can override:

```bash
mcpctl edit prompt foo --bump minor              # 0.1.x → 0.2.0
mcpctl edit prompt foo --bump major              # 0.x.x → 1.0.0
mcpctl edit prompt foo --semver 1.2.3            # explicit
mcpctl edit prompt foo --note "fixed the gotcha" # adds note to revision
```

Invalid semver values fall back to `0.1.0` rather than throwing —
the revision write is best-effort and we don't want a corrupted
existing semver to break the prompt save.
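That bump-plus-fallback rule fits in a few lines. A minimal sketch (`bumpSemver` is an illustrative name; the real logic lives server-side):

```typescript
// Sketch of the auto-bump rule: patch by default, minor/major on
// request, and any unparsable current value falls back to 0.1.0
// instead of throwing, so a corrupt semver never blocks a save.
type Bump = 'major' | 'minor' | 'patch';

function bumpSemver(current: string, bump: Bump = 'patch'): string {
  const m = /^(\d+)\.(\d+)\.(\d+)$/.exec(current.trim());
  if (!m) return '0.1.0'; // invalid semver: fall back, don't throw
  const [major, minor, patch] = [Number(m[1]), Number(m[2]), Number(m[3])];
  if (bump === 'major') return `${major + 1}.0.0`;
  if (bump === 'minor') return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}
```

So `bumpSemver('0.1.1')` yields `'0.1.2'`, while `bumpSemver('garbage')` quietly yields `'0.1.0'`.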
## contentHash

sha256 of the JSON-canonicalised body (keys sorted at every object
level). Two revisions with the same hash are byte-identical. Used by
`mcpctl skills sync` as the diff key against on-disk state — re-publish
under the same semver still triggers a sync if the contentHash changed.

The server-side hash and the client-side hash are computed from the
same canonical shape, so they match exactly. See
`src/mcpd/src/services/resource-revision.service.ts` for the canonical
JSON encoder.
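Key-sorted canonicalisation plus sha256 can be sketched as follows. This is an illustrative stand-in, not the actual encoder from `resource-revision.service.ts`:

```typescript
import { createHash } from 'node:crypto';

// Recursively rebuild objects with sorted keys so the key order of the
// input can never change the hash. Arrays keep their element order.
function canonicalise(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(canonicalise);
  if (value !== null && typeof value === 'object') {
    const src = value as Record<string, unknown>;
    const out: Record<string, unknown> = {};
    for (const key of Object.keys(src).sort()) {
      out[key] = canonicalise(src[key]);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}

function contentHash(body: unknown): string {
  return createHash('sha256')
    .update(JSON.stringify(canonicalise(body)))
    .digest('hex');
}
```

Two bodies that differ only in key order hash identically, which is exactly the property the sync diff key needs.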
## CLI

### View history

```bash
mcpctl get revisions prompt my-prompt
mcpctl get revisions skill demo-skill
```

### View one

```bash
mcpctl describe revision <id>
```

### Diff

The HTTP API returns a unified-format diff:

```
GET /api/v1/revisions/<id>/diff?against=<other-id|live>
```

The web UI's revision history tab on a Skill detail page renders the
diff inline (color-coded add/remove rows).

### Restore

Restore a prompt or skill to an earlier revision. This writes a *new*
revision whose body is the old one — preserving the audit chain
rather than deleting later revisions.

```bash
mcpctl restore prompt my-prompt --revision <revision-id>
```

The CLI subcommand is wired through to
`POST /api/v1/prompts/:id/restore-revision` (and the symmetric
`/api/v1/skills/:id/restore-revision`).

## RBAC

Revisions piggyback on the underlying resource's RBAC permission. If
you can `view:prompts`, you can read prompt history; if you can
`edit:prompts`, you can restore.

## Audit emission

Each revision write emits a structured audit event captured by the
existing audit-event pipeline. The event includes the revision id,
contentHash, semver, and author/session — sufficient to answer "what
changed" and "who" without joining tables manually.

## Storage size

A revision body is the resource snapshot — for prompts that's a few
KB; for skills with large `files` maps it can be tens of KB. The audit
log grows linearly with edits. v1 has no rotation; if a single resource
sees thousands of revisions per day this will need a retention policy
(out of scope today).
docs/skills.md (new file, 214 lines)
@@ -0,0 +1,214 @@
# Skills

Skills are Claude Code skill bundles distributed by mcpctl. Each skill is a
named bundle of files — at minimum a `SKILL.md` explaining the skill's purpose
and triggers, optionally with auxiliary scripts, templates, or data files. The
mcpctl daemon (mcpd) is the source of truth; `mcpctl skills sync` materialises
the skills onto each dev machine under `~/.claude/skills/<name>/`, where Claude
Code reads them natively.

```
┌─ mcpd (Postgres) ──────────────────────────────┐
│  Skill rows (content + files{} + metadata)     │
└────────────────┬───────────────────────────────┘
                 │ HTTP, hash-pinned diff
                 ▼
┌─ ~/.claude/skills/<name>/ ─────────────────────┐
│  SKILL.md                                      │
│  scripts/setup.sh                              │
│  …                                             │
└────────────────────────────────────────────────┘
```

## Trust model

Skills are added by senior admins together with a security reviewer at
publish time on mcpd. Once content is in mcpd, clients trust what mcpd
serves — no client-side sandboxing, no signature checks, no consent
prompts. The rigor lives on the publishing side (RBAC, audit, the
reviewer queue). See [proposals.md](proposals.md) for the
review→approve flow.

If you're publishing skills to clients you don't trust (e.g. an open-
source distribution), the design is wrong for that — the skill format
itself is fine, but the unguarded client trust assumption isn't.

## Scoping

A skill attaches to one of:

- **Global** — `projectId` and `agentId` both null. Synced onto every dev
  machine when its sync runs (with or without a project context).
- **Project-scoped** — `projectId` set. Synced onto machines whose
  `.mcpctl-project` marker matches.
- **Agent-scoped** — `agentId` set. Surfaced administratively via the
  API; not currently materialised onto disk by `mcpctl skills sync`
  (see "Future" below).

The same `<name>` can exist at multiple scopes simultaneously. The two
unique constraints are `(name, projectId)` and `(name, agentId)`.
## CLI

### Create

```bash
mcpctl create skill <name> \
  [--project <name> | --agent <name>] \
  --content / --content-file <path> \
  [--description "<text>"] \
  [--priority <1-10>] \
  [--semver <X.Y.Z>] \
  [--metadata-file <path>] \
  [--files-dir <path>]
```

`--content-file` provides the `SKILL.md` body. `--metadata-file`
accepts YAML or JSON; see "Metadata" below for the schema. `--files-dir`
walks a directory tree into the `files{}` map (UTF-8 only; non-text
files rejected — extend later if needed).

### Edit

```bash
# Edit content in $EDITOR
mcpctl edit skill <name>

# Edit + bump semver
mcpctl edit skill <name> --bump major|minor|patch --note "<message>"

# Edit + set explicit semver
mcpctl edit skill <name> --semver 1.2.3
```

Each save records a `ResourceRevision` automatically. See
[revisions.md](revisions.md).

### Sync to disk

```bash
# In a project directory (with .mcpctl-project marker):
mcpctl skills sync

# Override project:
mcpctl skills sync --project <name>

# Globals only (no project context, no marker):
cd / && mcpctl skills sync

# Used by the SessionStart hook — fail-open on network errors:
mcpctl skills sync --quiet
```

Useful flags:

| Flag | Purpose |
|---------------------|-----------------------------------------------------------|
| `--dry-run` | Print what would change, don't write anything. |
| `--force` | Overwrite locally-modified skills. |
| `--quiet` | Suppress output unless something changed; fail-open. |
| `--keep-orphans` | Don't remove skills no longer in the server set. |
| `--skip-postinstall`| Reserved for the postInstall executor (deferred). |

## Project setup

`mcpctl config claude --project <name>` does the full pickup chain:

1. Writes `.mcp.json` so Claude Code routes MCP traffic through mcplocal.
2. Writes `.mcpctl-project` (single line, project name) so `skills sync`
   knows which project's skills to pull when run from anywhere under
   that directory.
3. Runs an initial `skills sync` synchronously.
4. Installs a SessionStart hook in `~/.claude/settings.json` that runs
   `mcpctl skills sync --quiet` before every Claude session. Tagged
   with `_mcpctl_managed: true` so subsequent runs find and update it
   instead of duplicating it.

Pass `--skip-skills` to opt out of steps 2–4 (useful in CI).
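For illustration, the managed SessionStart hook entry might look roughly like this. Only the `_mcpctl_managed: true` marker and the command are stated above; the surrounding JSON structure is an assumption about Claude Code's settings shape, not something this document specifies:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "type": "command",
        "command": "mcpctl skills sync --quiet",
        "_mcpctl_managed": true
      }
    ]
  }
}
```

On re-run, `mcpctl config claude` looks for entries carrying the marker and updates them in place rather than appending a duplicate.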
## Metadata

The `metadata` field is a typed JSON blob:

```yaml
hooks:
  PreToolUse:
    - type: command
      command: "echo before-tool"
  PostToolUse:
    - type: command
      command: "echo after-tool"
  SessionStart:
    - type: command
      command: "echo session-started"
mcpServers:
  - name: my-grafana
    fromTemplate: grafana
    project: monitoring
postInstall: scripts/install.sh
preUninstall: scripts/cleanup.sh
postInstallTimeoutSec: 60
```

**v1 sync executes none of these — they're stored verbatim and
materialisation is deferred to a follow-up.** Once enabled:

- `hooks` will be written into `~/.claude/settings.json` with
  `_mcpctl_managed: true` markers (see "Project setup" above for how
  the SessionStart hook works today).
- `mcpServers` will be auto-attached via the mcpd attach API.
- `postInstall` will run as the user with a curated env, hard timeout,
  and an audit event emitted back to mcpd. Hash-pinned: re-syncs of
  unchanged scripts won't re-execute.

## State

`~/.mcpctl/skills-state.json` tracks the last-synced state:

- per-skill: `id`, `semver`, `contentHash` (matches mcpd's hash),
  `installDir`, per-file `sha256` + size, `postInstallHash`,
  `lastSyncedAt`.
- top-level: `lastSync`, `lastSyncProject`, `schemaVersion`.

The state file is written atomically (temp + rename). Per-file SHA-256
detects local edits — sync warns and skips modified files unless you
pass `--force`.

State lives outside `~/.claude/skills/` deliberately so Claude Code
doesn't see our bookkeeping in its tree.
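The per-file check amounts to comparing each recorded hash against what's on disk now. A minimal sketch, assuming a state entry shaped like the list above (`detectLocalEdits` is a hypothetical helper name, not the real sync code):

```typescript
import { createHash } from 'node:crypto';
import { existsSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// Compare each file's recorded sha256 with its current on-disk content.
// Anything that differs (or has vanished) counts as a local edit and
// would be warned about and skipped unless --force is passed.
function detectLocalEdits(
  installDir: string,
  recorded: Record<string, string>, // relPath -> sha256 at last sync
): string[] {
  const modified: string[] = [];
  for (const [rel, expected] of Object.entries(recorded)) {
    const abs = join(installDir, rel);
    if (!existsSync(abs)) {
      modified.push(rel); // deleted locally counts as modified
      continue;
    }
    const actual = createHash('sha256').update(readFileSync(abs)).digest('hex');
    if (actual !== expected) modified.push(rel);
  }
  return modified;
}
```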
## Atomic install

Each skill is staged under `<targetDir>.mcpctl-staging-<pid>/`, then
the existing directory (if any) is renamed to
`<targetDir>.mcpctl-trash-<pid>`, the staging dir is moved into place,
and the trash is rmtree'd. A concurrent reader (Claude Code starting up)
never sees a partial tree.

Symmetric atomic delete for orphan removal: rename to trash, rmtree.
Locally-modified skills are preserved (warned + skipped) unless `--force`.
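The rename dance can be sketched with plain `node:fs` calls. Names are illustrative, error handling and the local-edit guard are elided, and the sketch assumes flat relative paths in `files`:

```typescript
import { existsSync, mkdirSync, renameSync, rmSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

// Sketch of the staging -> rename -> trash sequence. A concurrent
// reader sees either the complete old tree or the complete new tree,
// never a partially written one, because the swap is a single rename.
function installSkillAtomicSketch(
  targetDir: string,
  files: Record<string, string>,
): void {
  const staging = `${targetDir}.mcpctl-staging-${process.pid}`;
  const trash = `${targetDir}.mcpctl-trash-${process.pid}`;
  mkdirSync(staging, { recursive: true });
  for (const [rel, content] of Object.entries(files)) {
    writeFileSync(join(staging, rel), content); // flat paths only in this sketch
  }
  if (existsSync(targetDir)) renameSync(targetDir, trash); // old tree aside
  renameSync(staging, targetDir); // new tree appears in one atomic step
  rmSync(trash, { recursive: true, force: true }); // discard the old tree
}
```

Orphan removal is the same sequence minus the staging half: rename the directory to trash, then remove it.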
## Failure semantics

| Situation | Exit code | Behaviour |
|----------------------------------|-----------|-------------------------------------------------------|
| Network/timeout in `--quiet` | 0 | Skip silently. SessionStart hook never blocks Claude. |
| Auth failure | 1 | "run mcpctl login" message. |
| Disk full / state save failure | 2 | Loud error. |
| Per-skill error | 0 | Logged in result `errors[]`; sync continues. |

The fail-open behaviour in `--quiet` is non-negotiable — a hung mcpd
must never block Claude Code starting up.
## Future

The following are deferred to follow-up PRs:

- `metadata.hooks` materialisation into `~/.claude/settings.json`
- `metadata.mcpServers` auto-attach
- `metadata.postInstall` execution with curated env + audit emission
- Agent-scoped skills synced to disk (would need an agent-identity-on-disk
  concept that doesn't exist yet)
- Bundle backup support for skills (bundle-backup is one path; git-backup
  is the other and is wired today)
- `mcpctl apply -f skill.yaml` declarative skill apply
pnpm-lock.yaml (generated, 836 lines) — file diff suppressed because it is too large.
@@ -16,6 +16,20 @@ import {
|
|||||||
removeSkillAtomic,
|
removeSkillAtomic,
|
||||||
type SkillBody,
|
type SkillBody,
|
||||||
} from '../utils/skills-disk.js';
|
} from '../utils/skills-disk.js';
|
||||||
|
import {
|
||||||
|
runPostInstall,
|
||||||
|
emitPostInstallAudit,
|
||||||
|
hashScript,
|
||||||
|
} from '../utils/postinstall.js';
|
||||||
|
import {
|
||||||
|
applyManagedHooks,
|
||||||
|
removeManagedHooks,
|
||||||
|
type HooksByEvent,
|
||||||
|
} from '../utils/hooks-materialiser.js';
|
||||||
|
import {
|
||||||
|
attachSkillMcpServers,
|
||||||
|
parseMcpServerDeps,
|
||||||
|
} from '../utils/mcpservers-materialiser.js';
|
||||||
import { ApiError } from '../api-client.js';
|
import { ApiError } from '../api-client.js';
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@@ -50,6 +64,19 @@ interface FullSkill {
|
|||||||
agentId: string | null;
|
agentId: string | null;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Shape of `metadata` we care about at sync time. Validated server-side
|
||||||
|
* by SkillMetadataSchema (PR-3); we re-narrow here for the fields the
|
||||||
|
* sync acts on, keeping the rest opaque so future additions don't
|
||||||
|
* require a CLI change.
|
||||||
|
*/
|
||||||
|
interface SyncedSkillMetadata {
|
||||||
|
postInstall?: unknown;
|
||||||
|
postInstallTimeoutSec?: unknown;
|
||||||
|
hooks?: unknown;
|
||||||
|
mcpServers?: unknown;
|
||||||
|
}
|
||||||
|
|
||||||
export interface SyncOpts {
|
export interface SyncOpts {
|
||||||
/** Project name override; otherwise marker walk-up + fall back to globals-only. */
|
/** Project name override; otherwise marker walk-up + fall back to globals-only. */
|
||||||
project?: string;
|
project?: string;
|
||||||
@@ -72,6 +99,10 @@ export interface SyncResult {
|
|||||||
skipped: string[];
|
skipped: string[];
|
||||||
removed: string[];
|
removed: string[];
|
||||||
preserved: string[]; // skills with local edits we left alone
|
preserved: string[]; // skills with local edits we left alone
|
||||||
|
postInstallsRan: string[]; // skills whose postInstall executed in this sync
|
||||||
|
postInstallsSkipped: string[]; // skills with postInstall but unchanged hash → no rerun
|
||||||
|
hooksApplied: string[]; // skills whose hooks were registered/updated in ~/.claude/settings.json
|
||||||
|
mcpServersAttached: string[]; // "<skill>:<server>" tuples that landed in this sync
|
||||||
errors: Array<{ skill: string; error: string }>;
|
errors: Array<{ skill: string; error: string }>;
|
||||||
exitCode: 0 | 1 | 2;
|
exitCode: 0 | 1 | 2;
|
||||||
}
|
}
|
||||||
@@ -95,6 +126,10 @@ export async function runSkillsSync(opts: SyncOpts, deps: SyncDeps): Promise<Syn
|
|||||||
skipped: [],
|
skipped: [],
|
||||||
removed: [],
|
removed: [],
|
||||||
preserved: [],
|
preserved: [],
|
||||||
|
postInstallsRan: [],
|
||||||
|
postInstallsSkipped: [],
|
||||||
|
hooksApplied: [],
|
||||||
|
mcpServersAttached: [],
|
||||||
errors: [],
|
errors: [],
|
||||||
exitCode: 0,
|
exitCode: 0,
|
||||||
};
|
};
|
||||||
@@ -189,6 +224,8 @@ export async function runSkillsSync(opts: SyncOpts, deps: SyncDeps): Promise<Syn
|
|||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
await removeSkillAtomic(prior.installDir);
|
await removeSkillAtomic(prior.installDir);
|
||||||
|
// Drop any hook entries this skill registered.
|
||||||
|
try { await removeManagedHooks(name); } catch { /* best-effort */ }
|
||||||
delete state.skills[name];
|
delete state.skills[name];
|
||||||
result.removed.push(name);
|
result.removed.push(name);
|
||||||
} catch (err: unknown) {
|
} catch (err: unknown) {
|
||||||
@@ -210,18 +247,29 @@ export async function runSkillsSync(opts: SyncOpts, deps: SyncDeps): Promise<Syn
|
|||||||
}
|
}
|
||||||
|
|
||||||
// 8. Summary.
|
// 8. Summary.
|
||||||
if (!opts.quiet || result.errors.length > 0 || result.installed.length > 0 || result.updated.length > 0 || result.removed.length > 0) {
|
const anythingHappened =
|
||||||
|
result.errors.length > 0 ||
|
||||||
|
result.installed.length > 0 ||
|
||||||
|
result.updated.length > 0 ||
|
||||||
|
result.removed.length > 0 ||
|
||||||
|
result.postInstallsRan.length > 0 ||
|
||||||
|
result.hooksApplied.length > 0 ||
|
||||||
|
result.mcpServersAttached.length > 0;
|
||||||
|
if (!opts.quiet || anythingHappened) {
|
||||||
const parts: string[] = [];
|
const parts: string[] = [];
|
||||||
if (result.installed.length) parts.push(`${String(result.installed.length)} installed`);
|
if (result.installed.length) parts.push(`${String(result.installed.length)} installed`);
|
||||||
if (result.updated.length) parts.push(`${String(result.updated.length)} updated`);
|
if (result.updated.length) parts.push(`${String(result.updated.length)} updated`);
|
||||||
if (result.skipped.length) parts.push(`${String(result.skipped.length)} unchanged`);
|
if (result.skipped.length) parts.push(`${String(result.skipped.length)} unchanged`);
|
||||||
if (result.removed.length) parts.push(`${String(result.removed.length)} removed`);
|
if (result.removed.length) parts.push(`${String(result.removed.length)} removed`);
|
||||||
if (result.preserved.length) parts.push(`${String(result.preserved.length)} preserved (modified)`);
|
if (result.preserved.length) parts.push(`${String(result.preserved.length)} preserved (modified)`);
|
||||||
|
if (result.postInstallsRan.length) parts.push(`${String(result.postInstallsRan.length)} postInstall ran`);
|
||||||
|
if (result.hooksApplied.length) parts.push(`${String(result.hooksApplied.length)} hooks applied`);
|
||||||
|
if (result.mcpServersAttached.length) parts.push(`${String(result.mcpServersAttached.length)} mcpServers attached`);
|
||||||
if (result.errors.length) parts.push(`${String(result.errors.length)} errors`);
|
if (result.errors.length) parts.push(`${String(result.errors.length)} errors`);
|
||||||
if (parts.length === 0) parts.push('no changes');
|
if (parts.length === 0) parts.push('no changes');
|
||||||
if (!opts.quiet) {
|
if (!opts.quiet) {
|
||||||
log(`mcpctl skills sync${projectName ? ` (project: ${projectName})` : ' (global only)'}: ${parts.join(', ')}`);
|
log(`mcpctl skills sync${projectName ? ` (project: ${projectName})` : ' (global only)'}: ${parts.join(', ')}`);
|
||||||
} else if (result.installed.length || result.updated.length || result.removed.length || result.errors.length) {
|
} else if (anythingHappened) {
|
||||||
// Quiet mode: only emit a single line if something actually happened.
|
// Quiet mode: only emit a single line if something actually happened.
|
||||||
warn(`mcpctl: ${parts.join(', ')}`);
|
warn(`mcpctl: ${parts.join(', ')}`);
|
||||||
}
|
}
|
||||||
@@ -255,6 +303,112 @@ export async function runSkillsSync(opts: SyncOpts, deps: SyncDeps): Promise<Syn
   };
   const fileStates = await installSkillAtomic(targetDir, body);
 
+  // ── hooks: register metadata.hooks in ~/.claude/settings.json ──
+  // Tagged with _mcpctl_source: <skill-name> so each skill's hooks
+  // can be cleanly added/updated/removed without trampling other
+  // skills or user-added hooks. No-op when the field is absent or
+  // empty.
+  const meta = (full.metadata ?? {}) as SyncedSkillMetadata;
+  if (meta.hooks && typeof meta.hooks === 'object') {
+    try {
+      const hookRes = await applyManagedHooks(v.name, meta.hooks as HooksByEvent);
+      if (hookRes.updated) result.hooksApplied.push(v.name);
+    } catch (err: unknown) {
+      warn(`mcpctl: failed to apply hooks for skill '${v.name}': ${err instanceof Error ? err.message : String(err)}`);
+    }
+  } else if (prior !== undefined) {
+    // Skill no longer declares hooks but used to — clean up.
+    try { await removeManagedHooks(v.name); } catch { /* best-effort */ }
+  }
+
+  // ── mcpServers: auto-attach declared deps to the active project ──
+  // Only meaningful when a project context is active; global skills
+  // can't attach to "no project". v1 doesn't auto-create missing
+  // servers (warn + skip). Idempotent — re-syncing a skill whose
+  // deps are already attached is a no-op.
+  const mcpServerDeps = parseMcpServerDeps(meta.mcpServers);
+  if (mcpServerDeps.length > 0 && projectName) {
+    try {
+      const att = await attachSkillMcpServers(client, projectName, mcpServerDeps, warn);
+      for (const srv of att.attached) {
+        result.mcpServersAttached.push(`${v.name}:${srv}`);
+      }
+      for (const e of att.errors) {
+        result.errors.push({
+          skill: v.name,
+          error: `mcpServers attach '${e.server}': ${e.error}`,
+        });
+      }
+    } catch (err: unknown) {
+      warn(`mcpctl: failed to attach mcpServers for skill '${v.name}': ${err instanceof Error ? err.message : String(err)}`);
+    }
+  } else if (mcpServerDeps.length > 0) {
+    warn(`mcpctl: skill '${v.name}' declares mcpServers but sync is running global-only; skipping attach`);
+  }
+
+  // ── postInstall: run metadata.postInstall when present ──
+  // Hash-pinned: only execute when the script's sha256 differs from
+  // what state recorded. Failures DO NOT update the recorded hash so
+  // the next sync retries. Other skills continue regardless.
+  let postInstallHash: string | null = prior?.postInstallHash ?? null;
+  if (
+    !opts.skipPostInstall &&
+    typeof meta.postInstall === 'string' &&
+    meta.postInstall.length > 0
+  ) {
+    const scriptRel = meta.postInstall;
+    const scriptContent = (full.files ?? {})[scriptRel];
+    if (typeof scriptContent !== 'string') {
+      warn(`mcpctl: skill '${v.name}' postInstall references '${scriptRel}' which is not in files{}; skipping`);
+    } else {
+      const newHash = hashScript(scriptContent);
+      const hashChanged = newHash !== prior?.postInstallHash;
+      if (!hashChanged) {
+        result.postInstallsSkipped.push(v.name);
+        postInstallHash = newHash;
+      } else {
+        try {
+          const timeoutSec = typeof meta.postInstallTimeoutSec === 'number' ? meta.postInstallTimeoutSec : undefined;
+          const piInput = {
+            installDir: targetDir,
+            scriptPath: scriptRel,
+            skillName: v.name,
+            semver: v.semver,
+            projectName: projectName ?? undefined,
+            timeoutSec,
+            logsDir: join(homedir(), '.mcpctl', 'skills', v.name),
+          };
+          const installResult = await runPostInstall(piInput);
+          // Best-effort audit. Don't await; mcpd slowness shouldn't slow sync.
+          void emitPostInstallAudit(client, piInput, installResult, (m) => warn(m));
+
+          if (installResult.timedOut) {
+            result.errors.push({
+              skill: v.name,
+              error: `postInstall timed out after ${String(installResult.durationMs)}ms; rerun next sync`,
+            });
+            // hash NOT updated → retry on next sync
+          } else if (installResult.exitCode !== 0) {
+            const tail = installResult.stderrTail.trim() || installResult.stdoutTail.trim() || `exit ${String(installResult.exitCode)}`;
+            result.errors.push({
+              skill: v.name,
+              error: `postInstall failed (exit ${String(installResult.exitCode)}): ${tail.slice(-200)}`,
+            });
+            // hash NOT updated → retry on next sync
+          } else {
+            postInstallHash = installResult.scriptHash;
+            result.postInstallsRan.push(v.name);
+          }
+        } catch (err: unknown) {
+          result.errors.push({
+            skill: v.name,
+            error: `postInstall error: ${err instanceof Error ? err.message : String(err)}`,
+          });
+        }
+      }
+    }
+  }
+
   const newState: SkillState = {
     id: v.id,
     semver: v.semver,
@@ -262,10 +416,7 @@ export async function runSkillsSync(opts: SyncOpts, deps: SyncDeps): Promise<Syn
     scope: v.scope,
     installDir: targetDir,
     files: fileStates,
-    // Tier-2 fields — postInstall execution is deferred to a follow-up
-    // PR. For now we record the hash so we can detect script changes
-    // when execution lands.
-    postInstallHash: null,
+    postInstallHash,
     lastSyncedAt: new Date().toISOString(),
   };
   state.skills[v.name] = newState;
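The hash-pinned retry rule in the hunk above can be sketched standalone: record a `sha256:`-prefixed digest after a successful run, and re-execute only when the current script's digest differs. The helper names below are illustrative, not the project's actual exports.

```typescript
import { createHash } from 'node:crypto';

// Same 'sha256:'-prefixed format the diff records in postInstallHash.
function hashScript(content: string): string {
  return 'sha256:' + createHash('sha256').update(content, 'utf-8').digest('hex');
}

// Decide whether a post-install script should run again. `priorHash` is
// what the state file recorded after the last SUCCESSFUL run; null means
// it never ran — and a failed run leaves the old hash in place, so the
// comparison naturally retries on the next sync.
function shouldRun(script: string, priorHash: string | null): boolean {
  return hashScript(script) !== priorHash;
}

const v1 = '#!/bin/sh\necho setup\n';
const v2 = '#!/bin/sh\necho setup --fixed\n'; // same skill, re-published script

console.log(shouldRun(v1, null));           // never ran → run
console.log(shouldRun(v1, hashScript(v1))); // unchanged → skip
console.log(shouldRun(v2, hashScript(v1))); // fixed script → run again
```

This is why a timeout or non-zero exit deliberately skips the hash update in the sync loop: keeping the stale hash is what makes the next sync retry.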
180  src/cli/src/utils/hooks-materialiser.ts  Normal file
@@ -0,0 +1,180 @@
/**
 * Materialise skill-declared hooks into Claude Code's
 * `~/.claude/settings.json`.
 *
 * Each entry we write carries two markers:
 *   `_mcpctl_managed: true` — same flag the SessionStart-hook
 *   installer uses; identifies an entry mcpctl owns.
 *   `_mcpctl_source: "<skill-name>"` — which skill installed it.
 *
 * The combination lets us cleanly add/update/remove skill hooks without
 * clobbering hooks the user added by hand and without one skill trampling
 * another. Removing skill X re-reads the file, drops every entry tagged
 * `_mcpctl_source: "X"`, and rewrites atomically.
 *
 * Claude Code ignores the extra fields (it only looks at `type` and
 * `command`).
 *
 * The file is written atomically (temp + rename) and tolerant of an
 * existing file that has comments, no `hooks` block, or unexpected
 * shape — same robustness profile as sessionhook.ts.
 */
import { readFile, writeFile, mkdir, rename } from 'node:fs/promises';
import { dirname, join } from 'node:path';
import { homedir } from 'node:os';

import { MARKER_KEY } from './sessionhook.js';

export const SOURCE_KEY = '_mcpctl_source';

/** A single hook entry: must be `type: 'command'` for v1. Extra fields preserved. */
export interface ManagedHookEntry {
  type: 'command';
  command: string;
  timeout?: number;
  /** Free-form: skills can attach extra fields and they'll round-trip. */
  [k: string]: unknown;
}

/** Recognised hook events. Validated server-side; if a new event lands later, we still write whatever the skill declares. */
export type HookEvent =
  | 'PreToolUse'
  | 'PostToolUse'
  | 'SessionStart'
  | 'Stop'
  | 'SubagentStop'
  | 'Notification';

export type HooksByEvent = Partial<Record<HookEvent, ManagedHookEntry[]>>;

interface HookGroup {
  hooks: ManagedHookEntry[];
  [k: string]: unknown;
}

interface Settings {
  hooks?: Partial<Record<string, HookGroup[]>>;
  [k: string]: unknown;
}

function defaultSettingsPath(): string {
  return join(homedir(), '.claude', 'settings.json');
}

async function readSettings(path: string): Promise<Settings> {
  try {
    const raw = await readFile(path, 'utf-8');
    if (raw.trim().length === 0) return {};
    const stripped = raw.replace(/^\s*\/\/.*$/gm, '');
    return JSON.parse(stripped) as Settings;
  } catch (err: unknown) {
    if (isNotFoundError(err)) return {};
    throw err;
  }
}

async function writeSettings(path: string, settings: Settings): Promise<void> {
  await mkdir(dirname(path), { recursive: true });
  const tmp = `${path}.tmp.${String(process.pid)}`;
  await writeFile(tmp, JSON.stringify(settings, null, 2) + '\n', 'utf-8');
  await rename(tmp, path);
}

function isManagedBy(entry: unknown, source: string): boolean {
  if (entry === null || typeof entry !== 'object') return false;
  const e = entry as Record<string, unknown>;
  return e[MARKER_KEY] === true && e[SOURCE_KEY] === source;
}

/**
 * Replace this skill's hook entries with the provided set. If `hooks`
 * omits an event the skill previously installed, those entries are
 * dropped. Other skills' entries and user-added entries are preserved.
 *
 * Returns whether anything changed (entries added or removed) so
 * callers can short-circuit no-op writes.
 */
export async function applyManagedHooks(
  source: string,
  hooks: HooksByEvent,
  settingsPath: string = defaultSettingsPath(),
): Promise<{ updated: boolean; settingsPath: string }> {
  const settings = await readSettings(settingsPath);
  if (!settings.hooks) settings.hooks = {};

  let changed = false;

  // For each known/declared event, drop our previous entries and add the new ones.
  const declaredEvents = new Set<string>(Object.keys(hooks));
  // Also walk events that already have entries from this source (so skills can shrink scope).
  for (const [eventName, groups] of Object.entries(settings.hooks)) {
    if (!Array.isArray(groups)) continue;
    if (groups.some((g) => Array.isArray(g.hooks) && g.hooks.some((e) => isManagedBy(e, source)))) {
      declaredEvents.add(eventName);
    }
  }

  for (const eventName of declaredEvents) {
    const desired = hooks[eventName as HookEvent] ?? [];
    const groups = (settings.hooks[eventName] as HookGroup[] | undefined) ?? [];

    // Strip our entries from each group, then drop empty groups.
    const stripped: HookGroup[] = [];
    for (const group of groups) {
      if (!Array.isArray(group?.hooks)) {
        stripped.push(group);
        continue;
      }
      const before = group.hooks.length;
      const filtered = group.hooks.filter((e) => !isManagedBy(e, source));
      if (filtered.length !== before) changed = true;
      if (filtered.length > 0) {
        stripped.push({ ...group, hooks: filtered });
      }
    }

    // Insert the new set as a single group tagged with our source.
    if (desired.length > 0) {
      const tagged = desired.map((entry) => ({
        ...entry,
        type: 'command' as const,
        [MARKER_KEY]: true,
        [SOURCE_KEY]: source,
      }));
      stripped.push({ hooks: tagged, [SOURCE_KEY]: source });
      changed = true;
    }

    if (stripped.length === 0) {
      // No groups left for this event — drop the event entirely so the
      // settings.json doesn't accumulate empty arrays.
      delete settings.hooks[eventName];
    } else {
      settings.hooks[eventName] = stripped;
    }
  }

  if (!changed) {
    return { updated: false, settingsPath };
  }

  await writeSettings(settingsPath, settings);
  return { updated: true, settingsPath };
}

/**
 * Drop all hook entries owned by `source`. Used by the sync's orphan-
 * removal path so a skill that's no longer in the server set
 * un-registers its hooks too. Returns whether anything was changed.
 */
export async function removeManagedHooks(
  source: string,
  settingsPath: string = defaultSettingsPath(),
): Promise<{ removed: boolean; settingsPath: string }> {
  const result = await applyManagedHooks(source, {}, settingsPath);
  return { removed: result.updated, settingsPath: result.settingsPath };
}

function isNotFoundError(err: unknown): boolean {
  return typeof err === 'object' && err !== null && (err as { code?: string }).code === 'ENOENT';
}
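To make the ownership markers concrete, here is a self-contained sketch of the removal rule using hypothetical entries (same `_mcpctl_managed` / `_mcpctl_source` keys as the file above, literal key strings instead of the imported constants): filtering out one skill's entries leaves user-added entries and other skills' entries intact.

```typescript
type Entry = Record<string, unknown>;

// Mirror of the isManagedBy check above: both markers must match.
function isManagedBy(e: Entry, source: string): boolean {
  return e['_mcpctl_managed'] === true && e['_mcpctl_source'] === source;
}

const entries: Entry[] = [
  { type: 'command', command: 'lint.sh' }, // user-added, no markers
  { type: 'command', command: 'a.sh', _mcpctl_managed: true, _mcpctl_source: 'skill-a' },
  { type: 'command', command: 'b.sh', _mcpctl_managed: true, _mcpctl_source: 'skill-b' },
];

// Removing skill-a keeps the user entry and skill-b's entry untouched.
const remaining = entries.filter((e) => !isManagedBy(e, 'skill-a'));
console.log(remaining.map((e) => e['command'])); // [ 'lint.sh', 'b.sh' ]
```

Claude Code only reads `type` and `command`, so the extra marker fields ride along harmlessly in `settings.json`.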
176  src/cli/src/utils/mcpservers-materialiser.ts  Normal file
@@ -0,0 +1,176 @@
/**
 * Auto-attach the MCP server dependencies a skill declares to the
 * project that's syncing. Per the corporate-appliance trust model,
 * publishing a skill that says "this project depends on my-grafana"
 * is enough — the client takes mcpd at its word and asks mcpd to
 * attach the server to the project.
 *
 * What this function does NOT do (deliberately):
 *   - Auto-create the server from a template if it's missing.
 *     Provisioning infrastructure from a skill push is a separate
 *     decision that needs explicit operator consent. v1 just warns
 *     when the named server doesn't exist and skips that dep.
 *   - Detach servers that a skill removed from its mcpServers list.
 *     Detach is destructive (the project loses access) and the
 *     `attach` itself is idempotent on the server side, so we err
 *     on the side of leaving things attached. PR-7 can revisit if
 *     a use case shows up.
 *
 * The mcpServers field is per-project: a skill's declared deps only
 * get attached to the project the sync is running for. Global skills
 * (no projectName context) skip this step entirely — there's no
 * project to attach to.
 */
import type { ApiClient } from '../api-client.js';
import { ApiError } from '../api-client.js';

export interface McpServerDep {
  name: string;
  fromTemplate?: string;
  project?: string;
}

export interface AttachResult {
  attached: string[];
  alreadyAttached: string[];
  missing: string[];
  errors: Array<{ server: string; error: string }>;
}

/**
 * Resolve project name → id, list its currently-attached servers,
 * then attach each declared dep that isn't already there. Idempotent
 * by virtue of the existing-attachment check.
 *
 * Failures per-server are collected, not thrown — sync continues.
 */
export async function attachSkillMcpServers(
  client: ApiClient,
  projectName: string,
  deps: McpServerDep[],
  warn: (msg: string) => void = () => {},
): Promise<AttachResult> {
  const result: AttachResult = {
    attached: [],
    alreadyAttached: [],
    missing: [],
    errors: [],
  };
  if (deps.length === 0) return result;

  // Resolve project → id (the attach endpoint is keyed by id, not name).
  let projectId: string;
  try {
    const projects = await client.get<Array<{ id: string; name: string }>>('/api/v1/projects');
    const match = projects.find((p) => p.name === projectName);
    if (!match) {
      // No project to attach to — surface every dep as an error so the
      // operator can see something is mis-configured.
      for (const dep of deps) {
        result.errors.push({ server: dep.name, error: `Project '${projectName}' not found` });
      }
      return result;
    }
    projectId = match.id;
  } catch (err: unknown) {
    for (const dep of deps) {
      result.errors.push({
        server: dep.name,
        error: `Failed to resolve project: ${err instanceof Error ? err.message : String(err)}`,
      });
    }
    return result;
  }

  // Inspect current attachments. The /api/v1/projects/:id/servers POST
  // endpoint is idempotent server-side, but we still pre-check so we
  // can report alreadyAttached vs newly-attached cleanly.
  let attached = new Set<string>();
  try {
    const project = await client.get<{ servers?: Array<{ server?: { name: string } }> }>(`/api/v1/projects/${projectId}`);
    attached = new Set(
      (project.servers ?? [])
        .map((s) => s.server?.name)
        .filter((n): n is string => typeof n === 'string'),
    );
  } catch (err: unknown) {
    warn(`mcpctl: failed to read current attachments for project '${projectName}': ${err instanceof Error ? err.message : String(err)}`);
    // Fall through with an empty set — we'll attempt attaches and let
    // server-side idempotency cover any duplicates.
  }

  // Optionally narrow the existing-server set so we can warn loudly on
  // unknown server names. (Server attaches against a non-existent
  // server would 404 anyway, but a clearer warning is friendlier.)
  let existingServers = new Set<string>();
  try {
    const servers = await client.get<Array<{ name: string }>>('/api/v1/servers');
    existingServers = new Set(servers.map((s) => s.name));
  } catch {
    // Best-effort; if listing fails we still try the attach.
  }

  for (const dep of deps) {
    // Honour an explicit `project` on the dep — defensive, normally
    // matches the active project anyway. Skip mismatches so a skill
    // can declare deps for a different project without collateral
    // damage during this sync.
    if (dep.project && dep.project !== projectName) {
      continue;
    }

    if (attached.has(dep.name)) {
      result.alreadyAttached.push(dep.name);
      continue;
    }

    if (existingServers.size > 0 && !existingServers.has(dep.name)) {
      // Server doesn't exist on mcpd. v1 doesn't auto-create; warn and continue.
      const detail = dep.fromTemplate
        ? ` (skill suggests creating it via template '${dep.fromTemplate}')`
        : '';
      warn(`mcpctl: skill mcpServers dep '${dep.name}' not found on mcpd${detail}; skipping attach`);
      result.missing.push(dep.name);
      continue;
    }

    try {
      await client.post(`/api/v1/projects/${projectId}/servers`, { server: dep.name });
      result.attached.push(dep.name);
    } catch (err: unknown) {
      // Idempotency: 409 (already attached) is success.
      if (err instanceof ApiError && err.status === 409) {
        result.alreadyAttached.push(dep.name);
        continue;
      }
      // 404 means either the project or the server vanished mid-sync.
      if (err instanceof ApiError && err.status === 404) {
        result.missing.push(dep.name);
        continue;
      }
      result.errors.push({
        server: dep.name,
        error: err instanceof Error ? err.message : String(err),
      });
    }
  }

  return result;
}

/** Type-narrow the metadata.mcpServers field. Tolerant of garbage. */
export function parseMcpServerDeps(value: unknown): McpServerDep[] {
  if (!Array.isArray(value)) return [];
  const out: McpServerDep[] = [];
  for (const v of value) {
    if (v === null || typeof v !== 'object') continue;
    const obj = v as Record<string, unknown>;
    const name = obj['name'];
    if (typeof name !== 'string' || name.length === 0) continue;
    const dep: McpServerDep = { name };
    if (typeof obj['fromTemplate'] === 'string') dep.fromTemplate = obj['fromTemplate'];
    if (typeof obj['project'] === 'string') dep.project = obj['project'];
    out.push(dep);
  }
  return out;
}
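As a usage sketch: a skill might declare deps like the hypothetical payload below (server and template names are made up), and only well-formed entries survive narrowing. The condensed parser here restates the rule from `parseMcpServerDeps` above so the demo is self-contained; it is not the project's export.

```typescript
interface McpServerDep { name: string; fromTemplate?: string; }

// Condensed restatement of parseMcpServerDeps' rule: objects with a
// non-empty string `name` survive; everything else is silently dropped.
function parseDeps(value: unknown): McpServerDep[] {
  if (!Array.isArray(value)) return [];
  return value.flatMap((v) => {
    if (v === null || typeof v !== 'object') return [];
    const o = v as Record<string, unknown>;
    if (typeof o['name'] !== 'string' || o['name'].length === 0) return [];
    const dep: McpServerDep = { name: o['name'] };
    if (typeof o['fromTemplate'] === 'string') dep.fromTemplate = o['fromTemplate'];
    return [dep];
  });
}

// Hypothetical metadata.mcpServers payload, including junk entries.
const declared: unknown = [
  { name: 'my-grafana', fromTemplate: 'grafana' },
  { name: '' },    // empty name → dropped
  'not-an-object', // wrong shape → dropped
];
console.log(parseDeps(declared)); // [ { name: 'my-grafana', fromTemplate: 'grafana' } ]
```

The tolerance is deliberate: a junk entry in one skill's metadata degrades to a skipped dep rather than failing the whole sync.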
282  src/cli/src/utils/postinstall.ts  Normal file
@@ -0,0 +1,282 @@
/**
 * postInstall executor for `mcpctl skills sync`.
 *
 * Trust model: mcpctl runs scripts that mcpd has served. mcpd is the
 * corporate source of truth — content is reviewed at publish time. We
 * do NOT sandbox or signature-check on the client. The controls that
 * matter live on the publishing side (RBAC, audit, reviewer queue).
 *
 * What we DO provide is ops hygiene:
 *   - Hard timeout (default 60 s, per-skill override via
 *     `metadata.postInstallTimeoutSec`). Stops a runaway script from
 *     wedging Claude startup forever.
 *   - Hash-pinning: the script's sha256 is recorded in the skills state
 *     file so the next sync skips re-execution unless the hash changed.
 *     Saves churn; catches "the same skill at the same semver was
 *     re-published with a fixed script".
 *   - Curated env: MCPCTL_SKILL_NAME / _VERSION / _DIR / _PROJECT plus
 *     inherited PATH / HOME / USER / SHELL. Cron-style minimal env so
 *     scripts behave the same on every machine.
 *   - Per-skill install log under ~/.mcpctl/skills/<name>/install.log
 *     (rotated to keep the last 5 runs). Standard sysadmin reflex.
 *   - Audit event back to mcpd on every run. So mcpd's audit pipeline
 *     has both sides of the timeline (publish + per-machine execution).
 *
 * Failure semantics: a non-zero exit, a hang past the timeout, or a
 * spawn error is treated as a failed sync of THIS skill. The state
 * file's postInstallHash is NOT updated on failure, so the next sync
 * will retry. Other skills in the same sync run continue regardless.
 */
import { createHash } from 'node:crypto';
import { spawn } from 'node:child_process';
import { mkdir, readFile, writeFile, stat } from 'node:fs/promises';
import { dirname, join, resolve } from 'node:path';
import { hostname } from 'node:os';
import { setTimeout as delay } from 'node:timers/promises';

import type { ApiClient } from '../api-client.js';

export interface PostInstallInput {
  /** Full path of the materialised skill directory. The script path is resolved relative to this. */
  installDir: string;
  /** metadata.postInstall — relative path inside the skill bundle. */
  scriptPath: string;
  /** Name of the skill. Surfaces in audit + env + log path. */
  skillName: string;
  /** Skill version. Audit + env. */
  semver: string;
  /** Project name when the skill is project-scoped, else undefined. */
  projectName?: string | undefined;
  /** Per-skill override for the 60-s default. */
  timeoutSec?: number | undefined;
  /** Where to put the rolling install.log. Default: ~/.mcpctl/skills/<name>/install.log. */
  logsDir: string;
}

export interface PostInstallResult {
  exitCode: number | null;
  durationMs: number;
  scriptHash: string;
  timedOut: boolean;
  signal: NodeJS.Signals | null;
  stdoutTail: string;
  stderrTail: string;
}

const DEFAULT_TIMEOUT_SEC = 60;
const TAIL_BYTES = 4 * 1024;
const MAX_LOG_BYTES = 256 * 1024;

/**
 * Compute the sha256 of a script — used as the "have I already run this
 * version?" key in the skills state file. Caller passes the raw script
 * bytes; this just wraps the hash routine to stay consistent with the
 * `'sha256:'`-prefixed format used elsewhere (skills-state.ts).
 */
export function hashScript(content: string | Buffer): string {
  const buf = typeof content === 'string' ? Buffer.from(content, 'utf-8') : content;
  return 'sha256:' + createHash('sha256').update(buf).digest('hex');
}

/**
 * Run the post-install script. Returns a result regardless of success
 * or failure — caller inspects `exitCode`/`timedOut` to decide.
 *
 * Path validation: the resolved script path must remain inside
 * `installDir`. A skill that tries to point postInstall at
 * `../../../../etc/passwd-like` is rejected as a failed run, not
 * silently ignored.
 */
export async function runPostInstall(input: PostInstallInput): Promise<PostInstallResult> {
  const start = Date.now();
  const timeoutMs = (input.timeoutSec ?? DEFAULT_TIMEOUT_SEC) * 1000;

  const fullPath = resolve(input.installDir, input.scriptPath);
  // Defence in depth: the install dir is server-published content, but
  // a server with skill-write RBAC could still cause mischief. The
  // check makes our intent explicit: scripts may only live inside the
  // skill bundle.
  const installDirResolved = resolve(input.installDir);
  if (!fullPath.startsWith(installDirResolved + '/') && fullPath !== installDirResolved) {
    throw new Error(
      `postInstall path '${input.scriptPath}' escapes skill dir`,
    );
  }

  // Read script bytes for hashing (and to fail-fast if missing).
  const scriptBytes = await readFile(fullPath);
  const scriptHash = hashScript(scriptBytes);

  // Curated env. Cron-style minimum: keep PATH so the script can find
  // git/curl/python; keep HOME/USER/SHELL so scripts that touch dotfiles
  // work; drop everything else.
  const env: Record<string, string> = {
    PATH: process.env['PATH'] ?? '/usr/local/bin:/usr/bin:/bin',
    HOME: process.env['HOME'] ?? '',
    USER: process.env['USER'] ?? '',
    SHELL: process.env['SHELL'] ?? '/bin/sh',
    LANG: process.env['LANG'] ?? 'C.UTF-8',
    TERM: process.env['TERM'] ?? 'dumb',
    MCPCTL_SKILL_NAME: input.skillName,
    MCPCTL_SKILL_VERSION: input.semver,
    MCPCTL_SKILL_DIR: installDirResolved,
  };
  if (input.projectName) env['MCPCTL_PROJECT'] = input.projectName;

  // Make sure the script is executable. Some upstreams ship with mode
  // 0644 — if shebang exists, we can fall through to the interpreter;
  // otherwise spawn will EACCES.
  await ensureExecutable(fullPath, scriptBytes);

  await mkdir(input.logsDir, { recursive: true });
  const logPath = join(input.logsDir, 'install.log');

  // Rolling-append. Keep ~256 KB; old entries get truncated. The tail
  // returned to the caller is the last few KB regardless.
  const logHeader = `\n=== ${new Date().toISOString()} ${input.skillName}@${input.semver} ===\n`;

  // Cast through Buffer<ArrayBufferLike> — Node's typings split Buffer
  // into Buffer<ArrayBuffer> (from .alloc) and Buffer<ArrayBufferLike>
  // (from .subarray), which exactOptionalPropertyTypes refuses to
  // bridge. Explicit `Buffer` annotation widens to the union.
  let stdoutBuf: Buffer = Buffer.alloc(0);
  let stderrBuf: Buffer = Buffer.alloc(0);
  let timedOut = false;

  const child = spawn(fullPath, [], {
    cwd: installDirResolved,
    env,
    stdio: ['ignore', 'pipe', 'pipe'],
  });

  child.stdout.on('data', (chunk: Buffer) => {
    stdoutBuf = appendCapped(stdoutBuf, chunk);
  });
  child.stderr.on('data', (chunk: Buffer) => {
    stderrBuf = appendCapped(stderrBuf, chunk);
  });

  // Hard timeout via SIGTERM, then SIGKILL after 2 s grace.
  const timer = setTimeout(() => {
    timedOut = true;
    child.kill('SIGTERM');
    void (async () => {
      await delay(2000);
      if (child.exitCode === null) child.kill('SIGKILL');
    })();
  }, timeoutMs);

  const { exitCode, signal } = await new Promise<{ exitCode: number | null; signal: NodeJS.Signals | null }>((resolveProm) => {
    child.on('close', (code, sig) => resolveProm({ exitCode: code, signal: sig }));
    child.on('error', () => resolveProm({ exitCode: null, signal: null }));
  });
  clearTimeout(timer);

  const durationMs = Date.now() - start;
  const stdoutText = stdoutBuf.toString('utf-8');
  const stderrText = stderrBuf.toString('utf-8');

  // Append to the install log, truncating from the front if oversize.
  const trailer = `\n--- exit ${exitCode === null ? '?' : String(exitCode)}${signal ? ` (${signal})` : ''} in ${String(durationMs)}ms${timedOut ? ' [TIMEOUT]' : ''} ---\n`;
  const fullEntry = logHeader + 'STDOUT:\n' + stdoutText + '\nSTDERR:\n' + stderrText + trailer;
  await appendBoundedLog(logPath, fullEntry);

  return {
    exitCode,
    durationMs,
    scriptHash,
    timedOut,
    signal,
    stdoutTail: tailString(stdoutText, TAIL_BYTES),
    stderrTail: tailString(stderrText, TAIL_BYTES),
  };
}

/**
 * Best-effort audit emission — POSTs a structured event back to mcpd
 * so admins have fleet visibility. Failures are warned via the
 * provided logger but never thrown; the audit log is supplementary,
 * not load-bearing for sync correctness.
 *
 * The event includes machine fingerprint (hostname) so the operator
 * can tell which dev box ran the script — useful when triaging a
 * misbehaving update.
 */
export async function emitPostInstallAudit(
  client: ApiClient,
  input: PostInstallInput,
  result: PostInstallResult,
  warn: (msg: string) => void = () => {},
): Promise<void> {
  try {
    await client.post('/api/v1/audit-events', {
      eventKind: 'skill_postinstall',
      source: 'mcpctl-cli',
      verified: false,
      payload: {
        skillName: input.skillName,
        skillVersion: input.semver,
        projectName: input.projectName ?? null,
        scriptPath: input.scriptPath,
        scriptHash: result.scriptHash,
        exitCode: result.exitCode,
        durationMs: result.durationMs,
        timedOut: result.timedOut,
        signal: result.signal,
        machine: hostname(),
      },
    });
  } catch (err) {
    warn(`mcpctl: failed to emit postInstall audit event: ${err instanceof Error ? err.message : String(err)}`);
  }
}

// ── internals ──

function appendCapped(buf: Buffer, chunk: Buffer): Buffer {
  // Keep up to MAX_LOG_BYTES per stream; drop oldest bytes if over.
  if (buf.length + chunk.length <= MAX_LOG_BYTES) {
    return Buffer.concat([buf, chunk]);
  }
  const merged = Buffer.concat([buf, chunk]);
  // Buffer.from(...) here keeps Node's typing happy under
  // exactOptionalPropertyTypes — `subarray` on Buffer returns a
  // Buffer<ArrayBufferLike> which TS won't widen to the input type.
  return Buffer.from(merged.subarray(merged.length - MAX_LOG_BYTES));
}

function tailString(s: string, bytes: number): string {
  if (s.length <= bytes) return s;
  return '…' + s.slice(s.length - bytes + 1);
}

async function ensureExecutable(path: string, bytes: Buffer): Promise<void> {
  try {
    const st = await stat(path);
    // Owner execute bit. Skip if it's set already.
    if ((st.mode & 0o100) !== 0) return;
  } catch {
    return; // stat failed — let the spawn surface the real error
  }
  // Has shebang? Fine — many shells will still execute even without +x
  // when invoked as `<interpreter> <path>`, but we always spawn the
  // path directly so we need +x. Set 0755.
  void bytes; // (kept around in case we want to inspect shebang later)
  const { chmod } = await import('node:fs/promises');
  await chmod(path, 0o755);
}

async function appendBoundedLog(path: string, entry: string): Promise<void> {
  const max = 5 * MAX_LOG_BYTES;
  let existing = '';
  try {
    existing = await readFile(path, 'utf-8');
  } catch (err: unknown) {
    if (typeof err !== 'object' || err === null || (err as { code?: string }).code !== 'ENOENT') throw err;
|
||||||
|
}
|
||||||
|
const combined = existing + entry;
|
||||||
|
// Keep last `max` bytes.
|
||||||
|
const trimmed = combined.length > max ? '…[truncated]…\n' + combined.slice(combined.length - max) : combined;
|
||||||
|
await mkdir(dirname(path), { recursive: true });
|
||||||
|
await writeFile(path, trimmed, 'utf-8');
|
||||||
|
}
|
||||||
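The front-truncation logic in `tailString` can be tried in isolation — a minimal standalone sketch, where `CAP` is an illustrative stand-in for the module-private `TAIL_BYTES` constant:

```typescript
// Illustrative cap — the real TAIL_BYTES constant is module-private.
const CAP = 8;

// Same front-truncation as `tailString` above: keep the newest
// characters and mark the cut with a leading ellipsis.
function tailDemo(s: string, cap: number): string {
  if (s.length <= cap) return s;
  return '…' + s.slice(s.length - cap + 1);
}

const short = tailDemo('hello', CAP);     // fits — returned unchanged
const long = tailDemo('0123456789', CAP); // front-truncated to CAP chars
```

Note the `+ 1` in the slice: the ellipsis itself counts toward the cap, so the result never exceeds `cap` characters.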
174
src/cli/tests/utils/hooks-materialiser.test.ts
Normal file
@@ -0,0 +1,174 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtemp, rm, readFile, writeFile, mkdir } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

import { applyManagedHooks, removeManagedHooks, SOURCE_KEY } from '../../src/utils/hooks-materialiser.js';
import { MARKER_KEY } from '../../src/utils/sessionhook.js';

describe('hooks-materialiser', () => {
  let tmp: string;
  let settings: string;

  beforeEach(async () => {
    tmp = await mkdtemp(join(tmpdir(), 'mcpctl-hooks-'));
    settings = join(tmp, 'settings.json');
  });

  afterEach(async () => {
    await rm(tmp, { recursive: true, force: true });
  });

  it('writes a tagged hook from scratch when settings.json is missing', async () => {
    const result = await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo before' }],
    }, settings);

    expect(result.updated).toBe(true);
    const file = JSON.parse(await readFile(settings, 'utf-8'));
    expect(file.hooks.PreToolUse).toHaveLength(1);
    const entry = file.hooks.PreToolUse[0].hooks[0];
    expect(entry.command).toBe('echo before');
    expect(entry[MARKER_KEY]).toBe(true);
    expect(entry[SOURCE_KEY]).toBe('skill-a');
  });

  it('coexists with hooks owned by other skills', async () => {
    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo a' }],
    }, settings);
    await applyManagedHooks('skill-b', {
      PreToolUse: [{ type: 'command', command: 'echo b' }],
    }, settings);

    const file = JSON.parse(await readFile(settings, 'utf-8'));
    const all = file.hooks.PreToolUse.flatMap((g: { hooks: Array<{ command: string; [k: string]: unknown }> }) => g.hooks);
    expect(all.find((e: { command: string }) => e.command === 'echo a')).toBeDefined();
    expect(all.find((e: { command: string }) => e.command === 'echo b')).toBeDefined();
    expect(all).toHaveLength(2);
  });

  it('preserves user-added hooks (no marker)', async () => {
    await mkdir(tmp, { recursive: true });
    await writeFile(settings, JSON.stringify({
      hooks: {
        PreToolUse: [{ hooks: [{ type: 'command', command: 'echo user' }] }],
      },
    }));

    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo a' }],
    }, settings);

    const file = JSON.parse(await readFile(settings, 'utf-8'));
    const all = file.hooks.PreToolUse.flatMap((g: { hooks: Array<{ command: string; [k: string]: unknown }> }) => g.hooks);
    expect(all.find((e: { command: string }) => e.command === 'echo user')).toBeDefined();
    expect(all.find((e: { command: string; [k: string]: unknown }) => e.command === 'echo a' && e[MARKER_KEY] === true)).toBeDefined();
  });

  it('updating a skill replaces its old entries (does not duplicate)', async () => {
    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo old' }],
    }, settings);
    const second = await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo new' }],
    }, settings);

    expect(second.updated).toBe(true);
    const file = JSON.parse(await readFile(settings, 'utf-8'));
    const all = file.hooks.PreToolUse.flatMap((g: { hooks: Array<{ command: string; [k: string]: unknown }> }) => g.hooks);
    const ours = all.filter((e: { [k: string]: unknown }) => e[SOURCE_KEY] === 'skill-a');
    expect(ours).toHaveLength(1);
    expect((ours[0] as { command: string }).command).toBe('echo new');
  });

  it('shrinking a skill drops events it no longer declares', async () => {
    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo pre' }],
      PostToolUse: [{ type: 'command', command: 'echo post' }],
    }, settings);

    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo pre' }],
      // PostToolUse omitted → should be dropped
    }, settings);

    const file = JSON.parse(await readFile(settings, 'utf-8'));
    expect(file.hooks.PreToolUse).toBeDefined();
    expect(file.hooks.PostToolUse).toBeUndefined();
  });

  it('removeManagedHooks drops only the named source', async () => {
    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo a' }],
    }, settings);
    await applyManagedHooks('skill-b', {
      PreToolUse: [{ type: 'command', command: 'echo b' }],
    }, settings);

    const removed = await removeManagedHooks('skill-a', settings);
    expect(removed.removed).toBe(true);

    const file = JSON.parse(await readFile(settings, 'utf-8'));
    const all = file.hooks.PreToolUse.flatMap((g: { hooks: Array<{ command: string; [k: string]: unknown }> }) => g.hooks);
    expect(all).toHaveLength(1);
    expect((all[0] as { command: string }).command).toBe('echo b');
  });

  it('removeManagedHooks is a no-op when the source has no entries', async () => {
    const result = await removeManagedHooks('never-installed', settings);
    expect(result.removed).toBe(false);
  });

  it('handles multiple hook events independently', async () => {
    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo pre' }],
      PostToolUse: [{ type: 'command', command: 'echo post' }],
      SessionStart: [{ type: 'command', command: 'echo start' }],
    }, settings);

    const file = JSON.parse(await readFile(settings, 'utf-8'));
    expect(file.hooks.PreToolUse).toBeDefined();
    expect(file.hooks.PostToolUse).toBeDefined();
    expect(file.hooks.SessionStart).toBeDefined();
  });

  it('idempotent — re-applying the same hooks reports updated=true on first call only', async () => {
    const first = await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo a' }],
    }, settings);
    expect(first.updated).toBe(true);

    // Re-applying ALWAYS rewrites our entry (we don't deep-equal them
    // for "no change"), but the resulting file is byte-identical except
    // for ordering. The test just confirms the file remains valid + well-shaped.
    const second = await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo a' }],
    }, settings);
    // updated=true is acceptable here; we replaced+re-added our entry.
    expect(second.updated).toBe(true);

    const file = JSON.parse(await readFile(settings, 'utf-8'));
    const all = file.hooks.PreToolUse.flatMap((g: { hooks: Array<{ command: string; [k: string]: unknown }> }) => g.hooks);
    const ours = all.filter((e: { [k: string]: unknown }) => e[SOURCE_KEY] === 'skill-a');
    expect(ours).toHaveLength(1);
  });

  it('survives empty settings.json', async () => {
    await writeFile(settings, '');
    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo a' }],
    }, settings);
    const file = JSON.parse(await readFile(settings, 'utf-8'));
    expect(file.hooks.PreToolUse).toHaveLength(1);
  });

  it('survives JSONC line comments in settings.json', async () => {
    await writeFile(settings, '// preamble\n{ "hooks": {} }\n');
    await applyManagedHooks('skill-a', {
      PreToolUse: [{ type: 'command', command: 'echo a' }],
    }, settings);
    const file = JSON.parse(await readFile(settings, 'utf-8'));
    expect(file.hooks.PreToolUse).toHaveLength(1);
  });
});
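As a rough sketch of the on-disk shape those tests assert against — note the marker key names here are placeholders for illustration, not the real exported `MARKER_KEY`/`SOURCE_KEY` values:

```typescript
// Placeholder marker names — the real values come from the
// sessionhook / hooks-materialiser exports, not these literals.
const MARKER = 'managed';
const SOURCE = 'source';

type HookEntry = { type: string; command: string; [k: string]: unknown };
type HookGroup = { hooks: HookEntry[] };

const settingsFile: { hooks: { PreToolUse: HookGroup[] } } = {
  hooks: {
    PreToolUse: [
      // User-owned entry: no marker keys, must never be touched.
      { hooks: [{ type: 'command', command: 'echo user' }] },
      // Managed entry: tagged with both marker and owning-skill source.
      { hooks: [{ type: 'command', command: 'echo a', [MARKER]: true, [SOURCE]: 'skill-a' }] },
    ],
  },
};

// Removal filters by source tag only, leaving unmarked user hooks intact.
const kept = settingsFile.hooks.PreToolUse
  .map((g) => ({ hooks: g.hooks.filter((h) => h[SOURCE] !== 'skill-a') }))
  .filter((g) => g.hooks.length > 0);
```

This source-tag filter is why "coexists with other skills" and "preserves user-added hooks" can both hold: ownership is per-entry, never per-file.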
227
src/cli/tests/utils/mcpservers-materialiser.test.ts
Normal file
@@ -0,0 +1,227 @@
import { describe, it, expect, vi } from 'vitest';
import { attachSkillMcpServers, parseMcpServerDeps } from '../../src/utils/mcpservers-materialiser.js';
import type { ApiClient } from '../../src/api-client.js';
import { ApiError } from '../../src/api-client.js';

interface MockClient {
  get: ReturnType<typeof vi.fn>;
  post: ReturnType<typeof vi.fn>;
  put: ReturnType<typeof vi.fn>;
  delete: ReturnType<typeof vi.fn>;
}

function makeClient(): MockClient {
  return {
    get: vi.fn(),
    post: vi.fn(async () => ({})),
    put: vi.fn(async () => ({})),
    delete: vi.fn(async () => undefined),
  };
}

function apiError(status: number, body = 'err'): ApiError {
  return new ApiError(status, body);
}

describe('mcpservers-materialiser', () => {
  describe('parseMcpServerDeps', () => {
    it('returns [] for non-arrays', () => {
      expect(parseMcpServerDeps(null)).toEqual([]);
      expect(parseMcpServerDeps('foo')).toEqual([]);
      expect(parseMcpServerDeps({})).toEqual([]);
    });

    it('keeps valid entries and drops garbage', () => {
      const out = parseMcpServerDeps([
        { name: 'good', fromTemplate: 't', project: 'p' },
        { name: '', fromTemplate: 't' }, // empty name → drop
        { fromTemplate: 'no-name' },     // no name → drop
        { name: 'bare' },                // valid, minimal
        'string',                        // not an object → drop
      ]);
      expect(out).toEqual([
        { name: 'good', fromTemplate: 't', project: 'p' },
        { name: 'bare' },
      ]);
    });
  });

  describe('attachSkillMcpServers', () => {
    it('attaches a new server when not already present', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return [{ id: 'proj-1', name: 'demo' }];
        if (path === '/api/v1/projects/proj-1') return { servers: [] };
        if (path === '/api/v1/servers') return [{ name: 'my-grafana' }];
        throw new Error(`unexpected GET ${path}`);
      });

      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'demo',
        [{ name: 'my-grafana', fromTemplate: 'grafana' }],
      );

      expect(result.attached).toEqual(['my-grafana']);
      expect(result.alreadyAttached).toEqual([]);
      expect(result.missing).toEqual([]);
      expect(result.errors).toEqual([]);
      expect(client.post).toHaveBeenCalledWith('/api/v1/projects/proj-1/servers', { server: 'my-grafana' });
    });

    it('reports alreadyAttached without re-posting', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return [{ id: 'proj-1', name: 'demo' }];
        if (path === '/api/v1/projects/proj-1') return { servers: [{ server: { name: 'my-grafana' } }] };
        if (path === '/api/v1/servers') return [{ name: 'my-grafana' }];
        throw new Error(`unexpected GET ${path}`);
      });

      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'demo',
        [{ name: 'my-grafana' }],
      );

      expect(result.alreadyAttached).toEqual(['my-grafana']);
      expect(result.attached).toEqual([]);
      expect(client.post).not.toHaveBeenCalled();
    });

    it('warns + skips when server does not exist on mcpd', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return [{ id: 'proj-1', name: 'demo' }];
        if (path === '/api/v1/projects/proj-1') return { servers: [] };
        if (path === '/api/v1/servers') return [{ name: 'something-else' }];
        throw new Error(`unexpected GET ${path}`);
      });

      const warnings: string[] = [];
      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'demo',
        [{ name: 'my-grafana', fromTemplate: 'grafana' }],
        (m) => warnings.push(m),
      );

      expect(result.missing).toEqual(['my-grafana']);
      expect(result.attached).toEqual([]);
      expect(client.post).not.toHaveBeenCalled();
      expect(warnings.some((w) => w.includes('my-grafana') && w.includes('grafana'))).toBe(true);
    });

    it('errors-out when the project does not exist', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return []; // no projects
        throw new Error(`unexpected GET ${path}`);
      });

      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'no-such-project',
        [{ name: 'my-grafana' }],
      );

      expect(result.errors).toHaveLength(1);
      expect(result.errors[0]?.error).toContain('Project');
      expect(client.post).not.toHaveBeenCalled();
    });

    it('treats 409 from POST as alreadyAttached (idempotent server-side)', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return [{ id: 'proj-1', name: 'demo' }];
        // attachments listing fails — fall back to attempting + handling 409
        if (path === '/api/v1/projects/proj-1') throw apiError(500, 'flake');
        if (path === '/api/v1/servers') return [{ name: 'my-grafana' }];
        throw new Error(`unexpected GET ${path}`);
      });
      client.post.mockRejectedValueOnce(apiError(409, 'already attached'));

      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'demo',
        [{ name: 'my-grafana' }],
      );

      expect(result.alreadyAttached).toEqual(['my-grafana']);
      expect(result.errors).toEqual([]);
    });

    it('treats 404 from POST as missing (server vanished mid-sync)', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return [{ id: 'proj-1', name: 'demo' }];
        if (path === '/api/v1/projects/proj-1') return { servers: [] };
        if (path === '/api/v1/servers') return [{ name: 'my-grafana' }]; // existed when we listed
        throw new Error(`unexpected GET ${path}`);
      });
      // …but vanished by the time we POSTed.
      client.post.mockRejectedValueOnce(apiError(404, 'gone'));

      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'demo',
        [{ name: 'my-grafana' }],
      );

      expect(result.missing).toEqual(['my-grafana']);
      expect(result.errors).toEqual([]);
    });

    it('skips deps that target a different project', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return [{ id: 'proj-1', name: 'demo' }];
        if (path === '/api/v1/projects/proj-1') return { servers: [] };
        if (path === '/api/v1/servers') return [{ name: 'my-grafana' }];
        throw new Error(`unexpected GET ${path}`);
      });

      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'demo',
        [{ name: 'my-grafana', project: 'other-project' }],
      );

      expect(result.attached).toEqual([]);
      expect(result.missing).toEqual([]);
      expect(client.post).not.toHaveBeenCalled();
    });

    it('continues past per-server errors', async () => {
      const client = makeClient();
      client.get.mockImplementation(async (path: string) => {
        if (path === '/api/v1/projects') return [{ id: 'proj-1', name: 'demo' }];
        if (path === '/api/v1/projects/proj-1') return { servers: [] };
        if (path === '/api/v1/servers') return [{ name: 'a' }, { name: 'b' }];
        throw new Error(`unexpected GET ${path}`);
      });
      client.post.mockImplementation(async (path: string, body) => {
        if ((body as { server: string }).server === 'a') throw apiError(500, 'boom');
        return {};
      });

      const result = await attachSkillMcpServers(
        client as unknown as ApiClient,
        'demo',
        [{ name: 'a' }, { name: 'b' }],
      );

      expect(result.errors).toHaveLength(1);
      expect(result.errors[0]?.server).toBe('a');
      expect(result.attached).toEqual(['b']);
    });

    it('returns empty on empty deps without making any calls', async () => {
      const client = makeClient();
      const result = await attachSkillMcpServers(client as unknown as ApiClient, 'demo', []);
      expect(result).toEqual({ attached: [], alreadyAttached: [], missing: [], errors: [] });
      expect(client.get).not.toHaveBeenCalled();
      expect(client.post).not.toHaveBeenCalled();
    });
  });
});
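The 409/404 handling those tests pin down can be summarised as a small status-code triage — a sketch of the behaviour the tests imply, not the materialiser's actual code:

```typescript
type Outcome = 'alreadyAttached' | 'missing' | 'error';

// Assumed triage for a failed POST /projects/:id/servers:
// 409 → the attachment already exists (server-side idempotency),
// 404 → the server vanished between listing and attaching,
// anything else → recorded as a per-server error and sync continues.
function triageAttachFailure(status: number): Outcome {
  if (status === 409) return 'alreadyAttached';
  if (status === 404) return 'missing';
  return 'error';
}
```

Collapsing 409 into `alreadyAttached` makes re-running `mcpctl skills sync` safe even when the up-front attachment listing fails.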
223
src/cli/tests/utils/postinstall.test.ts
Normal file
@@ -0,0 +1,223 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtemp, rm, writeFile, chmod, readFile, mkdir } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

import { runPostInstall, hashScript } from '../../src/utils/postinstall.js';

describe('postinstall executor', () => {
  let tmp: string;

  beforeEach(async () => {
    tmp = await mkdtemp(join(tmpdir(), 'mcpctl-postinstall-'));
  });

  afterEach(async () => {
    await rm(tmp, { recursive: true, force: true });
  });

  describe('hashScript', () => {
    it('returns deterministic sha256-prefixed hash', () => {
      expect(hashScript('hello')).toMatch(/^sha256:[0-9a-f]{64}$/);
      expect(hashScript('hello')).toBe(hashScript('hello'));
      expect(hashScript('hello')).not.toBe(hashScript('hellp'));
    });
  });

  describe('runPostInstall — success path', () => {
    it('runs a passing script and returns exit 0 + script hash', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });
      const scriptPath = join(installDir, 'install.sh');
      await writeFile(scriptPath, '#!/bin/sh\necho hello-stdout\necho hello-stderr 1>&2\nexit 0\n');
      await chmod(scriptPath, 0o755);

      const result = await runPostInstall({
        installDir,
        scriptPath: 'install.sh',
        skillName: 'test-skill',
        semver: '0.1.0',
        logsDir: join(tmp, 'logs'),
      });

      expect(result.exitCode).toBe(0);
      expect(result.timedOut).toBe(false);
      expect(result.stdoutTail).toContain('hello-stdout');
      expect(result.stderrTail).toContain('hello-stderr');
      expect(result.scriptHash).toMatch(/^sha256:/);
    });

    it('passes curated env (MCPCTL_SKILL_NAME, _VERSION, _DIR, _PROJECT)', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });
      const scriptPath = join(installDir, 'install.sh');
      // Write env vars to a file we can read back.
      const outFile = join(tmp, 'env-dump.txt');
      await writeFile(scriptPath, `#!/bin/sh
echo "name=$MCPCTL_SKILL_NAME" > ${JSON.stringify(outFile)}
echo "version=$MCPCTL_SKILL_VERSION" >> ${JSON.stringify(outFile)}
echo "dir=$MCPCTL_SKILL_DIR" >> ${JSON.stringify(outFile)}
echo "project=$MCPCTL_PROJECT" >> ${JSON.stringify(outFile)}
`);
      await chmod(scriptPath, 0o755);

      const result = await runPostInstall({
        installDir,
        scriptPath: 'install.sh',
        skillName: 'env-test',
        semver: '1.2.3',
        projectName: 'demo',
        logsDir: join(tmp, 'logs'),
      });
      expect(result.exitCode).toBe(0);

      const dumped = await readFile(outFile, 'utf-8');
      expect(dumped).toContain('name=env-test');
      expect(dumped).toContain('version=1.2.3');
      expect(dumped).toContain('dir=' + installDir);
      expect(dumped).toContain('project=demo');
    });

    it('chmods 0644 scripts to executable before spawn', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });
      const scriptPath = join(installDir, 'install.sh');
      await writeFile(scriptPath, '#!/bin/sh\nexit 0\n');
      await chmod(scriptPath, 0o644); // not executable

      const result = await runPostInstall({
        installDir,
        scriptPath: 'install.sh',
        skillName: 't',
        semver: '0.1.0',
        logsDir: join(tmp, 'logs'),
      });

      expect(result.exitCode).toBe(0);
    });
  });

  describe('runPostInstall — failure paths', () => {
    it('captures non-zero exit code and returns it', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });
      const scriptPath = join(installDir, 'fail.sh');
      await writeFile(scriptPath, '#!/bin/sh\necho oops 1>&2\nexit 7\n');
      await chmod(scriptPath, 0o755);

      const result = await runPostInstall({
        installDir,
        scriptPath: 'fail.sh',
        skillName: 't',
        semver: '0.1.0',
        logsDir: join(tmp, 'logs'),
      });

      expect(result.exitCode).toBe(7);
      expect(result.timedOut).toBe(false);
      expect(result.stderrTail).toContain('oops');
    });

    it('honors timeoutSec — kills via SIGTERM and reports timedOut=true', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });
      const scriptPath = join(installDir, 'hang.sh');
      // `exec` so SIGTERM hits sleep directly — without it /bin/sh
      // catches the signal but the orphaned sleep keeps the streams
      // open until SIGKILL; the test then has to wait for the 2s grace
      // window before we force-kill, which is fine but flakier.
      await writeFile(scriptPath, '#!/bin/sh\nexec sleep 30\n');
      await chmod(scriptPath, 0o755);

      const start = Date.now();
      const result = await runPostInstall({
        installDir,
        scriptPath: 'hang.sh',
        skillName: 't',
        semver: '0.1.0',
        timeoutSec: 1,
        logsDir: join(tmp, 'logs'),
      });
      const elapsed = Date.now() - start;

      expect(result.timedOut).toBe(true);
      // 1s timeout + up to 2s grace before SIGKILL.
      expect(elapsed).toBeLessThan(5000);
      expect(elapsed).toBeGreaterThanOrEqual(1000);
    }, 15_000);

    it('rejects path-escape attempts', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });

      await expect(runPostInstall({
        installDir,
        scriptPath: '../escape.sh',
        skillName: 't',
        semver: '0.1.0',
        logsDir: join(tmp, 'logs'),
      })).rejects.toThrow(/escapes skill dir/);
    });

    it('throws when the script does not exist', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });

      await expect(runPostInstall({
        installDir,
        scriptPath: 'missing.sh',
        skillName: 't',
        semver: '0.1.0',
        logsDir: join(tmp, 'logs'),
      })).rejects.toThrow();
    });
  });

  describe('runPostInstall — install log', () => {
    it('writes stdout + stderr + exit summary to logsDir/install.log', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });
      const scriptPath = join(installDir, 'install.sh');
      await writeFile(scriptPath, '#!/bin/sh\necho hello\nexit 0\n');
      await chmod(scriptPath, 0o755);

      const logsDir = join(tmp, 'logs');
      await runPostInstall({
        installDir,
        scriptPath: 'install.sh',
        skillName: 'log-test',
        semver: '0.1.0',
        logsDir,
      });

      const log = await readFile(join(logsDir, 'install.log'), 'utf-8');
      expect(log).toContain('log-test@0.1.0');
      expect(log).toContain('hello');
      expect(log).toContain('exit 0');
    });

    it('appends across runs without losing prior history', async () => {
      const installDir = join(tmp, 'skill');
      await mkdir(installDir, { recursive: true });
      const scriptPath = join(installDir, 'install.sh');
      await writeFile(scriptPath, '#!/bin/sh\necho run\nexit 0\n');
      await chmod(scriptPath, 0o755);

      const logsDir = join(tmp, 'logs');
      const input = {
        installDir,
        scriptPath: 'install.sh',
        skillName: 't',
        semver: '0.1.0',
        logsDir,
      };
      await runPostInstall(input);
      await runPostInstall(input);

      const log = await readFile(join(logsDir, 'install.log'), 'utf-8');
      // Two run headers separated by `===`.
      const headers = (log.match(/=== /g) ?? []).length;
      expect(headers).toBeGreaterThanOrEqual(2);
    });
  });
});
@@ -129,10 +129,18 @@ export class HealthProbeRunner {
         result = await this.probeLiveness(server, timeoutMs);
       } else {
         const readinessCheck = healthCheck as HealthCheckSpec & { tool: string };
-        if (server.transport === 'SSE' || server.transport === 'STREAMABLE_HTTP') {
-          result = await this.probeHttp(instance, server, readinessCheck, timeoutMs);
+        if (server.transport === 'STDIO') {
+          // Route STDIO readiness through the proxy so probes hit the live
+          // running container rather than spawning a fresh process inside
+          // it. The legacy `probeStdio` (docker-exec a synthetic Node script
+          // that re-spawns the package binary) only worked for
+          // packageName-based servers — image-based STDIO servers (gitea,
+          // docmost) returned a fake-unhealthy "No packageName or command"
+          // before they even tried the tool. Going through mcpProxyService
+          // also means readiness failures match production failures exactly.
+          result = await this.probeReadinessViaProxy(server, readinessCheck, timeoutMs);
         } else {
-          result = await this.probeStdio(instance, server, readinessCheck, timeoutMs);
+          result = await this.probeHttp(instance, server, readinessCheck, timeoutMs);
         }
       }
     } catch (err) {
@@ -188,6 +196,71 @@ export class HealthProbeRunner {
     return result;
   }
 
+  /**
+   * Readiness probe via McpProxyService — sends `tools/call` against the
+   * configured probe tool through the live running instance. Used by
+   * STDIO servers; HTTP/SSE servers go through the bespoke `probeHttp`
+   * paths that connect directly to the container's IP+port (those work
+   * fine and are kept as-is to minimise the diff in this PR).
+   *
+   * If the tool returns a JSON-RPC `error` (e.g. gitea-mcp-server's
+   * "token is required" when GITEA_ACCESS_TOKEN didn't resolve), we mark
+   * the instance unhealthy with the upstream error message. That's how
+   * we catch broken-by-empty-secret cases that liveness (`tools/list`)
+   * would otherwise pass.
+   */
+  private async probeReadinessViaProxy(
+    server: McpServer,
+    healthCheck: HealthCheckSpec & { tool: string },
+    timeoutMs: number,
+  ): Promise<ProbeResult> {
+    const start = Date.now();
+    if (!this.mcpProxyService) {
+      return { healthy: false, latencyMs: 0, message: 'mcpProxyService not wired — cannot run readiness probe' };
+    }
+
+    const deadline = new Promise<ProbeResult>((resolve) => {
+      setTimeout(() => resolve({
+        healthy: false,
+        latencyMs: timeoutMs,
+        message: `Readiness probe timed out after ${timeoutMs}ms`,
+      }), timeoutMs);
+    });
+
+    const probe = this.mcpProxyService
+      .execute({
+        serverId: server.id,
+        method: 'tools/call',
+        params: { name: healthCheck.tool, arguments: healthCheck.arguments ?? {} },
+      })
+      .then((response): ProbeResult => {
+        const latencyMs = Date.now() - start;
+        if (response.error) {
+          return {
+            healthy: false,
+            latencyMs,
+            message: response.error.message ?? `tools/call ${healthCheck.tool} returned error`,
+          };
+        }
+        // Some servers report tool-level failures inside the result body
+        // (`{ isError: true, content: [...] }`) rather than as JSON-RPC
+        // errors. Treat that as unhealthy too.
+        const result = response.result as { isError?: boolean; content?: Array<{ text?: string }> } | undefined;
+        if (result?.isError) {
+          const text = result.content?.[0]?.text ?? `${healthCheck.tool} returned isError`;
+          return { healthy: false, latencyMs, message: text };
+        }
+        return { healthy: true, latencyMs, message: 'ok' };
+      })
+      .catch((err: unknown): ProbeResult => ({
+        healthy: false,
+        latencyMs: Date.now() - start,
+        message: err instanceof Error ? err.message : String(err),
+      }));
+
+    return Promise.race([probe, deadline]);
+  }
+
   /**
    * Liveness probe — sends tools/list via McpProxyService so the probe traverses
    * the exact code path production clients use. Works uniformly across every
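The hunk above guards the probe with a race between the real call and a timer that resolves (rather than rejects) with an unhealthy result, so a hung upstream can never wedge the runner. A minimal standalone sketch of that pattern, assuming nothing from the actual module: `ProbeResult` is a copy of the diff's shape and `withDeadline`/`fakeProbe` are hypothetical names for illustration.

```typescript
// Deadline pattern: race a probe against a timer that RESOLVES with an
// unhealthy ProbeResult, so callers always get a value, never a hang.
interface ProbeResult {
  healthy: boolean;
  latencyMs: number;
  message: string;
}

async function withDeadline(
  probe: Promise<ProbeResult>,
  timeoutMs: number,
): Promise<ProbeResult> {
  const deadline = new Promise<ProbeResult>((resolve) => {
    setTimeout(
      () => resolve({ healthy: false, latencyMs: timeoutMs, message: `timed out after ${timeoutMs}ms` }),
      timeoutMs,
    );
  });
  return Promise.race([probe, deadline]);
}

// A probe that never settles — the deadline wins after 50ms.
const fakeProbe = new Promise<ProbeResult>(() => {});
withDeadline(fakeProbe, 50).then((r) => console.log(r.healthy, r.message));
// prints: false timed out after 50ms
```

Resolving instead of rejecting keeps the caller's happy path free of try/catch; the timeout is just another unhealthy result.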
@@ -463,122 +536,14 @@ export class HealthProbeRunner {
     }
   }
 
-  /**
-   * Probe a STDIO MCP server by running `docker exec` with a disposable Node.js
-   * script that pipes JSON-RPC messages into the package binary.
-   */
-  private async probeStdio(
-    instance: McpInstance,
-    server: McpServer,
-    healthCheck: HealthCheckSpec & { tool: string },
-    timeoutMs: number,
-  ): Promise<ProbeResult> {
-    if (!instance.containerId) {
-      return { healthy: false, latencyMs: 0, message: 'No container ID' };
-    }
-
-    const start = Date.now();
-    const packageName = server.packageName as string | null;
-    const command = server.command as string[] | null;
-
-    // Determine how to spawn the MCP server inside the container
-    let spawnCmd: string[];
-    if (packageName) {
-      spawnCmd = ['npx', '--prefer-offline', '-y', packageName];
-    } else if (command && command.length > 0) {
-      spawnCmd = command;
-    } else {
-      return { healthy: false, latencyMs: 0, message: 'No packageName or command for STDIO server' };
-    }
-
-    // Build JSON-RPC messages for the health probe
-    const initMsg = JSON.stringify({
-      jsonrpc: '2.0', id: 1, method: 'initialize',
-      params: {
-        protocolVersion: '2024-11-05',
-        capabilities: {},
-        clientInfo: { name: 'mcpctl-health', version: '0.0.1' },
-      },
-    });
-    const initializedMsg = JSON.stringify({
-      jsonrpc: '2.0', method: 'notifications/initialized',
-    });
-    const toolCallMsg = JSON.stringify({
-      jsonrpc: '2.0', id: 2, method: 'tools/call',
-      params: { name: healthCheck.tool, arguments: healthCheck.arguments ?? {} },
-    });
-
-    // Use a Node.js inline script that:
-    // 1. Spawns the MCP server binary
-    // 2. Sends initialize + initialized + tool call via stdin
-    // 3. Reads responses from stdout
-    // 4. Exits with 0 if tool call succeeds, 1 if it fails
-    const spawnArgs = JSON.stringify(spawnCmd);
-    const probeScript = `
-      const { spawn } = require('child_process');
-      const args = ${spawnArgs};
-      const proc = spawn(args[0], args.slice(1), { stdio: ['pipe', 'pipe', 'pipe'] });
-      let output = '';
-      let responded = false;
-      proc.stdout.on('data', d => {
-        output += d;
-        const lines = output.split('\\n');
-        for (const line of lines) {
-          if (!line.trim()) continue;
-          try {
-            const msg = JSON.parse(line);
-            if (msg.id === 2) {
-              responded = true;
-              if (msg.error) {
-                process.stdout.write('ERROR:' + (msg.error.message || 'unknown'));
-                proc.kill();
-                process.exit(1);
-              } else {
-                process.stdout.write('OK');
-                proc.kill();
-                process.exit(0);
-              }
-            }
-          } catch {}
-        }
-        output = lines[lines.length - 1] || '';
-      });
-      proc.stderr.on('data', () => {});
-      proc.on('error', e => { process.stdout.write('ERROR:' + e.message); process.exit(1); });
-      proc.on('exit', (code) => { if (!responded) { process.stdout.write('ERROR:process exited ' + code); process.exit(1); } });
-      setTimeout(() => { if (!responded) { process.stdout.write('ERROR:timeout'); proc.kill(); process.exit(1); } }, ${timeoutMs - 2000});
-      proc.stdin.write(${JSON.stringify(initMsg)} + '\\n');
-      setTimeout(() => {
-        proc.stdin.write(${JSON.stringify(initializedMsg)} + '\\n');
-        setTimeout(() => {
-          proc.stdin.write(${JSON.stringify(toolCallMsg)} + '\\n');
-        }, 500);
-      }, 500);
-    `.trim();
-
-    try {
-      const result = await this.orchestrator.execInContainer(
-        instance.containerId,
-        ['node', '-e', probeScript],
-        { timeoutMs },
-      );
-
-      const latencyMs = Date.now() - start;
-
-      if (result.exitCode === 0 && result.stdout.includes('OK')) {
-        return { healthy: true, latencyMs, message: 'ok' };
-      }
-
-      // Extract error message
-      const errorMatch = result.stdout.match(/ERROR:(.*)/);
-      const errorMsg = errorMatch?.[1] ?? (result.stderr.trim() || `exit code ${result.exitCode}`);
-      return { healthy: false, latencyMs, message: errorMsg };
-    } catch (err) {
-      return {
-        healthy: false,
-        latencyMs: Date.now() - start,
-        message: err instanceof Error ? err.message : String(err),
-      };
-    }
-  }
+  // Note: a previous `probeStdio` implementation existed here that ran a
+  // disposable Node script inside the container via `docker exec`,
+  // re-spawning the package binary and piping JSON-RPC into it. It only
+  // worked for packageName-based servers (the spawn step required an
+  // npx-compatible package); image-based STDIO servers like
+  // gitea-mcp-server fell through with "No packageName or command" and
+  // were always reported unhealthy for the wrong reason. STDIO readiness
+  // now goes through `probeReadinessViaProxy` which calls the live
+  // running container — same code path as production traffic — and
+  // surfaces the upstream error verbatim.
 }
@@ -1,4 +1,4 @@
-import type { McpInstance } from '@prisma/client';
+import type { McpInstance, McpServer } from '@prisma/client';
 import type { IMcpInstanceRepository, IMcpServerRepository } from '../repositories/interfaces.js';
 import type { McpOrchestrator, ContainerSpec, ContainerInfo } from './orchestrator.js';
 import { NotFoundError } from './mcp-server.service.js';
@@ -13,6 +13,36 @@ const RUNNER_IMAGES: Record<string, string> = {
 /** Network for MCP server containers (matches docker-compose mcp-servers network). */
 const MCP_SERVERS_NETWORK = process.env['MCPD_MCP_NETWORK'] ?? 'mcp-servers';
 
+/**
+ * Backoff schedule for instance startup failures (env resolution, container
+ * creation, etc). Mirrors Kubernetes-style escalation: fast retries for
+ * transient hiccups, then a longer pause once it's clear something is
+ * persistently wrong.
+ *
+ * The retry state lives on `McpInstance.metadata` (no schema migration
+ * needed) and is preserved across reconcile cycles by the in-place
+ * `retryInstance` path so attemptCount actually accumulates.
+ */
+const FAST_RETRY_MS = 30_000; // first 5 attempts: 30s apart
+const SLOW_RETRY_MS = 5 * 60_000; // afterwards: 5 minutes
+const MAX_FAST_RETRIES = 5;
+
+interface RetryMetadata {
+  error?: string;
+  attemptCount?: number;
+  lastAttemptAt?: string;
+  nextRetryAt?: string;
+  [k: string]: unknown;
+}
+
+function readRetryMeta(instance: McpInstance): RetryMetadata {
+  return (instance.metadata ?? {}) as RetryMetadata;
+}
+
+function nextDelayMs(attemptCount: number): number {
+  return attemptCount <= MAX_FAST_RETRIES ? FAST_RETRY_MS : SLOW_RETRY_MS;
+}
+
 export class InvalidStateError extends Error {
   readonly statusCode = 409;
   constructor(message: string) {
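The two-tier schedule in the hunk above (five fast 30-second retries, then five-minute retries) can be sketched standalone. The constants and `nextDelayMs` mirror what the diff adds; this snippet is illustrative only and not the actual service module.

```typescript
// Escalating backoff: attempts 1..5 retry 30s apart, attempt 6+ waits 5min.
const FAST_RETRY_MS = 30_000;
const SLOW_RETRY_MS = 5 * 60_000;
const MAX_FAST_RETRIES = 5;

function nextDelayMs(attemptCount: number): number {
  return attemptCount <= MAX_FAST_RETRIES ? FAST_RETRY_MS : SLOW_RETRY_MS;
}

// Delays for the first seven failures, in ms.
const schedule = [1, 2, 3, 4, 5, 6, 7].map(nextDelayMs);
console.log(schedule.join(','));
// → 30000,30000,30000,30000,30000,300000,300000
```

The step function (rather than, say, exponential doubling) keeps the behaviour trivially predictable from the stored `attemptCount` alone, with no extra state to persist.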
@@ -118,8 +148,12 @@ export class InstanceService {
    * Reconcile ALL servers — the operator loop.
    *
    * For every server with replicas > 0, ensures the correct number of
-   * healthy instances exist. Cleans up ERROR instances and starts
-   * replacements. This is the core self-healing mechanism.
+   * healthy instances exist. ERROR instances are not blindly recreated:
+   * within their `nextRetryAt` window they're left alone (and counted
+   * against the replica budget so we don't churn replacements while one
+   * is in backoff); past their window they're retried in-place via
+   * `retryInstance` so attemptCount accumulates and backoff escalates
+   * correctly.
    */
   async reconcileAll(): Promise<{ reconciled: number; errors: string[] }> {
     await this.syncStatus();
@@ -128,6 +162,8 @@ export class InstanceService {
     let reconciled = 0;
     const errors: string[] = [];
 
+    const now = Date.now();
+
     for (const server of servers) {
       if (server.replicas <= 0) continue;
 
@@ -136,17 +172,38 @@ export class InstanceService {
         const active = instances.filter((i) => i.status === 'RUNNING' || i.status === 'STARTING');
         const errored = instances.filter((i) => i.status === 'ERROR');
 
-        // Clean up ERROR instances so they don't accumulate
+        // Partition ERROR instances by whether their backoff window has elapsed.
+        const dueForRetry: McpInstance[] = [];
+        const stillWaiting: McpInstance[] = [];
         for (const inst of errored) {
-          await this.removeOne(inst);
+          const meta = readRetryMeta(inst);
+          const ts = meta.nextRetryAt ? Date.parse(meta.nextRetryAt) : 0;
+          if (Number.isNaN(ts) || ts <= now) {
+            dueForRetry.push(inst);
+          } else {
+            stillWaiting.push(inst);
+          }
         }
 
-        // Scale up if needed
-        const toStart = server.replicas - active.length;
+        // Retry elapsed ones in-place. This preserves attemptCount across
+        // attempts so the 30s × 5 → 5min schedule actually escalates.
+        for (const inst of dueForRetry) {
+          await this.retryInstance(inst);
+        }
+
+        // Scale up only if we don't already have enough live attempts.
+        // Live attempts = currently-running OR -starting + still-waiting
+        // (in backoff) + just-retried (now STARTING via retryInstance).
+        // Counting waiting + retried against the budget prevents tight
+        // create-fail-create churn while previous attempts work through
+        // their backoff schedule.
+        const toStart = server.replicas - active.length - stillWaiting.length - dueForRetry.length;
         if (toStart > 0) {
           for (let i = 0; i < toStart; i++) {
             await this.startOne(server.id);
           }
+        }
+        if (toStart > 0 || dueForRetry.length > 0) {
           reconciled++;
         }
       } catch (err) {
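The partition step in the hunk above is worth seeing in isolation: a missing or unparsable `nextRetryAt` counts as "due now", and everything else stays in backoff. A self-contained sketch, where `Inst` and `partitionErrored` are hypothetical stand-ins for `McpInstance` and the inline loop (not names from the diff):

```typescript
// Split errored instances into due-for-retry vs still-in-backoff.
interface Inst {
  id: string;
  metadata?: { nextRetryAt?: string };
}

function partitionErrored(errored: Inst[], now: number): { dueForRetry: Inst[]; stillWaiting: Inst[] } {
  const dueForRetry: Inst[] = [];
  const stillWaiting: Inst[] = [];
  for (const inst of errored) {
    // Missing/garbage nextRetryAt parses to 0 or NaN → due immediately.
    const ts = inst.metadata?.nextRetryAt ? Date.parse(inst.metadata.nextRetryAt) : 0;
    if (Number.isNaN(ts) || ts <= now) {
      dueForRetry.push(inst);
    } else {
      stillWaiting.push(inst);
    }
  }
  return { dueForRetry, stillWaiting };
}

const now = Date.parse('2024-01-01T00:00:00Z');
const { dueForRetry, stillWaiting } = partitionErrored(
  [
    { id: 'a' }, // no nextRetryAt → retry is due
    { id: 'b', metadata: { nextRetryAt: '2024-01-01T00:01:00Z' } }, // waiting
  ],
  now,
);
console.log(dueForRetry.map((i) => i.id).join(','), '|', stillWaiting.map((i) => i.id).join(','));
// → a | b
```

Both buckets are then subtracted from the replica budget, which is what stops the reconciler from spawning replacements while a failed attempt is still waiting out its window.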
@@ -220,7 +277,12 @@ export class InstanceService {
     return this.orchestrator.getContainerLogs(instance.containerId, opts);
   }
 
-  /** Start a single instance for a server. */
+  /**
+   * Start a single instance for a server. Creates a fresh `STARTING` row
+   * and hands off to `attemptStart` for the env+container work. On
+   * failure, `attemptStart` marks the row `ERROR` with a backoff-aware
+   * `nextRetryAt`; the reconciler picks it up later via `retryInstance`.
+   */
   private async startOne(serverId: string): Promise<McpInstance> {
     const server = await this.serverRepo.findById(serverId);
     if (!server) throw new NotFoundError(`McpServer '${serverId}' not found`);
@@ -234,6 +296,49 @@ export class InstanceService {
       });
     }
 
+    const instance = await this.instanceRepo.create({
+      serverId,
+      status: 'STARTING',
+    });
+    return this.attemptStart(instance, server);
+  }
+
+  /**
+   * Re-attempt a previously-errored instance in place, preserving its
+   * `attemptCount` so the backoff schedule escalates correctly. Called
+   * by `reconcileAll` for ERROR instances whose `nextRetryAt` has elapsed.
+   */
+  private async retryInstance(instance: McpInstance): Promise<McpInstance> {
+    const server = await this.serverRepo.findById(instance.serverId);
+    if (!server) {
+      // Server was deleted underneath us — nothing to retry against.
+      return this.markInstanceError(instance, 'Server no longer exists');
+    }
+
+    if (server.externalUrl) {
+      // External servers don't need a container; the URL is the contract.
+      return this.instanceRepo.updateStatus(instance.id, 'RUNNING', {
+        metadata: { external: true, url: server.externalUrl },
+      });
+    }
+
+    // Reset transient fields but keep retry counters via the metadata
+    // passed through `attemptStart` → `markInstanceError`.
+    await this.instanceRepo.updateStatus(instance.id, 'STARTING', {});
+    const refreshed = (await this.instanceRepo.findById(instance.id)) ?? instance;
+    return this.attemptStart(refreshed, server);
+  }
+
+  /**
+   * Run the env-resolution + container-creation steps for a STARTING
+   * instance. On any failure, mark the instance `ERROR` with structured
+   * retry metadata. Used by both initial start (`startOne`) and retry
+   * (`retryInstance`).
+   */
+  private async attemptStart(
+    instance: McpInstance,
+    server: McpServer,
+  ): Promise<McpInstance> {
     // Determine image + command based on server config:
     // 1. Explicit dockerImage → use as-is
     // 2. packageName → use runtime-specific runner image (node/python/go/...)
@@ -253,11 +358,6 @@ export class InstanceService {
       image = server.name;
     }
 
-    let instance = await this.instanceRepo.create({
-      serverId,
-      status: 'STARTING',
-    });
-
     try {
       const spec: ContainerSpec = {
         image,
@@ -265,7 +365,7 @@ export class InstanceService {
         hostPort: null,
         network: MCP_SERVERS_NETWORK,
         labels: {
-          'mcpctl.server-id': serverId,
+          'mcpctl.server-id': server.id,
           'mcpctl.instance-id': instance.id,
         },
       };
@@ -283,7 +383,17 @@ export class InstanceService {
         }
       }
 
-      // Resolve env vars from inline values and secret refs
+      // Resolve env vars from inline values and secret refs.
+      //
+      // Failure here is FATAL for the start attempt: a container that
+      // boots without its declared secrets will silently mis-behave (we
+      // saw this with gitea-mcp-server starting up with an empty
+      // GITEA_ACCESS_TOKEN when OpenBao was unreachable, then reporting
+      // "healthy" while every authed call failed). Marking the instance
+      // ERROR with a backoff-aware nextRetryAt is honest; the reconciler
+      // will retry it in-place on the next tick whose nextRetryAt has
+      // elapsed. Optional/missing env vars should be modeled as `value: ""`
+      // entries on the server, not as silent secret-resolution failures.
       if (this.secretResolver) {
         try {
           const resolvedEnv = await resolveServerEnv(server, this.secretResolver);
@@ -291,8 +401,8 @@ export class InstanceService {
             spec.env = resolvedEnv;
           }
         } catch (envErr) {
-          // Log but don't prevent startup — env resolution failures are non-fatal
-          // The container may still work if env vars are optional
+          const msg = envErr instanceof Error ? envErr.message : String(envErr);
+          return this.markInstanceError(instance, `secret resolution failed: ${msg}`);
         }
       }
 
@@ -313,14 +423,39 @@ export class InstanceService {
       }
 
       // Set STARTING — syncStatus will promote to RUNNING once the container is actually ready
-      instance = await this.instanceRepo.updateStatus(instance.id, 'STARTING', updateFields);
+      return this.instanceRepo.updateStatus(instance.id, 'STARTING', updateFields);
     } catch (err) {
-      instance = await this.instanceRepo.updateStatus(instance.id, 'ERROR', {
-        metadata: { error: err instanceof Error ? err.message : String(err) },
-      });
+      return this.markInstanceError(
+        instance,
+        err instanceof Error ? err.message : String(err),
+      );
     }
+  }
 
-    return instance;
+  /**
+   * Mark an instance ERROR with a backoff-aware retry schedule. The
+   * `attemptCount` accumulates across retries (preserved by
+   * `retryInstance` which reuses the same row), so the schedule
+   * actually escalates: 30s × 5 → 5min thereafter.
+   */
+  private async markInstanceError(
+    instance: McpInstance,
+    error: string,
+  ): Promise<McpInstance> {
+    const meta = readRetryMeta(instance);
+    const attemptCount = (typeof meta.attemptCount === 'number' ? meta.attemptCount : 0) + 1;
+    const delayMs = nextDelayMs(attemptCount);
+    const now = new Date();
+    const nextRetryAt = new Date(now.getTime() + delayMs).toISOString();
+    return this.instanceRepo.updateStatus(instance.id, 'ERROR', {
+      metadata: {
+        ...meta,
+        error,
+        attemptCount,
+        lastAttemptAt: now.toISOString(),
+        nextRetryAt,
+      },
    });
   }
 
   /** Stop and remove a single instance. */
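The bookkeeping that `markInstanceError` performs on the metadata blob can be sketched without any repository involved. `bumpRetryMeta` is a hypothetical pure-function rendering for illustration; the `RetryMetadata` shape and constants are copies of what the diff adds.

```typescript
// Pure sketch of the retry-metadata update: bump attemptCount, stamp
// lastAttemptAt, and schedule nextRetryAt from the escalating delay.
interface RetryMetadata {
  error?: string;
  attemptCount?: number;
  lastAttemptAt?: string;
  nextRetryAt?: string;
}

const FAST_RETRY_MS = 30_000;
const SLOW_RETRY_MS = 5 * 60_000;
const MAX_FAST_RETRIES = 5;

function bumpRetryMeta(meta: RetryMetadata, error: string, now = new Date()): RetryMetadata {
  const attemptCount = (meta.attemptCount ?? 0) + 1;
  const delayMs = attemptCount <= MAX_FAST_RETRIES ? FAST_RETRY_MS : SLOW_RETRY_MS;
  return {
    ...meta, // preserve any prior fields on the same row
    error,
    attemptCount,
    lastAttemptAt: now.toISOString(),
    nextRetryAt: new Date(now.getTime() + delayMs).toISOString(),
  };
}

// Sixth failure crosses into the slow (5-minute) window.
const m = bumpRetryMeta({ attemptCount: 5 }, 'boom', new Date(0));
console.log(m.attemptCount, m.nextRetryAt);
// → 6 1970-01-01T00:05:00.000Z
```

Because the update spreads the previous metadata and writes back to the same row, the counter survives reconcile cycles, which is exactly what the retry-in-place design depends on.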
@@ -334,20 +334,93 @@ describe('InstanceService', () => {
       expect(instanceRepo.create).not.toHaveBeenCalled();
     });
 
-    it('cleans up ERROR instances and creates replacements', async () => {
+    it('retries ERROR instances in-place when their backoff has elapsed (no delete, no new row)', async () => {
       const server = makeServer({ id: 'srv-1', replicas: 1 });
       vi.mocked(serverRepo.findAll).mockResolvedValue([server]);
       vi.mocked(serverRepo.findById).mockResolvedValue(server);
+      // ERROR instance with no nextRetryAt → retry is due immediately.
       vi.mocked(instanceRepo.findAll).mockResolvedValue([
         makeInstance({ id: 'inst-dead', serverId: 'srv-1', status: 'ERROR', containerId: 'ctr-dead' }),
       ]);
 
       const result = await service.reconcileAll();
 
-      // Should delete ERROR instance and create a new one
+      // Retry-in-place semantics: don't delete the row, don't create a
+      // replacement. attemptCount needs to live on the same row so the
+      // backoff schedule can actually escalate.
+      expect(instanceRepo.delete).not.toHaveBeenCalled();
+      expect(instanceRepo.create).not.toHaveBeenCalled();
+      // retryInstance flips the row STARTING before attemptStart runs.
+      expect(instanceRepo.updateStatus).toHaveBeenCalledWith('inst-dead', 'STARTING', expect.anything());
       expect(result.reconciled).toBe(1);
-      expect(instanceRepo.delete).toHaveBeenCalledWith('inst-dead');
-      expect(instanceRepo.create).toHaveBeenCalled();
+    });
+
+    it('leaves ERROR instances alone while their nextRetryAt is in the future', async () => {
+      const server = makeServer({ id: 'srv-1', replicas: 1 });
+      vi.mocked(serverRepo.findAll).mockResolvedValue([server]);
+      vi.mocked(serverRepo.findById).mockResolvedValue(server);
+      const futureRetry = new Date(Date.now() + 60_000).toISOString();
+      vi.mocked(instanceRepo.findAll).mockResolvedValue([
+        makeInstance({
+          id: 'inst-waiting',
+          serverId: 'srv-1',
+          status: 'ERROR',
+          metadata: { nextRetryAt: futureRetry, attemptCount: 2 },
+        }),
+      ]);
+
+      const result = await service.reconcileAll();
+
+      // Within the backoff window the reconciler must not delete the row,
+      // not retry it, and not spawn a replacement (counting it against
+      // the replica budget is what prevents tight create-fail-create churn).
+      expect(instanceRepo.delete).not.toHaveBeenCalled();
+      expect(instanceRepo.create).not.toHaveBeenCalled();
+      expect(orchestrator.createContainer).not.toHaveBeenCalled();
+      expect(result.reconciled).toBe(0);
+    });
+
+    it('escalates the backoff: attemptCount + nextRetryAt persist on retry failures', async () => {
+      const server = makeServer({ id: 'srv-1', replicas: 1 });
+      vi.mocked(serverRepo.findAll).mockResolvedValue([server]);
+      vi.mocked(serverRepo.findById).mockResolvedValue(server);
+
+      // Fail container creation so attemptStart goes down the markInstanceError path.
+      vi.mocked(orchestrator.createContainer).mockRejectedValue(new Error('boom'));
+
+      // Existing ERROR instance with attemptCount=2 (so the next failure
+      // produces attemptCount=3, still inside the fast-retry window).
+      vi.mocked(instanceRepo.findAll).mockResolvedValue([
+        makeInstance({
+          id: 'inst-1',
+          serverId: 'srv-1',
+          status: 'ERROR',
+          metadata: { error: 'previous failure', attemptCount: 2, nextRetryAt: new Date(Date.now() - 1000).toISOString() },
+        }),
+      ]);
+      // retryInstance refreshes via findById; let it return the same row.
+      vi.mocked(instanceRepo.findById).mockImplementation(async () => makeInstance({
+        id: 'inst-1',
+        serverId: 'srv-1',
+        status: 'STARTING',
+        metadata: { error: 'previous failure', attemptCount: 2, nextRetryAt: new Date(Date.now() - 1000).toISOString() },
+      }));
+
+      await service.reconcileAll();
+
+      // Look at the last updateStatus call — it should be the ERROR transition
+      // with attemptCount bumped to 3.
+      const errorCalls = vi.mocked(instanceRepo.updateStatus).mock.calls.filter(
+        (c) => c[1] === 'ERROR',
+      );
+      expect(errorCalls.length).toBeGreaterThan(0);
+      const lastErrorCall = errorCalls[errorCalls.length - 1]!;
+      const meta = (lastErrorCall[2] as { metadata?: Record<string, unknown> } | undefined)?.metadata;
+      expect(meta).toBeDefined();
+      expect((meta as Record<string, unknown>)['attemptCount']).toBe(3);
+      expect((meta as Record<string, unknown>)['nextRetryAt']).toBeTypeOf('string');
+      // Reason should reference the boom we threw.
+      expect(String((meta as Record<string, unknown>)['error'])).toContain('boom');
     });
 
     it('reconciles multiple servers independently', async () => {
@@ -192,25 +192,28 @@ describe('HealthProbeRunner', () => {
     expect(serverRepo.findById).not.toHaveBeenCalled();
   });
 
-  it('probes STDIO instance with exec and marks healthy on success', async () => {
+  it('probes STDIO instance via mcpProxyService and marks healthy on success', async () => {
     const instance = makeInstance();
     const server = makeServer();
 
     vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
     vi.mocked(serverRepo.findById).mockResolvedValue(server);
-    vi.mocked(orchestrator.execInContainer).mockResolvedValue({
-      exitCode: 0,
-      stdout: 'OK',
-      stderr: '',
+    vi.mocked(mcpProxyService.execute).mockResolvedValue({
+      jsonrpc: '2.0', id: 1,
+      result: { content: [{ type: 'text', text: 'ok' }] },
     });
 
     await runner.tick();
 
-    expect(orchestrator.execInContainer).toHaveBeenCalledWith(
-      'container-abc',
-      expect.arrayContaining(['node', '-e']),
-      expect.objectContaining({ timeoutMs: 10000 }),
-    );
+    // STDIO readiness now goes through the proxy (the live container),
+    // not via docker-exec into a synthetic spawn — see comment on
+    // probeReadinessViaProxy for why.
+    expect(orchestrator.execInContainer).not.toHaveBeenCalled();
+    expect(mcpProxyService.execute).toHaveBeenCalledWith({
+      serverId: 'srv-1',
+      method: 'tools/call',
+      params: { name: 'list_datasources', arguments: {} },
+    });
+
     expect(instanceRepo.updateStatus).toHaveBeenCalledWith(
       'inst-1',
@@ -225,6 +228,57 @@ describe('HealthProbeRunner', () => {
     );
   });
 
+  it('marks unhealthy when proxy returns a JSON-RPC error (e.g. broken-secret auth failure)', async () => {
+    const instance = makeInstance();
+    const server = makeServer({
+      healthCheck: { tool: 'get_me', intervalSeconds: 0, failureThreshold: 1 } as McpServer['healthCheck'],
+    });
+
+    vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
+    vi.mocked(serverRepo.findById).mockResolvedValue(server);
+    vi.mocked(mcpProxyService.execute).mockResolvedValue({
+      jsonrpc: '2.0', id: 1,
+      error: { code: -32603, message: 'token is required' },
+    });
+
+    await runner.tick();
+
+    expect(instanceRepo.updateStatus).toHaveBeenCalledWith(
+      'inst-1',
+      'RUNNING',
+      expect.objectContaining({
+        healthStatus: 'unhealthy',
+        events: expect.arrayContaining([
+          expect.objectContaining({ type: 'Warning', message: expect.stringContaining('token is required') }),
+        ]),
+      }),
+    );
+  });
+
+  it('marks unhealthy when proxy returns a tool-level error in result.isError', async () => {
+    const instance = makeInstance();
+    const server = makeServer({
+      healthCheck: { tool: 'get_me', intervalSeconds: 0, failureThreshold: 1 } as McpServer['healthCheck'],
+    });
+
+    vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
+    vi.mocked(serverRepo.findById).mockResolvedValue(server);
+    vi.mocked(mcpProxyService.execute).mockResolvedValue({
+      jsonrpc: '2.0', id: 1,
+      result: { isError: true, content: [{ type: 'text', text: 'auth failed: token is required' }] },
+    });
+
+    await runner.tick();
+
+    const events = vi.mocked(instanceRepo.updateStatus).mock.calls[0]?.[2]?.events as Array<{ message: string }> | undefined;
+    expect(events?.[events.length - 1]?.message).toContain('auth failed');
+    expect(instanceRepo.updateStatus).toHaveBeenCalledWith(
+      'inst-1',
+      'RUNNING',
+      expect.objectContaining({ healthStatus: 'unhealthy' }),
+    );
+  });
+
it('marks unhealthy after failureThreshold consecutive failures', async () => {
|
it('marks unhealthy after failureThreshold consecutive failures', async () => {
|
||||||
const instance = makeInstance();
|
const instance = makeInstance();
|
||||||
const healthCheck: HealthCheckSpec = {
|
const healthCheck: HealthCheckSpec = {
|
||||||
@@ -237,10 +291,9 @@ describe('HealthProbeRunner', () => {
|
|||||||
|
|
||||||
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
||||||
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
||||||
vi.mocked(orchestrator.execInContainer).mockResolvedValue({
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
exitCode: 1,
|
jsonrpc: '2.0', id: 1,
|
||||||
stdout: 'ERROR:connection refused',
|
error: { code: -32603, message: 'connection refused' },
|
||||||
stderr: '',
|
|
||||||
});
|
});
|
||||||
|
|
||||||
// First failure → degraded
|
// First failure → degraded
|
||||||
@@ -274,15 +327,15 @@ describe('HealthProbeRunner', () => {
|
|||||||
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
||||||
|
|
||||||
// Two failures
|
// Two failures
|
||||||
vi.mocked(orchestrator.execInContainer).mockResolvedValue({
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
exitCode: 1, stdout: 'ERROR:fail', stderr: '',
|
jsonrpc: '2.0', id: 1, error: { code: -32603, message: 'fail' },
|
||||||
});
|
});
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
|
|
||||||
// Then success — should reset to healthy
|
// Then success — should reset to healthy
|
||||||
vi.mocked(orchestrator.execInContainer).mockResolvedValue({
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
exitCode: 0, stdout: 'OK', stderr: '',
|
jsonrpc: '2.0', id: 1, result: {},
|
||||||
});
|
});
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
|
|
||||||
@@ -290,13 +343,16 @@ describe('HealthProbeRunner', () => {
|
|||||||
expect(lastCall?.[2]).toEqual(expect.objectContaining({ healthStatus: 'healthy' }));
|
expect(lastCall?.[2]).toEqual(expect.objectContaining({ healthStatus: 'healthy' }));
|
||||||
});
|
});
|
||||||
|
|
||||||
it('handles exec timeout as failure', async () => {
|
it('handles probe timeout as failure', async () => {
|
||||||
const instance = makeInstance();
|
const instance = makeInstance();
|
||||||
const server = makeServer();
|
const server = makeServer({
|
||||||
|
healthCheck: { tool: 'list_datasources', intervalSeconds: 0, timeoutSeconds: 0.05, failureThreshold: 3 } as unknown as McpServer['healthCheck'],
|
||||||
|
});
|
||||||
|
|
||||||
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
||||||
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
||||||
vi.mocked(orchestrator.execInContainer).mockRejectedValue(new Error('Exec timed out after 10000ms'));
|
// Hang forever — the probe's internal deadline should fire instead.
|
||||||
|
vi.mocked(mcpProxyService.execute).mockImplementation(() => new Promise(() => { /* never resolves */ }));
|
||||||
|
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
|
|
||||||
@@ -323,8 +379,8 @@ describe('HealthProbeRunner', () => {
|
|||||||
|
|
||||||
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
||||||
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
||||||
vi.mocked(orchestrator.execInContainer).mockResolvedValue({
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
exitCode: 0, stdout: 'OK', stderr: '',
|
jsonrpc: '2.0', id: 1, result: {},
|
||||||
});
|
});
|
||||||
|
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
@@ -343,17 +399,17 @@ describe('HealthProbeRunner', () => {
|
|||||||
|
|
||||||
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
||||||
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
||||||
vi.mocked(orchestrator.execInContainer).mockResolvedValue({
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
exitCode: 0, stdout: 'OK', stderr: '',
|
jsonrpc: '2.0', id: 1, result: {},
|
||||||
});
|
});
|
||||||
|
|
||||||
// First tick: should probe
|
// First tick: should probe
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
expect(orchestrator.execInContainer).toHaveBeenCalledTimes(1);
|
expect(mcpProxyService.execute).toHaveBeenCalledTimes(1);
|
||||||
|
|
||||||
// Second tick immediately: should skip (300s interval not elapsed)
|
// Second tick immediately: should skip (300s interval not elapsed)
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
expect(orchestrator.execInContainer).toHaveBeenCalledTimes(1);
|
expect(mcpProxyService.execute).toHaveBeenCalledTimes(1);
|
||||||
});
|
});
|
||||||
|
|
||||||
it('cleans up probe states for removed instances', async () => {
|
it('cleans up probe states for removed instances', async () => {
|
||||||
@@ -364,9 +420,12 @@ describe('HealthProbeRunner', () => {
|
|||||||
|
|
||||||
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
||||||
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
vi.mocked(serverRepo.findById).mockResolvedValue(server);
|
||||||
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
|
jsonrpc: '2.0', id: 1, result: {},
|
||||||
|
});
|
||||||
|
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
expect(orchestrator.execInContainer).toHaveBeenCalledTimes(1);
|
expect(mcpProxyService.execute).toHaveBeenCalledTimes(1);
|
||||||
|
|
||||||
// Instance removed
|
// Instance removed
|
||||||
vi.mocked(instanceRepo.findAll).mockResolvedValue([]);
|
vi.mocked(instanceRepo.findAll).mockResolvedValue([]);
|
||||||
@@ -375,7 +434,7 @@ describe('HealthProbeRunner', () => {
|
|||||||
// Re-add same instance — should probe again (state was cleaned)
|
// Re-add same instance — should probe again (state was cleaned)
|
||||||
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
vi.mocked(instanceRepo.findAll).mockResolvedValue([instance]);
|
||||||
await runner.tick();
|
await runner.tick();
|
||||||
expect(orchestrator.execInContainer).toHaveBeenCalledTimes(2);
|
expect(mcpProxyService.execute).toHaveBeenCalledTimes(2);
|
||||||
});
|
});
|
||||||
|
|
||||||
it('skips STDIO instances without containerId', async () => {
|
it('skips STDIO instances without containerId', async () => {
|
||||||
@@ -397,8 +456,8 @@ describe('HealthProbeRunner', () => {
|
|||||||
arguments: {},
|
arguments: {},
|
||||||
};
|
};
|
||||||
|
|
||||||
vi.mocked(orchestrator.execInContainer).mockResolvedValue({
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
exitCode: 0, stdout: 'OK', stderr: '',
|
jsonrpc: '2.0', id: 1, result: {},
|
||||||
});
|
});
|
||||||
|
|
||||||
const result = await runner.probeInstance(instance, server, healthCheck);
|
const result = await runner.probeInstance(instance, server, healthCheck);
|
||||||
@@ -407,15 +466,14 @@ describe('HealthProbeRunner', () => {
|
|||||||
expect(result.message).toBe('ok');
|
expect(result.message).toBe('ok');
|
||||||
});
|
});
|
||||||
|
|
||||||
it('handles STDIO exec failure with error message', async () => {
|
it('surfaces upstream JSON-RPC error message verbatim', async () => {
|
||||||
const instance = makeInstance();
|
const instance = makeInstance();
|
||||||
const server = makeServer();
|
const server = makeServer();
|
||||||
const healthCheck: HealthCheckSpec = { tool: 'list_datasources', arguments: {} };
|
const healthCheck: HealthCheckSpec = { tool: 'list_datasources', arguments: {} };
|
||||||
|
|
||||||
vi.mocked(orchestrator.execInContainer).mockResolvedValue({
|
vi.mocked(mcpProxyService.execute).mockResolvedValue({
|
||||||
exitCode: 1,
|
jsonrpc: '2.0', id: 1,
|
||||||
stdout: 'ERROR:ECONNREFUSED 10.0.0.1:3000',
|
error: { code: -32603, message: 'ECONNREFUSED 10.0.0.1:3000' },
|
||||||
stderr: '',
|
|
||||||
});
|
});
|
||||||
|
|
||||||
const result = await runner.probeInstance(instance, server, healthCheck);
|
const result = await runner.probeInstance(instance, server, healthCheck);
|
||||||
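The tests above assert a consistent mapping from proxy responses to health outcomes: a JSON-RPC `error` member, or a result carrying `isError`, marks the instance unhealthy (with the upstream message surfaced verbatim), while any other `result` counts as success. A minimal standalone sketch of that classification — `JsonRpcResponse`, `ProbeOutcome`, and `classifyProbeResponse` are hypothetical names for illustration, not the repo's actual types:

```typescript
// Hypothetical sketch of the response classification the tests exercise.
interface JsonRpcResponse {
  jsonrpc: '2.0';
  id: number;
  result?: { isError?: boolean; content?: Array<{ type: string; text: string }> } & Record<string, unknown>;
  error?: { code: number; message: string };
}

interface ProbeOutcome {
  healthy: boolean;
  message: string;
}

function classifyProbeResponse(res: JsonRpcResponse): ProbeOutcome {
  // Transport/server-level failure: the JSON-RPC error member.
  if (res.error) {
    return { healthy: false, message: res.error.message };
  }
  // Tool-level failure: MCP convention reports it via result.isError.
  if (res.result?.isError) {
    const text = res.result.content?.map((c) => c.text).join('\n') ?? 'tool error';
    return { healthy: false, message: text };
  }
  return { healthy: true, message: 'ok' };
}

console.log(classifyProbeResponse({ jsonrpc: '2.0', id: 1, error: { code: -32603, message: 'token is required' } }));
// → { healthy: false, message: 'token is required' }
```

Note the ordering: the `error` member is checked before `result.isError`, matching the two separate unhealthy cases the tests cover.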
@@ -12,17 +12,26 @@
   },
   "dependencies": {
     "@monaco-editor/react": "^4.7.0",
+    "class-variance-authority": "^0.7.1",
+    "clsx": "^2.1.1",
+    "diff": "^5.2.0",
+    "geist": "^1.5.1",
+    "lucide-react": "^0.487.0",
     "react": "^19.2.5",
     "react-dom": "^19.2.5",
-    "react-router-dom": "^7.7.0"
+    "react-router-dom": "^7.7.0",
+    "tailwind-merge": "^3.3.1"
   },
   "devDependencies": {
+    "@tailwindcss/vite": "^4.1.16",
     "@testing-library/jest-dom": "^6.7.0",
     "@testing-library/react": "^16.3.0",
+    "@types/diff": "^5.2.3",
     "@types/react": "^19.2.14",
     "@types/react-dom": "^19.2.0",
     "@vitejs/plugin-react": "^5.1.0",
     "jsdom": "^28.0.0",
+    "tailwindcss": "^4.1.16",
     "vite": "^7.2.0"
   }
 }
@@ -9,6 +9,11 @@ import { ProjectPromptsPage } from './pages/ProjectPrompts';
 import { AgentsPage } from './pages/Agents';
 import { AgentDetailPage } from './pages/AgentDetail';
 import { PersonalityDetailPage } from './pages/PersonalityDetail';
+import { DashboardPage } from './pages/Dashboard';
+import { SkillsPage } from './pages/Skills';
+import { SkillDetailPage } from './pages/SkillDetail';
+import { ProposalsPage } from './pages/Proposals';
+import { ProposalDetailPage } from './pages/ProposalDetail';

 export function App(): React.JSX.Element {
   const [tokenPresent, setTokenPresent] = useState(getToken() !== null);
@@ -28,13 +33,19 @@ export function App(): React.JSX.Element {
     <BrowserRouter basename="/ui">
       <Routes>
         <Route element={<Layout />}>
-          <Route path="/" element={<Navigate to="/projects" replace />} />
+          <Route path="/" element={<Navigate to="/dashboard" replace />} />
+          <Route path="/dashboard" element={<DashboardPage />} />
           <Route path="/projects" element={<ProjectsPage />} />
           <Route path="/projects/:name/prompts" element={<ProjectPromptsPage />} />
           <Route path="/agents" element={<AgentsPage />} />
           <Route path="/agents/:name" element={<AgentDetailPage />} />
           <Route path="/personalities/:id" element={<PersonalityDetailPage />} />
-          <Route path="*" element={<Navigate to="/projects" replace />} />
+          {/* PR-6: Skills + Proposals UI. */}
+          <Route path="/skills" element={<SkillsPage />} />
+          <Route path="/skills/:name" element={<SkillDetailPage />} />
+          <Route path="/proposals" element={<ProposalsPage />} />
+          <Route path="/proposals/:id" element={<ProposalDetailPage />} />
+          <Route path="*" element={<Navigate to="/dashboard" replace />} />
         </Route>
       </Routes>
     </BrowserRouter>
@@ -95,6 +95,72 @@ export interface Personality {
   promptCount: number;
 }

+// PR-3: Skill resource. Mirrors Prompt with the addition of multi-file
+// bundles (`files`) and typed metadata (`hooks`, `mcpServers`,
+// `postInstall`, …).
+export interface Skill {
+  id: string;
+  name: string;
+  description: string;
+  content: string;
+  files: Record<string, string>;
+  metadata: Record<string, unknown>;
+  projectId: string | null;
+  agentId: string | null;
+  priority: number;
+  semver: string;
+  currentRevisionId: string | null;
+  createdAt: string;
+  updatedAt: string;
+  project?: { name: string } | null;
+  agent?: { name: string } | null;
+}
+
+export interface VisibleSkill {
+  id: string;
+  name: string;
+  description: string;
+  semver: string;
+  contentHash: string;
+  metadata: unknown;
+  scope: 'global' | 'project' | 'agent';
+}
+
+// PR-2: ResourceProposal — generic propose/approve/reject queue.
+// Replaces PromptRequest in the new path.
+export interface Proposal {
+  id: string;
+  resourceType: 'prompt' | 'skill';
+  name: string;
+  body: Record<string, unknown>;
+  projectId: string | null;
+  agentId: string | null;
+  createdBySession: string | null;
+  createdByUserId: string | null;
+  status: 'pending' | 'approved' | 'rejected';
+  reviewerNote: string;
+  approvedRevisionId: string | null;
+  createdAt: string;
+  updatedAt: string;
+  project?: { name: string } | null;
+  agent?: { name: string } | null;
+}
+
+// PR-2: ResourceRevision — append-only audit log keyed by
+// (resourceType, resourceId).
+export interface Revision {
+  id: string;
+  resourceType: 'prompt' | 'skill';
+  resourceId: string;
+  semver: string;
+  contentHash: string;
+  body: Record<string, unknown>;
+  authorUserId: string | null;
+  authorSessionId: string | null;
+  note: string;
+  createdAt: string;
+}
+
 export interface PersonalityPrompt {
   promptId: string;
   promptName: string;
src/web/src/components/Diff.tsx (new file, 53 lines)
@@ -0,0 +1,53 @@
+import * as React from 'react';
+import { diffLines } from 'diff';
+import { cn } from '../lib/utils';
+
+/**
+ * Unified-diff renderer — line-by-line color-coded display. Powers the
+ * proposal review and revision-history pages. We use `diff.diffLines`
+ * (text-line granularity) rather than `diff.createPatch` because we
+ * want to render the diff as styled DOM, not as plain monospace text.
+ */
+export function Diff({
+  before,
+  after,
+  className,
+}: {
+  before: string;
+  after: string;
+  className?: string;
+}): React.JSX.Element {
+  const parts = React.useMemo(() => diffLines(before, after), [before, after]);
+
+  return (
+    <pre
+      className={cn(
+        'overflow-x-auto rounded-md border border-(--color-border) bg-(--color-canvas) p-4 font-mono text-xs leading-relaxed',
+        className,
+      )}
+    >
+      {parts.map((part, i) => {
+        const color = part.added
+          ? 'text-(--color-success)'
+          : part.removed
+            ? 'text-(--color-danger)'
+            : 'text-(--color-fg-muted)';
+        const prefix = part.added ? '+ ' : part.removed ? '- ' : ' ';
+        const lines = part.value.split('\n');
+        // diffLines returns trailing newlines as separate lines; drop the
+        // empty tail so we don't render dead rows.
+        const trimmed = lines[lines.length - 1] === '' ? lines.slice(0, -1) : lines;
+        return (
+          <span key={i} className={color}>
+            {trimmed.map((line, j) => (
+              <span key={j} className="block whitespace-pre-wrap">
+                {prefix}
+                {line}
+              </span>
+            ))}
+          </span>
+        );
+      })}
+    </pre>
+  );
+}
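The trailing-newline handling in `Diff.tsx` deserves a note: `split('\n')` on a chunk that ends in `'\n'` yields a trailing empty string, which would otherwise render as a dead row. A standalone sketch of that inline expression — `trimTrailingEmptyLine` is a hypothetical extraction, the component keeps it inline:

```typescript
// Hypothetical extraction of the inline trimming logic in Diff.tsx.
// split('\n') on "a\nb\n" yields ['a', 'b', ''] — the empty tail is
// an artifact of the final newline, not a real line.
function trimTrailingEmptyLine(value: string): string[] {
  const lines = value.split('\n');
  return lines[lines.length - 1] === '' ? lines.slice(0, -1) : lines;
}

console.log(trimTrailingEmptyLine('a\nb\n')); // → [ 'a', 'b' ]
console.log(trimTrailingEmptyLine('a\nb'));   // → [ 'a', 'b' ]
```

Interior blank lines survive (`'a\n\nb'` keeps its middle empty entry); only the artifact at the very end is dropped.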
@@ -1,80 +1,115 @@
 import * as React from 'react';
 import { NavLink, Outlet } from 'react-router-dom';
-import { clearToken } from '../api';
+import { LogOut, FolderKanban, Bot, Sparkles, Inbox, LayoutDashboard } from 'lucide-react';

+import { api, clearToken, type Proposal } from '../api';
+import { Badge } from './ui/badge';
+import { cn } from '../lib/utils';
+
 /**
- * Top-of-page nav + outlet. Terminal-style dark theme so the UI feels
- * adjacent to the CLI rather than a separate product.
+ * Sidebar layout. Pending-proposals badge polls every 30 s so reviewers
+ * see a queue building up without having to refresh the page.
  */
 export function Layout(): React.JSX.Element {
+  const [pendingCount, setPendingCount] = React.useState<number | null>(null);
+
+  React.useEffect(() => {
+    let cancelled = false;
+    async function poll(): Promise<void> {
+      try {
+        const proposals = await api.get<Proposal[]>('/api/v1/proposals?status=pending');
+        if (!cancelled) setPendingCount(proposals.length);
+      } catch {
+        if (!cancelled) setPendingCount(null);
+      }
+    }
+    void poll();
+    const id = setInterval(poll, 30_000);
+    return () => {
+      cancelled = true;
+      clearInterval(id);
+    };
+  }, []);
+
   return (
-    <div style={styles.shell}>
-      <header style={styles.header}>
-        <div style={styles.brand}>mcpctl <span style={styles.dim}>· prompt editor</span></div>
-        <nav style={styles.nav}>
-          <NavLink to="/projects" style={navStyle}>Projects</NavLink>
-          <NavLink to="/agents" style={navStyle}>Agents</NavLink>
+    <div className="flex min-h-screen">
+      <aside className="flex w-56 shrink-0 flex-col border-r border-(--color-border) bg-(--color-surface)">
+        <div className="flex items-center gap-2 px-5 py-5">
+          <span className="text-base font-bold tracking-tight">mcpctl</span>
+          <span className="text-xs text-(--color-fg-muted)">UI</span>
+        </div>
+
+        <nav className="flex flex-1 flex-col gap-0.5 px-2 py-2">
+          <NavItem to="/dashboard" icon={LayoutDashboard}>
+            Dashboard
+          </NavItem>
+          <NavItem to="/projects" icon={FolderKanban}>
+            Projects
+          </NavItem>
+          <NavItem to="/agents" icon={Bot}>
+            Agents
+          </NavItem>
+          <NavItem to="/skills" icon={Sparkles}>
+            Skills
+          </NavItem>
+          <NavItem to="/proposals" icon={Inbox} badge={pendingCount}>
+            Proposals
+          </NavItem>
+        </nav>
+
+        <div className="border-t border-(--color-border) p-2">
           <button
-            style={styles.logout}
-            onClick={() => { clearToken(); window.location.assign('/ui/'); }}
+            onClick={() => {
+              clearToken();
+              window.location.assign('/ui/');
+            }}
+            className="flex w-full items-center gap-2 rounded-md px-3 py-2 text-sm text-(--color-fg-muted) transition-colors hover:bg-(--color-surface-hi) hover:text-(--color-fg)"
           >
+            <LogOut className="size-4" />
             Logout
           </button>
-        </nav>
-      </header>
-      <main style={styles.main}>
+        </div>
+      </aside>
+
+      <main className="flex-1 overflow-y-auto px-8 py-8">
         <Outlet />
       </main>
     </div>
   );
 }

-function navStyle({ isActive }: { isActive: boolean }): React.CSSProperties {
-  return {
-    color: isActive ? '#58a6ff' : '#c9d1d9',
-    textDecoration: 'none',
-    padding: '6px 12px',
-    borderBottom: isActive ? '2px solid #58a6ff' : '2px solid transparent',
-  };
+function NavItem({
+  to,
+  icon: Icon,
+  children,
+  badge,
+}: {
+  to: string;
+  icon: React.ComponentType<{ className?: string }>;
+  children: React.ReactNode;
+  badge?: number | null;
+}): React.JSX.Element {
+  return (
+    <NavLink
+      to={to}
+      className={({ isActive }) =>
+        cn(
+          'flex items-center justify-between gap-2 rounded-md px-3 py-2 text-sm transition-colors',
+          isActive
+            ? 'bg-(--color-surface-hi) text-(--color-fg) font-medium'
+            : 'text-(--color-fg-muted) hover:bg-(--color-surface-hi) hover:text-(--color-fg)',
+        )
+      }
+    >
+      <span className="flex items-center gap-2">
+        <Icon className="size-4" />
+        {children}
+      </span>
+      {typeof badge === 'number' && badge > 0 && (
+        <Badge variant="warning" className="px-1.5 py-0">
+          {badge}
+        </Badge>
+      )}
+    </NavLink>
+  );
 }
-
-const styles: Record<string, React.CSSProperties> = {
-  shell: {
-    minHeight: '100vh',
-    display: 'flex',
-    flexDirection: 'column',
-  },
-  header: {
-    display: 'flex',
-    alignItems: 'center',
-    justifyContent: 'space-between',
-    padding: '12px 24px',
-    background: '#161b22',
-    borderBottom: '1px solid #30363d',
-  },
-  brand: {
-    fontFamily: 'ui-monospace, "SF Mono", Menlo, monospace',
-    fontWeight: 700,
-    fontSize: 16,
-  },
-  dim: { color: '#7d8590', fontWeight: 400 },
-  nav: {
-    display: 'flex',
-    gap: 8,
-    alignItems: 'center',
-  },
-  logout: {
-    background: 'transparent',
-    color: '#c9d1d9',
-    border: '1px solid #30363d',
-    padding: '4px 12px',
-    borderRadius: 4,
-    cursor: 'pointer',
-    marginLeft: 12,
-  },
-  main: {
-    flex: 1,
-    padding: 24,
-    overflowY: 'auto',
-  },
-};
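The polling effect in `Layout.tsx` pairs a `cancelled` flag with `clearInterval`: the flag prevents a state update from an in-flight request that resolves after unmount, while clearing the interval stops new requests. The pattern can be sketched outside React — `startPolling` is a hypothetical standalone helper, not code from the PR:

```typescript
// Hypothetical standalone version of the Layout.tsx polling pattern.
// Returns a cleanup function, mirroring the useEffect teardown.
function startPolling(
  fetchCount: () => Promise<number>,
  onCount: (n: number | null) => void,
  intervalMs: number,
): () => void {
  let cancelled = false;
  async function poll(): Promise<void> {
    try {
      const n = await fetchCount();
      // Guard: a request may resolve after cleanup has run.
      if (!cancelled) onCount(n);
    } catch {
      if (!cancelled) onCount(null);
    }
  }
  void poll(); // immediate first poll, then on the interval
  const id = setInterval(poll, intervalMs);
  return () => {
    cancelled = true;
    clearInterval(id);
  };
}
```

Without the flag, clearing the interval alone would still let the last in-flight promise call `onCount` late, which in the React version would mean a setState on an unmounted component.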
src/web/src/components/ui/badge.tsx (new file, 37 lines)
@@ -0,0 +1,37 @@
+import * as React from 'react';
+import { cva, type VariantProps } from 'class-variance-authority';
+import { cn } from '../../lib/utils';
+
+const badgeVariants = cva(
+  'inline-flex items-center rounded-md px-2 py-0.5 text-xs font-medium border',
+  {
+    variants: {
+      variant: {
+        default:
+          'border-(--color-border) bg-(--color-surface) text-(--color-fg-muted)',
+        info:
+          'border-(--color-primary)/30 bg-(--color-primary)/15 text-(--color-primary)',
+        success:
+          'border-(--color-success)/30 bg-(--color-success-bg) text-(--color-success)',
+        warning:
+          'border-(--color-warning)/30 bg-(--color-warning-bg) text-(--color-warning)',
+        danger:
+          'border-(--color-danger)/30 bg-(--color-danger-bg) text-(--color-danger)',
+        outline:
+          'border-(--color-border) text-(--color-fg)',
+      },
+    },
+    defaultVariants: { variant: 'default' },
+  },
+);
+
+export interface BadgeProps
+  extends React.HTMLAttributes<HTMLSpanElement>,
+    VariantProps<typeof badgeVariants> {}
+
+export const Badge = React.forwardRef<HTMLSpanElement, BadgeProps>(
+  ({ className, variant, ...props }, ref) => (
+    <span ref={ref} className={cn(badgeVariants({ variant }), className)} {...props} />
+  ),
+);
+Badge.displayName = 'Badge';
src/web/src/components/ui/button.tsx (new file, 48 lines)
@@ -0,0 +1,48 @@
+import * as React from 'react';
+import { cva, type VariantProps } from 'class-variance-authority';
+import { cn } from '../../lib/utils';
+
+const buttonVariants = cva(
+  'inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md font-medium transition-colors disabled:pointer-events-none disabled:opacity-50 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-(--color-primary) focus-visible:ring-offset-2 focus-visible:ring-offset-(--color-canvas) [&_svg]:size-4 [&_svg]:shrink-0',
+  {
+    variants: {
+      variant: {
+        primary:
+          'bg-(--color-primary) text-(--color-primary-fg) hover:bg-(--color-primary-hover)',
+        secondary:
+          'border border-(--color-border) bg-(--color-surface) text-(--color-fg) hover:bg-(--color-surface-hi)',
+        ghost:
+          'text-(--color-fg) hover:bg-(--color-surface) hover:text-(--color-fg)',
+        danger:
+          'bg-(--color-danger-bg) text-(--color-danger) border border-(--color-danger)/40 hover:bg-(--color-danger) hover:text-(--color-canvas)',
+        link:
+          'text-(--color-primary) underline-offset-4 hover:underline',
+      },
+      size: {
+        sm: 'h-8 px-3 text-sm',
+        md: 'h-9 px-4 text-sm',
+        lg: 'h-10 px-6 text-base',
+        icon: 'h-9 w-9',
+      },
+    },
+    defaultVariants: {
+      variant: 'primary',
+      size: 'md',
+    },
+  },
+);
+
+export interface ButtonProps
+  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
+    VariantProps<typeof buttonVariants> {}
+
+export const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
+  ({ className, variant, size, ...props }, ref) => (
+    <button
+      ref={ref}
+      className={cn(buttonVariants({ variant, size }), className)}
+      {...props}
+    />
+  ),
+);
+Button.displayName = 'Button';
src/web/src/components/ui/card.tsx (new file, 67 lines)
@@ -0,0 +1,67 @@
+import * as React from 'react';
+import { cn } from '../../lib/utils';
+
+export const Card = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
+  ({ className, ...props }, ref) => (
+    <div
+      ref={ref}
+      className={cn(
+        'rounded-lg border border-(--color-border) bg-(--color-surface) shadow-sm',
+        className,
+      )}
+      {...props}
+    />
+  ),
+);
+Card.displayName = 'Card';
+
+export const CardHeader = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
+  ({ className, ...props }, ref) => (
+    <div
+      ref={ref}
+      className={cn('flex flex-col gap-1.5 p-5', className)}
+      {...props}
+    />
+  ),
+);
+CardHeader.displayName = 'CardHeader';
+
+export const CardTitle = React.forwardRef<HTMLHeadingElement, React.HTMLAttributes<HTMLHeadingElement>>(
+  ({ className, ...props }, ref) => (
+    <h3
+      ref={ref}
+      className={cn('text-base font-semibold leading-none tracking-tight', className)}
+      {...props}
+    />
+  ),
+);
+CardTitle.displayName = 'CardTitle';
+
+export const CardDescription = React.forwardRef<HTMLParagraphElement, React.HTMLAttributes<HTMLParagraphElement>>(
+  ({ className, ...props }, ref) => (
+    <p
+      ref={ref}
+      className={cn('text-sm text-(--color-fg-muted)', className)}
+      {...props}
+    />
+  ),
+);
+CardDescription.displayName = 'CardDescription';
+
+export const CardContent = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
+  ({ className, ...props }, ref) => (
+    <div ref={ref} className={cn('p-5 pt-0', className)} {...props} />
+  ),
+);
+CardContent.displayName = 'CardContent';
+
+export const CardFooter = React.forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
+  ({ className, ...props }, ref) => (
+    <div
+      ref={ref}
+      className={cn('flex items-center p-5 pt-0 gap-2', className)}
+      {...props}
+    />
+  ),
+);
+CardFooter.displayName = 'CardFooter';
45  src/web/src/components/ui/input.tsx  Normal file
@@ -0,0 +1,45 @@
import * as React from 'react';
import { cn } from '../../lib/utils';

export const Input = React.forwardRef<HTMLInputElement, React.InputHTMLAttributes<HTMLInputElement>>(
  ({ className, type, ...props }, ref) => (
    <input
      ref={ref}
      type={type}
      className={cn(
        'flex h-9 w-full rounded-md border border-(--color-border) bg-(--color-canvas) px-3 py-1 text-sm text-(--color-fg) placeholder:text-(--color-fg-subtle) focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-(--color-primary) disabled:cursor-not-allowed disabled:opacity-50',
        className,
      )}
      {...props}
    />
  ),
);
Input.displayName = 'Input';

export const Textarea = React.forwardRef<HTMLTextAreaElement, React.TextareaHTMLAttributes<HTMLTextAreaElement>>(
  ({ className, ...props }, ref) => (
    <textarea
      ref={ref}
      className={cn(
        'flex min-h-24 w-full rounded-md border border-(--color-border) bg-(--color-canvas) px-3 py-2 text-sm text-(--color-fg) placeholder:text-(--color-fg-subtle) focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-(--color-primary) disabled:cursor-not-allowed disabled:opacity-50',
        className,
      )}
      {...props}
    />
  ),
);
Textarea.displayName = 'Textarea';

export const Label = React.forwardRef<HTMLLabelElement, React.LabelHTMLAttributes<HTMLLabelElement>>(
  ({ className, ...props }, ref) => (
    <label
      ref={ref}
      className={cn(
        'text-xs font-medium uppercase tracking-wider text-(--color-fg-muted)',
        className,
      )}
      {...props}
    />
  ),
);
Label.displayName = 'Label';
22  src/web/src/components/ui/separator.tsx  Normal file
@@ -0,0 +1,22 @@
import * as React from 'react';
import { cn } from '../../lib/utils';

export function Separator({
  className,
  orientation = 'horizontal',
}: {
  className?: string;
  orientation?: 'horizontal' | 'vertical';
}): React.JSX.Element {
  return (
    <div
      role="separator"
      aria-orientation={orientation}
      className={cn(
        'bg-(--color-border)',
        orientation === 'horizontal' ? 'h-px w-full' : 'h-full w-px',
        className,
      )}
    />
  );
}
90  src/web/src/components/ui/tabs.tsx  Normal file
@@ -0,0 +1,90 @@
import * as React from 'react';
import { cn } from '../../lib/utils';

/**
 * Tiny no-dep Tabs primitive. Doesn't need Radix for our use case —
 * just tracks the active tab via state and re-renders the matching
 * panel. ARIA roles are set so screen readers parse it as tabs.
 */

interface TabsContextValue {
  value: string;
  setValue: (v: string) => void;
}
const TabsContext = React.createContext<TabsContextValue | null>(null);

export function Tabs({
  defaultValue,
  value: valueProp,
  onValueChange,
  className,
  children,
}: {
  defaultValue?: string;
  value?: string;
  onValueChange?: (v: string) => void;
  className?: string;
  children: React.ReactNode;
}): React.JSX.Element {
  const [internal, setInternal] = React.useState(defaultValue ?? '');
  const value = valueProp ?? internal;
  const setValue = React.useCallback(
    (v: string) => {
      if (valueProp === undefined) setInternal(v);
      onValueChange?.(v);
    },
    [valueProp, onValueChange],
  );
  return (
    <TabsContext.Provider value={{ value, setValue }}>
      <div className={cn('flex flex-col gap-3', className)}>{children}</div>
    </TabsContext.Provider>
  );
}

export function TabsList({ className, children }: { className?: string; children: React.ReactNode }): React.JSX.Element {
  return (
    <div
      role="tablist"
      className={cn(
        'inline-flex h-9 items-center justify-start gap-1 rounded-md border border-(--color-border) bg-(--color-surface) p-1',
        className,
      )}
    >
      {children}
    </div>
  );
}

export function TabsTrigger({ value, className, children }: { value: string; className?: string; children: React.ReactNode }): React.JSX.Element {
  const ctx = React.useContext(TabsContext);
  if (!ctx) throw new Error('TabsTrigger must be used within Tabs');
  const active = ctx.value === value;
  return (
    <button
      role="tab"
      aria-selected={active}
      onClick={() => ctx.setValue(value)}
      className={cn(
        'inline-flex h-7 items-center justify-center rounded px-3 text-sm font-medium transition-colors',
        active
          ? 'bg-(--color-canvas) text-(--color-fg) shadow-sm'
          : 'text-(--color-fg-muted) hover:text-(--color-fg)',
        className,
      )}
    >
      {children}
    </button>
  );
}

export function TabsContent({ value, className, children }: { value: string; className?: string; children: React.ReactNode }): React.JSX.Element | null {
  const ctx = React.useContext(TabsContext);
  if (!ctx) throw new Error('TabsContent must be used within Tabs');
  if (ctx.value !== value) return null;
  return (
    <div role="tabpanel" className={cn('focus-visible:outline-none', className)}>
      {children}
    </div>
  );
}
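The controlled/uncontrolled switch inside `Tabs` (`valueProp ?? internal`, and `setValue` only writing internal state when no `value` prop is supplied) can be exercised without React. This is a sketch under assumptions: `makeTabState` is a name we made up for illustration, not part of the component's API.

```typescript
// Hypothetical, framework-free mirror of the Tabs value logic:
// a supplied `valueProp` makes the state controlled (reads always return it);
// otherwise internal state is used. Either way onValueChange is notified.
function makeTabState(
  valueProp: string | undefined,
  onValueChange?: (v: string) => void,
): { get: () => string; set: (v: string) => void } {
  let internal = '';
  return {
    get: () => valueProp ?? internal,
    set(v: string): void {
      if (valueProp === undefined) internal = v; // uncontrolled: own the state
      onValueChange?.(v); // always notify, as Tabs does
    },
  };
}
```

The payoff of this pattern is that the same component works both ways: pages like Proposals pass `value`/`onValueChange` to own the tab state, while simpler callers can rely on `defaultValue` alone.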
50  src/web/src/hooks/usePolling.ts  Normal file
@@ -0,0 +1,50 @@
import { useEffect, useState } from 'react';

/**
 * Polling hook with cancellation. Re-fetches `fn` every `intervalMs`
 * until unmounted. Returns the latest data, error, and a setter to
 * force-refresh on demand.
 */
export function usePolling<T>(
  fn: () => Promise<T>,
  intervalMs: number,
  deps: unknown[] = [],
): { data: T | null; error: Error | null; loading: boolean; refetch: () => void } {
  const [data, setData] = useState<T | null>(null);
  const [error, setError] = useState<Error | null>(null);
  const [loading, setLoading] = useState(true);
  const [tick, setTick] = useState(0);

  useEffect(() => {
    let cancelled = false;
    async function run(): Promise<void> {
      try {
        const v = await fn();
        if (!cancelled) {
          setData(v);
          setError(null);
          setLoading(false);
        }
      } catch (err) {
        if (!cancelled) {
          setError(err as Error);
          setLoading(false);
        }
      }
    }
    void run();
    const id = setInterval(() => { void run(); }, intervalMs);
    return () => {
      cancelled = true;
      clearInterval(id);
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [...deps, tick, intervalMs]);

  return {
    data,
    error,
    loading,
    refetch: () => setTick((t) => t + 1),
  };
}
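The `cancelled` flag above is what keeps a fetch that resolves after unmount from publishing stale state. The guard can be shown in isolation; `makeGuard` is our illustrative name, not anything the hook exports.

```typescript
// Minimal sketch of the cancelled-flag teardown the polling effect uses:
// deliveries after cancel() are silently dropped instead of published.
function makeGuard<T>(
  publish: (v: T) => void,
): { deliver: (v: T) => void; cancel: () => void } {
  let cancelled = false;
  return {
    deliver(v: T): void {
      if (!cancelled) publish(v); // a late result is discarded post-cancel
    },
    cancel(): void {
      cancelled = true;
    },
  };
}
```

In the hook, `cancel()` corresponds to the effect's cleanup function, so any `run()` still in flight when the component unmounts resolves into a no-op.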
94  src/web/src/index.css  Normal file
@@ -0,0 +1,94 @@
/*
 * mcpctl design tokens. Dark-mode-only — this is an internal tool and
 * adding light mode doubles QA surface for no clear user benefit.
 *
 * Color philosophy: a near-black canvas with a slightly lifted surface
 * tier ("surface" / "surfaceHi") for cards. Borders are subtle (zinc-800
 * range) so spatial structure comes from spacing, not lines. Accent
 * colours are reserved for status: emerald = success/approved, red =
 * danger/rejected, amber = pending, sky = primary action.
 *
 * Typography: Inter for UI, JetBrains Mono for IDs / code / monospace
 * displays. Loaded via Google Fonts so production deploys don't need a
 * separate CDN. (Could swap to a self-hosted geist later — the fallback
 * stack reads identically.)
 *
 * NOTE: @import url(...) must come before any other rules. Tailwind's
 * own @import directive is wired up after.
 */

@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=JetBrains+Mono:wght@400;500;600&display=swap');
@import "tailwindcss";

@theme {
  --color-canvas: oklch(0.16 0.005 270);      /* near-black */
  --color-surface: oklch(0.20 0.008 270);     /* card bg */
  --color-surface-hi: oklch(0.24 0.010 270);  /* hover/lifted */
  --color-border: oklch(0.30 0.010 270);
  --color-border-strong: oklch(0.40 0.010 270);
  --color-fg: oklch(0.92 0.005 270);
  --color-fg-muted: oklch(0.65 0.010 270);
  --color-fg-subtle: oklch(0.50 0.012 270);

  --color-primary: oklch(0.74 0.16 240);      /* sky-ish */
  --color-primary-hover: oklch(0.78 0.16 240);
  --color-primary-fg: oklch(0.16 0.005 270);

  --color-success: oklch(0.72 0.18 145);      /* emerald */
  --color-success-bg: oklch(0.30 0.10 145);
  --color-warning: oklch(0.80 0.16 80);       /* amber */
  --color-warning-bg: oklch(0.30 0.10 80);
  --color-danger: oklch(0.70 0.20 25);        /* red */
  --color-danger-bg: oklch(0.30 0.12 25);

  --font-sans: 'Inter', ui-sans-serif, system-ui, sans-serif;
  --font-mono: 'JetBrains Mono', ui-monospace, 'SF Mono', Menlo, monospace;

  --radius-sm: 0.25rem;
  --radius-md: 0.5rem;
  --radius-lg: 0.75rem;

  --shadow-sm: 0 1px 2px rgb(0 0 0 / 0.4);
  --shadow-md: 0 4px 12px rgb(0 0 0 / 0.5);
}

@layer base {
  *,
  *::before,
  *::after {
    box-sizing: border-box;
  }

  html, body, #root {
    height: 100%;
  }

  html {
    color-scheme: dark;
    font-family: var(--font-sans);
  }

  body {
    background: var(--color-canvas);
    color: var(--color-fg);
    font-feature-settings: 'cv11', 'ss01';
    -webkit-font-smoothing: antialiased;
    margin: 0;
  }

  ::selection {
    background: var(--color-primary);
    color: var(--color-primary-fg);
  }

  /* Monaco / code-y bits inherit the mono stack. */
  code, pre, kbd {
    font-family: var(--font-mono);
  }

  /* Keep focus visible — accessibility table stakes. */
  :focus-visible {
    outline: 2px solid var(--color-primary);
    outline-offset: 2px;
  }
}
11  src/web/src/lib/utils.ts  Normal file
@@ -0,0 +1,11 @@
import { clsx, type ClassValue } from 'clsx';
import { twMerge } from 'tailwind-merge';

/**
 * shadcn-style class-name helper. Merges Tailwind classes intelligently
 * (later classes override earlier ones from the same group), and
 * conditionally applies values via clsx semantics.
 */
export function cn(...inputs: ClassValue[]): string {
  return twMerge(clsx(inputs));
}
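"Later classes override earlier ones from the same group" can be illustrated with a deliberately toy merge. This is NOT how tailwind-merge works internally — `mergeByPrefix` is a made-up helper that only groups by the first dash-delimited token, whereas `twMerge` knows Tailwind's real conflict groups.

```typescript
// Toy approximation of conflict-group merging: keep only the LAST class per
// naive "group" (text before the first '-'), so 'p-2 p-4' collapses to 'p-4'.
function mergeByPrefix(...classes: string[]): string {
  const byGroup = new Map<string, string>();
  for (const cls of classes.join(' ').split(/\s+/).filter(Boolean)) {
    const group = cls.split('-')[0] ?? cls; // crude grouping, illustration only
    byGroup.set(group, cls); // later class wins within its group
  }
  return [...byGroup.values()].join(' ');
}
```

This is why components above can write `cn('p-5', className)`: a caller's `p-4` in `className` wins over the component default instead of producing two conflicting padding utilities.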
@@ -1,6 +1,7 @@
 import { StrictMode } from 'react';
 import { createRoot } from 'react-dom/client';
 import { App } from './App';
+import './index.css';

 const root = document.getElementById('root');
 if (root === null) throw new Error('#root not found');
133  src/web/src/pages/Dashboard.tsx  Normal file
@@ -0,0 +1,133 @@
import * as React from 'react';
import { Link } from 'react-router-dom';
import { Sparkles, Inbox, FolderKanban, Bot, ScrollText } from 'lucide-react';

import { api, type Skill, type Proposal, type Project, type Agent } from '../api';
import { Card, CardContent, CardHeader, CardTitle } from '../components/ui/card';
import { Badge } from '../components/ui/badge';

/**
 * At-a-glance home page. Counts come from the `/api/v1/<resource>`
 * lists; pending proposals are highlighted with an amber badge to draw
 * the reviewer in.
 */
export function DashboardPage(): React.JSX.Element {
  const [counts, setCounts] = React.useState<{
    skills: number;
    proposals: { pending: number; approved: number; rejected: number };
    projects: number;
    agents: number;
    prompts: number;
  } | null>(null);
  const [error, setError] = React.useState<string | null>(null);

  React.useEffect(() => {
    let cancelled = false;
    async function load(): Promise<void> {
      try {
        const [skills, proposals, projects, agents, prompts] = await Promise.all([
          api.get<Skill[]>('/api/v1/skills'),
          api.get<Proposal[]>('/api/v1/proposals'),
          api.get<Project[]>('/api/v1/projects'),
          api.get<Agent[]>('/api/v1/agents'),
          api.get<unknown[]>('/api/v1/prompts'),
        ]);
        if (cancelled) return;
        setCounts({
          skills: skills.length,
          proposals: {
            pending: proposals.filter((p) => p.status === 'pending').length,
            approved: proposals.filter((p) => p.status === 'approved').length,
            rejected: proposals.filter((p) => p.status === 'rejected').length,
          },
          projects: projects.length,
          agents: agents.length,
          prompts: prompts.length,
        });
      } catch (err) {
        if (!cancelled) setError((err as Error).message);
      }
    }
    void load();
    return () => { cancelled = true; };
  }, []);

  if (error !== null) return <div className="text-(--color-danger)">Error: {error}</div>;
  if (counts === null) return <div className="text-(--color-fg-muted)">Loading…</div>;

  return (
    <div className="space-y-6">
      <header className="space-y-1">
        <h1 className="text-2xl font-semibold tracking-tight">Dashboard</h1>
        <p className="text-sm text-(--color-fg-muted)">
          A glance at what's in mcpd. Numbers update on page load.
        </p>
      </header>

      {counts.proposals.pending > 0 && (
        <Link to="/proposals">
          <Card className="border-(--color-warning)/40 bg-(--color-warning-bg)/30 transition-colors hover:bg-(--color-warning-bg)/50">
            <CardContent className="flex items-center gap-4 p-5 pt-5">
              <Inbox className="size-6 text-(--color-warning)" />
              <div className="flex-1">
                <div className="font-medium text-(--color-fg)">
                  {counts.proposals.pending}{' '}
                  pending {counts.proposals.pending === 1 ? 'proposal' : 'proposals'}
                </div>
                <div className="text-sm text-(--color-fg-muted)">
                  Review the queue to approve or reject incoming changes.
                </div>
              </div>
              <Badge variant="warning">Review</Badge>
            </CardContent>
          </Card>
        </Link>
      )}

      <div className="grid grid-cols-1 gap-4 md:grid-cols-2 lg:grid-cols-3">
        <CountCard to="/skills" icon={Sparkles} label="Skills" value={counts.skills} />
        <CountCard to="/projects" icon={FolderKanban} label="Projects" value={counts.projects} />
        <CountCard to="/agents" icon={Bot} label="Agents" value={counts.agents} />
        <CountCard to="/projects" icon={ScrollText} label="Prompts" value={counts.prompts} />
        <CountCard
          to="/proposals"
          icon={Inbox}
          label="Proposals"
          value={counts.proposals.pending + counts.proposals.approved + counts.proposals.rejected}
          subtitle={`${counts.proposals.pending} pending · ${counts.proposals.approved} approved · ${counts.proposals.rejected} rejected`}
        />
      </div>
    </div>
  );
}

function CountCard({
  to,
  icon: Icon,
  label,
  value,
  subtitle,
}: {
  to: string;
  icon: React.ComponentType<{ className?: string }>;
  label: string;
  value: number;
  subtitle?: string;
}): React.JSX.Element {
  return (
    <Link to={to} className="block">
      <Card className="transition-colors hover:bg-(--color-surface-hi)">
        <CardHeader className="flex-row items-center justify-between space-y-0 pb-2">
          <CardTitle className="text-sm font-medium text-(--color-fg-muted)">{label}</CardTitle>
          <Icon className="size-4 text-(--color-fg-muted)" />
        </CardHeader>
        <CardContent>
          <div className="font-mono text-3xl font-semibold tabular-nums">{value}</div>
          {subtitle !== undefined && (
            <p className="mt-1 text-xs text-(--color-fg-muted)">{subtitle}</p>
          )}
        </CardContent>
      </Card>
    </Link>
  );
}
173  src/web/src/pages/ProposalDetail.tsx  Normal file
@@ -0,0 +1,173 @@
import * as React from 'react';
import { Link, useNavigate, useParams } from 'react-router-dom';
import { ArrowLeft, Check, X } from 'lucide-react';

import { api, type Proposal, type Skill } from '../api';
import { Card, CardContent, CardHeader, CardTitle } from '../components/ui/card';
import { Badge } from '../components/ui/badge';
import { Button } from '../components/ui/button';
import { Textarea, Label } from '../components/ui/input';
import { Diff } from '../components/Diff';

export function ProposalDetailPage(): React.JSX.Element {
  const { id } = useParams<{ id: string }>();
  const navigate = useNavigate();
  const [proposal, setProposal] = React.useState<Proposal | null>(null);
  const [existing, setExisting] = React.useState<string | null>(null);
  const [reason, setReason] = React.useState('');
  const [busy, setBusy] = React.useState(false);
  const [error, setError] = React.useState<string | null>(null);

  React.useEffect(() => {
    let cancelled = false;
    async function load(): Promise<void> {
      try {
        const p = await api.get<Proposal>(`/api/v1/proposals/${String(id)}`);
        if (cancelled) return;
        setProposal(p);

        // Fetch existing resource (if any) for the diff.
        const projectName = p.project?.name;
        try {
          if (p.resourceType === 'prompt') {
            const params = new URLSearchParams();
            if (projectName) params.set('project', projectName);
            const list = await api.get<Array<{ name: string; content: string }>>(`/api/v1/prompts?${params.toString()}`);
            const match = list.find((x) => x.name === p.name);
            if (!cancelled) setExisting(match?.content ?? '');
          } else {
            const params = new URLSearchParams();
            if (projectName) params.set('project', projectName);
            const list = await api.get<Skill[]>(`/api/v1/skills?${params.toString()}`);
            const match = list.find((x) => x.name === p.name);
            if (!cancelled) setExisting(match?.content ?? '');
          }
        } catch {
          if (!cancelled) setExisting('');
        }
      } catch (err) {
        if (!cancelled) setError((err as Error).message);
      }
    }
    void load();
    return () => { cancelled = true; };
  }, [id]);

  if (error !== null) return (
    <div className="space-y-3">
      <Link to="/proposals" className="inline-flex items-center gap-1 text-sm text-(--color-primary) hover:underline">
        <ArrowLeft className="size-3.5" /> Proposals
      </Link>
      <div className="text-(--color-danger)">Error: {error}</div>
    </div>
  );
  if (proposal === null) return <div className="text-(--color-fg-muted)">Loading…</div>;

  const proposed = (proposal.body as { content?: string }).content ?? '';
  const isPending = proposal.status === 'pending';
  const willCreateNew = (existing ?? '').length === 0;
  const scope = proposal.project?.name ?? proposal.agent?.name ?? 'global';

  async function approve(): Promise<void> {
    if (!proposal) return;
    setBusy(true);
    try {
      await api.post(`/api/v1/proposals/${proposal.id}/approve`, {});
      navigate('/proposals');
    } finally {
      setBusy(false);
    }
  }

  async function reject(): Promise<void> {
    if (!proposal) return;
    if (reason.trim().length === 0) return;
    setBusy(true);
    try {
      await api.post(`/api/v1/proposals/${proposal.id}/reject`, { reviewerNote: reason });
      navigate('/proposals');
    } finally {
      setBusy(false);
    }
  }

  return (
    <div className="space-y-6">
      <Link to="/proposals" className="inline-flex items-center gap-1 text-sm text-(--color-primary) hover:underline">
        <ArrowLeft className="size-3.5" /> Proposals
      </Link>

      <header className="flex items-start justify-between gap-6">
        <div className="space-y-2">
          <div className="flex items-center gap-3">
            <h1 className="font-mono text-2xl font-semibold tracking-tight">{proposal.name}</h1>
            <Badge variant="outline">{proposal.resourceType}</Badge>
            <Badge variant={proposal.status === 'pending' ? 'warning' : proposal.status === 'approved' ? 'success' : 'danger'}>
              {proposal.status}
            </Badge>
          </div>
          <div className="flex items-center gap-3 text-xs text-(--color-fg-subtle)">
            <span>scope: {scope}</span>
            <span>session: <code className="font-mono">{proposal.createdBySession ?? '—'}</code></span>
            <span>created: {new Date(proposal.createdAt).toLocaleString()}</span>
          </div>
          {proposal.reviewerNote && (
            <div className="rounded-md border border-(--color-border) bg-(--color-surface-hi) p-3 text-sm">
              <span className="text-(--color-fg-muted)">Reviewer note:</span> {proposal.reviewerNote}
            </div>
          )}
        </div>

        {isPending && (
          <div className="flex items-center gap-2">
            <Button variant="primary" onClick={approve} disabled={busy}>
              <Check className="size-4" />
              Approve
            </Button>
          </div>
        )}
      </header>

      <Card>
        <CardHeader>
          <CardTitle className="text-sm">
            {willCreateNew ? `Would create a new ${proposal.resourceType}` : 'Diff against current'}
          </CardTitle>
        </CardHeader>
        <CardContent>
          {willCreateNew ? (
            <pre className="overflow-x-auto whitespace-pre-wrap rounded bg-(--color-canvas) p-3 font-mono text-xs leading-relaxed">
              {proposed}
            </pre>
          ) : (
            <Diff before={existing ?? ''} after={proposed} />
          )}
        </CardContent>
      </Card>

      {isPending && (
        <Card className="border-(--color-danger)/30">
          <CardHeader>
            <CardTitle className="text-sm">Reject</CardTitle>
          </CardHeader>
          <CardContent className="space-y-3">
            <div className="space-y-2">
              <Label htmlFor="reject-reason">Reviewer note (required)</Label>
              <Textarea
                id="reject-reason"
                placeholder="Explain why this is being rejected so the proposer can learn from it."
                value={reason}
                onChange={(e) => setReason(e.target.value)}
                rows={3}
              />
            </div>
            <Button variant="danger" onClick={reject} disabled={busy || reason.trim().length === 0}>
              <X className="size-4" />
              Reject with note
            </Button>
          </CardContent>
        </Card>
      )}
    </div>
  );
}
136  src/web/src/pages/Proposals.tsx  Normal file
@@ -0,0 +1,136 @@
|
|||||||
|
import * as React from 'react';
|
||||||
|
import { Link } from 'react-router-dom';
|
||||||
|
import { Inbox, ScrollText, Sparkles } from 'lucide-react';
|
||||||
|
|
||||||
|
import { api, type Proposal } from '../api';
|
||||||
|
import { Card, CardContent } from '../components/ui/card';
|
||||||
|
import { Badge } from '../components/ui/badge';
|
||||||
|
import { Tabs, TabsList, TabsTrigger, TabsContent } from '../components/ui/tabs';
|
||||||
|
|
||||||
|
export function ProposalsPage(): React.JSX.Element {
|
||||||
|
const [proposals, setProposals] = React.useState<Proposal[] | null>(null);
|
||||||
|
const [error, setError] = React.useState<string | null>(null);
|
||||||
|
const [tab, setTab] = React.useState<'pending' | 'approved' | 'rejected'>('pending');
|
||||||
|
|
||||||
|
React.useEffect(() => {
|
||||||
|
let cancelled = false;
|
||||||
|
async function load(): Promise<void> {
|
||||||
|
try {
|
||||||
|
const data = await api.get<Proposal[]>('/api/v1/proposals');
|
||||||
|
if (!cancelled) setProposals(data);
|
||||||
|
} catch (err) {
|
||||||
|
if (!cancelled) setError((err as Error).message);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
void load();
|
||||||
|
const id = setInterval(load, 30_000);
|
||||||
|
return () => { cancelled = true; clearInterval(id); };
|
||||||
|
}, []);
|
||||||
|
|
||||||
|
if (error !== null) return <div className="text-(--color-danger)">Error: {error}</div>;
|
||||||
|
if (proposals === null) return <div className="text-(--color-fg-muted)">Loading proposals…</div>;
|
||||||
|
|
||||||
|
const pending = proposals.filter((p) => p.status === 'pending');
|
||||||
|
const approved = proposals.filter((p) => p.status === 'approved');
|
||||||
|
const rejected = proposals.filter((p) => p.status === 'rejected');
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div className="space-y-6">
|
||||||
|
<header className="space-y-1">
|
||||||
|
<h1 className="text-2xl font-semibold tracking-tight">Proposals</h1>
|
||||||
|
<p className="text-sm text-(--color-fg-muted)">
|
||||||
|
Prompts and skills proposed by Claude sessions or human authors. Approve to materialise; reject to dismiss.
|
||||||
|
</p>
|
||||||
|
</header>
|
||||||
|
|
||||||
|
<Tabs value={tab} onValueChange={(v) => setTab(v as typeof tab)}>
|
||||||
|
<TabsList>
|
||||||
|
<TabsTrigger value="pending">
|
||||||
|
Pending {pending.length > 0 && <span className="ml-1 text-(--color-warning)">({pending.length})</span>}
|
||||||
|
</TabsTrigger>
|
||||||
|
<TabsTrigger value="approved">Approved ({approved.length})</TabsTrigger>
|
||||||
|
<TabsTrigger value="rejected">Rejected ({rejected.length})</TabsTrigger>
|
||||||
|
</TabsList>
|
||||||
|
|
||||||
|
<TabsContent value="pending">
|
||||||
|
<ProposalList list={pending} emptyText="No pending proposals." />
|
||||||
|
</TabsContent>
|
||||||
|
<TabsContent value="approved">
|
||||||
|
<ProposalList list={approved} emptyText="No approved proposals yet." />
|
||||||
|
</TabsContent>
|
        <TabsContent value="rejected">
          <ProposalList list={rejected} emptyText="No rejected proposals." />
        </TabsContent>
      </Tabs>
    </div>
  );
}

function ProposalList({ list, emptyText }: { list: Proposal[]; emptyText: string }): React.JSX.Element {
  if (list.length === 0) {
    return (
      <Card>
        <CardContent className="flex items-center justify-center gap-2 p-8 text-(--color-fg-muted)">
          <Inbox className="size-4" /> {emptyText}
        </CardContent>
      </Card>
    );
  }
  return (
    <div className="space-y-2">
      {list.map((p) => <ProposalRow key={p.id} proposal={p} />)}
    </div>
  );
}

function ProposalRow({ proposal }: { proposal: Proposal }): React.JSX.Element {
  const Icon = proposal.resourceType === 'skill' ? Sparkles : ScrollText;
  const scope =
    proposal.project?.name
      ? `project: ${proposal.project.name}`
      : proposal.agent?.name
        ? `agent: ${proposal.agent.name}`
        : 'global';
  const statusVariant =
    proposal.status === 'pending' ? 'warning' : proposal.status === 'approved' ? 'success' : 'danger';

  return (
    <Link to={`/proposals/${proposal.id}`}>
      <Card className="transition-colors hover:bg-(--color-surface-hi)">
        <CardContent className="flex items-center justify-between gap-4 p-4">
          <div className="flex items-center gap-3 min-w-0">
            <Icon className="size-4 shrink-0 text-(--color-fg-muted)" />
            <div className="min-w-0">
              <div className="flex items-center gap-2">
                <span className="font-mono text-sm font-medium truncate">{proposal.name}</span>
                <Badge variant="outline">{proposal.resourceType}</Badge>
              </div>
              <div className="text-xs text-(--color-fg-subtle)">
                {scope} · session{' '}
                <code className="font-mono">{(proposal.createdBySession ?? '—').slice(0, 8)}</code>
              </div>
            </div>
          </div>
          <div className="flex items-center gap-2">
            <span className="text-xs text-(--color-fg-subtle)">
              {ageOf(proposal.createdAt)}
            </span>
            <Badge variant={statusVariant}>{proposal.status}</Badge>
          </div>
        </CardContent>
      </Card>
    </Link>
  );
}

function ageOf(iso: string): string {
  const t = Date.parse(iso);
  if (Number.isNaN(t)) return '?';
  const sec = Math.floor((Date.now() - t) / 1000);
  if (sec < 60) return `${String(sec)}s`;
  const min = Math.floor(sec / 60);
  if (min < 60) return `${String(min)}m`;
  const hr = Math.floor(min / 60);
  if (hr < 24) return `${String(hr)}h`;
  return `${String(Math.floor(hr / 24))}d`;
}
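The bucketing in `ageOf` can be checked in isolation. A minimal sketch with an explicit `now` parameter — a testing convenience that is not part of the page code, which reads `Date.now()` directly:

```typescript
// Standalone sketch of the relative-age bucketing used by ageOf above.
// `now` is passed in explicitly so each threshold can be asserted.
function ageAt(iso: string, now: number): string {
  const t = Date.parse(iso);
  if (Number.isNaN(t)) return '?';          // unparseable timestamp
  const sec = Math.floor((now - t) / 1000);
  if (sec < 60) return `${sec}s`;           // under a minute
  const min = Math.floor(sec / 60);
  if (min < 60) return `${min}m`;           // under an hour
  const hr = Math.floor(min / 60);
  if (hr < 24) return `${hr}h`;             // under a day
  return `${Math.floor(hr / 24)}d`;         // days otherwise
}
```

Each branch truncates rather than rounds, so a 90-second-old proposal reads "1m", not "2m".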
185 src/web/src/pages/SkillDetail.tsx (new file)
@@ -0,0 +1,185 @@
import * as React from 'react';
import { Link, useParams } from 'react-router-dom';
import { ArrowLeft, History } from 'lucide-react';

import { api, type Skill, type Revision } from '../api';
import { Card, CardContent, CardHeader, CardTitle } from '../components/ui/card';
import { Badge } from '../components/ui/badge';
import { Button } from '../components/ui/button';
import { Tabs, TabsList, TabsTrigger, TabsContent } from '../components/ui/tabs';
import { Diff } from '../components/Diff';

export function SkillDetailPage(): React.JSX.Element {
  const { name } = useParams<{ name: string }>();
  const [skill, setSkill] = React.useState<Skill | null>(null);
  const [revisions, setRevisions] = React.useState<Revision[] | null>(null);
  const [error, setError] = React.useState<string | null>(null);
  const [tab, setTab] = React.useState('content');

  React.useEffect(() => {
    let cancelled = false;
    async function load(): Promise<void> {
      try {
        const list = await api.get<Skill[]>('/api/v1/skills');
        const match = list.find((s) => s.name === name);
        if (!match) {
          if (!cancelled) setError(`Skill "${String(name)}" not found`);
          return;
        }
        const full = await api.get<Skill>(`/api/v1/skills/${match.id}`);
        if (!cancelled) setSkill(full);
        const revs = await api.get<Revision[]>(
          `/api/v1/revisions?resourceType=skill&resourceId=${full.id}`,
        );
        if (!cancelled) setRevisions(revs);
      } catch (err) {
        if (!cancelled) setError((err as Error).message);
      }
    }
    void load();
    return () => { cancelled = true; };
  }, [name]);

  if (error !== null) return (
    <div className="space-y-3">
      <Link to="/skills" className="inline-flex items-center gap-1 text-sm text-(--color-primary) hover:underline">
        <ArrowLeft className="size-3.5" /> Skills
      </Link>
      <div className="text-(--color-danger)">Error: {error}</div>
    </div>
  );
  if (skill === null) return <div className="text-(--color-fg-muted)">Loading…</div>;

  const fileEntries = Object.entries(skill.files);
  const metadataKeys = Object.keys(skill.metadata);

  return (
    <div className="space-y-6">
      <Link to="/skills" className="inline-flex items-center gap-1 text-sm text-(--color-primary) hover:underline">
        <ArrowLeft className="size-3.5" /> Skills
      </Link>

      <header className="flex items-start justify-between gap-4">
        <div className="space-y-1">
          <div className="flex items-center gap-3">
            <h1 className="font-mono text-2xl font-semibold tracking-tight">{skill.name}</h1>
            <Badge variant="info">v{skill.semver}</Badge>
          </div>
          {skill.description && (
            <p className="text-sm text-(--color-fg-muted)">{skill.description}</p>
          )}
          <div className="flex items-center gap-3 pt-1 text-xs text-(--color-fg-subtle)">
            <span>id: <code className="font-mono">{skill.id}</code></span>
            {skill.project?.name && <span>project: <code className="font-mono">{skill.project.name}</code></span>}
            {skill.agent?.name && <span>agent: <code className="font-mono">{skill.agent.name}</code></span>}
          </div>
        </div>
      </header>

      <Tabs value={tab} onValueChange={setTab}>
        <TabsList>
          <TabsTrigger value="content">SKILL.md</TabsTrigger>
          {fileEntries.length > 0 && <TabsTrigger value="files">Files ({fileEntries.length})</TabsTrigger>}
          {metadataKeys.length > 0 && <TabsTrigger value="metadata">Metadata</TabsTrigger>}
          <TabsTrigger value="history">History ({revisions?.length ?? '…'})</TabsTrigger>
        </TabsList>

        <TabsContent value="content">
          <Card>
            <CardContent className="p-5">
              <pre className="overflow-x-auto whitespace-pre-wrap font-mono text-xs leading-relaxed text-(--color-fg)">
                {skill.content}
              </pre>
            </CardContent>
          </Card>
        </TabsContent>

        {fileEntries.length > 0 && (
          <TabsContent value="files">
            <div className="space-y-3">
              {fileEntries.map(([path, content]) => (
                <Card key={path}>
                  <CardHeader>
                    <CardTitle className="font-mono text-sm">{path}</CardTitle>
                  </CardHeader>
                  <CardContent>
                    <pre className="overflow-x-auto rounded bg-(--color-canvas) p-3 font-mono text-xs">{content}</pre>
                  </CardContent>
                </Card>
              ))}
            </div>
          </TabsContent>
        )}

        {metadataKeys.length > 0 && (
          <TabsContent value="metadata">
            <Card>
              <CardContent className="p-5">
                <pre className="overflow-x-auto rounded bg-(--color-canvas) p-3 font-mono text-xs">
                  {JSON.stringify(skill.metadata, null, 2)}
                </pre>
              </CardContent>
            </Card>
          </TabsContent>
        )}

        <TabsContent value="history">
          <RevisionHistorySection revisions={revisions} skill={skill} />
        </TabsContent>
      </Tabs>
    </div>
  );
}

function RevisionHistorySection({
  revisions,
  skill,
}: {
  revisions: Revision[] | null;
  skill: Skill;
}): React.JSX.Element {
  const [diffAgainst, setDiffAgainst] = React.useState<string | null>(null);

  if (revisions === null) return <div className="text-(--color-fg-muted)">Loading history…</div>;
  if (revisions.length === 0) {
    return <Card><CardContent className="p-8 text-center text-(--color-fg-muted)">No revisions yet.</CardContent></Card>;
  }

  const target = revisions.find((r) => r.id === diffAgainst);
  const targetContent = (target?.body as { content?: string } | undefined)?.content ?? '';

  return (
    <div className="space-y-4">
      <div className="space-y-2">
        {revisions.map((rev) => (
          <Card key={rev.id} className="cursor-pointer transition-colors hover:bg-(--color-surface-hi)" onClick={() => setDiffAgainst(rev.id === diffAgainst ? null : rev.id)}>
            <CardContent className="flex items-center justify-between gap-3 p-4">
              <div className="flex items-center gap-3">
                <History className="size-4 text-(--color-fg-muted)" />
                <Badge variant={rev.id === diffAgainst ? 'info' : 'outline'}>v{rev.semver}</Badge>
                {rev.note && <span className="text-sm text-(--color-fg-muted)">{rev.note}</span>}
              </div>
              <span className="text-xs text-(--color-fg-subtle)">
                {new Date(rev.createdAt).toLocaleString()}
              </span>
            </CardContent>
          </Card>
        ))}
      </div>

      {target && (
        <Card>
          <CardHeader className="flex-row items-center justify-between space-y-0">
            <CardTitle className="text-sm">
              Diff: v{target.semver} ↔ live (v{skill.semver})
            </CardTitle>
            <Button variant="ghost" size="sm" onClick={() => setDiffAgainst(null)}>Close</Button>
          </CardHeader>
          <CardContent>
            <Diff before={targetContent} after={skill.content} />
          </CardContent>
        </Card>
      )}
    </div>
  );
}
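The revisions fetch in `SkillDetailPage` interpolates `resourceType` and `resourceId` into the query string by hand. A sketch of the same URL built with `URLSearchParams`, which URL-encodes values automatically — the endpoint path is taken from the code above, while the `revisionsUrl` helper name is hypothetical:

```typescript
// Build the /api/v1/revisions query with URLSearchParams instead of
// manual string interpolation; values with spaces or reserved
// characters are percent-/plus-encoded automatically.
function revisionsUrl(resourceType: string, resourceId: string): string {
  const params = new URLSearchParams({ resourceType, resourceId });
  return `/api/v1/revisions?${params.toString()}`;
}
```

With opaque IDs the two approaches produce identical URLs; the encoded form only matters if an ID can ever contain `&`, `=`, or whitespace.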
119 src/web/src/pages/Skills.tsx (new file)
@@ -0,0 +1,119 @@
import * as React from 'react';
import { Link } from 'react-router-dom';
import { Sparkles, FolderKanban, Bot, Globe } from 'lucide-react';

import { api, type Skill } from '../api';
import { Card, CardContent, CardHeader, CardTitle } from '../components/ui/card';
import { Badge } from '../components/ui/badge';

export function SkillsPage(): React.JSX.Element {
  const [skills, setSkills] = React.useState<Skill[] | null>(null);
  const [error, setError] = React.useState<string | null>(null);

  React.useEffect(() => {
    let cancelled = false;
    async function load(): Promise<void> {
      try {
        const data = await api.get<Skill[]>('/api/v1/skills');
        if (!cancelled) setSkills(data);
      } catch (err) {
        if (!cancelled) setError((err as Error).message);
      }
    }
    void load();
    return () => { cancelled = true; };
  }, []);

  if (error !== null) return <div className="text-(--color-danger)">Error: {error}</div>;
  if (skills === null) return <div className="text-(--color-fg-muted)">Loading skills…</div>;

  const sorted = [...skills].sort((a, b) => a.name.localeCompare(b.name));
  const globals = sorted.filter((s) => s.projectId === null && s.agentId === null);
  const scoped = sorted.filter((s) => s.projectId !== null || s.agentId !== null);

  return (
    <div className="space-y-6">
      <header className="flex items-end justify-between">
        <div>
          <h1 className="text-2xl font-semibold tracking-tight">Skills</h1>
          <p className="text-sm text-(--color-fg-muted)">
            Materialised onto every dev box by <code className="font-mono text-xs">mcpctl skills sync</code>.
          </p>
        </div>
        <span className="text-sm text-(--color-fg-muted)">
          {sorted.length} {sorted.length === 1 ? 'skill' : 'skills'}
        </span>
      </header>

      {sorted.length === 0 && (
        <Card>
          <CardContent className="p-8 text-center text-(--color-fg-muted)">
            No skills defined yet. Create one with{' '}
            <code className="rounded bg-(--color-surface-hi) px-1 py-0.5 font-mono text-xs">
              mcpctl create skill {'<name>'}
            </code>
            .
          </CardContent>
        </Card>
      )}

      {globals.length > 0 && (
        <section className="space-y-3">
          <h2 className="text-xs font-semibold uppercase tracking-wider text-(--color-fg-muted)">
            Global
          </h2>
          <div className="grid grid-cols-1 gap-3 md:grid-cols-2 xl:grid-cols-3">
            {globals.map((s) => <SkillCard key={s.id} skill={s} />)}
          </div>
        </section>
      )}

      {scoped.length > 0 && (
        <section className="space-y-3">
          <h2 className="text-xs font-semibold uppercase tracking-wider text-(--color-fg-muted)">
            Project- and agent-scoped
          </h2>
          <div className="grid grid-cols-1 gap-3 md:grid-cols-2 xl:grid-cols-3">
            {scoped.map((s) => <SkillCard key={s.id} skill={s} />)}
          </div>
        </section>
      )}
    </div>
  );
}

function SkillCard({ skill }: { skill: Skill }): React.JSX.Element {
  const ScopeIcon =
    skill.projectId !== null ? FolderKanban : skill.agentId !== null ? Bot : Globe;
  const scopeLabel =
    skill.project?.name
      ? `project: ${skill.project.name}`
      : skill.agent?.name
        ? `agent: ${skill.agent.name}`
        : 'global';

  return (
    <Link to={`/skills/${encodeURIComponent(skill.name)}`}>
      <Card className="h-full transition-colors hover:bg-(--color-surface-hi)">
        <CardHeader className="space-y-2">
          <div className="flex items-start justify-between gap-2">
            <CardTitle className="font-mono text-sm">
              <Sparkles className="mr-1.5 inline size-3.5 text-(--color-primary)" />
              {skill.name}
            </CardTitle>
            <Badge variant="info">v{skill.semver}</Badge>
          </div>
          {skill.description && (
            <p className="text-sm text-(--color-fg-muted) line-clamp-2">{skill.description}</p>
          )}
        </CardHeader>
        <CardContent>
          <div className="flex items-center gap-1.5 text-xs text-(--color-fg-subtle)">
            <ScopeIcon className="size-3" />
            <span>{scopeLabel}</span>
          </div>
        </CardContent>
      </Card>
    </Link>
  );
}
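The sort-then-partition step in `SkillsPage` can be exercised without React. A sketch over a minimal structural subset of the `Skill` type (field names taken from the code above; the `SkillScope` interface and `partitionByScope` helper are illustrative, not part of the app):

```typescript
// Minimal shape: only the fields the partition logic reads.
interface SkillScope { name: string; projectId: string | null; agentId: string | null }

// Sort by name, then split into global skills (no project, no agent)
// and scoped skills (attached to a project or an agent).
function partitionByScope(skills: SkillScope[]): { globals: SkillScope[]; scoped: SkillScope[] } {
  const sorted = [...skills].sort((a, b) => a.name.localeCompare(b.name));
  return {
    globals: sorted.filter((s) => s.projectId === null && s.agentId === null),
    scoped: sorted.filter((s) => s.projectId !== null || s.agentId !== null),
  };
}
```

Sorting once before filtering keeps both sections alphabetical without re-sorting each subset.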
@@ -1,6 +1,7 @@
 /// <reference types="vitest/config" />
 import { defineConfig } from 'vite';
 import react from '@vitejs/plugin-react';
+import tailwindcss from '@tailwindcss/vite';

 /**
  * Vite config for the @mcpctl/web prompt editor.
@@ -16,7 +17,7 @@ import react from '@vitejs/plugin-react';
 const apiTarget = process.env['MCPCTL_API_URL'] ?? 'https://mcpctl.ad.itaz.eu';

 export default defineConfig({
-  plugins: [react()],
+  plugins: [react(), tailwindcss()],
   base: '/ui/',
   server: {
     port: 5173,