- Add warmup() to LlmProvider interface for eager subprocess startup
- ManagedVllmProvider.warmup() starts vLLM in the background on project load
- ProviderRegistry.warmupAll() triggers all managed providers
- NamedProvider proxies warmup() to the inner provider
- paginate stage generates LLM-powered descriptive page titles when available, cached by content hash; falls back to generic "Page N"
- project-mcp-endpoint calls warmupAll() on router creation so vLLM is loading while the session initializes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
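The warmup surface described above can be sketched in TypeScript. This is a minimal illustration under assumptions: the real ManagedVllmProvider spawns a vLLM subprocess (stubbed here with a flag), and the method bodies, fields, and the `complete()` signature are hypothetical, not the project's actual code.

```typescript
// Sketch of the eager-warmup interface. warmup() is optional so
// providers without a managed subprocess need not implement it.
interface LlmProvider {
  complete(prompt: string): Promise<string>;
  warmup?(): void;
}

// In the real provider this would start vLLM in the background;
// here we only record that startup was requested.
class ManagedVllmProvider implements LlmProvider {
  started = false;
  warmup(): void {
    this.started = true;
  }
  async complete(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}

// Wraps an inner provider and forwards warmup() to it.
class NamedProvider implements LlmProvider {
  constructor(public name: string, private inner: LlmProvider) {}
  warmup(): void {
    this.inner.warmup?.();
  }
  complete(prompt: string): Promise<string> {
    return this.inner.complete(prompt);
  }
}

// warmupAll() fans out to every registered provider that supports it,
// so the endpoint can trigger loading as soon as the router is created.
class ProviderRegistry {
  private providers: LlmProvider[] = [];
  register(p: LlmProvider): void {
    this.providers.push(p);
  }
  warmupAll(): void {
    for (const p of this.providers) p.warmup?.();
  }
}

const vllm = new ManagedVllmProvider();
const registry = new ProviderRegistry();
registry.register(new NamedProvider("local-vllm", vllm));
registry.warmupAll(); // vLLM startup begins without blocking the caller
```

Making warmup() optional on the interface keeps non-managed providers (e.g. remote APIs) untouched while letting the registry trigger all managed ones in one call.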
13 lines · 382 B · Docker
# Base container for Python/uvx-based MCP servers (STDIO transport).
# mcpd uses this image to run `uvx <packageName>` when a server
# has packageName with runtime=python but no dockerImage.
FROM python:3.12-slim

WORKDIR /mcp

# Install uv (which provides uvx)
RUN pip install --no-cache-dir uv

# Default entrypoint; overridden by mcpd via container command
ENTRYPOINT ["uvx"]