perf: vitest threads pool + Dockerfile pnpm cache mount
Some checks failed
CI/CD / typecheck (pull_request) Successful in 56s
CI/CD / test (pull_request) Successful in 1m9s
CI/CD / lint (pull_request) Successful in 2m40s
CI/CD / smoke (pull_request) Failing after 1m43s
CI/CD / build (pull_request) Failing after 7m6s
CI/CD / publish (pull_request) Has been skipped
Two tuning knobs that were leaving most of the host idle:

1) vitest.config.ts: pool=threads with maxThreads ≈ cores/2. The implicit default left this 64-core workstation at ~10% CPU during `pnpm test:run`. The threads pool actually uses the box: the same 152-file/2050-test suite now runs at ~700% CPU instead of ~150%. The wall-time gain is modest (the workload is dominated by a handful of slow individual files that one thread must run serially), but the parallel headroom is there for when the suite grows. Cap = max(2, cores/2) keeps laptops reasonable; override with `VITEST_MAX_THREADS=N` in the env.

2) Dockerfile.mcpd: BuildKit cache mounts on both pnpm install steps. Adds `# syntax=docker/dockerfile:1.6` and a `--mount=type=cache,target=/root/.local/share/pnpm/store` so pnpm's content-addressed store survives across image rebuilds. Cold rebuilds where the lockfile changed are unaffected; warm rebuilds where only source changed drop the install step from ~60s to <5s. fulldeploy.sh's mcpd image rebuild gets that back minus the docker push hash mismatch.

Test parity: 2050/2050 across 152 files; per-package mcpd 837/837. Both unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
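The Dockerfile.mcpd hunks are not included in this page, so here is a minimal sketch of what a BuildKit cache-mounted pnpm install step looks like. The base image, paths, and surrounding steps are illustrative assumptions, not the actual Dockerfile.mcpd contents; the two load-bearing lines are the `syntax` directive and the `--mount=type=cache` flag.

```dockerfile
# syntax=docker/dockerfile:1.6
FROM node:20-slim
RUN corepack enable
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
# The cache mount keeps pnpm's content-addressed store on the build host
# across rebuilds; the target matches pnpm's default store-dir, so a warm
# rebuild only re-links from the store instead of re-downloading packages.
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile
COPY . .
RUN pnpm build
```

The cache mount lives outside the image layers, which is why it survives a changed `COPY . .` layer (warm rebuild) but contributes nothing new when the lockfile itself changes (cold rebuild).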
vitest.config.ts, @@ -1,8 +1,21 @@:

```typescript
import { defineConfig } from 'vitest/config';
import { availableParallelism } from 'node:os';

// Default vitest's pool to ~half the CPU threads we have. The previous
// implicit default left this 64-thread workstation at ~10% utilization
// during `pnpm test:run`. Half is a soft cap that stays kind to laptops
// (8-thread → 4 workers) while letting beefy hosts push closer to the
// box's actual capacity. Override at run time with VITEST_MAX_THREADS.
const cores = availableParallelism();
const maxThreads = Number(process.env['VITEST_MAX_THREADS'] ?? Math.max(2, Math.floor(cores / 2)));

export default defineConfig({
  test: {
    globals: true,
    pool: 'threads',
    poolOptions: {
      threads: { maxThreads, minThreads: 1 },
    },
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
```
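The cap formula can be checked in isolation. A minimal sketch mirroring the config's logic (the `threadCap` helper name is illustrative, not from the repo):

```typescript
import { availableParallelism } from 'node:os';

// Same expression as vitest.config.ts: half the host's threads, floored,
// never below 2, with an explicit override taking precedence when set.
function threadCap(cores: number, envOverride?: string): number {
  return Number(envOverride ?? Math.max(2, Math.floor(cores / 2)));
}

console.log(threadCap(8));        // 4  — laptop stays reasonable
console.log(threadCap(64));       // 32 — big workstation gets headroom
console.log(threadCap(2));        // 2  — floor keeps at least 2 workers
console.log(threadCap(64, '48')); // 48 — VITEST_MAX_THREADS wins
console.log(threadCap(availableParallelism())); // whatever this host computes
```

Note that the override path runs the env string through `Number()`, so a non-numeric `VITEST_MAX_THREADS` yields `NaN` rather than falling back to the computed cap; the config assumes the override, when set, is a valid integer.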