feat: tiered LLM providers (fast/heavy) with multi-provider config

Adds tier-based LLM routing so fast local models (vLLM, Ollama) handle
structured tasks while cloud models (Gemini, Anthropic) are reserved for
heavy reasoning. Single-provider configs continue to work via fallback.

- Tier type + ProviderRegistry with assignTier/getProvider/fallback chain
- Multi-provider config format: { providers: [{ name, type, tier, ... }] }
- NamedProvider wrapper for multiple instances of same provider type
- Setup wizard: Simple (legacy) / Advanced (fast+heavy tiers) modes
- Status display: tiered view with /llm/providers endpoint
- Call sites use getProvider('fast') instead of getActive()
- Full backward compatibility with existing single-provider configs
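The tier routing and fallback chain described above can be sketched roughly as follows. The names `Tier`, `ProviderRegistry`, `assignTier`, and `getProvider` come from this commit's summary, but the `Provider` interface, method bodies, and fallback order shown here are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical sketch of the tiered provider registry. Assumed behavior:
// exact tier match first, then any other configured tier, then the legacy
// single-provider fallback.
type Tier = 'fast' | 'heavy';

// Minimal provider shape assumed for illustration.
interface Provider {
  name: string;
  complete(prompt: string): Promise<string>;
}

class ProviderRegistry {
  private byTier = new Map<Tier, Provider>();
  private fallback: Provider | null = null; // legacy single-provider config

  assignTier(tier: Tier, provider: Provider): void {
    this.byTier.set(tier, provider);
  }

  setFallback(provider: Provider): void {
    this.fallback = provider;
  }

  // Fallback chain: requested tier -> any other tier -> legacy provider.
  getProvider(tier: Tier): Provider {
    const exact = this.byTier.get(tier);
    if (exact) return exact;
    for (const p of this.byTier.values()) return p;
    if (this.fallback) return this.fallback;
    throw new Error(`no provider configured for tier "${tier}"`);
  }
}
```

With a multi-provider config like `{ providers: [{ name: 'local', type: 'ollama', tier: 'fast' }, { name: 'cloud', type: 'gemini', tier: 'heavy' }] }`, each entry would be registered under its tier, while a legacy single-provider config would only populate the fallback slot, so `getProvider('fast')` still resolves.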

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Michal
Date:   2026-02-25 02:16:08 +00:00
parent 7b5a658d9b
commit d2be0d7198
17 changed files with 834 additions and 285 deletions


@@ -26,6 +26,7 @@ function baseDeps(overrides?: Partial<StatusCommandDeps>): Partial<StatusCommand
log,
write,
checkHealth: async () => true,
fetchProviders: async () => null,
isTTY: false,
...overrides,
};