feat: tiered LLM providers (fast/heavy) with multi-provider config
Adds tier-based LLM routing so fast local models (vLLM, Ollama) handle
structured tasks while cloud models (Gemini, Anthropic) are reserved for
heavy reasoning. Single-provider configs continue to work via fallback.
- Tier type + ProviderRegistry with assignTier/getProvider/fallback chain
- Multi-provider config format: { providers: [{ name, type, tier, ... }] }
- NamedProvider wrapper for multiple instances of same provider type
- Setup wizard: Simple (legacy) / Advanced (fast+heavy tiers) modes
- Status display: tiered view with /llm/providers endpoint
- Call sites use getProvider('fast') instead of getActive()
- Full backward compatibility with existing single-provider configs
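The tier routing in the bullets above can be sketched as follows. The names `Tier`, `ProviderRegistry`, `assignTier`, and `getProvider` come from the commit summary; the `Provider` shape and all method bodies are illustrative assumptions, not the actual implementation:

```typescript
// Sketch of the tiered registry described above. Class/method names follow
// the commit summary; bodies are assumptions for illustration only.

type Tier = 'fast' | 'heavy';

interface Provider {
  name: string;   // instance name, e.g. 'local-vllm' (hypothetical)
  type: string;   // provider type, e.g. 'vllm' | 'ollama' | 'gemini'
  complete(prompt: string): Promise<string>;
}

class ProviderRegistry {
  private byTier = new Map<Tier, Provider>();
  private legacy?: Provider; // single-provider (legacy) config

  register(provider: Provider, tier?: Tier): void {
    if (tier) this.assignTier(tier, provider);
    else this.legacy = provider; // backward-compatible path
  }

  assignTier(tier: Tier, provider: Provider): void {
    this.byTier.set(tier, provider);
  }

  // Fallback chain: exact tier -> any assigned tier -> legacy provider.
  getProvider(tier: Tier): Provider {
    const exact = this.byTier.get(tier);
    if (exact) return exact;
    for (const p of this.byTier.values()) return p;
    if (this.legacy) return this.legacy;
    throw new Error(`no provider available for tier '${tier}'`);
  }
}
```

With this shape, a single-provider config registers one provider with no tier, and `getProvider('fast')` or `getProvider('heavy')` still resolves to it through the chain, which is how legacy configs keep working.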
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -26,6 +26,7 @@ function baseDeps(overrides?: Partial<StatusCommandDeps>): Partial<StatusCommand
     log,
     write,
     checkHealth: async () => true,
+    fetchProviders: async () => null,
     isTTY: false,
     ...overrides,
   };