fix(mcplocal): thread client bearer into per-upstream McpdClient
Symptom: HTTP-mode mcplocal accepted the incoming mcpctl_pat_ bearer, but
every /api/v1/mcp/proxy call to mcpd for upstream discovery came back with
"Authentication failed: invalid or expired token". Those proxy calls were
using the pod's DEFAULT McpdClient token, which in a container with no
~/.mcpctl/credentials is the empty string. The discovery GET was correct
(explicit authOverride in forward()), but syncUpstreams() then created
McpdUpstream instances bound to the original mcpdClient, so every
tools/list to each upstream went out with `Authorization: Bearer ` (empty)
and mcpd's auth hook rejected it.

Fix: add McpdClient.withToken(token) and have refreshProjectUpstreams swap
to `mcpdClient.withToken(authToken)` before handing the client to
syncUpstreams. This keeps the "pod has no identity" design: the token used
for downstream /api/v1/mcp/proxy calls is the caller's McpToken, the same
one used for the initial discovery GET and for introspect.

Tested: project-discovery.test.ts and mcpd-upstream.test.ts pass.

Next: rebuild and roll the mcplocal image, then retry the LiteLLM probe.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
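A minimal sketch of the shape this fix implies, assuming McpdClient holds
its base URL and token as readonly fields; the real constructor, fields,
and syncUpstreams signature are not part of this commit:

    // Sketch only: field names, the constructor, and the forward() shape
    // are assumptions for illustration, not the actual McpdClient source.
    class McpdClient {
      constructor(
        private readonly baseUrl: string,
        private readonly token: string,
      ) {}

      // New in this commit: a copy of the client bound to a different
      // bearer, leaving the pod-default instance untouched.
      withToken(token: string): McpdClient {
        return new McpdClient(this.baseUrl, token);
      }

      // The commit notes forward() already supports an explicit
      // authOverride for the discovery GET; modeled here as an optional
      // parameter that falls back to the bound token.
      async forward(
        path: string,
        authOverride?: string,
      ): Promise<{ status: number; body: unknown }> {
        const res = await fetch(`${this.baseUrl}${path}`, {
          headers: { Authorization: `Bearer ${authOverride ?? this.token}` },
        });
        return { status: res.status, body: await res.json() };
      }
    }

    // Hypothetical glue mirroring the described refreshProjectUpstreams
    // change; syncUpstreams itself is not shown in this commit.
    async function refreshProjectUpstreams(
      mcpdClient: McpdClient,
      authToken: string,
      syncUpstreams: (client: McpdClient) => Promise<void>,
    ): Promise<void> {
      // Rebind to the caller's McpToken before any per-upstream call goes
      // out, so tools/list no longer carries the pod's empty default token.
      await syncUpstreams(mcpdClient.withToken(authToken));
    }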
@@ -13,6 +13,7 @@ function mockMcpdClient(servers: Array<{ id: string; name: string; transport: st
     forward: vi.fn(async () => ({ status: 200, body: servers })),
     withTimeout: vi.fn(() => client),
     withHeaders: vi.fn(() => client),
+    withToken: vi.fn(() => client),
   };
   return client;
 }
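Since every builder method on the mock returns the same object, chained
production-style calls all resolve to the single stubbed forward(). An
illustrative vitest check (the server shape and proxy path are examples,
not lines from the real tests), assuming it lives in the same test file
as mockMcpdClient:

    import { expect, it } from 'vitest';

    it('threads the caller bearer through withToken()', async () => {
      const client = mockMcpdClient([
        { id: 's1', name: 'srv', transport: 'http' },
      ]);

      // withToken() hands back the same mock, so forward() stays stubbed.
      const res = await client
        .withToken('mcpctl_pat_example')
        .forward('/api/v1/mcp/proxy');

      expect(res.status).toBe(200);
      expect(client.withToken).toHaveBeenCalledWith('mcpctl_pat_example');
    });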