mirror of
https://github.com/openclaw/openclaw.git
synced 2026-03-19 14:00:51 +00:00
feat(plugins): move provider runtimes into bundled plugins
@@ -15,6 +15,7 @@ Docs: https://docs.openclaw.ai

- Android/nodes: add `callLog.search` plus shared Call Log permission wiring so Android nodes can search recent call history through the gateway. (#44073) Thanks @lxk7280.
- Docs/Zalo: clarify the Marketplace-bot support matrix and config guidance so the Zalo channel docs match current Bot Creator behavior more closely. (#47552) Thanks @No898.
- Install/update: allow package-manager installs from GitHub `main` via `openclaw update --tag main`, installer `--version main`, or direct npm/pnpm git specs.
- Plugins/providers: move OpenRouter, GitHub Copilot, and OpenAI Codex provider/runtime logic into bundled plugins, including dynamic model fallback, runtime auth exchange, stream wrappers, capability hints, and cache-TTL policy.

### Fixes
@@ -16,6 +16,46 @@ For model selection rules, see [/concepts/models](/concepts/models).

- Model refs use `provider/model` (example: `opencode/claude-opus-4-6`).
- If you set `agents.defaults.models`, it becomes the allowlist.
- CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
- Provider plugins can inject model catalogs via `registerProvider({ catalog })`;
  OpenClaw merges that output into `models.providers` before writing
  `models.json`.
- Provider plugins can also own provider runtime behavior via
  `resolveDynamicModel`, `prepareDynamicModel`, `normalizeResolvedModel`,
  `capabilities`, `prepareExtraParams`, `wrapStreamFn`,
  `isCacheTtlEligible`, and `prepareRuntimeAuth`.
## Plugin-owned provider behavior

Provider plugins can now own most provider-specific logic while OpenClaw keeps
the generic inference loop.

Typical split:

- `catalog`: provider appears in `models.providers`
- `resolveDynamicModel`: provider accepts model ids not present in the local
  static catalog yet
- `prepareDynamicModel`: provider needs a metadata refresh before retrying
  dynamic resolution
- `normalizeResolvedModel`: provider needs transport or base URL rewrites
- `capabilities`: provider publishes transcript/tooling/provider-family quirks
- `prepareExtraParams`: provider defaults or normalizes per-model request params
- `wrapStreamFn`: provider applies request headers/body/model compat wrappers
- `isCacheTtlEligible`: provider decides which upstream model ids support prompt-cache TTL
- `prepareRuntimeAuth`: provider turns a configured credential into a
  short-lived runtime token

Current bundled examples:

- `openrouter`: pass-through model ids, request wrappers, provider capability
  hints, and cache-TTL policy
- `github-copilot`: forward-compat model fallback, Claude-thinking transcript
  hints, and runtime token exchange
- `openai-codex`: forward-compat model fallback, transport normalization, and
  default transport params

That covers providers that still fit OpenClaw's normal transports. A provider
that needs a fully custom request executor is a separate, deeper extension
surface.
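Of the hooks in the split above, `isCacheTtlEligible` is the simplest to picture: a plugin-side policy can be a plain prefix check. A minimal sketch (the prefixes here are illustrative, not the actual bundled list):

```ts
// Illustrative prefix list; the real bundled plugins ship their own policy.
const CACHE_TTL_MODEL_PREFIXES = ["anthropic/", "moonshotai/"] as const;

// Decide whether an upstream model id should use prompt-cache TTL metadata.
function isCacheTtlEligible(modelId: string): boolean {
  const id = modelId.trim().toLowerCase();
  return CACHE_TTL_MODEL_PREFIXES.some((prefix) => id.startsWith(prefix));
}
```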
## API key rotation
@@ -105,6 +105,9 @@ Important trust note:

- [Microsoft Teams](/channels/msteams) — `@openclaw/msteams`
- Google Antigravity OAuth (provider auth) — bundled as `google-antigravity-auth` (disabled by default)
- Gemini CLI OAuth (provider auth) — bundled as `google-gemini-cli-auth` (disabled by default)
- GitHub Copilot provider runtime — bundled as `github-copilot` (enabled by default)
- OpenAI Codex provider runtime — bundled as `openai-codex` (enabled by default)
- OpenRouter provider runtime — bundled as `openrouter` (enabled by default)
- Qwen OAuth (provider auth) — bundled as `qwen-portal-auth` (disabled by default)
- Copilot Proxy (provider auth) — local VS Code Copilot Proxy bridge; distinct from built-in `github-copilot` device login (bundled, disabled by default)
@@ -120,6 +123,8 @@ Plugins can register:

- CLI commands
- Background services
- Context engines
- Provider auth flows and model catalogs
- Provider runtime hooks for dynamic model ids, transport normalization, capability metadata, stream wrapping, cache TTL policy, and runtime auth exchange
- Optional config validation
- **Skills** (by listing `skills` directories in the plugin manifest)
- **Auto-reply commands** (execute without invoking the AI agent)

@@ -127,6 +132,137 @@ Plugins can register:

Plugins run **in‑process** with the Gateway, so treat them as trusted code.
Tool authoring guide: [Plugin agent tools](/plugins/agent-tools).
## Provider runtime hooks

Provider plugins now have two layers:

- config-time hooks: `catalog` / legacy `discovery`
- runtime hooks: `resolveDynamicModel`, `prepareDynamicModel`, `normalizeResolvedModel`, `capabilities`, `prepareExtraParams`, `wrapStreamFn`, `isCacheTtlEligible`, `prepareRuntimeAuth`

OpenClaw still owns the generic agent loop, failover, transcript handling, and
tool policy. These hooks are the seam for provider-specific behavior without
needing a whole custom inference transport.

### Hook order

For model/provider plugins, OpenClaw uses hooks in this rough order:

1. `catalog`
   Publish provider config into `models.providers` during `models.json`
   generation.
2. Built-in/discovered model lookup
   OpenClaw tries the normal registry/catalog path first.
3. `resolveDynamicModel`
   Sync fallback for provider-owned model ids that are not in the local
   registry yet.
4. `prepareDynamicModel`
   Async warm-up that runs only on async model resolution paths; afterwards
   `resolveDynamicModel` runs again.
5. `normalizeResolvedModel`
   Final rewrite before the embedded runner uses the resolved model.
6. `capabilities`
   Provider-owned transcript/tooling metadata used by shared core logic.
7. `prepareExtraParams`
   Provider-owned request-param normalization before generic stream option wrappers.
8. `wrapStreamFn`
   Provider-owned stream wrapper applied after the generic wrappers.
9. `isCacheTtlEligible`
   Provider-owned prompt-cache policy for proxy/backhaul providers.
10. `prepareRuntimeAuth`
    Exchanges a configured credential for the actual runtime token/key just
    before inference.
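Steps 2–4 of the order above can be sketched as a small resolver loop. This is a hypothetical illustration (the types and the `resolveModel` helper are made up, not OpenClaw's actual resolver): try the normal registry first, then the sync hook, then warm up asynchronously and resolve once more.

```ts
type Model = { id: string };
type Plugin = {
  resolveDynamicModel?: (modelId: string) => Model | undefined;
  prepareDynamicModel?: (modelId: string) => Promise<void>;
};

async function resolveModel(
  registry: Map<string, Model>,
  plugin: Plugin,
  modelId: string,
): Promise<Model | undefined> {
  const known = registry.get(modelId); // step 2: normal registry/catalog lookup
  if (known) return known;
  const dynamic = plugin.resolveDynamicModel?.(modelId); // step 3: sync fallback
  if (dynamic) return dynamic;
  await plugin.prepareDynamicModel?.(modelId); // step 4: async warm-up...
  return plugin.resolveDynamicModel?.(modelId); // ...then resolve again
}
```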
### Which hook to use

- `catalog`: publish provider config and model catalogs into `models.providers`
- `resolveDynamicModel`: handle pass-through or forward-compat model ids that are not in the local registry yet
- `prepareDynamicModel`: async warm-up before retrying dynamic resolution (for example, refreshing a provider metadata cache)
- `normalizeResolvedModel`: rewrite a resolved model's transport/base URL/compat before inference
- `capabilities`: publish provider-family and transcript/tooling quirks without hardcoding provider ids in core
- `prepareExtraParams`: set provider defaults or normalize provider-specific per-model params before generic stream wrapping
- `wrapStreamFn`: add provider-specific headers/payload/model compat patches while still using the normal `pi-ai` execution path
- `isCacheTtlEligible`: decide whether provider/model pairs should use cache TTL metadata
- `prepareRuntimeAuth`: exchange a configured credential for the actual short-lived runtime token/key used for requests

Rule of thumb:

- provider owns a catalog or base URL defaults: use `catalog`
- provider accepts arbitrary upstream model ids: use `resolveDynamicModel`
- provider needs network metadata before resolving unknown ids: add `prepareDynamicModel`
- provider needs transport rewrites but still uses a core transport: use `normalizeResolvedModel`
- provider needs transcript/provider-family quirks: use `capabilities`
- provider needs default request params or per-provider param cleanup: use `prepareExtraParams`
- provider needs request headers/body/model compat wrappers without a custom transport: use `wrapStreamFn`
- provider needs proxy-specific cache TTL gating: use `isCacheTtlEligible`
- provider needs a token exchange or a short-lived request credential: use `prepareRuntimeAuth`

If the provider needs a fully custom wire protocol or custom request executor,
that is a different class of extension. These hooks are for provider behavior
that still runs on OpenClaw's normal inference loop.
### Example

```ts
api.registerProvider({
  id: "example-proxy",
  label: "Example Proxy",
  auth: [],
  catalog: {
    order: "simple",
    run: async (ctx) => {
      const apiKey = ctx.resolveProviderApiKey("example-proxy").apiKey;
      if (!apiKey) {
        return null;
      }
      return {
        provider: {
          baseUrl: "https://proxy.example.com/v1",
          apiKey,
          api: "openai-completions",
          models: [{ id: "auto", name: "Auto" }],
        },
      };
    },
  },
  resolveDynamicModel: (ctx) => ({
    id: ctx.modelId,
    name: ctx.modelId,
    provider: "example-proxy",
    api: "openai-completions",
    baseUrl: "https://proxy.example.com/v1",
    reasoning: false,
    input: ["text"],
    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
    contextWindow: 128000,
    maxTokens: 8192,
  }),
  prepareRuntimeAuth: async (ctx) => {
    const exchanged = await exchangeToken(ctx.apiKey);
    return {
      apiKey: exchanged.token,
      baseUrl: exchanged.baseUrl,
      expiresAt: exchanged.expiresAt,
    };
  },
});
```
### Built-in examples

- OpenRouter uses `catalog` plus `resolveDynamicModel` and
  `prepareDynamicModel` because the provider is pass-through and may expose new
  model ids before OpenClaw's static catalog updates.
- GitHub Copilot uses `catalog`, `resolveDynamicModel`, and `capabilities` plus
  `prepareRuntimeAuth` because it needs model fallback behavior, Claude
  transcript quirks, and a GitHub token -> Copilot token exchange.
- OpenAI Codex uses `catalog`, `resolveDynamicModel`, and
  `normalizeResolvedModel` plus `prepareExtraParams` because it still runs on
  core OpenAI transports but owns its transport/base URL normalization and
  default transport choice.
- OpenRouter also uses `capabilities`, `wrapStreamFn`, and `isCacheTtlEligible`
  to keep provider-specific request headers, routing metadata, reasoning
  patches, and prompt-cache policy out of core.
## Load pipeline

At startup, OpenClaw does roughly this:

@@ -268,6 +404,36 @@ authoring plugins:
`openclaw/plugin-sdk/twitch`, `openclaw/plugin-sdk/voice-call`,
`openclaw/plugin-sdk/zalo`, and `openclaw/plugin-sdk/zalouser`.
## Provider catalogs

Provider plugins can define model catalogs for inference with
`registerProvider({ catalog: { run(...) { ... } } })`.

`catalog.run(...)` returns the same shape OpenClaw writes into
`models.providers`:

- `{ provider }` for one provider entry
- `{ providers }` for multiple provider entries

Use `catalog` when the plugin owns provider-specific model ids, base URL
defaults, or auth-gated model metadata.
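As a rough sketch, the two return shapes look like this. The `acme` provider ids, URLs, and the field types are made up for illustration; only the `provider` / `providers` keys follow the docs above.

```ts
type ModelEntry = { id: string; name: string };
type ProviderEntry = { baseUrl: string; models: ModelEntry[] };

// `{ provider }`: one provider entry
const single: { provider: ProviderEntry } = {
  provider: {
    baseUrl: "https://api.acme.example/v1",
    models: [{ id: "auto", name: "Auto" }],
  },
};

// `{ providers }`: multiple entries, keyed by provider id
const multiple: { providers: Record<string, ProviderEntry> } = {
  providers: {
    acme: { baseUrl: "https://api.acme.example/v1", models: [] },
    "acme-eu": { baseUrl: "https://eu.acme.example/v1", models: [] },
  },
};
```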
`catalog.order` controls when a plugin's catalog merges relative to OpenClaw's
built-in implicit providers:

- `simple`: plain API-key or env-driven providers
- `profile`: providers that appear when auth profiles exist
- `paired`: providers that synthesize multiple related provider entries
- `late`: last pass, after other implicit providers

Later providers win on key collision, so plugins can intentionally override a
built-in provider entry with the same provider id.
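The collision rule amounts to a last-write-wins merge across passes. A minimal sketch (hypothetical helper and types, not OpenClaw's actual merge code):

```ts
type ProviderEntry = { baseUrl: string; models: { id: string }[] };

// Merge catalog passes in order; a later entry with the same provider id
// replaces an earlier one, which is how a plugin overrides a built-in entry.
function mergeCatalogs(
  passes: Record<string, ProviderEntry>[],
): Record<string, ProviderEntry> {
  const merged: Record<string, ProviderEntry> = {};
  for (const pass of passes) {
    for (const [id, entry] of Object.entries(pass)) {
      merged[id] = entry; // later passes win on key collision
    }
  }
  return merged;
}
```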
Compatibility:

- `discovery` still works as a legacy alias
- if both `catalog` and `discovery` are registered, OpenClaw uses `catalog`
- `openclaw/plugin-sdk` remains supported for existing external plugins
49
extensions/github-copilot/index.test.ts
Normal file
@@ -0,0 +1,49 @@
import { describe, expect, it } from "vitest";
import type { ProviderPlugin } from "../../src/plugins/types.js";
import githubCopilotPlugin from "./index.js";

function registerProvider(): ProviderPlugin {
  let provider: ProviderPlugin | undefined;
  githubCopilotPlugin.register({
    registerProvider(nextProvider: ProviderPlugin) {
      provider = nextProvider;
    },
  } as never);
  if (!provider) {
    throw new Error("provider registration missing");
  }
  return provider;
}

describe("github-copilot plugin", () => {
  it("owns Copilot-specific forward-compat fallbacks", () => {
    const provider = registerProvider();
    const model = provider.resolveDynamicModel?.({
      provider: "github-copilot",
      modelId: "gpt-5.3-codex",
      modelRegistry: {
        find: (_provider: string, id: string) =>
          id === "gpt-5.2-codex"
            ? {
                id,
                name: id,
                api: "openai-codex-responses",
                provider: "github-copilot",
                baseUrl: "https://api.copilot.example",
                reasoning: true,
                input: ["text"],
                cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
                contextWindow: 128_000,
                maxTokens: 8_192,
              }
            : null,
      } as never,
    });

    expect(model).toMatchObject({
      id: "gpt-5.3-codex",
      provider: "github-copilot",
      api: "openai-codex-responses",
    });
  });
});
137
extensions/github-copilot/index.ts
Normal file
@@ -0,0 +1,137 @@
import {
  emptyPluginConfigSchema,
  type OpenClawPluginApi,
  type ProviderResolveDynamicModelContext,
  type ProviderRuntimeModel,
} from "openclaw/plugin-sdk/core";
import { listProfilesForProvider } from "../../src/agents/auth-profiles/profiles.js";
import { ensureAuthProfileStore } from "../../src/agents/auth-profiles/store.js";
import { normalizeModelCompat } from "../../src/agents/model-compat.js";
import { coerceSecretRef } from "../../src/config/types.secrets.js";
import {
  DEFAULT_COPILOT_API_BASE_URL,
  resolveCopilotApiToken,
} from "../../src/providers/github-copilot-token.js";

const PROVIDER_ID = "github-copilot";
const COPILOT_ENV_VARS = ["COPILOT_GITHUB_TOKEN", "GH_TOKEN", "GITHUB_TOKEN"];
const CODEX_GPT_53_MODEL_ID = "gpt-5.3-codex";
const CODEX_TEMPLATE_MODEL_IDS = ["gpt-5.2-codex"] as const;

function resolveFirstGithubToken(params: { agentDir?: string; env: NodeJS.ProcessEnv }): {
  githubToken: string;
  hasProfile: boolean;
} {
  const authStore = ensureAuthProfileStore(params.agentDir, {
    allowKeychainPrompt: false,
  });
  const hasProfile = listProfilesForProvider(authStore, PROVIDER_ID).length > 0;
  const envToken =
    params.env.COPILOT_GITHUB_TOKEN ?? params.env.GH_TOKEN ?? params.env.GITHUB_TOKEN ?? "";
  const githubToken = envToken.trim();
  if (githubToken || !hasProfile) {
    return { githubToken, hasProfile };
  }

  const profileId = listProfilesForProvider(authStore, PROVIDER_ID)[0];
  const profile = profileId ? authStore.profiles[profileId] : undefined;
  if (profile?.type !== "token") {
    return { githubToken: "", hasProfile };
  }
  const directToken = profile.token?.trim() ?? "";
  if (directToken) {
    return { githubToken: directToken, hasProfile };
  }
  const tokenRef = coerceSecretRef(profile.tokenRef);
  if (tokenRef?.source === "env" && tokenRef.id.trim()) {
    return {
      githubToken: (params.env[tokenRef.id] ?? process.env[tokenRef.id] ?? "").trim(),
      hasProfile,
    };
  }
  return { githubToken: "", hasProfile };
}

function resolveCopilotForwardCompatModel(
  ctx: ProviderResolveDynamicModelContext,
): ProviderRuntimeModel | undefined {
  const trimmedModelId = ctx.modelId.trim();
  if (trimmedModelId.toLowerCase() !== CODEX_GPT_53_MODEL_ID) {
    return undefined;
  }
  for (const templateId of CODEX_TEMPLATE_MODEL_IDS) {
    const template = ctx.modelRegistry.find(PROVIDER_ID, templateId) as ProviderRuntimeModel | null;
    if (!template) {
      continue;
    }
    return normalizeModelCompat({
      ...template,
      id: trimmedModelId,
      name: trimmedModelId,
    } as ProviderRuntimeModel);
  }
  return undefined;
}

const githubCopilotPlugin = {
  id: "github-copilot",
  name: "GitHub Copilot Provider",
  description: "Bundled GitHub Copilot provider plugin",
  configSchema: emptyPluginConfigSchema(),
  register(api: OpenClawPluginApi) {
    api.registerProvider({
      id: PROVIDER_ID,
      label: "GitHub Copilot",
      docsPath: "/providers/models",
      envVars: COPILOT_ENV_VARS,
      auth: [],
      catalog: {
        order: "late",
        run: async (ctx) => {
          const { githubToken, hasProfile } = resolveFirstGithubToken({
            agentDir: ctx.agentDir,
            env: ctx.env,
          });
          if (!hasProfile && !githubToken) {
            return null;
          }
          let baseUrl = DEFAULT_COPILOT_API_BASE_URL;
          if (githubToken) {
            try {
              const token = await resolveCopilotApiToken({
                githubToken,
                env: ctx.env,
              });
              baseUrl = token.baseUrl;
            } catch {
              baseUrl = DEFAULT_COPILOT_API_BASE_URL;
            }
          }
          return {
            provider: {
              baseUrl,
              models: [],
            },
          };
        },
      },
      resolveDynamicModel: (ctx) => resolveCopilotForwardCompatModel(ctx),
      capabilities: {
        dropThinkingBlockModelHints: ["claude"],
      },
      prepareRuntimeAuth: async (ctx) => {
        const token = await resolveCopilotApiToken({
          githubToken: ctx.apiKey,
          env: ctx.env,
        });
        return {
          apiKey: token.token,
          baseUrl: token.baseUrl,
          expiresAt: token.expiresAt,
        };
      },
    });
  },
};

export default githubCopilotPlugin;
@@ -4,6 +4,7 @@ import {
  type OpenClawPluginApi,
  type ProviderAuthContext,
  type ProviderAuthResult,
  type ProviderCatalogContext,
} from "openclaw/plugin-sdk/minimax-portal-auth";
import { loginMiniMaxPortalOAuth, type MiniMaxRegion } from "./oauth.js";

@@ -14,7 +15,6 @@ const DEFAULT_BASE_URL_CN = "https://api.minimaxi.com/anthropic";
const DEFAULT_BASE_URL_GLOBAL = "https://api.minimax.io/anthropic";
const DEFAULT_CONTEXT_WINDOW = 200000;
const DEFAULT_MAX_TOKENS = 8192;
const OAUTH_PLACEHOLDER = "minimax-oauth";

function getDefaultBaseUrl(region: MiniMaxRegion): string {
  return region === "cn" ? DEFAULT_BASE_URL_CN : DEFAULT_BASE_URL_GLOBAL;

@@ -41,6 +41,53 @@ function buildModelDefinition(params: {
  };
}

function buildProviderCatalog(params: { baseUrl: string; apiKey: string }) {
  return {
    baseUrl: params.baseUrl,
    apiKey: params.apiKey,
    api: "anthropic-messages" as const,
    models: [
      buildModelDefinition({
        id: "MiniMax-M2.5",
        name: "MiniMax M2.5",
        input: ["text"],
      }),
      buildModelDefinition({
        id: "MiniMax-M2.5-highspeed",
        name: "MiniMax M2.5 Highspeed",
        input: ["text"],
        reasoning: true,
      }),
      buildModelDefinition({
        id: "MiniMax-M2.5-Lightning",
        name: "MiniMax M2.5 Lightning",
        input: ["text"],
        reasoning: true,
      }),
    ],
  };
}

function resolveCatalog(ctx: ProviderCatalogContext) {
  const explicitProvider = ctx.config.models?.providers?.[PROVIDER_ID];
  const apiKey =
    ctx.resolveProviderApiKey(PROVIDER_ID).apiKey ??
    (typeof explicitProvider?.apiKey === "string" ? explicitProvider.apiKey.trim() : undefined);
  if (!apiKey) {
    return null;
  }

  const explicitBaseUrl =
    typeof explicitProvider?.baseUrl === "string" ? explicitProvider.baseUrl.trim() : undefined;

  return {
    provider: buildProviderCatalog({
      baseUrl: explicitBaseUrl || DEFAULT_BASE_URL_GLOBAL,
      apiKey,
    }),
  };
}

function createOAuthHandler(region: MiniMaxRegion) {
  const defaultBaseUrl = getDefaultBaseUrl(region);
  const regionLabel = region === "cn" ? "CN" : "Global";

@@ -74,27 +121,7 @@ function createOAuthHandler(region: MiniMaxRegion) {
        providers: {
          [PROVIDER_ID]: {
            baseUrl,
            apiKey: OAUTH_PLACEHOLDER,
            api: "anthropic-messages",
            models: [
              buildModelDefinition({
                id: "MiniMax-M2.5",
                name: "MiniMax M2.5",
                input: ["text"],
              }),
              buildModelDefinition({
                id: "MiniMax-M2.5-highspeed",
                name: "MiniMax M2.5 Highspeed",
                input: ["text"],
                reasoning: true,
              }),
              buildModelDefinition({
                id: "MiniMax-M2.5-Lightning",
                name: "MiniMax M2.5 Lightning",
                input: ["text"],
                reasoning: true,
              }),
            ],
            models: [],
          },
        },
      },

@@ -141,6 +168,9 @@ const minimaxPortalPlugin = {
    label: PROVIDER_LABEL,
    docsPath: "/providers/minimax",
    aliases: ["minimax"],
    catalog: {
      run: async (ctx: ProviderCatalogContext) => resolveCatalog(ctx),
    },
    auth: [
      {
        id: "oauth",
65
extensions/openai-codex/index.test.ts
Normal file
@@ -0,0 +1,65 @@
import { describe, expect, it } from "vitest";
import type { ProviderPlugin } from "../../src/plugins/types.js";
import openAICodexPlugin from "./index.js";

function registerProvider(): ProviderPlugin {
  let provider: ProviderPlugin | undefined;
  openAICodexPlugin.register({
    registerProvider(nextProvider: ProviderPlugin) {
      provider = nextProvider;
    },
  } as never);
  if (!provider) {
    throw new Error("provider registration missing");
  }
  return provider;
}

describe("openai-codex plugin", () => {
  it("owns forward-compat codex models", () => {
    const provider = registerProvider();
    const model = provider.resolveDynamicModel?.({
      provider: "openai-codex",
      modelId: "gpt-5.4",
      modelRegistry: {
        find: (_provider: string, id: string) =>
          id === "gpt-5.2-codex"
            ? {
                id,
                name: id,
                api: "openai-codex-responses",
                provider: "openai-codex",
                baseUrl: "https://chatgpt.com/backend-api",
                reasoning: true,
                input: ["text"],
                cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
                contextWindow: 200_000,
                maxTokens: 8_192,
              }
            : null,
      } as never,
    });

    expect(model).toMatchObject({
      id: "gpt-5.4",
      provider: "openai-codex",
      api: "openai-codex-responses",
      contextWindow: 1_050_000,
      maxTokens: 128_000,
    });
  });

  it("owns codex transport defaults", () => {
    const provider = registerProvider();
    expect(
      provider.prepareExtraParams?.({
        provider: "openai-codex",
        modelId: "gpt-5.4",
        extraParams: { temperature: 0.2 },
      }),
    ).toEqual({
      temperature: 0.2,
      transport: "auto",
    });
  });
});
189
extensions/openai-codex/index.ts
Normal file
@@ -0,0 +1,189 @@
import {
  emptyPluginConfigSchema,
  type OpenClawPluginApi,
  type ProviderResolveDynamicModelContext,
  type ProviderRuntimeModel,
} from "openclaw/plugin-sdk/core";
import { listProfilesForProvider } from "../../src/agents/auth-profiles/profiles.js";
import { ensureAuthProfileStore } from "../../src/agents/auth-profiles/store.js";
import { DEFAULT_CONTEXT_TOKENS } from "../../src/agents/defaults.js";
import { normalizeModelCompat } from "../../src/agents/model-compat.js";
import { normalizeProviderId } from "../../src/agents/model-selection.js";
import { buildOpenAICodexProvider } from "../../src/agents/models-config.providers.static.js";

const PROVIDER_ID = "openai-codex";
const OPENAI_CODEX_BASE_URL = "https://chatgpt.com/backend-api";
const OPENAI_CODEX_GPT_54_MODEL_ID = "gpt-5.4";
const OPENAI_CODEX_GPT_54_CONTEXT_TOKENS = 1_050_000;
const OPENAI_CODEX_GPT_54_MAX_TOKENS = 128_000;
const OPENAI_CODEX_GPT_54_TEMPLATE_MODEL_IDS = ["gpt-5.3-codex", "gpt-5.2-codex"] as const;
const OPENAI_CODEX_GPT_53_MODEL_ID = "gpt-5.3-codex";
const OPENAI_CODEX_GPT_53_SPARK_MODEL_ID = "gpt-5.3-codex-spark";
const OPENAI_CODEX_GPT_53_SPARK_CONTEXT_TOKENS = 128_000;
const OPENAI_CODEX_GPT_53_SPARK_MAX_TOKENS = 128_000;
const OPENAI_CODEX_TEMPLATE_MODEL_IDS = ["gpt-5.2-codex"] as const;

function isOpenAIApiBaseUrl(baseUrl?: string): boolean {
  const trimmed = baseUrl?.trim();
  if (!trimmed) {
    return false;
  }
  return /^https?:\/\/api\.openai\.com(?:\/v1)?\/?$/i.test(trimmed);
}

function isOpenAICodexBaseUrl(baseUrl?: string): boolean {
  const trimmed = baseUrl?.trim();
  if (!trimmed) {
    return false;
  }
  return /^https?:\/\/chatgpt\.com\/backend-api\/?$/i.test(trimmed);
}

function normalizeCodexTransport(model: ProviderRuntimeModel): ProviderRuntimeModel {
  const useCodexTransport =
    !model.baseUrl || isOpenAIApiBaseUrl(model.baseUrl) || isOpenAICodexBaseUrl(model.baseUrl);
  const api =
    useCodexTransport && model.api === "openai-responses" ? "openai-codex-responses" : model.api;
  const baseUrl =
    api === "openai-codex-responses" && (!model.baseUrl || isOpenAIApiBaseUrl(model.baseUrl))
      ? OPENAI_CODEX_BASE_URL
      : model.baseUrl;
  if (api === model.api && baseUrl === model.baseUrl) {
    return model;
  }
  return {
    ...model,
    api,
    baseUrl,
  };
}

function cloneFirstTemplateModel(params: {
  modelId: string;
  templateIds: readonly string[];
  ctx: ProviderResolveDynamicModelContext;
  patch?: Partial<ProviderRuntimeModel>;
}): ProviderRuntimeModel | undefined {
  const trimmedModelId = params.modelId.trim();
  for (const templateId of [...new Set(params.templateIds)].filter(Boolean)) {
    const template = params.ctx.modelRegistry.find(
      PROVIDER_ID,
      templateId,
    ) as ProviderRuntimeModel | null;
    if (!template) {
      continue;
    }
    return normalizeModelCompat({
      ...template,
      id: trimmedModelId,
      name: trimmedModelId,
      ...params.patch,
    } as ProviderRuntimeModel);
  }
  return undefined;
}

function resolveCodexForwardCompatModel(
  ctx: ProviderResolveDynamicModelContext,
): ProviderRuntimeModel | undefined {
  const trimmedModelId = ctx.modelId.trim();
  const lower = trimmedModelId.toLowerCase();

  let templateIds: readonly string[];
  let patch: Partial<ProviderRuntimeModel> | undefined;
  if (lower === OPENAI_CODEX_GPT_54_MODEL_ID) {
    templateIds = OPENAI_CODEX_GPT_54_TEMPLATE_MODEL_IDS;
    patch = {
      contextWindow: OPENAI_CODEX_GPT_54_CONTEXT_TOKENS,
      maxTokens: OPENAI_CODEX_GPT_54_MAX_TOKENS,
    };
  } else if (lower === OPENAI_CODEX_GPT_53_SPARK_MODEL_ID) {
    templateIds = [OPENAI_CODEX_GPT_53_MODEL_ID, ...OPENAI_CODEX_TEMPLATE_MODEL_IDS];
    patch = {
      api: "openai-codex-responses",
      provider: PROVIDER_ID,
      baseUrl: OPENAI_CODEX_BASE_URL,
      reasoning: true,
      input: ["text"],
      cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
      contextWindow: OPENAI_CODEX_GPT_53_SPARK_CONTEXT_TOKENS,
      maxTokens: OPENAI_CODEX_GPT_53_SPARK_MAX_TOKENS,
    };
  } else if (lower === OPENAI_CODEX_GPT_53_MODEL_ID) {
    templateIds = OPENAI_CODEX_TEMPLATE_MODEL_IDS;
  } else {
    return undefined;
  }

  return (
    cloneFirstTemplateModel({
      modelId: trimmedModelId,
      templateIds,
      ctx,
      patch,
    }) ??
    normalizeModelCompat({
      id: trimmedModelId,
      name: trimmedModelId,
      api: "openai-codex-responses",
      provider: PROVIDER_ID,
      baseUrl: OPENAI_CODEX_BASE_URL,
      reasoning: true,
      input: ["text", "image"],
      cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
      contextWindow: patch?.contextWindow ?? DEFAULT_CONTEXT_TOKENS,
      maxTokens: patch?.maxTokens ?? DEFAULT_CONTEXT_TOKENS,
    } as ProviderRuntimeModel)
  );
}

const openAICodexPlugin = {
  id: "openai-codex",
  name: "OpenAI Codex Provider",
  description: "Bundled OpenAI Codex provider plugin",
  configSchema: emptyPluginConfigSchema(),
  register(api: OpenClawPluginApi) {
    api.registerProvider({
      id: PROVIDER_ID,
      label: "OpenAI Codex",
      docsPath: "/providers/models",
      auth: [],
      catalog: {
        order: "profile",
        run: async (ctx) => {
          const authStore = ensureAuthProfileStore(ctx.agentDir, {
            allowKeychainPrompt: false,
          });
          if (listProfilesForProvider(authStore, PROVIDER_ID).length === 0) {
            return null;
          }
          return {
            provider: buildOpenAICodexProvider(),
          };
        },
      },
      resolveDynamicModel: (ctx) => resolveCodexForwardCompatModel(ctx),
      capabilities: {
        providerFamily: "openai",
      },
      prepareExtraParams: (ctx) => {
        const transport = ctx.extraParams?.transport;
        if (transport === "auto" || transport === "sse" || transport === "websocket") {
          return ctx.extraParams;
        }
        return {
          ...ctx.extraParams,
          transport: "auto",
        };
      },
      normalizeResolvedModel: (ctx) => {
        if (normalizeProviderId(ctx.provider) !== PROVIDER_ID) {
          return undefined;
        }
        return normalizeCodexTransport(ctx.model);
      },
    });
  },
};

export default openAICodexPlugin;
134
extensions/openrouter/index.ts
Normal file
@@ -0,0 +1,134 @@
import type { StreamFn } from "@mariozechner/pi-agent-core";
import {
  emptyPluginConfigSchema,
  type OpenClawPluginApi,
  type ProviderResolveDynamicModelContext,
  type ProviderRuntimeModel,
} from "openclaw/plugin-sdk/core";
import { DEFAULT_CONTEXT_TOKENS } from "../../src/agents/defaults.js";
import { buildOpenrouterProvider } from "../../src/agents/models-config.providers.static.js";
import {
  getOpenRouterModelCapabilities,
  loadOpenRouterModelCapabilities,
} from "../../src/agents/pi-embedded-runner/openrouter-model-capabilities.js";
import {
  createOpenRouterSystemCacheWrapper,
  createOpenRouterWrapper,
  isProxyReasoningUnsupported,
} from "../../src/agents/pi-embedded-runner/proxy-stream-wrappers.js";

const PROVIDER_ID = "openrouter";
const OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1";
const OPENROUTER_DEFAULT_MAX_TOKENS = 8192;
const OPENROUTER_CACHE_TTL_MODEL_PREFIXES = [
  "anthropic/",
  "moonshot/",
  "moonshotai/",
  "zai/",
] as const;

function buildDynamicOpenRouterModel(
  ctx: ProviderResolveDynamicModelContext,
): ProviderRuntimeModel {
  const capabilities = getOpenRouterModelCapabilities(ctx.modelId);
  return {
    id: ctx.modelId,
    name: capabilities?.name ?? ctx.modelId,
    api: "openai-completions",
    provider: PROVIDER_ID,
    baseUrl: OPENROUTER_BASE_URL,
    reasoning: capabilities?.reasoning ?? false,
    input: capabilities?.input ?? ["text"],
    cost: capabilities?.cost ?? { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
    contextWindow: capabilities?.contextWindow ?? DEFAULT_CONTEXT_TOKENS,
    maxTokens: capabilities?.maxTokens ?? OPENROUTER_DEFAULT_MAX_TOKENS,
  };
}

function injectOpenRouterRouting(
  baseStreamFn: StreamFn | undefined,
  providerRouting?: Record<string, unknown>,
): StreamFn | undefined {
  if (!providerRouting) {
    return baseStreamFn;
  }
  return (model, context, options) =>
    (
      baseStreamFn ??
      ((nextModel, nextContext, nextOptions) => {
        throw new Error(
          `OpenRouter routing wrapper requires an underlying streamFn for ${String(nextModel.id)}.`,
        );
      })
    )(
      {
        ...model,
        compat: { ...model.compat, openRouterRouting: providerRouting },
      } as typeof model,
      context,
      options,
    );
}

function isOpenRouterCacheTtlModel(modelId: string): boolean {
  return OPENROUTER_CACHE_TTL_MODEL_PREFIXES.some((prefix) => modelId.startsWith(prefix));
}

const openRouterPlugin = {
  id: "openrouter",
  name: "OpenRouter Provider",
  description: "Bundled OpenRouter provider plugin",
  configSchema: emptyPluginConfigSchema(),
  register(api: OpenClawPluginApi) {
    api.registerProvider({
      id: PROVIDER_ID,
      label: "OpenRouter",
      docsPath: "/providers/models",
      envVars: ["OPENROUTER_API_KEY"],
      auth: [],
      catalog: {
        order: "simple",
        run: async (ctx) => {
          const apiKey = ctx.resolveProviderApiKey(PROVIDER_ID).apiKey;
          if (!apiKey) {
            return null;
          }
          return {
            provider: {
              ...buildOpenrouterProvider(),
              apiKey,
            },
          };
        },
      },
      resolveDynamicModel: (ctx) => buildDynamicOpenRouterModel(ctx),
      prepareDynamicModel: async (ctx) => {
        await loadOpenRouterModelCapabilities(ctx.modelId);
      },
      capabilities: {
        openAiCompatTurnValidation: false,
        geminiThoughtSignatureSanitization: true,
        geminiThoughtSignatureModelHints: ["gemini"],
      },
      wrapStreamFn: (ctx) => {
        let streamFn = ctx.streamFn;
        const providerRouting =
          ctx.extraParams?.provider != null && typeof ctx.extraParams.provider === "object"
            ? (ctx.extraParams.provider as Record<string, unknown>)
            : undefined;
        if (providerRouting) {
          streamFn = injectOpenRouterRouting(streamFn, providerRouting);
        }
        const skipReasoningInjection =
          ctx.modelId === "auto" || isProxyReasoningUnsupported(ctx.modelId);
        const openRouterThinkingLevel = skipReasoningInjection ? undefined : ctx.thinkingLevel;
        streamFn = createOpenRouterWrapper(streamFn, openRouterThinkingLevel);
        streamFn = createOpenRouterSystemCacheWrapper(streamFn);
        return streamFn;
      },
      isCacheTtlEligible: (ctx) => isOpenRouterCacheTtlModel(ctx.modelId),
    });
  },
};

export default openRouterPlugin;
@@ -3,6 +3,7 @@ import {
   emptyPluginConfigSchema,
   type OpenClawPluginApi,
   type ProviderAuthContext,
+  type ProviderCatalogContext,
 } from "openclaw/plugin-sdk/qwen-portal-auth";
 import { loginQwenPortalOAuth } from "./oauth.js";

@@ -12,7 +13,6 @@ const DEFAULT_MODEL = "qwen-portal/coder-model";
 const DEFAULT_BASE_URL = "https://portal.qwen.ai/v1";
 const DEFAULT_CONTEXT_WINDOW = 128000;
 const DEFAULT_MAX_TOKENS = 8192;
 const OAUTH_PLACEHOLDER = "qwen-oauth";

 function normalizeBaseUrl(value: string | undefined): string {
   const raw = value?.trim() || DEFAULT_BASE_URL;
@@ -36,6 +36,46 @@ function buildModelDefinition(params: {
   };
 }

+function buildProviderCatalog(params: { baseUrl: string; apiKey: string }) {
+  return {
+    baseUrl: params.baseUrl,
+    apiKey: params.apiKey,
+    api: "openai-completions" as const,
+    models: [
+      buildModelDefinition({
+        id: "coder-model",
+        name: "Qwen Coder",
+        input: ["text"],
+      }),
+      buildModelDefinition({
+        id: "vision-model",
+        name: "Qwen Vision",
+        input: ["text", "image"],
+      }),
+    ],
+  };
+}
+
+function resolveCatalog(ctx: ProviderCatalogContext) {
+  const explicitProvider = ctx.config.models?.providers?.[PROVIDER_ID];
+  const apiKey =
+    ctx.resolveProviderApiKey(PROVIDER_ID).apiKey ??
+    (typeof explicitProvider?.apiKey === "string" ? explicitProvider.apiKey.trim() : undefined);
+  if (!apiKey) {
+    return null;
+  }
+
+  const explicitBaseUrl =
+    typeof explicitProvider?.baseUrl === "string" ? explicitProvider.baseUrl : undefined;
+
+  return {
+    provider: buildProviderCatalog({
+      baseUrl: normalizeBaseUrl(explicitBaseUrl),
+      apiKey,
+    }),
+  };
+}
+
 const qwenPortalPlugin = {
   id: "qwen-portal-auth",
   name: "Qwen OAuth",
@@ -47,6 +87,9 @@ const qwenPortalPlugin = {
     label: PROVIDER_LABEL,
     docsPath: "/providers/qwen",
     aliases: ["qwen"],
+    catalog: {
+      run: async (ctx: ProviderCatalogContext) => resolveCatalog(ctx),
+    },
     auth: [
       {
         id: "device",
@@ -77,20 +120,7 @@ const qwenPortalPlugin = {
           providers: {
             [PROVIDER_ID]: {
               baseUrl,
               apiKey: OAUTH_PLACEHOLDER,
               api: "openai-completions",
-              models: [
-                buildModelDefinition({
-                  id: "coder-model",
-                  name: "Qwen Coder",
-                  input: ["text"],
-                }),
-                buildModelDefinition({
-                  id: "vision-model",
-                  name: "Qwen Vision",
-                  input: ["text", "image"],
-                }),
-              ],
+              models: [],
             },
           },
         },
@@ -61,21 +61,6 @@ function createOpenAITemplateModel(id: string): Model<Api> {
   } as Model<Api>;
 }

-function createOpenAICodexTemplateModel(id: string): Model<Api> {
-  return {
-    id,
-    name: id,
-    provider: "openai-codex",
-    api: "openai-codex-responses",
-    baseUrl: "https://chatgpt.com/backend-api",
-    input: ["text", "image"],
-    reasoning: true,
-    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
-    contextWindow: 272_000,
-    maxTokens: 128_000,
-  } as Model<Api>;
-}
-
 function createRegistry(models: Record<string, Model<Api>>): ModelRegistry {
   return {
     find(provider: string, modelId: string) {
@@ -451,18 +436,6 @@ describe("resolveForwardCompatModel", () => {
     expect(model?.maxTokens).toBe(128_000);
   });

-  it("resolves openai-codex gpt-5.4 via codex template fallback", () => {
-    const registry = createRegistry({
-      "openai-codex/gpt-5.2-codex": createOpenAICodexTemplateModel("gpt-5.2-codex"),
-    });
-    const model = resolveForwardCompatModel("openai-codex", "gpt-5.4", registry);
-    expectResolvedForwardCompat(model, { provider: "openai-codex", id: "gpt-5.4" });
-    expect(model?.api).toBe("openai-codex-responses");
-    expect(model?.baseUrl).toBe("https://chatgpt.com/backend-api");
-    expect(model?.contextWindow).toBe(1_050_000);
-    expect(model?.maxTokens).toBe(128_000);
-  });
-
   it("resolves anthropic opus 4.6 via 4.5 template", () => {
     const registry = createRegistry({
       "anthropic/claude-opus-4-5": createTemplateModel("anthropic", "claude-opus-4-5"),
@@ -11,16 +11,6 @@ const OPENAI_GPT_54_MAX_TOKENS = 128_000;
 const OPENAI_GPT_54_TEMPLATE_MODEL_IDS = ["gpt-5.2"] as const;
 const OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS = ["gpt-5.2-pro", "gpt-5.2"] as const;

-const OPENAI_CODEX_GPT_54_MODEL_ID = "gpt-5.4";
-const OPENAI_CODEX_GPT_54_CONTEXT_TOKENS = 1_050_000;
-const OPENAI_CODEX_GPT_54_MAX_TOKENS = 128_000;
-const OPENAI_CODEX_GPT_54_TEMPLATE_MODEL_IDS = ["gpt-5.3-codex", "gpt-5.2-codex"] as const;
-const OPENAI_CODEX_GPT_53_MODEL_ID = "gpt-5.3-codex";
-const OPENAI_CODEX_GPT_53_SPARK_MODEL_ID = "gpt-5.3-codex-spark";
-const OPENAI_CODEX_GPT_53_SPARK_CONTEXT_TOKENS = 128_000;
-const OPENAI_CODEX_GPT_53_SPARK_MAX_TOKENS = 128_000;
-const OPENAI_CODEX_TEMPLATE_MODEL_IDS = ["gpt-5.2-codex"] as const;
-
 const ANTHROPIC_OPUS_46_MODEL_ID = "claude-opus-4-6";
 const ANTHROPIC_OPUS_46_DOT_MODEL_ID = "claude-opus-4.6";
 const ANTHROPIC_OPUS_TEMPLATE_MODEL_IDS = ["claude-opus-4-5", "claude-opus-4.5"] as const;
@@ -114,79 +104,6 @@ function cloneFirstTemplateModel(params: {
   return undefined;
 }

-const CODEX_GPT54_ELIGIBLE_PROVIDERS = new Set(["openai-codex"]);
-const CODEX_GPT53_ELIGIBLE_PROVIDERS = new Set(["openai-codex", "github-copilot"]);
-
-function resolveOpenAICodexForwardCompatModel(
-  provider: string,
-  modelId: string,
-  modelRegistry: ModelRegistry,
-): Model<Api> | undefined {
-  const normalizedProvider = normalizeProviderId(provider);
-  const trimmedModelId = modelId.trim();
-  const lower = trimmedModelId.toLowerCase();
-
-  let templateIds: readonly string[];
-  let eligibleProviders: Set<string>;
-  let patch: Partial<Model<Api>> | undefined;
-  if (lower === OPENAI_CODEX_GPT_54_MODEL_ID) {
-    templateIds = OPENAI_CODEX_GPT_54_TEMPLATE_MODEL_IDS;
-    eligibleProviders = CODEX_GPT54_ELIGIBLE_PROVIDERS;
-    patch = {
-      contextWindow: OPENAI_CODEX_GPT_54_CONTEXT_TOKENS,
-      maxTokens: OPENAI_CODEX_GPT_54_MAX_TOKENS,
-    };
-  } else if (lower === OPENAI_CODEX_GPT_53_SPARK_MODEL_ID) {
-    templateIds = [OPENAI_CODEX_GPT_53_MODEL_ID, ...OPENAI_CODEX_TEMPLATE_MODEL_IDS];
-    eligibleProviders = CODEX_GPT54_ELIGIBLE_PROVIDERS;
-    patch = {
-      api: "openai-codex-responses",
-      provider: normalizedProvider,
-      baseUrl: "https://chatgpt.com/backend-api",
-      reasoning: true,
-      input: ["text"],
-      cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
-      contextWindow: OPENAI_CODEX_GPT_53_SPARK_CONTEXT_TOKENS,
-      maxTokens: OPENAI_CODEX_GPT_53_SPARK_MAX_TOKENS,
-    };
-  } else if (lower === OPENAI_CODEX_GPT_53_MODEL_ID) {
-    templateIds = OPENAI_CODEX_TEMPLATE_MODEL_IDS;
-    eligibleProviders = CODEX_GPT53_ELIGIBLE_PROVIDERS;
-  } else {
-    return undefined;
-  }
-
-  if (!eligibleProviders.has(normalizedProvider)) {
-    return undefined;
-  }
-
-  for (const templateId of templateIds) {
-    const template = modelRegistry.find(normalizedProvider, templateId) as Model<Api> | null;
-    if (!template) {
-      continue;
-    }
-    return normalizeModelCompat({
-      ...template,
-      id: trimmedModelId,
-      name: trimmedModelId,
-      ...patch,
-    } as Model<Api>);
-  }
-
-  return normalizeModelCompat({
-    id: trimmedModelId,
-    name: trimmedModelId,
-    api: "openai-codex-responses",
-    provider: normalizedProvider,
-    baseUrl: "https://chatgpt.com/backend-api",
-    reasoning: true,
-    input: ["text", "image"],
-    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
-    contextWindow: patch?.contextWindow ?? DEFAULT_CONTEXT_TOKENS,
-    maxTokens: patch?.maxTokens ?? DEFAULT_CONTEXT_TOKENS,
-  } as Model<Api>);
-}
-
 function resolveAnthropic46ForwardCompatModel(params: {
   provider: string;
   modelId: string;
@@ -348,7 +265,6 @@ export function resolveForwardCompatModel(
 ): Model<Api> | undefined {
   return (
     resolveOpenAIGpt54ForwardCompatModel(provider, modelId, modelRegistry) ??
-    resolveOpenAICodexForwardCompatModel(provider, modelId, modelRegistry) ??
     resolveAnthropicOpus46ForwardCompatModel(provider, modelId, modelRegistry) ??
     resolveAnthropicSonnet46ForwardCompatModel(provider, modelId, modelRegistry) ??
     resolveZaiGlm5ForwardCompatModel(provider, modelId, modelRegistry) ??
@@ -1,9 +1,5 @@
 import type { OpenClawConfig } from "../config/config.js";
 import { coerceSecretRef, resolveSecretInputRef } from "../config/types.secrets.js";
-import {
-  DEFAULT_COPILOT_API_BASE_URL,
-  resolveCopilotApiToken,
-} from "../providers/github-copilot-token.js";
 import { isRecord } from "../utils.js";
 import { normalizeOptionalSecretInput } from "../utils/normalize-secret-input.js";
 import { ensureAuthProfileStore, listProfilesForProvider } from "./auth-profiles.js";
@@ -32,8 +28,6 @@ import {
   buildModelStudioProvider,
   buildMoonshotProvider,
   buildNvidiaProvider,
-  buildOpenAICodexProvider,
-  buildOpenrouterProvider,
   buildQianfanProvider,
   buildQwenPortalProvider,
   buildSyntheticProvider,
@@ -60,6 +54,7 @@ import {
   groupPluginDiscoveryProvidersByOrder,
   normalizePluginDiscoveryResult,
   resolvePluginDiscoveryProviders,
+  runProviderCatalog,
 } from "../plugins/provider-discovery.js";
 import {
   MINIMAX_OAUTH_MARKER,
@@ -762,7 +757,6 @@ const SIMPLE_IMPLICIT_PROVIDER_LOADERS: ImplicitProviderLoader[] = [
       apiKey,
     };
   }),
-  withApiKey("openrouter", async ({ apiKey }) => ({ ...buildOpenrouterProvider(), apiKey })),
   withApiKey("nvidia", async ({ apiKey }) => ({ ...buildNvidiaProvider(), apiKey })),
   withApiKey("kilocode", async ({ apiKey }) => ({
     ...(await buildKilocodeProviderWithDiscovery()),
@@ -788,7 +782,6 @@ const PROFILE_IMPLICIT_PROVIDER_LOADERS: ImplicitProviderLoader[] = [
     ...buildQwenPortalProvider(),
     apiKey: QWEN_OAUTH_MARKER,
   })),
-  withProfilePresence("openai-codex", async () => buildOpenAICodexProvider()),
 ];

 const PAIRED_IMPLICIT_PROVIDER_LOADERS: ImplicitProviderLoader[] = [
@@ -868,7 +861,8 @@ async function resolvePluginImplicitProviders(
   const byOrder = groupPluginDiscoveryProvidersByOrder(providers);
   const discovered: Record<string, ProviderConfig> = {};
   for (const provider of byOrder[order]) {
-    const result = await provider.discovery?.run({
+    const result = await runProviderCatalog({
+      provider,
       config: ctx.config ?? {},
       agentDir: ctx.agentDir,
       workspaceDir: ctx.workspaceDir,
@@ -933,16 +927,6 @@ export async function resolveImplicitProviders(
   mergeImplicitProviderSet(providers, await resolveCloudflareAiGatewayImplicitProvider(context));
   mergeImplicitProviderSet(providers, await resolvePluginImplicitProviders(context, "late"));

-  if (!providers["github-copilot"]) {
-    const implicitCopilot = await resolveImplicitCopilotProvider({
-      agentDir: params.agentDir,
-      env,
-    });
-    if (implicitCopilot) {
-      providers["github-copilot"] = implicitCopilot;
-    }
-  }
-
   const implicitBedrock = await resolveImplicitBedrockProvider({
     agentDir: params.agentDir,
     config: params.config,
@@ -965,64 +949,6 @@ export async function resolveImplicitProviders(
   return providers;
 }

-export async function resolveImplicitCopilotProvider(params: {
-  agentDir: string;
-  env?: NodeJS.ProcessEnv;
-}): Promise<ProviderConfig | null> {
-  const env = params.env ?? process.env;
-  const authStore = ensureAuthProfileStore(params.agentDir, {
-    allowKeychainPrompt: false,
-  });
-  const hasProfile = listProfilesForProvider(authStore, "github-copilot").length > 0;
-  const envToken = env.COPILOT_GITHUB_TOKEN ?? env.GH_TOKEN ?? env.GITHUB_TOKEN;
-  const githubToken = (envToken ?? "").trim();
-
-  if (!hasProfile && !githubToken) {
-    return null;
-  }
-
-  let selectedGithubToken = githubToken;
-  if (!selectedGithubToken && hasProfile) {
-    // Use the first available profile as a default for discovery (it will be
-    // re-resolved per-run by the embedded runner).
-    const profileId = listProfilesForProvider(authStore, "github-copilot")[0];
-    const profile = profileId ? authStore.profiles[profileId] : undefined;
-    if (profile && profile.type === "token") {
-      selectedGithubToken = profile.token?.trim() ?? "";
-      if (!selectedGithubToken) {
-        const tokenRef = coerceSecretRef(profile.tokenRef);
-        if (tokenRef?.source === "env" && tokenRef.id.trim()) {
-          selectedGithubToken = (env[tokenRef.id] ?? process.env[tokenRef.id] ?? "").trim();
-        }
-      }
-    }
-  }
-
-  let baseUrl = DEFAULT_COPILOT_API_BASE_URL;
-  if (selectedGithubToken) {
-    try {
-      const token = await resolveCopilotApiToken({
-        githubToken: selectedGithubToken,
-        env,
-      });
-      baseUrl = token.baseUrl;
-    } catch {
-      baseUrl = DEFAULT_COPILOT_API_BASE_URL;
-    }
-  }
-
-  // We deliberately do not write pi-coding-agent auth.json here.
-  // OpenClaw keeps auth in auth-profiles and resolves runtime availability from that store.
-
-  // We intentionally do NOT define custom models for Copilot in models.json.
-  // pi-coding-agent treats providers with models as replacements requiring apiKey.
-  // We only override baseUrl; the model list comes from pi-ai built-ins.
-  return {
-    baseUrl,
-    models: [],
-  } satisfies ProviderConfig;
-}
-
 export async function resolveImplicitBedrockProvider(params: {
   agentDir: string;
   config?: OpenClawConfig;
@@ -1,6 +1,73 @@
 import type { StreamFn } from "@mariozechner/pi-agent-core";
 import type { Context, Model, SimpleStreamOptions } from "@mariozechner/pi-ai";
 import { describe, expect, it, vi } from "vitest";

+vi.mock("../plugins/provider-runtime.js", async (importOriginal) => {
+  const actual = await importOriginal<typeof import("../plugins/provider-runtime.js")>();
+  const {
+    createOpenRouterSystemCacheWrapper,
+    createOpenRouterWrapper,
+    isProxyReasoningUnsupported,
+  } = await import("./pi-embedded-runner/proxy-stream-wrappers.js");
+
+  return {
+    ...actual,
+    prepareProviderExtraParams: (params: {
+      provider: string;
+      context: { extraParams?: Record<string, unknown> };
+    }) => {
+      if (params.provider !== "openai-codex") {
+        return undefined;
+      }
+      const transport = params.context.extraParams?.transport;
+      if (transport === "auto" || transport === "sse" || transport === "websocket") {
+        return params.context.extraParams;
+      }
+      return {
+        ...params.context.extraParams,
+        transport: "auto",
+      };
+    },
+    wrapProviderStreamFn: (params: {
+      provider: string;
+      context: {
+        modelId: string;
+        thinkingLevel?: import("../auto-reply/thinking.js").ThinkLevel;
+        extraParams?: Record<string, unknown>;
+        streamFn?: StreamFn;
+      };
+    }) => {
+      if (params.provider !== "openrouter") {
+        return params.context.streamFn;
+      }
+
+      const providerRouting =
+        params.context.extraParams?.provider != null &&
+        typeof params.context.extraParams.provider === "object"
+          ? (params.context.extraParams.provider as Record<string, unknown>)
+          : undefined;
+      let streamFn = params.context.streamFn;
+      if (providerRouting) {
+        const underlying = streamFn;
+        streamFn = (model, context, options) =>
+          (underlying as StreamFn)(
+            {
+              ...model,
+              compat: { ...model.compat, openRouterRouting: providerRouting },
+            },
+            context,
+            options,
+          );
+      }
+
+      const skipReasoningInjection =
+        params.context.modelId === "auto" || isProxyReasoningUnsupported(params.context.modelId);
+      const thinkingLevel = skipReasoningInjection ? undefined : params.context.thinkingLevel;
+      return createOpenRouterSystemCacheWrapper(createOpenRouterWrapper(streamFn, thinkingLevel));
+    },
+  };
+});
+
 import { applyExtraParamsToAgent, resolveExtraParams } from "./pi-embedded-runner.js";
 import { log } from "./pi-embedded-runner/logger.js";

@@ -38,6 +38,30 @@ vi.mock("../providers/github-copilot-token.js", () => ({
   resolveCopilotApiToken: (...args: unknown[]) => resolveCopilotApiTokenMock(...args),
 }));

+vi.mock("../plugins/provider-runtime.js", async (importOriginal) => {
+  const actual = await importOriginal<typeof import("../plugins/provider-runtime.js")>();
+  return {
+    ...actual,
+    prepareProviderRuntimeAuth: async (params: {
+      provider: string;
+      context: { apiKey: string; env: NodeJS.ProcessEnv };
+    }) => {
+      if (params.provider !== "github-copilot") {
+        return undefined;
+      }
+      const token = await resolveCopilotApiTokenMock({
+        githubToken: params.context.apiKey,
+        env: params.context.env,
+      });
+      return {
+        apiKey: token.token,
+        baseUrl: token.baseUrl,
+        expiresAt: token.expiresAt,
+      };
+    },
+  };
+});
+
 vi.mock("./pi-embedded-runner/compact.js", () => ({
   compactEmbeddedPiSessionDirect: vi.fn(async () => {
     throw new Error("compact should not run in auth profile rotation tests");
@@ -1,4 +1,16 @@
-import { describe, expect, it } from "vitest";
+import { describe, expect, it, vi } from "vitest";
+
+vi.mock("../../plugins/provider-runtime.js", () => ({
+  resolveProviderCacheTtlEligibility: (params: {
+    context: { provider: string; modelId: string };
+  }) =>
+    params.context.provider === "openrouter"
+      ? ["anthropic/", "moonshot/", "moonshotai/", "zai/"].some((prefix) =>
+          params.context.modelId.startsWith(prefix),
+        )
+      : undefined,
+}));

 import { isCacheTtlEligibleProvider } from "./cache-ttl.js";

 describe("isCacheTtlEligibleProvider", () => {
@@ -1,3 +1,5 @@
+import { resolveProviderCacheTtlEligibility } from "../../plugins/provider-runtime.js";
+
 type CustomEntryLike = { type?: unknown; customType?: unknown; data?: unknown };

 export const CACHE_TTL_CUSTOM_TYPE = "openclaw.cache-ttl";
@@ -9,24 +11,21 @@ export type CacheTtlEntryData = {
 };

 const CACHE_TTL_NATIVE_PROVIDERS = new Set(["anthropic", "moonshot", "zai"]);
-const OPENROUTER_CACHE_TTL_MODEL_PREFIXES = [
-  "anthropic/",
-  "moonshot/",
-  "moonshotai/",
-  "zai/",
-] as const;
-
-function isOpenRouterCacheTtlModel(modelId: string): boolean {
-  return OPENROUTER_CACHE_TTL_MODEL_PREFIXES.some((prefix) => modelId.startsWith(prefix));
-}

 export function isCacheTtlEligibleProvider(provider: string, modelId: string): boolean {
   const normalizedProvider = provider.toLowerCase();
   const normalizedModelId = modelId.toLowerCase();
-  if (CACHE_TTL_NATIVE_PROVIDERS.has(normalizedProvider)) {
-    return true;
+  const pluginEligibility = resolveProviderCacheTtlEligibility({
+    provider: normalizedProvider,
+    context: {
+      provider: normalizedProvider,
+      modelId: normalizedModelId,
+    },
+  });
+  if (pluginEligibility !== undefined) {
+    return pluginEligibility;
   }
-  if (normalizedProvider === "openrouter" && isOpenRouterCacheTtlModel(normalizedModelId)) {
+  if (CACHE_TTL_NATIVE_PROVIDERS.has(normalizedProvider)) {
     return true;
   }
   if (normalizedProvider === "kilocode" && normalizedModelId.startsWith("anthropic/")) {
@@ -23,6 +23,7 @@ import { getMachineDisplayName } from "../../infra/machine-name.js";
 import { generateSecureToken } from "../../infra/secure-random.js";
 import { getMemorySearchManager } from "../../memory/index.js";
 import { getGlobalHookRunner } from "../../plugins/hook-runner-global.js";
+import { prepareProviderRuntimeAuth } from "../../plugins/provider-runtime.js";
 import { type enqueueCommand, enqueueCommandInLane } from "../../process/command-queue.js";
 import { isCronSessionKey, isSubagentSessionKey } from "../../routing/session-key.js";
 import { emitSessionTranscriptUpdate } from "../../sessions/transcript-events.js";
@@ -434,10 +435,11 @@ export async function compactEmbeddedPiSessionDirect(
     const reason = error ?? `Unknown model: ${provider}/${modelId}`;
     return fail(reason);
   }
+  let runtimeModel = model;
   let apiKeyInfo: Awaited<ReturnType<typeof getApiKeyForModel>> | null = null;
   try {
     apiKeyInfo = await getApiKeyForModel({
-      model,
+      model: runtimeModel,
       cfg: params.config,
       profileId: authProfileId,
       agentDir,
@@ -446,17 +448,36 @@
     if (!apiKeyInfo.apiKey) {
       if (apiKeyInfo.mode !== "aws-sdk") {
         throw new Error(
-          `No API key resolved for provider "${model.provider}" (auth mode: ${apiKeyInfo.mode}).`,
+          `No API key resolved for provider "${runtimeModel.provider}" (auth mode: ${apiKeyInfo.mode}).`,
         );
       }
-    } else if (model.provider === "github-copilot") {
-      const { resolveCopilotApiToken } = await import("../../providers/github-copilot-token.js");
-      const copilotToken = await resolveCopilotApiToken({
-        githubToken: apiKeyInfo.apiKey,
-      });
-      authStorage.setRuntimeApiKey(model.provider, copilotToken.token);
     } else {
-      authStorage.setRuntimeApiKey(model.provider, apiKeyInfo.apiKey);
+      const preparedAuth = await prepareProviderRuntimeAuth({
+        provider: runtimeModel.provider,
+        config: params.config,
+        workspaceDir: resolvedWorkspace,
+        env: process.env,
+        context: {
+          config: params.config,
+          agentDir,
+          workspaceDir: resolvedWorkspace,
+          env: process.env,
+          provider: runtimeModel.provider,
+          modelId,
+          model: runtimeModel,
+          apiKey: apiKeyInfo.apiKey,
+          authMode: apiKeyInfo.mode,
+          profileId: apiKeyInfo.profileId,
+        },
+      });
+      if (preparedAuth?.baseUrl) {
+        runtimeModel = { ...runtimeModel, baseUrl: preparedAuth.baseUrl };
+      }
+      const runtimeApiKey = preparedAuth?.apiKey ?? apiKeyInfo.apiKey;
+      if (!runtimeApiKey) {
+        throw new Error(`Provider "${runtimeModel.provider}" runtime auth returned no apiKey.`);
+      }
+      authStorage.setRuntimeApiKey(runtimeModel.provider, runtimeApiKey);
     }
   } catch (err) {
     const reason = describeUnknownError(err);
@@ -521,13 +542,13 @@
     cfg: params.config,
     provider,
     modelId,
-    modelContextWindow: model.contextWindow,
+    modelContextWindow: runtimeModel.contextWindow,
     defaultTokens: DEFAULT_CONTEXT_TOKENS,
   });
   const effectiveModel = applyLocalNoAuthHeaderOverride(
-    ctxInfo.tokens < (model.contextWindow ?? Infinity)
-      ? { ...model, contextWindow: ctxInfo.tokens }
-      : model,
+    ctxInfo.tokens < (runtimeModel.contextWindow ?? Infinity)
+      ? { ...runtimeModel, contextWindow: ctxInfo.tokens }
+      : runtimeModel,
     apiKeyInfo,
   );

@@ -557,7 +578,7 @@
     modelAuthMode: resolveModelAuthMode(model.provider, params.config),
   });
   const tools = sanitizeToolsForGoogle({
-    tools: supportsModelTools(model) ? toolsRaw : [],
+    tools: supportsModelTools(runtimeModel) ? toolsRaw : [],
     provider,
   });
   const allowedToolNames = collectAllowedToolNames({ tools });
@@ -3,6 +3,10 @@ import type { SimpleStreamOptions } from "@mariozechner/pi-ai";
 import { streamSimple } from "@mariozechner/pi-ai";
 import type { ThinkLevel } from "../../auto-reply/thinking.js";
 import type { OpenClawConfig } from "../../config/config.js";
+import {
+  prepareProviderExtraParams,
+  wrapProviderStreamFn,
+} from "../../plugins/provider-runtime.js";
 import {
   createAnthropicBetaHeadersWrapper,
   createAnthropicFastModeWrapper,
@@ -22,7 +26,6 @@ import {
   shouldApplySiliconFlowThinkingOffCompat,
 } from "./moonshot-stream-wrappers.js";
 import {
-  createCodexDefaultTransportWrapper,
   createOpenAIDefaultTransportWrapper,
   createOpenAIFastModeWrapper,
   createOpenAIResponsesContextManagementWrapper,
@@ -30,12 +33,7 @@ import {
   resolveOpenAIFastMode,
   resolveOpenAIServiceTier,
 } from "./openai-stream-wrappers.js";
-import {
-  createKilocodeWrapper,
-  createOpenRouterSystemCacheWrapper,
-  createOpenRouterWrapper,
-  isProxyReasoningUnsupported,
-} from "./proxy-stream-wrappers.js";
+import { createKilocodeWrapper, isProxyReasoningUnsupported } from "./proxy-stream-wrappers.js";

 /**
  * Resolve provider-specific extra params from model config.
@@ -111,39 +109,15 @@ function createStreamFnWithExtraParams(
     streamParams.cacheRetention = cacheRetention;
   }

-  // Extract OpenRouter provider routing preferences from extraParams.provider.
-  // Injected into model.compat.openRouterRouting so pi-ai's buildParams sets
-  // params.provider in the API request body (openai-completions.js L359-362).
-  // pi-ai's OpenRouterRouting type only declares { only?, order? }, but at
-  // runtime the full object is forwarded — enabling allow_fallbacks,
-  // data_collection, ignore, sort, quantizations, etc.
-  const providerRouting =
-    provider === "openrouter" &&
-    extraParams.provider != null &&
-    typeof extraParams.provider === "object"
-      ? (extraParams.provider as Record<string, unknown>)
-      : undefined;
-
-  if (Object.keys(streamParams).length === 0 && !providerRouting) {
+  if (Object.keys(streamParams).length === 0) {
     return undefined;
   }

   log.debug(`creating streamFn wrapper with params: ${JSON.stringify(streamParams)}`);
-  if (providerRouting) {
-    log.debug(`OpenRouter provider routing: ${JSON.stringify(providerRouting)}`);
-  }

   const underlying = baseStreamFn ?? streamSimple;
   const wrappedStreamFn: StreamFn = (model, context, options) => {
-    // When provider routing is configured, inject it into model.compat so
-    // pi-ai picks it up via model.compat.openRouterRouting.
-    const effectiveModel = providerRouting
-      ? ({
-          ...model,
-          compat: { ...model.compat, openRouterRouting: providerRouting },
-        } as unknown as typeof model)
-      : model;
-    return underlying(effectiveModel, context, {
+    return underlying(model, context, {
       ...streamParams,
       ...options,
     });
@@ -342,13 +316,6 @@ export function applyExtraParamsToAgent(
     modelId,
     agentId,
   });
-  if (provider === "openai-codex") {
-    // Default Codex to WebSocket-first when nothing else specifies transport.
-    agent.streamFn = createCodexDefaultTransportWrapper(agent.streamFn);
-  } else if (provider === "openai") {
-    // Default OpenAI Responses to WebSocket-first with transparent SSE fallback.
-    agent.streamFn = createOpenAIDefaultTransportWrapper(agent.streamFn);
-  }
   const override =
     extraParamsOverride && Object.keys(extraParamsOverride).length > 0
       ? Object.fromEntries(
@@ -356,14 +323,35 @@
         )
       : undefined;
   const merged = Object.assign({}, resolvedExtraParams, override);
|
||||
const wrappedStreamFn = createStreamFnWithExtraParams(agent.streamFn, merged, provider);
|
||||
const effectiveExtraParams =
|
||||
prepareProviderExtraParams({
|
||||
provider,
|
||||
config: cfg,
|
||||
context: {
|
||||
config: cfg,
|
||||
provider,
|
||||
modelId,
|
||||
extraParams: merged,
|
||||
thinkingLevel,
|
||||
},
|
||||
}) ?? merged;
|
||||
|
||||
if (provider === "openai") {
|
||||
// Default OpenAI Responses to WebSocket-first with transparent SSE fallback.
|
||||
agent.streamFn = createOpenAIDefaultTransportWrapper(agent.streamFn);
|
||||
}
|
||||
const wrappedStreamFn = createStreamFnWithExtraParams(
|
||||
agent.streamFn,
|
||||
effectiveExtraParams,
|
||||
provider,
|
||||
);
|
||||
|
||||
if (wrappedStreamFn) {
|
||||
log.debug(`applying extraParams to agent streamFn for ${provider}/${modelId}`);
|
||||
agent.streamFn = wrappedStreamFn;
|
||||
}
|
||||
|
||||
const anthropicBetas = resolveAnthropicBetas(merged, provider, modelId);
|
||||
const anthropicBetas = resolveAnthropicBetas(effectiveExtraParams, provider, modelId);
|
||||
if (anthropicBetas?.length) {
|
||||
log.debug(
|
||||
`applying Anthropic beta header for ${provider}/${modelId}: ${anthropicBetas.join(",")}`,
|
||||
@@ -380,7 +368,7 @@ export function applyExtraParamsToAgent(
|
||||
|
||||
if (shouldApplyMoonshotPayloadCompat({ provider, modelId })) {
|
||||
const moonshotThinkingType = resolveMoonshotThinkingType({
|
||||
configuredThinking: merged?.thinking,
|
||||
configuredThinking: effectiveExtraParams?.thinking,
|
||||
thinkingLevel,
|
||||
});
|
||||
if (moonshotThinkingType) {
|
||||
@@ -392,25 +380,19 @@ export function applyExtraParamsToAgent(
|
||||
}
|
||||
|
||||
agent.streamFn = createAnthropicToolPayloadCompatibilityWrapper(agent.streamFn);
|
||||
|
||||
if (provider === "openrouter") {
|
||||
log.debug(`applying OpenRouter app attribution headers for ${provider}/${modelId}`);
|
||||
// "auto" is a dynamic routing model — we don't know which underlying model
|
||||
// OpenRouter will select, and it may be a reasoning-required endpoint.
|
||||
// Omit the thinkingLevel so we never inject `reasoning.effort: "none"`,
|
||||
// which would cause a 400 on models where reasoning is mandatory.
|
||||
// Users who need reasoning control should target a specific model ID.
|
||||
// See: openclaw/openclaw#24851
|
||||
//
|
||||
// x-ai/grok models do not support OpenRouter's reasoning.effort parameter
|
||||
// and reject payloads containing it with "Invalid arguments passed to the
|
||||
// model." Skip reasoning injection for these models.
|
||||
// See: openclaw/openclaw#32039
|
||||
const skipReasoningInjection = modelId === "auto" || isProxyReasoningUnsupported(modelId);
|
||||
const openRouterThinkingLevel = skipReasoningInjection ? undefined : thinkingLevel;
|
||||
agent.streamFn = createOpenRouterWrapper(agent.streamFn, openRouterThinkingLevel);
|
||||
agent.streamFn = createOpenRouterSystemCacheWrapper(agent.streamFn);
|
||||
}
|
||||
agent.streamFn =
|
||||
wrapProviderStreamFn({
|
||||
provider,
|
||||
config: cfg,
|
||||
context: {
|
||||
config: cfg,
|
||||
provider,
|
||||
modelId,
|
||||
extraParams: effectiveExtraParams,
|
||||
thinkingLevel,
|
||||
streamFn: agent.streamFn,
|
||||
},
|
||||
}) ?? agent.streamFn;
|
||||
|
||||
if (provider === "kilocode") {
|
||||
log.debug(`applying Kilocode feature header for ${provider}/${modelId}`);
|
||||
@@ -430,7 +412,7 @@ export function applyExtraParamsToAgent(
|
||||
// Enable Z.AI tool_stream for real-time tool call streaming.
|
||||
// Enabled by default for Z.AI provider, can be disabled via params.tool_stream: false
|
||||
if (provider === "zai" || provider === "z-ai") {
|
||||
const toolStreamEnabled = merged?.tool_stream !== false;
|
||||
const toolStreamEnabled = effectiveExtraParams?.tool_stream !== false;
|
||||
if (toolStreamEnabled) {
|
||||
log.debug(`enabling Z.AI tool_stream for ${provider}/${modelId}`);
|
||||
agent.streamFn = createZaiToolStreamWrapper(agent.streamFn, true);
|
||||
@@ -441,19 +423,19 @@ export function applyExtraParamsToAgent(
|
||||
// upstream model-ID heuristics for Gemini 3.1 variants.
|
||||
agent.streamFn = createGoogleThinkingPayloadWrapper(agent.streamFn, thinkingLevel);
|
||||
|
||||
const anthropicFastMode = resolveAnthropicFastMode(merged);
|
||||
const anthropicFastMode = resolveAnthropicFastMode(effectiveExtraParams);
|
||||
if (anthropicFastMode !== undefined) {
|
||||
log.debug(`applying Anthropic fast mode=${anthropicFastMode} for ${provider}/${modelId}`);
|
||||
agent.streamFn = createAnthropicFastModeWrapper(agent.streamFn, anthropicFastMode);
|
||||
}
|
||||
|
||||
const openAIFastMode = resolveOpenAIFastMode(merged);
|
||||
const openAIFastMode = resolveOpenAIFastMode(effectiveExtraParams);
|
||||
if (openAIFastMode) {
|
||||
log.debug(`applying OpenAI fast mode for ${provider}/${modelId}`);
|
||||
agent.streamFn = createOpenAIFastModeWrapper(agent.streamFn);
|
||||
}
|
||||
|
||||
const openAIServiceTier = resolveOpenAIServiceTier(merged);
|
||||
const openAIServiceTier = resolveOpenAIServiceTier(effectiveExtraParams);
|
||||
if (openAIServiceTier) {
|
||||
log.debug(`applying OpenAI service_tier=${openAIServiceTier} for ${provider}/${modelId}`);
|
||||
agent.streamFn = createOpenAIServiceTierWrapper(agent.streamFn, openAIServiceTier);
|
||||
@@ -462,7 +444,10 @@ export function applyExtraParamsToAgent(
|
||||
// Work around upstream pi-ai hardcoding `store: false` for Responses API.
|
||||
// Force `store=true` for direct OpenAI Responses models and auto-enable
|
||||
// server-side compaction for compatible OpenAI Responses payloads.
|
||||
agent.streamFn = createOpenAIResponsesContextManagementWrapper(agent.streamFn, merged);
|
||||
agent.streamFn = createOpenAIResponsesContextManagementWrapper(
|
||||
agent.streamFn,
|
||||
effectiveExtraParams,
|
||||
);
|
||||
|
||||
const rawParallelToolCalls = resolveAliasedParamValue(
|
||||
[resolvedExtraParams, override],
|
||||
|
||||
@@ -2,8 +2,6 @@ import type { Api, Model } from "@mariozechner/pi-ai";
import { normalizeModelCompat } from "../model-compat.js";
import { normalizeProviderId } from "../model-selection.js";

const OPENAI_CODEX_BASE_URL = "https://chatgpt.com/backend-api";

function isOpenAIApiBaseUrl(baseUrl?: string): boolean {
const trimmed = baseUrl?.trim();
if (!trimmed) {
@@ -12,48 +10,6 @@ function isOpenAIApiBaseUrl(baseUrl?: string): boolean {
return /^https?:\/\/api\.openai\.com(?:\/v1)?\/?$/i.test(trimmed);
}

function isOpenAICodexBaseUrl(baseUrl?: string): boolean {
const trimmed = baseUrl?.trim();
if (!trimmed) {
return false;
}
return /^https?:\/\/chatgpt\.com\/backend-api\/?$/i.test(trimmed);
}

function normalizeOpenAICodexTransport(params: {
provider: string;
model: Model<Api>;
}): Model<Api> {
if (normalizeProviderId(params.provider) !== "openai-codex") {
return params.model;
}

const useCodexTransport =
!params.model.baseUrl ||
isOpenAIApiBaseUrl(params.model.baseUrl) ||
isOpenAICodexBaseUrl(params.model.baseUrl);

const nextApi =
useCodexTransport && params.model.api === "openai-responses"
? ("openai-codex-responses" as const)
: params.model.api;
const nextBaseUrl =
nextApi === "openai-codex-responses" &&
(!params.model.baseUrl || isOpenAIApiBaseUrl(params.model.baseUrl))
? OPENAI_CODEX_BASE_URL
: params.model.baseUrl;

if (nextApi === params.model.api && nextBaseUrl === params.model.baseUrl) {
return params.model;
}

return {
...params.model,
api: nextApi,
baseUrl: nextBaseUrl,
} as Model<Api>;
}

function normalizeOpenAITransport(params: { provider: string; model: Model<Api> }): Model<Api> {
if (normalizeProviderId(params.provider) !== "openai") {
return params.model;
@@ -73,14 +29,16 @@ function normalizeOpenAITransport(params: { provider: string; model: Model<Api>
} as Model<Api>;
}

export function applyBuiltInResolvedProviderTransportNormalization(params: {
provider: string;
model: Model<Api>;
}): Model<Api> {
return normalizeOpenAITransport(params);
}

export function normalizeResolvedProviderModel(params: {
provider: string;
model: Model<Api>;
}): Model<Api> {
const normalizedOpenAI = normalizeOpenAITransport(params);
const normalizedCodex = normalizeOpenAICodexTransport({
provider: params.provider,
model: normalizedOpenAI,
});
return normalizeModelCompat(normalizedCodex);
return normalizeModelCompat(applyBuiltInResolvedProviderTransportNormalization(params));
}

@@ -2,10 +2,17 @@ import type { Api, Model } from "@mariozechner/pi-ai";
import type { AuthStorage, ModelRegistry } from "@mariozechner/pi-coding-agent";
import type { OpenClawConfig } from "../../config/config.js";
import type { ModelDefinitionConfig } from "../../config/types.js";
import {
prepareProviderDynamicModel,
resolveProviderRuntimePlugin,
runProviderDynamicModel,
normalizeProviderResolvedModelWithPlugin,
} from "../../plugins/provider-runtime.js";
import { resolveOpenClawAgentDir } from "../agent-paths.js";
import { DEFAULT_CONTEXT_TOKENS } from "../defaults.js";
import { buildModelAliasLines } from "../model-alias-lines.js";
import { isSecretRefHeaderValueMarker } from "../model-auth-markers.js";
import { normalizeModelCompat } from "../model-compat.js";
import { resolveForwardCompatModel } from "../model-forward-compat.js";
import { findNormalizedProviderValue, normalizeProviderId } from "../model-selection.js";
import {
@@ -14,10 +21,6 @@ import {
} from "../model-suppression.js";
import { discoverAuthStorage, discoverModels } from "../pi-model-discovery.js";
import { normalizeResolvedProviderModel } from "./model.provider-normalization.js";
import {
getOpenRouterModelCapabilities,
loadOpenRouterModelCapabilities,
} from "./openrouter-model-capabilities.js";

type InlineModelEntry = ModelDefinitionConfig & {
provider: string;
@@ -51,7 +54,26 @@ function sanitizeModelHeaders(
return Object.keys(next).length > 0 ? next : undefined;
}

function normalizeResolvedModel(params: { provider: string; model: Model<Api> }): Model<Api> {
function normalizeResolvedModel(params: {
provider: string;
model: Model<Api>;
cfg?: OpenClawConfig;
agentDir?: string;
}): Model<Api> {
const pluginNormalized = normalizeProviderResolvedModelWithPlugin({
provider: params.provider,
config: params.cfg,
context: {
config: params.cfg,
agentDir: params.agentDir,
provider: params.provider,
modelId: params.model.id,
model: params.model,
},
});
if (pluginNormalized) {
return normalizeModelCompat(pluginNormalized);
}
return normalizeResolvedProviderModel(params);
}

@@ -165,8 +187,9 @@ function resolveExplicitModelWithRegistry(params: {
modelId: string;
modelRegistry: ModelRegistry;
cfg?: OpenClawConfig;
agentDir?: string;
}): { kind: "resolved"; model: Model<Api> } | { kind: "suppressed" } | undefined {
const { provider, modelId, modelRegistry, cfg } = params;
const { provider, modelId, modelRegistry, cfg, agentDir } = params;
if (shouldSuppressBuiltInModel({ provider, id: modelId })) {
return { kind: "suppressed" };
}
@@ -178,6 +201,8 @@ function resolveExplicitModelWithRegistry(params: {
kind: "resolved",
model: normalizeResolvedModel({
provider,
cfg,
agentDir,
model: applyConfiguredProviderOverrides({
discoveredModel: model,
providerConfig,
@@ -196,7 +221,12 @@ function resolveExplicitModelWithRegistry(params: {
if (inlineMatch?.api) {
return {
kind: "resolved",
model: normalizeResolvedModel({ provider, model: inlineMatch as Model<Api> }),
model: normalizeResolvedModel({
provider,
cfg,
agentDir,
model: inlineMatch as Model<Api>,
}),
};
}

@@ -208,6 +238,8 @@ function resolveExplicitModelWithRegistry(params: {
kind: "resolved",
model: normalizeResolvedModel({
provider,
cfg,
agentDir,
model: applyConfiguredProviderOverrides({
discoveredModel: forwardCompat,
providerConfig,
@@ -225,6 +257,7 @@ export function resolveModelWithRegistry(params: {
modelId: string;
modelRegistry: ModelRegistry;
cfg?: OpenClawConfig;
agentDir?: string;
}): Model<Api> | undefined {
const explicitModel = resolveExplicitModelWithRegistry(params);
if (explicitModel?.kind === "suppressed") {
@@ -234,31 +267,26 @@ export function resolveModelWithRegistry(params: {
return explicitModel.model;
}

const { provider, modelId, cfg } = params;
const normalizedProvider = normalizeProviderId(provider);
const { provider, modelId, cfg, modelRegistry, agentDir } = params;
const providerConfig = resolveConfiguredProviderConfig(cfg, provider);

// OpenRouter is a pass-through proxy - any model ID available on OpenRouter
// should work without being pre-registered in the local catalog.
// Try to fetch actual capabilities from the OpenRouter API so that new models
// (not yet in the static pi-ai snapshot) get correct image/reasoning support.
if (normalizedProvider === "openrouter") {
const capabilities = getOpenRouterModelCapabilities(modelId);
const pluginDynamicModel = runProviderDynamicModel({
provider,
config: cfg,
context: {
config: cfg,
agentDir,
provider,
modelId,
modelRegistry,
providerConfig,
},
});
if (pluginDynamicModel) {
return normalizeResolvedModel({
provider,
model: {
id: modelId,
name: capabilities?.name ?? modelId,
api: "openai-completions",
provider,
baseUrl: "https://openrouter.ai/api/v1",
reasoning: capabilities?.reasoning ?? false,
input: capabilities?.input ?? ["text"],
cost: capabilities?.cost ?? { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: capabilities?.contextWindow ?? DEFAULT_CONTEXT_TOKENS,
// Align with OPENROUTER_DEFAULT_MAX_TOKENS in models-config.providers.ts
maxTokens: capabilities?.maxTokens ?? 8192,
} as Model<Api>,
cfg,
agentDir,
model: pluginDynamicModel,
});
}

@@ -272,6 +300,8 @@ export function resolveModelWithRegistry(params: {
if (providerConfig || modelId.startsWith("mock-")) {
return normalizeResolvedModel({
provider,
cfg,
agentDir,
model: {
id: modelId,
name: modelId,
@@ -312,7 +342,13 @@ export function resolveModel(
const resolvedAgentDir = agentDir ?? resolveOpenClawAgentDir();
const authStorage = discoverAuthStorage(resolvedAgentDir);
const modelRegistry = discoverModels(authStorage, resolvedAgentDir);
const model = resolveModelWithRegistry({ provider, modelId, modelRegistry, cfg });
const model = resolveModelWithRegistry({
provider,
modelId,
modelRegistry,
cfg,
agentDir: resolvedAgentDir,
});
if (model) {
return { model, authStorage, modelRegistry };
}
@@ -338,7 +374,13 @@ export async function resolveModelAsync(
const resolvedAgentDir = agentDir ?? resolveOpenClawAgentDir();
const authStorage = discoverAuthStorage(resolvedAgentDir);
const modelRegistry = discoverModels(authStorage, resolvedAgentDir);
const explicitModel = resolveExplicitModelWithRegistry({ provider, modelId, modelRegistry, cfg });
const explicitModel = resolveExplicitModelWithRegistry({
provider,
modelId,
modelRegistry,
cfg,
agentDir: resolvedAgentDir,
});
if (explicitModel?.kind === "suppressed") {
return {
error: buildUnknownModelError(provider, modelId),
@@ -346,13 +388,36 @@ export async function resolveModelAsync(
modelRegistry,
};
}
if (!explicitModel && normalizeProviderId(provider) === "openrouter") {
await loadOpenRouterModelCapabilities(modelId);
if (!explicitModel) {
const providerPlugin = resolveProviderRuntimePlugin({
provider,
config: cfg,
});
if (providerPlugin?.prepareDynamicModel) {
await prepareProviderDynamicModel({
provider,
config: cfg,
context: {
config: cfg,
agentDir: resolvedAgentDir,
provider,
modelId,
modelRegistry,
providerConfig: resolveConfiguredProviderConfig(cfg, provider),
},
});
}
}
const model =
explicitModel?.kind === "resolved"
? explicitModel.model
: resolveModelWithRegistry({ provider, modelId, modelRegistry, cfg });
: resolveModelWithRegistry({
provider,
modelId,
modelRegistry,
cfg,
agentDir: resolvedAgentDir,
});
if (model) {
return { model, authStorage, modelRegistry };
}

@@ -8,6 +8,7 @@ import {
import { computeBackoff, sleepWithAbort, type BackoffPolicy } from "../../infra/backoff.js";
import { generateSecureToken } from "../../infra/secure-random.js";
import { getGlobalHookRunner } from "../../plugins/hook-runner-global.js";
import { prepareProviderRuntimeAuth } from "../../plugins/provider-runtime.js";
import type { PluginHookBeforeAgentStartResult } from "../../plugins/types.js";
import { enqueueCommandInLane } from "../../process/command-queue.js";
import { isMarkdownCapableMessageChannel } from "../../utils/message-channel.js";
@@ -80,16 +81,18 @@ import { describeUnknownError } from "./utils.js";

type ApiKeyInfo = ResolvedProviderAuth;

type CopilotTokenState = {
githubToken: string;
expiresAt: number;
type RuntimeAuthState = {
sourceApiKey: string;
authMode: string;
profileId?: string;
expiresAt?: number;
refreshTimer?: ReturnType<typeof setTimeout>;
refreshInFlight?: Promise<void>;
};

const COPILOT_REFRESH_MARGIN_MS = 5 * 60 * 1000;
const COPILOT_REFRESH_RETRY_MS = 60 * 1000;
const COPILOT_REFRESH_MIN_DELAY_MS = 5 * 1000;
const RUNTIME_AUTH_REFRESH_MARGIN_MS = 5 * 60 * 1000;
const RUNTIME_AUTH_REFRESH_RETRY_MS = 60 * 1000;
const RUNTIME_AUTH_REFRESH_MIN_DELAY_MS = 5 * 1000;
// Keep overload pacing noticeable enough to avoid tight retry bursts, but short
// enough that fallback still feels responsive within a single turn.
const OVERLOAD_FAILOVER_BACKOFF_POLICY: BackoffPolicy = {
@@ -380,20 +383,21 @@ export async function runEmbeddedPiAgent(
model: modelId,
});
}
let runtimeModel = model;

const ctxInfo = resolveContextWindowInfo({
cfg: params.config,
provider,
modelId,
modelContextWindow: model.contextWindow,
modelContextWindow: runtimeModel.contextWindow,
defaultTokens: DEFAULT_CONTEXT_TOKENS,
});
// Apply contextTokens cap to model so pi-coding-agent's auto-compaction
// threshold uses the effective limit, not the native context window.
const effectiveModel =
ctxInfo.tokens < (model.contextWindow ?? Infinity)
? { ...model, contextWindow: ctxInfo.tokens }
: model;
let effectiveModel =
ctxInfo.tokens < (runtimeModel.contextWindow ?? Infinity)
? { ...runtimeModel, contextWindow: ctxInfo.tokens }
: runtimeModel;
const ctxGuard = evaluateContextWindowGuard({
info: ctxInfo,
warnBelowTokens: CONTEXT_WINDOW_WARN_BELOW_TOKENS,
@@ -447,103 +451,142 @@ export async function runEmbeddedPiAgent(
const attemptedThinking = new Set<ThinkLevel>();
let apiKeyInfo: ApiKeyInfo | null = null;
let lastProfileId: string | undefined;
const copilotTokenState: CopilotTokenState | null =
model.provider === "github-copilot" ? { githubToken: "", expiresAt: 0 } : null;
let copilotRefreshCancelled = false;
const hasCopilotGithubToken = () => Boolean(copilotTokenState?.githubToken.trim());
let runtimeAuthState: RuntimeAuthState | null = null;
let runtimeAuthRefreshCancelled = false;
const hasRefreshableRuntimeAuth = () => Boolean(runtimeAuthState?.sourceApiKey.trim());

const clearCopilotRefreshTimer = () => {
if (!copilotTokenState?.refreshTimer) {
const clearRuntimeAuthRefreshTimer = () => {
if (!runtimeAuthState?.refreshTimer) {
return;
}
clearTimeout(copilotTokenState.refreshTimer);
copilotTokenState.refreshTimer = undefined;
clearTimeout(runtimeAuthState.refreshTimer);
runtimeAuthState.refreshTimer = undefined;
};

const stopCopilotRefreshTimer = () => {
if (!copilotTokenState) {
const stopRuntimeAuthRefreshTimer = () => {
if (!runtimeAuthState) {
return;
}
copilotRefreshCancelled = true;
clearCopilotRefreshTimer();
runtimeAuthRefreshCancelled = true;
clearRuntimeAuthRefreshTimer();
};

const refreshCopilotToken = async (reason: string): Promise<void> => {
if (!copilotTokenState) {
const refreshRuntimeAuth = async (reason: string): Promise<void> => {
if (!runtimeAuthState) {
return;
}
if (copilotTokenState.refreshInFlight) {
await copilotTokenState.refreshInFlight;
if (runtimeAuthState.refreshInFlight) {
await runtimeAuthState.refreshInFlight;
return;
}
const { resolveCopilotApiToken } = await import("../../providers/github-copilot-token.js");
copilotTokenState.refreshInFlight = (async () => {
const githubToken = copilotTokenState.githubToken.trim();
if (!githubToken) {
throw new Error("Copilot refresh requires a GitHub token.");
runtimeAuthState.refreshInFlight = (async () => {
const sourceApiKey = runtimeAuthState?.sourceApiKey.trim() ?? "";
if (!sourceApiKey) {
throw new Error(`Runtime auth refresh requires a source credential.`);
}
log.debug(`Refreshing GitHub Copilot token (${reason})...`);
const copilotToken = await resolveCopilotApiToken({
githubToken,
log.debug(`Refreshing runtime auth for ${runtimeModel.provider} (${reason})...`);
const preparedAuth = await prepareProviderRuntimeAuth({
provider: runtimeModel.provider,
config: params.config,
workspaceDir: resolvedWorkspace,
env: process.env,
context: {
config: params.config,
agentDir,
workspaceDir: resolvedWorkspace,
env: process.env,
provider: runtimeModel.provider,
modelId,
model: runtimeModel,
apiKey: sourceApiKey,
authMode: runtimeAuthState?.authMode ?? "unknown",
profileId: runtimeAuthState?.profileId,
},
});
authStorage.setRuntimeApiKey(model.provider, copilotToken.token);
copilotTokenState.expiresAt = copilotToken.expiresAt;
const remaining = copilotToken.expiresAt - Date.now();
log.debug(
`Copilot token refreshed; expires in ${Math.max(0, Math.floor(remaining / 1000))}s.`,
);
if (!preparedAuth?.apiKey) {
throw new Error(
`Provider "${runtimeModel.provider}" does not support runtime auth refresh.`,
);
}
authStorage.setRuntimeApiKey(runtimeModel.provider, preparedAuth.apiKey);
if (preparedAuth.baseUrl) {
runtimeModel = { ...runtimeModel, baseUrl: preparedAuth.baseUrl };
effectiveModel = { ...effectiveModel, baseUrl: preparedAuth.baseUrl };
}
runtimeAuthState = {
...runtimeAuthState,
expiresAt: preparedAuth.expiresAt,
};
if (preparedAuth.expiresAt) {
const remaining = preparedAuth.expiresAt - Date.now();
log.debug(
`Runtime auth refreshed for ${runtimeModel.provider}; expires in ${Math.max(0, Math.floor(remaining / 1000))}s.`,
);
}
})()
.catch((err) => {
log.warn(`Copilot token refresh failed: ${describeUnknownError(err)}`);
log.warn(
`Runtime auth refresh failed for ${runtimeModel.provider}: ${describeUnknownError(err)}`,
);
throw err;
})
.finally(() => {
copilotTokenState.refreshInFlight = undefined;
if (runtimeAuthState) {
runtimeAuthState.refreshInFlight = undefined;
}
});
await copilotTokenState.refreshInFlight;
await runtimeAuthState.refreshInFlight;
};

const scheduleCopilotRefresh = (): void => {
if (!copilotTokenState || copilotRefreshCancelled) {
const scheduleRuntimeAuthRefresh = (): void => {
if (!runtimeAuthState || runtimeAuthRefreshCancelled) {
return;
}
if (!hasCopilotGithubToken()) {
log.warn("Skipping Copilot refresh scheduling; GitHub token missing.");
if (!hasRefreshableRuntimeAuth()) {
log.warn(
`Skipping runtime auth refresh scheduling for ${runtimeModel.provider}; source credential missing.`,
);
return;
}
clearCopilotRefreshTimer();
if (!runtimeAuthState.expiresAt) {
return;
}
clearRuntimeAuthRefreshTimer();
const now = Date.now();
const refreshAt = copilotTokenState.expiresAt - COPILOT_REFRESH_MARGIN_MS;
const delayMs = Math.max(COPILOT_REFRESH_MIN_DELAY_MS, refreshAt - now);
const refreshAt = runtimeAuthState.expiresAt - RUNTIME_AUTH_REFRESH_MARGIN_MS;
const delayMs = Math.max(RUNTIME_AUTH_REFRESH_MIN_DELAY_MS, refreshAt - now);
const timer = setTimeout(() => {
if (copilotRefreshCancelled) {
if (runtimeAuthRefreshCancelled) {
return;
}
refreshCopilotToken("scheduled")
.then(() => scheduleCopilotRefresh())
refreshRuntimeAuth("scheduled")
.then(() => scheduleRuntimeAuthRefresh())
.catch(() => {
if (copilotRefreshCancelled) {
if (runtimeAuthRefreshCancelled) {
return;
}
const retryTimer = setTimeout(() => {
if (copilotRefreshCancelled) {
if (runtimeAuthRefreshCancelled) {
return;
}
refreshCopilotToken("scheduled-retry")
.then(() => scheduleCopilotRefresh())
refreshRuntimeAuth("scheduled-retry")
.then(() => scheduleRuntimeAuthRefresh())
.catch(() => undefined);
}, COPILOT_REFRESH_RETRY_MS);
copilotTokenState.refreshTimer = retryTimer;
if (copilotRefreshCancelled) {
}, RUNTIME_AUTH_REFRESH_RETRY_MS);
const activeRuntimeAuthState = runtimeAuthState;
if (activeRuntimeAuthState) {
activeRuntimeAuthState.refreshTimer = retryTimer;
}
if (runtimeAuthRefreshCancelled && activeRuntimeAuthState) {
clearTimeout(retryTimer);
copilotTokenState.refreshTimer = undefined;
activeRuntimeAuthState.refreshTimer = undefined;
}
});
}, delayMs);
copilotTokenState.refreshTimer = timer;
if (copilotRefreshCancelled) {
runtimeAuthState.refreshTimer = timer;
if (runtimeAuthRefreshCancelled) {
clearTimeout(timer);
copilotTokenState.refreshTimer = undefined;
runtimeAuthState.refreshTimer = undefined;
}
};

@@ -599,7 +642,7 @@ export async function runEmbeddedPiAgent(

const resolveApiKeyForCandidate = async (candidate?: string) => {
return getApiKeyForModel({
model,
model: runtimeModel,
cfg: params.config,
profileId: candidate,
store: authStore,
@@ -613,26 +656,53 @@ export async function runEmbeddedPiAgent(
if (!apiKeyInfo.apiKey) {
if (apiKeyInfo.mode !== "aws-sdk") {
throw new Error(
`No API key resolved for provider "${model.provider}" (auth mode: ${apiKeyInfo.mode}).`,
`No API key resolved for provider "${runtimeModel.provider}" (auth mode: ${apiKeyInfo.mode}).`,
);
}
lastProfileId = resolvedProfileId;
return;
}
if (model.provider === "github-copilot") {
const { resolveCopilotApiToken } =
await import("../../providers/github-copilot-token.js");
const copilotToken = await resolveCopilotApiToken({
githubToken: apiKeyInfo.apiKey,
});
authStorage.setRuntimeApiKey(model.provider, copilotToken.token);
if (copilotTokenState) {
copilotTokenState.githubToken = apiKeyInfo.apiKey;
copilotTokenState.expiresAt = copilotToken.expiresAt;
scheduleCopilotRefresh();
let runtimeAuthHandled = false;
const preparedAuth = await prepareProviderRuntimeAuth({
provider: runtimeModel.provider,
config: params.config,
workspaceDir: resolvedWorkspace,
env: process.env,
context: {
config: params.config,
agentDir,
workspaceDir: resolvedWorkspace,
env: process.env,
provider: runtimeModel.provider,
modelId,
model: runtimeModel,
apiKey: apiKeyInfo.apiKey,
authMode: apiKeyInfo.mode,
profileId: apiKeyInfo.profileId,
},
});
if (preparedAuth?.baseUrl) {
runtimeModel = { ...runtimeModel, baseUrl: preparedAuth.baseUrl };
effectiveModel = { ...effectiveModel, baseUrl: preparedAuth.baseUrl };
}
if (preparedAuth?.apiKey) {
authStorage.setRuntimeApiKey(runtimeModel.provider, preparedAuth.apiKey);
runtimeAuthState = {
sourceApiKey: apiKeyInfo.apiKey,
authMode: apiKeyInfo.mode,
profileId: apiKeyInfo.profileId,
expiresAt: preparedAuth.expiresAt,
};
if (preparedAuth.expiresAt) {
scheduleRuntimeAuthRefresh();
}
runtimeAuthHandled = true;
}
if (runtimeAuthHandled) {
// Plugin-owned runtime auth already stored the exchanged credential.
} else {
authStorage.setRuntimeApiKey(model.provider, apiKeyInfo.apiKey);
authStorage.setRuntimeApiKey(runtimeModel.provider, apiKeyInfo.apiKey);
runtimeAuthState = null;
}
lastProfileId = apiKeyInfo.profileId;
};
@@ -721,11 +791,11 @@ export async function runEmbeddedPiAgent(
}
}

const maybeRefreshCopilotForAuthError = async (
const maybeRefreshRuntimeAuthForAuthError = async (
errorText: string,
retried: boolean,
): Promise<boolean> => {
if (!copilotTokenState || retried) {
if (!runtimeAuthState || retried) {
return false;
}
if (!isFailoverErrorMessage(errorText)) {
@@ -735,8 +805,8 @@ export async function runEmbeddedPiAgent(
return false;
}
try {
await refreshCopilotToken("auth-error");
scheduleCopilotRefresh();
|
||||
await refreshRuntimeAuth("auth-error");
|
||||
scheduleRuntimeAuthRefresh();
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
@@ -846,7 +916,7 @@ export async function runEmbeddedPiAgent(
|
||||
};
|
||||
}
|
||||
runLoopIterations += 1;
|
||||
const copilotAuthRetry = authRetryPending;
|
||||
const runtimeAuthRetry = authRetryPending;
|
||||
authRetryPending = false;
|
||||
attemptedThinking.add(thinkLevel);
|
||||
await fs.mkdir(resolvedWorkspace, { recursive: true });
|
||||
@@ -1233,7 +1303,7 @@ export async function runEmbeddedPiAgent(
|
||||
? describeFailoverError(normalizedPromptFailover)
|
||||
: describeFailoverError(promptError);
|
||||
const errorText = promptErrorDetails.message || describeUnknownError(promptError);
|
||||
if (await maybeRefreshCopilotForAuthError(errorText, copilotAuthRetry)) {
|
||||
if (await maybeRefreshRuntimeAuthForAuthError(errorText, runtimeAuthRetry)) {
|
||||
authRetryPending = true;
|
||||
continue;
|
||||
}
|
||||
@@ -1403,9 +1473,9 @@ export async function runEmbeddedPiAgent(
|
||||
|
||||
if (
|
||||
authFailure &&
|
||||
(await maybeRefreshCopilotForAuthError(
|
||||
(await maybeRefreshRuntimeAuthForAuthError(
|
||||
lastAssistant?.errorMessage ?? "",
|
||||
copilotAuthRetry,
|
||||
runtimeAuthRetry,
|
||||
))
|
||||
) {
|
||||
authRetryPending = true;
|
||||
@@ -1620,7 +1690,7 @@ export async function runEmbeddedPiAgent(
|
||||
}
|
||||
} finally {
|
||||
await contextEngine.dispose?.();
|
||||
stopCopilotRefreshTimer();
|
||||
stopRuntimeAuthRefreshTimer();
|
||||
process.chdir(prevCwd);
|
||||
}
|
||||
}),
|
||||
|
||||
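The runner-side exchange above calls into a plugin's `prepareRuntimeAuth` hook. A minimal sketch of what such a hook might look like from the plugin side, using hypothetical names (`exchangeToken` stands in for the provider-specific HTTP exchange; this is not the actual bundled plugin code):

```typescript
// Hypothetical sketch: a provider plugin's prepareRuntimeAuth hook that
// exchanges the configured long-lived credential (ctx.apiKey) for a
// short-lived runtime token, mirroring the runner flow above.
type PreparedRuntimeAuth = { apiKey: string; baseUrl?: string; expiresAt?: number };

type RuntimeAuthContext = { provider: string; apiKey: string };

// Stand-in for a provider-specific token-exchange request.
async function exchangeToken(sourceKey: string): Promise<{ token: string; expiresAt: number }> {
  return { token: `runtime-${sourceKey}`, expiresAt: Date.now() + 60_000 };
}

async function prepareRuntimeAuth(ctx: RuntimeAuthContext): Promise<PreparedRuntimeAuth> {
  // Returning expiresAt lets the runner schedule a background refresh
  // and retry once on auth errors, as in the loop above.
  const exchanged = await exchangeToken(ctx.apiKey);
  return { apiKey: exchanged.token, expiresAt: exchanged.expiresAt };
}
```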
@@ -1,4 +1,31 @@
import { describe, expect, it } from "vitest";
import { describe, expect, it, vi } from "vitest";

const resolveProviderCapabilitiesWithPluginMock = vi.fn((params: { provider: string }) => {
switch (params.provider) {
case "openrouter":
return {
openAiCompatTurnValidation: false,
geminiThoughtSignatureSanitization: true,
geminiThoughtSignatureModelHints: ["gemini"],
};
case "openai-codex":
return {
providerFamily: "openai",
};
case "github-copilot":
return {
dropThinkingBlockModelHints: ["claude"],
};
default:
return undefined;
}
});

vi.mock("../plugins/provider-runtime.js", () => ({
resolveProviderCapabilitiesWithPlugin: (params: { provider: string }) =>
resolveProviderCapabilitiesWithPluginMock(params),
}));

import {
isAnthropicProviderFamily,
isOpenAiProviderFamily,

@@ -1,3 +1,4 @@
import { resolveProviderCapabilitiesWithPlugin } from "../plugins/provider-runtime.js";
import { normalizeProviderId } from "./model-selection.js";

export type ProviderCapabilities = {
@@ -55,14 +56,6 @@ const PROVIDER_CAPABILITIES: Record<string, Partial<ProviderCapabilities>> = {
openai: {
providerFamily: "openai",
},
"openai-codex": {
providerFamily: "openai",
},
openrouter: {
openAiCompatTurnValidation: false,
geminiThoughtSignatureSanitization: true,
geminiThoughtSignatureModelHints: ["gemini"],
},
opencode: {
openAiCompatTurnValidation: false,
geminiThoughtSignatureSanitization: true,
@@ -77,16 +70,17 @@ const PROVIDER_CAPABILITIES: Record<string, Partial<ProviderCapabilities>> = {
geminiThoughtSignatureSanitization: true,
geminiThoughtSignatureModelHints: ["gemini"],
},
"github-copilot": {
dropThinkingBlockModelHints: ["claude"],
},
};

export function resolveProviderCapabilities(provider?: string | null): ProviderCapabilities {
const normalized = normalizeProviderId(provider ?? "");
const pluginCapabilities = normalized
? resolveProviderCapabilitiesWithPlugin({ provider: normalized })
: undefined;
return {
...DEFAULT_PROVIDER_CAPABILITIES,
...PROVIDER_CAPABILITIES[normalized],
...pluginCapabilities,
};
}

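The spread order in `resolveProviderCapabilities` gives plugin-supplied capabilities the last word: defaults first, then the static per-provider table, then the plugin's `capabilities` object. A minimal sketch of that precedence, with illustrative names rather than the real OpenClaw types:

```typescript
// Sketch of the capability merge order above: plugin capabilities override
// the static per-provider table, which overrides the defaults.
type Caps = { providerFamily?: string; openAiCompatTurnValidation?: boolean };

const DEFAULTS: Caps = { openAiCompatTurnValidation: true };
const STATIC: Record<string, Caps> = {
  openrouter: { openAiCompatTurnValidation: false },
};

function mergeCaps(provider: string, pluginCaps?: Caps): Caps {
  // Later spreads win on key conflicts, so pluginCaps has highest priority.
  return { ...DEFAULTS, ...STATIC[provider], ...pluginCaps };
}
```

This is why the diff can delete the `openrouter`, `openai-codex`, and `github-copilot` entries from the static table: the bundled plugins now supply the same flags through the highest-priority layer.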
@@ -2,6 +2,17 @@ export type {
AnyAgentTool,
OpenClawPluginApi,
ProviderDiscoveryContext,
ProviderCatalogContext,
ProviderCatalogResult,
ProviderCacheTtlEligibilityContext,
ProviderPreparedRuntimeAuth,
ProviderPrepareExtraParamsContext,
ProviderPrepareDynamicModelContext,
ProviderPrepareRuntimeAuthContext,
ProviderResolveDynamicModelContext,
ProviderNormalizeResolvedModelContext,
ProviderRuntimeModel,
ProviderWrapStreamFnContext,
OpenClawPluginService,
ProviderAuthContext,
ProviderAuthMethodNonInteractiveContext,

@@ -103,6 +103,15 @@ export type {
PluginLogger,
ProviderAuthContext,
ProviderAuthResult,
ProviderCacheTtlEligibilityContext,
ProviderPreparedRuntimeAuth,
ProviderPrepareExtraParamsContext,
ProviderPrepareDynamicModelContext,
ProviderPrepareRuntimeAuthContext,
ProviderResolveDynamicModelContext,
ProviderNormalizeResolvedModelContext,
ProviderRuntimeModel,
ProviderWrapStreamFnContext,
} from "../plugins/types.js";
export type {
GatewayRequestHandler,
@@ -805,7 +814,11 @@ export type { ContextEngineFactory } from "../context-engine/registry.js";
// agentDir/store) rather than importing raw helpers directly.
export { requireApiKey } from "../agents/model-auth.js";
export type { ResolvedProviderAuth } from "../agents/model-auth.js";
export type { ProviderDiscoveryContext } from "../plugins/types.js";
export type {
ProviderCatalogContext,
ProviderCatalogResult,
ProviderDiscoveryContext,
} from "../plugins/types.js";
export {
applyProviderDefaultModel,
promptAndConfigureOpenAICompatibleSelfHostedProvider,

@@ -6,6 +6,7 @@ export { buildOauthProviderAuthResult } from "./provider-auth-result.js";
export type {
OpenClawPluginApi,
ProviderAuthContext,
ProviderCatalogContext,
ProviderAuthResult,
} from "../plugins/types.js";
export { generatePkceVerifierChallenge, toFormUrlEncoded } from "./oauth-utils.js";

@@ -3,5 +3,9 @@

export { emptyPluginConfigSchema } from "../plugins/config-schema.js";
export { buildOauthProviderAuthResult } from "./provider-auth-result.js";
export type { OpenClawPluginApi, ProviderAuthContext } from "../plugins/types.js";
export type {
OpenClawPluginApi,
ProviderAuthContext,
ProviderCatalogContext,
} from "../plugins/types.js";
export { generatePkceVerifierChallenge, toFormUrlEncoded } from "./oauth-utils.js";

@@ -25,7 +25,10 @@ export type NormalizedPluginsConfig = {

export const BUNDLED_ENABLED_BY_DEFAULT = new Set<string>([
"device-pair",
"github-copilot",
"ollama",
"openai-codex",
"openrouter",
"phone-control",
"sglang",
"talk-voice",

@@ -3,6 +3,7 @@ import type { ModelProviderConfig } from "../config/types.js";
import {
groupPluginDiscoveryProvidersByOrder,
normalizePluginDiscoveryResult,
runProviderCatalog,
} from "./provider-discovery.js";
import type { ProviderDiscoveryOrder, ProviderPlugin } from "./types.js";

@@ -10,15 +11,17 @@ function makeProvider(params: {
id: string;
label?: string;
order?: ProviderDiscoveryOrder;
mode?: "catalog" | "discovery";
}): ProviderPlugin {
const hook = {
...(params.order ? { order: params.order } : {}),
run: async () => null,
};
return {
id: params.id,
label: params.label ?? params.id,
auth: [],
discovery: {
...(params.order ? { order: params.order } : {}),
run: async () => null,
},
...(params.mode === "discovery" ? { discovery: hook } : { catalog: hook }),
};
}

@@ -45,6 +48,14 @@ describe("groupPluginDiscoveryProvidersByOrder", () => {
expect(grouped.paired.map((provider) => provider.id)).toEqual(["paired"]);
expect(grouped.late.map((provider) => provider.id)).toEqual(["late-a", "late-b"]);
});

it("uses the legacy discovery hook when catalog is absent", () => {
const grouped = groupPluginDiscoveryProvidersByOrder([
makeProvider({ id: "legacy", label: "Legacy", order: "profile", mode: "discovery" }),
]);

expect(grouped.profile.map((provider) => provider.id)).toEqual(["legacy"]);
});
});

describe("normalizePluginDiscoveryResult", () => {
@@ -88,3 +99,34 @@ describe("normalizePluginDiscoveryResult", () => {
});
});
});

describe("runProviderCatalog", () => {
it("prefers catalog over discovery when both exist", async () => {
const catalogRun = async () => ({
provider: makeModelProviderConfig({ baseUrl: "http://catalog.example/v1" }),
});
const discoveryRun = async () => ({
provider: makeModelProviderConfig({ baseUrl: "http://discovery.example/v1" }),
});

const result = await runProviderCatalog({
provider: {
id: "demo",
label: "Demo",
auth: [],
catalog: { run: catalogRun },
discovery: { run: discoveryRun },
},
config: {},
env: {},
resolveProviderApiKey: () => ({ apiKey: undefined }),
});

expect(result).toEqual({
provider: {
baseUrl: "http://catalog.example/v1",
models: [],
},
});
});
});

@@ -6,12 +6,16 @@ import type { ProviderDiscoveryOrder, ProviderPlugin } from "./types.js";

const DISCOVERY_ORDER: readonly ProviderDiscoveryOrder[] = ["simple", "profile", "paired", "late"];

function resolveProviderCatalogHook(provider: ProviderPlugin) {
return provider.catalog ?? provider.discovery;
}

export function resolvePluginDiscoveryProviders(params: {
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
}): ProviderPlugin[] {
return resolvePluginProviders(params).filter((provider) => provider.discovery);
return resolvePluginProviders(params).filter((provider) => resolveProviderCatalogHook(provider));
}

export function groupPluginDiscoveryProvidersByOrder(
@@ -25,7 +29,7 @@ export function groupPluginDiscoveryProvidersByOrder(
} as Record<ProviderDiscoveryOrder, ProviderPlugin[]>;

for (const provider of providers) {
const order = provider.discovery?.order ?? "late";
const order = resolveProviderCatalogHook(provider)?.order ?? "late";
grouped[order].push(provider);
}

@@ -63,3 +67,23 @@ export function normalizePluginDiscoveryResult(params: {
}
return normalized;
}

export function runProviderCatalog(params: {
provider: ProviderPlugin;
config: OpenClawConfig;
agentDir?: string;
workspaceDir?: string;
env: NodeJS.ProcessEnv;
resolveProviderApiKey: (providerId?: string) => {
apiKey: string | undefined;
discoveryApiKey?: string;
};
}) {
return resolveProviderCatalogHook(params.provider)?.run({
config: params.config,
agentDir: params.agentDir,
workspaceDir: params.workspaceDir,
env: params.env,
resolveProviderApiKey: params.resolveProviderApiKey,
});
}

186
src/plugins/provider-runtime.test.ts
Normal file
@@ -0,0 +1,186 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import type { ProviderPlugin, ProviderRuntimeModel } from "./types.js";

const resolvePluginProvidersMock = vi.fn((_: unknown) => [] as ProviderPlugin[]);

vi.mock("./providers.js", () => ({
resolvePluginProviders: (params: unknown) => resolvePluginProvidersMock(params as never),
}));

import {
prepareProviderExtraParams,
resolveProviderCacheTtlEligibility,
resolveProviderCapabilitiesWithPlugin,
normalizeProviderResolvedModelWithPlugin,
prepareProviderDynamicModel,
prepareProviderRuntimeAuth,
resolveProviderRuntimePlugin,
runProviderDynamicModel,
wrapProviderStreamFn,
} from "./provider-runtime.js";

const MODEL: ProviderRuntimeModel = {
id: "demo-model",
name: "Demo Model",
api: "openai-responses",
provider: "demo",
baseUrl: "https://api.example.com/v1",
reasoning: true,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128_000,
maxTokens: 8_192,
};

describe("provider-runtime", () => {
beforeEach(() => {
resolvePluginProvidersMock.mockReset();
resolvePluginProvidersMock.mockReturnValue([]);
});

it("matches providers by alias for runtime hook lookup", () => {
resolvePluginProvidersMock.mockReturnValue([
{
id: "openrouter",
label: "OpenRouter",
aliases: ["Open Router"],
auth: [],
},
]);

const plugin = resolveProviderRuntimePlugin({ provider: "Open Router" });

expect(plugin?.id).toBe("openrouter");
});

it("dispatches runtime hooks for the matched provider", async () => {
const prepareDynamicModel = vi.fn(async () => undefined);
const prepareRuntimeAuth = vi.fn(async () => ({
apiKey: "runtime-token",
baseUrl: "https://runtime.example.com/v1",
expiresAt: 123,
}));
resolvePluginProvidersMock.mockReturnValue([
{
id: "demo",
label: "Demo",
auth: [],
resolveDynamicModel: () => MODEL,
prepareDynamicModel,
capabilities: {
providerFamily: "openai",
},
prepareExtraParams: ({ extraParams }) => ({
...extraParams,
transport: "auto",
}),
wrapStreamFn: ({ streamFn }) => streamFn,
normalizeResolvedModel: ({ model }) => ({
...model,
api: "openai-codex-responses",
}),
prepareRuntimeAuth,
isCacheTtlEligible: ({ modelId }) => modelId.startsWith("anthropic/"),
},
]);

expect(
runProviderDynamicModel({
provider: "demo",
context: {
provider: "demo",
modelId: MODEL.id,
modelRegistry: { find: () => null } as never,
},
}),
).toMatchObject(MODEL);

await prepareProviderDynamicModel({
provider: "demo",
context: {
provider: "demo",
modelId: MODEL.id,
modelRegistry: { find: () => null } as never,
},
});

expect(
resolveProviderCapabilitiesWithPlugin({
provider: "demo",
}),
).toMatchObject({
providerFamily: "openai",
});

expect(
prepareProviderExtraParams({
provider: "demo",
context: {
provider: "demo",
modelId: MODEL.id,
extraParams: { temperature: 0.3 },
},
}),
).toMatchObject({
temperature: 0.3,
transport: "auto",
});

expect(
wrapProviderStreamFn({
provider: "demo",
context: {
provider: "demo",
modelId: MODEL.id,
streamFn: vi.fn(),
},
}),
).toBeTypeOf("function");

expect(
normalizeProviderResolvedModelWithPlugin({
provider: "demo",
context: {
provider: "demo",
modelId: MODEL.id,
model: MODEL,
},
}),
).toMatchObject({
...MODEL,
api: "openai-codex-responses",
});

await expect(
prepareProviderRuntimeAuth({
provider: "demo",
env: process.env,
context: {
env: process.env,
provider: "demo",
modelId: MODEL.id,
model: MODEL,
apiKey: "source-token",
authMode: "api-key",
},
}),
).resolves.toMatchObject({
apiKey: "runtime-token",
baseUrl: "https://runtime.example.com/v1",
expiresAt: 123,
});

expect(
resolveProviderCacheTtlEligibility({
provider: "demo",
context: {
provider: "demo",
modelId: "anthropic/claude-sonnet-4-5",
},
}),
).toBe(true);

expect(prepareDynamicModel).toHaveBeenCalledTimes(1);
expect(prepareRuntimeAuth).toHaveBeenCalledTimes(1);
});
});
123
src/plugins/provider-runtime.ts
Normal file
@@ -0,0 +1,123 @@
import { normalizeProviderId } from "../agents/model-selection.js";
import type { OpenClawConfig } from "../config/config.js";
import { resolvePluginProviders } from "./providers.js";
import type {
ProviderCacheTtlEligibilityContext,
ProviderPrepareExtraParamsContext,
ProviderPrepareDynamicModelContext,
ProviderPrepareRuntimeAuthContext,
ProviderPlugin,
ProviderResolveDynamicModelContext,
ProviderRuntimeModel,
ProviderWrapStreamFnContext,
} from "./types.js";

function matchesProviderId(provider: ProviderPlugin, providerId: string): boolean {
const normalized = normalizeProviderId(providerId);
if (!normalized) {
return false;
}
if (normalizeProviderId(provider.id) === normalized) {
return true;
}
return (provider.aliases ?? []).some((alias) => normalizeProviderId(alias) === normalized);
}

export function resolveProviderRuntimePlugin(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
}): ProviderPlugin | undefined {
return resolvePluginProviders(params).find((plugin) =>
matchesProviderId(plugin, params.provider),
);
}

export function runProviderDynamicModel(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderResolveDynamicModelContext;
}): ProviderRuntimeModel | undefined {
return resolveProviderRuntimePlugin(params)?.resolveDynamicModel?.(params.context) ?? undefined;
}

export async function prepareProviderDynamicModel(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderPrepareDynamicModelContext;
}): Promise<void> {
await resolveProviderRuntimePlugin(params)?.prepareDynamicModel?.(params.context);
}

export function normalizeProviderResolvedModelWithPlugin(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: {
config?: OpenClawConfig;
agentDir?: string;
workspaceDir?: string;
provider: string;
modelId: string;
model: ProviderRuntimeModel;
};
}): ProviderRuntimeModel | undefined {
return (
resolveProviderRuntimePlugin(params)?.normalizeResolvedModel?.(params.context) ?? undefined
);
}

export function resolveProviderCapabilitiesWithPlugin(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
}) {
return resolveProviderRuntimePlugin(params)?.capabilities;
}

export function prepareProviderExtraParams(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderPrepareExtraParamsContext;
}) {
return resolveProviderRuntimePlugin(params)?.prepareExtraParams?.(params.context) ?? undefined;
}

export function wrapProviderStreamFn(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderWrapStreamFnContext;
}) {
return resolveProviderRuntimePlugin(params)?.wrapStreamFn?.(params.context) ?? undefined;
}

export async function prepareProviderRuntimeAuth(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderPrepareRuntimeAuthContext;
}) {
return await resolveProviderRuntimePlugin(params)?.prepareRuntimeAuth?.(params.context);
}

export function resolveProviderCacheTtlEligibility(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderCacheTtlEligibilityContext;
}) {
return resolveProviderRuntimePlugin(params)?.isCacheTtlEligible?.(params.context);
}
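The dispatcher above matches a plugin by normalized id or alias before invoking any hook. A minimal sketch of that lookup, where `normalizeProviderId` is a stand-in guess (lowercase, strip separators) for the real helper in `model-selection.ts`:

```typescript
// Sketch of the alias-aware matching used by resolveProviderRuntimePlugin.
// normalizeProviderId here is a hypothetical stand-in, not the real helper.
function normalizeProviderId(id: string): string {
  return id.trim().toLowerCase().replace(/[\s_-]+/g, "");
}

type PluginLike = { id: string; aliases?: string[] };

function matchesProviderId(plugin: PluginLike, providerId: string): boolean {
  const normalized = normalizeProviderId(providerId);
  if (!normalized) return false;
  if (normalizeProviderId(plugin.id) === normalized) return true;
  // Aliases let user-facing names like "Open Router" resolve to "openrouter".
  return (plugin.aliases ?? []).some((alias) => normalizeProviderId(alias) === normalized);
}
```

Because both sides are normalized, config typos in casing or separators still resolve to the right bundled plugin.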
@@ -124,4 +124,33 @@ describe("normalizeRegisteredProvider", () => {
'provider "demo" model-picker metadata ignored because it has no auth methods',
]);
});

it("prefers catalog when a provider registers both catalog and discovery", () => {
const { diagnostics, pushDiagnostic } = collectDiagnostics();

const provider = normalizeRegisteredProvider({
pluginId: "demo-plugin",
source: "/tmp/demo/index.ts",
provider: makeProvider({
catalog: {
run: async () => null,
},
discovery: {
run: async () => ({
provider: {
baseUrl: "http://127.0.0.1:8000/v1",
models: [],
},
}),
},
}),
pushDiagnostic,
});

expect(provider?.catalog).toBeDefined();
expect(provider?.discovery).toBeUndefined();
expect(diagnostics.map((diag) => diag.message)).toEqual([
'provider "demo" registered both catalog and discovery; using catalog',
]);
});
});

@@ -212,11 +212,24 @@ export function normalizeRegisteredProvider(params: {
wizard: params.provider.wizard,
pushDiagnostic: params.pushDiagnostic,
});
const catalog = params.provider.catalog;
const discovery = params.provider.discovery;
if (catalog && discovery) {
pushProviderDiagnostic({
level: "warn",
pluginId: params.pluginId,
source: params.source,
message: `provider "${id}" registered both catalog and discovery; using catalog`,
pushDiagnostic: params.pushDiagnostic,
});
}
const {
wizard: _ignoredWizard,
docsPath: _ignoredDocsPath,
aliases: _ignoredAliases,
envVars: _ignoredEnvVars,
catalog: _ignoredCatalog,
discovery: _ignoredDiscovery,
...restProvider
} = params.provider;
return {
@@ -227,6 +240,8 @@ export function normalizeRegisteredProvider(params: {
...(aliases ? { aliases } : {}),
...(envVars ? { envVars } : {}),
auth,
...(catalog ? { catalog } : {}),
...(!catalog && discovery ? { discovery } : {}),
...(wizard ? { wizard } : {}),
};
}

@@ -1,12 +1,17 @@
|
||||
import type { IncomingMessage, ServerResponse } from "node:http";
|
||||
import type { AgentMessage } from "@mariozechner/pi-agent-core";
|
||||
import type { StreamFn } from "@mariozechner/pi-agent-core";
|
||||
import type { Api, Model } from "@mariozechner/pi-ai";
|
||||
import type { ModelRegistry } from "@mariozechner/pi-coding-agent";
|
||||
import type { Command } from "commander";
|
||||
import type {
|
||||
ApiKeyCredential,
|
||||
AuthProfileCredential,
|
||||
OAuthCredential,
|
||||
} from "../agents/auth-profiles/types.js";
|
||||
import type { ProviderCapabilities } from "../agents/provider-capabilities.js";
|
||||
import type { AnyAgentTool } from "../agents/tools/common.js";
|
||||
import type { ThinkLevel } from "../auto-reply/thinking.js";
|
||||
import type { ReplyPayload } from "../auto-reply/types.js";
|
||||
import type { ChannelDock } from "../channels/dock.js";
|
||||
import type { ChannelId, ChannelPlugin } from "../channels/plugins/types.js";
|
||||
@@ -166,9 +171,9 @@ export type ProviderAuthMethod = {
|
||||
) => Promise<OpenClawConfig | null>;
|
||||
};
|
||||
|
||||
export type ProviderDiscoveryOrder = "simple" | "profile" | "paired" | "late";
|
||||
export type ProviderCatalogOrder = "simple" | "profile" | "paired" | "late";
|
||||
|
||||
export type ProviderDiscoveryContext = {
|
||||
export type ProviderCatalogContext = {
|
||||
config: OpenClawConfig;
|
||||
agentDir?: string;
|
||||
workspaceDir?: string;
|
||||
@@ -179,17 +184,168 @@ export type ProviderDiscoveryContext = {
|
||||
};
|
||||
};
|
||||
|
||||
export type ProviderDiscoveryResult =
|
||||
export type ProviderCatalogResult =
|
||||
| { provider: ModelProviderConfig }
|
||||
| { providers: Record<string, ModelProviderConfig> }
|
||||
| null
|
||||
| undefined;
|
||||
|
||||
export type ProviderPluginDiscovery = {
|
||||
order?: ProviderDiscoveryOrder;
|
||||
run: (ctx: ProviderDiscoveryContext) => Promise<ProviderDiscoveryResult>;
|
||||
export type ProviderPluginCatalog = {
|
||||
order?: ProviderCatalogOrder;
|
||||
run: (ctx: ProviderCatalogContext) => Promise<ProviderCatalogResult>;
|
||||
};
|
||||
|
||||
/**
|
||||
* Fully-resolved runtime model shape used by the embedded runner.
|
||||
*
|
||||
* Catalog hooks publish config-time `models.providers` entries.
|
||||
* Runtime hooks below operate on the final `pi-ai` model object after
|
||||
* discovery/override merging, just before inference runs.
|
||||
*/
|
||||
export type ProviderRuntimeModel = Model<Api>;
|
||||
|
||||
export type ProviderRuntimeProviderConfig = {
|
||||
baseUrl?: string;
|
||||
api?: ModelProviderConfig["api"];
|
||||
models?: ModelProviderConfig["models"];
|
||||
headers?: unknown;
|
||||
};
|
||||
|
||||
/**
|
||||
* Sync hook for provider-owned model ids that are not present in the local
|
||||
* registry/catalog yet.
|
||||
*
|
||||
* Use this for pass-through providers or provider-specific forward-compat
|
||||
* behavior. The hook should be cheap and side-effect free; async refreshes
|
||||
* belong in `prepareDynamicModel`.
|
||||
*/
|
||||
export type ProviderResolveDynamicModelContext = {
|
||||
config?: OpenClawConfig;
|
||||
agentDir?: string;
|
||||
workspaceDir?: string;
|
||||
provider: string;
|
||||
modelId: string;
|
||||
modelRegistry: ModelRegistry;
|
||||
providerConfig?: ProviderRuntimeProviderConfig;
|
||||
};
|
||||
|
||||
/**
|
||||
* Optional async warm-up for dynamic model resolution.
|
||||
*
|
||||
* Called only from async model resolution paths, before retrying
|
||||
* `resolveDynamicModel`. This is the place to refresh caches or fetch provider
|
||||
* metadata over the network.
|
||||
*/
|
||||
export type ProviderPrepareDynamicModelContext = ProviderResolveDynamicModelContext;
|
||||
|
||||
/**
|
||||
* Last-chance rewrite hook for provider-owned transport normalization.
|
||||
*
|
||||
* Runs after OpenClaw resolves an explicit/discovered/dynamic model and before
|
||||
* the embedded runner uses it. Typical uses: swap API ids, fix base URLs, or
|
||||
* patch provider-specific compat bits.
|
||||
*/
|
||||
export type ProviderNormalizeResolvedModelContext = {
|
||||
config?: OpenClawConfig;
|
||||
agentDir?: string;
|
||||
workspaceDir?: string;
|
||||
provider: string;
|
||||
modelId: string;
|
||||
model: ProviderRuntimeModel;
|
||||
};
|
||||
|
||||
/**
|
||||
* Runtime auth input for providers that need an extra exchange step before
|
||||
* inference. The incoming `apiKey` is the raw credential resolved from auth
|
||||
* profiles/env/config. The returned value should be the actual token/key to use
|
||||
* for the request.
|
||||
*/
|
||||
export type ProviderPrepareRuntimeAuthContext = {
|
||||
config?: OpenClawConfig;
|
||||
agentDir?: string;
|
||||
workspaceDir?: string;
|
||||
env: NodeJS.ProcessEnv;
|
||||
provider: string;
|
||||
modelId: string;
|
||||
model: ProviderRuntimeModel;
|
||||
apiKey: string;
|
||||
authMode: string;
|
||||
profileId?: string;
|
||||
};
|
||||
|
||||
/**
|
||||
* Result of `prepareRuntimeAuth`.
|
||||
*
|
||||
* `apiKey` is required and becomes the runtime credential stored in auth
|
||||
* storage. `baseUrl` is optional and lets providers like GitHub Copilot swap to
|
||||
* an entitlement-specific endpoint at request time. `expiresAt` enables generic
|
||||
* background refresh in long-running turns.
|
||||
*/
|
||||
export type ProviderPreparedRuntimeAuth = {
|
||||
apiKey: string;
|
||||
baseUrl?: string;
|
||||
expiresAt?: number;
|
||||
};

/**
 * Provider-owned extra-param normalization before OpenClaw builds its generic
 * stream option wrapper.
 *
 * Use this to set provider defaults or rewrite provider-specific config keys
 * into the merged `extraParams` object. Return the full next extraParams object.
 */
export type ProviderPrepareExtraParamsContext = {
  config?: OpenClawConfig;
  agentDir?: string;
  workspaceDir?: string;
  provider: string;
  modelId: string;
  extraParams?: Record<string, unknown>;
  thinkingLevel?: ThinkLevel;
};
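As a sketch of the "set provider defaults, then let merged params win" pattern, a hypothetical `prepareExtraParams` could look like this (the context type is a simplified stand-in, and the `transport` key/value is illustrative):

```typescript
// Sketch only: simplified context; the real one also carries config/dirs.
type ExtraParamsCtx = {
  provider: string;
  modelId: string;
  extraParams?: Record<string, unknown>;
};

// Provider defaults first, then incoming merged params override them.
function prepareExtraParams(ctx: ExtraParamsCtx): Record<string, unknown> {
  return { transport: "responses", ...ctx.extraParams };
}

const params = prepareExtraParams({
  provider: "example",
  modelId: "m1",
  extraParams: { temperature: 0 },
});
```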

/**
 * Provider-owned stream wrapper hook after OpenClaw applies its generic
 * transport-independent wrappers.
 *
 * Use this for provider-specific payload/header/model mutations that still run
 * through the normal `pi-ai` stream path.
 */
export type ProviderWrapStreamFnContext = ProviderPrepareExtraParamsContext & {
  streamFn?: StreamFn;
};
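A minimal sketch of the wrapping pattern: inject a provider attribution header, then delegate to the wrapped stream function. The `StreamFn` shape here is a deliberately simplified stand-in for the real `pi-ai` signature, and the header name is hypothetical:

```typescript
// Sketch only: a toy StreamFn that just reports which headers it saw.
type StreamFn = (opts: { headers?: Record<string, string> }) => string;

// Hypothetical wrapper: add an attribution header, keep everything else.
function wrapStreamFn(inner: StreamFn): StreamFn {
  return (opts) =>
    inner({
      ...opts,
      headers: { ...opts.headers, "X-Example-Attribution": "my-plugin" },
    });
}

const base: StreamFn = (opts) => Object.keys(opts.headers ?? {}).join(",");
const wrapped = wrapStreamFn(base);
const seen = wrapped({ headers: { Authorization: "Bearer t" } });
```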

/**
 * Provider-owned prompt-cache eligibility.
 *
 * Return `true` or `false` to override OpenClaw's built-in provider cache TTL
 * detection for this provider. Return `undefined` to fall back to core rules.
 */
export type ProviderCacheTtlEligibilityContext = {
  provider: string;
  modelId: string;
};
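The tri-state contract can be sketched like this: a hypothetical proxy provider that only supports Anthropic-style prompt caching on `claude-*` upstream models answers for itself and defers everywhere else (provider and model names are illustrative):

```typescript
// Sketch only: true/false decides for this provider, undefined defers to core.
function isCacheTtlEligible(ctx: { provider: string; modelId: string }): boolean | undefined {
  if (ctx.provider !== "example-proxy") return undefined; // core rules apply
  return ctx.modelId.startsWith("claude-");
}

const eligible = isCacheTtlEligible({ provider: "example-proxy", modelId: "claude-opus-4" });
const ineligible = isCacheTtlEligible({ provider: "example-proxy", modelId: "gpt-x" });
const deferred = isCacheTtlEligible({ provider: "other", modelId: "claude-opus-4" });
```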

/**
 * @deprecated Use ProviderCatalogOrder.
 */
export type ProviderDiscoveryOrder = ProviderCatalogOrder;

/**
 * @deprecated Use ProviderCatalogContext.
 */
export type ProviderDiscoveryContext = ProviderCatalogContext;

/**
 * @deprecated Use ProviderCatalogResult.
 */
export type ProviderDiscoveryResult = ProviderCatalogResult;

/**
 * @deprecated Use ProviderPluginCatalog.
 */
export type ProviderPluginDiscovery = ProviderPluginCatalog;
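For the catalog-miss path, the dynamic-model pair on `ProviderPlugin` is designed to cooperate: a cheap sync `resolveDynamicModel` lookup, and an async `prepareDynamicModel` prefetch after which OpenClaw retries the sync lookup. A minimal sketch with simplified contexts (the real ones also carry config, directories, and the provider id; the cache and model names are illustrative):

```typescript
// Sketch only: a plugin-local cache primed by the async prefetch below.
type RuntimeModel = { id: string };
const dynamicModels = new Map<string, RuntimeModel>();

// Cheap, deterministic sync lookup -- no network I/O here.
function resolveDynamicModel(ctx: { modelId: string }): RuntimeModel | undefined {
  return dynamicModels.get(ctx.modelId);
}

// Async prefetch: a real plugin would fetch the provider's model list here,
// then OpenClaw calls resolveDynamicModel again.
async function prepareDynamicModel(ctx: { modelId: string }): Promise<void> {
  dynamicModels.set(ctx.modelId, { id: ctx.modelId });
}

const miss = resolveDynamicModel({ modelId: "new-model" });
// The sketch body has no await before the cache write, so the entry is
// visible as soon as the call returns.
void prepareDynamicModel({ modelId: "new-model" });
const hit = resolveDynamicModel({ modelId: "new-model" });
```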

export type ProviderPluginWizardOnboarding = {
  choiceId?: string;
  choiceLabel?: string;

@@ -227,7 +383,93 @@ export type ProviderPlugin = {
  aliases?: string[];
  envVars?: string[];
  auth: ProviderAuthMethod[];
  /**
   * Preferred hook for plugin-defined provider catalogs.
   * Returns provider config/model definitions that merge into models.providers.
   */
  catalog?: ProviderPluginCatalog;
  /**
   * Legacy alias for catalog.
   * Kept for compatibility with existing provider plugins.
   */
  discovery?: ProviderPluginDiscovery;
  /**
   * Sync runtime fallback for model ids not present in the local catalog.
   *
   * Hook order:
   * 1. discovered/static model lookup
   * 2. plugin `resolveDynamicModel`
   * 3. core fallback heuristics
   * 4. generic provider-config fallback
   *
   * Keep this hook cheap and deterministic. If you need network I/O first, use
   * `prepareDynamicModel` to prime state for the async retry path.
   */
  resolveDynamicModel?: (
    ctx: ProviderResolveDynamicModelContext,
  ) => ProviderRuntimeModel | null | undefined;
  /**
   * Optional async prefetch for dynamic model resolution.
   *
   * OpenClaw calls this only from async model resolution paths. After it
   * completes, `resolveDynamicModel` is called again.
   */
  prepareDynamicModel?: (ctx: ProviderPrepareDynamicModelContext) => Promise<void>;
  /**
   * Provider-owned transport normalization.
   *
   * Use this to rewrite a resolved model without forking the generic runner:
   * swap API ids, update base URLs, or adjust compat flags for a provider's
   * transport quirks.
   */
  normalizeResolvedModel?: (
    ctx: ProviderNormalizeResolvedModelContext,
  ) => ProviderRuntimeModel | null | undefined;
  /**
   * Static provider capability overrides consumed by shared transcript/tooling
   * logic.
   *
   * Use this when the provider behaves like OpenAI/Anthropic, needs transcript
   * sanitization quirks, or requires provider-family hints.
   */
  capabilities?: Partial<ProviderCapabilities>;
  /**
   * Provider-owned extra-param normalization before generic stream option
   * wrapping.
   *
   * Typical uses: set a provider-default `transport`, map provider-specific
   * config aliases, or inject extra request metadata sourced from
   * `agents.defaults.models.<provider>/<model>.params`.
   */
  prepareExtraParams?: (
    ctx: ProviderPrepareExtraParamsContext,
  ) => Record<string, unknown> | null | undefined;
  /**
   * Provider-owned stream wrapper applied after generic OpenClaw wrappers.
   *
   * Typical uses: provider attribution headers, request-body rewrites, or
   * provider-specific compat payload patches that do not justify a separate
   * transport implementation.
   */
  wrapStreamFn?: (ctx: ProviderWrapStreamFnContext) => StreamFn | null | undefined;
  /**
   * Runtime auth exchange hook.
   *
   * Called after OpenClaw resolves the raw configured credential but before the
   * runner stores it in runtime auth storage. This lets plugins exchange a
   * source credential (for example a GitHub token) into a short-lived runtime
   * token plus an optional base URL override.
   */
  prepareRuntimeAuth?: (
    ctx: ProviderPrepareRuntimeAuthContext,
  ) => Promise<ProviderPreparedRuntimeAuth | null | undefined>;
  /**
   * Provider-owned cache TTL eligibility.
   *
   * Use this when a proxy provider supports Anthropic-style prompt caching for
   * only a subset of upstream models.
   */
  isCacheTtlEligible?: (ctx: ProviderCacheTtlEligibilityContext) => boolean | undefined;
  wizard?: ProviderPluginWizard;
  formatApiKey?: (cred: AuthProfileCredential) => string;
  refreshOAuth?: (cred: OAuthCredential) => Promise<OAuthCredential>;