feat(plugins): merge openai vendor seams into one plugin

Peter Steinberger
2026-03-15 17:50:16 -07:00
parent bc5054ce68
commit b54e37c71f
26 changed files with 833 additions and 548 deletions

View File

@@ -22,8 +22,9 @@ For model selection rules, see [/concepts/models](/concepts/models).
- Provider plugins can also own provider runtime behavior via
`resolveDynamicModel`, `prepareDynamicModel`, `normalizeResolvedModel`,
`capabilities`, `prepareExtraParams`, `wrapStreamFn`,
`isCacheTtlEligible`, `prepareRuntimeAuth`, `resolveUsageAuth`, and
`fetchUsageSnapshot`.
`isCacheTtlEligible`, `buildMissingAuthMessage`,
`suppressBuiltInModel`, `augmentModelCatalog`, `prepareRuntimeAuth`,
`resolveUsageAuth`, and `fetchUsageSnapshot`.
## Plugin-owned provider behavior
@@ -42,6 +43,12 @@ Typical split:
- `prepareExtraParams`: provider defaults or normalizes per-model request params
- `wrapStreamFn`: provider applies request headers/body/model compat wrappers
- `isCacheTtlEligible`: provider decides which upstream model ids support prompt-cache TTL
- `buildMissingAuthMessage`: provider replaces the generic auth-store error
with a provider-specific recovery hint
- `suppressBuiltInModel`: provider hides stale upstream rows and can return a
vendor-owned error for direct resolution failures
- `augmentModelCatalog`: provider appends synthetic/final catalog rows after
discovery and config merging
- `prepareRuntimeAuth`: provider turns a configured credential into a
short-lived runtime token
- `resolveUsageAuth`: provider resolves usage/quota credentials for `/usage`
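Taken together, the three new hooks compose like this hypothetical provider sketch. All type shapes, ids, and messages below are invented for illustration; the real SDK contexts carry more fields:

```typescript
// Minimal stand-in shapes for the hook contexts (not the real SDK types).
type CatalogEntry = { provider: string; id: string; name: string };
interface MissingAuthCtx {
  provider: string;
  listProfileIds: (providerId: string) => string[];
}
interface SuppressCtx {
  provider: string;
  modelId: string;
}

const exampleProvider = {
  id: "example",
  // Replace the generic auth-store error with a provider-specific recovery hint.
  buildMissingAuthMessage(ctx: MissingAuthCtx): string | undefined {
    if (ctx.provider !== "example") return undefined;
    if (ctx.listProfileIds("example-oauth").length === 0) return undefined;
    return 'No API key for "example". You are logged in via OAuth; use example-oauth/* instead.';
  },
  // Hide a stale upstream row and point users at the supported route.
  suppressBuiltInModel(ctx: SuppressCtx): { suppress: boolean; errorMessage?: string } | undefined {
    if (ctx.modelId !== "legacy-model") return undefined;
    return { suppress: true, errorMessage: "legacy-model is only available via example-oauth." };
  },
  // Append a synthetic forward-compat row cloned from an existing entry.
  augmentModelCatalog(ctx: { entries: CatalogEntry[] }): CatalogEntry[] {
    const template = ctx.entries.find((e) => e.provider === "example" && e.id === "model-v1");
    return template ? [{ ...template, id: "model-v2", name: "model-v2" }] : [];
  },
};
```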
@@ -58,9 +65,8 @@ Current bundled examples:
- `github-copilot`: forward-compat model fallback, Claude-thinking transcript
hints, runtime token exchange, and usage endpoint fetching
- `openai`: GPT-5.4 forward-compat fallback, direct OpenAI transport
normalization, and provider-family metadata
- `openai-codex`: forward-compat model fallback, transport normalization, and
default transport params plus usage endpoint fetching
normalization, Codex-aware missing-auth hints, Spark suppression, synthetic
OpenAI/Codex catalog rows, and provider-family metadata
- `google-gemini-cli`: Gemini 3.1 forward-compat fallback plus usage-token
parsing and quota endpoint fetching for usage surfaces
- `moonshot`: shared transport, plugin-owned thinking payload normalization
@@ -75,6 +81,9 @@ Current bundled examples:
plugin-owned catalogs only
- `minimax` and `xiaomi`: plugin-owned catalogs plus usage auth/snapshot logic
The bundled `openai` plugin now owns both provider ids: `openai` and
`openai-codex`.
That covers providers that still fit OpenClaw's normal transports. A provider
that needs a totally custom request executor is a separate, deeper extension
surface.

View File

@@ -178,8 +178,7 @@ Important trust note:
- Model Studio provider catalog — bundled as `modelstudio` (enabled by default)
- Moonshot provider runtime — bundled as `moonshot` (enabled by default)
- NVIDIA provider catalog — bundled as `nvidia` (enabled by default)
- OpenAI provider runtime — bundled as `openai` (enabled by default)
- OpenAI Codex provider runtime — bundled as `openai-codex` (enabled by default)
- OpenAI provider runtime — bundled as `openai` (enabled by default; owns both `openai` and `openai-codex`)
- OpenCode Go provider capabilities — bundled as `opencode-go` (enabled by default)
- OpenCode Zen provider capabilities — bundled as `opencode` (enabled by default)
- OpenRouter provider runtime — bundled as `openrouter` (enabled by default)
@@ -207,7 +206,7 @@ Native OpenClaw plugins can register:
- Background services
- Context engines
- Provider auth flows and model catalogs
- Provider runtime hooks for dynamic model ids, transport normalization, capability metadata, stream wrapping, cache TTL policy, runtime auth exchange, and usage/billing auth + snapshot resolution
- Provider runtime hooks for dynamic model ids, transport normalization, capability metadata, stream wrapping, cache TTL policy, missing-auth hints, built-in model suppression, catalog augmentation, runtime auth exchange, and usage/billing auth + snapshot resolution
- Optional config validation
- **Skills** (by listing `skills` directories in the plugin manifest)
- **Auto-reply commands** (execute without invoking the AI agent)
@@ -220,7 +219,7 @@ Tool authoring guide: [Plugin agent tools](/plugins/agent-tools).
Provider plugins now have two layers:
- config-time hooks: `catalog` / legacy `discovery`
- runtime hooks: `resolveDynamicModel`, `prepareDynamicModel`, `normalizeResolvedModel`, `capabilities`, `prepareExtraParams`, `wrapStreamFn`, `isCacheTtlEligible`, `prepareRuntimeAuth`, `resolveUsageAuth`, `fetchUsageSnapshot`
- runtime hooks: `resolveDynamicModel`, `prepareDynamicModel`, `normalizeResolvedModel`, `capabilities`, `prepareExtraParams`, `wrapStreamFn`, `isCacheTtlEligible`, `buildMissingAuthMessage`, `suppressBuiltInModel`, `augmentModelCatalog`, `prepareRuntimeAuth`, `resolveUsageAuth`, `fetchUsageSnapshot`
OpenClaw still owns the generic agent loop, failover, transcript handling, and
tool policy. These hooks are the seam for provider-specific behavior without
@@ -251,13 +250,20 @@ For model/provider plugins, OpenClaw uses hooks in this rough order:
Provider-owned stream wrapper after generic wrappers are applied.
9. `isCacheTtlEligible`
Provider-owned prompt-cache policy for proxy/backhaul providers.
10. `prepareRuntimeAuth`
10. `buildMissingAuthMessage`
Provider-owned replacement for the generic missing-auth recovery message.
11. `suppressBuiltInModel`
Provider-owned stale upstream model suppression plus optional user-facing
error hint.
12. `augmentModelCatalog`
Provider-owned synthetic/final catalog rows appended after discovery.
13. `prepareRuntimeAuth`
Exchanges a configured credential into the actual runtime token/key just
before inference.
11. `resolveUsageAuth`
14. `resolveUsageAuth`
Resolves usage/billing credentials for `/usage` and related status
surfaces.
12. `fetchUsageSnapshot`
15. `fetchUsageSnapshot`
Fetches and normalizes provider-specific usage/quota snapshots after auth
is resolved.
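The ordering of steps 11 through 15 can be sketched as a rough pipeline. Hook names match the list above; every other name and signature is invented, and step 10 (`buildMissingAuthMessage`) is omitted because it is only consulted when auth resolution fails:

```typescript
type Snapshot = { used: number; limit: number };
interface LateHooks {
  suppressBuiltInModel?: (modelId: string) => { suppress: boolean } | undefined;
  augmentModelCatalog?: (entries: string[]) => string[];
  prepareRuntimeAuth?: (secret: string) => string;
  resolveUsageAuth?: () => string | undefined;
  fetchUsageSnapshot?: (token: string) => Snapshot;
}

function runLateHooks(hooks: LateHooks, entries: string[], secret: string) {
  // 11. Drop rows the provider flags as stale upstream copies.
  const kept = entries.filter((id) => !hooks.suppressBuiltInModel?.(id)?.suppress);
  // 12. Append synthetic/final rows after discovery + config merging.
  const catalog = [...kept, ...(hooks.augmentModelCatalog?.(kept) ?? [])];
  // 13. Exchange the configured credential just before inference.
  const token = hooks.prepareRuntimeAuth?.(secret) ?? secret;
  // 14-15. Resolve usage credentials, then fetch the usage snapshot.
  const usageToken = hooks.resolveUsageAuth?.();
  const usage = usageToken ? hooks.fetchUsageSnapshot?.(usageToken) : undefined;
  return { catalog, token, usage };
}
```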
@@ -271,6 +277,9 @@ For model/provider plugins, OpenClaw uses hooks in this rough order:
- `prepareExtraParams`: set provider defaults or normalize provider-specific per-model params before generic stream wrapping
- `wrapStreamFn`: add provider-specific headers/payload/model compat patches while still using the normal `pi-ai` execution path
- `isCacheTtlEligible`: decide whether provider/model pairs should use cache TTL metadata
- `buildMissingAuthMessage`: replace the generic auth-store error with a provider-specific recovery hint
- `suppressBuiltInModel`: hide stale upstream rows and optionally return a provider-owned error for direct resolution failures
- `augmentModelCatalog`: append synthetic/final catalog rows after discovery and config merging
- `prepareRuntimeAuth`: exchange a configured credential into the actual short-lived runtime token/key used for requests
- `resolveUsageAuth`: resolve provider-owned credentials for usage/billing endpoints without hardcoding token parsing in core
- `fetchUsageSnapshot`: own provider-specific usage endpoint fetch/parsing while core keeps summary fan-out and formatting
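For the usage pair specifically, a hedged sketch of plugin-owned usage auth plus snapshot fetching might look like this. The endpoint URL, payload fields, and context shapes are invented; core keeps summary fan-out and formatting:

```typescript
// A narrow fetch shape so the sketch stays self-contained.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ json: () => Promise<unknown> }>;

interface UsageAuthCtx {
  resolveOAuthToken: () => Promise<string | undefined>;
}
interface UsageSnapshotCtx {
  token: string;
  fetchFn: FetchLike;
}

const usageHooks = {
  // Resolve the credential the usage endpoint needs (OAuth here).
  resolveUsageAuth: async (ctx: UsageAuthCtx) => await ctx.resolveOAuthToken(),
  // Fetch the provider payload and normalize it into a flat snapshot.
  fetchUsageSnapshot: async (ctx: UsageSnapshotCtx) => {
    const res = await ctx.fetchFn("https://provider.example/usage", {
      headers: { Authorization: `Bearer ${ctx.token}` },
    });
    const body = (await res.json()) as { used?: number; limit?: number };
    return { used: body.used ?? 0, limit: body.limit ?? 0 };
  },
};
```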
@@ -285,6 +294,9 @@ Rule of thumb:
- provider needs default request params or per-provider param cleanup: use `prepareExtraParams`
- provider needs request headers/body/model compat wrappers without a custom transport: use `wrapStreamFn`
- provider needs proxy-specific cache TTL gating: use `isCacheTtlEligible`
- provider needs a provider-specific missing-auth recovery hint: use `buildMissingAuthMessage`
- provider needs to hide stale upstream rows or replace them with a vendor hint: use `suppressBuiltInModel`
- provider needs synthetic forward-compat rows in `models list` and pickers: use `augmentModelCatalog`
- provider needs a token exchange or short-lived request credential: use `prepareRuntimeAuth`
- provider needs custom usage/quota token parsing or a different usage credential: use `resolveUsageAuth`
- provider needs a provider-specific usage endpoint or payload parser: use `fetchUsageSnapshot`
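For the token-exchange case, a minimal `prepareRuntimeAuth`-style sketch, assuming a provider that swaps a long-lived configured secret for a short-lived runtime token (the exchange itself and the expiry policy are invented; a real plugin would call the provider's token endpoint):

```typescript
interface RuntimeAuthCtx {
  configuredSecret: string;
  now: () => number; // injectable clock for testability
}

let cached: { token: string; expiresAt: number } | undefined;

async function prepareRuntimeAuth(ctx: RuntimeAuthCtx): Promise<string> {
  const now = ctx.now();
  // Reuse the cached token until ~30s before it expires.
  if (cached && cached.expiresAt > now + 30_000) {
    return cached.token;
  }
  // A real plugin would POST the configured secret to the provider's
  // token-exchange endpoint; here we derive a placeholder token.
  const token = `runtime-${ctx.configuredSecret}`;
  cached = { token, expiresAt: now + 15 * 60_000 };
  return token;
}
```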
@@ -354,8 +366,10 @@ api.registerProvider({
forward-compat, provider-family hints, usage endpoint integration, and
prompt-cache eligibility.
- OpenAI uses `resolveDynamicModel`, `normalizeResolvedModel`, and
`capabilities` because it owns GPT-5.4 forward-compat plus the direct OpenAI
`openai-completions` -> `openai-responses` normalization.
`capabilities` plus `buildMissingAuthMessage`, `suppressBuiltInModel`, and
`augmentModelCatalog` because it owns GPT-5.4 forward-compat, the direct
OpenAI `openai-completions` -> `openai-responses` normalization, Codex-aware
auth hints, Spark suppression, and synthetic OpenAI list rows.
- OpenRouter uses `catalog` plus `resolveDynamicModel` and
`prepareDynamicModel` because the provider is pass-through and may expose new
model ids before OpenClaw's static catalog updates.
@@ -363,11 +377,12 @@ api.registerProvider({
`capabilities` plus `prepareRuntimeAuth` and `fetchUsageSnapshot` because it
needs model fallback behavior, Claude transcript quirks, a GitHub token ->
Copilot token exchange, and a provider-owned usage endpoint.
- OpenAI Codex uses `catalog`, `resolveDynamicModel`, and
`normalizeResolvedModel` plus `prepareExtraParams`, `resolveUsageAuth`, and
`fetchUsageSnapshot` because it still runs on core OpenAI transports but owns
its transport/base URL normalization, default transport choice, and ChatGPT
usage endpoint integration.
- OpenAI Codex uses `catalog`, `resolveDynamicModel`,
`normalizeResolvedModel`, and `augmentModelCatalog` plus
`prepareExtraParams`, `resolveUsageAuth`, and `fetchUsageSnapshot` because it
still runs on core OpenAI transports but owns its transport/base URL
normalization, default transport choice, synthetic Codex catalog rows, and
ChatGPT usage endpoint integration.
- Gemini CLI OAuth uses `resolveDynamicModel`, `resolveUsageAuth`, and
`fetchUsageSnapshot` because it owns Gemini 3.1 forward-compat fallback plus
the token parsing and quota endpoint wiring needed by `/usage`.
@@ -654,7 +669,7 @@ Default-on bundled plugin examples:
- `moonshot`
- `nvidia`
- `ollama`
- `openai-codex`
- `openai`
- `openrouter`
- `phone-control`
- `qianfan`

View File

@@ -1,9 +0,0 @@
{
"id": "openai-codex",
"providers": ["openai-codex"],
"configSchema": {
"type": "object",
"additionalProperties": false,
"properties": {}
}
}

View File

@@ -1,12 +0,0 @@
{
"name": "@openclaw/openai-codex-provider",
"version": "2026.3.14",
"private": true,
"description": "OpenClaw OpenAI Codex provider plugin",
"type": "module",
"openclaw": {
"extensions": [
"./index.ts"
]
}
}

View File

@@ -2,22 +2,31 @@ import { describe, expect, it } from "vitest";
import type { ProviderPlugin } from "../../src/plugins/types.js";
import openAIPlugin from "./index.js";
function registerProvider(): ProviderPlugin {
let provider: ProviderPlugin | undefined;
function registerProviders(): ProviderPlugin[] {
const providers: ProviderPlugin[] = [];
openAIPlugin.register({
registerProvider(nextProvider: ProviderPlugin) {
provider = nextProvider;
providers.push(nextProvider);
},
} as never);
return providers;
}
function requireProvider(id: string): ProviderPlugin {
const provider = registerProviders().find((entry) => entry.id === id);
if (!provider) {
throw new Error("provider registration missing");
throw new Error(`provider registration missing for ${id}`);
}
return provider;
}
describe("openai plugin", () => {
it("registers openai and openai-codex providers from one extension", () => {
expect(registerProviders().map((provider) => provider.id)).toEqual(["openai", "openai-codex"]);
});
it("owns openai gpt-5.4 forward-compat resolution", () => {
const provider = registerProvider();
const provider = requireProvider("openai");
const model = provider.resolveDynamicModel?.({
provider: "openai",
modelId: "gpt-5.4-pro",
@@ -51,7 +60,7 @@ describe("openai plugin", () => {
});
it("owns direct openai transport normalization", () => {
const provider = registerProvider();
const provider = requireProvider("openai");
expect(
provider.normalizeResolvedModel?.({
provider: "openai",
@@ -73,4 +82,24 @@ describe("openai plugin", () => {
api: "openai-responses",
});
});
it("owns codex-only missing-auth hints and Spark suppression", () => {
const provider = requireProvider("openai");
expect(
provider.buildMissingAuthMessage?.({
env: {} as NodeJS.ProcessEnv,
provider: "openai",
listProfileIds: (providerId) => (providerId === "openai-codex" ? ["p1"] : []),
}),
).toContain("openai-codex/gpt-5.4");
expect(
provider.suppressBuiltInModel?.({
env: {} as NodeJS.ProcessEnv,
provider: "azure-openai-responses",
modelId: "gpt-5.3-codex-spark",
}),
).toMatchObject({
suppress: true,
});
});
});

View File

@@ -1,136 +1,15 @@
import {
emptyPluginConfigSchema,
type OpenClawPluginApi,
type ProviderResolveDynamicModelContext,
type ProviderRuntimeModel,
} from "openclaw/plugin-sdk/core";
import { DEFAULT_CONTEXT_TOKENS } from "../../src/agents/defaults.js";
import { normalizeModelCompat } from "../../src/agents/model-compat.js";
import { normalizeProviderId } from "../../src/agents/model-selection.js";
const PROVIDER_ID = "openai";
const OPENAI_BASE_URL = "https://api.openai.com/v1";
const OPENAI_GPT_54_MODEL_ID = "gpt-5.4";
const OPENAI_GPT_54_PRO_MODEL_ID = "gpt-5.4-pro";
const OPENAI_GPT_54_CONTEXT_TOKENS = 1_050_000;
const OPENAI_GPT_54_MAX_TOKENS = 128_000;
const OPENAI_GPT_54_TEMPLATE_MODEL_IDS = ["gpt-5.2"] as const;
const OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS = ["gpt-5.2-pro", "gpt-5.2"] as const;
function isOpenAIApiBaseUrl(baseUrl?: string): boolean {
const trimmed = baseUrl?.trim();
if (!trimmed) {
return false;
}
return /^https?:\/\/api\.openai\.com(?:\/v1)?\/?$/i.test(trimmed);
}
function normalizeOpenAITransport(model: ProviderRuntimeModel): ProviderRuntimeModel {
const useResponsesTransport =
model.api === "openai-completions" && (!model.baseUrl || isOpenAIApiBaseUrl(model.baseUrl));
if (!useResponsesTransport) {
return model;
}
return {
...model,
api: "openai-responses",
};
}
function cloneFirstTemplateModel(params: {
modelId: string;
templateIds: readonly string[];
ctx: ProviderResolveDynamicModelContext;
patch?: Partial<ProviderRuntimeModel>;
}): ProviderRuntimeModel | undefined {
const trimmedModelId = params.modelId.trim();
for (const templateId of [...new Set(params.templateIds)].filter(Boolean)) {
const template = params.ctx.modelRegistry.find(
PROVIDER_ID,
templateId,
) as ProviderRuntimeModel | null;
if (!template) {
continue;
}
return normalizeModelCompat({
...template,
id: trimmedModelId,
name: trimmedModelId,
...params.patch,
} as ProviderRuntimeModel);
}
return undefined;
}
function resolveOpenAIGpt54ForwardCompatModel(
ctx: ProviderResolveDynamicModelContext,
): ProviderRuntimeModel | undefined {
const trimmedModelId = ctx.modelId.trim();
const lower = trimmedModelId.toLowerCase();
let templateIds: readonly string[];
if (lower === OPENAI_GPT_54_MODEL_ID) {
templateIds = OPENAI_GPT_54_TEMPLATE_MODEL_IDS;
} else if (lower === OPENAI_GPT_54_PRO_MODEL_ID) {
templateIds = OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS;
} else {
return undefined;
}
return (
cloneFirstTemplateModel({
modelId: trimmedModelId,
templateIds,
ctx,
patch: {
api: "openai-responses",
provider: PROVIDER_ID,
baseUrl: OPENAI_BASE_URL,
reasoning: true,
input: ["text", "image"],
contextWindow: OPENAI_GPT_54_CONTEXT_TOKENS,
maxTokens: OPENAI_GPT_54_MAX_TOKENS,
},
}) ??
normalizeModelCompat({
id: trimmedModelId,
name: trimmedModelId,
api: "openai-responses",
provider: PROVIDER_ID,
baseUrl: OPENAI_BASE_URL,
reasoning: true,
input: ["text", "image"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: OPENAI_GPT_54_CONTEXT_TOKENS,
maxTokens: OPENAI_GPT_54_MAX_TOKENS,
} as ProviderRuntimeModel)
);
}
import { emptyPluginConfigSchema, type OpenClawPluginApi } from "openclaw/plugin-sdk/core";
import { buildOpenAICodexProviderPlugin } from "./openai-codex-provider.js";
import { buildOpenAIProvider } from "./openai-provider.js";
const openAIPlugin = {
id: PROVIDER_ID,
id: "openai",
name: "OpenAI Provider",
description: "Bundled OpenAI provider plugin",
description: "Bundled OpenAI provider plugins",
configSchema: emptyPluginConfigSchema(),
register(api: OpenClawPluginApi) {
api.registerProvider({
id: PROVIDER_ID,
label: "OpenAI",
docsPath: "/providers/models",
envVars: ["OPENAI_API_KEY"],
auth: [],
resolveDynamicModel: (ctx) => resolveOpenAIGpt54ForwardCompatModel(ctx),
normalizeResolvedModel: (ctx) => {
if (normalizeProviderId(ctx.provider) !== PROVIDER_ID) {
return undefined;
}
return normalizeOpenAITransport(ctx.model);
},
capabilities: {
providerFamily: "openai",
},
});
api.registerProvider(buildOpenAIProvider());
api.registerProvider(buildOpenAICodexProviderPlugin());
},
};

View File

@@ -1,8 +1,6 @@
import {
emptyPluginConfigSchema,
type OpenClawPluginApi,
type ProviderResolveDynamicModelContext,
type ProviderRuntimeModel,
import type {
ProviderResolveDynamicModelContext,
ProviderRuntimeModel,
} from "openclaw/plugin-sdk/core";
import { listProfilesForProvider } from "../../src/agents/auth-profiles/profiles.js";
import { ensureAuthProfileStore } from "../../src/agents/auth-profiles/store.js";
@@ -11,6 +9,8 @@ import { normalizeModelCompat } from "../../src/agents/model-compat.js";
import { normalizeProviderId } from "../../src/agents/model-selection.js";
import { buildOpenAICodexProvider } from "../../src/agents/models-config.providers.static.js";
import { fetchCodexUsage } from "../../src/infra/provider-usage.fetch.js";
import type { ProviderPlugin } from "../../src/plugins/types.js";
import { cloneFirstTemplateModel, findCatalogTemplate, isOpenAIApiBaseUrl } from "./shared.js";
const PROVIDER_ID = "openai-codex";
const OPENAI_CODEX_BASE_URL = "https://chatgpt.com/backend-api";
@@ -24,14 +24,6 @@ const OPENAI_CODEX_GPT_53_SPARK_CONTEXT_TOKENS = 128_000;
const OPENAI_CODEX_GPT_53_SPARK_MAX_TOKENS = 128_000;
const OPENAI_CODEX_TEMPLATE_MODEL_IDS = ["gpt-5.2-codex"] as const;
function isOpenAIApiBaseUrl(baseUrl?: string): boolean {
const trimmed = baseUrl?.trim();
if (!trimmed) {
return false;
}
return /^https?:\/\/api\.openai\.com(?:\/v1)?\/?$/i.test(trimmed);
}
function isOpenAICodexBaseUrl(baseUrl?: string): boolean {
const trimmed = baseUrl?.trim();
if (!trimmed) {
@@ -59,31 +51,6 @@ function normalizeCodexTransport(model: ProviderRuntimeModel): ProviderRuntimeMo
};
}
function cloneFirstTemplateModel(params: {
modelId: string;
templateIds: readonly string[];
ctx: ProviderResolveDynamicModelContext;
patch?: Partial<ProviderRuntimeModel>;
}): ProviderRuntimeModel | undefined {
const trimmedModelId = params.modelId.trim();
for (const templateId of [...new Set(params.templateIds)].filter(Boolean)) {
const template = params.ctx.modelRegistry.find(
PROVIDER_ID,
templateId,
) as ProviderRuntimeModel | null;
if (!template) {
continue;
}
return normalizeModelCompat({
...template,
id: trimmedModelId,
name: trimmedModelId,
...params.patch,
} as ProviderRuntimeModel);
}
return undefined;
}
function resolveCodexForwardCompatModel(
ctx: ProviderResolveDynamicModelContext,
): ProviderRuntimeModel | undefined {
@@ -118,6 +85,7 @@ function resolveCodexForwardCompatModel(
return (
cloneFirstTemplateModel({
providerId: PROVIDER_ID,
modelId: trimmedModelId,
templateIds,
ctx,
@@ -138,56 +106,76 @@ function resolveCodexForwardCompatModel(
);
}
const openAICodexPlugin = {
id: "openai-codex",
name: "OpenAI Codex Provider",
description: "Bundled OpenAI Codex provider plugin",
configSchema: emptyPluginConfigSchema(),
register(api: OpenClawPluginApi) {
api.registerProvider({
id: PROVIDER_ID,
label: "OpenAI Codex",
docsPath: "/providers/models",
auth: [],
catalog: {
order: "profile",
run: async (ctx) => {
const authStore = ensureAuthProfileStore(ctx.agentDir, {
allowKeychainPrompt: false,
});
if (listProfilesForProvider(authStore, PROVIDER_ID).length === 0) {
return null;
}
return {
provider: buildOpenAICodexProvider(),
};
},
},
resolveDynamicModel: (ctx) => resolveCodexForwardCompatModel(ctx),
capabilities: {
providerFamily: "openai",
},
prepareExtraParams: (ctx) => {
const transport = ctx.extraParams?.transport;
if (transport === "auto" || transport === "sse" || transport === "websocket") {
return ctx.extraParams;
export function buildOpenAICodexProviderPlugin(): ProviderPlugin {
return {
id: PROVIDER_ID,
label: "OpenAI Codex",
docsPath: "/providers/models",
auth: [],
catalog: {
order: "profile",
run: async (ctx) => {
const authStore = ensureAuthProfileStore(ctx.agentDir, {
allowKeychainPrompt: false,
});
if (listProfilesForProvider(authStore, PROVIDER_ID).length === 0) {
return null;
}
return {
...ctx.extraParams,
transport: "auto",
provider: buildOpenAICodexProvider(),
};
},
normalizeResolvedModel: (ctx) => {
if (normalizeProviderId(ctx.provider) !== PROVIDER_ID) {
return undefined;
}
return normalizeCodexTransport(ctx.model);
},
resolveUsageAuth: async (ctx) => await ctx.resolveOAuthToken(),
fetchUsageSnapshot: async (ctx) =>
await fetchCodexUsage(ctx.token, ctx.accountId, ctx.timeoutMs, ctx.fetchFn),
});
},
};
export default openAICodexPlugin;
},
resolveDynamicModel: (ctx) => resolveCodexForwardCompatModel(ctx),
capabilities: {
providerFamily: "openai",
},
prepareExtraParams: (ctx) => {
const transport = ctx.extraParams?.transport;
if (transport === "auto" || transport === "sse" || transport === "websocket") {
return ctx.extraParams;
}
return {
...ctx.extraParams,
transport: "auto",
};
},
normalizeResolvedModel: (ctx) => {
if (normalizeProviderId(ctx.provider) !== PROVIDER_ID) {
return undefined;
}
return normalizeCodexTransport(ctx.model);
},
resolveUsageAuth: async (ctx) => await ctx.resolveOAuthToken(),
fetchUsageSnapshot: async (ctx) =>
await fetchCodexUsage(ctx.token, ctx.accountId, ctx.timeoutMs, ctx.fetchFn),
augmentModelCatalog: (ctx) => {
const gpt54Template = findCatalogTemplate({
entries: ctx.entries,
providerId: PROVIDER_ID,
templateIds: OPENAI_CODEX_GPT_54_TEMPLATE_MODEL_IDS,
});
const sparkTemplate = findCatalogTemplate({
entries: ctx.entries,
providerId: PROVIDER_ID,
templateIds: [OPENAI_CODEX_GPT_53_MODEL_ID, ...OPENAI_CODEX_TEMPLATE_MODEL_IDS],
});
return [
gpt54Template
? {
...gpt54Template,
id: OPENAI_CODEX_GPT_54_MODEL_ID,
name: OPENAI_CODEX_GPT_54_MODEL_ID,
}
: undefined,
sparkTemplate
? {
...sparkTemplate,
id: OPENAI_CODEX_GPT_53_SPARK_MODEL_ID,
name: OPENAI_CODEX_GPT_53_SPARK_MODEL_ID,
}
: undefined,
].filter((entry): entry is NonNullable<typeof entry> => entry !== undefined);
},
};
}

View File

@@ -4,13 +4,15 @@ import {
createProviderUsageFetch,
makeResponse,
} from "../../src/test-utils/provider-usage-fetch.js";
import openAICodexPlugin from "./index.js";
import openAIPlugin from "./index.js";
function registerProvider(): ProviderPlugin {
function registerCodexProvider(): ProviderPlugin {
let provider: ProviderPlugin | undefined;
openAICodexPlugin.register({
openAIPlugin.register({
registerProvider(nextProvider: ProviderPlugin) {
provider = nextProvider;
if (nextProvider.id === "openai-codex") {
provider = nextProvider;
}
},
} as never);
if (!provider) {
@@ -19,9 +21,9 @@ function registerProvider(): ProviderPlugin {
return provider;
}
describe("openai-codex plugin", () => {
describe("openai codex provider", () => {
it("owns forward-compat codex models", () => {
const provider = registerProvider();
const provider = registerCodexProvider();
const model = provider.resolveDynamicModel?.({
provider: "openai-codex",
modelId: "gpt-5.4",
@@ -54,7 +56,7 @@ describe("openai-codex plugin", () => {
});
it("owns codex transport defaults", () => {
const provider = registerProvider();
const provider = registerCodexProvider();
expect(
provider.prepareExtraParams?.({
provider: "openai-codex",
@@ -68,7 +70,7 @@ describe("openai-codex plugin", () => {
});
it("owns usage snapshot fetching", async () => {
const provider = registerProvider();
const provider = registerCodexProvider();
const mockFetch = createProviderUsageFetch(async (url) => {
if (url.includes("chatgpt.com/backend-api/wham/usage")) {
return makeResponse(200, {

View File

@@ -0,0 +1,143 @@
import {
type ProviderResolveDynamicModelContext,
type ProviderRuntimeModel,
} from "openclaw/plugin-sdk/core";
import { normalizeModelCompat } from "../../src/agents/model-compat.js";
import { normalizeProviderId } from "../../src/agents/model-selection.js";
import type { ProviderPlugin } from "../../src/plugins/types.js";
import { cloneFirstTemplateModel, findCatalogTemplate, isOpenAIApiBaseUrl } from "./shared.js";
const PROVIDER_ID = "openai";
const OPENAI_GPT_54_MODEL_ID = "gpt-5.4";
const OPENAI_GPT_54_PRO_MODEL_ID = "gpt-5.4-pro";
const OPENAI_GPT_54_CONTEXT_TOKENS = 1_050_000;
const OPENAI_GPT_54_MAX_TOKENS = 128_000;
const OPENAI_GPT_54_TEMPLATE_MODEL_IDS = ["gpt-5.2"] as const;
const OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS = ["gpt-5.2-pro", "gpt-5.2"] as const;
const OPENAI_DIRECT_SPARK_MODEL_ID = "gpt-5.3-codex-spark";
const SUPPRESSED_SPARK_PROVIDERS = new Set(["openai", "azure-openai-responses"]);
function normalizeOpenAITransport(model: ProviderRuntimeModel): ProviderRuntimeModel {
const useResponsesTransport =
model.api === "openai-completions" && (!model.baseUrl || isOpenAIApiBaseUrl(model.baseUrl));
if (!useResponsesTransport) {
return model;
}
return {
...model,
api: "openai-responses",
};
}
function resolveOpenAIGpt54ForwardCompatModel(
ctx: ProviderResolveDynamicModelContext,
): ProviderRuntimeModel | undefined {
const trimmedModelId = ctx.modelId.trim();
const lower = trimmedModelId.toLowerCase();
let templateIds: readonly string[];
if (lower === OPENAI_GPT_54_MODEL_ID) {
templateIds = OPENAI_GPT_54_TEMPLATE_MODEL_IDS;
} else if (lower === OPENAI_GPT_54_PRO_MODEL_ID) {
templateIds = OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS;
} else {
return undefined;
}
return (
cloneFirstTemplateModel({
providerId: PROVIDER_ID,
modelId: trimmedModelId,
templateIds,
ctx,
patch: {
api: "openai-responses",
provider: PROVIDER_ID,
baseUrl: "https://api.openai.com/v1",
reasoning: true,
input: ["text", "image"],
contextWindow: OPENAI_GPT_54_CONTEXT_TOKENS,
maxTokens: OPENAI_GPT_54_MAX_TOKENS,
},
}) ??
normalizeModelCompat({
id: trimmedModelId,
name: trimmedModelId,
api: "openai-responses",
provider: PROVIDER_ID,
baseUrl: "https://api.openai.com/v1",
reasoning: true,
input: ["text", "image"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: OPENAI_GPT_54_CONTEXT_TOKENS,
maxTokens: OPENAI_GPT_54_MAX_TOKENS,
} as ProviderRuntimeModel)
);
}
export function buildOpenAIProvider(): ProviderPlugin {
return {
id: PROVIDER_ID,
label: "OpenAI",
docsPath: "/providers/models",
envVars: ["OPENAI_API_KEY"],
auth: [],
resolveDynamicModel: (ctx) => resolveOpenAIGpt54ForwardCompatModel(ctx),
normalizeResolvedModel: (ctx) => {
if (normalizeProviderId(ctx.provider) !== PROVIDER_ID) {
return undefined;
}
return normalizeOpenAITransport(ctx.model);
},
capabilities: {
providerFamily: "openai",
},
buildMissingAuthMessage: (ctx) => {
if (ctx.provider !== PROVIDER_ID || ctx.listProfileIds("openai-codex").length === 0) {
return undefined;
}
return 'No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.4 (OAuth) or set OPENAI_API_KEY to use openai/gpt-5.4.';
},
suppressBuiltInModel: (ctx) => {
if (
!SUPPRESSED_SPARK_PROVIDERS.has(normalizeProviderId(ctx.provider)) ||
ctx.modelId.toLowerCase() !== OPENAI_DIRECT_SPARK_MODEL_ID
) {
return undefined;
}
return {
suppress: true,
errorMessage: `Unknown model: ${ctx.provider}/${OPENAI_DIRECT_SPARK_MODEL_ID}. ${OPENAI_DIRECT_SPARK_MODEL_ID} is only supported via openai-codex OAuth. Use openai-codex/${OPENAI_DIRECT_SPARK_MODEL_ID}.`,
};
},
augmentModelCatalog: (ctx) => {
const openAiGpt54Template = findCatalogTemplate({
entries: ctx.entries,
providerId: PROVIDER_ID,
templateIds: OPENAI_GPT_54_TEMPLATE_MODEL_IDS,
});
const openAiGpt54ProTemplate = findCatalogTemplate({
entries: ctx.entries,
providerId: PROVIDER_ID,
templateIds: OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS,
});
return [
openAiGpt54Template
? {
...openAiGpt54Template,
id: OPENAI_GPT_54_MODEL_ID,
name: OPENAI_GPT_54_MODEL_ID,
}
: undefined,
openAiGpt54ProTemplate
? {
...openAiGpt54ProTemplate,
id: OPENAI_GPT_54_PRO_MODEL_ID,
name: OPENAI_GPT_54_PRO_MODEL_ID,
}
: undefined,
].filter((entry): entry is NonNullable<typeof entry> => entry !== undefined);
},
};
}

View File

@@ -1,6 +1,6 @@
{
"id": "openai",
"providers": ["openai"],
"providers": ["openai", "openai-codex"],
"configSchema": {
"type": "object",
"additionalProperties": false,

View File

@@ -2,7 +2,7 @@
"name": "@openclaw/openai-provider",
"version": "2026.3.14",
"private": true,
"description": "OpenClaw OpenAI provider plugin",
"description": "OpenClaw OpenAI provider plugins",
"type": "module",
"openclaw": {
"extensions": [

View File

@@ -0,0 +1,57 @@
import { normalizeModelCompat } from "../../src/agents/model-compat.js";
import type {
ProviderResolveDynamicModelContext,
ProviderRuntimeModel,
} from "../../src/plugins/types.js";
export const OPENAI_API_BASE_URL = "https://api.openai.com/v1";
export function isOpenAIApiBaseUrl(baseUrl?: string): boolean {
const trimmed = baseUrl?.trim();
if (!trimmed) {
return false;
}
return /^https?:\/\/api\.openai\.com(?:\/v1)?\/?$/i.test(trimmed);
}
export function cloneFirstTemplateModel(params: {
providerId: string;
modelId: string;
templateIds: readonly string[];
ctx: ProviderResolveDynamicModelContext;
patch?: Partial<ProviderRuntimeModel>;
}): ProviderRuntimeModel | undefined {
const trimmedModelId = params.modelId.trim();
for (const templateId of [...new Set(params.templateIds)].filter(Boolean)) {
const template = params.ctx.modelRegistry.find(
params.providerId,
templateId,
) as ProviderRuntimeModel | null;
if (!template) {
continue;
}
return normalizeModelCompat({
...template,
id: trimmedModelId,
name: trimmedModelId,
...params.patch,
} as ProviderRuntimeModel);
}
return undefined;
}
export function findCatalogTemplate(params: {
entries: ReadonlyArray<{ provider: string; id: string }>;
providerId: string;
templateIds: readonly string[];
}) {
return params.templateIds
.map((templateId) =>
params.entries.find(
(entry) =>
entry.provider.toLowerCase() === params.providerId.toLowerCase() &&
entry.id.toLowerCase() === templateId.toLowerCase(),
),
)
.find((entry) => entry !== undefined);
}

View File

@@ -6,6 +6,7 @@ import type { ModelProviderAuthMode, ModelProviderConfig } from "../config/types
import { coerceSecretRef } from "../config/types.secrets.js";
import { getShellEnvAppliedKeys } from "../infra/shell-env.js";
import { createSubsystemLogger } from "../logging/subsystem.js";
import { buildProviderMissingAuthMessageWithPlugin } from "../plugins/provider-runtime.js";
import {
normalizeOptionalSecretInput,
normalizeSecretInput,
@@ -358,13 +359,19 @@ export async function resolveApiKeyForProvider(params: {
return resolveAwsSdkAuthInfo();
}
if (provider === "openai") {
const hasCodex = listProfilesForProvider(store, "openai-codex").length > 0;
if (hasCodex) {
throw new Error(
'No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.4 (OAuth) or set OPENAI_API_KEY to use openai/gpt-5.4.',
);
}
const pluginMissingAuthMessage = buildProviderMissingAuthMessageWithPlugin({
provider,
config: cfg,
context: {
config: cfg,
agentDir: params.agentDir,
env: process.env,
provider,
listProfileIds: (providerId) => listProfilesForProvider(store, providerId),
},
});
if (pluginMissingAuthMessage) {
throw new Error(pluginMissingAuthMessage);
}
const authStorePath = resolveAuthStorePathForDisplay(params.agentDir);

View File

@@ -1,5 +1,6 @@
import { type OpenClawConfig, loadConfig } from "../config/config.js";
import { createSubsystemLogger } from "../logging/subsystem.js";
import { augmentModelCatalogWithProviderPlugins } from "../plugins/provider-runtime.js";
import { resolveOpenClawAgentDir } from "./agent-paths.js";
import { shouldSuppressBuiltInModel } from "./model-suppression.js";
import { ensureOpenClawModelsJson } from "./models-config.js";
@@ -33,70 +34,8 @@ let hasLoggedModelCatalogError = false;
const defaultImportPiSdk = () => import("./pi-model-discovery-runtime.js");
let importPiSdk = defaultImportPiSdk;
const CODEX_PROVIDER = "openai-codex";
const OPENAI_PROVIDER = "openai";
const OPENAI_GPT54_MODEL_ID = "gpt-5.4";
const OPENAI_GPT54_PRO_MODEL_ID = "gpt-5.4-pro";
const OPENAI_CODEX_GPT53_MODEL_ID = "gpt-5.3-codex";
const OPENAI_CODEX_GPT53_SPARK_MODEL_ID = "gpt-5.3-codex-spark";
const OPENAI_CODEX_GPT54_MODEL_ID = "gpt-5.4";
const NON_PI_NATIVE_MODEL_PROVIDERS = new Set(["kilocode"]);
type SyntheticCatalogFallback = {
provider: string;
id: string;
templateIds: readonly string[];
};
const SYNTHETIC_CATALOG_FALLBACKS: readonly SyntheticCatalogFallback[] = [
{
provider: OPENAI_PROVIDER,
id: OPENAI_GPT54_MODEL_ID,
templateIds: ["gpt-5.2"],
},
{
provider: OPENAI_PROVIDER,
id: OPENAI_GPT54_PRO_MODEL_ID,
templateIds: ["gpt-5.2-pro", "gpt-5.2"],
},
{
provider: CODEX_PROVIDER,
id: OPENAI_CODEX_GPT54_MODEL_ID,
templateIds: ["gpt-5.3-codex", "gpt-5.2-codex"],
},
{
provider: CODEX_PROVIDER,
id: OPENAI_CODEX_GPT53_SPARK_MODEL_ID,
templateIds: [OPENAI_CODEX_GPT53_MODEL_ID],
},
] as const;
function applySyntheticCatalogFallbacks(models: ModelCatalogEntry[]): void {
const findCatalogEntry = (provider: string, id: string) =>
models.find(
(entry) =>
entry.provider.toLowerCase() === provider.toLowerCase() &&
entry.id.toLowerCase() === id.toLowerCase(),
);
for (const fallback of SYNTHETIC_CATALOG_FALLBACKS) {
if (findCatalogEntry(fallback.provider, fallback.id)) {
continue;
}
const template = fallback.templateIds
.map((templateId) => findCatalogEntry(fallback.provider, templateId))
.find((entry) => entry !== undefined);
if (!template) {
continue;
}
models.push({
...template,
id: fallback.id,
name: fallback.id,
});
}
}
function normalizeConfiguredModelInput(input: unknown): ModelInputType[] | undefined {
if (!Array.isArray(input)) {
return undefined;
@@ -256,7 +195,31 @@ export async function loadModelCatalog(params?: {
models.push({ id, name, provider, contextWindow, reasoning, input });
}
mergeConfiguredOptInProviderModels({ config: cfg, models });
applySyntheticCatalogFallbacks(models);
const supplemental = await augmentModelCatalogWithProviderPlugins({
config: cfg,
env: process.env,
context: {
config: cfg,
agentDir,
env: process.env,
entries: [...models],
},
});
if (supplemental.length > 0) {
const seen = new Set(
models.map(
(entry) => `${entry.provider.toLowerCase().trim()}::${entry.id.toLowerCase().trim()}`,
),
);
for (const entry of supplemental) {
const key = `${entry.provider.toLowerCase().trim()}::${entry.id.toLowerCase().trim()}`;
if (seen.has(key)) {
continue;
}
models.push(entry);
seen.add(key);
}
}
if (models.length === 0) {
// If we found nothing, don't cache this result so we can try again.
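The supplemental-merge loop above deduplicates by a lowercased `provider::id` key; a self-contained sketch of that merge, with the entry shape trimmed to the fields the key uses plus `name`:

```typescript
type CatalogRow = { provider: string; id: string; name: string };

// Append supplemental rows, skipping any provider/id pair already present.
// Keys are lowercased and trimmed, so duplicates match case-insensitively.
function mergeSupplemental(models: CatalogRow[], supplemental: CatalogRow[]): CatalogRow[] {
  const keyOf = (entry: CatalogRow) =>
    `${entry.provider.toLowerCase().trim()}::${entry.id.toLowerCase().trim()}`;
  const seen = new Set(models.map(keyOf));
  for (const entry of supplemental) {
    const key = keyOf(entry);
    if (seen.has(key)) {
      continue;
    }
    models.push(entry);
    seen.add(key);
  }
  return models;
}

const merged = mergeSupplemental(
  [{ provider: "openai", id: "gpt-5.2", name: "GPT-5.2" }],
  [
    { provider: "OpenAI", id: "GPT-5.2", name: "dup" }, // case-insensitive duplicate, skipped
    { provider: "openai", id: "gpt-5.4", name: "gpt-5.4" },
  ],
);
// merged holds two rows: gpt-5.2 and gpt-5.4
```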

View File

@@ -4,83 +4,18 @@ import { DEFAULT_CONTEXT_TOKENS } from "./defaults.js";
import { normalizeModelCompat } from "./model-compat.js";
import { normalizeProviderId } from "./model-selection.js";
const OPENAI_GPT_54_MODEL_ID = "gpt-5.4";
const OPENAI_GPT_54_PRO_MODEL_ID = "gpt-5.4-pro";
const OPENAI_GPT_54_CONTEXT_TOKENS = 1_050_000;
const OPENAI_GPT_54_MAX_TOKENS = 128_000;
const OPENAI_GPT_54_TEMPLATE_MODEL_IDS = ["gpt-5.2"] as const;
const OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS = ["gpt-5.2-pro", "gpt-5.2"] as const;
const ANTHROPIC_OPUS_46_MODEL_ID = "claude-opus-4-6";
const ANTHROPIC_OPUS_46_DOT_MODEL_ID = "claude-opus-4.6";
const ANTHROPIC_OPUS_TEMPLATE_MODEL_IDS = ["claude-opus-4-5", "claude-opus-4.5"] as const;
const ANTHROPIC_SONNET_46_MODEL_ID = "claude-sonnet-4-6";
const ANTHROPIC_SONNET_46_DOT_MODEL_ID = "claude-sonnet-4.6";
const ANTHROPIC_SONNET_TEMPLATE_MODEL_IDS = ["claude-sonnet-4-5", "claude-sonnet-4.5"] as const;
const ZAI_GLM5_MODEL_ID = "glm-5";
const ZAI_GLM5_TEMPLATE_MODEL_IDS = ["glm-4.7"] as const;
// gemini-3.1-pro-preview / gemini-3.1-flash-preview are not yet in pi-ai's built-in
// google-gemini-cli catalog. Clone the gemini-3-pro/flash-preview template so users
// don't get "Unknown model" errors when Google releases a new minor version.
// gemini-3.1-pro-preview / gemini-3.1-flash-preview are not present in some pi-ai
// Google catalogs yet. Clone the nearest gemini-3 template so users don't get
// "Unknown model" errors when Google ships new minor-version models before pi-ai
// updates its built-in registry.
const GEMINI_3_1_PRO_PREFIX = "gemini-3.1-pro";
const GEMINI_3_1_FLASH_PREFIX = "gemini-3.1-flash";
const GEMINI_3_1_PRO_TEMPLATE_IDS = ["gemini-3-pro-preview"] as const;
const GEMINI_3_1_FLASH_TEMPLATE_IDS = ["gemini-3-flash-preview"] as const;
function resolveOpenAIGpt54ForwardCompatModel(
provider: string,
modelId: string,
modelRegistry: ModelRegistry,
): Model<Api> | undefined {
const normalizedProvider = normalizeProviderId(provider);
if (normalizedProvider !== "openai") {
return undefined;
}
const trimmedModelId = modelId.trim();
const lower = trimmedModelId.toLowerCase();
let templateIds: readonly string[];
if (lower === OPENAI_GPT_54_MODEL_ID) {
templateIds = OPENAI_GPT_54_TEMPLATE_MODEL_IDS;
} else if (lower === OPENAI_GPT_54_PRO_MODEL_ID) {
templateIds = OPENAI_GPT_54_PRO_TEMPLATE_MODEL_IDS;
} else {
return undefined;
}
return (
cloneFirstTemplateModel({
normalizedProvider,
trimmedModelId,
templateIds: [...templateIds],
modelRegistry,
patch: {
api: "openai-responses",
provider: normalizedProvider,
baseUrl: "https://api.openai.com/v1",
reasoning: true,
input: ["text", "image"],
contextWindow: OPENAI_GPT_54_CONTEXT_TOKENS,
maxTokens: OPENAI_GPT_54_MAX_TOKENS,
},
}) ??
normalizeModelCompat({
id: trimmedModelId,
name: trimmedModelId,
api: "openai-responses",
provider: normalizedProvider,
baseUrl: "https://api.openai.com/v1",
reasoning: true,
input: ["text", "image"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: OPENAI_GPT_54_CONTEXT_TOKENS,
maxTokens: OPENAI_GPT_54_MAX_TOKENS,
} as Model<Api>)
);
}
function cloneFirstTemplateModel(params: {
normalizedProvider: string;
trimmedModelId: string;
@@ -104,88 +39,6 @@ function cloneFirstTemplateModel(params: {
return undefined;
}
function resolveAnthropic46ForwardCompatModel(params: {
provider: string;
modelId: string;
modelRegistry: ModelRegistry;
dashModelId: string;
dotModelId: string;
dashTemplateId: string;
dotTemplateId: string;
fallbackTemplateIds: readonly string[];
}): Model<Api> | undefined {
const { provider, modelId, modelRegistry, dashModelId, dotModelId } = params;
const normalizedProvider = normalizeProviderId(provider);
if (normalizedProvider !== "anthropic") {
return undefined;
}
const trimmedModelId = modelId.trim();
const lower = trimmedModelId.toLowerCase();
const is46Model =
lower === dashModelId ||
lower === dotModelId ||
lower.startsWith(`${dashModelId}-`) ||
lower.startsWith(`${dotModelId}-`);
if (!is46Model) {
return undefined;
}
const templateIds: string[] = [];
if (lower.startsWith(dashModelId)) {
templateIds.push(lower.replace(dashModelId, params.dashTemplateId));
}
if (lower.startsWith(dotModelId)) {
templateIds.push(lower.replace(dotModelId, params.dotTemplateId));
}
templateIds.push(...params.fallbackTemplateIds);
return cloneFirstTemplateModel({
normalizedProvider,
trimmedModelId,
templateIds,
modelRegistry,
});
}
function resolveAnthropicOpus46ForwardCompatModel(
provider: string,
modelId: string,
modelRegistry: ModelRegistry,
): Model<Api> | undefined {
return resolveAnthropic46ForwardCompatModel({
provider,
modelId,
modelRegistry,
dashModelId: ANTHROPIC_OPUS_46_MODEL_ID,
dotModelId: ANTHROPIC_OPUS_46_DOT_MODEL_ID,
dashTemplateId: "claude-opus-4-5",
dotTemplateId: "claude-opus-4.5",
fallbackTemplateIds: ANTHROPIC_OPUS_TEMPLATE_MODEL_IDS,
});
}
function resolveAnthropicSonnet46ForwardCompatModel(
provider: string,
modelId: string,
modelRegistry: ModelRegistry,
): Model<Api> | undefined {
return resolveAnthropic46ForwardCompatModel({
provider,
modelId,
modelRegistry,
dashModelId: ANTHROPIC_SONNET_46_MODEL_ID,
dotModelId: ANTHROPIC_SONNET_46_DOT_MODEL_ID,
dashTemplateId: "claude-sonnet-4-5",
dotTemplateId: "claude-sonnet-4.5",
fallbackTemplateIds: ANTHROPIC_SONNET_TEMPLATE_MODEL_IDS,
});
}
// gemini-3.1-pro-preview / gemini-3.1-flash-preview are not present in some pi-ai
// Google catalogs yet. Clone the nearest gemini-3 template so users don't get
// "Unknown model" errors when Google ships new minor-version models before pi-ai
// updates its built-in registry.
function resolveGoogle31ForwardCompatModel(
provider: string,
modelId: string,
@@ -264,9 +117,6 @@ export function resolveForwardCompatModel(
modelRegistry: ModelRegistry,
): Model<Api> | undefined {
return (
resolveOpenAIGpt54ForwardCompatModel(provider, modelId, modelRegistry) ??
resolveAnthropicOpus46ForwardCompatModel(provider, modelId, modelRegistry) ??
resolveAnthropicSonnet46ForwardCompatModel(provider, modelId, modelRegistry) ??
resolveZaiGlm5ForwardCompatModel(provider, modelId, modelRegistry) ??
resolveGoogle31ForwardCompatModel(provider, modelId, modelRegistry)
);

View File

@@ -1,27 +1,32 @@
import { resolveProviderBuiltInModelSuppression } from "../plugins/provider-runtime.js";
import { normalizeProviderId } from "./model-selection.js";
const OPENAI_DIRECT_SPARK_MODEL_ID = "gpt-5.3-codex-spark";
const SUPPRESSED_SPARK_PROVIDERS = new Set(["openai", "azure-openai-responses"]);
function resolveBuiltInModelSuppression(params: { provider?: string | null; id?: string | null }) {
const provider = normalizeProviderId(params.provider?.trim().toLowerCase() ?? "");
const modelId = params.id?.trim().toLowerCase() ?? "";
if (!provider || !modelId) {
return undefined;
}
return resolveProviderBuiltInModelSuppression({
env: process.env,
context: {
env: process.env,
provider,
modelId,
},
});
}
export function shouldSuppressBuiltInModel(params: {
provider?: string | null;
id?: string | null;
}) {
const provider = normalizeProviderId(params.provider?.trim().toLowerCase() ?? "");
const id = params.id?.trim().toLowerCase() ?? "";
// pi-ai still ships non-Codex Spark rows, but OpenClaw treats Spark as
// Codex-only until upstream availability is proven on direct API paths.
return SUPPRESSED_SPARK_PROVIDERS.has(provider) && id === OPENAI_DIRECT_SPARK_MODEL_ID;
return resolveBuiltInModelSuppression(params)?.suppress ?? false;
}
export function buildSuppressedBuiltInModelError(params: {
provider?: string | null;
id?: string | null;
}): string | undefined {
if (!shouldSuppressBuiltInModel(params)) {
return undefined;
}
const provider = normalizeProviderId(params.provider?.trim().toLowerCase() ?? "") || "openai";
return `Unknown model: ${provider}/${OPENAI_DIRECT_SPARK_MODEL_ID}. ${OPENAI_DIRECT_SPARK_MODEL_ID} is only supported via openai-codex OAuth. Use openai-codex/${OPENAI_DIRECT_SPARK_MODEL_ID}.`;
return resolveBuiltInModelSuppression(params)?.errorMessage;
}
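The suppression resolution the new code delegates to follows a first-match-wins loop over plugin hooks; a minimal sketch with a hypothetical Spark hook (hook and context shapes simplified from the real `ProviderBuiltInModelSuppressionContext`):

```typescript
type SuppressionResult = { suppress: boolean; errorMessage?: string };
type SuppressionHook = (ctx: {
  provider: string;
  modelId: string;
}) => SuppressionResult | undefined;

// First hook returning { suppress: true } wins, mirroring the loop in
// resolveProviderBuiltInModelSuppression.
function resolveSuppression(
  hooks: SuppressionHook[],
  ctx: { provider: string; modelId: string },
): SuppressionResult | undefined {
  for (const hook of hooks) {
    const result = hook(ctx);
    if (result?.suppress) {
      return result;
    }
  }
  return undefined;
}

// Hypothetical vendor hook: hide the Spark row on direct (non-Codex) providers.
const sparkHook: SuppressionHook = ({ provider, modelId }) =>
  provider !== "openai-codex" && modelId === "gpt-5.3-codex-spark"
    ? { suppress: true, errorMessage: "Use openai-codex/gpt-5.3-codex-spark." }
    : undefined;

const hit = resolveSuppression([sparkHook], {
  provider: "openai",
  modelId: "gpt-5.3-codex-spark",
});
const miss = resolveSuppression([sparkHook], {
  provider: "openai-codex",
  modelId: "gpt-5.3-codex-spark",
});
// hit suppresses with a hint; miss leaves the model alone
```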

View File

@@ -34,7 +34,7 @@ type InlineProviderConfig = {
headers?: unknown;
};
const PLUGIN_FIRST_DYNAMIC_PROVIDERS = new Set(["anthropic", "google-gemini-cli", "openai", "zai"]);
const PLUGIN_FIRST_DYNAMIC_PROVIDERS = new Set(["google-gemini-cli", "zai"]);
function sanitizeModelHeaders(
headers: unknown,

View File

@@ -4,6 +4,10 @@ export type {
ProviderDiscoveryContext,
ProviderCatalogContext,
ProviderCatalogResult,
ProviderAugmentModelCatalogContext,
ProviderBuiltInModelSuppressionContext,
ProviderBuiltInModelSuppressionResult,
ProviderBuildMissingAuthMessageContext,
ProviderCacheTtlEligibilityContext,
ProviderFetchUsageSnapshotContext,
ProviderPreparedRuntimeAuth,

View File

@@ -109,6 +109,10 @@ export type {
PluginLogger,
ProviderAuthContext,
ProviderAuthResult,
ProviderAugmentModelCatalogContext,
ProviderBuiltInModelSuppressionContext,
ProviderBuiltInModelSuppressionResult,
ProviderBuildMissingAuthMessageContext,
ProviderCacheTtlEligibilityContext,
ProviderFetchUsageSnapshotContext,
ProviderPreparedRuntimeAuth,

View File

@@ -77,6 +77,22 @@ describe("normalizePluginsConfig", () => {
});
expect(result.entries["voice-call"]?.hooks).toBeUndefined();
});
it("normalizes legacy plugin ids to their merged bundled plugin id", () => {
const result = normalizePluginsConfig({
allow: ["openai-codex"],
deny: ["openai-codex"],
entries: {
"openai-codex": {
enabled: true,
},
},
});
expect(result.allow).toEqual(["openai"]);
expect(result.deny).toEqual(["openai"]);
expect(result.entries.openai?.enabled).toBe(true);
});
});
describe("resolveEffectiveEnableState", () => {

View File

@@ -40,7 +40,6 @@ export const BUNDLED_ENABLED_BY_DEFAULT = new Set<string>([
"nvidia",
"ollama",
"openai",
"openai-codex",
"opencode",
"opencode-go",
"openrouter",
@@ -59,11 +58,22 @@ export const BUNDLED_ENABLED_BY_DEFAULT = new Set<string>([
"zai",
]);
const PLUGIN_ID_ALIASES: Readonly<Record<string, string>> = {
"openai-codex": "openai",
};
function normalizePluginId(id: string): string {
const trimmed = id.trim();
return PLUGIN_ID_ALIASES[trimmed] ?? trimmed;
}
const normalizeList = (value: unknown): string[] => {
if (!Array.isArray(value)) {
return [];
}
return value.map((entry) => (typeof entry === "string" ? entry.trim() : "")).filter(Boolean);
return value
.map((entry) => (typeof entry === "string" ? normalizePluginId(entry) : ""))
.filter(Boolean);
};
const normalizeSlotValue = (value: unknown): string | null | undefined => {
@@ -86,11 +96,12 @@ const normalizePluginEntries = (entries: unknown): NormalizedPluginsConfig["entr
}
const normalized: NormalizedPluginsConfig["entries"] = {};
for (const [key, value] of Object.entries(entries)) {
if (!key.trim()) {
const normalizedKey = normalizePluginId(key);
if (!normalizedKey) {
continue;
}
if (!value || typeof value !== "object" || Array.isArray(value)) {
normalized[key] = {};
normalized[normalizedKey] = {};
continue;
}
const entry = value as Record<string, unknown>;
@@ -108,10 +119,12 @@ const normalizePluginEntries = (entries: unknown): NormalizedPluginsConfig["entr
allowPromptInjection: hooks.allowPromptInjection,
}
: undefined;
normalized[key] = {
enabled: typeof entry.enabled === "boolean" ? entry.enabled : undefined,
hooks: normalizedHooks,
config: "config" in entry ? entry.config : undefined,
normalized[normalizedKey] = {
...normalized[normalizedKey],
enabled:
typeof entry.enabled === "boolean" ? entry.enabled : normalized[normalizedKey]?.enabled,
hooks: normalizedHooks ?? normalized[normalizedKey]?.hooks,
config: "config" in entry ? entry.config : normalized[normalizedKey]?.config,
};
}
return normalized;
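The alias normalization above can be illustrated standalone; this sketch reuses `PLUGIN_ID_ALIASES`/`normalizePluginId` from the hunk and shows how a legacy `openai-codex` entry merges into `openai` without losing its `enabled` flag:

```typescript
const PLUGIN_ID_ALIASES: Readonly<Record<string, string>> = {
  "openai-codex": "openai",
};

function normalizePluginId(id: string): string {
  const trimmed = id.trim();
  return PLUGIN_ID_ALIASES[trimmed] ?? trimmed;
}

// Hypothetical user config with a legacy key and its canonical twin.
const rawEntries: Record<string, { enabled?: boolean }> = {
  "openai-codex": { enabled: true },
  openai: {},
};

const normalized: Record<string, { enabled?: boolean }> = {};
for (const [key, value] of Object.entries(rawEntries)) {
  const normalizedKey = normalizePluginId(key);
  // Later duplicates only fill fields the earlier alias left undefined,
  // mirroring the merge in normalizePluginEntries.
  normalized[normalizedKey] = {
    ...normalized[normalizedKey],
    enabled:
      typeof value.enabled === "boolean" ? value.enabled : normalized[normalizedKey]?.enabled,
  };
}
// normalized collapses to a single "openai" entry with enabled: true
```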

View File

@@ -8,8 +8,11 @@ vi.mock("./providers.js", () => ({
}));
import {
augmentModelCatalogWithProviderPlugins,
buildProviderMissingAuthMessageWithPlugin,
prepareProviderExtraParams,
resolveProviderCacheTtlEligibility,
resolveProviderBuiltInModelSuppression,
resolveProviderUsageSnapshotWithPlugin,
resolveProviderCapabilitiesWithPlugin,
resolveProviderUsageAuthWithPlugin,
@@ -57,6 +60,7 @@ describe("provider-runtime", () => {
expect.objectContaining({
provider: "Open Router",
bundledProviderAllowlistCompat: true,
bundledProviderVitestCompat: true,
}),
);
});
@@ -77,31 +81,59 @@ describe("provider-runtime", () => {
displayName: "Demo",
windows: [{ label: "Day", usedPercent: 25 }],
}));
resolvePluginProvidersMock.mockReturnValue([
{
id: "demo",
label: "Demo",
auth: [],
resolveDynamicModel: () => MODEL,
prepareDynamicModel,
capabilities: {
providerFamily: "openai",
resolvePluginProvidersMock.mockImplementation((params?: { onlyPluginIds?: string[] }) => {
if (params?.onlyPluginIds?.includes("openai")) {
return [
{
id: "openai",
label: "OpenAI",
auth: [],
buildMissingAuthMessage: () =>
'No API key found for provider "openai". Use openai-codex/gpt-5.4.',
suppressBuiltInModel: ({ provider, modelId }) =>
provider === "azure-openai-responses" && modelId === "gpt-5.3-codex-spark"
? { suppress: true, errorMessage: "openai-codex/gpt-5.3-codex-spark" }
: undefined,
augmentModelCatalog: () => [
{ provider: "openai", id: "gpt-5.4", name: "gpt-5.4" },
{ provider: "openai", id: "gpt-5.4-pro", name: "gpt-5.4-pro" },
{ provider: "openai-codex", id: "gpt-5.4", name: "gpt-5.4" },
{
provider: "openai-codex",
id: "gpt-5.3-codex-spark",
name: "gpt-5.3-codex-spark",
},
],
},
];
}
return [
{
id: "demo",
label: "Demo",
auth: [],
resolveDynamicModel: () => MODEL,
prepareDynamicModel,
capabilities: {
providerFamily: "openai",
},
prepareExtraParams: ({ extraParams }) => ({
...extraParams,
transport: "auto",
}),
wrapStreamFn: ({ streamFn }) => streamFn,
normalizeResolvedModel: ({ model }) => ({
...model,
api: "openai-codex-responses",
}),
prepareRuntimeAuth,
resolveUsageAuth,
fetchUsageSnapshot,
isCacheTtlEligible: ({ modelId }) => modelId.startsWith("anthropic/"),
},
prepareExtraParams: ({ extraParams }) => ({
...extraParams,
transport: "auto",
}),
wrapStreamFn: ({ streamFn }) => streamFn,
normalizeResolvedModel: ({ model }) => ({
...model,
api: "openai-codex-responses",
}),
prepareRuntimeAuth,
resolveUsageAuth,
fetchUsageSnapshot,
isCacheTtlEligible: ({ modelId }) => modelId.startsWith("anthropic/"),
},
]);
];
});
expect(
runProviderDynamicModel({
@@ -234,6 +266,60 @@ describe("provider-runtime", () => {
}),
).toBe(true);
expect(
buildProviderMissingAuthMessageWithPlugin({
provider: "openai",
env: process.env,
context: {
env: process.env,
provider: "openai",
listProfileIds: (providerId) => (providerId === "openai-codex" ? ["p1"] : []),
},
}),
).toContain("openai-codex/gpt-5.4");
expect(
resolveProviderBuiltInModelSuppression({
env: process.env,
context: {
env: process.env,
provider: "azure-openai-responses",
modelId: "gpt-5.3-codex-spark",
},
}),
).toMatchObject({
suppress: true,
errorMessage: expect.stringContaining("openai-codex/gpt-5.3-codex-spark"),
});
await expect(
augmentModelCatalogWithProviderPlugins({
env: process.env,
context: {
env: process.env,
entries: [
{ provider: "openai", id: "gpt-5.2", name: "GPT-5.2" },
{ provider: "openai", id: "gpt-5.2-pro", name: "GPT-5.2 Pro" },
{ provider: "openai-codex", id: "gpt-5.3-codex", name: "GPT-5.3 Codex" },
],
},
}),
).resolves.toEqual([
{ provider: "openai", id: "gpt-5.4", name: "gpt-5.4" },
{ provider: "openai", id: "gpt-5.4-pro", name: "gpt-5.4-pro" },
{ provider: "openai-codex", id: "gpt-5.4", name: "gpt-5.4" },
{
provider: "openai-codex",
id: "gpt-5.3-codex-spark",
name: "gpt-5.3-codex-spark",
},
]);
expect(resolvePluginProvidersMock).toHaveBeenCalledWith(
expect.objectContaining({
onlyPluginIds: ["openai"],
}),
);
expect(prepareDynamicModel).toHaveBeenCalledTimes(1);
expect(prepareRuntimeAuth).toHaveBeenCalledTimes(1);
expect(resolveUsageAuth).toHaveBeenCalledTimes(1);

View File

@@ -2,6 +2,9 @@ import { normalizeProviderId } from "../agents/model-selection.js";
import type { OpenClawConfig } from "../config/config.js";
import { resolvePluginProviders } from "./providers.js";
import type {
ProviderAugmentModelCatalogContext,
ProviderBuildMissingAuthMessageContext,
ProviderBuiltInModelSuppressionContext,
ProviderCacheTtlEligibilityContext,
ProviderFetchUsageSnapshotContext,
ProviderPrepareExtraParamsContext,
@@ -25,16 +28,41 @@ function matchesProviderId(provider: ProviderPlugin, providerId: string): boolea
return (provider.aliases ?? []).some((alias) => normalizeProviderId(alias) === normalized);
}
function resolveProviderPluginsForHooks(params: {
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
onlyPluginIds?: string[];
}): ProviderPlugin[] {
return resolvePluginProviders({
...params,
bundledProviderAllowlistCompat: true,
bundledProviderVitestCompat: true,
});
}
const GLOBAL_PROVIDER_HOOK_PLUGIN_IDS = ["openai"] as const;
function resolveGlobalProviderHookPlugins(params: {
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
}): ProviderPlugin[] {
return resolveProviderPluginsForHooks({
...params,
onlyPluginIds: [...GLOBAL_PROVIDER_HOOK_PLUGIN_IDS],
});
}
export function resolveProviderRuntimePlugin(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
}): ProviderPlugin | undefined {
return resolvePluginProviders({
...params,
bundledProviderAllowlistCompat: true,
}).find((plugin) => matchesProviderId(plugin, params.provider));
return resolveProviderPluginsForHooks(params).find((plugin) =>
matchesProviderId(plugin, params.provider),
);
}
export function runProviderDynamicModel(params: {
@@ -144,3 +172,48 @@ export function resolveProviderCacheTtlEligibility(params: {
}) {
return resolveProviderRuntimePlugin(params)?.isCacheTtlEligible?.(params.context);
}
export function buildProviderMissingAuthMessageWithPlugin(params: {
provider: string;
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderBuildMissingAuthMessageContext;
}) {
const plugin = resolveGlobalProviderHookPlugins(params).find((providerPlugin) =>
matchesProviderId(providerPlugin, params.provider),
);
return plugin?.buildMissingAuthMessage?.(params.context) ?? undefined;
}
export function resolveProviderBuiltInModelSuppression(params: {
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderBuiltInModelSuppressionContext;
}) {
for (const plugin of resolveGlobalProviderHookPlugins(params)) {
const result = plugin.suppressBuiltInModel?.(params.context);
if (result?.suppress) {
return result;
}
}
return undefined;
}
export async function augmentModelCatalogWithProviderPlugins(params: {
config?: OpenClawConfig;
workspaceDir?: string;
env?: NodeJS.ProcessEnv;
context: ProviderAugmentModelCatalogContext;
}) {
const supplemental = [] as ProviderAugmentModelCatalogContext["entries"];
for (const plugin of resolveGlobalProviderHookPlugins(params)) {
const next = await plugin.augmentModelCatalog?.(params.context);
if (!next || next.length === 0) {
continue;
}
supplemental.push(...next);
}
return supplemental;
}
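A compact sketch of the aggregation `augmentModelCatalogWithProviderPlugins` performs, simplified to synchronous hooks (the real helper also awaits Promise-returning implementations) and using a hypothetical gpt-5.4 forward-compat hook:

```typescript
type Row = { provider: string; id: string; name: string };
// Simplified: the real augmentModelCatalog hook may also return a Promise.
type CatalogHook = (ctx: { entries: ReadonlyArray<Row> }) => ReadonlyArray<Row> | undefined;

// Collect supplemental rows from every hook, in plugin order; empty or
// undefined results are skipped.
function collectSupplemental(hooks: CatalogHook[], entries: Row[]): Row[] {
  const supplemental: Row[] = [];
  for (const hook of hooks) {
    const next = hook({ entries });
    if (!next || next.length === 0) {
      continue;
    }
    supplemental.push(...next);
  }
  return supplemental;
}

// Hypothetical forward-compat hook: add gpt-5.4 only when a gpt-5.2
// template row exists to clone from.
const gpt54Hook: CatalogHook = ({ entries }) => {
  const template = entries.find((e) => e.provider === "openai" && e.id === "gpt-5.2");
  return template ? [{ ...template, id: "gpt-5.4", name: "gpt-5.4" }] : undefined;
};

const extra = collectSupplemental([gpt54Hook], [
  { provider: "openai", id: "gpt-5.2", name: "GPT-5.2" },
]);
// extra contains the synthetic openai/gpt-5.4 row
```

The caller (the dedup loop in `loadModelCatalog`) remains responsible for dropping rows that discovery already produced.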

View File

@@ -52,4 +52,22 @@ describe("resolvePluginProviders", () => {
}),
);
});
it("can enable bundled provider plugins under Vitest when no explicit plugin config exists", () => {
resolvePluginProviders({
env: { VITEST: "1" } as NodeJS.ProcessEnv,
bundledProviderVitestCompat: true,
});
expect(loadOpenClawPluginsMock).toHaveBeenCalledWith(
expect.objectContaining({
config: expect.objectContaining({
plugins: expect.objectContaining({
enabled: true,
allow: expect.arrayContaining(["openai", "moonshot", "zai"]),
}),
}),
}),
);
});
});

View File

@@ -22,7 +22,6 @@ const BUNDLED_PROVIDER_ALLOWLIST_COMPAT_PLUGIN_IDS = [
"nvidia",
"ollama",
"openai",
"openai-codex",
"opencode",
"opencode-go",
"openrouter",
@@ -39,6 +38,32 @@ const BUNDLED_PROVIDER_ALLOWLIST_COMPAT_PLUGIN_IDS = [
"zai",
] as const;
function hasExplicitPluginConfig(config: PluginLoadOptions["config"]): boolean {
const plugins = config?.plugins;
if (!plugins) {
return false;
}
if (typeof plugins.enabled === "boolean") {
return true;
}
if (Array.isArray(plugins.allow) && plugins.allow.length > 0) {
return true;
}
if (Array.isArray(plugins.deny) && plugins.deny.length > 0) {
return true;
}
if (Array.isArray(plugins.load?.paths) && plugins.load.paths.length > 0) {
return true;
}
if (plugins.entries && Object.keys(plugins.entries).length > 0) {
return true;
}
if (plugins.slots && Object.keys(plugins.slots).length > 0) {
return true;
}
return false;
}
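The explicit-config gate above has a few more branches (`load.paths`, `slots`); this trimmed sketch keeps the core checks to show when the Vitest compat defaults are skipped:

```typescript
type PluginsConfig = {
  enabled?: boolean;
  allow?: string[];
  deny?: string[];
  entries?: Record<string, unknown>;
};

// True when the user configured plugins explicitly; in that case the
// Vitest-only defaults must not be layered on top.
function hasExplicitPluginConfigSketch(plugins: PluginsConfig | undefined): boolean {
  if (!plugins) {
    return false;
  }
  if (typeof plugins.enabled === "boolean") {
    return true;
  }
  if (Array.isArray(plugins.allow) && plugins.allow.length > 0) {
    return true;
  }
  if (Array.isArray(plugins.deny) && plugins.deny.length > 0) {
    return true;
  }
  if (plugins.entries && Object.keys(plugins.entries).length > 0) {
    return true;
  }
  return false;
}

// An empty allow list is not "explicit", so test defaults still apply.
const explicit = hasExplicitPluginConfigSketch({ allow: ["openai"] });
const implicit = hasExplicitPluginConfigSketch({ allow: [] });
```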
function withBundledProviderAllowlistCompat(
config: PluginLoadOptions["config"],
): PluginLoadOptions["config"] {
@@ -71,20 +96,52 @@ function withBundledProviderAllowlistCompat(
};
}
function withBundledProviderVitestCompat(params: {
config: PluginLoadOptions["config"];
env?: PluginLoadOptions["env"];
}): PluginLoadOptions["config"] {
const env = params.env ?? process.env;
if (!env.VITEST || hasExplicitPluginConfig(params.config)) {
return params.config;
}
return {
...params.config,
plugins: {
...params.config?.plugins,
enabled: true,
allow: [...BUNDLED_PROVIDER_ALLOWLIST_COMPAT_PLUGIN_IDS],
slots: {
...params.config?.plugins?.slots,
memory: "none",
},
},
};
}
export function resolvePluginProviders(params: {
config?: PluginLoadOptions["config"];
workspaceDir?: string;
/** Use an explicit env when plugin roots should resolve independently from process.env. */
env?: PluginLoadOptions["env"];
bundledProviderAllowlistCompat?: boolean;
bundledProviderVitestCompat?: boolean;
onlyPluginIds?: string[];
}): ProviderPlugin[] {
const config = params.bundledProviderAllowlistCompat
const maybeAllowlistCompat = params.bundledProviderAllowlistCompat
? withBundledProviderAllowlistCompat(params.config)
: params.config;
const config = params.bundledProviderVitestCompat
? withBundledProviderVitestCompat({
config: maybeAllowlistCompat,
env: params.env,
})
: maybeAllowlistCompat;
const registry = loadOpenClawPlugins({
config,
workspaceDir: params.workspaceDir,
env: params.env,
onlyPluginIds: params.onlyPluginIds,
logger: createPluginLoaderLogger(log),
});

View File

@@ -10,6 +10,7 @@ import type {
AuthProfileCredential,
OAuthCredential,
} from "../agents/auth-profiles/types.js";
import type { ModelCatalogEntry } from "../agents/model-catalog.js";
import type { ProviderCapabilities } from "../agents/provider-capabilities.js";
import type { AnyAgentTool } from "../agents/tools/common.js";
import type { ThinkLevel } from "../auto-reply/thinking.js";
@@ -390,6 +391,59 @@ export type ProviderCacheTtlEligibilityContext = {
modelId: string;
};
/**
* Provider-owned missing-auth message override.
*
* Runs only after OpenClaw exhausts normal env/profile/config auth resolution
* for the requested provider. Return a custom message to replace the generic
* "No API key found" error.
*/
export type ProviderBuildMissingAuthMessageContext = {
config?: OpenClawConfig;
agentDir?: string;
workspaceDir?: string;
env: NodeJS.ProcessEnv;
provider: string;
listProfileIds: (providerId: string) => string[];
};
/**
* Built-in model suppression hook.
*
* Use this when a provider/plugin needs to hide stale upstream catalog rows or
* replace them with a vendor-specific hint. This hook is consulted by model
* resolution, model listing, and catalog loading.
*/
export type ProviderBuiltInModelSuppressionContext = {
config?: OpenClawConfig;
agentDir?: string;
workspaceDir?: string;
env: NodeJS.ProcessEnv;
provider: string;
modelId: string;
};
export type ProviderBuiltInModelSuppressionResult = {
suppress: boolean;
errorMessage?: string;
};
/**
* Final catalog augmentation hook.
*
* Runs after OpenClaw loads the discovered model catalog and merges configured
* opt-in providers. Use this for forward-compat rows or vendor-owned synthetic
* entries that should appear in `models list` and model pickers even when the
* upstream registry has not caught up yet.
*/
export type ProviderAugmentModelCatalogContext = {
config?: OpenClawConfig;
agentDir?: string;
workspaceDir?: string;
env: NodeJS.ProcessEnv;
entries: ModelCatalogEntry[];
};
/**
* @deprecated Use ProviderCatalogOrder.
*/
@@ -560,6 +614,40 @@ export type ProviderPlugin = {
* only a subset of upstream models.
*/
isCacheTtlEligible?: (ctx: ProviderCacheTtlEligibilityContext) => boolean | undefined;
/**
* Provider-owned missing-auth message override.
*
* Return a custom message when the provider wants a more specific recovery
* hint than OpenClaw's generic auth-store guidance.
*/
buildMissingAuthMessage?: (
ctx: ProviderBuildMissingAuthMessageContext,
) => string | null | undefined;
/**
* Provider-owned built-in model suppression.
*
* Return `{ suppress: true }` to hide a stale upstream row. Include
* `errorMessage` when OpenClaw should surface a provider-specific hint for
* direct model resolution failures.
*/
suppressBuiltInModel?: (
ctx: ProviderBuiltInModelSuppressionContext,
) => ProviderBuiltInModelSuppressionResult | null | undefined;
/**
* Provider-owned final catalog augmentation.
*
* Return extra rows to append to the final catalog after discovery/config
* merging. OpenClaw deduplicates by `provider/id`, so plugins only need to
* describe the desired supplemental rows.
*/
augmentModelCatalog?: (
ctx: ProviderAugmentModelCatalogContext,
) =>
| Array<ModelCatalogEntry>
| ReadonlyArray<ModelCatalogEntry>
| Promise<Array<ModelCatalogEntry> | ReadonlyArray<ModelCatalogEntry> | null | undefined>
| null
| undefined;
wizard?: ProviderPluginWizard;
formatApiKey?: (cred: AuthProfileCredential) => string;
refreshOAuth?: (cred: OAuthCredential) => Promise<OAuthCredential>;
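Taken together, the three new hooks let a vendor plugin own its seams end to end. This hypothetical, heavily simplified plugin shows one plausible wiring (shapes are stand-ins for the real `ProviderPlugin` contract, and the messages are illustrative):

```typescript
// Hypothetical plugin wiring the three new seams together. Context shapes
// are simplified stand-ins for the real ProviderPlugin contract.
type CatalogEntry = { provider: string; id: string; name: string };

const openaiSeamPlugin = {
  id: "openai",
  // Replace the generic auth-store error when a Codex OAuth profile exists.
  buildMissingAuthMessage: (ctx: { listProfileIds: (providerId: string) => string[] }) =>
    ctx.listProfileIds("openai-codex").length > 0
      ? 'No API key found for provider "openai". You are authenticated with Codex OAuth; use an openai-codex model or set OPENAI_API_KEY.'
      : undefined,
  // Hide the Spark row everywhere except the Codex OAuth provider.
  suppressBuiltInModel: (ctx: { provider: string; modelId: string }) =>
    ctx.provider !== "openai-codex" && ctx.modelId === "gpt-5.3-codex-spark"
      ? { suppress: true, errorMessage: "Use openai-codex/gpt-5.3-codex-spark." }
      : undefined,
  // Append a forward-compat row unless discovery already produced it.
  augmentModelCatalog: (ctx: { entries: CatalogEntry[] }) =>
    ctx.entries.some((e) => e.provider === "openai" && e.id === "gpt-5.4")
      ? undefined
      : [{ provider: "openai", id: "gpt-5.4", name: "gpt-5.4" }],
};
```

Because the host deduplicates augmented rows by `provider/id`, the early exit in `augmentModelCatalog` is an optimization rather than a correctness requirement.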