Keep OpenAI Codex migrations on automatic runtime routing (#79238)

* fix: keep migrated openai codex routes automatic

* scope runtime policy to providers and models

* fix runtime policy surfaces

* fix ci runtime policy checks

* fix doctor stale session runtime pins
pashpashpash authored 2026-05-08 00:05:35 -07:00; committed by GitHub
parent b7aca7dc6e
commit 02fe0d8978
92 changed files with 1421 additions and 1264 deletions

View File

@@ -180,6 +180,7 @@ Docs: https://docs.openclaw.ai
- fix(discord): gate user allowlist name resolution [AI]. (#79002) Thanks @pgondhi987.
- fix(msteams): gate startup user allowlist resolution [AI]. (#79003) Thanks @pgondhi987.
- Harden macOS shell wrapper allowlist parsing [AI]. (#78518) Thanks @pgondhi987.
- Doctor/OpenAI: stop pinning migrated `openai-codex/*` routes to the Codex runtime so mixed-provider agents keep automatic PI routing for MiniMax, Anthropic, and other non-OpenAI model switches.
- Gateway/macOS: `openclaw gateway stop` now uses `launchctl bootout` by default instead of unconditionally calling `launchctl disable`, so KeepAlive auto-recovery still works after unexpected crashes; use the new `--disable` flag to opt into the persistent-disable behavior when a manual stop should survive reboots. Fixes #77934. Thanks @bmoran1022.
- Gateway/macOS: `repairLaunchAgentBootstrap` no longer kickstarts an already-running LaunchAgent, preventing unnecessary service restarts and session disconnects when repair runs against a healthy gateway. Fixes #77428. Thanks @ramitrkar-hash.
- Gateway/macOS: `openclaw gateway stop --disable` now persists the LaunchAgent disable bit even after a previous bootout left the service not loaded, keeping the explicit stay-down path reliable. (#78412) Thanks @wdeveloper16.
@@ -341,7 +342,7 @@ Docs: https://docs.openclaw.ai
- CLI/status: show the selected agent runtime/harness in `openclaw status` session rows so terminal status matches the `/status` runtime line. Thanks @vincentkoc.
- CLI/sessions: prune old unreferenced transcript, compaction checkpoint, and trajectory artifacts during normal `sessions cleanup`, so gateway restart or crash orphans do not accumulate indefinitely outside `sessions.json`. Fixes #77608. Thanks @slideshow-dingo.
- Doctor/Codex: repair legacy `openai-codex/*` routes in primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel overrides, and stale session pins to canonical `openai/*`, selecting `agentRuntime.id: "codex"` only when the Codex plugin is installed, enabled, contributes the `codex` harness, and has usable OAuth; otherwise select `agentRuntime.id: "pi"`. Thanks @vincentkoc.
- Doctor/Codex: repair legacy `openai-codex/*` routes to canonical `openai/*`, keep OpenAI agent turns on Codex by default, ignore stale whole-agent/session runtime pins, preserve explicit provider/model runtime policy, and migrate legacy runtime model refs to model-scoped runtime entries. Thanks @vincentkoc.
- Video generation: wait up to 20 minutes for slow fal/MiniMax queue-backed jobs, stop forwarding unsupported Google Veo generated-audio options, and normalize MiniMax `720P` requests to its supported `768P` resolution with the usual override warning/details instead of failing fallback.
- Video generation: accept provider-specific aspect-ratio and resolution hints at the tool boundary, normalize `720P` to MiniMax's supported `768P`, and stop sending Google `generateAudio` on Gemini video requests so provider fallback can recover from model-specific parameter differences. Thanks @vincentkoc.
- Channels/durable delivery: preserve channel-specific final reply semantics when using durable sends, including Telegram selected quotes and silent error replies plus WhatsApp message-sending cancellations.

View File

@@ -1,4 +1,4 @@
885a734aa93cf04f6c14f8d83c1e96a66a5b96705327ea2de7b2aa7314238976 config-baseline.json
074eb9a1480ff40836d98090ccb9be3465345ac4b46e0d273b7995504bbb8008 config-baseline.core.json
98f80c92fc4fcb37d41470216ae6cd19b094d7f67b0ddc4983eba04aba314fe0 config-baseline.json
d9c4b2035178d3ffe637b751036f12082d4f26761681bb8496b86550565307e8 config-baseline.core.json
ed15b24c1ccf0234e6b3435149a6f1c1e709579d1259f1d09402688799b149bd config-baseline.channel.json
c4e8d8898eebc4d40f35b167c987870e426e6c82121696dc055ff929f6a24046 config-baseline.plugin.json
7a9ed89a6ff7e578bfcab7828ab660af59e62402a85bfbfc05d5ae3d975e9728 config-baseline.plugin.json

View File

@@ -170,7 +170,7 @@ configured OpenClaw model. If no configured model is usable yet, it can fall
back to local runtimes already present on the machine:
- Claude Code CLI: `claude-cli/claude-opus-4-7`
- Codex app-server harness: `openai/gpt-5.5` with `agentRuntime.id: "codex"`
- Codex app-server harness: `openai/gpt-5.5`
- Codex CLI: `codex-cli/gpt-5.5`
The model-assisted planner cannot mutate config directly. It must translate the

View File

@@ -56,7 +56,7 @@ Notes:
- Doctor also scans `~/.openclaw/cron/jobs.json` (or `cron.store`) for legacy cron job shapes and can rewrite them in place before the scheduler has to auto-normalize them at runtime.
- On Linux, doctor warns when the user's crontab still runs legacy `~/.openclaw/bin/ensure-whatsapp.sh`; that script is no longer maintained and can log false WhatsApp gateway outages when cron lacks the systemd user-bus environment.
- When WhatsApp is enabled, doctor checks for a degraded Gateway event loop with local `openclaw-tui` clients still running. `doctor --fix` stops only verified local TUI clients so WhatsApp replies are not queued behind stale TUI refresh loops.
- Doctor rewrites legacy `openai-codex/*` model refs to canonical `openai/*` refs across primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and stale session route pins. `--fix` selects `agentRuntime.id: "codex"` only when the Codex plugin is installed, enabled, contributes the `codex` harness, and has usable OAuth; otherwise it selects `agentRuntime.id: "pi"` so the route stays on the default OpenClaw runner.
- Doctor rewrites legacy `openai-codex/*` model refs to canonical `openai/*` refs across primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and stale session route pins. `--fix` preserves explicit provider/model `agentRuntime` policy, removes stale whole-agent/session runtime pins, and leaves canonical OpenAI agent refs on the default Codex harness when the official OpenAI provider is in use.
- Doctor cleans legacy plugin dependency staging state created by older OpenClaw versions. It also repairs missing downloadable plugins that are referenced by config, such as `plugins.entries`, configured channels, configured provider/search settings, or configured agent runtimes. During package updates, doctor skips package-manager plugin repair until the package swap is complete; rerun `openclaw doctor --fix` afterward if a configured plugin still needs recovery. If the download fails, doctor reports the install error and preserves the configured plugin entry for the next repair attempt.
- Doctor repairs stale plugin config by removing missing plugin ids from `plugins.allow`/`plugins.entries`, plus matching dangling channel config, heartbeat targets, and channel model overrides when plugin discovery is healthy.
- Doctor quarantines invalid plugin config by disabling the affected `plugins.entries.<id>` entry and removing its invalid `config` payload. Gateway startup already skips only that bad plugin so other plugins and channels can keep running.

View File

@@ -23,8 +23,11 @@ configuration. They are different layers:
You will also see the word **harness** in code. A harness is the implementation
that provides an agent runtime. For example, the bundled Codex harness
implements the `codex` runtime. Public config uses `agentRuntime.id`; `openclaw
doctor --fix` rewrites older runtime-policy keys to that shape.
implements the `codex` runtime. Public config uses `agentRuntime.id` on
provider or model entries; whole-agent runtime keys are legacy and ignored.
`openclaw doctor --fix` removes old whole-agent runtime pins and rewrites
legacy runtime model refs to canonical provider/model refs plus model-scoped
runtime policy where needed.
There are two runtime families:
@@ -33,9 +36,9 @@ There are two runtime families:
`codex`.
- **CLI backends** run a local CLI process while keeping the model ref
canonical. For example, `anthropic/claude-opus-4-7` with
`agentRuntime.id: "claude-cli"` means "select the Anthropic model, execute
through Claude CLI." `claude-cli` is not an embedded harness id and must not
be passed to AgentHarness selection.
a model-scoped `agentRuntime.id: "claude-cli"` means "select the Anthropic
model, execute through Claude CLI." `claude-cli` is not an embedded harness id
and must not be passed to AgentHarness selection.
## Codex surfaces
@@ -87,9 +90,9 @@ This is the agent-facing decision tree:
2. If the user asks for **Codex as the embedded runtime** or wants the normal
subscription-backed Codex agent experience, use `openai/<model>`.
3. If the user explicitly chooses **PI for an OpenAI model**, keep the model ref
as `openai/<model>` and set `agentRuntime.id: "pi"`. A selected
`openai-codex` auth profile is routed internally through PI's legacy
Codex-auth transport.
as `openai/<model>` and set provider/model runtime policy to
`agentRuntime.id: "pi"`. A selected `openai-codex` auth profile is routed
internally through PI's legacy Codex-auth transport.
4. If legacy config still contains **`openai-codex/*` model refs**, repair it to
`openai/<model>` with `openclaw doctor --fix`.
5. If the user explicitly says **ACP**, **acpx**, or **Codex ACP adapter**, use
@@ -132,21 +135,26 @@ This ownership split is the main design rule:
OpenClaw chooses an embedded runtime after provider and model resolution:
1. A session's recorded runtime wins. Config changes do not hot-switch an
existing transcript to a different native thread system.
2. `OPENCLAW_AGENT_RUNTIME=<id>` forces that runtime for new or reset sessions.
3. `agents.defaults.agentRuntime.id` or `agents.list[].agentRuntime.id` can set
`auto`, `pi`, a registered embedded harness id such as `codex`, or a
supported CLI backend alias such as `claude-cli`.
4. In `auto` mode, registered plugin runtimes can claim supported provider/model
1. Model-scoped runtime policy wins. This can live in a configured provider
model entry or in `agents.defaults.models["provider/model"].agentRuntime` /
`agents.list[].models["provider/model"].agentRuntime`.
2. Provider-scoped runtime policy comes next at
`models.providers.<provider>.agentRuntime`.
3. In `auto` mode, registered plugin runtimes can claim supported provider/model
pairs.
5. If no runtime claims a turn in `auto` mode, OpenClaw uses PI as the
4. If no runtime claims a turn in `auto` mode, OpenClaw uses PI as the
compatibility runtime. Use an explicit runtime id when the run must be
strict.
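The precedence above can be sketched as a minimal config (model ids borrowed from examples elsewhere in these docs; not a verified deployment):
```json5
{
  models: {
    providers: {
      // Provider-scoped rule: Anthropic agent turns default to PI.
      anthropic: {
        agentRuntime: { id: "pi" },
      },
    },
  },
  agents: {
    defaults: {
      models: {
        // Model-scoped rule wins over the provider-scoped rule above:
        // this model executes through the Claude CLI backend.
        "anthropic/claude-opus-4-7": {
          agentRuntime: { id: "claude-cli" },
        },
      },
    },
  },
}
```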
Explicit plugin runtimes fail closed. For example, `agentRuntime.id: "codex"`
means Codex or a clear selection/runtime error; it is never silently routed back
to PI.
Whole-session and whole-agent runtime pins are ignored. That includes
`OPENCLAW_AGENT_RUNTIME`, session `agentHarnessId`/`agentRuntimeOverride` state,
`agents.defaults.agentRuntime`, and `agents.list[].agentRuntime`. Run
`openclaw doctor --fix` to remove stale whole-agent runtime config and convert
legacy runtime model refs where OpenClaw can preserve the intent.
Explicit provider/model plugin runtimes fail closed. For example,
`agentRuntime.id: "codex"` on a provider or model means Codex or a clear
selection/runtime error; it is never silently routed back to PI.
CLI backend aliases are different from embedded harness ids. The preferred
Claude CLI form is:
@@ -156,7 +164,11 @@ Claude CLI form is:
agents: {
defaults: {
model: "anthropic/claude-opus-4-7",
agentRuntime: { id: "claude-cli" },
models: {
"anthropic/claude-opus-4-7": {
agentRuntime: { id: "claude-cli" },
},
},
},
},
}
@@ -164,15 +176,15 @@ Claude CLI form is:
Legacy refs such as `claude-cli/claude-opus-4-7` remain supported for
compatibility, but new config should keep the provider/model canonical and put
the execution backend in `agentRuntime.id`.
the execution backend in provider/model runtime policy.
`auto` mode is intentionally conservative for most providers. OpenAI agent
models are the exception: unset runtime and `auto` both resolve to the Codex
harness. Explicit PI runtime config remains an opt-in compatibility route for
`openai/*` agent turns; when paired with a selected `openai-codex` auth profile,
OpenClaw routes PI internally through the legacy Codex-auth transport while
keeping the public model ref as `openai/*`. Stale OpenAI PI session pins without
explicit config are repaired back to Codex.
keeping the public model ref as `openai/*`. Stale OpenAI PI session pins are
ignored by runtime selection and can be cleaned with `openclaw doctor --fix`.
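The explicit PI opt-in described above is model-scoped policy; a minimal sketch (the same intent can also be expressed provider-wide via `models.providers.openai.agentRuntime`):
```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      models: {
        // Explicit compatibility route: keep the canonical ref, run through PI.
        // Remove this entry to return to the default Codex routing.
        "openai/gpt-5.5": {
          agentRuntime: { id: "pi" },
        },
      },
    },
  },
}
```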
If `openclaw doctor` warns that the `codex` plugin is enabled while
`openai-codex/*` remains in config, treat that as legacy route state. Run
@@ -206,10 +218,8 @@ diagnostics, not as provider names.
- A runtime id such as `codex` tells you which loop is executing the turn.
- A channel label such as Telegram or Discord tells you where the conversation is happening.
If a session still shows PI after changing runtime config, start a new session
with `/new` or clear the current one with `/reset`. Existing sessions keep their
recorded runtime so a transcript is not replayed through two incompatible native
session systems.
If a run still shows an unexpected runtime, inspect the selected provider/model
runtime policy first. Legacy session runtime pins no longer decide routing.
## Related

View File

@@ -29,19 +29,19 @@ Reference for **LLM/model providers** (not chat channels like WhatsApp/Telegram)
<Accordion title="OpenAI provider/runtime split">
OpenAI-family routes are prefix-specific:
- `openai/<model>` plus `agents.defaults.agentRuntime.id: "codex"` uses the native Codex app-server harness. This is the usual ChatGPT/Codex subscription setup.
- `openai-codex/<model>` uses Codex OAuth in PI.
- `openai/<model>` without a Codex runtime override uses the direct OpenAI API-key provider in PI.
- `openai/<model>` uses the native Codex app-server harness for agent turns by default. This is the usual ChatGPT/Codex subscription setup.
- `openai-codex/<model>` is legacy config that doctor rewrites to `openai/<model>`.
- `openai/<model>` plus provider/model `agentRuntime.id: "pi"` uses PI for explicit API-key or compatibility routes.
See [OpenAI](/providers/openai) and [Codex harness](/plugins/codex-harness). If the provider/runtime split is confusing, read [Agent runtimes](/concepts/agent-runtimes) first.
Plugin auto-enable follows the same boundary: `openai-codex/<model>` belongs to the OpenAI plugin, while the Codex plugin is enabled by `agentRuntime.id: "codex"` or legacy `codex/<model>` refs.
Plugin auto-enable follows the same boundary: `openai/*` agent refs enable the Codex plugin for the default route, and explicit provider/model `agentRuntime.id: "codex"` or legacy `codex/<model>` refs also require it.
GPT-5.5 is available through the native Codex app-server harness when `agentRuntime.id: "codex"` is set, through `openai-codex/gpt-5.5` in PI for Codex OAuth, and through `openai/gpt-5.5` in PI for direct API-key traffic when your account exposes it.
GPT-5.5 is available through the native Codex app-server harness by default on `openai/gpt-5.5`, and through PI only when provider/model runtime policy explicitly selects `pi`.
</Accordion>
<Accordion title="CLI runtimes">
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set `agents.defaults.agentRuntime.id` to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set provider/model runtime policy to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
Legacy `claude-cli/*`, `google-gemini-cli/*`, and `codex-cli/*` refs migrate back to canonical provider refs with the runtime recorded separately.
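A sketch of that migration, legacy ref versus canonical form:
```json5
// Legacy (still accepted for compatibility):
{
  agents: {
    defaults: {
      model: "claude-cli/claude-opus-4-7",
    },
  },
}

// Canonical: provider/model ref plus model-scoped runtime policy.
{
  agents: {
    defaults: {
      model: "anthropic/claude-opus-4-7",
      models: {
        "anthropic/claude-opus-4-7": {
          agentRuntime: { id: "claude-cli" },
        },
      },
    },
  },
}
```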
@@ -118,7 +118,7 @@ OpenClaw ships with the pi-ai catalog. These providers require **no** `models.pr
- Direct public Anthropic requests support the shared `/fast` toggle and `params.fastMode`, including API-key and OAuth-authenticated traffic sent to `api.anthropic.com`; OpenClaw maps that to Anthropic `service_tier` (`auto` vs `standard_only`)
- Preferred Claude CLI config keeps the model ref canonical and selects the CLI
backend separately: `anthropic/claude-opus-4-7` with
`agents.defaults.agentRuntime.id: "claude-cli"`. Legacy
model-scoped `agentRuntime.id: "claude-cli"`. Legacy
`claude-cli/claude-opus-4-7` refs still work for compatibility.
<Note>
@@ -135,8 +135,8 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- PI model ref: `openai-codex/gpt-5.5`
- Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.agentRuntime.id: "codex"`
- Legacy PI model ref: `openai-codex/gpt-5.5`
- Native Codex app-server harness ref: `openai/gpt-5.5`
- Native Codex app-server harness docs: [Codex harness](/plugins/codex-harness)
- Legacy model refs: `codex/gpt-*`
- Plugin boundary: `openai-codex/*` loads the OpenAI plugin; the native Codex app-server plugin is selected only by the Codex harness runtime or legacy `codex/*` refs.
@@ -148,8 +148,8 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
- `openai-codex/gpt-5.5` uses the Codex catalog native `contextWindow = 400000` and default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
- For the common subscription plus native Codex runtime route, sign in with `openai-codex` auth but configure `openai/gpt-5.5` plus `agents.defaults.agentRuntime.id: "codex"`.
- Use `openai-codex/gpt-5.5` only when you want the Codex OAuth/subscription route through PI; use `openai/gpt-5.5` without the Codex runtime override when your API-key setup and local catalog expose the public API route.
- For the common subscription plus native Codex runtime route, sign in with `openai-codex` auth but configure `openai/gpt-5.5`; OpenAI agent turns select Codex by default.
- Use provider/model `agentRuntime.id: "pi"` only when you want a compatibility route through PI; otherwise keep `openai/gpt-5.5` on the default Codex harness.
- Older `openai-codex/gpt-5.1*`, `openai-codex/gpt-5.2*`, and `openai-codex/gpt-5.3*` refs are suppressed because ChatGPT/Codex OAuth accounts reject them; use `openai-codex/gpt-5.5` or the native Codex runtime route instead.
```json5
@@ -158,7 +158,6 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope
agents: {
defaults: {
model: { primary: "openai/gpt-5.5" },
agentRuntime: { id: "codex" },
},
},
}

View File

@@ -23,7 +23,7 @@ sidebarTitle: "Models CLI"
</Card>
</CardGroup>
Model refs choose a provider and model. They do not usually choose the low-level agent runtime. For example, `openai/gpt-5.5` can run through the normal OpenAI provider path or through the Codex app-server runtime, depending on `agents.defaults.agentRuntime.id`. In Codex runtime mode, the `openai/gpt-*` ref does not imply API-key billing; auth can come from a Codex account or `openai-codex` auth profile. See [Agent runtimes](/concepts/agent-runtimes).
Model refs choose a provider and model. They do not usually choose the low-level agent runtime. OpenAI agent refs are the main exception: `openai/gpt-5.5` runs through the Codex app-server runtime by default on the official OpenAI provider. Explicit runtime overrides belong on provider/model policy, not on the whole agent or session. In Codex runtime mode, the `openai/gpt-*` ref does not imply API-key billing; auth can come from a Codex account or `openai-codex` auth profile. See [Agent runtimes](/concepts/agent-runtimes).
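A minimal sketch of where such an override lives (keys as documented in the config reference; the provider entry is optional because OpenAI agent refs already default to Codex):
```json5
{
  models: {
    providers: {
      openai: {
        // Explicit provider-scoped override; with no entry here,
        // `openai/gpt-*` agent turns already select the Codex runtime.
        agentRuntime: { id: "codex" },
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.5" },
    },
  },
}
```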
## How model selection works

View File

@@ -336,9 +336,6 @@ Time format in system prompt. Default: `auto` (OS preference).
fallbacks: ["openai/gpt-5.4-mini"],
},
params: { cacheRetention: "long" }, // global default provider params
agentRuntime: {
id: "pi", // pi | auto | registered harness id, e.g. codex
},
pdfMaxBytesMb: 10,
pdfMaxPages: 20,
thinkingDefault: "low",
@@ -398,25 +395,28 @@ Time format in system prompt. Default: `auto` (OS preference).
- `params.chat_template_kwargs`: vLLM/OpenAI-compatible chat-template arguments merged into top-level `api: "openai-completions"` request bodies. For `vllm/nemotron-3-*` with thinking off, the bundled vLLM plugin automatically sends `enable_thinking: false` and `force_nonempty_content: true`; explicit `chat_template_kwargs` override generated defaults, and `extra_body.chat_template_kwargs` still has final precedence. For vLLM Qwen thinking controls, set `params.qwenThinkingFormat` to `"chat-template"` or `"top-level"` on that model entry.
- `compat.supportedReasoningEfforts`: per-model OpenAI-compatible reasoning effort list. Include `"xhigh"` for custom endpoints that truly accept it; OpenClaw then exposes `/think xhigh` in command menus, Gateway session rows, session patch validation, agent CLI validation, and `llm-task` validation for that configured provider/model. Use `compat.reasoningEffortMap` when the backend wants a provider-specific value for a canonical level.
- `params.preserveThinking`: Z.AI-only opt-in for preserved thinking. When enabled and thinking is on, OpenClaw sends `thinking.clear_thinking: false` and replays prior `reasoning_content`; see [Z.AI thinking and preserved thinking](/providers/zai#thinking-and-preserved-thinking).
- `agentRuntime`: default low-level agent runtime policy. Omitted id defaults to OpenClaw Pi. Use `id: "pi"` to force the built-in PI harness, `id: "auto"` to let registered plugin harnesses claim supported models and use PI when none match, a registered harness id such as `id: "codex"` to require that harness, or a supported CLI backend alias such as `id: "claude-cli"`. Explicit plugin runtimes fail closed when the harness is unavailable or fails. Keep model refs canonical as `provider/model`; select Codex, Claude CLI, Gemini CLI, and other execution backends through runtime config instead of legacy runtime provider prefixes. See [Agent runtimes](/concepts/agent-runtimes) for how this differs from provider/model selection.
- Runtime policy belongs on providers or models, not on `agents.defaults`. Use `models.providers.<provider>.agentRuntime` for provider-wide rules or `agents.defaults.models["provider/model"].agentRuntime` / `agents.list[].models["provider/model"].agentRuntime` for model-specific rules. OpenAI agent models on the official OpenAI provider select Codex by default.
- Config writers that mutate these fields (for example `/models set`, `/models set-image`, and fallback add/remove commands) save canonical object form and preserve existing fallback lists when possible.
- `maxConcurrent`: max parallel agent runs across sessions (each session still serialized). Default: 4.
### `agents.defaults.agentRuntime`
`agentRuntime` controls which low-level executor runs agent turns. Most
deployments should keep the default OpenClaw Pi runtime. Set it when a trusted
plugin provides a native harness, such as the bundled Codex app-server harness,
or when you want a supported CLI backend such as Claude CLI. For the mental
model, see [Agent runtimes](/concepts/agent-runtimes).
### Runtime policy
```json5
{
models: {
providers: {
openai: {
agentRuntime: { id: "codex" },
},
},
},
agents: {
defaults: {
model: "openai/gpt-5.5",
agentRuntime: {
id: "codex",
models: {
"anthropic/claude-opus-4-7": {
agentRuntime: { id: "claude-cli" },
},
},
},
},
@@ -425,11 +425,9 @@ model, see [Agent runtimes](/concepts/agent-runtimes).
- `id`: `"auto"`, `"pi"`, a registered plugin harness id, or a supported CLI backend alias. The bundled Codex plugin registers `codex`; the bundled Anthropic plugin provides the `claude-cli` CLI backend.
- `id: "auto"` lets registered plugin harnesses claim supported turns and uses PI when no harness matches. An explicit plugin runtime such as `id: "codex"` requires that harness and fails closed if it is unavailable or fails.
- Environment override: `OPENCLAW_AGENT_RUNTIME=<id|auto|pi>` overrides `id` for that process.
- OpenAI agent models use the Codex harness by default; `agentRuntime.id: "codex"` remains valid when you want to make that explicit.
- For Claude CLI deployments, prefer `model: "anthropic/claude-opus-4-7"` plus `agentRuntime.id: "claude-cli"`. Legacy `claude-cli/claude-opus-4-7` model refs still work for compatibility, but new config should keep provider/model selection canonical and put the execution backend in `agentRuntime.id`.
- Older runtime-policy keys are rewritten to `agentRuntime` by `openclaw doctor --fix`.
- Harness choice is pinned per session id after the first embedded run. Config/env changes affect new or reset sessions, not an existing transcript. Legacy OpenAI sessions with transcript history but no recorded pin use Codex; stale OpenAI PI pins can be repaired with `openclaw doctor --fix`. `/status` reports the effective runtime, for example `Runtime: OpenClaw Pi Default` or `Runtime: OpenAI Codex`.
- Whole-agent runtime keys are legacy. `agents.defaults.agentRuntime`, `agents.list[].agentRuntime`, session runtime pins, and `OPENCLAW_AGENT_RUNTIME` are ignored by runtime selection. Run `openclaw doctor --fix` to remove stale values.
- OpenAI agent models use the Codex harness by default; provider/model `agentRuntime.id: "codex"` remains valid when you want to make that explicit.
- For Claude CLI deployments, prefer `model: "anthropic/claude-opus-4-7"` plus model-scoped `agentRuntime.id: "claude-cli"`. Legacy `claude-cli/claude-opus-4-7` model refs still work for compatibility, but new config should keep provider/model selection canonical and put the execution backend in provider/model runtime policy.
- This only controls text agent-turn execution. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.
**Built-in alias shorthands** (only apply when the model is in `agents.defaults.models`):
@@ -959,7 +957,6 @@ for provider examples and precedence.
thinkingDefault: "high", // per-agent thinking level override
reasoningDefault: "on", // per-agent reasoning visibility override
fastModeDefault: false, // per-agent fast mode override
agentRuntime: { id: "auto" },
params: { cacheRetention: "none" }, // overrides matching defaults.models params by key
tts: {
providers: {
@@ -1006,7 +1003,7 @@ for provider examples and precedence.
- `thinkingDefault`: optional per-agent default thinking level (`off | minimal | low | medium | high | xhigh | adaptive | max`). Overrides `agents.defaults.thinkingDefault` for this agent when no per-message or session override is set. The selected provider/model profile controls which values are valid; for Google Gemini, `adaptive` keeps provider-owned dynamic thinking (`thinkingLevel` omitted on Gemini 3/3.1, `thinkingBudget: -1` on Gemini 2.5).
- `reasoningDefault`: optional per-agent default reasoning visibility (`on | off | stream`). Overrides `agents.defaults.reasoningDefault` for this agent when no per-message or session reasoning override is set.
- `fastModeDefault`: optional per-agent default for fast mode (`true | false`). Applies when no per-message or session fast-mode override is set.
- `agentRuntime`: optional per-agent low-level runtime policy override. Use `{ id: "codex" }` to make one agent Codex-only while other agents keep the default PI fallback in `auto` mode.
- `models`: optional per-agent model catalog/runtime overrides keyed by full `provider/model` ids. Use `models["provider/model"].agentRuntime` for per-agent runtime exceptions.
- `runtime`: optional per-agent runtime descriptor. Use `type: "acp"` with `runtime.acp` defaults (`agent`, `backend`, `mode`, `cwd`) when the agent should default to ACP harness sessions.
- `identity.avatar`: workspace-relative path, `http(s)` URL, or `data:` URI.
- `identity` derives defaults: `ackReaction` from `emoji`, `mentionPatterns` from `name`/`emoji`.
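Putting the per-agent pieces together, a sketch of a per-agent runtime exception (the agent `id` value here is hypothetical):
```json5
{
  agents: {
    list: [
      {
        // Hypothetical agent id for illustration.
        id: "coder",
        thinkingDefault: "high",
        models: {
          // Per-agent exception: this agent runs Opus through Claude CLI
          // while other agents keep the default runtime selection.
          "anthropic/claude-opus-4-7": {
            agentRuntime: { id: "claude-cli" },
          },
        },
      },
    ],
  },
}
```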

View File

@@ -87,7 +87,7 @@ cat ~/.openclaw/openclaw.json
- Legacy on-disk state migration (sessions/agent dir/WhatsApp auth).
- Legacy plugin manifest contract key migration (`speechProviders`, `realtimeTranscriptionProviders`, `realtimeVoiceProviders`, `mediaUnderstandingProviders`, `imageGenerationProviders`, `videoGenerationProviders`, `webFetchProviders`, `webSearchProviders` → `contracts`).
- Legacy cron store migration (`jobId`, `schedule.cron`, top-level delivery/payload fields, payload `provider`, simple `notify: true` webhook fallback jobs).
- Legacy agent runtime-policy migration to `agents.defaults.agentRuntime` and `agents.list[].agentRuntime`.
- Legacy whole-agent runtime-policy cleanup; provider/model runtime policy is the active route selector.
- Stale plugin config cleanup when plugins are enabled; when `plugins.enabled=false`, stale plugin references are treated as inert containment config and are preserved.
</Accordion>
@@ -109,7 +109,7 @@ cat ~/.openclaw/openclaw.json
- Channel status warnings (probed from the running gateway).
- Channel-specific permission checks live under `openclaw channels capabilities`; for example, Discord voice channel permissions are audited with `openclaw channels capabilities --channel discord --target channel:<channel-id>`.
- WhatsApp responsiveness checks for degraded Gateway event-loop health with local TUI clients still running; `--fix` stops only verified local TUI clients.
- Codex route repair for legacy `openai-codex/*` model refs in primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and session route pins; `--fix` rewrites them to `openai/*` and selects `agentRuntime.id: "codex"` only when the Codex plugin is installed, enabled, contributes the `codex` harness, and has usable OAuth. Otherwise it selects `agentRuntime.id: "pi"`.
- Codex route repair for legacy `openai-codex/*` model refs in primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and session route pins; `--fix` rewrites them to `openai/*`, removes stale session/whole-agent runtime pins, and leaves canonical OpenAI agent refs on the default Codex harness.
- Supervisor config audit (launchd/systemd/schtasks) with optional repair.
- Embedded proxy environment cleanup for gateway services that captured shell `HTTP_PROXY` / `HTTPS_PROXY` / `NO_PROXY` values during install or update.
- Gateway runtime best-practice checks (Node vs Bun, version-manager paths).
@@ -269,8 +269,8 @@ That stages grounded durable candidates into the short-term dreaming store while
In `--fix` / `--repair` mode, doctor rewrites affected default-agent and per-agent refs, including primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and stale persisted session route state:
- `openai-codex/gpt-*` becomes `openai/gpt-*`.
- The matching agent runtime becomes `agentRuntime.id: "codex"` only when Codex is installed, enabled, contributes the `codex` harness, and has usable OAuth.
- Otherwise the matching agent runtime becomes `agentRuntime.id: "pi"`.
- Stale whole-agent runtime config and persisted session runtime pins are removed because runtime selection is provider/model-scoped.
- Explicit provider/model runtime policy is preserved.
- Existing model fallback lists are preserved with their legacy entries rewritten; copied per-model settings move from the legacy key to the canonical `openai/*` key.
- Persisted session `modelProvider`/`providerOverride`, `model`/`modelOverride`, fallback notices, auth-profile pins, and Codex harness pins are repaired across all discovered agent session stores.
- `/codex ...` means "control or bind a native Codex conversation from chat."
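A minimal before/after sketch of the rewrite (model id illustrative):

```json5
// Before: legacy provider ref plus a stale whole-agent runtime pin.
{
  agents: {
    defaults: {
      model: "openai-codex/gpt-5.5",
      agentRuntime: { id: "codex" },
    },
  },
}
```

```json5
// After `openclaw doctor --fix`: canonical ref, no whole-agent pin.
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
    },
  },
}
```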


@@ -594,12 +594,11 @@ and troubleshooting see the main [FAQ](/help/faq).
<Accordion title="How does Codex auth work?">
OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Use
`openai/gpt-5.5` for the common setup: ChatGPT/Codex subscription auth plus
native Codex app-server execution. `openai-codex/gpt-*` model refs are
legacy config repaired by `openclaw doctor --fix`. Direct OpenAI API-key
access remains available for non-agent OpenAI API surfaces and for agent
models through an ordered `openai-codex` API-key profile.
See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
</Accordion>


@@ -150,7 +150,7 @@ troubleshooting, see the main [FAQ](/help/faq).
- **Native Codex coding agent:** set `agents.defaults.model.primary` to `openai/gpt-5.5`. Sign in with `openclaw models auth login --provider openai-codex` when you want ChatGPT/Codex subscription auth.
- **Direct OpenAI API tasks outside the agent loop:** configure `OPENAI_API_KEY` for images, embeddings, speech, realtime, and other non-agent OpenAI API surfaces.
- **OpenAI agent API-key auth:** use `/model openai/gpt-5.5` with an ordered `openai-codex` API-key profile.
- **Sub-agents:** route coding tasks to a Codex-focused agent with its own `openai/gpt-5.5` model.
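A Codex-focused sub-agent can be sketched like this (agent id and name are illustrative):

```json5
{
  agents: {
    list: [
      {
        id: "coder", // illustrative
        name: "Coder",
        // OpenAI agent refs select the Codex harness by default.
        model: "openai/gpt-5.5",
      },
    ],
  },
}
```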
See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).


@@ -285,8 +285,8 @@ Docker notes:
- Goal: validate the plugin-owned Codex harness through the normal gateway
`agent` method:
- load the bundled `codex` plugin
- select `openai/gpt-5.5`, which routes OpenAI agent turns through Codex by default
- send a first gateway agent turn to `openai/gpt-5.5` with the Codex harness selected
- send a second turn to the same OpenClaw session and verify the app-server
thread can resume
- run `/codex status` and `/codex models` through the same gateway command
@@ -300,8 +300,8 @@ Docker notes:
- Optional image probe: `OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1`
- Optional MCP/tool probe: `OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1`
- Optional Guardian probe: `OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1`
- The smoke forces provider/model `agentRuntime.id: "codex"` so a broken Codex
harness cannot pass by silently falling back to PI.
- Auth: Codex app-server auth from the local Codex subscription login. Docker
smokes can also provide `OPENAI_API_KEY` for non-Codex probes when applicable,
plus optional copied `~/.codex/auth.json` and `~/.codex/config.toml`.


@@ -96,9 +96,6 @@ Computer Use available before a thread starts:
agents: {
defaults: {
model: "openai/gpt-5.5",
},
},
}
@@ -114,9 +111,8 @@ register the bundled Codex marketplace from
fails. If setup still cannot make the MCP server available, the turn fails
before the thread starts.
After changing Computer Use config, use `/new` or `/reset` in the affected chat
before testing if an existing Codex thread has already started.
## Commands


@@ -50,7 +50,8 @@ First sign in with Codex OAuth if you have not already:
openclaw models auth login --provider openai-codex
```
Then enable the bundled `codex` plugin and use the canonical OpenAI model ref.
OpenAI agent turns select the Codex runtime by default:
```json5
{
@@ -64,9 +65,6 @@ Then enable the bundled `codex` plugin and force the Codex runtime:
agents: {
defaults: {
model: "openai/gpt-5.5",
},
},
}
@@ -98,7 +96,7 @@ The bundled `codex` plugin contributes several separate capabilities:
| Capability | How you use it | What it does |
| --------------------------------- | --------------------------------------------------- | ----------------------------------------------------------------------------- |
| Native embedded runtime | `openai/gpt-*` agent model refs | Runs OpenClaw embedded agent turns through Codex app-server. |
| Native chat-control commands | `/codex bind`, `/codex resume`, `/codex steer`, ... | Binds and controls Codex app-server threads from a messaging conversation. |
| Codex app-server provider/catalog | `codex` internals, surfaced through the harness | Lets the runtime discover and validate app-server models. |
| Codex media-understanding path | `codex/*` image-model compatibility paths | Runs bounded Codex app-server turns for supported image understanding models. |
@@ -110,7 +108,7 @@ Enabling the plugin makes those capabilities available. It does **not**:
realtime
- convert `openai-codex/*` model refs without `openclaw doctor --fix`
- make ACP/acpx the default Codex path
- use stale whole-agent or session runtime pins for routing
- replace OpenClaw channel delivery, session files, auth-profile storage, or
message routing
@@ -141,35 +139,37 @@ For the plugin hook semantics themselves, see [Plugin hooks](/plugins/hooks)
and [Plugin guard behavior](/tools/plugin).
OpenAI agent model refs use the harness by default. New configs should keep
OpenAI model refs canonical as `openai/gpt-*`; provider/model
`agentRuntime.id: "codex"` is still valid but no longer required for OpenAI
agent turns. Legacy `codex/*` model refs still auto-select the harness for
compatibility, but
runtime-backed legacy provider prefixes are not shown as normal model/provider
choices.
If any configured model route is still `openai-codex/*`, `openclaw doctor --fix`
rewrites it to `openai/*` and preserves existing `openai-codex` auth profile
overrides. It does not pin the whole agent to `agentRuntime.id: "codex"` because
canonical OpenAI refs already select the Codex harness automatically.
## Route map
Use this table before changing config:
| Desired behavior | Model ref | Runtime config | Auth/profile route | Expected status label |
| ---------------------------------------------------- | -------------------------- | -------------------------------------------------------- | ------------------------------ | ---------------------------- |
| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-*` | omitted or provider/model `agentRuntime.id: "codex"` | Codex OAuth or Codex account | `Runtime: OpenAI Codex` |
| OpenAI API-key auth for agent models | `openai/gpt-*` | omitted or provider/model `agentRuntime.id: "codex"` | `openai-codex` API-key profile | `Runtime: OpenAI Codex` |
| Legacy config that needs doctor repair | `openai-codex/gpt-*` | preserved or automatic | Existing configured auth | Recheck after `doctor --fix` |
| Mixed providers with conservative auto mode | provider-specific refs | omitted unless a provider/model needs a runtime override | Per selected provider | Depends on selected runtime |
| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | ACP backend auth | ACP task/session status |
The important split is provider versus runtime:
- `openai-codex/*` is a legacy route that doctor rewrites.
- Provider/model `agentRuntime.id: "codex"` requires the Codex harness and fails
closed if it is unavailable.
- Provider/model `agentRuntime.id: "auto"` lets registered harnesses claim
matching provider routes; OpenAI agent refs resolve to Codex instead of PI.
- `/codex ...` answers "which native Codex conversation should this chat bind
or control?"
- ACP answers "which external harness process should acpx launch?"
@@ -188,13 +188,14 @@ Treat `openai-codex/*` as legacy config that doctor should rewrite:
GPT-5.5 can appear on both direct OpenAI API-key and Codex subscription routes
when your account exposes them. Use `openai/gpt-5.5` with the Codex app-server
harness for native Codex runtime. For direct API-key traffic through PI, opt in
with provider/model `agentRuntime.id: "pi"` and a normal `openai` auth profile.
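A minimal sketch of that PI opt-in, assuming a normal `openai` API-key auth profile is configured:

```json5
{
  models: {
    providers: {
      openai: {
        // Provider-scoped policy: OpenAI agent turns run through PI.
        agentRuntime: { id: "pi" },
      },
    },
  },
  agents: { defaults: { model: "openai/gpt-5.5" } },
}
```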
Legacy `codex/gpt-*` refs remain accepted as compatibility aliases. Doctor
compatibility migration rewrites legacy runtime refs to canonical model refs
and records the runtime policy separately. New native app-server harness configs
should use `openai/gpt-*`; explicit provider/model `agentRuntime.id: "codex"`
is only needed when you want the policy written down.
`agents.defaults.imageModel` follows the same prefix split. Use
`openai/gpt-*` for the normal OpenAI route and `codex/gpt-*` when image
@@ -213,27 +214,13 @@ in `auto` mode, each plugin candidate's support result.
`openclaw doctor` warns when configured model refs or persisted session route
state still use `openai-codex/*`. `openclaw doctor --fix` rewrites those routes
to `openai/<model>`. Canonical OpenAI agent refs already select the native Codex
harness, so doctor does not pin the whole agent to Codex.
Doctor also repairs stale persisted session pins across discovered agent session
stores so old conversations do not stay wedged on the removed route.
Whole-session and whole-agent runtime pins are legacy state. Runtime selection
now comes from provider/model policy; `openclaw doctor --fix` removes stale
session pins and old whole-agent runtime config so they do not mask the selected
provider/model route.
`/status` shows the effective model runtime. The default PI harness appears as
`Runtime: OpenClaw Pi Default`, and the Codex app-server harness appears as
@@ -274,22 +261,21 @@ Codex behavior-shaping lane without duplicating `AGENTS.md`.
## Add Codex alongside other models
Do not set a whole-agent runtime. Whole-agent runtime pins are legacy and
ignored, and they were the source of mixed-provider traps after upgrades. Keep
runtime policy on the provider or model that needs it.
Use one of these shapes instead:
- Use `openai/gpt-*` for OpenAI agent turns; Codex is selected by default.
- Put runtime overrides on `models.providers.<provider>.agentRuntime` or on a
model entry such as `agents.defaults.models["anthropic/claude-opus-4-7"].agentRuntime`.
- Use legacy `codex/*` refs only for compatibility. New configs should prefer
`openai/*`; add an explicit Codex runtime policy only when you need to make
the provider/model rule strict.
For example, this keeps mixed-provider routing ergonomic while using OpenAI
through Codex by default and Claude through PI:
```json5
{
@@ -302,9 +288,7 @@ adds a separate Codex agent:
},
agents: {
defaults: {
model: "anthropic/claude-opus-4-6",
},
list: [
{
@@ -316,9 +300,6 @@ adds a separate Codex agent:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
},
],
},
@@ -355,45 +336,36 @@ routing.
## Codex-only deployments
For OpenAI agent turns, `openai/gpt-*` already resolves to Codex. If you need a
strict written policy, put it on the OpenAI provider or model. Explicit plugin
runtimes fail closed and are never silently retried through PI:
```json5
{
models: {
providers: {
openai: {
agentRuntime: {
id: "codex",
},
},
},
},
agents: { defaults: { model: "openai/gpt-5.5" } },
}
```
With Codex forced, OpenClaw fails early if the Codex plugin is disabled, the
app-server is too old, or the app-server cannot start.
## Per-agent Codex
You can make one agent Codex-strict while the default agent keeps normal
selection by using a per-agent model runtime override:
```json5
{
agents: {
list: [
{
id: "main",
@@ -404,8 +376,12 @@ auto-selection:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
models: {
"openai/gpt-5.5": {
agentRuntime: {
id: "codex",
},
},
},
},
],
@@ -827,9 +803,6 @@ Minimal config:
agents: {
defaults: {
model: "openai/gpt-5.5",
},
},
}
@@ -876,12 +849,18 @@ Codex-only harness validation:
```json5
{
models: {
providers: {
openai: {
agentRuntime: {
id: "codex",
},
},
},
},
agents: {
defaults: {
model: "openai/gpt-5.5",
},
},
plugins: {
@@ -1185,16 +1164,16 @@ understanding continue to use the matching provider/model settings such as
## Troubleshooting
**Codex does not appear as a normal `/model` provider:** that is expected for
new configs. Select an `openai/gpt-*` model, enable
`plugins.entries.codex.enabled`, and check whether `plugins.allow` excludes
`codex`. Legacy `codex/*` refs remain compatibility aliases, not normal model
provider choices.
**OpenClaw uses PI instead of Codex:** make sure the model ref is `openai/gpt-*`
on the official OpenAI provider and that the Codex plugin is installed/enabled.
If you need a strict policy while testing, set provider/model
`agentRuntime.id: "codex"`. A forced Codex runtime fails instead of falling back
to PI. Once Codex app-server is selected, its failures surface directly.
**The app-server is rejected:** upgrade Codex so the app-server handshake
reports version `0.125.0` or newer. Same-version prereleases or build-suffixed
@@ -1207,11 +1186,11 @@ or disable discovery.
**WebSocket transport fails immediately:** check `appServer.url`, `authToken`,
and that the remote app-server speaks the same Codex app-server protocol version.
**A non-Codex model uses PI:** that is expected unless provider/model runtime
policy routes it to another harness. Plain non-OpenAI provider refs stay on
their normal provider path in `auto` mode. If you force
`agentRuntime.id: "codex"` on a provider or model, matching embedded turns must
be Codex-supported OpenAI models.
**Computer Use is installed but tools do not run:** check
`/codex computer-use status` from a fresh session. If a tool reports


@@ -103,14 +103,11 @@ export default definePluginEntry({
OpenClaw chooses a harness after provider/model resolution:
1. Model-scoped runtime policy wins.
2. Provider-scoped runtime policy comes next.
3. `auto` asks registered harnesses if they support the resolved
provider/model.
4. If no registered harness matches, OpenClaw uses PI unless PI fallback is
disabled.
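For example, this sketch (model ids illustrative) puts a provider-scoped Codex policy on OpenAI and a model-scoped PI override on one model; per the precedence above, the model-scoped entry wins for that model:

```json
{
  "models": {
    "providers": {
      "openai": {
        "agentRuntime": { "id": "codex" }
      }
    }
  },
  "agents": {
    "defaults": {
      "model": "openai/gpt-5.5",
      "models": {
        "openai/gpt-5.4-mini": {
          "agentRuntime": { "id": "pi" }
        }
      }
    }
  }
}
```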
Plugin harness failures surface as run failures. In `auto` mode, PI fallback is
@@ -119,11 +116,10 @@ provider/model. Once a plugin harness has claimed a run, OpenClaw does not
replay that same turn through PI because that can change auth/runtime semantics
or duplicate side effects.
Whole-session and whole-agent runtime pins are ignored by selection. That
includes stale session `agentHarnessId` values, `agents.defaults.agentRuntime`,
`agents.list[].agentRuntime`, and `OPENCLAW_AGENT_RUNTIME`. `/status` shows the
effective runtime selected from the provider/model route.
If the selected harness is surprising, enable `agents/harness` debug logging and
inspect the gateway's structured `agent harness selected` record. It includes
the selected harness id, selection reason, runtime/fallback policy, and, in
@@ -141,8 +137,7 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
The bundled Codex plugin follows this pattern:
- preferred user model refs: `openai/gpt-5.5`
- compatibility refs: legacy `codex/gpt-*` refs remain accepted, but new
configs should not use them as normal provider/model refs
- harness id: `codex`
@@ -151,10 +146,9 @@ The bundled Codex plugin follows this pattern:
- app-server request: OpenClaw sends the bare model id to Codex and lets the
harness talk to the native app-server protocol
The Codex plugin is additive. Plain `openai/gpt-*` agent refs on the official
OpenAI provider select the Codex harness by default. Older `codex/gpt-*` refs
still select the Codex provider and harness for compatibility.
For operator setup, model prefix examples, and Codex-only configs, see
[Codex Harness](/plugins/codex-harness).
@@ -202,74 +196,94 @@ aliases for the native harness.
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
visible transcript mirror, tool policy, approvals, media delivery, and session
selection. Use provider/model `agentRuntime.id: "codex"` when you need to prove
that only the Codex app-server path can claim the run. Explicit plugin runtimes
fail closed; Codex app-server selection failures and runtime failures are not
retried through PI.
## Runtime strictness
By default, OpenClaw uses `auto` provider/model runtime policy: registered
plugin harnesses can claim a provider/model pair, and PI handles the turn when
none match. OpenAI agent refs on the official OpenAI provider default to Codex.
Use an explicit provider/model plugin runtime such as
`agentRuntime.id: "codex"` when missing harness selection should fail instead
of routing through PI. Selected plugin harness failures always fail hard. This
does not block an explicit provider/model `agentRuntime.id: "pi"`.
For Codex-only embedded runs:
```json
{
"models": {
"providers": {
"openai": {
"agentRuntime": {
"id": "codex"
}
}
}
},
"agents": {
"defaults": {
"model": "openai/gpt-5.5"
}
}
}
```
If you want a CLI backend for one canonical model, put the runtime on that
model entry:
```json
{
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-7",
"models": {
"anthropic/claude-opus-4-7": {
"agentRuntime": {
"id": "claude-cli"
}
}
}
}
}
}
```
Per-agent overrides use the same model-scoped shape:
```json
{
"agents": {
    "list": [
      {
        "id": "codex-only",
        "model": "openai/gpt-5.5",
"models": {
"openai/gpt-5.5": {
"agentRuntime": { "id": "codex" }
}
}
}
]
}
}
```
Legacy whole-agent runtime examples like this are ignored:
```json
{
"agents": {
"defaults": {
"agentRuntime": {
"id": "codex"
}
}
}
}
```
With an explicit plugin runtime, a session fails early when the requested


@@ -106,7 +106,11 @@ Anthropic's current public docs:
agents: {
defaults: {
model: { primary: "anthropic/claude-opus-4-7" },
models: {
"anthropic/claude-opus-4-7": {
agentRuntime: { id: "claude-cli" },
},
},
},
},
}
@@ -114,7 +118,7 @@ Anthropic's current public docs:
Legacy `claude-cli/claude-opus-4-7` model refs still work for
compatibility, but new config should keep provider/model selection as
`anthropic/*` and put the execution backend in provider/model runtime policy.
<Tip>
If you want the clearest billing path, use an Anthropic API key instead. OpenClaw also supports subscription-style options from [OpenAI Codex](/providers/openai), [Qwen Cloud](/providers/qwen), [MiniMax](/providers/minimax), and [Z.AI / GLM](/providers/glm).


@@ -13,7 +13,7 @@ Gemini Grounding.
- Provider: `google`
- Auth: `GEMINI_API_KEY` or `GOOGLE_API_KEY`
- API: Google Gemini API
- Runtime option: provider/model `agentRuntime.id: "google-gemini-cli"`
reuses Gemini CLI OAuth while keeping model refs canonical as `google/*`.
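A sketch of that runtime option (model id illustrative), keeping the model ref canonical as `google/*`:

```json5
{
  models: {
    providers: {
      google: {
        // Reuse Gemini CLI OAuth for agent turns.
        agentRuntime: { id: "google-gemini-cli" },
      },
    },
  },
  agents: { defaults: { model: "google/gemini-2.5-pro" } },
}
```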
## Getting started


@@ -36,9 +36,9 @@ changing config.
| ---------------------------------------------------- | ------------------------------------------------------- | --------------------------------------------------------------------- |
| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-5.5` | Default OpenAI agent setup. Sign in with `openai-codex` auth. |
| Direct API-key billing for agent models | `openai/gpt-5.5` plus an `openai-codex` API-key profile | Use `auth.order.openai-codex` to prefer that profile. |
| Direct API-key billing through explicit PI | `openai/gpt-5.5` plus provider/model runtime `pi` | Select a normal `openai` API-key profile. |
| Latest ChatGPT Instant API alias | `openai/chat-latest` | Direct API-key only. Moving alias for experiments, not the default. |
| ChatGPT/Codex subscription auth through explicit PI | `openai/gpt-5.5` plus provider/model runtime `pi` | Select an `openai-codex` auth profile for the compatibility route. |
| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
| Transparent-background images | `openai/gpt-image-1.5` | Use `outputFormat=png` or `webp` and `openai.background=transparent`. |
@@ -46,14 +46,14 @@ changing config.
The names are similar but not interchangeable:
| Name you see | Layer | Meaning |
| --------------------------------------- | ------------------- | ------------------------------------------------------------------------------------------------- |
| `openai` | Provider prefix | Canonical OpenAI model route; agent turns use the Codex runtime. |
| `openai-codex` | Auth/profile prefix | OpenAI Codex OAuth/subscription auth profile provider. |
| `codex` plugin | Plugin | Bundled OpenClaw plugin that provides native Codex app-server runtime and `/codex` chat controls. |
| provider/model `agentRuntime.id: codex` | Agent runtime | Force the native Codex app-server harness for matching embedded turns. |
| `/codex ...` | Chat command set | Bind/control Codex app-server threads from a conversation. |
| `runtime: "acp", agentId: "codex"` | ACP session route | Explicit fallback path that runs Codex through ACP/acpx. |
This means a config can intentionally contain both `openai/*` model refs and
`openai-codex` auth profiles. `openclaw doctor --fix` rewrites legacy
@@ -79,20 +79,20 @@ explicit runtime config.
## OpenClaw feature coverage
| OpenAI capability | OpenClaw surface | Status |
| ------------------------- | -------------------------------------------------------------------------------- | ------------------------------------------------------ |
| Chat / Responses | `openai/<model>` model provider | Yes |
| Codex subscription models | `openai/<model>` with `openai-codex` OAuth | Yes |
| Legacy Codex model refs | `openai-codex/<model>` | Repaired by doctor to `openai/<model>` |
| Codex app-server harness | `openai/<model>` with omitted runtime or provider/model `agentRuntime.id: codex` | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | `image_generate` | Yes |
| Videos | `video_generate` | Yes |
| Text-to-speech | `messages.tts.provider: "openai"` / `tts` | Yes |
| Batch speech-to-text | `tools.media.audio` / media understanding | Yes |
| Streaming speech-to-text | Voice Call `streaming.provider: "openai"` | Yes |
| Realtime voice | Voice Call `realtime.provider: "openai"` / Control UI Talk | Yes |
| Embeddings | memory embedding provider | Yes |
## Memory embeddings
@@ -152,9 +152,9 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Runtime config | Route | Auth |
| ---------------------- | -------------------------- | --------------------------- | ---------------- |
| `openai/gpt-5.5` | omitted / `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
| `openai/gpt-5.4-mini` | omitted / `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
| `openai/gpt-5.5` | `agentRuntime.id: "pi"` | PI embedded runtime | `openai` profile or selected `openai-codex` profile |
| `openai/gpt-5.5` | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
| `openai/gpt-5.4-mini` | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
| `openai/gpt-5.5` | provider/model `agentRuntime.id: "pi"` | PI embedded runtime | `openai` profile or selected `openai-codex` profile |
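The provider-scoped pinning in the rows above can be sketched in config form. This is a minimal illustration only (the `baseUrl` value is a placeholder, and the exact schema is defined by OpenClaw's config types, not this snippet):

```typescript
// Hedged sketch: pin the runtime at the provider level instead of
// agents.defaults. Omitting agentRuntime entirely keeps automatic routing.
const config = {
  models: {
    providers: {
      openai: {
        baseUrl: "https://api.openai.com/v1", // placeholder
        agentRuntime: { id: "codex" }, // or { id: "pi" } for the PI embedded runtime
        models: [],
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.5" },
    },
  },
};

export default config;
```

With `agentRuntime` omitted from the provider block, `openai/*` refs fall through to the default automatic route described above.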
<Note>
`openai/*` agent models use the Codex app-server harness. To use API-key
@@ -239,8 +239,8 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Runtime config | Route | Auth |
|-----------|----------------|-------|------|
| `openai/gpt-5.5` | omitted / `agentRuntime.id: "codex"` | Native Codex app-server harness | Codex sign-in or selected `openai-codex` profile |
| `openai/gpt-5.5` | `agentRuntime.id: "pi"` | PI embedded runtime with internal Codex-auth transport | Selected `openai-codex` profile |
| `openai/gpt-5.5` | omitted / provider/model `agentRuntime.id: "codex"` | Native Codex app-server harness | Codex sign-in or selected `openai-codex` profile |
| `openai/gpt-5.5` | provider/model `agentRuntime.id: "pi"` | PI embedded runtime with internal Codex-auth transport | Selected `openai-codex` profile |
| `openai-codex/gpt-5.5` | repaired by doctor | Legacy route rewritten to `openai/gpt-5.5` | Existing `openai-codex` profile |
<Warning>
@@ -265,7 +265,6 @@ Choose your preferred auth method and follow the setup steps.
agents: {
defaults: {
model: { primary: "openai/gpt-5.5" },
agentRuntime: { id: "codex" },
},
},
}
@@ -284,7 +283,7 @@ Choose your preferred auth method and follow the setup steps.
openclaw models status
openclaw models auth list --provider openai-codex
openclaw config get agents.defaults.model --json
openclaw config get agents.defaults.agentRuntime --json
openclaw config get models.providers.openai.agentRuntime --json
```
For a specific agent, add `--agent <id>`:
@@ -367,7 +366,7 @@ Choose your preferred auth method and follow the setup steps.
## Native Codex app-server auth
The native Codex app-server harness uses `openai/*` model refs plus omitted
runtime config or `agentRuntime.id: "codex"`, but its auth is still
runtime config or provider/model `agentRuntime.id: "codex"`, but its auth is still
account-based. OpenClaw selects auth in this order:
@@ -504,7 +503,7 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov
OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai/gpt-5.5`, legacy pre-repair refs such as `openai-codex/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `agentRuntime.id: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions routed through Codex keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
The GPT-5 contribution adds a tagged behavior contract for persona persistence, execution safety, tool discipline, output shape, completion checks, and verification. Channel-specific reply and silent-message behavior stays in the shared OpenClaw system prompt and outbound delivery policy. The GPT-5 guidance is always enabled for matching models. The friendly interaction-style layer is separate and configurable.
@@ -912,7 +911,7 @@ the Server-side compaction accordion below.
- Injects `context_management: [{ type: "compaction", compact_threshold: ... }]`
- Default `compact_threshold`: 70% of `contextWindow` (or `80000` when unavailable)
This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured separately with `agents.defaults.agentRuntime.id`.
This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured by OpenAI's default agent route or provider/model runtime policy.
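The default threshold rule above (70% of `contextWindow`, or `80000` when unavailable) can be sketched as a small helper. This is an illustration of the documented default, not the actual OpenClaw implementation:

```typescript
// Hedged sketch of the documented compaction default: 70% of the model's
// context window, falling back to 80000 tokens when the window is unknown.
function defaultCompactThreshold(contextWindow?: number): number {
  if (!contextWindow || contextWindow <= 0) {
    return 80_000; // documented fallback when contextWindow is unavailable
  }
  return Math.floor(contextWindow * 0.7);
}
```

For a 200k-token window this yields a 140k-token threshold; with no window information it falls back to 80k.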
<Tabs>
<Tab title="Enable explicitly">

View File

@@ -20,7 +20,7 @@ Codex has two OpenClaw routes:
| Route | Config/command | Setup page |
| -------------------------- | ------------------------------------------------------ | --------------------------------------- |
| Native Codex app-server | `/codex ...`, `agentRuntime.id: "codex"` | [Codex harness](/plugins/codex-harness) |
| Native Codex app-server | `/codex ...`, `openai/gpt-*` agent refs | [Codex harness](/plugins/codex-harness) |
| Explicit Codex ACP adapter | `/acp spawn codex`, `runtime: "acp", agentId: "codex"` | This page |
Prefer the native route unless you explicitly need ACP/acpx behavior.

View File

@@ -19,8 +19,8 @@ Each ACP session spawn is tracked as a [background task](/automation/tasks).
<Note>
**ACP is the external-harness path, not the default Codex path.** The
native Codex app-server plugin owns `/codex ...` controls and the
`agentRuntime.id: "codex"` embedded runtime; ACP owns
native Codex app-server plugin owns `/codex ...` controls and the default
`openai/gpt-*` embedded runtime for agent turns; ACP owns
`/acp ...` controls and `sessions_spawn({ runtime: "acp" })` sessions.
If you want Codex or Claude Code to connect as an external MCP client

View File

@@ -391,8 +391,8 @@ even when source overlay mounts are present.
re-enable plugins before running doctor cleanup if you want stale ids removed
- OpenAI-family Codex routes keep separate plugin boundaries:
`openai-codex/*` belongs to the OpenAI plugin, while the bundled Codex
app-server plugin is selected by `agentRuntime.id: "codex"` or legacy
`codex/*` model refs
app-server plugin is selected by canonical `openai/*` agent refs, explicit
provider/model `agentRuntime.id: "codex"`, or legacy `codex/*` model refs
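The plugin boundary described in the bullet above can be sketched as a routing predicate. This is illustrative only; the real selection lives in OpenClaw's harness policy resolver, and the function name here is hypothetical:

```typescript
// Hypothetical sketch of the plugin boundary: legacy `openai-codex/*` refs
// belong to the OpenAI plugin (and are repaired by doctor), while canonical
// `openai/*` refs and legacy `codex/*` refs select the bundled Codex
// app-server plugin by default.
function pluginForModelRef(ref: string): "openai" | "codex-app-server" | "other" {
  if (ref.startsWith("openai-codex/")) {
    return "openai"; // legacy ref; doctor rewrites it to openai/<model>
  }
  if (ref.startsWith("openai/") || ref.startsWith("codex/")) {
    return "codex-app-server";
  }
  return "other";
}
```

An explicit provider/model `agentRuntime.id` would override this default split, which the sketch does not model.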
## Troubleshooting runtime hooks

View File

@@ -845,8 +845,6 @@ async function agentCommandInternal(
const acceptedAuthProviders = listOpenAIAuthProfileProvidersForAgentRuntime({
provider: providerForAuthProfileValidation,
harnessRuntime: validationHarnessPolicy.runtime,
sessionAgentHarnessId: sessionEntry.agentHarnessId,
sessionAgentRuntimeOverride: sessionEntry.agentRuntimeOverride,
}).map((candidateProvider) =>
resolveProviderIdForAuth(candidateProvider, { config: cfg, workspaceDir }),
);

View File

@@ -1,61 +1,43 @@
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { normalizeAgentId } from "../routing/session-key.js";
import { normalizeLowercaseStringOrEmpty } from "../shared/string-coerce.js";
import { resolveAgentRuntimePolicy } from "./agent-runtime-policy.js";
import { listAgentEntries } from "./agent-scope.js";
import {
normalizeEmbeddedAgentRuntime,
type EmbeddedAgentRuntime,
} from "./pi-embedded-runner/runtime.js";
import { resolveAgentHarnessPolicy } from "./harness/policy.js";
import { resolveDefaultModelForAgent } from "./model-selection.js";
type AgentRuntimeMetadata = {
id: string;
source: "env" | "agent" | "defaults" | "implicit";
source: "implicit" | "model" | "provider";
};
function normalizeRuntimeValue(value: unknown): EmbeddedAgentRuntime | undefined {
const normalized = typeof value === "string" ? normalizeLowercaseStringOrEmpty(value) : "";
return normalized ? normalizeEmbeddedAgentRuntime(normalized) : undefined;
}
export function resolveAgentRuntimeMetadata(
cfg: OpenClawConfig,
agentId: string,
env: NodeJS.ProcessEnv = process.env,
_cfg: OpenClawConfig,
_agentId: string,
_env: NodeJS.ProcessEnv = process.env,
): AgentRuntimeMetadata {
const envRuntime = normalizeRuntimeValue(env.OPENCLAW_AGENT_RUNTIME);
const normalizedAgentId = normalizeAgentId(agentId);
const agentEntry = listAgentEntries(cfg).find(
(entry) => normalizeAgentId(entry.id) === normalizedAgentId,
);
const agentPolicy = resolveAgentRuntimePolicy(agentEntry);
const defaultsPolicy = resolveAgentRuntimePolicy(cfg.agents?.defaults);
if (envRuntime) {
return {
id: envRuntime,
source: "env",
};
}
const agentRuntime = normalizeRuntimeValue(agentPolicy?.id);
if (agentRuntime) {
return {
id: agentRuntime,
source: "agent",
};
}
const defaultsRuntime = normalizeRuntimeValue(defaultsPolicy?.id);
if (defaultsRuntime) {
return {
id: defaultsRuntime,
source: "defaults",
};
}
return {
id: "pi",
id: "auto",
source: "implicit",
};
}
export function resolveModelAgentRuntimeMetadata(params: {
cfg: OpenClawConfig;
agentId: string;
provider?: string;
model?: string;
sessionKey?: string;
}): AgentRuntimeMetadata {
const resolved =
params.provider && params.model
? { provider: params.provider, model: params.model }
: resolveDefaultModelForAgent({ cfg: params.cfg, agentId: params.agentId });
const policy = resolveAgentHarnessPolicy({
provider: resolved.provider,
modelId: resolved.model,
config: params.cfg,
agentId: params.agentId,
sessionKey: params.sessionKey,
});
return {
id: policy.runtime,
source: policy.runtimeSource ?? "implicit",
};
}

View File

@@ -1,19 +0,0 @@
import type { AgentRuntimePolicyConfig } from "../config/types.agents-shared.js";
type AgentRuntimePolicyContainer = {
agentRuntime?: AgentRuntimePolicyConfig;
};
export function resolveAgentRuntimePolicy(
container: AgentRuntimePolicyContainer | undefined,
): AgentRuntimePolicyConfig | undefined {
const preferred = container?.agentRuntime;
if (hasAgentRuntimePolicy(preferred)) {
return preferred;
}
return undefined;
}
function hasAgentRuntimePolicy(value: AgentRuntimePolicyConfig | undefined): boolean {
return Boolean(value?.id?.trim());
}

View File

@@ -124,6 +124,20 @@ function makeEmbeddedResult(text: string): EmbeddedPiRunResult {
};
}
function providerRuntimeConfig(provider: string, runtime: string): OpenClawConfig {
return {
models: {
providers: {
[provider]: {
baseUrl: "https://api.openclaw.test/v1",
agentRuntime: { id: runtime },
models: [],
},
},
},
} as OpenClawConfig;
}
async function runAuthContractAttempt(params: {
tmpDir: string;
storePath: string;
@@ -301,9 +315,13 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
authProfileProvider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProvider,
authProfileOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
cfg: {
agents: {
defaults: {
agentRuntime: { id: "codex" },
models: {
providers: {
[AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider]: {
baseUrl: "https://api.openclaw.test/v1",
agentRuntime: { id: "codex" },
models: [],
},
},
},
} as OpenClawConfig,
@@ -355,19 +373,12 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
providerOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider,
authProfileProvider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider,
authProfileOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiProfileId,
cfg: {
agents: {
defaults: {
agentRuntime: { id: "pi" },
},
},
} as OpenClawConfig,
cfg: providerRuntimeConfig(AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider, "pi"),
});
expect(runEmbeddedPiAgentMock).toHaveBeenCalledTimes(1);
expect(runEmbeddedPiAgentMock.mock.calls[0]?.[0]).toMatchObject({
provider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider,
agentHarnessId: "pi",
authProfileId: AUTH_PROFILE_RUNTIME_CONTRACT.openAiProfileId,
});
});
@@ -383,7 +394,6 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
expect(runEmbeddedPiAgentMock).toHaveBeenCalledTimes(1);
expect(runEmbeddedPiAgentMock.mock.calls[0]?.[0]).toMatchObject({
agentHarnessId: "codex",
authProfileId: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
});
});
@@ -395,19 +405,12 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
providerOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider,
authProfileProvider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProvider,
authProfileOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
cfg: {
agents: {
defaults: {
agentRuntime: { id: "pi" },
},
},
} as OpenClawConfig,
cfg: providerRuntimeConfig(AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider, "pi"),
});
expect(runEmbeddedPiAgentMock).toHaveBeenCalledTimes(1);
expect(runEmbeddedPiAgentMock.mock.calls[0]?.[0]).toMatchObject({
provider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProvider,
agentHarnessId: "pi",
authProfileId: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
});
});
@@ -419,18 +422,11 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
providerOverride: AUTH_PROFILE_RUNTIME_CONTRACT.codexHarnessProvider,
authProfileProvider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProvider,
authProfileOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
cfg: {
agents: {
defaults: {
agentRuntime: { id: "codex" },
},
},
} as OpenClawConfig,
cfg: providerRuntimeConfig(AUTH_PROFILE_RUNTIME_CONTRACT.codexHarnessProvider, "codex"),
});
expect(runEmbeddedPiAgentMock).toHaveBeenCalledTimes(1);
expect(runEmbeddedPiAgentMock.mock.calls[0]?.[0]).toMatchObject({
agentHarnessId: "codex",
authProfileId: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
});
});
@@ -442,18 +438,11 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
providerOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider,
authProfileProvider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProvider,
authProfileOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
cfg: {
agents: {
defaults: {
agentRuntime: { id: "codex" },
},
},
} as OpenClawConfig,
cfg: providerRuntimeConfig(AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider, "codex"),
});
expect(runEmbeddedPiAgentMock).toHaveBeenCalledTimes(1);
expect(runEmbeddedPiAgentMock.mock.calls[0]?.[0]).toMatchObject({
agentHarnessId: "codex",
authProfileId: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
});
});
@@ -466,21 +455,11 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
authProfileProvider: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProvider,
authProfileOverride: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
sessionHasHistory: true,
cfg: {
agents: {
list: [
{
id: "main",
agentRuntime: { id: "codex" },
},
],
},
} as OpenClawConfig,
cfg: providerRuntimeConfig(AUTH_PROFILE_RUNTIME_CONTRACT.openAiProvider, "codex"),
});
expect(runEmbeddedPiAgentMock).toHaveBeenCalledTimes(1);
expect(runEmbeddedPiAgentMock.mock.calls[0]?.[0]).toMatchObject({
agentHarnessId: "codex",
authProfileId: AUTH_PROFILE_RUNTIME_CONTRACT.openAiCodexProfileId,
});
});

View File

@@ -62,7 +62,9 @@ describe("external CLI auth scope", () => {
{
id: "worker",
model: "opencode-go/kimi-k2.6",
agentRuntime: { id: "codex-app-server" },
models: {
"opencode-go/kimi-k2.6": { agentRuntime: { id: "codex-app-server" } },
},
subagents: { model: { primary: "z.ai/glm-4.7" } },
},
],
@@ -92,10 +94,12 @@ describe("external CLI auth scope", () => {
agents: {
defaults: {
model: "openai/gpt-5.5",
agentRuntime: { id: "claude-cli" },
cliBackends: {
"claude-cli": { command: "claude" },
},
models: {
"openai/gpt-5.5": { agentRuntime: { id: "claude-cli" } },
},
},
},
});

View File

@@ -58,6 +58,15 @@ function addExternalCliRuntimeScope(out: Set<string>, value: string | undefined)
}
}
function addExternalCliRuntimeScopeFromModelMap(
out: Set<string>,
models: Record<string, { agentRuntime?: { id?: string } }> | undefined,
): void {
for (const entry of Object.values(models ?? {})) {
addExternalCliRuntimeScope(out, entry?.agentRuntime?.id);
}
}
export function resolveExternalCliAuthScopeFromConfig(
cfg: OpenClawConfig,
): ExternalCliAuthScope | undefined {
@@ -91,14 +100,18 @@ export function resolveExternalCliAuthScopeFromConfig(
addProviderScopeFromModelConfig(providerIds, defaults?.videoGenerationModel);
addProviderScopeFromModelConfig(providerIds, defaults?.musicGenerationModel);
addProviderScopeFromModelConfig(providerIds, defaults?.pdfModel);
addExternalCliRuntimeScope(providerIds, defaults?.agentRuntime?.id);
addExternalCliRuntimeScope(providerIds, defaults?.embeddedHarness?.runtime);
addExternalCliRuntimeScopeFromModelMap(providerIds, defaults?.models);
for (const provider of Object.values(cfg.models?.providers ?? {})) {
addExternalCliRuntimeScope(providerIds, provider?.agentRuntime?.id);
for (const model of provider?.models ?? []) {
addExternalCliRuntimeScope(providerIds, model?.agentRuntime?.id);
}
}
for (const agent of cfg.agents?.list ?? []) {
addProviderScopeFromModelConfig(providerIds, agent.model);
addProviderScopeFromModelConfig(providerIds, agent.subagents?.model);
addExternalCliRuntimeScope(providerIds, agent.agentRuntime?.id);
addExternalCliRuntimeScope(providerIds, agent.embeddedHarness?.runtime);
addExternalCliRuntimeScopeFromModelMap(providerIds, agent.models);
}
if (providerIds.size === 0 && profileIds.size === 0) {

View File

@@ -256,8 +256,6 @@ async function resolveRuntimeModel(params: {
agentId: params.agentId,
sessionKey: params.sessionKey,
}).runtime,
sessionAgentHarnessId: params.sessionEntry?.agentHarnessId,
sessionAgentRuntimeOverride: params.sessionEntry?.agentRuntimeOverride,
}),
agentDir: params.agentDir,
sessionEntry: params.sessionEntry,

View File

@@ -653,7 +653,9 @@ describe("CLI attempt execution", () => {
cfg: {
agents: {
defaults: {
agentRuntime: { id: "claude-cli" },
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
},
},
},
} as OpenClawConfig,
@@ -708,7 +710,9 @@ describe("CLI attempt execution", () => {
cfg: {
agents: {
defaults: {
agentRuntime: { id: "codex-cli" },
models: {
"openai/gpt-5.4": { agentRuntime: { id: "codex-cli" } },
},
},
},
} as OpenClawConfig,
@@ -890,7 +894,7 @@ describe("embedded attempt harness pinning", () => {
await fs.rm(tmpDir, { recursive: true, force: true });
});
it("treats legacy OpenAI sessions with history as Codex-pinned", async () => {
it("does not store a session harness pin for default OpenAI Codex routing", async () => {
const sessionEntry: SessionEntry = {
sessionId: "legacy-session",
updatedAt: Date.now(),
@@ -929,12 +933,57 @@ describe("embedded attempt harness pinning", () => {
expect(runEmbeddedPiAgent).toHaveBeenCalledWith(
expect.objectContaining({
agentHarnessId: "codex",
agentHarnessId: undefined,
}),
);
});
it("pins sessions with history to the configured Codex harness instead of PI", async () => {
it("ignores stale session Codex harness pins on non-OpenAI model switches", async () => {
const sessionEntry: SessionEntry = {
sessionId: "mixed-provider-session",
updatedAt: Date.now(),
agentHarnessId: "codex",
};
runEmbeddedPiAgentMock.mockResolvedValueOnce({
meta: { durationMs: 1 },
} satisfies EmbeddedPiRunResult);
await runAgentAttempt({
providerOverride: "minimax",
originalProvider: "minimax",
modelOverride: "minimax-m2.7",
cfg: {} as OpenClawConfig,
sessionEntry,
sessionId: sessionEntry.sessionId,
sessionKey: "agent:main:main",
sessionAgentId: "main",
sessionFile: path.join(tmpDir, "session.jsonl"),
workspaceDir: tmpDir,
body: "switch to minimax",
isFallbackRetry: false,
resolvedThinkLevel: "medium",
timeoutMs: 1_000,
runId: "run-mixed-provider-auto-runtime",
opts: { senderIsOwner: false } as Parameters<typeof runAgentAttempt>[0]["opts"],
runContext: {} as Parameters<typeof runAgentAttempt>[0]["runContext"],
spawnedBy: undefined,
messageChannel: undefined,
skillsSnapshot: undefined,
resolvedVerboseLevel: undefined,
agentDir: tmpDir,
onAgentEvent: vi.fn(),
authProfileProvider: "minimax",
sessionHasHistory: true,
});
expect(runEmbeddedPiAgent).toHaveBeenCalledWith(
expect.objectContaining({
agentHarnessId: undefined,
}),
);
});
it("lets provider/model runtime policy choose Codex without storing a session harness pin", async () => {
const sessionEntry: SessionEntry = {
sessionId: "codex-history-session",
updatedAt: Date.now(),
@@ -948,9 +997,13 @@ describe("embedded attempt harness pinning", () => {
originalProvider: "codex",
modelOverride: "gpt-5.4",
cfg: {
agents: {
defaults: {
agentRuntime: { id: "codex" },
models: {
providers: {
codex: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: "codex" },
models: [],
},
},
},
} as OpenClawConfig,
@@ -979,7 +1032,7 @@ describe("embedded attempt harness pinning", () => {
expect(runEmbeddedPiAgent).toHaveBeenCalledWith(
expect.objectContaining({
agentHarnessId: "codex",
agentHarnessId: undefined,
}),
);
});
@@ -1038,7 +1091,7 @@ describe("embedded attempt harness pinning", () => {
expect(runEmbeddedPiAgent).toHaveBeenCalledWith(
expect.objectContaining({
agentHarnessId: "codex",
agentHarnessId: undefined,
authProfileId: "openai-codex:work",
authProfileIdSource: "auto",
}),
@@ -1084,12 +1137,12 @@ describe("embedded attempt harness pinning", () => {
expect(runEmbeddedPiAgent).toHaveBeenCalledWith(
expect.objectContaining({
agentHarnessId: "codex",
agentHarnessId: undefined,
}),
);
});
it("repairs stale OpenAI sessions pinned to PI back to the default Codex harness", async () => {
it("ignores stale OpenAI sessions pinned to PI and relies on default Codex routing", async () => {
const sessionEntry: SessionEntry = {
sessionId: "stale-pi-session",
updatedAt: Date.now(),
@@ -1130,7 +1183,7 @@ describe("embedded attempt harness pinning", () => {
expect(runEmbeddedPiAgentMock).toHaveBeenCalledWith(
expect.objectContaining({
provider: "openai",
agentHarnessId: "codex",
agentHarnessId: undefined,
}),
);
});
@@ -1151,9 +1204,13 @@ describe("embedded attempt harness pinning", () => {
originalProvider: "openai",
modelOverride: "gpt-5.4",
cfg: {
agents: {
defaults: {
agentRuntime: { id: "pi" },
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: "pi" },
models: [],
},
},
},
} as OpenClawConfig,
@@ -1184,7 +1241,7 @@ describe("embedded attempt harness pinning", () => {
expect.objectContaining({
provider: "openai-codex",
model: "gpt-5.4",
agentHarnessId: "pi",
agentHarnessId: undefined,
authProfileId: "openai-codex:work",
authProfileIdSource: "user",
}),

View File

@@ -21,10 +21,9 @@ import { runCliAgent } from "../cli-runner.js";
import { getCliSessionBinding, setCliSessionBinding } from "../cli-session.js";
import { FailoverError } from "../failover-error.js";
import { resolveAgentHarnessPolicy } from "../harness/selection.js";
import { isCliRuntimeAlias, resolveCliRuntimeExecutionProvider } from "../model-runtime-aliases.js";
import { resolveCliRuntimeExecutionProvider } from "../model-runtime-aliases.js";
import { isCliProvider } from "../model-selection.js";
import { isOpenAIProvider, resolveOpenAIRuntimeProviderForPi } from "../openai-codex-routing.js";
import { normalizeEmbeddedAgentRuntime } from "../pi-embedded-runner/runtime.js";
import { resolveOpenAIRuntimeProviderForPi } from "../openai-codex-routing.js";
import { runEmbeddedPiAgent, type EmbeddedPiRunResult } from "../pi-embedded.js";
import { buildAgentRuntimeAuthPlan } from "../runtime-plan/auth.js";
import {
@@ -409,28 +408,14 @@ export function runAgentAttempt(params: {
);
const bootstrapPromptWarningSignature =
bootstrapPromptWarningSignaturesSeen[bootstrapPromptWarningSignaturesSeen.length - 1];
const sessionPinnedAgentHarnessId = isRawModelRun
? "pi"
: resolveSessionPinnedAgentHarnessId({
cfg: params.cfg,
sessionAgentId: params.sessionAgentId,
sessionEntry: params.sessionEntry,
sessionHasHistory: params.sessionHasHistory,
sessionId: params.sessionId,
sessionKey: params.sessionKey ?? params.sessionId,
provider: params.providerOverride,
modelId: params.modelOverride,
});
const agentRuntimeOverride = isRawModelRun
? undefined
: params.sessionEntry?.agentRuntimeOverride?.trim();
const requestedAgentHarnessId = isRawModelRun ? "pi" : undefined;
const cliExecutionProvider = isRawModelRun
? params.providerOverride
: (resolveCliRuntimeExecutionProvider({
provider: params.providerOverride,
cfg: params.cfg,
agentId: params.sessionAgentId,
runtimeOverride: agentRuntimeOverride,
modelId: params.modelOverride,
}) ?? params.providerOverride);
const agentHarnessPolicy = isRawModelRun
? ({ runtime: "pi" } as const)
@@ -449,7 +434,7 @@ export function runAgentAttempt(params: {
authProfileProvider: params.authProfileProvider,
sessionAuthProfileId: params.sessionEntry?.authProfileOverride,
sessionAuthProfileSource: params.sessionEntry?.authProfileOverrideSource,
harnessId: sessionPinnedAgentHarnessId,
harnessId: requestedAgentHarnessId,
harnessRuntime: agentHarnessPolicy.runtime,
allowHarnessAuthProfileForwarding: !isCliProvider(cliExecutionProvider, params.cfg),
});
@@ -459,7 +444,7 @@ export function runAgentAttempt(params: {
sessionAuthProfileId: harnessAuthSelection.authProfileId,
config: params.cfg,
workspaceDir: params.workspaceDir,
harnessId: sessionPinnedAgentHarnessId,
harnessId: requestedAgentHarnessId,
harnessRuntime: agentHarnessPolicy.runtime,
allowHarnessAuthProfileForwarding: !isCliProvider(cliExecutionProvider, params.cfg),
});
@@ -467,7 +452,7 @@ export function runAgentAttempt(params: {
const embeddedPiProvider = resolveOpenAIRuntimeProviderForPi({
provider: params.providerOverride,
harnessRuntime: agentHarnessPolicy.runtime,
agentHarnessId: sessionPinnedAgentHarnessId,
agentHarnessId: requestedAgentHarnessId,
authProfileProvider: runtimeAuthPlan.authProfileProviderForAuth,
authProfileId,
config: params.cfg,
@@ -618,7 +603,7 @@ export function runAgentAttempt(params: {
sessionFile: params.sessionFile,
workspaceDir: params.workspaceDir,
config: params.cfg,
agentHarnessId: sessionPinnedAgentHarnessId,
agentHarnessId: requestedAgentHarnessId,
skillsSnapshot: params.skillsSnapshot,
prompt: effectivePrompt,
images: params.isFallbackRetry ? undefined : params.opts.images,
@@ -656,72 +641,6 @@ export function runAgentAttempt(params: {
});
}
function resolveSessionPinnedAgentHarnessId(params: {
cfg: OpenClawConfig;
sessionAgentId: string;
sessionEntry?: SessionEntry;
sessionHasHistory?: boolean;
sessionId: string;
sessionKey: string;
provider: string;
modelId?: string;
}): string | undefined {
if (params.sessionEntry?.sessionId !== params.sessionId) {
return resolveConfiguredAgentHarnessId(params);
}
if (params.sessionEntry.agentHarnessId) {
if (isOpenAIProvider(params.provider)) {
const configuredPolicy = resolveAgentHarnessPolicy({
config: params.cfg,
agentId: params.sessionAgentId,
sessionKey: params.sessionKey,
provider: params.provider,
modelId: params.modelId,
});
const configuredAgentHarnessId =
configuredPolicy.runtime === "auto" || isCliRuntimeAlias(configuredPolicy.runtime)
? undefined
: configuredPolicy.runtime;
const storedRuntime = normalizeEmbeddedAgentRuntime(params.sessionEntry.agentHarnessId);
if (configuredAgentHarnessId && configuredPolicy.runtimeSource !== "implicit") {
return configuredAgentHarnessId;
}
if (storedRuntime === "pi" && configuredAgentHarnessId) {
return configuredAgentHarnessId;
}
}
return params.sessionEntry.agentHarnessId;
}
const configuredAgentHarnessId = resolveConfiguredAgentHarnessId(params);
if (configuredAgentHarnessId) {
return configuredAgentHarnessId;
}
if (!params.sessionHasHistory) {
return undefined;
}
return "pi";
}
function resolveConfiguredAgentHarnessId(params: {
cfg: OpenClawConfig;
sessionAgentId: string;
sessionKey: string;
provider: string;
modelId?: string;
}): string | undefined {
const policy = resolveAgentHarnessPolicy({
config: params.cfg,
agentId: params.sessionAgentId,
sessionKey: params.sessionKey,
provider: params.provider,
modelId: params.modelId,
});
if (policy.runtime === "auto" || isCliRuntimeAlias(policy.runtime)) {
return undefined;
}
return policy.runtime;
}
export function buildAcpResult(params: {
payloadText: string;
startedAt: number;

View File

@@ -1,10 +1,10 @@
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { normalizeOptionalLowercaseString } from "../shared/string-coerce.js";
import { isRecord } from "../utils.js";
import { resolveAgentRuntimePolicy } from "./agent-runtime-policy.js";
import { isCliRuntimeAlias } from "./model-runtime-aliases.js";
import { resolveModelRuntimePolicy } from "./model-runtime-policy.js";
import { modelSelectionShouldEnsureCodexPlugin } from "./openai-codex-routing.js";
import { normalizeEmbeddedAgentRuntime } from "./pi-embedded-runner/runtime.js";
import { normalizeProviderId } from "./provider-id.js";
function normalizeRuntimeId(value: unknown): string | undefined {
if (typeof value !== "string") {
@@ -38,20 +38,73 @@ function listAgentModelRefs(value: unknown): string[] {
return refs;
}
function hasOpenAIModelRef(config: OpenClawConfig, value: unknown): boolean {
function parseConfiguredModelRef(
value: unknown,
): { provider: string; modelId: string } | undefined {
if (typeof value !== "string") {
return undefined;
}
const trimmed = value.trim();
const slash = trimmed.indexOf("/");
if (slash <= 0 || slash >= trimmed.length - 1) {
return undefined;
}
return {
provider: normalizeProviderId(trimmed.slice(0, slash)),
modelId: trimmed.slice(slash + 1).trim(),
};
}
function hasOpenAIModelRef(config: OpenClawConfig, value: unknown, agentId?: string): boolean {
return listAgentModelRefs(value).some((ref) => {
return modelSelectionShouldEnsureCodexPlugin({ model: ref, config });
if (!modelSelectionShouldEnsureCodexPlugin({ model: ref, config })) {
return false;
}
const parsed = parseConfiguredModelRef(ref);
const policy = resolveModelRuntimePolicy({
config,
provider: parsed?.provider,
modelId: parsed?.modelId,
agentId,
});
const runtime = normalizeRuntimeId(policy.policy?.id);
return !runtime || runtime === "auto" || runtime === "codex";
});
}
function openAIModelUsesImplicitCodexHarness(runtime: string | undefined): boolean {
if (!runtime || runtime === "auto") {
return true;
function pushConfiguredModelRuntimeIds(config: OpenClawConfig, runtimes: Set<string>): void {
for (const providerConfig of Object.values(config.models?.providers ?? {})) {
const providerRuntime = normalizeRuntimeId(providerConfig?.agentRuntime?.id);
if (providerRuntime && providerRuntime !== "auto" && providerRuntime !== "pi") {
runtimes.add(providerRuntime);
}
for (const modelConfig of providerConfig?.models ?? []) {
const modelRuntime = normalizeRuntimeId(modelConfig?.agentRuntime?.id);
if (modelRuntime && modelRuntime !== "auto" && modelRuntime !== "pi") {
runtimes.add(modelRuntime);
}
}
}
if (runtime === "pi") {
return false;
const pushModelMapRuntimeIds = (models: unknown) => {
if (!isRecord(models)) {
return;
}
for (const entry of Object.values(models)) {
if (!isRecord(entry)) {
continue;
}
const runtime = normalizeRuntimeId(
isRecord(entry.agentRuntime) ? entry.agentRuntime.id : undefined,
);
if (runtime && runtime !== "auto" && runtime !== "pi") {
runtimes.add(runtime);
}
}
};
pushModelMapRuntimeIds(config.agents?.defaults?.models);
for (const agent of config.agents?.list ?? []) {
pushModelMapRuntimeIds(agent.models);
}
}
export function collectConfiguredAgentHarnessRuntimes(
@@ -59,40 +112,27 @@ export function collectConfiguredAgentHarnessRuntimes(
env: NodeJS.ProcessEnv,
): string[] {
const runtimes = new Set<string>();
const pushCodexForOpenAIModel = (model: unknown, agentId?: string) => {
if (hasOpenAIModelRef(config, model, agentId)) {
runtimes.add("codex");
}
};
void env;
pushConfiguredModelRuntimeIds(config, runtimes);
const defaultsModel = config.agents?.defaults?.model;
pushCodexForOpenAIModel(defaultsModel);
if (Array.isArray(config.agents?.list)) {
for (const agent of config.agents.list) {
if (!isRecord(agent)) {
continue;
}
pushCodexForOpenAIModel(
agent.model ?? defaultsModel,
typeof agent.id === "string" ? agent.id : undefined,
);
}
}
return [...runtimes].toSorted((left, right) => left.localeCompare(right));
}
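The collection pass above can be sketched in isolation. This is a hedged, minimal sketch under simplified config shapes (the `RuntimeRef`/`ProviderSketch` types are stand-ins for the real `OpenClawConfig` types, not part of the source); it shows only the gather-and-skip behavior, where `auto` and `pi` never surface as configured harness runtimes.

```typescript
// Hedged sketch of the provider/model runtime-id collection above.
// Types here are simplified stand-ins, not the real config types.
type RuntimeRef = { agentRuntime?: { id?: string } };
type ProviderSketch = RuntimeRef & { models?: RuntimeRef[] };

function collectRuntimeIds(providers: Record<string, ProviderSketch>): string[] {
  const runtimes = new Set<string>();
  const push = (id?: string) => {
    const normalized = id?.trim().toLowerCase();
    // "auto" and "pi" are implicit defaults, never reported as plugin runtimes.
    if (normalized && normalized !== "auto" && normalized !== "pi") {
      runtimes.add(normalized);
    }
  };
  for (const provider of Object.values(providers)) {
    push(provider.agentRuntime?.id);
    for (const model of provider.models ?? []) {
      push(model.agentRuntime?.id);
    }
  }
  return [...runtimes].sort((left, right) => left.localeCompare(right));
}
```

The sorted output keeps doctor/CI surfaces deterministic regardless of config iteration order.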

View File

@@ -0,0 +1,56 @@
import type { OpenClawConfig } from "../../config/types.openclaw.js";
import { resolveModelRuntimePolicy } from "../model-runtime-policy.js";
import {
isOpenAICodexProvider,
openAIProviderUsesCodexRuntimeByDefault,
} from "../openai-codex-routing.js";
import {
normalizeEmbeddedAgentRuntime,
type EmbeddedAgentRuntime,
} from "../pi-embedded-runner/runtime.js";
export type AgentHarnessPolicy = {
runtime: EmbeddedAgentRuntime;
runtimeSource?: "model" | "provider" | "implicit";
};
export function resolveAgentHarnessPolicy(params: {
provider?: string;
modelId?: string;
config?: OpenClawConfig;
agentId?: string;
sessionKey?: string;
env?: NodeJS.ProcessEnv;
}): AgentHarnessPolicy {
const configured = resolveModelRuntimePolicy({
config: params.config,
provider: params.provider,
modelId: params.modelId,
agentId: params.agentId,
sessionKey: params.sessionKey,
});
const configuredRuntime = configured.policy?.id?.trim();
const runtimeSource = configured.source ?? "implicit";
const runtime =
configuredRuntime && configuredRuntime !== "default"
? normalizeEmbeddedAgentRuntime(configuredRuntime)
: "auto";
if (
openAIProviderUsesCodexRuntimeByDefault({ provider: params.provider, config: params.config })
) {
if (runtime === "auto") {
return { runtime: "codex", runtimeSource };
}
return { runtime, runtimeSource };
}
if (isOpenAICodexProvider(params.provider)) {
if (runtime === "auto") {
return { runtime: "codex", runtimeSource };
}
return { runtime, runtimeSource };
}
return {
runtime,
runtimeSource,
};
}
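The branching in `resolveAgentHarnessPolicy` above reduces to a small decision: an explicitly configured runtime wins, and only an unresolved `auto` is upgraded to the codex harness for OpenAI/Codex-style providers. A hedged sketch, where the boolean flag stands in for the real provider checks (`openAIProviderUsesCodexRuntimeByDefault` / `isOpenAICodexProvider`):

```typescript
// Hedged sketch of the harness decision above; the flag is a stand-in
// for the real OpenAI/Codex provider checks.
function pickHarnessRuntime(opts: {
  configuredRuntime?: string;
  providerDefaultsToCodex: boolean;
}): string {
  const configured = opts.configuredRuntime?.trim();
  // Explicit config wins; "default" and empty fall through to auto.
  const runtime = configured && configured !== "default" ? configured : "auto";
  if (opts.providerDefaultsToCodex && runtime === "auto") {
    return "codex";
  }
  return runtime;
}
```

This is why a migrated mixed-provider agent keeps automatic PI routing: only the OpenAI/Codex branches ever pin `auto` to codex, and an explicit `pi` is never overridden.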

View File

@@ -110,20 +110,58 @@ function registerSuccessfulCodexHarness(): void {
);
}
function providerRuntimeConfig(provider: string, runtime: string): OpenClawConfig {
return {
models: {
providers: {
[provider]: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: runtime },
models: [],
},
},
},
} as OpenClawConfig;
}
function agentModelRuntimeConfig(
modelRef: string,
runtime: string,
agentId?: string,
): OpenClawConfig {
if (agentId) {
return {
agents: {
list: [
{ id: "main", default: true },
{ id: agentId, models: { [modelRef]: { agentRuntime: { id: runtime } } } },
],
},
} as OpenClawConfig;
}
return {
agents: {
defaults: {
models: {
[modelRef]: { agentRuntime: { id: runtime } },
},
},
},
} as OpenClawConfig;
}
describe("runAgentHarnessAttempt", () => {
it("fails when a forced plugin harness is unavailable and fallback is omitted", async () => {
await expect(
runAgentHarnessAttempt(createAttemptParams(providerRuntimeConfig("codex", "codex"))),
).rejects.toThrow('Requested agent harness "codex" is not registered.');
expect(piRunAttempt).not.toHaveBeenCalled();
});
it("falls back to the PI harness in auto mode when no plugin harness matches", async () => {
const result = await runAgentHarnessAttempt(createAttemptParams());
expect(result.sessionIdUsed).toBe("pi");
expect(piRunAttempt).toHaveBeenCalledTimes(1);
@@ -132,30 +170,26 @@ describe("runAgentHarnessAttempt", () => {
it("surfaces an auto-selected plugin harness failure instead of replaying through PI", async () => {
registerFailingCodexHarness();
await expect(runAgentHarnessAttempt(createAttemptParams())).rejects.toThrow(
"codex startup failed",
);
expect(piRunAttempt).not.toHaveBeenCalled();
});
it("auto-selects a supporting plugin harness by default", async () => {
registerFailingCodexHarness();
await expect(runAgentHarnessAttempt(createAttemptParams())).rejects.toThrow(
"codex startup failed",
);
expect(piRunAttempt).not.toHaveBeenCalled();
});
it("surfaces a forced plugin harness failure instead of replaying through PI", async () => {
registerFailingCodexHarness();
await expect(
runAgentHarnessAttempt(createAttemptParams(providerRuntimeConfig("codex", "codex"))),
).rejects.toThrow("codex startup failed");
expect(piRunAttempt).not.toHaveBeenCalled();
});
@@ -178,9 +212,7 @@ describe("runAgentHarnessAttempt", () => {
it("honors explicit PI runtime for OpenAI agent model runs", async () => {
await expect(
runAgentHarnessAttempt({
...createAttemptParams(providerRuntimeConfig("openai", "pi")),
provider: "openai",
modelId: "gpt-5.4",
}),
@@ -204,9 +236,7 @@ describe("runAgentHarnessAttempt", () => {
{ ownerPluginId: "codex" },
);
const params = createAttemptParams();
const result = await runAgentHarnessAttempt(params);
expect(classify).toHaveBeenCalledWith(
@@ -221,22 +251,15 @@ describe("runAgentHarnessAttempt", () => {
it("fails for config-forced plugin harnesses when fallback is omitted", async () => {
await expect(
runAgentHarnessAttempt(createAttemptParams(providerRuntimeConfig("codex", "codex"))),
).rejects.toThrow('Requested agent harness "codex" is not registered');
expect(piRunAttempt).not.toHaveBeenCalled();
});
it("does not let a strict agent model plugin runtime fall back to PI", async () => {
await expect(
runAgentHarnessAttempt({
...createAttemptParams(agentModelRuntimeConfig("codex/gpt-5.4", "codex", "strict")),
sessionKey: "agent:strict:session-1",
}),
).rejects.toThrow('Requested agent harness "codex" is not registered');
@@ -245,7 +268,7 @@ describe("runAgentHarnessAttempt", () => {
});
describe("selectAgentHarness", () => {
it("auto-selects plugin support by default", () => {
const supports = vi.fn(() => ({ supported: true as const, priority: 100 }));
registerAgentHarness({
id: "codex",
@@ -259,8 +282,8 @@ describe("selectAgentHarness", () => {
modelId: "gpt-5.4",
});
expect(harness.id).toBe("codex");
expect(supports).toHaveBeenCalledTimes(1);
});
it("auto-selects the highest-priority plugin harness without duplicate support probes", () => {
@@ -309,7 +332,6 @@ describe("selectAgentHarness", () => {
const harness = selectAgentHarness({
provider: "codex",
modelId: "gpt-5.4",
});
expect(harness.id).toBe("codex-high");
@@ -318,7 +340,7 @@ describe("selectAgentHarness", () => {
expect(unsupportedSupports).toHaveBeenCalledTimes(1);
});
it("ignores session-level PI pins when selecting a harness", () => {
const supports = vi.fn(() => ({ supported: true as const, priority: 100 }));
registerAgentHarness({
id: "codex",
@@ -333,20 +355,12 @@ describe("selectAgentHarness", () => {
agentHarnessId: "pi",
});
expect(harness.id).toBe("codex");
expect(supports).toHaveBeenCalledTimes(1);
});
it("allows per-agent model runtime policy overrides", () => {
const config = agentModelRuntimeConfig("anthropic/sonnet-4.6", "codex", "strict");
expect(() =>
selectAgentHarness({
@@ -361,14 +375,14 @@ describe("selectAgentHarness", () => {
);
});
it("ignores legacy agentRuntime as a runtime policy source", () => {
const config = {
agents: {
defaults: {
agentRuntime: { id: "codex" },
},
},
} as OpenClawConfig;
expect(
selectAgentHarness({
@@ -379,7 +393,7 @@ describe("selectAgentHarness", () => {
).toBe("pi");
});
it("ignores legacy agent CLI runtime aliases for OpenAI agent model runs", async () => {
registerSuccessfulCodexHarness();
const config: OpenClawConfig = {
agents: {
@@ -403,7 +417,7 @@ describe("selectAgentHarness", () => {
expect(piRunAttempt).not.toHaveBeenCalled();
});
it("ignores existing session PI pins when provider policy forces a plugin harness", () => {
registerFailingCodexHarness();
expect(
@@ -411,12 +425,12 @@ describe("selectAgentHarness", () => {
provider: "codex",
modelId: "gpt-5.4",
agentHarnessId: "pi",
config: providerRuntimeConfig("codex", "codex"),
}).id,
).toBe("codex");
});
it("ignores env-forced PI for OpenAI default runtime selection", () => {
process.env.OPENCLAW_AGENT_RUNTIME = "pi";
registerFailingCodexHarness();

View File

@@ -1,37 +1,21 @@
import type { OpenClawConfig } from "../../config/types.openclaw.js";
import { formatErrorMessage } from "../../infra/errors.js";
import { createSubsystemLogger } from "../../logging/subsystem.js";
import type { CompactEmbeddedPiSessionParams } from "../pi-embedded-runner/compact.types.js";
import type {
EmbeddedRunAttemptParams,
EmbeddedRunAttemptResult,
} from "../pi-embedded-runner/run/types.js";
import type { EmbeddedPiCompactResult } from "../pi-embedded-runner/types.js";
import { createPiAgentHarness } from "./builtin-pi.js";
import { resolveAgentHarnessPolicy, type AgentHarnessPolicy } from "./policy.js";
import { listRegisteredAgentHarnesses } from "./registry.js";
import type { AgentHarness, AgentHarnessSupport } from "./types.js";
import { adaptAgentHarnessToV2, runAgentHarnessV2LifecycleAttempt } from "./v2.js";
const log = createSubsystemLogger("agents/harness");
export { resolveAgentHarnessPolicy };
export type { AgentHarnessPolicy };
type AgentHarnessSelectionCandidate = {
id: string;
@@ -47,7 +31,6 @@ type AgentHarnessSelectionDecision = {
policy: AgentHarnessPolicy;
selectedHarnessId: string;
selectedReason:
| "forced_pi"
| "forced_plugin"
// Auto mode chose a registered plugin harness that supports the provider/model.
@@ -91,10 +74,7 @@ function selectAgentHarnessDecision(params: {
sessionKey?: string;
agentHarnessId?: string;
}): AgentHarnessSelectionDecision {
const policy = resolveAgentHarnessPolicy(params);
// PI is intentionally not part of the plugin candidate list. Explicit plugin
// runtimes fail closed; only `auto` may route an unmatched turn to PI.
const pluginHarnesses = listPluginAgentHarnesses();
@@ -104,7 +84,7 @@ function selectAgentHarnessDecision(params: {
return buildSelectionDecision({
harness: piHarness,
policy,
selectedReason: "forced_pi",
candidates: listHarnessCandidates(pluginHarnesses),
});
}
@@ -114,7 +94,7 @@ function selectAgentHarnessDecision(params: {
return buildSelectionDecision({
harness: forced,
policy,
selectedReason: "forced_plugin",
candidates: listHarnessCandidates(pluginHarnesses),
});
}
@@ -249,20 +229,6 @@ function logAgentHarnessSelection(
});
}
export async function maybeCompactAgentHarnessSession(
params: CompactEmbeddedPiSessionParams,
): Promise<EmbeddedPiCompactResult | undefined> {
@@ -271,7 +237,6 @@ export async function maybeCompactAgentHarnessSession(
modelId: params.model,
config: params.config,
sessionKey: params.sessionKey,
});
if (!harness.compact) {
if (harness.id !== "pi") {
@@ -285,87 +250,3 @@ export async function maybeCompactAgentHarnessSession(
}
return harness.compact(params);
}

View File

@@ -1,6 +1,5 @@
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { resolveModelRuntimePolicy } from "./model-runtime-policy.js";
import { normalizeProviderId } from "./provider-id.js";
type LegacyRuntimeModelProviderAlias = {
@@ -98,37 +97,26 @@ export function isCliRuntimeAlias(runtime: string | undefined): boolean {
function resolveConfiguredRuntime(params: {
cfg?: OpenClawConfig;
provider: string;
agentId?: string;
modelId?: string;
}): string | undefined {
return resolveModelRuntimePolicy({
config: params.cfg,
provider: params.provider,
modelId: params.modelId,
agentId: params.agentId,
}).policy?.id?.trim();
}
export function resolveCliRuntimeExecutionProvider(params: {
provider: string;
cfg?: OpenClawConfig;
agentId?: string;
modelId?: string;
}): string | undefined {
const provider = normalizeProviderId(params.provider);
const runtime = resolveConfiguredRuntime({ ...params, provider });
if (!runtime || runtime === "auto" || runtime === "pi") {
return undefined;
}

View File

@@ -0,0 +1,166 @@
import type { AgentModelEntryConfig } from "../config/types.agent-defaults.js";
import type { AgentRuntimePolicyConfig } from "../config/types.agents-shared.js";
import type { ModelDefinitionConfig, ModelProviderConfig } from "../config/types.models.js";
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { normalizeAgentId } from "../routing/session-key.js";
import { listAgentEntries, resolveSessionAgentIds } from "./agent-scope.js";
import { normalizeProviderId } from "./provider-id.js";
export type ModelRuntimePolicySource = "model" | "provider";
export type ResolvedModelRuntimePolicy = {
policy?: AgentRuntimePolicyConfig;
source?: ModelRuntimePolicySource;
};
function hasRuntimePolicy(value: AgentRuntimePolicyConfig | undefined): boolean {
return Boolean(value?.id?.trim());
}
function resolveProviderConfig(
config: OpenClawConfig | undefined,
provider: string | undefined,
): ModelProviderConfig | undefined {
if (!config?.models?.providers || !provider?.trim()) {
return undefined;
}
const providers = config.models.providers;
const direct = providers[provider];
if (direct) {
return direct;
}
const normalizedProvider = normalizeProviderId(provider);
for (const [candidateProvider, providerConfig] of Object.entries(providers)) {
if (normalizeProviderId(candidateProvider) === normalizedProvider) {
return providerConfig;
}
}
return undefined;
}
function normalizeModelIdForProvider(
provider: string | undefined,
modelId: string | undefined,
): string | undefined {
const trimmed = modelId?.trim();
if (!trimmed) {
return undefined;
}
const slash = trimmed.indexOf("/");
if (slash <= 0) {
return trimmed;
}
const modelProvider = normalizeProviderId(trimmed.slice(0, slash));
const expectedProvider = normalizeProviderId(provider ?? "");
if (expectedProvider && modelProvider !== expectedProvider) {
return undefined;
}
return trimmed.slice(slash + 1).trim() || undefined;
}
function modelEntryMatches(params: {
entry: Pick<ModelDefinitionConfig, "id">;
provider: string | undefined;
modelId: string;
}): boolean {
const entryId = params.entry.id.trim();
if (entryId === params.modelId) {
return true;
}
const slash = entryId.indexOf("/");
if (slash <= 0) {
return false;
}
return (
normalizeProviderId(entryId.slice(0, slash)) === normalizeProviderId(params.provider ?? "") &&
entryId.slice(slash + 1).trim() === params.modelId
);
}
function modelKeyMatches(params: {
key: string;
provider: string | undefined;
modelId: string;
}): boolean {
return modelEntryMatches({
entry: { id: params.key },
provider: params.provider,
modelId: params.modelId,
});
}
function resolveAgentModelEntryRuntimePolicy(params: {
config?: OpenClawConfig;
provider?: string;
modelId?: string;
agentId?: string;
sessionKey?: string;
}): ResolvedModelRuntimePolicy {
const modelId = normalizeModelIdForProvider(params.provider, params.modelId);
if (!params.config || !modelId) {
return {};
}
const { sessionAgentId } = resolveSessionAgentIds({
config: params.config,
agentId: params.agentId,
sessionKey: params.sessionKey,
});
const agentEntry = listAgentEntries(params.config).find(
(entry) => normalizeAgentId(entry.id) === sessionAgentId,
);
const modelMaps: Array<Record<string, AgentModelEntryConfig> | undefined> = [
agentEntry?.models,
params.config.agents?.defaults?.models,
];
for (const models of modelMaps) {
for (const [key, entry] of Object.entries(models ?? {})) {
if (
modelKeyMatches({ key, provider: params.provider, modelId }) &&
hasRuntimePolicy(entry?.agentRuntime)
) {
return { policy: entry.agentRuntime, source: "model" };
}
}
}
return {};
}
function resolveModelConfig(params: {
providerConfig?: ModelProviderConfig;
provider?: string;
modelId?: string;
}): ModelDefinitionConfig | undefined {
const modelId = normalizeModelIdForProvider(params.provider, params.modelId);
if (!modelId || !Array.isArray(params.providerConfig?.models)) {
return undefined;
}
return params.providerConfig.models.find((entry) =>
modelEntryMatches({ entry, provider: params.provider, modelId }),
);
}
export function resolveModelRuntimePolicy(params: {
config?: OpenClawConfig;
provider?: string;
modelId?: string;
agentId?: string;
sessionKey?: string;
}): ResolvedModelRuntimePolicy {
const agentModelPolicy = resolveAgentModelEntryRuntimePolicy(params);
if (agentModelPolicy.policy) {
return agentModelPolicy;
}
const providerConfig = resolveProviderConfig(params.config, params.provider);
const modelConfig = resolveModelConfig({
providerConfig,
provider: params.provider,
modelId: params.modelId,
});
if (hasRuntimePolicy(modelConfig?.agentRuntime)) {
return { policy: modelConfig?.agentRuntime, source: "model" };
}
if (hasRuntimePolicy(providerConfig?.agentRuntime)) {
return { policy: providerConfig?.agentRuntime, source: "provider" };
}
return {};
}
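The precedence that `resolveModelRuntimePolicy` implements above can be condensed to three ordered checks: an agent-level model entry wins, then a provider's model definition, then the provider itself. A hedged sketch with simplified stand-in shapes (the `Policy`/`Resolved` types here are not the real config types):

```typescript
// Hedged sketch of the runtime-policy precedence above, using
// simplified stand-in shapes instead of the real config types.
type Policy = { id?: string };
type Resolved = { policy?: Policy; source?: "model" | "provider" };

function resolvePolicySketch(opts: {
  agentModelPolicy?: Policy;    // agents.*.models["provider/model"].agentRuntime
  providerModelPolicy?: Policy; // models.providers.*.models[].agentRuntime
  providerPolicy?: Policy;      // models.providers.*.agentRuntime
}): Resolved {
  const has = (p?: Policy) => Boolean(p?.id?.trim());
  if (has(opts.agentModelPolicy)) return { policy: opts.agentModelPolicy, source: "model" };
  if (has(opts.providerModelPolicy)) return { policy: opts.providerModelPolicy, source: "model" };
  if (has(opts.providerPolicy)) return { policy: opts.providerPolicy, source: "provider" };
  return {};
}
```

An empty result means no pin anywhere, which is what keeps routing automatic for providers and models that never opted into a harness.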

View File

@@ -51,13 +51,12 @@ describe("OpenAI Codex routing policy", () => {
).toBe("openai-codex");
});
it("ignores session PI pins when validating OpenAI auth profiles", () => {
expect(
listOpenAIAuthProfileProvidersForAgentRuntime({
provider: "openai",
harnessRuntime: "codex",
}),
).toEqual(["openai-codex"]);
});
});

View File

@@ -112,17 +112,12 @@ export function listOpenAIAuthProfileProvidersForAgentRuntime(params: {
provider: string;
harnessRuntime?: string;
agentHarnessId?: string;
}): string[] {
if (!isOpenAIProvider(params.provider)) {
return [params.provider];
}
const runtime = normalizeEmbeddedAgentRuntime(
normalizeExplicitRuntimePin(params.agentHarnessId) ?? params.harnessRuntime,
);
if (runtime === "codex") {
return [OPENAI_CODEX_PROVIDER_ID];

View File

@@ -32,7 +32,10 @@ describe("agents_list tool", () => {
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
agentRuntime: { id: "pi" },
models: {
"openai/gpt-5.5": { agentRuntime: { id: "codex" } },
},
},
],
},
@@ -52,7 +55,7 @@ describe("agents_list tool", () => {
name: "Codex",
configured: true,
model: "openai/gpt-5.5",
agentRuntime: { id: "codex", source: "model" },
},
],
});
@@ -83,7 +86,7 @@ describe("agents_list tool", () => {
});
});
it("ignores legacy env-forced plugin runtime selections", async () => {
vi.stubEnv("OPENCLAW_AGENT_RUNTIME", "codex");
loadConfigMock.mockReturnValue({
agents: {
@@ -104,13 +107,13 @@ describe("agents_list tool", () => {
agents: [
{
id: "main",
agentRuntime: { id: "codex", source: "implicit" },
},
],
});
});
it("ignores legacy per-agent runtime overrides", async () => {
loadConfigMock.mockReturnValue({
agents: {
defaults: {
@@ -134,7 +137,7 @@ describe("agents_list tool", () => {
agents: [
{
id: "strict",
agentRuntime: { id: "codex", source: "implicit" },
},
],
});

View File

@@ -5,8 +5,9 @@ import {
normalizeAgentId,
parseAgentSessionKey,
} from "../../routing/session-key.js";
import { resolveModelAgentRuntimeMetadata } from "../agent-runtime-metadata.js";
import { resolveAgentConfig, resolveAgentEffectiveModelPrimary } from "../agent-scope.js";
import { resolveDefaultModelForAgent } from "../model-selection.js";
import { resolveSubagentAllowedTargetIds } from "../subagent-target-policy.js";
import type { AnyAgentTool } from "./common.js";
import { jsonResult } from "./common.js";
@@ -21,7 +22,7 @@ type AgentListEntry = {
model?: string;
agentRuntime?: {
id: string;
source: "env" | "agent" | "defaults" | "model" | "provider" | "implicit";
};
};
@@ -79,12 +80,19 @@ export function createAgentsListTool(opts?: {
.toSorted((a, b) => a.localeCompare(b));
const ordered = all.includes(requesterAgentId) ? [requesterAgentId, ...rest] : rest;
const agents: AgentListEntry[] = ordered.map((id) => {
const model = resolveAgentEffectiveModelPrimary(cfg, id);
const resolvedModel = resolveDefaultModelForAgent({ cfg, agentId: id });
const agentRuntime = resolveModelAgentRuntimeMetadata({
cfg,
agentId: id,
provider: resolvedModel.provider,
model: resolvedModel.model,
});
return {
id,
name: configuredNameMap.get(id),
configured: configuredIds.includes(id),
model,
agentRuntime,
};
});

View File

@@ -15,10 +15,7 @@ import { resolveContextTokensForModel } from "../../agents/context.js";
import { resolveAgentHarnessPolicy } from "../../agents/harness/selection.js";
import { LiveSessionModelSwitchError } from "../../agents/live-model-switch-error.js";
import { runWithModelFallback, isFallbackSummaryError } from "../../agents/model-fallback.js";
import { resolveCliRuntimeExecutionProvider } from "../../agents/model-runtime-aliases.js";
import { isCliProvider, resolveModelRefFromString } from "../../agents/model-selection.js";
import { resolveOpenAIRuntimeProviderForPi } from "../../agents/openai-codex-routing.js";
import {
@@ -1404,15 +1401,12 @@ export async function runAgentTurnWithFallback(params: {
);
}
const cliExecutionProvider =
resolveCliRuntimeExecutionProvider({
provider,
cfg: runtimeConfig,
agentId: params.followupRun.run.agentId,
modelId: model,
}) ?? provider;
if (isCliProvider(cliExecutionProvider, runtimeConfig)) {
@@ -1565,13 +1559,6 @@ export async function runAgentTurnWithFallback(params: {
model,
},
);
const agentHarnessPolicy = resolveAgentHarnessPolicy({
provider,
modelId: model,
@@ -1581,8 +1568,7 @@ export async function runAgentTurnWithFallback(params: {
});
const embeddedRunProvider = resolveOpenAIRuntimeProviderForPi({
provider,
harnessRuntime: agentHarnessPolicy.runtime,
authProfileProvider: runBaseParams.authProfileId?.split(":", 1)[0],
authProfileId: runBaseParams.authProfileId,
config: runtimeConfig,
@@ -1607,7 +1593,6 @@ export async function runAgentTurnWithFallback(params: {
...senderContext,
...runBaseParams,
provider: embeddedRunProvider,
sandboxSessionKey: params.runtimePolicySessionKey,
prompt: params.commandBody,
transcriptPrompt: params.transcriptCommandBody,

View File

@@ -7,6 +7,7 @@ import {
abortEmbeddedPiRun,
isEmbeddedPiRunActive,
} from "../../agents/pi-embedded-runner/runs.js";
import { clearRuntimeConfigSnapshot } from "../../config/config.js";
import * as sessionTypesModule from "../../config/sessions.js";
import type { SessionEntry } from "../../config/sessions.js";
import { loadSessionStore, saveSessionStore } from "../../config/sessions.js";
@@ -149,6 +150,7 @@ type RunWithModelFallbackParams = {
};
beforeEach(() => {
clearRuntimeConfigSnapshot();
resetDiagnosticEventsForTest();
embeddedRunTesting.resetActiveEmbeddedRuns();
replyRunRegistryTesting.resetReplyRunRegistry();
@@ -182,6 +184,7 @@ beforeEach(() => {
});
afterEach(() => {
clearRuntimeConfigSnapshot();
resetDiagnosticEventsForTest();
vi.useRealTimers();
clearMemoryPluginState();
@@ -1810,7 +1813,6 @@ describe("runReplyAgent claude-cli routing", () => {
const sessionEntry = {
sessionId: "session",
updatedAt: Date.now(),
agentRuntimeOverride: "claude-cli",
} as SessionEntry;
const followupRun = {
prompt: "hello",
@@ -1822,7 +1824,15 @@ describe("runReplyAgent claude-cli routing", () => {
messageProvider: "webchat",
sessionFile: "/tmp/session.jsonl",
workspaceDir: "/tmp",
config: { agents: { defaults: { agentRuntime: { id: "claude-cli" } } } },
config: {
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
},
},
},
},
skillsSnapshot: {},
provider: "anthropic",
model: "claude-opus-4-7",


@@ -6,8 +6,10 @@ import { parseInlineDirectives } from "./directive-handling.parse.js";
import { persistInlineDirectives } from "./directive-handling.persist.js";
vi.mock("../../agents/agent-scope.js", () => ({
listAgentEntries: vi.fn(() => []),
resolveAgentConfig: vi.fn(() => ({})),
resolveAgentDir: vi.fn(() => "/tmp/agent"),
resolveSessionAgentIds: vi.fn(() => ({ requestedAgentId: "main", sessionAgentId: "main" })),
resolveSessionAgentId: vi.fn(() => "main"),
resolveDefaultAgentId: vi.fn(() => "main"),
}));


@@ -598,7 +598,7 @@ describe("/model chat UX", () => {
expect(reply?.text).not.toContain("via codex runtime");
});
it("does not borrow Codex auth when OpenAI is pinned to PI runtime", async () => {
it("does not borrow Codex auth when OpenAI model policy pins PI runtime", async () => {
setAuthProfiles({
"openai-codex:patrick@example.test": {
type: "oauth",
@@ -619,10 +619,11 @@ describe("/model chat UX", () => {
commands: { text: true },
agents: {
defaults: {
agentRuntime: { id: "pi" },
model: { primary: "openai/gpt-5.5" },
models: {
"openai/gpt-5.5": {},
"openai/gpt-5.5": {
agentRuntime: { id: "pi" },
},
},
},
},
@@ -911,7 +912,7 @@ describe("/model chat UX", () => {
expect(sessionEntry.authProfileOverride).toBe(OPENAI_DATE_PROFILE_ID);
});
it("persists provider-compatible runtime overrides for mixed-content messages", async () => {
it("ignores provider-compatible runtime overrides for mixed-content messages", async () => {
const { sessionEntry } = await persistModelDirectiveForTest({
command: "/model openai/gpt-4o --runtime codex hello",
allowedModelKeys: ["openai/gpt-4o"],
@@ -919,16 +920,16 @@ describe("/model chat UX", () => {
expect(sessionEntry.providerOverride).toBe("openai");
expect(sessionEntry.modelOverride).toBe("gpt-4o");
expect(sessionEntry.agentRuntimeOverride).toBe("codex");
expect(sessionEntry.agentRuntimeOverride).toBeUndefined();
});
it("canonicalizes legacy Codex app-server runtime overrides during persistence", async () => {
it("ignores legacy Codex app-server runtime overrides during persistence", async () => {
const { sessionEntry } = await persistModelDirectiveForTest({
command: "/model openai/gpt-4o --runtime codex-app-server hello",
allowedModelKeys: ["openai/gpt-4o"],
});
expect(sessionEntry.agentRuntimeOverride).toBe("codex");
expect(sessionEntry.agentRuntimeOverride).toBeUndefined();
});
it("uses Codex OAuth context config for persisted native Codex runtime directives", async () => {
@@ -988,7 +989,7 @@ describe("/model chat UX", () => {
initialModelLabel: "openai/gpt-4o",
});
expect(sessionEntry.agentRuntimeOverride).toBe("pi");
expect(sessionEntry.agentRuntimeOverride).toBeUndefined();
expect(enqueueSystemEvent).toHaveBeenCalledWith(
"Ignored unsupported runtime claude-cli for openai.",
{


@@ -39,12 +39,8 @@ function resolveStatusHarnessRuntime(params: {
sessionEntry?: Pick<SessionEntry, "agentHarnessId" | "agentRuntimeOverride">;
defaultRuntime: string;
}): string {
const sessionRuntime = normalizeOptionalString(
params.sessionEntry?.agentRuntimeOverride ?? params.sessionEntry?.agentHarnessId,
);
return sessionRuntime && sessionRuntime !== "auto" && sessionRuntime !== "default"
? sessionRuntime
: params.defaultRuntime;
void params.sessionEntry;
return params.defaultRuntime;
}
async function resolveStatusAuthLabel(params: {


@@ -3,6 +3,7 @@ import {
resolveDefaultAgentId,
resolveSessionAgentId,
} from "../../agents/agent-scope.js";
import { resolveAgentHarnessPolicy } from "../../agents/harness/selection.js";
import type { ModelCatalogEntry } from "../../agents/model-catalog.js";
import { listLegacyRuntimeModelProviderAliases } from "../../agents/model-runtime-aliases.js";
import { normalizeProviderId, type ModelAliasIndex } from "../../agents/model-selection.js";
@@ -79,17 +80,6 @@ function resolveContextConfigProviderForRuntime(params: {
return params.provider;
}
function resolveDirectiveRuntimeId(params: {
agentCfg: NonNullable<OpenClawConfig["agents"]>["defaults"] | undefined;
sessionEntry?: SessionEntry;
}): string | undefined {
return (
params.sessionEntry?.agentRuntimeOverride ??
params.sessionEntry?.agentHarnessId ??
params.agentCfg?.agentRuntime?.id
);
}
export async function persistInlineDirectives(params: {
directives: InlineDirectives;
effectiveModelDirective?: string;
@@ -278,11 +268,22 @@ export async function persistInlineDirectives(params: {
updated = true;
}
} else if (runtimeOverride?.kind === "set") {
if (sessionEntry.agentRuntimeOverride !== runtimeOverride.runtime) {
sessionEntry.agentRuntimeOverride = runtimeOverride.runtime;
if (sessionEntry.agentRuntimeOverride) {
delete sessionEntry.agentRuntimeOverride;
updated = true;
}
enqueueSystemEvent(
`Ignored session runtime ${runtimeOverride.runtime}; configure provider or model runtime policy instead.`,
{
sessionKey,
contextKey: `model-runtime:${modelResolution.modelSelection.provider}:${runtimeOverride.runtime}:ignored-session-runtime`,
},
);
} else if (runtimeOverride?.kind === "invalid") {
if (sessionEntry.agentRuntimeOverride) {
delete sessionEntry.agentRuntimeOverride;
updated = true;
}
enqueueSystemEvent(
`Ignored unsupported runtime ${runtimeOverride.runtime} for ${modelResolution.modelSelection.provider}.`,
{
@@ -369,7 +370,13 @@ export async function persistInlineDirectives(params: {
agentCfg,
provider: resolveContextConfigProviderForRuntime({
provider,
runtimeId: resolveDirectiveRuntimeId({ agentCfg, sessionEntry }),
runtimeId: resolveAgentHarnessPolicy({
provider,
modelId: model,
config: cfg,
agentId: activeAgentId,
sessionKey,
}).runtime,
}),
model,
}),


@@ -1,4 +1,5 @@
import { beforeAll, beforeEach, describe, expect, it, vi } from "vitest";
import { clearAgentHarnesses } from "../../agents/harness/registry.js";
import type { PluginHookReplyDispatchResult } from "../../plugins/hooks.js";
import { createInternalHookEventPayload } from "../../test-utils/internal-hook-event-payload.js";
import {
@@ -29,6 +30,7 @@ describe("dispatchReplyFromConfig reply_dispatch hook", () => {
});
beforeEach(() => {
clearAgentHarnesses();
setDiscordTestRegistry();
resetInboundDedupe();
mocks.routeReply.mockReset().mockResolvedValue({ ok: true, messageId: "mock" });


@@ -318,9 +318,6 @@ const resolveHarnessSourceVisibleRepliesDefault = (params: {
config: params.cfg,
agentId: params.sessionAgentId,
sessionKey: params.sessionKey,
agentHarnessId:
normalizeOptionalString(params.entry?.agentHarnessId) ??
normalizeOptionalString(params.entry?.agentRuntimeOverride),
});
return harness.deliveryDefaults?.sourceVisibleReplies;
} catch (error) {


@@ -885,13 +885,11 @@ export async function runPreparedReply(
agentId,
sessionKey: runtimePolicySessionKey,
});
const resolveAcceptedAuthProfileProviders = (entry: SessionEntry | undefined) =>
const resolveAcceptedAuthProfileProviders = () =>
agentHarnessPolicy
? listOpenAIAuthProfileProvidersForAgentRuntime({
provider,
harnessRuntime: agentHarnessPolicy.runtime,
sessionAgentHarnessId: entry?.agentHarnessId,
sessionAgentRuntimeOverride: entry?.agentRuntimeOverride,
})
: [provider];
let authProfileId = useFastReplyRuntime
@@ -900,9 +898,7 @@ export async function runPreparedReply(
resolveSessionAuthProfileOverride({
cfg,
provider,
acceptedProviderIds: resolveAcceptedAuthProfileProviders(
preparedSessionState.sessionEntry,
),
acceptedProviderIds: resolveAcceptedAuthProfileProviders(),
agentDir,
sessionEntry: preparedSessionState.sessionEntry,
sessionStore,
@@ -961,9 +957,7 @@ export async function runPreparedReply(
: await resolveSessionAuthProfileOverride({
cfg,
provider,
acceptedProviderIds: resolveAcceptedAuthProfileProviders(
preparedSessionState.sessionEntry,
),
acceptedProviderIds: resolveAcceptedAuthProfileProviders(),
agentDir,
sessionEntry: preparedSessionState.sessionEntry,
sessionStore,


@@ -232,7 +232,7 @@ describe("createModelSelectionState catalog loading", () => {
expect(loadModelCatalog).toHaveBeenCalledOnce();
});
it("preserves OpenAI API-key session auth when the session explicitly pins PI", async () => {
it("preserves OpenAI API-key session auth when model policy explicitly pins PI", async () => {
authProfileStoreMock.store = {
version: 1,
profiles: {
@@ -243,12 +243,21 @@ describe("createModelSelectionState catalog loading", () => {
sessionId: "s1",
updatedAt: 1,
authProfileOverride: "openai:work",
agentRuntimeOverride: "pi",
};
const sessionStore = { main: sessionEntry };
await createModelSelectionState({
cfg: {} as OpenClawConfig,
cfg: {
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: "pi" },
models: [],
},
},
},
} as OpenClawConfig,
agentCfg: undefined,
defaultProvider: "openai",
defaultModel: "gpt-5.5",


@@ -244,8 +244,6 @@ export async function createModelSelectionState(params: {
const acceptedAuthProviders = listOpenAIAuthProfileProvidersForAgentRuntime({
provider,
harnessRuntime: harnessPolicy.runtime,
sessionAgentHarnessId: sessionEntry.agentHarnessId,
sessionAgentRuntimeOverride: sessionEntry.agentRuntimeOverride,
}).map(normalizeProviderId);
if (!profile || !acceptedAuthProviders.includes(profileProvider ?? "")) {
await clearSessionAuthProfileOverride({


@@ -362,13 +362,19 @@ describe("agentCommand", () => {
});
});
it("does not enable Codex for one-shot OpenAI overrides when the agent forces PI", async () => {
it("does not enable Codex for one-shot OpenAI overrides when the provider forces PI", async () => {
await withTempHome(async (home) => {
const storePath = path.join(home, "sessions.json");
mockConfig(home, storePath, {
agentRuntime: { id: "pi" },
models: undefined,
});
const cfg = mockConfig(home, storePath, { models: undefined });
cfg.models = {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: "pi" },
models: [],
},
},
};
await agentCommand(
{


@@ -130,7 +130,6 @@ describe("noteClaudeCliHealth", () => {
{
agents: {
defaults: {
agentRuntime: { id: "codex" },
model: { primary: "openai/gpt-5.5" },
},
list: [
@@ -138,13 +137,14 @@ describe("noteClaudeCliHealth", () => {
id: "coder",
default: true,
workspace: defaultWorkspace,
agentRuntime: { id: "codex" },
},
{
id: "xiaoao",
workspace: claudeWorkspace,
agentRuntime: { id: "claude-cli" },
model: "anthropic/claude-opus-4-7",
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
},
},
],
},


@@ -1,7 +1,7 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { resolveAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import { resolveModelAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import {
listAgentIds,
resolveAgentWorkspaceDir,
@@ -174,10 +174,10 @@ function formatProjectDirHealthLine(
return `- ${label}: ${display} is not writable by this user.`;
}
function resolveClaudeCliAgentIds(cfg: OpenClawConfig, env: NodeJS.ProcessEnv): string[] {
function resolveClaudeCliAgentIds(cfg: OpenClawConfig): string[] {
const agentIds = listAgentIds(cfg);
const runtimeAgentIds = agentIds.filter(
(agentId) => resolveAgentRuntimeMetadata(cfg, agentId, env).id === CLAUDE_CLI_PROVIDER,
(agentId) => resolveModelAgentRuntimeMetadata({ cfg, agentId }).id === CLAUDE_CLI_PROVIDER,
);
if (runtimeAgentIds.length > 0) {
return runtimeAgentIds;
@@ -202,7 +202,7 @@ function resolveClaudeCliWorkspaceTargets(params: {
homeDir?: string;
workspaceDir?: string;
}): ClaudeCliWorkspaceTarget[] {
const agentIds = resolveClaudeCliAgentIds(params.cfg, params.env);
const agentIds = resolveClaudeCliAgentIds(params.cfg);
const defaultAgentId = resolveDefaultAgentId(params.cfg);
const seen = new Set<string>();
return agentIds


@@ -526,7 +526,7 @@ describe("normalizeCompatibilityConfigValues", () => {
expect(res.changes).toEqual([]);
});
it("migrates legacy Codex primary refs to OpenAI refs plus explicit Codex runtime", () => {
it("migrates legacy Codex primary refs to OpenAI refs without agent runtime pins", () => {
const res = normalizeCompatibilityConfigValues({
agents: {
defaults: {
@@ -554,9 +554,7 @@ describe("normalizeCompatibilityConfigValues", () => {
primary: "openai/gpt-5.5",
fallbacks: ["anthropic/claude-sonnet-4-6", "openai/gpt-5.4-mini"],
});
expect(res.config.agents?.defaults?.agentRuntime).toEqual({
id: "codex",
});
expect(res.config.agents?.defaults?.agentRuntime).toEqual({ id: "auto" });
expect(res.config.agents?.defaults?.models).toEqual({
"codex/gpt-5.5": { alias: "legacy-codex" },
"openai/gpt-5.5": { alias: "gpt", params: { temperature: 0.2 } },
@@ -565,7 +563,6 @@ describe("normalizeCompatibilityConfigValues", () => {
});
expect(res.config.agents?.list?.[0]).toMatchObject({
id: "reviewer",
agentRuntime: { id: "codex" },
model: "openai/gpt-5.4-mini",
});
expect(res.changes).toEqual(
@@ -598,7 +595,7 @@ describe("normalizeCompatibilityConfigValues", () => {
expect(res.changes).toEqual([]);
});
it("migrates legacy Claude CLI primary refs to Anthropic refs plus explicit runtime", () => {
it("migrates legacy Claude CLI primary refs to Anthropic refs plus model runtime", () => {
const res = normalizeCompatibilityConfigValues({
agents: {
defaults: {
@@ -618,14 +615,20 @@ describe("normalizeCompatibilityConfigValues", () => {
primary: "anthropic/claude-opus-4-7",
fallbacks: ["anthropic/claude-sonnet-4-6"],
});
expect(res.config.agents?.defaults?.agentRuntime).toEqual({ id: "claude-cli" });
expect(res.config.agents?.defaults?.agentRuntime).toBeUndefined();
expect(res.config.agents?.defaults?.models).toEqual({
"claude-cli/claude-opus-4-7": { alias: "Opus" },
"anthropic/claude-opus-4-7": { alias: "Anthropic Opus" },
"anthropic/claude-opus-4-7": {
alias: "Anthropic Opus",
agentRuntime: { id: "claude-cli" },
},
"anthropic/claude-sonnet-4-6": {
agentRuntime: { id: "claude-cli" },
},
});
});
it("migrates legacy Codex CLI primary refs to OpenAI refs plus explicit runtime", () => {
it("migrates legacy Codex CLI primary refs to OpenAI refs plus model runtime", () => {
const res = normalizeCompatibilityConfigValues({
agents: {
defaults: {
@@ -645,14 +648,20 @@ describe("normalizeCompatibilityConfigValues", () => {
primary: "openai/gpt-5.5",
fallbacks: ["openai/gpt-5.4-mini"],
});
expect(res.config.agents?.defaults?.agentRuntime).toEqual({ id: "codex-cli" });
expect(res.config.agents?.defaults?.agentRuntime).toBeUndefined();
expect(res.config.agents?.defaults?.models).toEqual({
"codex-cli/gpt-5.5": { alias: "Codex CLI" },
"openai/gpt-5.5": { alias: "OpenAI GPT" },
"openai/gpt-5.5": {
alias: "OpenAI GPT",
agentRuntime: { id: "codex-cli" },
},
"openai/gpt-5.4-mini": {
agentRuntime: { id: "codex-cli" },
},
});
});
it("migrates legacy Gemini CLI primary refs to Google refs plus explicit runtime", () => {
it("migrates legacy Gemini CLI primary refs to Google refs plus model runtime", () => {
const res = normalizeCompatibilityConfigValues({
agents: {
defaults: {
@@ -672,12 +681,16 @@ describe("normalizeCompatibilityConfigValues", () => {
primary: "google/gemini-3.1-pro-preview",
fallbacks: ["google/gemini-3-flash-preview"],
});
expect(res.config.agents?.defaults?.agentRuntime).toEqual({
id: "google-gemini-cli",
});
expect(res.config.agents?.defaults?.agentRuntime).toBeUndefined();
expect(res.config.agents?.defaults?.models).toEqual({
"google-gemini-cli/gemini-3.1-pro-preview": { alias: "Gemini CLI" },
"google/gemini-3.1-pro-preview": { alias: "Gemini API" },
"google/gemini-3.1-pro-preview": {
alias: "Gemini API",
agentRuntime: { id: "google-gemini-cli" },
},
"google/gemini-3-flash-preview": {
agentRuntime: { id: "google-gemini-cli" },
},
});
});


@@ -45,14 +45,22 @@ describe("doctor session state provider routes", () => {
).toBe(true);
});
it("preserves raw configured CLI runtimes before harness policy normalization", () => {
it("preserves configured provider CLI runtimes before harness policy normalization", () => {
expect(
resolveConfiguredDoctorSessionStateRoute({
cfg: {
agents: {
defaults: {
model: { primary: "openai/gpt-5.5" },
agentRuntime: { id: "codex-cli" },
},
},
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: "codex-cli" },
models: [],
},
},
},
},
@@ -66,7 +74,7 @@ describe("doctor session state provider routes", () => {
});
});
it("lets environment CLI runtime overrides reach plugin-owned scanners", () => {
it("ignores legacy environment runtime overrides before plugin-owned scans", () => {
expect(
resolveConfiguredDoctorSessionStateRoute({
cfg: {
@@ -81,7 +89,7 @@ describe("doctor session state provider routes", () => {
env: { OPENCLAW_AGENT_RUNTIME: "codex-cli" },
}),
).toMatchObject({
runtime: "codex-cli",
runtime: "codex",
});
});


@@ -1,6 +1,4 @@
import { resolveAgentRuntimePolicy } from "../agents/agent-runtime-policy.js";
import {
listAgentEntries,
resolveAgentModelFallbacksOverride,
resolveDefaultAgentId,
} from "../agents/agent-scope.js";
@@ -17,7 +15,6 @@ import { updateSessionStore } from "../config/sessions/store.js";
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { listPluginDoctorSessionRouteStateOwners } from "../plugins/doctor-contract-registry.js";
import type { DoctorSessionRouteStateOwner } from "../plugins/doctor-session-route-state-owner-types.js";
import { normalizeAgentId } from "../routing/session-key.js";
import { parseAgentSessionKey } from "../sessions/session-key-utils.js";
import { note } from "../terminal/note.js";
@@ -63,27 +60,6 @@ function resolveSessionAgentId(cfg: OpenClawConfig, sessionKey: string): string
return parseAgentSessionKey(sessionKey)?.agentId ?? resolveDefaultAgentId(cfg);
}
function resolveRawConfiguredRuntime(params: {
cfg: OpenClawConfig;
agentId: string;
env?: NodeJS.ProcessEnv;
}): string | undefined {
const envRuntime = params.env?.OPENCLAW_AGENT_RUNTIME?.trim();
if (envRuntime) {
return normalizeProviderId(envRuntime);
}
const agentRuntime = resolveAgentRuntimePolicy(
listAgentEntries(params.cfg).find(
(entry) => normalizeAgentId(entry.id) === normalizeAgentId(params.agentId),
),
)?.id?.trim();
if (agentRuntime) {
return normalizeProviderId(agentRuntime);
}
const defaultsRuntime = resolveAgentRuntimePolicy(params.cfg.agents?.defaults)?.id?.trim();
return defaultsRuntime ? normalizeProviderId(defaultsRuntime) : undefined;
}
export function resolveConfiguredDoctorSessionStateRoute(params: {
cfg: OpenClawConfig;
sessionKey: string;
@@ -108,15 +84,16 @@ export function resolveConfiguredDoctorSessionStateRoute(params: {
}
}
const runtime = resolveAgentHarnessPolicy({
provider: primary.provider,
modelId: primary.model,
config: params.cfg,
agentId,
sessionKey: params.sessionKey,
env: params.env,
}).runtime;
return {
defaultProvider: primary.provider,
configuredModelRefs: [...configuredModelRefs],
runtime: resolveRawConfiguredRuntime({ cfg: params.cfg, agentId, env: params.env }) ?? runtime,
runtime,
};
}


@@ -397,11 +397,11 @@ describe("doctor repair sequencing", () => {
);
});
it("moves legacy Codex routes to Codex before missing plugin install repair", async () => {
it("moves legacy Codex routes to canonical OpenAI before missing plugin install repair", async () => {
mocks.repairMissingConfiguredPluginInstalls.mockImplementationOnce(
async (params: { cfg: OpenClawConfig }) => {
expect(params.cfg.agents?.defaults?.model).toBe("openai/gpt-5.5");
expect(params.cfg.agents?.defaults?.agentRuntime).toEqual({ id: "codex" });
expect(params.cfg.agents?.defaults?.agentRuntime).toBeUndefined();
return {
changes: [],
warnings: [],
@@ -434,9 +434,9 @@ describe("doctor repair sequencing", () => {
expect(result.state.pendingChanges).toBe(true);
expect(result.state.candidate.agents?.defaults?.model).toBe("openai/gpt-5.5");
expect(result.state.candidate.agents?.defaults?.agentRuntime).toEqual({ id: "codex" });
expect(result.state.candidate.agents?.defaults?.agentRuntime).toBeUndefined();
expect(result.changeNotes.join("\n")).toContain(
'agents.defaults.model: openai-codex/gpt-5.5 -> openai/gpt-5.5; set agentRuntime.id to "codex".',
"agents.defaults.model: openai-codex/gpt-5.5 -> openai/gpt-5.5.",
);
expect(result.changeNotes.join("\n")).not.toContain("Installed missing configured plugin");
});


@@ -2,6 +2,7 @@ import type { Dirent } from "node:fs";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { collectConfiguredAgentHarnessRuntimes } from "../../../agents/harness-runtimes.js";
import type { OpenClawConfig } from "../../../config/types.openclaw.js";
export type CodexNativeAssetHit = {
@@ -113,16 +114,7 @@ async function discoverPluginHits(root: string): Promise<CodexNativeAssetHit[]>
}
function isCodexRuntimeConfigured(cfg: OpenClawConfig, env: NodeJS.ProcessEnv): boolean {
if (normalizeString(env.OPENCLAW_AGENT_RUNTIME) === "codex") {
return true;
}
const defaults = cfg.agents?.defaults;
if (normalizeString(defaults?.agentRuntime?.id) === "codex") {
return true;
}
return (cfg.agents?.list ?? []).some(
(agent) => normalizeString(agent.agentRuntime?.id) === "codex",
);
return collectConfiguredAgentHarnessRuntimes(cfg, env).includes("codex");
}
function isCodexPluginConfigured(cfg: OpenClawConfig): boolean {


@@ -67,8 +67,7 @@ describe("collectCodexRouteWarnings", () => {
expect(warnings).toEqual([expect.stringContaining("Legacy `openai-codex/*`")]);
expect(warnings[0]).toContain("agents.defaults.model");
expect(warnings[0]).toContain("openai/gpt-5.5");
expect(warnings[0]).toContain('runtime is "codex"');
expect(warnings[0]).toContain('agentRuntime.id: "codex"');
expect(warnings[0]).not.toContain("agentRuntime.id");
});
it("still warns when the native Codex runtime is selected with a legacy model ref", () => {
@@ -120,7 +119,7 @@ describe("collectCodexRouteWarnings", () => {
expect(warnings).toEqual([]);
});
it("repairs configured Codex model refs to canonical OpenAI refs with the Codex runtime when ready", () => {
it("repairs configured Codex model refs to canonical OpenAI refs without pinning runtime", () => {
const result = maybeRepairCodexRoutes({
cfg: {
agents: {
@@ -204,7 +203,7 @@ describe("collectCodexRouteWarnings", () => {
});
expect(result.cfg.agents?.defaults?.compaction?.model).toBe("openai/gpt-5.4");
expect(result.cfg.agents?.defaults?.compaction?.memoryFlush?.model).toBe("openai/gpt-5.4-mini");
expect(result.cfg.agents?.defaults?.agentRuntime).toEqual({ id: "codex" });
expect(result.cfg.agents?.defaults?.agentRuntime).toBeUndefined();
expect(result.cfg.agents?.defaults?.models).toEqual({
"openai/gpt-5.5": { alias: "codex" },
});
@@ -223,7 +222,7 @@ describe("collectCodexRouteWarnings", () => {
expect(result.cfg.messages?.tts?.summaryModel).toBe("openai/gpt-5.4-mini");
});
it("repairs legacy routes to Codex even when OAuth readiness cannot be proven", () => {
it("repairs legacy routes without requiring OAuth readiness", () => {
const result = maybeRepairCodexRoutes({
cfg: {
agents: {
@@ -236,11 +235,11 @@ describe("collectCodexRouteWarnings", () => {
});
expect(result.cfg.agents?.defaults?.model).toBe("openai/gpt-5.5");
expect(result.cfg.agents?.defaults?.agentRuntime).toEqual({ id: "codex" });
expect(result.changes.join("\n")).toContain('set agentRuntime.id to "codex"');
expect(result.cfg.agents?.defaults?.agentRuntime).toBeUndefined();
expect(result.changes.join("\n")).not.toContain("agentRuntime.id");
});
it("repairs persisted session route pins to Codex and preserves Codex auth pins", () => {
it("repairs persisted session route refs, clears runtime pins, and preserves auth pins", () => {
const store: Record<string, SessionEntry> = {
main: {
sessionId: "s1",
@@ -268,7 +267,6 @@ describe("collectCodexRouteWarnings", () => {
const result = repairCodexSessionStoreRoutes({
store,
runtime: "codex",
now: 123,
});
@@ -280,12 +278,12 @@ describe("collectCodexRouteWarnings", () => {
providerOverride: "openai",
modelOverride: "gpt-5.4",
modelOverrideSource: "auto",
agentHarnessId: "codex",
agentRuntimeOverride: "codex",
authProfileOverride: "openai-codex:default",
authProfileOverrideSource: "auto",
authProfileOverrideCompactionCount: 2,
});
expect(store.main.agentHarnessId).toBeUndefined();
expect(store.main.agentRuntimeOverride).toBeUndefined();
expect(store.main.fallbackNoticeSelectedModel).toBeUndefined();
expect(store.main.fallbackNoticeActiveModel).toBeUndefined();
expect(store.main.fallbackNoticeReason).toBeUndefined();
@@ -295,14 +293,13 @@ describe("collectCodexRouteWarnings", () => {
});
});
it("keeps Codex session auth pins when the Codex runtime is ready", () => {
it("keeps Codex session auth pins while leaving runtime unpinned", () => {
const store: Record<string, SessionEntry> = {
main: {
sessionId: "s1",
updatedAt: 1,
providerOverride: "openai-codex",
modelOverride: "gpt-5.5",
agentHarnessId: "codex",
authProfileOverride: "openai-codex:default",
authProfileOverrideSource: "auto",
},
@@ -310,7 +307,6 @@ describe("collectCodexRouteWarnings", () => {
const result = repairCodexSessionStoreRoutes({
store,
runtime: "codex",
now: 123,
});
@@ -319,11 +315,11 @@ describe("collectCodexRouteWarnings", () => {
updatedAt: 123,
providerOverride: "openai",
modelOverride: "gpt-5.5",
agentHarnessId: "codex",
agentRuntimeOverride: "codex",
authProfileOverride: "openai-codex:default",
authProfileOverrideSource: "auto",
});
expect(store.main.agentHarnessId).toBeUndefined();
expect(store.main.agentRuntimeOverride).toBeUndefined();
});
it("preserves canonical OpenAI sessions that are explicitly pinned to PI", () => {
@@ -343,7 +339,6 @@ describe("collectCodexRouteWarnings", () => {
const result = repairCodexSessionStoreRoutes({
store,
runtime: "codex",
now: 123,
});
@@ -356,7 +351,7 @@ describe("collectCodexRouteWarnings", () => {
});
});
it("repairs legacy routes to Codex without probing OAuth readiness", () => {
it("repairs legacy routes without probing OAuth readiness", () => {
const store = {
profiles: {
"openai-codex:default": {
@@ -406,10 +401,10 @@ describe("collectCodexRouteWarnings", () => {
expect(mocks.isInstalledPluginEnabled).not.toHaveBeenCalled();
expect(mocks.resolveAuthProfileOrder).not.toHaveBeenCalled();
expect(result.cfg.agents?.defaults?.model).toBe("openai/gpt-5.5");
expect(result.cfg.agents?.defaults?.agentRuntime).toEqual({ id: "codex" });
expect(result.cfg.agents?.defaults?.agentRuntime).toBeUndefined();
});
it("still repairs to Codex when installed plugin metadata is unavailable", () => {
it("still repairs routes when installed plugin metadata is unavailable", () => {
const store = {
profiles: {
"openai-codex:default": {
@@ -449,6 +444,6 @@ describe("collectCodexRouteWarnings", () => {
});
expect(result.cfg.agents?.defaults?.model).toBe("openai/gpt-5.5");
expect(result.cfg.agents?.defaults?.agentRuntime).toEqual({ id: "codex" });
expect(result.cfg.agents?.defaults?.agentRuntime).toBeUndefined();
});
});


@@ -11,10 +11,8 @@ type CodexRouteHit = {
model: string;
canonicalModel: string;
runtime?: string;
setsRuntime?: boolean;
};
type CodexRepairRuntime = "codex" | "pi";
type MutableRecord = Record<string, unknown>;
type SessionRouteRepairResult = {
changed: boolean;
@@ -62,12 +60,11 @@ function resolveRuntime(params: {
env?: NodeJS.ProcessEnv;
agentRuntime?: AgentRuntimePolicyConfig;
defaultsRuntime?: AgentRuntimePolicyConfig;
}): string {
}): string | undefined {
return (
normalizeString(params.env?.OPENCLAW_AGENT_RUNTIME) ??
normalizeString(params.agentRuntime?.id) ??
normalizeString(params.defaultsRuntime?.id) ??
"codex"
normalizeString(params.defaultsRuntime?.id)
);
}
@@ -76,7 +73,6 @@ function recordCodexModelHit(params: {
path: string;
model: string;
runtime?: string;
setsRuntime?: boolean;
}): string | undefined {
const canonicalModel = toCanonicalOpenAIModelRef(params.model);
if (!canonicalModel) {
@@ -87,7 +83,6 @@ function recordCodexModelHit(params: {
model: params.model,
canonicalModel,
...(params.runtime ? { runtime: params.runtime } : {}),
...(params.setsRuntime ? { setsRuntime: true } : {}),
});
return canonicalModel;
}
@@ -97,7 +92,6 @@ function collectStringModelSlot(params: {
path: string;
value: unknown;
runtime?: string;
setsRuntime?: boolean;
}): boolean {
if (typeof params.value !== "string") {
return false;
@@ -111,7 +105,6 @@ function collectStringModelSlot(params: {
path: params.path,
model,
runtime: params.runtime,
setsRuntime: params.setsRuntime,
});
}
@@ -120,7 +113,6 @@ function collectModelConfigSlot(params: {
path: string;
value: unknown;
runtime?: string;
setsRuntimeOnPrimary?: boolean;
}): boolean {
if (typeof params.value === "string") {
return collectStringModelSlot({
@@ -128,7 +120,6 @@ function collectModelConfigSlot(params: {
path: params.path,
value: params.value,
runtime: params.runtime,
setsRuntime: params.setsRuntimeOnPrimary,
});
}
const record = asMutableRecord(params.value);
@@ -142,7 +133,6 @@ function collectModelConfigSlot(params: {
path: `${params.path}.primary`,
value: record.primary,
runtime: params.runtime,
setsRuntime: params.setsRuntimeOnPrimary,
});
}
if (Array.isArray(record.fallbacks)) {
@@ -195,7 +185,6 @@ function collectAgentModelRefs(params: {
path: `${params.path}.${key}`,
value: agent[key],
runtime: key === "model" ? params.runtime : undefined,
setsRuntimeOnPrimary: key === "model",
});
}
collectStringModelSlot({
@@ -307,7 +296,6 @@ function rewriteStringModelSlot(params: {
key: string;
path: string;
runtime?: string;
setsRuntime?: boolean;
}): boolean {
if (!params.container) {
return false;
@@ -322,7 +310,6 @@ function rewriteStringModelSlot(params: {
path: params.path,
model,
runtime: params.runtime,
setsRuntime: params.setsRuntime,
});
if (!canonicalModel) {
return false;
@@ -337,7 +324,6 @@ function rewriteModelConfigSlot(params: {
key: string;
path: string;
runtime?: string;
setsRuntimeOnPrimary?: boolean;
}): boolean {
if (!params.container) {
return false;
@@ -350,7 +336,6 @@ function rewriteModelConfigSlot(params: {
key: params.key,
path: params.path,
runtime: params.runtime,
setsRuntime: params.setsRuntimeOnPrimary,
});
}
const record = asMutableRecord(value);
@@ -363,7 +348,6 @@ function rewriteModelConfigSlot(params: {
key: "primary",
path: `${params.path}.primary`,
runtime: params.runtime,
setsRuntime: params.setsRuntimeOnPrimary,
});
if (Array.isArray(record.fallbacks)) {
record.fallbacks = record.fallbacks.map((entry, index) => {
@@ -409,27 +393,20 @@ function rewriteAgentModelRefs(params: {
hits: CodexRouteHit[];
agent: MutableRecord | undefined;
path: string;
runtime: CodexRepairRuntime;
currentRuntime: string;
currentRuntime?: string;
rewriteModelsMap?: boolean;
}): void {
if (!params.agent) {
return;
}
for (const key of AGENT_MODEL_CONFIG_KEYS) {
const rewrotePrimary = rewriteModelConfigSlot({
rewriteModelConfigSlot({
hits: params.hits,
container: params.agent,
key,
path: `${params.path}.${key}`,
runtime: key === "model" ? params.currentRuntime : undefined,
setsRuntimeOnPrimary: key === "model",
});
if (key === "model" && rewrotePrimary) {
const agentRuntime = asMutableRecord(params.agent.agentRuntime) ?? {};
agentRuntime.id = params.runtime;
params.agent.agentRuntime = agentRuntime;
}
}
rewriteStringModelSlot({
hits: params.hits,
@@ -465,11 +442,10 @@ function rewriteAgentModelRefs(params: {
}
}
function rewriteConfigModelRefs(params: {
function rewriteConfigModelRefs(params: { cfg: OpenClawConfig; env?: NodeJS.ProcessEnv }): {
cfg: OpenClawConfig;
env?: NodeJS.ProcessEnv;
runtime: CodexRepairRuntime;
}): { cfg: OpenClawConfig; changes: CodexRouteHit[] } {
changes: CodexRouteHit[];
} {
const nextConfig = structuredClone(params.cfg);
const hits: CodexRouteHit[] = [];
const defaultsRuntime = nextConfig.agents?.defaults?.agentRuntime;
@@ -477,7 +453,6 @@ function rewriteConfigModelRefs(params: {
hits,
agent: asMutableRecord(nextConfig.agents?.defaults),
path: "agents.defaults",
runtime: params.runtime,
currentRuntime: resolveRuntime({ env: params.env, defaultsRuntime }),
rewriteModelsMap: true,
});
@@ -487,7 +462,6 @@ function rewriteConfigModelRefs(params: {
hits,
agent: agent as MutableRecord,
path: `agents.list.${id}`,
runtime: params.runtime,
currentRuntime: resolveRuntime({
env: params.env,
agentRuntime: agent.agentRuntime,
@@ -550,18 +524,8 @@ function rewriteConfigModelRefs(params: {
};
}
function resolveCodexRepairRuntime(params: {
cfg: OpenClawConfig;
env?: NodeJS.ProcessEnv;
codexRuntimeReady?: boolean;
}): CodexRepairRuntime {
void params;
return "codex";
}
function formatCodexRouteChange(hit: CodexRouteHit, runtime: CodexRepairRuntime): string {
const suffix = hit.setsRuntime ? `; set agentRuntime.id to "${runtime}"` : "";
return `${hit.path}: ${hit.model} -> ${hit.canonicalModel}${suffix}.`;
function formatCodexRouteChange(hit: CodexRouteHit): string {
return `${hit.path}: ${hit.model} -> ${hit.canonicalModel}.`;
}
export function collectCodexRouteWarnings(params: {
@@ -581,7 +545,7 @@ export function collectCodexRouteWarnings(params: {
hit.runtime ? `; current runtime is "${hit.runtime}"` : ""
}.`,
),
'- Run `openclaw doctor --fix`: it rewrites configured model refs and stale sessions to `openai/*` with `agentRuntime.id: "codex"`.',
"- Run `openclaw doctor --fix`: it rewrites configured model refs and stale sessions to `openai/*` without changing explicit runtime policy.",
].join("\n"),
];
}
@@ -603,22 +567,16 @@ export function maybeRepairCodexRoutes(params: {
changes: [],
};
}
const runtime = resolveCodexRepairRuntime({
cfg: params.cfg,
env: params.env,
codexRuntimeReady: params.codexRuntimeReady,
});
const repaired = rewriteConfigModelRefs({
cfg: params.cfg,
env: params.env,
runtime,
});
return {
cfg: repaired.cfg,
warnings: [],
changes: [
`Repaired Codex model routes:\n${repaired.changes
.map((hit) => `- ${formatCodexRouteChange(hit, runtime)}`)
.map((hit) => `- ${formatCodexRouteChange(hit)}`)
.join("\n")}`,
],
};
@@ -667,19 +625,21 @@ function clearStaleCodexFallbackNotice(entry: SessionEntry): boolean {
return true;
}
function clearStaleCodexAuthOverride(entry: SessionEntry, runtime: CodexRepairRuntime): boolean {
if (runtime === "codex" || !entry.authProfileOverride?.startsWith("openai-codex:")) {
return false;
function clearStaleSessionRuntimePins(entry: SessionEntry): boolean {
let changed = false;
if (entry.agentHarnessId !== undefined) {
delete entry.agentHarnessId;
changed = true;
}
delete entry.authProfileOverride;
delete entry.authProfileOverrideSource;
delete entry.authProfileOverrideCompactionCount;
return true;
if (entry.agentRuntimeOverride !== undefined) {
delete entry.agentRuntimeOverride;
changed = true;
}
return changed;
}
export function repairCodexSessionStoreRoutes(params: {
store: Record<string, SessionEntry>;
runtime: CodexRepairRuntime;
now?: number;
}): SessionRouteRepairResult {
const now = params.now ?? Date.now();
@@ -700,14 +660,11 @@ export function repairCodexSessionStoreRoutes(params: {
});
const changedModelRoute = changedRuntimeModelRoute || changedOverrideModelRoute;
const changedFallbackNotice = clearStaleCodexFallbackNotice(entry);
const changedAuthOverride = clearStaleCodexAuthOverride(entry, params.runtime);
if (!changedModelRoute && !changedFallbackNotice && !changedAuthOverride) {
const changedRuntimePins =
changedModelRoute || changedFallbackNotice ? clearStaleSessionRuntimePins(entry) : false;
if (!changedModelRoute && !changedFallbackNotice && !changedRuntimePins) {
continue;
}
if (changedModelRoute) {
entry.agentHarnessId = params.runtime;
entry.agentRuntimeOverride = params.runtime;
}
entry.updatedAt = now;
sessionKeys.push(sessionKey);
}
@@ -717,11 +674,7 @@ export function repairCodexSessionStoreRoutes(params: {
};
}
function scanCodexSessionStoreRoutes(
store: Record<string, SessionEntry>,
runtime: CodexRepairRuntime,
): string[] {
void runtime;
function scanCodexSessionStoreRoutes(store: Record<string, SessionEntry>): string[] {
return Object.entries(store).flatMap(([sessionKey, entry]) => {
if (!entry) {
return [];
@@ -756,13 +709,8 @@ export async function maybeRepairCodexSessionRoutes(params: {
};
}
if (!params.shouldRepair) {
const runtime = resolveCodexRepairRuntime({
cfg: params.cfg,
env: params.env,
codexRuntimeReady: params.codexRuntimeReady,
});
const stale = targets.flatMap((target) => {
const sessionKeys = scanCodexSessionStoreRoutes(loadSessionStore(target.storePath), runtime);
const sessionKeys = scanCodexSessionStoreRoutes(loadSessionStore(target.storePath));
return sessionKeys.map((sessionKey) => `${target.agentId}:${sessionKey}`);
});
return {
@@ -782,24 +730,16 @@ export async function maybeRepairCodexSessionRoutes(params: {
changes: [],
};
}
const runtime = resolveCodexRepairRuntime({
cfg: params.cfg,
env: params.env,
codexRuntimeReady: params.codexRuntimeReady,
});
let repairedStores = 0;
let repairedSessions = 0;
for (const target of targets) {
const staleSessionKeys = scanCodexSessionStoreRoutes(
loadSessionStore(target.storePath),
runtime,
);
const staleSessionKeys = scanCodexSessionStoreRoutes(loadSessionStore(target.storePath));
if (staleSessionKeys.length === 0) {
continue;
}
const result = await updateSessionStore(
target.storePath,
(store) => repairCodexSessionStoreRoutes({ store, runtime }),
(store) => repairCodexSessionStoreRoutes({ store }),
{ skipMaintenance: true },
);
if (!result.changed) {
@@ -818,7 +758,7 @@ export async function maybeRepairCodexSessionRoutes(params: {
? [
`Repaired Codex session routes: moved ${repairedSessions} session${
repairedSessions === 1 ? "" : "s"
} across ${repairedStores} store${repairedStores === 1 ? "" : "s"} to openai/* with agentRuntime "${runtime}".`,
} across ${repairedStores} store${repairedStores === 1 ? "" : "s"} to openai/* while preserving runtime policy.`,
]
: [],
};
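The session-store cleanup above can be sketched in isolation. This is an illustrative standalone harness, not the shipped module: the `SessionEntry` shape is trimmed to the two pin fields the diff touches, and it shows that once a route is rewritten to `openai/*`, per-session runtime pins are dropped so automatic routing applies again.

```typescript
// Trimmed SessionEntry: only the per-session runtime pins matter here.
type SessionEntry = {
  sessionId: string;
  updatedAt: number;
  agentHarnessId?: string;
  agentRuntimeOverride?: string;
};

// Mirrors clearStaleSessionRuntimePins from the diff: delete both pins,
// report whether anything changed so callers can bump updatedAt.
function clearStaleSessionRuntimePins(entry: SessionEntry): boolean {
  let changed = false;
  if (entry.agentHarnessId !== undefined) {
    delete entry.agentHarnessId; // drop the persisted harness pin
    changed = true;
  }
  if (entry.agentRuntimeOverride !== undefined) {
    delete entry.agentRuntimeOverride; // drop the runtime override pin
    changed = true;
  }
  return changed;
}

const entry: SessionEntry = {
  sessionId: "s1",
  updatedAt: 0,
  agentHarnessId: "codex",
  agentRuntimeOverride: "codex",
};
const changed = clearStaleSessionRuntimePins(entry);
console.log(changed, entry.agentHarnessId, entry.agentRuntimeOverride);
// → true undefined undefined
```

A second call on the now-clean entry returns `false`, which is what lets the repair loop skip entries that only needed a model-route rewrite.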

View File

@@ -229,9 +229,6 @@ type ModelProviderEntry = Partial<
>;
type ModelsConfigPatch = Partial<NonNullable<OpenClawConfig["models"]>>;
type ModelDefinitionEntry = NonNullable<ModelProviderEntry["models"]>[number];
type AgentRuntimePolicyPatch = NonNullable<
NonNullable<NonNullable<OpenClawConfig["agents"]>["defaults"]>["agentRuntime"]
>;
function mergeModelEntry(legacyEntry: unknown, currentEntry: unknown): unknown {
if (!isRecord(legacyEntry) || !isRecord(currentEntry)) {
@@ -244,42 +241,81 @@ function normalizeLegacyRuntimeAgentModelConfig(raw: unknown): {
value?: unknown;
changed: boolean;
selectedRuntime?: string;
selectedRefs: string[];
} {
if (typeof raw === "string") {
const migrated = migrateLegacyRuntimeModelRef(raw);
return migrated
? { value: migrated.ref, changed: true, selectedRuntime: migrated.runtime }
: { value: raw, changed: false };
? {
value: migrated.ref,
changed: true,
selectedRuntime: migrated.runtime,
selectedRefs: [migrated.ref],
}
: { value: raw, changed: false, selectedRefs: [] };
}
if (!isRecord(raw)) {
return { value: raw, changed: false };
return { value: raw, changed: false, selectedRefs: [] };
}
const migratedPrimary =
typeof raw.primary === "string" ? migrateLegacyRuntimeModelRef(raw.primary) : null;
if (!migratedPrimary) {
return { value: raw, changed: false };
return { value: raw, changed: false, selectedRefs: [] };
}
const next: Record<string, unknown> = { ...raw, primary: migratedPrimary.ref };
const selectedRefs = [migratedPrimary.ref];
if (Array.isArray(raw.fallbacks)) {
next.fallbacks = raw.fallbacks.map((fallback) => {
if (typeof fallback !== "string") {
return fallback;
}
const migratedFallback = migrateLegacyRuntimeModelRef(fallback);
return migratedFallback?.runtime === migratedPrimary.runtime
? migratedFallback.ref
: fallback;
if (migratedFallback?.runtime === migratedPrimary.runtime) {
selectedRefs.push(migratedFallback.ref);
return migratedFallback.ref;
}
return fallback;
});
}
return {
value: next,
changed: true,
selectedRuntime: migratedPrimary.runtime,
selectedRefs,
};
}
function runtimeNeedsExplicitModelPolicy(runtime: string | undefined): runtime is string {
return Boolean(runtime && runtime !== "codex");
}
function modelEntryWithRuntimePolicy(entry: unknown, runtime: string): Record<string, unknown> {
const base = isRecord(entry) ? { ...entry } : {};
const currentRuntime = isRecord(base.agentRuntime)
? normalizeOptionalLowercaseString(base.agentRuntime.id)
: undefined;
if (!currentRuntime || currentRuntime === "auto") {
base.agentRuntime = {
...(isRecord(base.agentRuntime) ? base.agentRuntime : {}),
id: runtime,
};
}
return base;
}
function mergeModelEntryWithRuntimePolicy(
legacyEntry: unknown,
currentEntry: unknown,
runtime: string | undefined,
): unknown {
const merged = mergeModelEntry(legacyEntry, currentEntry);
return runtimeNeedsExplicitModelPolicy(runtime)
? modelEntryWithRuntimePolicy(merged, runtime)
: merged;
}
function normalizeLegacyRuntimeAllowlistModels(
rawModels: unknown,
selectedRuntime: string | undefined,
@@ -305,29 +341,30 @@ function normalizeLegacyRuntimeAllowlistModels(
next[rawKey] = mergeModelEntry(entry, next[rawKey]);
}
for (const [migratedKey, entry] of legacyEntries) {
next[migratedKey] = mergeModelEntry(entry, next[migratedKey]);
next[migratedKey] = mergeModelEntryWithRuntimePolicy(entry, next[migratedKey], selectedRuntime);
}
return { value: next, changed };
}
function ensureAgentRuntimePolicy(
raw: unknown,
selectedRuntime: string,
): {
value: AgentRuntimePolicyPatch;
changed: boolean;
} {
if (!isRecord(raw)) {
return { value: { id: selectedRuntime }, changed: true };
function ensureSelectedModelRuntimePolicies(
rawModels: unknown,
selectedRefs: readonly string[],
selectedRuntime: string | undefined,
): { value?: unknown; changed: boolean } {
if (!runtimeNeedsExplicitModelPolicy(selectedRuntime) || selectedRefs.length === 0) {
return { value: rawModels, changed: false };
}
const currentRuntime = normalizeOptionalLowercaseString(raw.id);
if (!currentRuntime || currentRuntime === "auto") {
return {
value: { ...raw, id: selectedRuntime } as AgentRuntimePolicyPatch,
changed: currentRuntime !== selectedRuntime,
};
const next: Record<string, unknown> = isRecord(rawModels) ? { ...rawModels } : {};
let changed = false;
for (const ref of selectedRefs) {
const current = next[ref];
const updated = modelEntryWithRuntimePolicy(current, selectedRuntime);
if (JSON.stringify(updated) !== JSON.stringify(current ?? {})) {
next[ref] = updated;
changed = true;
}
}
return { value: raw as AgentRuntimePolicyPatch, changed: false };
return { value: next, changed };
}
function normalizeLegacyRuntimeAgentContainer(
@@ -358,10 +395,15 @@ function normalizeLegacyRuntimeAgentContainer(
}
if (model.selectedRuntime) {
const agentRuntime = ensureAgentRuntimePolicy(raw.agentRuntime, model.selectedRuntime);
if (agentRuntime.changed) {
next.agentRuntime = agentRuntime.value;
const modelRuntimes = ensureSelectedModelRuntimePolicies(
next.models,
model.selectedRefs,
model.selectedRuntime,
);
if (modelRuntimes.changed) {
next.models = modelRuntimes.value;
changed = true;
changes.push(`Selected ${model.selectedRuntime} runtime for ${path}.models entries.`);
}
}
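The runtime-policy stamping above can be sketched as a small standalone example. Helper names mirror the diff, but the record guard here is a simplified stand-in: a migrated model entry is stamped with the selected runtime only when it has no explicit runtime id (or the id is `"auto"`); an explicit pin is preserved.

```typescript
type Rec = Record<string, unknown>;
const isRecord = (v: unknown): v is Rec =>
  typeof v === "object" && v !== null && !Array.isArray(v);

// Mirrors modelEntryWithRuntimePolicy: stamp the runtime id unless the
// entry already carries an explicit (non-"auto") runtime pin.
function modelEntryWithRuntimePolicy(entry: unknown, runtime: string): Rec {
  const base: Rec = isRecord(entry) ? { ...entry } : {};
  const agentRuntime = isRecord(base.agentRuntime) ? base.agentRuntime : {};
  const currentRuntime =
    typeof agentRuntime.id === "string" ? agentRuntime.id.toLowerCase() : undefined;
  if (!currentRuntime || currentRuntime === "auto") {
    base.agentRuntime = { ...agentRuntime, id: runtime };
  }
  return base;
}

// An entry with no runtime is stamped; an explicit pin is left alone.
const stamped = modelEntryWithRuntimePolicy({}, "claude-cli");
const pinned = modelEntryWithRuntimePolicy(
  { agentRuntime: { id: "codex" } },
  "claude-cli",
);
console.log(stamped.agentRuntime, pinned.agentRuntime);
// → { id: 'claude-cli' } { id: 'codex' }
```

This is the property `ensureSelectedModelRuntimePolicies` relies on: re-running migration over already-stamped entries produces no further changes.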

View File

@@ -315,7 +315,7 @@ describe("legacy migrate sandbox scope aliases", () => {
});
});
it("moves legacy embeddedHarness runtime policy into agentRuntime", () => {
it("removes ignored agent-wide runtime policy", () => {
const res = migrateLegacyConfigForTest({
agents: {
defaults: {
@@ -339,20 +339,14 @@ describe("legacy migrate sandbox scope aliases", () => {
expect(res.changes).toEqual(
expect.arrayContaining([
"Moved agents.defaults.embeddedHarness → agents.defaults.agentRuntime.",
"Moved agents.list.0.embeddedHarness → agents.list.0.agentRuntime.",
"Removed agents.defaults.embeddedHarness; runtime is now provider/model scoped.",
"Removed agents.list.0.embeddedHarness; runtime is now provider/model scoped.",
"Removed agents.list.0.agentRuntime; runtime is now provider/model scoped.",
]),
);
expect(res.config?.agents?.defaults).toEqual({
agentRuntime: {
id: "claude-cli",
},
});
expect(res.config?.agents?.defaults).toEqual({});
expect(res.config?.agents?.list?.[0]).toEqual({
id: "reviewer",
agentRuntime: {
id: "codex",
},
});
});

View File

@@ -58,24 +58,30 @@ const LEGACY_AGENT_RUNTIME_POLICY_RULES: LegacyConfigRule[] = [
{
path: ["agents", "defaults", "agentRuntime", "fallback"],
message:
'agents.defaults.agentRuntime.fallback is no longer supported; explicit runtimes fail closed and auto mode owns PI fallback. Run "openclaw doctor --fix".',
'agents.defaults.agentRuntime is ignored; set models.providers.<provider>.agentRuntime or a model-scoped agentRuntime instead. Run "openclaw doctor --fix".',
},
{
path: ["agents", "defaults", "embeddedHarness"],
message:
'agents.defaults.embeddedHarness is legacy; use agents.defaults.agentRuntime instead. Run "openclaw doctor --fix".',
'agents.defaults.embeddedHarness is legacy and ignored; set provider/model runtime policy instead. Run "openclaw doctor --fix".',
match: (value) => getRecord(value) !== null,
},
{
path: ["agents", "defaults", "agentRuntime"],
message:
'agents.defaults.agentRuntime is ignored; set models.providers.<provider>.agentRuntime or a model-scoped agentRuntime instead. Run "openclaw doctor --fix".',
match: (value) => getRecord(value) !== null,
},
{
path: ["agents", "list"],
message:
'agents.list[].agentRuntime.fallback is no longer supported; explicit runtimes fail closed and auto mode owns PI fallback. Run "openclaw doctor --fix".',
match: (value) => hasAgentListRuntimeFallback(value),
'agents.list[].agentRuntime is ignored; set provider/model runtime policy instead. Run "openclaw doctor --fix".',
match: (value) => hasAgentListRuntimePolicy(value),
},
{
path: ["agents", "list"],
message:
'agents.list[].embeddedHarness is legacy; use agents.list[].agentRuntime instead. Run "openclaw doctor --fix".',
'agents.list[].embeddedHarness is legacy and ignored; set provider/model runtime policy instead. Run "openclaw doctor --fix".',
match: (value) => hasLegacyAgentListEmbeddedHarness(value),
},
];
@@ -166,16 +172,11 @@ function hasLegacyAgentListEmbeddedHarness(value: unknown): boolean {
return value.some((agent) => getRecord(getRecord(agent)?.embeddedHarness) !== null);
}
function hasAgentRuntimeFallback(value: unknown): boolean {
const runtime = getRecord(value);
return Boolean(runtime && Object.prototype.hasOwnProperty.call(runtime, "fallback"));
}
function hasAgentListRuntimeFallback(value: unknown): boolean {
function hasAgentListRuntimePolicy(value: unknown): boolean {
if (!Array.isArray(value)) {
return false;
}
return value.some((agent) => hasAgentRuntimeFallback(getRecord(agent)?.agentRuntime));
return value.some((agent) => getRecord(getRecord(agent)?.agentRuntime) !== null);
}
function migrateLegacySandboxPerSession(
@@ -199,45 +200,19 @@ function migrateLegacySandboxPerSession(
delete sandbox.perSession;
}
function migrateLegacyAgentRuntimePolicy(
function removeLegacyAgentRuntimePolicy(
container: Record<string, unknown>,
pathLabel: string,
changes: string[],
): void {
const legacy = getRecord(container.embeddedHarness);
if (!legacy) {
return;
if (getRecord(container.embeddedHarness) !== null) {
delete container.embeddedHarness;
changes.push(`Removed ${pathLabel}.embeddedHarness; runtime is now provider/model scoped.`);
}
const existing = getRecord(container.agentRuntime);
const next = existing ? structuredClone(existing) : {};
if (next.id === undefined && legacy.runtime !== undefined) {
next.id = legacy.runtime;
}
if (Object.keys(next).length > 0) {
container.agentRuntime = next;
}
delete container.embeddedHarness;
changes.push(`Moved ${pathLabel}.embeddedHarness → ${pathLabel}.agentRuntime.`);
}
function removeAgentRuntimeFallback(
container: Record<string, unknown>,
pathLabel: string,
changes: string[],
): void {
const runtime = getRecord(container.agentRuntime);
if (!runtime || !Object.prototype.hasOwnProperty.call(runtime, "fallback")) {
return;
}
delete runtime.fallback;
if (Object.keys(runtime).length > 0) {
container.agentRuntime = runtime;
} else {
if (getRecord(container.agentRuntime) !== null) {
delete container.agentRuntime;
changes.push(`Removed ${pathLabel}.agentRuntime; runtime is now provider/model scoped.`);
}
changes.push(`Removed ${pathLabel}.agentRuntime.fallback.`);
}
export const LEGACY_CONFIG_MIGRATIONS_RUNTIME_AGENTS: LegacyConfigMigrationSpec[] = [
@@ -257,15 +232,14 @@ export const LEGACY_CONFIG_MIGRATIONS_RUNTIME_AGENTS: LegacyConfigMigrationSpec[
},
}),
defineLegacyConfigMigration({
id: "agents.embeddedHarness->agentRuntime",
describe: "Move legacy embeddedHarness runtime policy to agentRuntime",
id: "agents.agentRuntime-ignored",
describe: "Remove ignored agent-wide runtime policy",
legacyRules: LEGACY_AGENT_RUNTIME_POLICY_RULES,
apply: (raw, changes) => {
const agents = getRecord(raw.agents);
const defaults = getRecord(agents?.defaults);
if (defaults) {
migrateLegacyAgentRuntimePolicy(defaults, "agents.defaults", changes);
removeAgentRuntimeFallback(defaults, "agents.defaults", changes);
removeLegacyAgentRuntimePolicy(defaults, "agents.defaults", changes);
}
if (!Array.isArray(agents?.list)) {
@@ -276,8 +250,7 @@ export const LEGACY_CONFIG_MIGRATIONS_RUNTIME_AGENTS: LegacyConfigMigrationSpec[
if (!agentRecord) {
continue;
}
migrateLegacyAgentRuntimePolicy(agentRecord, `agents.list.${index}`, changes);
removeAgentRuntimeFallback(agentRecord, `agents.list.${index}`, changes);
removeLegacyAgentRuntimePolicy(agentRecord, `agents.list.${index}`, changes);
}
},
}),

View File

@@ -1186,26 +1186,48 @@ describe("repairMissingConfiguredPluginInstalls", () => {
it.each([
[
"default agent runtime",
"default OpenAI model route",
{
agents: {
defaults: {
agentRuntime: { id: "codex" },
model: "openai/gpt-5.5",
},
},
},
{},
],
[
"agent runtime override",
"provider runtime policy",
{
agents: {
list: [{ id: "main", agentRuntime: { id: "codex" } }],
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: "codex" },
models: [],
},
},
},
},
{},
],
[
"agent model runtime policy",
{
agents: {
list: [
{
id: "main",
model: "anthropic/claude-opus-4-7",
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "codex" } },
},
},
],
},
},
{},
],
["environment runtime override", {}, { OPENCLAW_AGENT_RUNTIME: "codex" }],
])("repairs a missing Codex plugin selected by %s", async (_label, cfg, env) => {
mocks.installPluginFromNpmSpec.mockResolvedValueOnce({
ok: true,
@@ -1262,6 +1284,55 @@ describe("repairMissingConfiguredPluginInstalls", () => {
});
});
it.each([
[
"default agent runtime",
{
agents: {
defaults: {
agentRuntime: { id: "codex" },
},
},
},
{},
],
[
"agent runtime override",
{
agents: {
list: [{ id: "main", agentRuntime: { id: "codex" } }],
},
},
{},
],
["environment runtime override", {}, { OPENCLAW_AGENT_RUNTIME: "codex" }],
])("ignores legacy whole-agent Codex runtime selected by %s", async (_label, cfg, env) => {
mocks.listOfficialExternalPluginCatalogEntries.mockReturnValue([
{
id: "codex",
label: "Codex",
install: {
npmSpec: "@openclaw/codex",
defaultChoice: "npm",
},
},
]);
const { repairMissingConfiguredPluginInstalls } =
await import("./missing-configured-plugin-install.js");
const result = await repairMissingConfiguredPluginInstalls({
cfg,
env,
});
expect(mocks.installPluginFromNpmSpec).not.toHaveBeenCalled();
expect(mocks.writePersistedInstalledPluginIndexInstallRecords).not.toHaveBeenCalled();
expect(result).toEqual({
changes: [],
warnings: [],
});
});
it("does not install a blocked downloadable plugin from explicit channel ids", async () => {
mocks.listChannelPluginCatalogEntries.mockReturnValue([
{

View File

@@ -1,5 +1,6 @@
import { existsSync } from "node:fs";
import path from "node:path";
import { collectConfiguredAgentHarnessRuntimes } from "../../../agents/harness-runtimes.js";
import {
listExplicitlyDisabledChannelIdsForConfig,
listPotentialConfiguredChannelIds,
@@ -108,13 +109,8 @@ function addConfiguredAgentRuntimePluginIds(
cfg: OpenClawConfig,
env?: NodeJS.ProcessEnv,
): void {
addConfiguredPluginId(ids, env?.OPENCLAW_AGENT_RUNTIME);
const agents = asObjectRecord(cfg.agents);
const defaults = asObjectRecord(agents?.defaults);
addConfiguredPluginId(ids, asObjectRecord(defaults?.agentRuntime)?.id);
const list = Array.isArray(agents?.list) ? agents.list : [];
for (const entry of list) {
addConfiguredPluginId(ids, asObjectRecord(asObjectRecord(entry)?.agentRuntime)?.id);
for (const runtime of collectConfiguredAgentHarnessRuntimes(cfg, env ?? process.env)) {
addConfiguredPluginId(ids, runtime);
}
}

View File

@@ -74,9 +74,10 @@ describe("sessionsCommand model resolution", () => {
setMockSessionsConfig(() => ({
agents: {
defaults: {
agentRuntime: { id: "claude-cli" },
model: { primary: "anthropic/claude-opus-4-7" },
models: { "anthropic/claude-opus-4-7": {} },
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
},
contextTokens: 200_000,
},
},
@@ -100,7 +101,7 @@ describe("sessionsCommand model resolution", () => {
expect(session?.model).toBe("claude-opus-4-7");
expect(session?.agentRuntime).toEqual({
id: "claude-cli",
source: "defaults",
source: "model",
});
});
@@ -108,9 +109,10 @@ describe("sessionsCommand model resolution", () => {
setMockSessionsConfig(() => ({
agents: {
defaults: {
agentRuntime: { id: "claude-cli" },
model: { primary: "openai/gpt-5.4" },
models: { "anthropic/claude-opus-4-7": {} },
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
},
contextTokens: 200_000,
},
},

View File

@@ -57,9 +57,10 @@ describe("sessionsCommand", () => {
setMockSessionsConfig(() => ({
agents: {
defaults: {
agentRuntime: { id: "claude-cli" },
model: { primary: "anthropic/claude-opus-4-7" },
models: { "anthropic/claude-opus-4-7": {} },
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
},
contextTokens: 200_000,
},
},
@@ -92,9 +93,10 @@ describe("sessionsCommand", () => {
setMockSessionsConfig(() => ({
agents: {
defaults: {
agentRuntime: { id: "claude-cli" },
model: { primary: "anthropic/claude-opus-4-7" },
models: { "anthropic/claude-opus-4-7": {} },
models: {
"anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
},
contextTokens: 200_000,
},
},

View File

@@ -1,6 +1,5 @@
import { resolveAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import { resolveModelAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import { DEFAULT_CONTEXT_TOKENS } from "../agents/defaults.js";
import { selectAgentHarness } from "../agents/harness/selection.js";
import { getRuntimeConfig } from "../config/config.js";
import { loadSessionStore, resolveSessionTotalTokens } from "../config/sessions.js";
import type { SessionEntry } from "../config/sessions/types.js";
@@ -34,7 +33,7 @@ import {
type SessionRow = SessionDisplayRow & {
agentId: string;
kind: "cron" | "direct" | "group" | "global" | "unknown";
agentRuntime: ReturnType<typeof resolveAgentRuntimeMetadata>;
agentRuntime: ReturnType<typeof resolveModelAgentRuntimeMetadata>;
runtimeLabel: string;
};
@@ -172,42 +171,14 @@ const formatKindCell = (kind: SessionRow["kind"], rich: boolean) => {
function resolveSessionRuntimeLabel(params: {
cfg: OpenClawConfig;
entry: SessionEntry;
agentRuntime: ReturnType<typeof resolveAgentRuntimeMetadata>;
agentRuntime: ReturnType<typeof resolveModelAgentRuntimeMetadata>;
modelProvider: string;
model: string;
agentId: string;
sessionKey: string;
}): string {
const explicitRuntime =
normalizeOptionalLowercaseString(params.entry.agentRuntimeOverride) ??
normalizeOptionalLowercaseString(params.entry.agentHarnessId) ??
(params.agentRuntime.source === "implicit"
? undefined
: normalizeOptionalLowercaseString(params.agentRuntime.id));
if (explicitRuntime && explicitRuntime !== "auto" && explicitRuntime !== "default") {
return resolveAgentRuntimeLabel({
config: params.cfg,
sessionEntry: params.entry,
resolvedHarness: explicitRuntime,
fallbackProvider: params.modelProvider,
});
}
let resolvedHarness: string | undefined;
try {
const selected = selectAgentHarness({
provider: params.modelProvider,
modelId: params.model,
config: params.cfg,
agentId: params.agentId,
sessionKey: params.sessionKey,
agentHarnessId: params.entry.agentHarnessId,
});
const id = normalizeOptionalLowercaseString(selected.id);
resolvedHarness = id && id !== "pi" ? id : undefined;
} catch {
resolvedHarness = undefined;
}
const id = normalizeOptionalLowercaseString(params.agentRuntime.id);
const resolvedHarness = id && id !== "pi" && id !== "auto" ? id : undefined;
return resolveAgentRuntimeLabel({
config: params.cfg,
sessionEntry: params.entry,
@@ -291,7 +262,13 @@ export async function sessionsCommand(
const row = toSessionDisplayRow(key, entry);
const agentId = parseAgentSessionKey(row.key)?.agentId ?? target.agentId;
const modelRef = resolveSessionDisplayModelRef(cfg, row);
const agentRuntime = resolveAgentRuntimeMetadata(cfg, agentId);
const agentRuntime = resolveModelAgentRuntimeMetadata({
cfg,
agentId,
provider: modelRef.provider,
model: modelRef.model,
sessionKey: row.key,
});
return Object.assign({}, row, {
agentId,
agentRuntime,

View File

@@ -51,14 +51,13 @@ describe("statusSummaryRuntime.classifySessionKey", () => {
});
describe("statusSummaryRuntime.resolveSessionRuntimeLabel", () => {
it("uses the shared /status runtime labels for persisted harness metadata", () => {
it("uses the shared /status runtime label for the implicit OpenAI Codex route", () => {
expect(
statusSummaryRuntime.resolveSessionRuntimeLabel({
cfg: {} as never,
entry: {
sessionId: "session-1",
updatedAt: 0,
agentRuntimeOverride: "codex",
},
provider: "openai",
model: "gpt-5.5",
@@ -67,13 +66,15 @@ describe("statusSummaryRuntime.resolveSessionRuntimeLabel", () => {
).toBe("OpenAI Codex");
});
it("preserves configured default CLI runtimes when sessions lack persisted harness metadata", () => {
it("preserves configured default model CLI runtimes", () => {
expect(
statusSummaryRuntime.resolveSessionRuntimeLabel({
cfg: {
agents: {
defaults: {
agentRuntime: { id: "claude-cli" },
models: {
"anthropic/claude-sonnet-4-6": { agentRuntime: { id: "claude-cli" } },
},
},
},
} as never,
@@ -88,18 +89,22 @@ describe("statusSummaryRuntime.resolveSessionRuntimeLabel", () => {
).toBe("Claude CLI");
});
it("preserves configured agent runtimes before harness selection", () => {
it("preserves configured agent model runtimes before harness selection", () => {
expect(
statusSummaryRuntime.resolveSessionRuntimeLabel({
cfg: {
agents: {
defaults: {
agentRuntime: { id: "pi" },
models: {
"openai/gpt-5.5": { agentRuntime: { id: "pi" } },
},
},
list: [
{
id: "research",
agentRuntime: { id: "codex" },
models: {
"openai/gpt-5.5": { agentRuntime: { id: "codex" } },
},
},
],
},

View File

@@ -1,7 +1,6 @@
import { resolveAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import { resolveModelAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import { resolveConfiguredProviderFallback } from "../agents/configured-provider-fallback.js";
import { DEFAULT_CONTEXT_TOKENS, DEFAULT_MODEL, DEFAULT_PROVIDER } from "../agents/defaults.js";
import { selectAgentHarness } from "../agents/harness/selection.js";
import { parseModelRef, resolvePersistedSelectedModelRef } from "../agents/model-selection.js";
import { normalizeProviderId } from "../agents/provider-id.js";
import { resolveAgentModelPrimaryValue } from "../config/model-input.js";
@@ -178,37 +177,15 @@ function resolveSessionRuntimeLabel(params: {
agentId?: string;
sessionKey: string;
}): string {
const agentRuntime = resolveAgentRuntimeMetadata(params.cfg, params.agentId ?? "");
const explicitRuntime =
normalizeOptionalLowercaseString(params.entry?.agentRuntimeOverride) ??
normalizeOptionalLowercaseString(params.entry?.agentHarnessId) ??
(agentRuntime.source === "implicit"
? undefined
: normalizeOptionalLowercaseString(agentRuntime.id));
if (explicitRuntime && explicitRuntime !== "auto" && explicitRuntime !== "default") {
return resolveAgentRuntimeLabel({
config: params.cfg,
sessionEntry: params.entry,
resolvedHarness: explicitRuntime,
fallbackProvider: params.provider,
});
}
let resolvedHarness: string | undefined;
try {
const selected = selectAgentHarness({
provider: params.provider,
modelId: params.model,
config: params.cfg,
agentId: params.agentId,
sessionKey: params.sessionKey,
agentHarnessId: params.entry?.agentHarnessId,
});
const id = normalizeOptionalLowercaseString(selected.id);
resolvedHarness = id && id !== "pi" ? id : undefined;
} catch {
resolvedHarness = undefined;
}
const runtime = resolveModelAgentRuntimeMetadata({
cfg: params.cfg,
agentId: params.agentId ?? "",
provider: params.provider,
model: params.model,
sessionKey: params.sessionKey,
});
const id = normalizeOptionalLowercaseString(runtime.id);
const resolvedHarness = id && id !== "pi" && id !== "auto" ? id : undefined;
return resolveAgentRuntimeLabel({
config: params.cfg,
sessionEntry: params.entry,

View File

@@ -1,6 +1,10 @@
import { normalizeProviderId } from "../agents/provider-id.js";
import { normalizeGooglePreviewModelId } from "../plugin-sdk/provider-model-id-normalize.js";
import { normalizeOptionalString, resolvePrimaryStringValue } from "../shared/string-coerce.js";
import {
normalizeLowercaseStringOrEmpty,
normalizeOptionalString,
resolvePrimaryStringValue,
} from "../shared/string-coerce.js";
import type { AgentModelConfig } from "./types.agents-shared.js";
type AgentModelListLike = {
@@ -20,7 +24,9 @@ function modelKeyForConfig(provider: string, model: string): string {
if (!modelId) {
return providerId;
}
return modelId.toLowerCase().startsWith(`${providerId.toLowerCase()}/`)
return normalizeLowercaseStringOrEmpty(modelId).startsWith(
`${normalizeLowercaseStringOrEmpty(providerId)}/`,
)
? modelId
: `${providerId}/${modelId}`;
}

View File

@@ -675,13 +675,17 @@ describe("applyPluginAutoEnable core", () => {
]);
});
it("auto-enables an opt-in plugin when an agent runtime is configured", () => {
it("auto-enables an opt-in plugin when a provider runtime is configured", () => {
const result = applyPluginAutoEnable({
config: {
agents: {
defaults: {
agentRuntime: {
id: "codex",
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
models: [],
agentRuntime: {
id: "codex",
},
},
},
},
@@ -702,13 +706,17 @@ describe("applyPluginAutoEnable core", () => {
expect(result.changes).toContain("codex agent runtime configured, enabled automatically.");
});
it("auto-enables a CLI backend owner when an agent runtime is configured", () => {
it("auto-enables a CLI backend owner when a provider runtime is configured", () => {
const result = applyPluginAutoEnable({
config: {
agents: {
defaults: {
agentRuntime: {
id: "claude-cli",
models: {
providers: {
anthropic: {
baseUrl: "https://api.anthropic.com",
models: [],
agentRuntime: {
id: "claude-cli",
},
},
},
},
@@ -732,7 +740,7 @@ describe("applyPluginAutoEnable core", () => {
expect(result.changes).toContain("claude-cli agent runtime configured, enabled automatically.");
});
it("auto-enables an opt-in plugin when an agent harness runtime is forced by env", () => {
it("ignores agent harness runtime env when auto-enabling plugins", () => {
const result = applyPluginAutoEnable({
config: {},
env: makeIsolatedEnv({ OPENCLAW_AGENT_RUNTIME: "codex" }),
@@ -747,8 +755,8 @@ describe("applyPluginAutoEnable core", () => {
]),
});
expect(result.config.plugins?.entries?.codex?.enabled).toBe(true);
expect(result.changes).toContain("codex agent runtime configured, enabled automatically.");
expect(result.config.plugins?.entries?.codex?.enabled).toBeUndefined();
expect(result.changes).not.toContain("codex agent runtime configured, enabled automatically.");
});
it("skips auto-enable work for configs without channel or plugin-owned surfaces", () => {

View File

@@ -885,6 +885,10 @@ export const FIELD_HELP: Record<string, string> = {
"Static HTTP headers merged into provider requests for tenant routing, proxy auth, or custom gateway requirements. Use this sparingly and keep sensitive header values in secrets.",
"models.providers.*.authHeader":
"When true, credentials are sent via the HTTP Authorization header even if alternate auth is possible. Use this only when your provider or proxy explicitly requires Authorization forwarding.",
"models.providers.*.agentRuntime":
"Optional low-level agent runtime policy for this provider. Use provider/model runtime policy instead of agent-wide runtime pins; omitted/default lets OpenClaw choose the runtime for the selected provider.",
"models.providers.*.agentRuntime.id":
'Provider agent runtime id: "pi", "auto", a registered plugin harness id such as "codex", or a supported CLI backend alias such as "claude-cli". OpenAI on the official endpoint defaults to the Codex harness when omitted.',
"models.providers.*.request":
"Optional request overrides for model-provider requests, including extra headers, auth overrides, proxy routing, TLS client settings, and optional allowPrivateNetwork for trusted self-hosted endpoints. Use these only when your upstream or enterprise network path requires transport customization.",
"models.providers.*.request.headers":
@@ -939,6 +943,10 @@ export const FIELD_HELP: Record<string, string> = {
"When true, allow HTTPS to the model base URL when DNS resolves to private, CGNAT, or similar ranges, via the provider HTTP fetch guard (fetchWithSsrFGuard). OpenAI Responses WebSocket reuses request for headers/TLS but does not use that fetch SSRF path. Use only for operator-controlled self-hosted OpenAI-compatible endpoints (LAN, overlay, split DNS). Default is false.",
"models.providers.*.models":
"Declared model list for a provider including identifiers, metadata, provider-specific params, and optional compatibility/cost hints. Keep IDs exact to provider catalog values so selection and fallback resolve correctly.",
"models.providers.*.models[].agentRuntime":
"Optional low-level agent runtime policy for this specific model. Model runtime policy overrides the provider runtime policy.",
"models.providers.*.models[].agentRuntime.id":
'Model agent runtime id: "pi", "auto", a registered plugin harness id such as "codex", or a supported CLI backend alias such as "claude-cli".',
auth: "Authentication profile root used for multi-profile provider credentials and cooldown-based failover ordering. Keep profiles minimal and explicit so automatic failover behavior stays auditable.",
"channels.matrix.allowBots":
'Allow messages from other configured Matrix bot accounts to trigger replies (default: false). Set "mentions" to only accept bot messages that visibly mention this bot.',
@@ -1015,6 +1023,10 @@ export const FIELD_HELP: Record<string, string> = {
'Include absolute timestamps in message envelopes ("on" or "off").',
"agents.defaults.envelopeElapsed": 'Include elapsed time in message envelopes ("on" or "off").',
"agents.defaults.models": "Configured model catalog (keys are full provider/model IDs).",
"agents.defaults.models.*.agentRuntime":
"Optional per-model runtime policy for the default agent. Use this for model-specific runtime exceptions instead of setting a whole-agent runtime.",
"agents.defaults.models.*.agentRuntime.id":
'Default-agent model runtime id: "pi", "auto", a registered plugin harness id such as "codex", or a supported CLI backend alias such as "claude-cli".',
"agents.defaults.memorySearch":
"Vector search over MEMORY.md and memory/*.md (per-agent overrides supported).",
"agents.defaults.memorySearch.enabled":
@@ -1263,19 +1275,26 @@ export const FIELD_HELP: Record<string, string> = {
"agents.defaults.model.fallbacks":
"Ordered fallback models (provider/model). Used when the primary model fails.",
"agents.defaults.agentRuntime":
"Default agent runtime policy. Omitted id uses built-in OpenClaw Pi. Use id=auto for plugin harness selection, a registered harness id such as codex, or a supported CLI backend alias such as claude-cli.",
"Legacy whole-agent runtime policy. It is ignored by runtime selection; configure runtime policy on a provider or model instead. Run openclaw doctor --fix to remove stale values.",
"agents.defaults.agentRuntime.id":
"Agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id uses built-in OpenClaw Pi.",
"Legacy whole-agent runtime id. It is ignored by runtime selection; configure models.providers.<provider>.agentRuntime.id or a model-specific agentRuntime.id instead.",
"agents.defaults.embeddedHarness":
"Legacy input for agents.defaults.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
"agents.defaults.embeddedHarness.runtime": "Legacy input for agents.defaults.agentRuntime.id.",
"Legacy whole-agent embedded harness input. Run openclaw doctor --fix to remove it and use provider/model runtime policy where needed.",
"agents.defaults.embeddedHarness.runtime":
"Legacy whole-agent embedded harness runtime. Runtime selection ignores it; use provider/model runtime policy.",
"agents.list.*.models": "Per-agent model catalog overrides keyed by full provider/model IDs.",
"agents.list.*.models.*.agentRuntime":
"Optional per-model runtime policy for this agent. Use this for agent-specific model exceptions instead of setting a whole-agent runtime.",
"agents.list.*.models.*.agentRuntime.id":
'Per-agent model runtime id: "pi", "auto", a registered plugin harness id such as "codex", or a supported CLI backend alias such as "claude-cli".',
"agents.list.*.agentRuntime":
"Per-agent agent runtime policy override. Use id=codex to force Codex for one agent while defaults stay in auto mode.",
"Legacy per-agent runtime policy. It is ignored by runtime selection; configure provider/model runtime policy instead. Run openclaw doctor --fix to remove stale values.",
"agents.list.*.agentRuntime.id":
"Per-agent agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id inherits the default OpenClaw Pi behavior.",
"Legacy per-agent runtime id. It is ignored by runtime selection; configure a provider/model runtime id instead.",
"agents.list.*.embeddedHarness":
"Legacy input for agents.list.*.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
"agents.list.*.embeddedHarness.runtime": "Legacy input for agents.list.*.agentRuntime.id.",
"Legacy per-agent embedded harness input. Run openclaw doctor --fix to remove it and use provider/model runtime policy where needed.",
"agents.list.*.embeddedHarness.runtime":
"Legacy per-agent embedded harness runtime. Runtime selection ignores it; use provider/model runtime policy.",
"agents.defaults.imageModel.primary":
"Optional image model (provider/model) used when the primary model lacks image input.",
"agents.defaults.imageModel.fallbacks": "Ordered fallback image models (provider/model).",
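
Taken together, the help-text entries above describe the new policy surface: runtime policy lives on providers and models, while the whole-agent pins become legacy no-ops. A hedged config sketch (only the key paths come from the entries above; the model metadata values are illustrative):

```typescript
// Illustrative config shape for the provider/model runtime-policy fields.
const config = {
  models: {
    providers: {
      openai: {
        baseUrl: "https://api.openai.com/v1",
        // Provider-level default runtime for models under this provider.
        agentRuntime: { id: "codex" },
        models: [
          {
            id: "gpt-5.4",
            // Model-level policy overrides the provider-level policy.
            agentRuntime: { id: "pi" },
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      // Legacy whole-agent pin: ignored by runtime selection after this change;
      // openclaw doctor --fix removes stale values.
      // agentRuntime: { id: "codex" },
      models: {
        // Per-model exception for the default agent.
        "anthropic/claude-opus-4-6": { agentRuntime: { id: "claude-cli" } },
      },
    },
  },
};
```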

View File

@@ -86,8 +86,8 @@ export const FIELD_LABELS: Record<string, string> = {
"agents.defaults.contextLimits.memoryGetDefaultLines": "Default memory_get Line Window",
"agents.defaults.contextLimits.toolResultMaxChars": "Default Tool Result Max Chars",
"agents.defaults.contextLimits.postCompactionMaxChars": "Default Post-compaction Max Chars",
"agents.defaults.agentRuntime": "Default Agent Runtime Settings",
"agents.defaults.agentRuntime.id": "Default Agent Runtime",
"agents.defaults.agentRuntime": "Legacy Default Agent Runtime",
"agents.defaults.agentRuntime.id": "Legacy Default Agent Runtime ID",
"agents.defaults.embeddedHarness": "Default Legacy Embedded Harness Settings",
"agents.defaults.embeddedHarness.runtime": "Default Legacy Embedded Harness Runtime",
"agents.list": "Agent List",
@@ -98,8 +98,11 @@ export const FIELD_LABELS: Record<string, string> = {
"agents.list[].contextLimits.memoryGetDefaultLines": "Agent memory_get Line Window",
"agents.list[].contextLimits.toolResultMaxChars": "Agent Tool Result Max Chars",
"agents.list[].contextLimits.postCompactionMaxChars": "Agent Post-compaction Max Chars",
"agents.list.*.agentRuntime": "Agent Runtime",
"agents.list.*.agentRuntime.id": "Agent Runtime",
"agents.list.*.models": "Agent Model Overrides",
"agents.list.*.models.*.agentRuntime": "Agent Model Runtime",
"agents.list.*.models.*.agentRuntime.id": "Agent Model Runtime ID",
"agents.list.*.agentRuntime": "Legacy Agent Runtime",
"agents.list.*.agentRuntime.id": "Legacy Agent Runtime ID",
"agents.list.*.embeddedHarness": "Agent Legacy Embedded Harness",
"agents.list.*.embeddedHarness.runtime": "Agent Legacy Embedded Harness Runtime",
gateway: "Gateway",
@@ -538,6 +541,8 @@ export const FIELD_LABELS: Record<string, string> = {
"models.providers.*.params": "Model Provider Runtime Parameters",
"models.providers.*.headers": "Model Provider Headers",
"models.providers.*.authHeader": "Model Provider Authorization Header",
"models.providers.*.agentRuntime": "Model Provider Runtime",
"models.providers.*.agentRuntime.id": "Model Provider Runtime ID",
"models.providers.*.request": "Model Provider Request Overrides",
"models.providers.*.request.headers": "Model Provider Request Headers",
"models.providers.*.request.auth": "Model Provider Request Auth Override",
@@ -566,6 +571,8 @@ export const FIELD_LABELS: Record<string, string> = {
"models.providers.*.request.tls.insecureSkipVerify": "Model Provider Request TLS Skip Verify",
"models.providers.*.request.allowPrivateNetwork": "Model Provider Request Allow Private Network",
"models.providers.*.models": "Model Provider Model List",
"models.providers.*.models[].agentRuntime": "Model Runtime",
"models.providers.*.models[].agentRuntime.id": "Model Runtime ID",
"auth.cooldowns.billingBackoffHours": "Billing Backoff (hours)",
"auth.cooldowns.billingBackoffHoursByProvider": "Billing Backoff Overrides",
"auth.cooldowns.billingMaxHours": "Billing Backoff Cap (hours)",
@@ -576,6 +583,8 @@ export const FIELD_LABELS: Record<string, string> = {
"auth.cooldowns.overloadedBackoffMs": "Overloaded Backoff (ms)",
"auth.cooldowns.rateLimitedProfileRotations": "Rate-Limited Profile Rotations",
"agents.defaults.models": "Models",
"agents.defaults.models.*.agentRuntime": "Default Agent Model Runtime",
"agents.defaults.models.*.agentRuntime.id": "Default Agent Model Runtime ID",
"agents.defaults.model.primary": "Primary Model",
"agents.defaults.model.fallbacks": "Model Fallbacks",
"agents.defaults.imageModel.primary": "Image Model",

View File

@@ -34,6 +34,8 @@ export type AgentModelEntryConfig = {
alias?: string;
/** Provider-specific API parameters (e.g., GLM-4.7 thinking mode). */
params?: Record<string, unknown>;
/** Optional agent execution runtime for this specific provider/model entry. */
agentRuntime?: AgentRuntimePolicyConfig;
/** Enable streaming for this model (default: true, false for Ollama to avoid SDK issue #1205). */
streaming?: boolean;
};

View File

@@ -2,6 +2,7 @@ import type { ChatType } from "../channels/chat-type.js";
import type {
AgentContextLimitsConfig,
AgentDefaultsConfig,
AgentModelEntryConfig,
EmbeddedPiExecutionContract,
} from "./types.agent-defaults.js";
import type {
@@ -86,6 +87,8 @@ export type AgentConfig = {
/** @deprecated Use agentRuntime. */
embeddedHarness?: AgentEmbeddedHarnessConfig;
model?: AgentModelConfig;
/** Per-model metadata overrides for this agent. */
models?: Record<string, AgentModelEntryConfig>;
/** Optional per-agent default thinking level (overrides agents.defaults.thinkingDefault). */
thinkingDefault?: "off" | "minimal" | "low" | "medium" | "high" | "xhigh" | "adaptive" | "max";
/** Optional per-agent default verbosity level. */

View File

@@ -3,6 +3,7 @@ import type {
OpenAICompletionsCompat,
OpenAIResponsesCompat,
} from "@mariozechner/pi-ai";
import type { AgentRuntimePolicyConfig } from "./types.agents-shared.js";
import type { ConfiguredModelProviderRequest } from "./types.provider-request.js";
import type { SecretInput } from "./types.secrets.js";
@@ -109,6 +110,8 @@ export type ModelDefinitionConfig = {
maxTokens: number;
/** Provider-specific request/runtime parameters passed through to provider plugins. */
params?: Record<string, unknown>;
/** Optional agent execution runtime override for this provider/model pair. */
agentRuntime?: AgentRuntimePolicyConfig;
headers?: Record<string, string>;
compat?: ModelCompatConfig;
metadataSource?: "models-add";
@@ -126,6 +129,8 @@ export type ModelProviderConfig = {
injectNumCtxForOpenAICompat?: boolean;
/** Provider-specific runtime parameters interpreted by provider plugins. */
params?: Record<string, unknown>;
/** Optional default agent execution runtime for models under this provider. */
agentRuntime?: AgentRuntimePolicyConfig;
headers?: Record<string, SecretInput>;
authHeader?: boolean;
request?: ConfiguredModelProviderRequest;

View File

@@ -70,6 +70,7 @@ export const AgentDefaultsSchema = z
alias: z.string().optional(),
/** Provider-specific API parameters (e.g., GLM-4.7 thinking mode). */
params: z.record(z.string(), z.unknown()).optional(),
agentRuntime: AgentRuntimePolicySchema,
/** Enable streaming for this model (default: true, false for Ollama to avoid SDK issue #1205). */
streaming: z.boolean().optional(),
})

View File

@@ -847,6 +847,19 @@ export const AgentEntrySchema = z
agentRuntime: AgentRuntimePolicySchema,
embeddedHarness: AgentEmbeddedHarnessSchema,
model: AgentModelSchema.optional(),
models: z
.record(
z.string(),
z
.object({
alias: z.string().optional(),
params: z.record(z.string(), z.unknown()).optional(),
agentRuntime: AgentRuntimePolicySchema,
streaming: z.boolean().optional(),
})
.strict(),
)
.optional(),
thinkingDefault: z
.enum(["off", "minimal", "low", "medium", "high", "xhigh", "adaptive", "max"])
.optional(),

View File

@@ -305,6 +305,13 @@ const ConfiguredModelProviderRequestSchema = z
.strict()
.optional();
const ModelAgentRuntimePolicySchema = z
.object({
id: z.string().optional(),
})
.strict()
.optional();
const ModelDefinitionSchema = z
.object({
id: z.string().min(1),
@@ -343,6 +350,7 @@ const ModelDefinitionSchema = z
contextTokens: z.number().int().positive().optional(),
maxTokens: z.number().positive().optional(),
params: z.record(z.string(), z.unknown()).optional(),
agentRuntime: ModelAgentRuntimePolicySchema,
headers: z.record(z.string(), z.string()).optional(),
compat: ModelCompatSchema,
metadataSource: z.literal("models-add").optional(),
@@ -363,6 +371,7 @@ const ModelProviderSchema = z
timeoutSeconds: z.number().int().positive().optional(),
injectNumCtxForOpenAICompat: z.boolean().optional(),
params: z.record(z.string(), z.unknown()).optional(),
agentRuntime: ModelAgentRuntimePolicySchema,
headers: z.record(z.string(), SecretInputSchema.register(sensitive)).optional(),
authHeader: z.boolean().optional(),
request: ConfiguredModelProviderRequestSchema,

View File

@@ -141,6 +141,7 @@ export function createCronPromptExecutor(params: {
provider: providerOverride,
cfg: params.cfgWithAgentDefaults,
agentId: params.agentId,
modelId: modelOverride,
}) ?? providerOverride;
const bootstrapPromptWarningSignature =
bootstrapPromptWarningSignaturesSeen[bootstrapPromptWarningSignaturesSeen.length - 1];

View File

@@ -84,11 +84,14 @@ describe("runCronIsolatedAgentTurn — payload.fallbacks", () => {
cfg: {
agents: {
defaults: {
agentRuntime: { id: "claude-cli" },
model: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["anthropic/claude-sonnet-4-6"],
},
models: {
"anthropic/claude-opus-4-6": { agentRuntime: { id: "claude-cli" } },
"anthropic/claude-sonnet-4-6": { agentRuntime: { id: "claude-cli" } },
},
},
},
},

View File

@@ -746,8 +746,6 @@ async function prepareCronRunContext(params: {
agentId,
sessionKey: agentSessionKey,
}).runtime,
sessionAgentHarnessId: cronSession.sessionEntry.agentHarnessId,
sessionAgentRuntimeOverride: cronSession.sessionEntry.agentRuntimeOverride,
}),
agentDir,
sessionEntry: cronSession.sessionEntry,

View File

@@ -48,6 +48,8 @@ export const AgentSummarySchema = Type.Object(
Type.Literal("env"),
Type.Literal("agent"),
Type.Literal("defaults"),
Type.Literal("model"),
Type.Literal("provider"),
Type.Literal("implicit"),
]),
},

View File

@@ -2,7 +2,7 @@ import { randomUUID } from "node:crypto";
import fs from "node:fs";
import path from "node:path";
import { CURRENT_SESSION_VERSION } from "@mariozechner/pi-coding-agent";
import { resolveAgentRuntimeMetadata } from "../../agents/agent-runtime-metadata.js";
import { resolveModelAgentRuntimeMetadata } from "../../agents/agent-runtime-metadata.js";
import {
listAgentIds,
resolveAgentWorkspaceDir,
@@ -1601,7 +1601,13 @@ export const sessionsHandlers: GatewayRequestHandlers = {
provider: resolved.provider,
model: resolved.model,
});
const agentRuntime = resolveAgentRuntimeMetadata(cfg, agentId);
const agentRuntime = resolveModelAgentRuntimeMetadata({
cfg,
agentId,
provider: resolvedDisplayModel.provider,
model: resolvedDisplayModel.model,
sessionKey: target.canonicalKey ?? key,
});
const result: SessionsPatchResult = {
ok: true,
path: storePath,

View File

@@ -291,7 +291,7 @@ test("session:patch hook mutations cannot change the response path", async () =>
expect(patched.payload?.resolved).toEqual({
modelProvider: "anthropic",
model: "claude-opus-4-6",
agentRuntime: { id: "pi", source: "implicit" },
agentRuntime: { id: "auto", source: "implicit" },
});
expect(patched.payload?.entry.label).toBe("cfg-isolation");

View File

@@ -352,7 +352,7 @@ test("lists and patches session store via sessions.* RPC", async () => {
expect(modelPatched.payload?.resolved?.modelProvider).toBe("openai");
expect(modelPatched.payload?.resolved?.model).toBe("gpt-test-a");
expect(modelPatched.payload?.resolved?.agentRuntime).toEqual({
id: "pi",
id: "codex",
source: "implicit",
});
@@ -370,7 +370,7 @@ test("lists and patches session store via sessions.* RPC", async () => {
);
expect(mainAfterModelPatch?.modelProvider).toBe("openai");
expect(mainAfterModelPatch?.model).toBe("gpt-test-a");
expect(mainAfterModelPatch?.agentRuntime).toEqual({ id: "pi", source: "implicit" });
expect(mainAfterModelPatch?.agentRuntime).toEqual({ id: "codex", source: "implicit" });
const compacted = await directSessionReq<{ ok: true; compacted: boolean }>("sessions.compact", {
key: "agent:main:main",

View File

@@ -58,15 +58,19 @@ function createSingleAgentAvatarConfig(workspace: string): OpenClawConfig {
function createModelDefaultsConfig(params: {
primary: string;
models?: Record<string, Record<string, never>>;
models?: Record<string, { agentRuntime?: { id: string } }>;
agentRuntime?: { id: string };
}): OpenClawConfig {
return {
agents: {
defaults: {
model: { primary: params.primary },
models: params.models,
agentRuntime: params.agentRuntime,
models: {
...params.models,
...(params.agentRuntime
? { [params.primary]: { agentRuntime: params.agentRuntime } }
: {}),
},
},
},
} as OpenClawConfig;
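
The updated helper above folds a legacy whole-agent runtime into a per-model entry keyed by the primary model. The merge in isolation (extracted as a hypothetical standalone function for clarity):

```typescript
// Sketch of the models-map merge in createModelDefaultsConfig: a legacy
// whole-agent runtime becomes an entry keyed by the primary model id, and
// an explicit entry under the same key would win via spread ordering.
function mergeModels(params: {
  primary: string;
  models?: Record<string, { agentRuntime?: { id: string } }>;
  agentRuntime?: { id: string };
}): Record<string, { agentRuntime?: { id: string } }> {
  return {
    ...params.models,
    ...(params.agentRuntime
      ? { [params.primary]: { agentRuntime: params.agentRuntime } }
      : {}),
  };
}
```

So `mergeModels({ primary: "openai/gpt-5.4", agentRuntime: { id: "codex" } })` yields a map with a single `"openai/gpt-5.4"` entry carrying the `codex` policy, matching the new provider/model-scoped shape the tests assert.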
@@ -1049,9 +1053,8 @@ describe("gateway session utils", () => {
primary: "openai/gpt-5.4",
fallbacks: ["openai-codex/gpt-5.4"],
},
agentRuntime: { id: "pi" },
},
list: [{ id: "main", default: true, agentRuntime: { id: "claude-cli" } }],
list: [{ id: "main", default: true }],
},
} as OpenClawConfig;
@@ -1064,8 +1067,8 @@ describe("gateway session utils", () => {
fallbacks: ["openai-codex/gpt-5.4"],
},
agentRuntime: {
id: "claude-cli",
source: "agent",
id: "codex",
source: "implicit",
},
});
});
@@ -1073,9 +1076,18 @@ describe("gateway session utils", () => {
test("listAgentsForGateway reports explicit plugin runtime metadata", () => {
const cfg = {
session: { mainKey: "main" },
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
agentRuntime: { id: "codex" },
models: [],
},
},
},
agents: {
defaults: {
agentRuntime: { id: "codex" },
model: { primary: "openai/gpt-5.4" },
},
list: [{ id: "main", default: true }],
},
@@ -1086,7 +1098,7 @@ describe("gateway session utils", () => {
id: "main",
agentRuntime: {
id: "codex",
source: "defaults",
source: "provider",
},
});
});
@@ -1312,7 +1324,7 @@ describe("listSessionsFromStore selected model display", () => {
lastMessagePreview: "last 0",
}),
);
expect(listed.sessions[0]?.agentRuntime).toEqual({ id: "pi", source: "implicit" });
expect(listed.sessions[0]?.agentRuntime).toEqual({ id: "codex", source: "implicit" });
expect(listed.sessions[0]?.thinkingLevel).toBeUndefined();
expect(listed.sessions[0]?.thinkingLevels?.length).toBeGreaterThan(0);
expect(listed.sessions[0]?.thinkingOptions?.length).toBeGreaterThan(0);
@@ -1441,7 +1453,7 @@ describe("listSessionsFromStore selected model display", () => {
expect(result.sessions[0]?.model).toBe("claude-opus-4-7");
expect(result.sessions[0]?.agentRuntime).toEqual({
id: "claude-cli",
source: "defaults",
source: "model",
});
});
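
The expectations above imply a precedence order for `resolveModelAgentRuntimeMetadata`: model-level policy beats provider-level policy, and absent both, the implicit default is `codex` for OpenAI on the official endpoint and `auto` elsewhere. A hypothetical sketch of that precedence (names and shapes are assumptions, not the project's actual implementation):

```typescript
type RuntimePolicy = { id?: string };
type RuntimeMetadata = { id: string; source: "model" | "provider" | "implicit" };

// Precedence implied by the tests: model > provider > implicit default.
function resolveRuntimeSketch(params: {
  provider: string;
  modelPolicy?: RuntimePolicy;
  providerPolicy?: RuntimePolicy;
}): RuntimeMetadata {
  if (params.modelPolicy?.id) {
    return { id: params.modelPolicy.id, source: "model" };
  }
  if (params.providerPolicy?.id) {
    return { id: params.providerPolicy.id, source: "provider" };
  }
  // Implicit default: OpenAI maps to the Codex harness; every other
  // provider stays on automatic PI routing.
  return {
    id: params.provider === "openai" ? "codex" : "auto",
    source: "implicit",
  };
}
```

This is why a mixed-provider agent keeps automatic routing after the migration: switching the session to an Anthropic or MiniMax model no longer inherits a stale whole-agent Codex pin.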

View File

@@ -1,6 +1,6 @@
import fs from "node:fs";
import path from "node:path";
import { resolveAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import { resolveModelAgentRuntimeMetadata } from "../agents/agent-runtime-metadata.js";
import {
listAgentIds,
resolveAgentConfig,
@@ -1012,13 +1012,20 @@ export function listAgentsForGateway(cfg: OpenClawConfig): {
const agents = agentIds.map((id) => {
const meta = configuredById.get(id);
const model = resolveGatewayAgentModel(cfg, id);
const resolvedModel = resolveDefaultModelForAgent({ cfg, agentId: id });
return Object.assign(
{
id,
name: meta?.name,
identity: meta?.identity,
workspace: resolveAgentWorkspaceDir(cfg, id),
agentRuntime: resolveAgentRuntimeMetadata(cfg, id),
agentRuntime: resolveModelAgentRuntimeMetadata({
cfg,
agentId: id,
provider: resolvedModel.provider,
model: resolvedModel.model,
sessionKey: resolveAgentMainSessionKey({ cfg, agentId: id }),
}),
},
model ? { model } : {},
);
@@ -1711,7 +1718,6 @@ export function buildGatewaySessionRow(params: {
const latestCompactionCheckpoint = buildCompactionCheckpointPreview(
resolveLatestCompactionCheckpoint(entry),
);
const agentRuntime = resolveAgentRuntimeMetadata(cfg, sessionAgentId);
const selectedOrRuntimeModelProvider = selectedModel?.provider ?? modelProvider;
const selectedOrRuntimeModel = selectedModel?.model ?? model;
const rowModelIdentity = lightweight
@@ -1724,6 +1730,13 @@ export function buildGatewaySessionRow(params: {
});
const rowModelProvider = rowModelIdentity.provider;
const rowModel = rowModelIdentity.model;
const agentRuntime = resolveModelAgentRuntimeMetadata({
cfg,
agentId: sessionAgentId,
provider: rowModelProvider,
model: rowModel,
sessionKey: key,
});
const estimatedCostUsd = lightweight
? resolveNonNegativeNumber(entry?.estimatedCostUsd)
: (resolveEstimatedSessionCostUsd({

View File

@@ -14,7 +14,7 @@ export type GatewayAgentModel = {
export type GatewayAgentRuntime = {
id: string;
fallback?: "pi" | "none";
source: "env" | "agent" | "defaults" | "implicit";
source: "env" | "agent" | "defaults" | "model" | "provider" | "implicit";
};
export type GatewayAgentRow = {

View File

@@ -32,10 +32,7 @@ export function resolveAgentRuntimeLabel(args: {
return backend ? `${acpAgent} (acp/${backend})` : `${acpAgent} (acp)`;
}
const runtimeRaw =
normalizeOptionalString(args.resolvedHarness) ??
normalizeOptionalString(args.sessionEntry?.agentRuntimeOverride) ??
normalizeOptionalString(args.sessionEntry?.agentHarnessId);
const runtimeRaw = normalizeOptionalString(args.resolvedHarness);
const runtime = normalizeOptionalLowercaseString(runtimeRaw);
if (runtime && runtime !== "auto" && runtime !== "default") {
return AGENT_RUNTIME_LABELS[runtime] ?? sanitizeTerminalText(runtimeRaw ?? runtime);