Keep OpenAI Codex migrations on automatic runtime routing (#79238)

* fix: keep migrated openai codex routes automatic

* scope runtime policy to providers and models

* fix runtime policy surfaces

* fix ci runtime policy checks

* fix doctor stale session runtime pins
pashpashpash committed 2026-05-08 00:05:35 -07:00 (committed by GitHub)
parent b7aca7dc6e, commit 02fe0d8978
92 changed files with 1421 additions and 1264 deletions

View File

@@ -96,9 +96,6 @@ Computer Use available before a thread starts:
agents: {
defaults: {
model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
},
},
}
@@ -114,9 +111,8 @@ register the bundled Codex marketplace from
fails. If setup still cannot make the MCP server available, the turn fails
before the thread starts.
-Existing sessions keep their runtime and Codex thread binding. After changing
-`agentRuntime` or Computer Use config, use `/new` or `/reset` in the affected
-chat before testing.
+After changing Computer Use config, use `/new` or `/reset` in the affected chat
+before testing if an existing Codex thread has already started.
## Commands

View File

@@ -50,7 +50,8 @@ First sign in with Codex OAuth if you have not already:
openclaw models auth login --provider openai-codex
```
-Then enable the bundled `codex` plugin and force the Codex runtime:
+Then enable the bundled `codex` plugin and use the canonical OpenAI model ref.
+OpenAI agent turns select the Codex runtime by default:
```json5
{
@@ -64,9 +65,6 @@ Then enable the bundled `codex` plugin and force the Codex runtime:
agents: {
defaults: {
model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
},
},
}
@@ -98,7 +96,7 @@ The bundled `codex` plugin contributes several separate capabilities:
| Capability | How you use it | What it does |
| --------------------------------- | --------------------------------------------------- | ----------------------------------------------------------------------------- |
-| Native embedded runtime | `agentRuntime.id: "codex"` | Runs OpenClaw embedded agent turns through Codex app-server. |
+| Native embedded runtime | `openai/gpt-*` agent model refs | Runs OpenClaw embedded agent turns through Codex app-server. |
| Native chat-control commands | `/codex bind`, `/codex resume`, `/codex steer`, ... | Binds and controls Codex app-server threads from a messaging conversation. |
| Codex app-server provider/catalog | `codex` internals, surfaced through the harness | Lets the runtime discover and validate app-server models. |
| Codex media-understanding path | `codex/*` image-model compatibility paths | Runs bounded Codex app-server turns for supported image understanding models. |
@@ -110,7 +108,7 @@ Enabling the plugin makes those capabilities available. It does **not**:
realtime
- convert `openai-codex/*` model refs without `openclaw doctor --fix`
- make ACP/acpx the default Codex path
-- hot-switch existing sessions that already recorded a PI runtime
+- use stale whole-agent or session runtime pins for routing
- replace OpenClaw channel delivery, session files, auth-profile storage, or
message routing
@@ -141,35 +139,37 @@ For the plugin hook semantics themselves, see [Plugin hooks](/plugins/hooks)
and [Plugin guard behavior](/tools/plugin).
OpenAI agent model refs use the harness by default. New configs should keep
-OpenAI model refs canonical as `openai/gpt-*`; `agentRuntime.id: "codex"` is
-still valid but no longer required for OpenAI agent turns. Legacy `codex/*`
-model refs still auto-select the harness for compatibility, but
+OpenAI model refs canonical as `openai/gpt-*`; provider/model
+`agentRuntime.id: "codex"` is still valid but no longer required for OpenAI
+agent turns. Legacy `codex/*` model refs still auto-select the harness for
+compatibility, but
runtime-backed legacy provider prefixes are not shown as normal model/provider
choices.
If any configured model route is still `openai-codex/*`, `openclaw doctor --fix`
-rewrites it to `openai/*`. For matching agent routes, it sets the agent runtime
-to `codex` and preserves existing `openai-codex` auth profile overrides.
+rewrites it to `openai/*` and preserves existing `openai-codex` auth profile
+overrides. It does not pin the whole agent to `agentRuntime.id: "codex"` because
+canonical OpenAI refs already select the Codex harness automatically.
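For illustration, this is the shape `openclaw doctor --fix` leaves behind for a legacy `openai-codex/gpt-5.5` route; a sketch of the resulting config, not doctor's literal output:

```json5
{
  agents: {
    defaults: {
      // rewritten from "openai-codex/gpt-5.5"; existing openai-codex auth
      // profile overrides are preserved, and no agentRuntime pin is added
      model: "openai/gpt-5.5",
    },
  },
}
```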
## Route map
Use this table before changing config:
-| Desired behavior | Model ref | Runtime config | Auth/profile route | Expected status label |
-| ---------------------------------------------------- | -------------------------- | -------------------------------------- | ------------------------------ | ---------------------------- |
-| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-*` | omitted or `agentRuntime.id: "codex"` | Codex OAuth or Codex account | `Runtime: OpenAI Codex` |
-| OpenAI API-key auth for agent models | `openai/gpt-*` | omitted or `agentRuntime.id: "codex"` | `openai-codex` API-key profile | `Runtime: OpenAI Codex` |
-| Legacy config that needs doctor repair | `openai-codex/gpt-*` | repaired to `codex` | Existing configured auth | Recheck after `doctor --fix` |
-| Mixed providers with conservative auto mode | provider-specific refs | `agentRuntime.id: "auto"` | Per selected provider | Depends on selected runtime |
-| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | ACP backend auth | ACP task/session status |
+| Desired behavior | Model ref | Runtime config | Auth/profile route | Expected status label |
+| ---------------------------------------------------- | -------------------------- | -------------------------------------------------------- | ------------------------------ | ---------------------------- |
+| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-*` | omitted or provider/model `agentRuntime.id: "codex"` | Codex OAuth or Codex account | `Runtime: OpenAI Codex` |
+| OpenAI API-key auth for agent models | `openai/gpt-*` | omitted or provider/model `agentRuntime.id: "codex"` | `openai-codex` API-key profile | `Runtime: OpenAI Codex` |
+| Legacy config that needs doctor repair | `openai-codex/gpt-*` | preserved or automatic | Existing configured auth | Recheck after `doctor --fix` |
+| Mixed providers with conservative auto mode | provider-specific refs | omitted unless a provider/model needs a runtime override | Per selected provider | Depends on selected runtime |
+| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | ACP backend auth | ACP task/session status |
The important split is provider versus runtime:
- `openai-codex/*` is a legacy route that doctor rewrites.
-- `agentRuntime.id: "codex"` requires the Codex harness and fails closed if it
-  is unavailable.
-- `agentRuntime.id: "auto"` lets registered harnesses claim matching provider
-  routes; OpenAI agent refs resolve to Codex instead of PI.
+- Provider/model `agentRuntime.id: "codex"` requires the Codex harness and fails
+  closed if it is unavailable.
+- Provider/model `agentRuntime.id: "auto"` lets registered harnesses claim
+  matching provider routes; OpenAI agent refs resolve to Codex instead of PI.
- `/codex ...` answers "which native Codex conversation should this chat bind
or control?"
- ACP answers "which external harness process should acpx launch?"
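As a sketch of the two policy scopes named above (the model ids are illustrative), runtime policy can sit on a provider or on a single model entry:

```json5
{
  models: {
    providers: {
      openai: {
        // provider-scoped: every openai agent turn requires the Codex
        // harness and fails closed if it is unavailable
        agentRuntime: { id: "codex" },
      },
    },
  },
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-7": {
          // model-scoped: only this ref carries an explicit runtime policy
          agentRuntime: { id: "auto" },
        },
      },
    },
  },
}
```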
@@ -188,13 +188,14 @@ Treat `openai-codex/*` as legacy config that doctor should rewrite:
GPT-5.5 can appear on both direct OpenAI API-key and Codex subscription routes
when your account exposes them. Use `openai/gpt-5.5` with the Codex app-server
-harness for native Codex runtime, or `openai/gpt-5.5` without a Codex runtime
-override for direct API-key traffic.
+harness for native Codex runtime. For direct API-key traffic through PI, opt in
+with provider/model `agentRuntime.id: "pi"` and a normal `openai` auth profile.
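A sketch of that PI opt-in, assuming your config already has a normal `openai` API-key auth profile:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      models: {
        "openai/gpt-5.5": {
          // route this ref through PI instead of the default Codex harness
          agentRuntime: { id: "pi" },
        },
      },
    },
  },
}
```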
Legacy `codex/gpt-*` refs remain accepted as compatibility aliases. Doctor
compatibility migration rewrites legacy runtime refs to canonical model refs
and records the runtime policy separately. New native app-server harness configs
-should use `openai/gpt-*` plus `agentRuntime.id: "codex"`.
+should use `openai/gpt-*`; explicit provider/model `agentRuntime.id: "codex"`
+is only needed when you want the policy written down.
`agents.defaults.imageModel` follows the same prefix split. Use
`openai/gpt-*` for the normal OpenAI route and `codex/gpt-*` when image
@@ -213,27 +214,13 @@ in `auto` mode, each plugin candidate's support result.
`openclaw doctor` warns when configured model refs or persisted session route
state still use `openai-codex/*`. `openclaw doctor --fix` rewrites those routes
-to:
+to `openai/<model>`. Canonical OpenAI agent refs already select the native Codex
+harness, so doctor does not pin the whole agent to Codex.
-- `openai/<model>`
-- `agentRuntime.id: "codex"`
-The `codex` route forces the native Codex harness. PI runtime config is not
-allowed for OpenAI agent model turns.
-Doctor also repairs stale persisted session pins across discovered agent session
-stores so old conversations do not stay wedged on the removed route.
-Harness selection is not a live session control. When an embedded turn runs,
-OpenClaw records the selected harness id on that session and keeps using it for
-later turns in the same session id. Change `agentRuntime` config or
-`OPENCLAW_AGENT_RUNTIME` when you want future sessions to use another harness;
-use `/new` or `/reset` to start a fresh session before switching an existing
-conversation between PI and Codex. This avoids replaying one transcript through
-two incompatible native session systems.
-Legacy sessions created before harness pins are treated as PI-pinned once they
-have transcript history. Use `/new` or `/reset` to opt that conversation into
-Codex after changing config.
+Whole-session and whole-agent runtime pins are legacy state. Runtime selection
+now comes from provider/model policy; `openclaw doctor --fix` removes stale
+session pins and old whole-agent runtime config so they do not mask the selected
+provider/model route.
`/status` shows the effective model runtime. The default PI harness appears as
`Runtime: OpenClaw Pi Default`, and the Codex app-server harness appears as
@@ -274,22 +261,21 @@ Codex behavior-shaping lane without duplicating `AGENTS.md`.
## Add Codex alongside other models
-Do not set `agentRuntime.id: "codex"` globally if the same agent should freely switch
-between Codex and non-Codex provider models. A forced runtime applies to every
-embedded turn for that agent or session. If you select an Anthropic model while
-that runtime is forced, OpenClaw still tries the Codex harness and fails closed
-instead of silently routing that turn through PI.
+Do not set a whole-agent runtime. Whole-agent runtime pins are legacy and
+ignored, and they were the source of mixed-provider traps after upgrades. Keep
+runtime policy on the provider or model that needs it.
Use one of these shapes instead:
-- Put Codex on a dedicated agent with `agentRuntime.id: "codex"`.
-- Keep the default agent on `agentRuntime.id: "auto"` and PI fallback for normal mixed
-  provider usage.
+- Use `openai/gpt-*` for OpenAI agent turns; Codex is selected by default.
+- Put runtime overrides on `models.providers.<provider>.agentRuntime` or on a
+  model entry such as `agents.defaults.models["anthropic/claude-opus-4-7"].agentRuntime`.
- Use legacy `codex/*` refs only for compatibility. New configs should prefer
-  `openai/*` plus an explicit Codex runtime policy.
+  `openai/*`; add an explicit Codex runtime policy only when you need to make
+  the provider/model rule strict.
-For example, this keeps the default agent on normal automatic selection and
-adds a separate Codex agent:
+For example, this keeps mixed-provider routing ergonomic while using OpenAI
+through Codex by default and Claude through PI:
```json5
{
@@ -302,9 +288,7 @@ adds a separate Codex agent:
},
agents: {
defaults: {
-      agentRuntime: {
-        id: "auto",
-      },
model: "anthropic/claude-opus-4-6",
},
list: [
{
@@ -316,9 +300,6 @@ adds a separate Codex agent:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
},
],
},
@@ -355,45 +336,36 @@ routing.
## Codex-only deployments
-Force the Codex harness when you need to prove that every embedded agent turn
-uses Codex. Explicit plugin runtimes fail closed and are never silently retried
-through PI:
+For OpenAI agent turns, `openai/gpt-*` already resolves to Codex. If you need a
+strict written policy, put it on the OpenAI provider or model. Explicit plugin
+runtimes fail closed and are never silently retried through PI:
```json5
{
-  agents: {
-    defaults: {
-      model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
+  models: {
+    providers: {
+      openai: {
+        agentRuntime: {
+          id: "codex",
+        },
+      },
+    },
+  },
+  agents: { defaults: { model: "openai/gpt-5.5" } },
}
```
-Environment override:
-```bash
-OPENCLAW_AGENT_RUNTIME=codex openclaw gateway run
-```
With Codex forced, OpenClaw fails early if the Codex plugin is disabled, the
app-server is too old, or the app-server cannot start.
## Per-agent Codex
-You can make one agent Codex-only while the default agent keeps normal
-auto-selection:
+You can make one agent Codex-strict while the default agent keeps normal
+selection by using a per-agent model runtime override:
```json5
{
agents: {
defaults: {
-      agentRuntime: {
-        id: "auto",
-      },
},
list: [
{
id: "main",
@@ -404,8 +376,12 @@ auto-selection:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
+      models: {
+        "openai/gpt-5.5": {
+          agentRuntime: {
+            id: "codex",
+          },
+        },
+      },
},
],
@@ -827,9 +803,6 @@ Minimal config:
agents: {
defaults: {
model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
},
},
}
@@ -876,12 +849,18 @@ Codex-only harness validation:
```json5
{
models: {
+    providers: {
+      openai: {
+        agentRuntime: {
+          id: "codex",
+        },
+      },
+    },
},
agents: {
defaults: {
model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
},
},
plugins: {
@@ -1185,16 +1164,16 @@ understanding continue to use the matching provider/model settings such as
## Troubleshooting
**Codex does not appear as a normal `/model` provider:** that is expected for
-new configs. Select an `openai/gpt-*` model with
-`agentRuntime.id: "codex"` (or a legacy `codex/*` ref), enable
+new configs. Select an `openai/gpt-*` model, enable
`plugins.entries.codex.enabled`, and check whether `plugins.allow` excludes
-`codex`.
+`codex`. Legacy `codex/*` refs remain compatibility aliases, not normal model
+provider choices.
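The checks above correspond to config like this (a sketch; the `allow` entry only matters if your config sets an allowlist at all):

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
      },
    },
    // if present, the allowlist must not exclude "codex"
    allow: ["codex"],
  },
}
```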
-**OpenClaw uses PI instead of Codex:** `agentRuntime.id: "auto"` can still use PI as the
-compatibility backend when no Codex harness claims the run. Set
-`agentRuntime.id: "codex"` to force Codex selection while testing. A
-forced Codex runtime fails instead of falling back to PI. Once Codex app-server
-is selected, its failures surface directly.
+**OpenClaw uses PI instead of Codex:** make sure the model ref is `openai/gpt-*`
+on the official OpenAI provider and that the Codex plugin is installed/enabled.
+If you need a strict policy while testing, set provider/model
+`agentRuntime.id: "codex"`. A forced Codex runtime fails instead of falling back
+to PI. Once Codex app-server is selected, its failures surface directly.
**The app-server is rejected:** upgrade Codex so the app-server handshake
reports version `0.125.0` or newer. Same-version prereleases or build-suffixed
@@ -1207,11 +1186,11 @@ or disable discovery.
**WebSocket transport fails immediately:** check `appServer.url`, `authToken`,
and that the remote app-server speaks the same Codex app-server protocol version.
-**A non-Codex model uses PI:** that is expected unless you forced
-`agentRuntime.id: "codex"` for that agent or selected a legacy
-`codex/*` ref. Plain `openai/gpt-*` and other provider refs stay on their normal
-provider path in `auto` mode. If you force `agentRuntime.id: "codex"`, every embedded
-turn for that agent must be a Codex-supported OpenAI model.
+**A non-Codex model uses PI:** that is expected unless provider/model runtime
+policy routes it to another harness. Plain non-OpenAI provider refs stay on
+their normal provider path in `auto` mode. If you force
+`agentRuntime.id: "codex"` on a provider or model, matching embedded turns must
+be Codex-supported OpenAI models.
**Computer Use is installed but tools do not run:** check
`/codex computer-use status` from a fresh session. If a tool reports

View File

@@ -103,14 +103,11 @@ export default definePluginEntry({
OpenClaw chooses a harness after provider/model resolution:
-1. An existing session's recorded harness id wins, so config/env changes do not
-   hot-switch that transcript to another runtime.
-2. `OPENCLAW_AGENT_RUNTIME=<id>` forces a registered harness with that id for
-   sessions that are not already pinned.
-3. `OPENCLAW_AGENT_RUNTIME=pi` forces the built-in PI harness.
-4. `OPENCLAW_AGENT_RUNTIME=auto` asks registered harnesses if they support the
-   resolved provider/model.
-5. If no registered harness matches, OpenClaw uses PI unless PI fallback is
+1. Model-scoped runtime policy wins.
+2. Provider-scoped runtime policy comes next.
+3. `auto` asks registered harnesses if they support the resolved
+   provider/model.
+4. If no registered harness matches, OpenClaw uses PI unless PI fallback is
disabled.
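The ordering above means a model-scoped policy beats a provider-scoped one; a sketch with illustrative refs:

```json
{
  "models": {
    "providers": {
      "openai": {
        "agentRuntime": { "id": "codex" }
      }
    }
  },
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5.5": {
          "agentRuntime": { "id": "pi" }
        }
      }
    }
  }
}
```

Under this config, steps 1 and 2 would route `openai/gpt-5.5` through PI while other `openai/*` refs go to Codex.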
Plugin harness failures surface as run failures. In `auto` mode, PI fallback is
@@ -119,11 +116,10 @@ provider/model. Once a plugin harness has claimed a run, OpenClaw does not
replay that same turn through PI because that can change auth/runtime semantics
or duplicate side effects.
The selected harness id is persisted with the session id after an embedded run.
-Legacy sessions created before harness pins are treated as PI-pinned once they
-have transcript history. Use a new/reset session when changing between PI and a
-native plugin harness. `/status` shows non-default harness ids such as `codex`
-next to `Fast`; PI stays hidden because it is the default compatibility path.
+Whole-session and whole-agent runtime pins are ignored by selection. That
+includes stale session `agentHarnessId` values, `agents.defaults.agentRuntime`,
+`agents.list[].agentRuntime`, and `OPENCLAW_AGENT_RUNTIME`. `/status` shows the
+effective runtime selected from the provider/model route.
If the selected harness is surprising, enable `agents/harness` debug logging and
inspect the gateway's structured `agent harness selected` record. It includes
the selected harness id, selection reason, runtime/fallback policy, and, in
@@ -141,8 +137,7 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
The bundled Codex plugin follows this pattern:
-- preferred user model refs: `openai/gpt-5.5` plus
-  `agentRuntime.id: "codex"`
+- preferred user model refs: `openai/gpt-5.5`
- compatibility refs: legacy `codex/gpt-*` refs remain accepted, but new
configs should not use them as normal provider/model refs
- harness id: `codex`
@@ -151,10 +146,9 @@ The bundled Codex plugin follows this pattern:
- app-server request: OpenClaw sends the bare model id to Codex and lets the
harness talk to the native app-server protocol
-The Codex plugin is additive. Plain `openai/gpt-*` refs continue to use the
-normal OpenClaw provider path unless you force the Codex harness with
-`agentRuntime.id: "codex"`. Older `codex/gpt-*` refs still select the
-Codex provider and harness for compatibility.
+The Codex plugin is additive. Plain `openai/gpt-*` agent refs on the official
+OpenAI provider select the Codex harness by default. Older `codex/gpt-*` refs
+still select the Codex provider and harness for compatibility.
For operator setup, model prefix examples, and Codex-only configs, see
[Codex Harness](/plugins/codex-harness).
@@ -202,74 +196,94 @@ aliases for the native harness.
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
visible transcript mirror, tool policy, approvals, media delivery, and session
-selection. Use `agentRuntime.id: "codex"` when you need to prove that only the
-Codex app-server path can claim the run. Explicit plugin runtimes fail closed;
-Codex app-server selection failures and runtime failures are not retried through
-PI.
+selection. Use provider/model `agentRuntime.id: "codex"` when you need to prove
+that only the Codex app-server path can claim the run. Explicit plugin runtimes
+fail closed; Codex app-server selection failures and runtime failures are not
+retried through PI.
## Runtime strictness
-By default, OpenClaw runs embedded agents with OpenClaw Pi. In `auto` mode,
-registered plugin harnesses can claim a provider/model pair, and PI handles the
-turn when none match. Use an explicit plugin runtime such as
+By default, OpenClaw uses `auto` provider/model runtime policy: registered
+plugin harnesses can claim a provider/model pair, and PI handles the turn when
+none match. OpenAI agent refs on the official OpenAI provider default to Codex.
+Use an explicit provider/model plugin runtime such as
`agentRuntime.id: "codex"` when missing harness selection should fail instead
of routing through PI. Selected plugin harness failures always fail hard. This
-does not block an explicit `agentRuntime.id: "pi"` or
-`OPENCLAW_AGENT_RUNTIME=pi`.
+does not block an explicit provider/model `agentRuntime.id: "pi"`.
For Codex-only embedded runs:
```json
{
"models": {
"providers": {
"openai": {
"agentRuntime": {
"id": "codex"
}
}
}
},
"agents": {
"defaults": {
"model": "openai/gpt-5.5",
"agentRuntime": {
"id": "codex"
"model": "openai/gpt-5.5"
}
}
}
```
+If you want a CLI backend for one canonical model, put the runtime on that
+model entry:
+```json
+{
+  "agents": {
+    "defaults": {
+      "model": "anthropic/claude-opus-4-7",
+      "models": {
+        "anthropic/claude-opus-4-7": {
+          "agentRuntime": {
+            "id": "claude-cli"
+          }
+        }
+      }
+    }
+  }
+}
+```
-If you want any registered plugin harness to claim matching models and otherwise
-use PI, set `id: "auto"`:
+Per-agent overrides use the same model-scoped shape:
-```json
-{
-  "agents": {
-    "defaults": {
-      "agentRuntime": {
-        "id": "auto"
-      }
-    }
-  }
-}
-```
-Per-agent overrides use the same shape:
```json
{
"agents": {
"defaults": {
"agentRuntime": { "id": "auto" }
},
"list": [
{
"id": "codex-only",
"model": "openai/gpt-5.5",
"agentRuntime": { "id": "codex" }
"models": {
"openai/gpt-5.5": {
"agentRuntime": { "id": "codex" }
}
}
}
]
}
}
```
-`OPENCLAW_AGENT_RUNTIME` still overrides the configured runtime.
+Legacy whole-agent runtime examples like this are ignored:
-```bash
-OPENCLAW_AGENT_RUNTIME=codex openclaw gateway run
+```json
+{
+  "agents": {
+    "defaults": {
+      "agentRuntime": {
+        "id": "codex"
+      }
+    }
+  }
+}
```
With an explicit plugin runtime, a session fails early when the requested