fix: migrate agent runtime config

Author: Peter Steinberger
Date: 2026-04-26 07:58:48 +01:00
Parent: 9d6e79019f
Commit: 5b9be2cdb1
61 changed files with 873 additions and 335 deletions

View File

@@ -26,7 +26,7 @@ The bundled `codex` plugin contributes several separate capabilities:
| Capability | How you use it | What it does |
| --------------------------------- | --------------------------------------------------- | ----------------------------------------------------------------------------- |
-| Native embedded runtime | `embeddedHarness.runtime: "codex"` | Runs OpenClaw embedded agent turns through Codex app-server. |
+| Native embedded runtime | `agentRuntime.id: "codex"` | Runs OpenClaw embedded agent turns through Codex app-server. |
| Native chat-control commands | `/codex bind`, `/codex resume`, `/codex steer`, ... | Binds and controls Codex app-server threads from a messaging conversation. |
| Codex app-server provider/catalog | `codex` internals, surfaced through the harness | Lets the runtime discover and validate app-server models. |
| Codex media-understanding path | `codex/*` image-model compatibility paths | Runs bounded Codex app-server turns for supported image understanding models. |
@@ -69,7 +69,7 @@ and [Plugin guard behavior](/tools/plugin).
The harness is off by default. New configs should keep OpenAI model refs
canonical as `openai/gpt-*` and explicitly force
-`embeddedHarness.runtime: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex` when they
+`agentRuntime.id: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex` when they
want native app-server execution. Legacy `codex/*` model refs still auto-select
the harness for compatibility, but runtime-backed legacy provider prefixes are
not shown as normal model/provider choices.
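For example, a minimal sketch of the new config shape (the `agents.defaults` layout is assumed from the examples later on this page; `OPENCLAW_AGENT_RUNTIME=codex` is the equivalent environment override):

```
{
  agents: {
    defaults: {
      // Keep the OpenAI model ref canonical; only the runtime policy
      // forces native Codex app-server execution.
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
      },
    },
  },
}
```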
@@ -87,14 +87,14 @@ Use this table before changing config:
| ------------------------------------------- | -------------------------- | -------------------------------------- | --------------------------- | ------------------------------ |
| OpenAI API through normal OpenClaw runner | `openai/gpt-*` | omitted or `runtime: "pi"` | OpenAI provider | `Runtime: OpenClaw Pi Default` |
| Codex OAuth/subscription through PI | `openai-codex/gpt-*` | omitted or `runtime: "pi"` | OpenAI Codex OAuth provider | `Runtime: OpenClaw Pi Default` |
-| Native Codex app-server embedded turns | `openai/gpt-*` | `embeddedHarness.runtime: "codex"` | `codex` plugin | `Runtime: OpenAI Codex` |
-| Mixed providers with conservative auto mode | provider-specific refs | `runtime: "auto", fallback: "pi"` | Optional plugin runtimes | Depends on selected runtime |
+| Native Codex app-server embedded turns | `openai/gpt-*` | `agentRuntime.id: "codex"` | `codex` plugin | `Runtime: OpenAI Codex` |
+| Mixed providers with conservative auto mode | provider-specific refs | `agentRuntime.id: "auto"` | Optional plugin runtimes | Depends on selected runtime |
| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | healthy `acpx` backend | ACP task/session status |
The important split is provider versus runtime:
- `openai-codex/*` answers "which provider/auth route should PI use?"
-- `embeddedHarness.runtime: "codex"` answers "which loop should execute this
+- `agentRuntime.id: "codex"` answers "which loop should execute this
embedded turn?"
- `/codex ...` answers "which native Codex conversation should this chat bind
or control?"
@@ -106,11 +106,11 @@ OpenAI-family routes are prefix-specific. Use `openai-codex/*` when you want
Codex OAuth through PI; use `openai/*` when you want direct OpenAI API access or
when you are forcing the native Codex app-server harness:
-| Model ref | Runtime path | Use when |
-| ----------------------------------------------------- | -------------------------------------------- | ------------------------------------------------------------------------- |
-| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
-| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through OpenClaw/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
-| `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
+| Model ref | Runtime path | Use when |
+| --------------------------------------------- | -------------------------------------------- | ------------------------------------------------------------------------- |
+| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
+| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through OpenClaw/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
+| `openai/gpt-5.5` + `agentRuntime.id: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
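A sketch contrasting the two routes, using the model refs from the table above (the surrounding `agents.defaults` shape is assumed from the examples later on this page):

```
// Provider/auth choice: Codex OAuth through the default PI runner.
{
  agents: {
    defaults: {
      model: "openai-codex/gpt-5.5",
    },
  },
}

// Runtime choice: native Codex app-server execution for the embedded turn.
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
      },
    },
  },
}
```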
GPT-5.5 is currently subscription/OAuth-only in OpenClaw. Use
`openai-codex/gpt-5.5` for PI OAuth, or `openai/gpt-5.5` with the Codex
@@ -123,7 +123,7 @@ refs and records the runtime policy separately, while fallback-only legacy refs
are left unchanged because runtime is configured for the whole agent container.
New PI Codex OAuth configs should use `openai-codex/gpt-*`; new native
app-server harness configs should use `openai/gpt-*` plus
-`embeddedHarness.runtime: "codex"`.
+`agentRuntime.id: "codex"`.
`agents.defaults.imageModel` follows the same prefix split. Use
`openai-codex/gpt-*` when image understanding should run through the OpenAI
@@ -152,14 +152,14 @@ means:
- **No change is required** if you intended ChatGPT/Codex OAuth through PI.
- Change the model to `openai/<model>` and set
-`embeddedHarness.runtime: "codex"` if you intended native app-server
+`agentRuntime.id: "codex"` if you intended native app-server
execution.
- Existing sessions still need `/new` or `/reset` after a runtime change,
because session runtime pins are sticky.
Harness selection is not a live session control. When an embedded turn runs,
OpenClaw records the selected harness id on that session and keeps using it for
-later turns in the same session id. Change `embeddedHarness` config or
+later turns in the same session id. Change `agentRuntime` config or
`OPENCLAW_AGENT_RUNTIME` when you want future sessions to use another harness;
use `/new` or `/reset` to start a fresh session before switching an existing
conversation between PI and Codex. This avoids replaying one transcript through
@@ -205,8 +205,8 @@ Use `openai/gpt-5.5`, enable the bundled plugin, and force the `codex` harness:
agents: {
defaults: {
model: "openai/gpt-5.5",
-embeddedHarness: {
-runtime: "codex",
+agentRuntime: {
+id: "codex",
},
},
},
@@ -230,11 +230,11 @@ If your config uses `plugins.allow`, include `codex` there too:
Legacy configs that set `agents.defaults.model` or an agent model to
`codex/<model>` still auto-enable the bundled `codex` plugin. New configs should
-prefer `openai/<model>` plus the explicit `embeddedHarness` entry above.
+prefer `openai/<model>` plus the explicit `agentRuntime` entry above.
## Add Codex alongside other models
-Do not set `runtime: "codex"` globally if the same agent should freely switch
+Do not set `agentRuntime.id: "codex"` globally if the same agent should freely switch
between Codex and non-Codex provider models. A forced runtime applies to every
embedded turn for that agent or session. If you select an Anthropic model while
that runtime is forced, OpenClaw still tries the Codex harness and fails closed
@@ -242,8 +242,8 @@ instead of silently routing that turn through PI.
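A hypothetical sketch of that misconfiguration (the Anthropic ref is a placeholder, not a tested model id; the config shape follows the examples on this page):

```
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      // Forced globally: every embedded turn for this agent must be a
      // Codex-supported OpenAI model.
      agentRuntime: {
        id: "codex",
      },
      // Selecting e.g. "anthropic/<model>" on this agent now tries the Codex
      // harness and fails closed instead of routing that turn through PI.
    },
  },
}
```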
Use one of these shapes instead:
-- Put Codex on a dedicated agent with `embeddedHarness.runtime: "codex"`.
-- Keep the default agent on `runtime: "auto"` and PI fallback for normal mixed
+- Put Codex on a dedicated agent with `agentRuntime.id: "codex"`.
+- Keep the default agent on `agentRuntime.id: "auto"` and PI fallback for normal mixed
provider usage.
- Use legacy `codex/*` refs only for compatibility. New configs should prefer
`openai/*` plus an explicit Codex runtime policy.
@@ -262,8 +262,8 @@ adds a separate Codex agent:
},
agents: {
defaults: {
-embeddedHarness: {
-runtime: "auto",
+agentRuntime: {
+id: "auto",
fallback: "pi",
},
},
@@ -277,8 +277,8 @@ adds a separate Codex agent:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
-embeddedHarness: {
-runtime: "codex",
+agentRuntime: {
+id: "codex",
},
},
],
@@ -302,7 +302,7 @@ Agents should route user requests by intent, not by the word "Codex" alone:
| "Bind this chat to Codex" | `/codex bind` |
| "Resume Codex thread `<id>` here" | `/codex resume <id>` |
| "Show Codex threads" | `/codex threads` |
| "Use Codex as the runtime for this agent" | config change to `embeddedHarness.runtime` |
| "Use Codex as the runtime for this agent" | config change to `agentRuntime.id` |
| "Use my ChatGPT/Codex subscription with normal OpenClaw" | `openai-codex/*` model refs |
| "Run Codex through ACP/acpx" | ACP `sessions_spawn({ runtime: "acp", ... })` |
| "Start Claude Code/Gemini/OpenCode/Cursor in a thread" | ACP/acpx, not `/codex` and not native sub-agents |
@@ -323,8 +323,8 @@ uses Codex. Explicit plugin runtimes default to no PI fallback, so
agents: {
defaults: {
model: "openai/gpt-5.5",
-embeddedHarness: {
-runtime: "codex",
+agentRuntime: {
+id: "codex",
fallback: "none",
},
},
@@ -352,8 +352,8 @@ auto-selection:
{
agents: {
defaults: {
-embeddedHarness: {
-runtime: "auto",
+agentRuntime: {
+id: "auto",
fallback: "pi",
},
},
@@ -367,8 +367,8 @@ auto-selection:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
-embeddedHarness: {
-runtime: "codex",
+agentRuntime: {
+id: "codex",
fallback: "none",
},
},
@@ -565,8 +565,8 @@ Codex-only harness validation:
agents: {
defaults: {
model: "openai/gpt-5.5",
-embeddedHarness: {
-runtime: "codex",
+agentRuntime: {
+id: "codex",
},
},
},
@@ -772,15 +772,15 @@ understanding continue to use the matching provider/model settings such as
**Codex does not appear as a normal `/model` provider:** that is expected for
new configs. Select an `openai/gpt-*` model with
-`embeddedHarness.runtime: "codex"` (or a legacy `codex/*` ref), enable
+`agentRuntime.id: "codex"` (or a legacy `codex/*` ref), enable
`plugins.entries.codex.enabled`, and check whether `plugins.allow` excludes
`codex`.
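A sketch of that checklist as config, assuming `plugins.allow` is a plain list and `plugins.entries.codex.enabled` nests as the dotted path suggests:

```
{
  plugins: {
    // Only needed when a restrictive allowlist is in use.
    allow: ["codex"],
    entries: {
      codex: { enabled: true },
    },
  },
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
      },
    },
  },
}
```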
-**OpenClaw uses PI instead of Codex:** `runtime: "auto"` can still use PI as the
+**OpenClaw uses PI instead of Codex:** `agentRuntime.id: "auto"` can still use PI as the
compatibility backend when no Codex harness claims the run. Set
-`embeddedHarness.runtime: "codex"` to force Codex selection while testing. A
+`agentRuntime.id: "codex"` to force Codex selection while testing. A
forced Codex runtime now fails instead of falling back to PI unless you
-explicitly set `embeddedHarness.fallback: "pi"`. Once Codex app-server is
+explicitly set `agentRuntime.fallback: "pi"`. Once Codex app-server is
selected, its failures surface directly without extra fallback config.
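A minimal sketch for testing, using only the keys named above:

```
{
  agents: {
    defaults: {
      agentRuntime: {
        // Force Codex selection while testing; a forced Codex runtime fails
        // instead of falling back to PI unless fallback is set explicitly.
        id: "codex",
        // fallback: "pi",
      },
    },
  },
}
```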
**The app-server is rejected:** upgrade Codex so the app-server handshake
@@ -795,9 +795,9 @@ or disable discovery.
and that the remote app-server speaks the same Codex app-server protocol version.
**A non-Codex model uses PI:** that is expected unless you forced
-`embeddedHarness.runtime: "codex"` for that agent or selected a legacy
+`agentRuntime.id: "codex"` for that agent or selected a legacy
`codex/*` ref. Plain `openai/gpt-*` and other provider refs stay on their normal
-provider path in `auto` mode. If you force `runtime: "codex"`, every embedded
+provider path in `auto` mode. If you force `agentRuntime.id: "codex"`, every embedded
turn for that agent must be a Codex-supported OpenAI model.
## Related

View File

@@ -82,8 +82,8 @@ Current compatibility records include:
- bundled plugin allowlist and enablement behavior
- legacy provider/channel env-var manifest metadata
- activation hints that are being replaced by manifest contribution ownership
-- `embeddedHarness` and `agent-harness` naming aliases while public naming moves
-toward `agentRuntime`
+- legacy runtime-policy config keys while doctor migrates operators to
+`agentRuntime`
- generated bundled channel config metadata fallback while registry-first
`channelConfigs` metadata lands
- the persisted plugin registry disable env while repair flows migrate operators

View File

@@ -142,7 +142,7 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
The bundled Codex plugin follows this pattern:
- preferred user model refs: `openai/gpt-5.5` plus
-`embeddedHarness.runtime: "codex"`
+`agentRuntime.id: "codex"`
- compatibility refs: legacy `codex/gpt-*` refs remain accepted, but new
configs should not use them as normal provider/model refs
- harness id: `codex`
@@ -153,7 +153,7 @@ The bundled Codex plugin follows this pattern:
The Codex plugin is additive. Plain `openai/gpt-*` refs continue to use the
normal OpenClaw provider path unless you force the Codex harness with
-`embeddedHarness.runtime: "codex"`. Older `codex/gpt-*` refs still select the
+`agentRuntime.id: "codex"`. Older `codex/gpt-*` refs still select the
Codex provider and harness for compatibility.
For operator setup, model prefix examples, and Codex-only configs, see
@@ -194,14 +194,14 @@ intentional silent replies such as `NO_REPLY` unclassified.
The bundled `codex` harness is the native Codex mode for embedded OpenClaw
agent turns. Enable the bundled `codex` plugin first, and include `codex` in
`plugins.allow` if your config uses a restrictive allowlist. Native app-server
-configs should use `openai/gpt-*` with `embeddedHarness.runtime: "codex"`.
+configs should use `openai/gpt-*` with `agentRuntime.id: "codex"`.
Use `openai-codex/*` for Codex OAuth through PI instead. Legacy `codex/*`
model refs remain compatibility aliases for the native harness.
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
visible transcript mirror, tool policy, approvals, media delivery, and session
-selection. Use `embeddedHarness.runtime: "codex"` without a `fallback` override
+selection. Use `agentRuntime.id: "codex"` without a `fallback` override
when you need to prove that only the Codex app-server path can claim the run.
Explicit plugin runtimes already fail closed by default. Set `fallback: "pi"`
only when you intentionally want PI to handle missing harness selection. Codex
@@ -209,8 +209,8 @@ app-server failures already fail directly instead of retrying through PI.
## Disable PI fallback
-By default, OpenClaw runs embedded agents with `agents.defaults.embeddedHarness`
-set to `{ runtime: "auto", fallback: "pi" }`. In `auto` mode, registered plugin
+By default, OpenClaw runs embedded agents with `agents.defaults.agentRuntime`
+set to `{ id: "auto", fallback: "pi" }`. In `auto` mode, registered plugin
harnesses can claim a provider/model pair. If none match, OpenClaw falls back
to PI.
@@ -228,8 +228,8 @@ For Codex-only embedded runs:
"agents": {
"defaults": {
"model": "openai/gpt-5.5",
"embeddedHarness": {
"runtime": "codex"
"agentRuntime": {
"id": "codex"
}
}
}
@@ -244,8 +244,8 @@ the fallback:
{
"agents": {
"defaults": {
"embeddedHarness": {
"runtime": "auto",
"agentRuntime": {
"id": "auto",
"fallback": "none"
}
}
@@ -259,8 +259,8 @@ Per-agent overrides use the same shape:
{
"agents": {
"defaults": {
"embeddedHarness": {
"runtime": "auto",
"agentRuntime": {
"id": "auto",
"fallback": "pi"
}
},
@@ -268,8 +268,8 @@ Per-agent overrides use the same shape:
{
"id": "codex-only",
"model": "openai/gpt-5.5",
"embeddedHarness": {
"runtime": "codex",
"agentRuntime": {
"id": "codex",
"fallback": "none"
}
}