fix: migrate agent runtime config

This commit is contained in:
Peter Steinberger
2026-04-26 07:58:48 +01:00
parent 9d6e79019f
commit 5b9be2cdb1
61 changed files with 873 additions and 335 deletions


@@ -18,14 +18,24 @@ configuration. They are different layers:
| ------------- | ------------------------------------- | ------------------------------------------------------------------- |
| Provider | `openai`, `anthropic`, `openai-codex` | How OpenClaw authenticates, discovers models, and names model refs. |
| Model | `gpt-5.5`, `claude-opus-4-6` | The model selected for the agent turn. |
| Agent runtime | `pi`, `codex`, ACP-backed runtimes | The low-level loop that executes the prepared turn. |
| Agent runtime | `pi`, `codex`, `claude-cli` | The low-level loop or backend that executes the prepared turn. |
| Channel | Telegram, Discord, Slack, WhatsApp | Where messages enter and leave OpenClaw. |
You will also see the word **harness** in code and config. A harness is the
implementation that provides an agent runtime. For example, the bundled Codex
harness implements the `codex` runtime. The config key is still named
`embeddedHarness` for compatibility, but user-facing docs and status output
should generally say runtime.
You will also see the word **harness** in code. A harness is the implementation
that provides an agent runtime. For example, the bundled Codex harness
implements the `codex` runtime. Public config uses `agentRuntime.id`; `openclaw
doctor --fix` rewrites older runtime-policy keys to that shape.
There are two runtime families:
- **Embedded harnesses** run inside OpenClaw's prepared agent loop. Today this
is the built-in `pi` runtime plus registered plugin harnesses such as
`codex`.
- **CLI backends** run a local CLI process while keeping the model ref
canonical. For example, `anthropic/claude-opus-4-7` with
`agentRuntime.id: "claude-cli"` means "select the Anthropic model, execute
through Claude CLI." `claude-cli` is not an embedded harness id and must not
be passed to AgentHarness selection.
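
To make the split concrete, here is a minimal sketch with one agent per family. Only `model` and `agentRuntime` are taken from this page; the `name` field and the exact list-entry shape are assumptions for illustration.

```json5
{
  agents: {
    list: [
      {
        // Embedded harness: the turn runs inside OpenClaw's prepared agent loop.
        name: "codex-embedded", // hypothetical agent name
        model: "openai/gpt-5.5",
        agentRuntime: { id: "codex" },
      },
      {
        // CLI backend: the model ref stays canonical; a local CLI executes the turn.
        name: "claude-backend", // hypothetical agent name
        model: "anthropic/claude-opus-4-7",
        agentRuntime: { id: "claude-cli" },
      },
    ],
  },
}
```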
## Three things named Codex
@@ -34,7 +44,7 @@ Most confusion comes from three different surfaces sharing the Codex name:
| Surface | OpenClaw name/config | What it does |
| ---------------------------------------------------- | ------------------------------------ | --------------------------------------------------------------------------------------------------- |
| Codex OAuth provider route | `openai-codex/*` model refs | Uses ChatGPT/Codex subscription OAuth through the normal OpenClaw PI runner. |
| Native Codex app-server runtime | `embeddedHarness.runtime: "codex"` | Runs the embedded agent turn through the bundled Codex app-server harness. |
| Native Codex app-server runtime | `agentRuntime.id: "codex"` | Runs the embedded agent turn through the bundled Codex app-server harness. |
| Codex ACP adapter | `runtime: "acp"`, `agentId: "codex"` | Runs Codex through the external ACP/acpx control plane. Use only when ACP/acpx is explicitly requested. |
| Native Codex chat-control command set | `/codex ...` | Binds, resumes, steers, stops, and inspects Codex app-server threads from chat. |
| OpenAI Platform API route for GPT/Codex-style models | `openai/*` model refs | Uses OpenAI API-key auth unless a runtime override, such as `runtime: "codex"`, runs the turn. |
@@ -52,8 +62,8 @@ The common Codex setup uses the `openai` provider with the `codex` runtime:
agents: {
defaults: {
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
},
},
},
@@ -76,7 +86,7 @@ This is the agent-facing decision tree:
1. If the user asks for **Codex bind/control/thread/resume/steer/stop**, use the
native `/codex` command surface when the bundled `codex` plugin is enabled.
2. If the user asks for **Codex as the embedded runtime**, use
`openai/<model>` with `embeddedHarness.runtime: "codex"`.
`openai/<model>` with `agentRuntime.id: "codex"`.
3. If the user asks for **Codex OAuth/subscription auth on the normal OpenClaw
runner**, use `openai-codex/<model>` and leave the runtime as PI.
4. If the user explicitly says **ACP**, **acpx**, or **Codex ACP adapter**, use
@@ -87,7 +97,7 @@ This is the agent-facing decision tree:
| You mean... | Use... |
| --------------------------------------- | -------------------------------------------- |
| Codex app-server chat/thread control | `/codex ...` from the bundled `codex` plugin |
| Codex app-server embedded agent runtime | `embeddedHarness.runtime: "codex"` |
| Codex app-server embedded agent runtime | `agentRuntime.id: "codex"` |
| OpenAI Codex OAuth on the PI runner | `openai-codex/*` model refs |
| Claude Code or other external harness | ACP/acpx |
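
For the `openai-codex` row, a minimal config needs only the provider-prefixed model ref; leaving `agentRuntime` unset keeps the turn on the PI runner:

```json5
{
  agents: {
    defaults: {
      // Codex subscription OAuth through the normal PI runner.
      // No agentRuntime override, so execution stays on PI.
      model: "openai-codex/gpt-5.5",
    },
  },
}
```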
@@ -122,9 +132,9 @@ OpenClaw chooses an embedded runtime after provider and model resolution:
1. A session's recorded runtime wins. Config changes do not hot-switch an
existing transcript to a different native thread system.
2. `OPENCLAW_AGENT_RUNTIME=<id>` forces that runtime for new or reset sessions.
3. `agents.defaults.embeddedHarness.runtime` or
`agents.list[].embeddedHarness.runtime` can set `auto`, `pi`, or a registered
runtime id such as `codex`.
3. `agents.defaults.agentRuntime.id` or `agents.list[].agentRuntime.id` can set
`auto`, `pi`, a registered embedded harness id such as `codex`, or a
supported CLI backend alias such as `claude-cli`.
4. In `auto` mode, registered plugin runtimes can claim supported provider/model
pairs.
5. If no runtime claims a turn in `auto` mode and `fallback: "pi"` is set
@@ -137,6 +147,24 @@ Explicit plugin runtimes fail closed by default. For example,
an explicit runtime selection does not inherit a broader fallback setting, so an
agent-level `runtime: "codex"` is not silently routed back to PI just because
defaults used `fallback: "pi"`.
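
The interplay of `auto` and `fallback` might be configured as below; note that the exact nesting of `fallback` is an assumption in this sketch, since this page only names the key:

```json5
{
  agents: {
    defaults: {
      agentRuntime: {
        id: "auto",     // let registered runtimes claim provider/model pairs
        fallback: "pi", // assumed nesting: route unclaimed turns to PI
      },
    },
  },
}
```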
CLI backend aliases are different from embedded harness ids. The preferred
Claude CLI form is:
```json5
{
agents: {
defaults: {
model: "anthropic/claude-opus-4-7",
agentRuntime: { id: "claude-cli" },
},
},
}
```
Legacy refs such as `claude-cli/claude-opus-4-7` remain supported for
compatibility, but new config should keep the provider/model canonical and put
the execution backend in `agentRuntime.id`.
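
For contrast, the legacy shape that remains accepted looks like this; OpenClaw migrates it back to a canonical `anthropic/...` ref with the runtime recorded separately:

```json5
{
  agents: {
    defaults: {
      // Legacy: execution backend encoded in the model ref.
      // Still works, but prefer the canonical form with agentRuntime.id.
      model: "claude-cli/claude-opus-4-7",
    },
  },
}
```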
`auto` mode is intentionally conservative. Plugin runtimes can claim
provider/model pairs they understand, but the Codex plugin does not claim the
`openai-codex` provider in `auto` mode. That keeps
@@ -146,7 +174,7 @@ moving subscription-auth configs onto the native app-server harness.
If `openclaw doctor` warns that the `codex` plugin is enabled while
`openai-codex/*` still routes through PI, treat that as a diagnosis, not a
migration. Keep the config unchanged when PI Codex OAuth is what you want.
Switch to `openai/<model>` plus `runtime: "codex"` only when you want native
Switch to `openai/<model>` plus `agentRuntime.id: "codex"` only when you want native
Codex app-server execution.
## Compatibility contract


@@ -24,17 +24,17 @@ Reference for **LLM/model providers** (not chat channels like WhatsApp/Telegram)
- `openai/<model>` uses the direct OpenAI API-key provider in PI.
- `openai-codex/<model>` uses Codex OAuth in PI.
- `openai/<model>` plus `agents.defaults.embeddedHarness.runtime: "codex"` uses the native Codex app-server harness.
- `openai/<model>` plus `agents.defaults.agentRuntime.id: "codex"` uses the native Codex app-server harness.
See [OpenAI](/providers/openai) and [Codex harness](/plugins/codex-harness). If the provider/runtime split is confusing, read [Agent runtimes](/concepts/agent-runtimes) first.
Plugin auto-enable follows the same boundary: `openai-codex/<model>` belongs to the OpenAI plugin, while the Codex plugin is enabled by `embeddedHarness.runtime: "codex"` or legacy `codex/<model>` refs.
Plugin auto-enable follows the same boundary: `openai-codex/<model>` belongs to the OpenAI plugin, while the Codex plugin is enabled by `agentRuntime.id: "codex"` or legacy `codex/<model>` refs.
GPT-5.5 is available through `openai/gpt-5.5` for direct API-key traffic, `openai-codex/gpt-5.5` in PI for Codex OAuth, and the native Codex app-server harness when `embeddedHarness.runtime: "codex"` is set.
GPT-5.5 is available through `openai/gpt-5.5` for direct API-key traffic, `openai-codex/gpt-5.5` in PI for Codex OAuth, and the native Codex app-server harness when `agentRuntime.id: "codex"` is set.
</Accordion>
<Accordion title="CLI runtimes">
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set `agents.defaults.embeddedHarness.runtime` to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set `agents.defaults.agentRuntime.id` to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
Legacy `claude-cli/*`, `google-gemini-cli/*`, and `codex-cli/*` refs migrate back to canonical provider refs with the runtime recorded separately.
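
As a sketch of the Gemini case, assuming the same shape as the Claude CLI example (the concrete model name is illustrative):

```json5
{
  agents: {
    defaults: {
      // Canonical Google model ref; local Gemini CLI as the execution backend.
      model: "google/gemini-2.5-pro", // illustrative model name
      agentRuntime: { id: "google-gemini-cli" },
    },
  },
}
```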
@@ -108,6 +108,10 @@ OpenClaw ships with the piai catalog. These providers require **no** `models.
- Example model: `anthropic/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice apiKey`
- Direct public Anthropic requests support the shared `/fast` toggle and `params.fastMode`, including API-key and OAuth-authenticated traffic sent to `api.anthropic.com`. OpenClaw maps that to Anthropic `service_tier` (`auto` vs `standard_only`).
- Preferred Claude CLI config keeps the model ref canonical and selects the CLI
backend separately: `anthropic/claude-opus-4-7` with
`agents.defaults.agentRuntime.id: "claude-cli"`. Legacy
`claude-cli/claude-opus-4-7` refs still work for compatibility.
<Note>
Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so OpenClaw treats Claude CLI reuse and `claude -p` usage as sanctioned for this integration unless Anthropic publishes a new policy. Anthropic setup-token remains available as a supported OpenClaw token path, but OpenClaw now prefers Claude CLI reuse and `claude -p` when available.
@@ -124,7 +128,7 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- PI model ref: `openai-codex/gpt-5.5`
- Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.embeddedHarness.runtime: "codex"`
- Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.agentRuntime.id: "codex"`
- Native Codex app-server harness docs: [Codex harness](/plugins/codex-harness)
- Legacy model refs: `codex/gpt-*`
- Plugin boundary: `openai-codex/*` loads the OpenAI plugin; the native Codex app-server plugin is selected only by the Codex harness runtime or legacy `codex/*` refs.


@@ -13,7 +13,7 @@ Quick provider overview + examples: [/concepts/model-providers](/concepts/model-
Model refs choose a provider and model. They do not usually choose the
low-level agent runtime. For example, `openai/gpt-5.5` can run through the
normal OpenAI provider path or through the Codex app-server runtime, depending
on `agents.defaults.embeddedHarness.runtime`. See
on `agents.defaults.agentRuntime.id`. See
[/concepts/agent-runtimes](/concepts/agent-runtimes).
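
To illustrate, the same model ref can take either path depending on one key; this sketch assumes the `defaults` shape used elsewhere in these docs:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      // Omit agentRuntime (or leave it on `pi`/`auto`) for the normal
      // OpenAI provider path; set "codex" for the Codex app-server runtime.
      agentRuntime: { id: "codex" },
    },
  },
}
```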
## How model selection works