---
summary: How OpenClaw separates model providers, models, channels, and agent runtimes
title: Agent runtimes
read_when:
---
An agent runtime is the component that owns one prepared model loop: it receives the prompt, drives model output, handles native tool calls, and returns the finished turn to OpenClaw.
Runtimes are easy to confuse with providers because both show up near model configuration. They are different layers:
| Layer | Examples | What it means |
|---|---|---|
| Provider | `openai`, `anthropic`, `openai-codex` | How OpenClaw authenticates, discovers models, and names model refs. |
| Model | `gpt-5.5`, `claude-opus-4-6` | The model selected for the agent turn. |
| Agent runtime | `pi`, `codex`, `claude-cli` | The low-level loop or backend that executes the prepared turn. |
| Channel | Telegram, Discord, Slack, WhatsApp | Where messages enter and leave OpenClaw. |
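Read one concrete turn against those layers (the values are the examples from the table, not new config):

```
Channel:  Telegram         -> where the message enters and leaves OpenClaw
Provider: anthropic        -> auth, model discovery, model ref naming
Model:    claude-opus-4-6  -> the model selected for this turn
Runtime:  pi               -> the loop that executes the prepared turn
```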
You will also see the word *harness* in code. A harness is the implementation that provides an agent runtime. For example, the bundled Codex harness implements the `codex` runtime. Public config uses `agentRuntime.id` on provider or model entries; whole-agent runtime keys are legacy and ignored. `openclaw doctor --fix` removes old whole-agent runtime pins and rewrites legacy runtime model refs to canonical provider/model refs plus model-scoped runtime policy where needed.
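As an illustration of that rewrite (a sketch of the shape only; the doctor's exact edits depend on the legacy entry):

```json5
// Before: legacy runtime model ref
{ agents: { defaults: { model: "claude-cli/claude-opus-4-7" } } }

// After openclaw doctor --fix: canonical ref plus model-scoped runtime policy
{
  agents: {
    defaults: {
      model: "anthropic/claude-opus-4-7",
      models: {
        "anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
      },
    },
  },
}
```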
There are two runtime families:
- Embedded harnesses run inside OpenClaw's prepared agent loop. Today this is the built-in `pi` runtime plus registered plugin harnesses such as `codex`.
- CLI backends run a local CLI process while keeping the model ref canonical. For example, `anthropic/claude-opus-4-7` with a model-scoped `agentRuntime.id: "claude-cli"` means "select the Anthropic model, execute through Claude CLI." `claude-cli` is not an embedded harness id and must not be passed to `AgentHarness` selection. (Both families appear in the config sketch below.)
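A minimal sketch showing one model of each family side by side (model ids are taken from the examples on this page; the `models` map shape matches the selection rules described later):

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      models: {
        // Embedded harness: the codex plugin runs the prepared loop in-process.
        "openai/gpt-5.5": { agentRuntime: { id: "codex" } },
        // CLI backend: canonical Anthropic ref, executed through Claude CLI.
        "anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
      },
    },
  },
}
```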
## Codex surfaces
Most confusion comes from several different surfaces sharing the Codex name:
| Surface | OpenClaw name/config | What it does |
|---|---|---|
| Native Codex app-server runtime | `openai/*` model refs | Runs OpenAI embedded agent turns through Codex app-server. This is the usual ChatGPT/Codex subscription setup. |
| Codex OAuth auth profiles | `openai-codex` auth provider | Stores ChatGPT/Codex subscription auth that the Codex app-server harness consumes. |
| Codex ACP adapter | `runtime: "acp"`, `agentId: "codex"` | Runs Codex through the external ACP/acpx control plane. Use only when ACP/acpx is explicitly requested. |
| Native Codex chat-control command set | `/codex ...` | Binds, resumes, steers, stops, and inspects Codex app-server threads from chat. |
| OpenAI Platform API route for non-agent surfaces | `openai/*` plus API-key auth | Used for direct OpenAI APIs such as images, embeddings, speech, and realtime. |
Those surfaces are intentionally independent. Enabling the `codex` plugin makes the native app-server features available; `openclaw doctor --fix` owns legacy `openai-codex/*` route repair and stale session pin cleanup. Selecting `openai/*` for an agent model now means "run this through Codex" unless a non-agent OpenAI API surface is being used.
The common ChatGPT/Codex subscription setup uses Codex OAuth for auth, but keeps the model ref as `openai/*` and selects the `codex` runtime:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
    },
  },
}
```
That means OpenClaw selects an OpenAI model ref, then asks the Codex app-server runtime to run the embedded agent turn. It does not mean "use API billing," and it does not mean the channel, model provider catalog, or OpenClaw session store becomes Codex.
When the bundled `codex` plugin is enabled, natural-language Codex control should use the native `/codex` command surface (`/codex bind`, `/codex threads`, `/codex resume`, `/codex steer`, `/codex stop`) instead of ACP. Use ACP for Codex only when the user explicitly asks for ACP/acpx or is testing the ACP adapter path. Claude Code, Gemini CLI, OpenCode, Cursor, and similar external harnesses still use ACP.
This is the agent-facing decision tree:
- If the user asks for Codex bind/control/thread/resume/steer/stop, use the native `/codex` command surface when the bundled `codex` plugin is enabled (see the example after this list).
- If the user asks for Codex as the embedded runtime or wants the normal subscription-backed Codex agent experience, use `openai/<model>`.
- If the user explicitly chooses PI for an OpenAI model, keep the model ref as `openai/<model>` and set provider/model runtime policy to `agentRuntime.id: "pi"`. A selected `openai-codex` auth profile is routed internally through PI's legacy Codex-auth transport.
- If legacy config still contains `openai-codex/*` model refs, repair it to `openai/<model>` with `openclaw doctor --fix`.
- If the user explicitly says ACP, acpx, or Codex ACP adapter, use ACP with `runtime: "acp"` and `agentId: "codex"`.
- If the request is for Claude Code, Gemini CLI, OpenCode, Cursor, Droid, or another external harness, use ACP/acpx, not the native sub-agent runtime.
| You mean... | Use... |
|---|---|
| Codex app-server chat/thread control | `/codex ...` from the bundled `codex` plugin |
| Codex app-server embedded agent runtime | `openai/*` agent model refs |
| OpenAI Codex OAuth | `openai-codex` auth profiles |
| Claude Code or other external harness | ACP/acpx |
For the OpenAI-family prefix split, see OpenAI and Model providers. For the Codex runtime support contract, see Codex harness.
## Runtime ownership
Different runtimes own different amounts of the loop.
| Surface | OpenClaw PI embedded | Codex app-server |
|---|---|---|
| Model loop owner | OpenClaw through the PI embedded runner | Codex app-server |
| Canonical thread state | OpenClaw transcript | Codex thread, plus OpenClaw transcript mirror |
| OpenClaw dynamic tools | Native OpenClaw tool loop | Bridged through the Codex adapter |
| Native shell and file tools | PI/OpenClaw path | Codex-native tools, bridged through native hooks where supported |
| Context engine | Native OpenClaw context assembly | OpenClaw projects assembled context into the Codex turn |
| Compaction | OpenClaw or selected context engine | Codex-native compaction, with OpenClaw notifications and mirror maintenance |
| Channel delivery | OpenClaw | OpenClaw |
This ownership split is the main design rule:
- If OpenClaw owns the surface, OpenClaw can provide normal plugin hook behavior.
- If the native runtime owns the surface, OpenClaw needs runtime events or native hooks.
- If the native runtime owns canonical thread state, OpenClaw should mirror and project context, not rewrite unsupported internals.
## Runtime selection
OpenClaw chooses an embedded runtime after provider and model resolution:
- Model-scoped runtime policy wins. This can live in a configured provider model entry or in `agents.defaults.models["provider/model"].agentRuntime` / `agents.list[].models["provider/model"].agentRuntime` (sketched after this list).
- Provider-scoped runtime policy comes next, at `models.providers.<provider>.agentRuntime`.
- In `auto` mode, registered plugin runtimes can claim supported provider/model pairs.
- If no runtime claims a turn in `auto` mode, OpenClaw uses PI as the compatibility runtime. Use an explicit runtime id when the run must be strict.
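A sketch of the first two precedence levels together (ids are illustrative; the nested shapes follow the key paths named above):

```json5
{
  models: {
    providers: {
      anthropic: {
        // Provider-scoped policy: applies to every anthropic/* model...
        agentRuntime: { id: "pi" },
      },
    },
  },
  agents: {
    defaults: {
      models: {
        // ...except where a model-scoped entry overrides it, as here.
        "anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
      },
    },
  },
}
```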
Whole-session and whole-agent runtime pins are ignored. That includes `OPENCLAW_AGENT_RUNTIME`, session `agentHarnessId`/`agentRuntimeOverride` state, `agents.defaults.agentRuntime`, and `agents.list[].agentRuntime`. Run `openclaw doctor --fix` to remove stale whole-agent runtime config and convert legacy runtime model refs where OpenClaw can preserve the intent.
Explicit provider/model plugin runtimes fail closed. For example, `agentRuntime.id: "codex"` on a provider or model means Codex or a clear selection/runtime error; it is never silently routed back to PI.
CLI backend aliases are different from embedded harness ids. The preferred Claude CLI form is:

```json5
{
  agents: {
    defaults: {
      model: "anthropic/claude-opus-4-7",
      models: {
        "anthropic/claude-opus-4-7": {
          agentRuntime: { id: "claude-cli" },
        },
      },
    },
  },
}
```
Legacy refs such as `claude-cli/claude-opus-4-7` remain supported for compatibility, but new config should keep the provider/model canonical and put the execution backend in provider/model runtime policy.
`auto` mode is intentionally conservative for most providers. OpenAI agent models are the exception: unset runtime and `auto` both resolve to the Codex harness. Explicit PI runtime config remains an opt-in compatibility route for `openai/*` agent turns; when paired with a selected `openai-codex` auth profile, OpenClaw routes PI internally through the legacy Codex-auth transport while keeping the public model ref as `openai/*`. Stale OpenAI PI session pins are ignored by runtime selection and can be cleaned with `openclaw doctor --fix`. The PI opt-in shape is sketched below.
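A sketch of that explicit PI opt-in (the `models` map shape matches the Claude CLI example above):

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      models: {
        // Keep the canonical openai/* ref; pin only the runtime to PI.
        "openai/gpt-5.5": { agentRuntime: { id: "pi" } },
      },
    },
  },
}
```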
If `openclaw doctor` warns that the `codex` plugin is enabled while `openai-codex/*` remains in config, treat that as legacy route state. Run `openclaw doctor --fix` to rewrite it to `openai/*` with the Codex runtime.
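For example (a sketch of the rewrite shape; the doctor's exact edits may differ):

```json5
// Legacy route state
{ agents: { defaults: { model: "openai-codex/gpt-5.5" } } }

// After openclaw doctor --fix: canonical ref; auto resolves it to the Codex harness
{ agents: { defaults: { model: "openai/gpt-5.5" } } }
```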
## Compatibility contract
When a runtime is not PI, it should document what OpenClaw surfaces it supports. Use this shape for runtime docs:
| Question | Why it matters |
|---|---|
| Who owns the model loop? | Determines where retries, tool continuation, and final answer decisions happen. |
| Who owns canonical thread history? | Determines whether OpenClaw can edit history or only mirror it. |
| Do OpenClaw dynamic tools work? | Messaging, sessions, cron, and OpenClaw-owned tools rely on this. |
| Do dynamic tool hooks work? | Plugins expect `before_tool_call`, `after_tool_call`, and middleware around OpenClaw-owned tools. |
| Do native tool hooks work? | Shell, patch, and runtime-owned tools need native hook support for policy and observation. |
| Does the context engine lifecycle run? | Memory and context plugins depend on assemble, ingest, after-turn, and compaction lifecycle. |
| What compaction data is exposed? | Some plugins only need notifications, while others need kept/dropped metadata. |
| What is intentionally unsupported? | Users should not assume PI equivalence where the native runtime owns more state. |
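A hypothetical skeleton for such a runtime doc (the answer placeholders are illustrative, not a real contract):

```markdown
## <runtime> support contract
- Model loop owner: <runtime or OpenClaw>
- Canonical thread history: <owner; can OpenClaw edit or only mirror?>
- OpenClaw dynamic tools: <native / bridged / unsupported>
- Dynamic tool hooks: <before_tool_call, after_tool_call, middleware>
- Native tool hooks: <shell, patch, runtime-owned tools>
- Context engine lifecycle: <assemble, ingest, after-turn, compaction>
- Compaction data exposed: <notifications only / kept-dropped metadata>
- Intentionally unsupported: <list>
```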
The Codex runtime support contract is documented in Codex harness.
## Status labels
Status output may show both Execution and Runtime labels. Read them as
diagnostics, not as provider names.
- A model ref such as `openai/gpt-5.5` tells you the selected provider/model.
- A runtime id such as `codex` tells you which loop is executing the turn.
- A channel label such as Telegram or Discord tells you where the conversation is happening.
If a run still shows an unexpected runtime, inspect the selected provider/model runtime policy first. Legacy session runtime pins no longer decide routing.