fix: migrate agent runtime config

Peter Steinberger
2026-04-26 07:58:48 +01:00
parent 9d6e79019f
commit 5b9be2cdb1
61 changed files with 873 additions and 335 deletions


@@ -97,6 +97,25 @@ Anthropic's current public docs:
Setup and runtime details for the Claude CLI backend are in [CLI Backends](/gateway/cli-backends).
</Note>
### Config example
Prefer the canonical Anthropic model ref plus a CLI runtime override:
```json5
{
agents: {
defaults: {
model: { primary: "anthropic/claude-opus-4-7" },
agentRuntime: { id: "claude-cli" },
},
},
}
```
Legacy `claude-cli/claude-opus-4-7` model refs still work for
compatibility, but new config should keep provider/model selection as
`anthropic/*` and put the execution backend in `agentRuntime.id`.
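For comparison, the legacy shape folded the execution backend into the model ref itself. A pre-migration config might have looked like this (sketch; still accepted, but not recommended for new setups):

```json5
{
  agents: {
    defaults: {
      // Legacy: runtime baked into the model ref; kept working for compatibility.
      model: { primary: "claude-cli/claude-opus-4-7" },
    },
  },
}
```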
<Tip>
If you want the clearest billing path, use an Anthropic API key instead. OpenClaw also supports subscription-style options from [OpenAI Codex](/providers/openai), [Qwen Cloud](/providers/qwen), [MiniMax](/providers/minimax), and [Z.AI / GLM](/providers/glm).
</Tip>


@@ -13,7 +13,7 @@ Gemini Grounding.
- Provider: `google`
- Auth: `GEMINI_API_KEY` or `GOOGLE_API_KEY`
- API: Google Gemini API
-- Runtime option: `agents.defaults.embeddedHarness.runtime: "google-gemini-cli"`
+- Runtime option: `agents.defaults.agentRuntime.id: "google-gemini-cli"`
reuses Gemini CLI OAuth while keeping model refs canonical as `google/*`.
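Applied to config, that option follows the same shape as the other runtime overrides. A minimal sketch (the model ref below is a placeholder, not a confirmed id):

```json5
{
  agents: {
    defaults: {
      model: { primary: "google/gemini-2.5-pro" }, // placeholder model ref
      agentRuntime: { id: "google-gemini-cli" },   // reuse Gemini CLI OAuth
    },
  },
}
```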
## Getting started


@@ -17,7 +17,7 @@ embedded agent loop:
- **API key** — direct OpenAI Platform access with usage-based billing (`openai/*` models)
- **Codex subscription through PI** — ChatGPT/Codex sign-in with subscription access (`openai-codex/*` models)
-- **Codex app-server harness** — native Codex app-server execution (`openai/*` models plus `agents.defaults.embeddedHarness.runtime: "codex"`)
+- **Codex app-server harness** — native Codex app-server execution (`openai/*` models plus `agents.defaults.agentRuntime.id: "codex"`)
OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw.
@@ -27,13 +27,13 @@ changing config.
## Quick choice
-| Goal | Use | Notes |
-| --------------------------------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------- |
-| Direct API-key billing | `openai/gpt-5.5` | Set `OPENAI_API_KEY` or run OpenAI API-key onboarding. |
-| GPT-5.5 with ChatGPT/Codex subscription auth | `openai-codex/gpt-5.5` | Default PI route for Codex OAuth. Best first choice for subscription setups. |
-| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` plus `embeddedHarness.runtime: "codex"` | Forces the Codex app-server harness for that model ref. |
-| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
-| Transparent-background images | `openai/gpt-image-1.5` | Use `outputFormat=png` or `webp` and `openai.background=transparent`. |
+| Goal | Use | Notes |
+| --------------------------------------------- | ------------------------------------------------ | ---------------------------------------------------------------------------- |
+| Direct API-key billing | `openai/gpt-5.5` | Set `OPENAI_API_KEY` or run OpenAI API-key onboarding. |
+| GPT-5.5 with ChatGPT/Codex subscription auth | `openai-codex/gpt-5.5` | Default PI route for Codex OAuth. Best first choice for subscription setups. |
+| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` plus `agentRuntime.id: "codex"` | Forces the Codex app-server harness for that model ref. |
+| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
+| Transparent-background images | `openai/gpt-image-1.5` | Use `outputFormat=png` or `webp` and `openai.background=transparent`. |
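As a sketch, the app-server-harness row of the table translates to config like this (same shape as the other runtime overrides in these docs):

```json5
{
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.5" },
      agentRuntime: { id: "codex" }, // force the native Codex app-server harness
    },
  },
}
```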
## Naming map
@@ -44,7 +44,7 @@ The names are similar but not interchangeable:
| `openai` | Provider prefix | Direct OpenAI Platform API route. |
| `openai-codex` | Provider prefix | OpenAI Codex OAuth/subscription route through the normal OpenClaw PI runner. |
| `codex` plugin | Plugin | Bundled OpenClaw plugin that provides native Codex app-server runtime and `/codex` chat controls. |
-| `embeddedHarness.runtime: codex` | Agent runtime | Force the native Codex app-server harness for embedded turns. |
+| `agentRuntime.id: codex` | Agent runtime | Force the native Codex app-server harness for embedded turns. |
| `/codex ...` | Chat command set | Bind/control Codex app-server threads from a conversation. |
| `runtime: "acp", agentId: "codex"` | ACP session route | Explicit fallback path that runs Codex through ACP/acpx. |
@@ -57,7 +57,7 @@ combination so you can confirm it is intentional; it does not rewrite it.
GPT-5.5 is available through both direct OpenAI Platform API-key access and
subscription/OAuth routes. Use `openai/gpt-5.5` for direct `OPENAI_API_KEY`
traffic, `openai-codex/gpt-5.5` for Codex OAuth through PI, or
-`openai/gpt-5.5` with `embeddedHarness.runtime: "codex"` for the native Codex
+`openai/gpt-5.5` with `agentRuntime.id: "codex"` for the native Codex
app-server harness.
</Note>
@@ -65,7 +65,7 @@ app-server harness.
Enabling the OpenAI plugin, or selecting an `openai-codex/*` model, does not
enable the bundled Codex app-server plugin. OpenClaw enables that plugin only
when you explicitly select the native Codex harness with
`embeddedHarness.runtime: "codex"` or use a legacy `codex/*` model ref.
`agentRuntime.id: "codex"` or use a legacy `codex/*` model ref.
If the bundled `codex` plugin is enabled but `openai-codex/*` still resolves
through PI, `openclaw doctor` warns and leaves the route unchanged.
</Note>
@@ -76,7 +76,7 @@ through PI, `openclaw doctor` warns and leaves the route unchanged.
| ------------------------- | ---------------------------------------------------------- | ------------------------------------------------------ |
| Chat / Responses | `openai/<model>` model provider | Yes |
| Codex subscription models | `openai-codex/<model>` with `openai-codex` OAuth | Yes |
-| Codex app-server harness | `openai/<model>` with `embeddedHarness.runtime: codex` | Yes |
+| Codex app-server harness | `openai/<model>` with `agentRuntime.id: codex` | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | `image_generate` | Yes |
| Videos | `video_generate` | Yes |
@@ -120,15 +120,15 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Runtime config | Route | Auth |
| ---------------------- | -------------------------- | --------------------------- | ---------------- |
-| `openai/gpt-5.5` | omitted / `runtime: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
-| `openai/gpt-5.4-mini` | omitted / `runtime: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
-| `openai/gpt-5.5` | `runtime: "codex"` | Codex app-server harness | Codex app-server |
+| `openai/gpt-5.5` | omitted / `agentRuntime.id: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
+| `openai/gpt-5.4-mini` | omitted / `agentRuntime.id: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
+| `openai/gpt-5.5` | `agentRuntime.id: "codex"` | Codex app-server harness | Codex app-server |
<Note>
`openai/*` is the direct OpenAI API-key route unless you explicitly force
the Codex app-server harness. Use `openai-codex/*` for Codex OAuth through
the default PI runner, or use `openai/gpt-5.5` with
`embeddedHarness.runtime: "codex"` for native Codex app-server execution.
`agentRuntime.id: "codex"` for native Codex app-server execution.
</Note>
### Config example
@@ -185,7 +185,7 @@ Choose your preferred auth method and follow the setup steps.
|-----------|----------------|-------|------|
| `openai-codex/gpt-5.5` | omitted / `runtime: "pi"` | ChatGPT/Codex OAuth through PI | Codex sign-in |
| `openai-codex/gpt-5.5` | `runtime: "auto"` | Still PI unless a plugin explicitly claims `openai-codex` | Codex sign-in |
-| `openai/gpt-5.5` | `embeddedHarness.runtime: "codex"` | Codex app-server harness | Codex app-server auth |
+| `openai/gpt-5.5` | `agentRuntime.id: "codex"` | Codex app-server harness | Codex app-server auth |
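For the default subscription route in the table above, a minimal sketch leaves `agentRuntime` unset so the PI runner applies:

```json5
{
  agents: {
    defaults: {
      model: { primary: "openai-codex/gpt-5.5" },
      // No agentRuntime override: resolves through the default PI runner.
    },
  },
}
```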
<Note>
Keep using the `openai-codex` provider id for auth/profile commands. The
@@ -211,7 +211,7 @@ Choose your preferred auth method and follow the setup steps.
The default PI harness appears as `Runtime: OpenClaw Pi Default`. When the
bundled Codex app-server harness is selected, `/status` shows
`Runtime: OpenAI Codex`. Existing sessions keep their recorded harness id, so use
-`/new` or `/reset` after changing `embeddedHarness` if you want `/status` to
+`/new` or `/reset` after changing `agentRuntime` if you want `/status` to
reflect a new PI/Codex choice.
### Doctor warning
@@ -220,7 +220,7 @@ Choose your preferred auth method and follow the setup steps.
`openai-codex/*` route is selected, `openclaw doctor` warns that the model
still resolves through PI. Keep the config unchanged when that is the
intended subscription-auth route. Switch to `openai/<model>` plus
`embeddedHarness.runtime: "codex"` only when you want native Codex
`agentRuntime.id: "codex"` only when you want native Codex
app-server execution.
### Context window cap
@@ -380,7 +380,7 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov
OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai-codex/gpt-5.5`, `openai/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
-The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `embeddedHarness.runtime: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
+The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `agentRuntime.id: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
The GPT-5 contribution adds a tagged behavior contract for persona persistence, execution safety, tool discipline, output shape, completion checks, and verification. Channel-specific reply and silent-message behavior stays in the shared OpenClaw system prompt and outbound delivery policy. The GPT-5 guidance is always enabled for matching models. The friendly interaction-style layer is separate and configurable.
@@ -766,7 +766,7 @@ the Server-side compaction accordion below.
- Injects `context_management: [{ type: "compaction", compact_threshold: ... }]`
- Default `compact_threshold`: 70% of `contextWindow` (or `80000` when unavailable)
-This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured separately with `agents.defaults.embeddedHarness.runtime`.
+This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured separately with `agents.defaults.agentRuntime.id`.
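Assuming a 200k-token `contextWindow`, the injected payload would look roughly like this (threshold values are illustrative, derived from the 70% default above):

```json5
{
  context_management: [
    // 70% of contextWindow (200000 * 0.7); falls back to 80000 when
    // the context window size is unavailable.
    { type: "compaction", compact_threshold: 140000 },
  ],
}
```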
<Tabs>
<Tab title="Enable explicitly">