docs(openai): canonicalize GPT model refs

Peter Steinberger
2026-04-23 20:38:45 +01:00
parent 17830983ce
commit a8173276bf
14 changed files with 104 additions and 118 deletions


@@ -122,19 +122,19 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
The bundled Codex plugin follows this pattern:
- provider id: `codex`
-- user model refs: `codex/gpt-5.5`, `codex/gpt-5.2`, or another model returned
-  by the Codex app server
+- user model refs: canonical `openai/gpt-5.5` plus
+  `embeddedHarness.runtime: "codex"`; legacy `codex/gpt-*` refs remain accepted
+  for compatibility
- harness id: `codex`
- auth: synthetic provider availability, because the Codex harness owns the
native Codex login/session
- app-server request: OpenClaw sends the bare model id to Codex and lets the
harness talk to the native app-server protocol
-The Codex plugin is additive. Plain `openai/gpt-*` refs remain OpenAI provider
-refs and continue to use the normal OpenClaw provider path. Select `codex/gpt-*`
-when you want Codex-managed auth, Codex model discovery, native threads, and
-Codex app-server execution. `/model` can switch among the Codex models returned
-by the Codex app server without requiring OpenAI provider credentials.
+The Codex plugin is additive. Plain `openai/gpt-*` refs continue to use the
+normal OpenClaw provider path unless you force the Codex harness with
+`embeddedHarness.runtime: "codex"`. Older `codex/gpt-*` refs still select the
+Codex provider and harness for compatibility.
For operator setup, model prefix examples, and Codex-only configs, see
[Codex Harness](/plugins/codex-harness).
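To make the routing described in this hunk concrete, here is a minimal config sketch contrasting the canonical and legacy selections. This follows the shape of the config examples later in this commit; the comments are illustrative only (strict JSON does not allow them), and the exact schema is whatever OpenClaw actually accepts:

```jsonc
{
  "agents": {
    "defaults": {
      // Canonical: plain OpenAI model ref, forced onto the Codex harness.
      "model": "openai/gpt-5.5",
      "embeddedHarness": { "runtime": "codex" }
      // Legacy alias, still accepted for compatibility:
      //   "model": "codex/gpt-5.5"
      // (selects the Codex provider and harness without embeddedHarness)
    }
  }
}
```

Dropping `embeddedHarness.runtime` keeps `openai/gpt-5.5` on the normal OpenClaw provider path.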
@@ -156,13 +156,9 @@ into the OpenClaw transcript.
The bundled `codex` harness is the native Codex mode for embedded OpenClaw
agent turns. Enable the bundled `codex` plugin first, and include `codex` in
-`plugins.allow` if your config uses a restrictive allowlist. It is different
-from `openai-codex/*`:
-- `openai-codex/*` uses ChatGPT/Codex OAuth through the normal OpenClaw provider
-  path.
-- `codex/*` uses the bundled Codex provider and routes the turn through Codex
-  app-server.
+`plugins.allow` if your config uses a restrictive allowlist. New configs should
+use `openai/gpt-*` with `embeddedHarness.runtime: "codex"`. Legacy
+`openai-codex/*` and `codex/*` model refs remain compatibility aliases.
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
@@ -189,7 +185,7 @@ For Codex-only embedded runs:
{
"agents": {
"defaults": {
-"model": "codex/gpt-5.5",
+"model": "openai/gpt-5.5",
"embeddedHarness": {
"runtime": "codex",
"fallback": "none"
@@ -230,7 +226,7 @@ Per-agent overrides use the same shape:
"list": [
{
"id": "codex-only",
-"model": "codex/gpt-5.5",
+"model": "openai/gpt-5.5",
"embeddedHarness": {
"runtime": "codex",
"fallback": "none"