docs(openai): canonicalize GPT model refs

Author: Peter Steinberger
Date: 2026-04-23 20:38:45 +01:00
parent 17830983ce
commit a8173276bf
14 changed files with 104 additions and 118 deletions


@@ -658,26 +658,27 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
</Accordion>
<Accordion title="How does Codex auth work?">
-OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to `openai-codex/gpt-5.5` when appropriate. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
+OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). New model refs should use the canonical `openai/gpt-5.5` path; `openai-codex/gpt-*` remains a legacy compatibility alias. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
</Accordion>
<Accordion title="Why does ChatGPT GPT-5.5 not unlock openai/gpt-5.5 in OpenClaw?">
OpenClaw treats the two routes separately:
<Accordion title="Why does OpenClaw still mention openai-codex?">
`openai-codex` is still the internal auth/profile provider id for ChatGPT/Codex OAuth. The model ref should be canonical OpenAI:
- `openai-codex/gpt-5.5` = ChatGPT/Codex OAuth
- `openai/gpt-5.5` = direct OpenAI Platform API
- `openai/gpt-5.5` = canonical GPT-5.5 model ref
- `openai-codex/gpt-5.5` = legacy compatibility alias
- `openai-codex:...` = auth profile id, not a model ref
In OpenClaw, ChatGPT/Codex sign-in is wired to the `openai-codex/*` route,
not the direct `openai/*` route. If you want the direct API path in
OpenClaw, set `OPENAI_API_KEY` (or the equivalent OpenAI provider config).
If you want ChatGPT/Codex sign-in in OpenClaw, use `openai-codex/*`.
If you want the direct OpenAI Platform billing/limit path, set
`OPENAI_API_KEY`. If you want ChatGPT/Codex subscription auth, sign in with
`openclaw models auth login --provider openai-codex` and keep model refs on
`openai/*` in new configs.
</Accordion>
<Accordion title="Why can Codex OAuth limits differ from ChatGPT web?">
-`openai-codex/*` uses the Codex OAuth route, and its usable quota windows are
-OpenAI-managed and plan-dependent. In practice, those limits can differ from
-the ChatGPT website/app experience, even when both are tied to the same account.
+Codex OAuth uses OpenAI-managed, plan-dependent quota windows. In practice,
+those limits can differ from the ChatGPT website/app experience, even when
+both are tied to the same account.
OpenClaw can show the currently visible provider usage/quota windows in
`openclaw models status`, but it does not invent or normalize ChatGPT-web
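A minimal sketch of how the auth/model split in the hunk above can land in config, assuming the `agents.defaults` shape this page already uses; the comments carry the auth detail:

```json5
// Sketch only: the model ref stays canonical either way — which credential
// OpenClaw uses (API key vs. Codex OAuth) is not encoded in the ref.
{
  agents: {
    defaults: {
      model: {
        // Canonical ref for new configs; openai-codex/gpt-5.5 is a legacy alias.
        primary: "openai/gpt-5.5",
      },
    },
  },
}
// Direct Platform billing/limits: set OPENAI_API_KEY in the environment.
// ChatGPT/Codex subscription: `openclaw models auth login --provider openai-codex`.
```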
@@ -2344,8 +2345,8 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
<Accordion title="Can I use GPT 5.5 for daily tasks and Codex 5.5 for coding?">
Yes. Set one as default and switch as needed:
-- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, `/model openai-codex/gpt-5.5` for coding with Codex OAuth.
-- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.5`, then switch to `openai-codex/gpt-5.5` when coding (or the other way around).
+- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, or keep the same model and switch auth/profile as needed.
+- **Default:** set `agents.defaults.model.primary` to `openai/gpt-5.5`.
- **Sub-agents:** route coding tasks to sub-agents with a different default model.
See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
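A purely illustrative sketch of the default-plus-sub-agent pattern from this hunk; the `coder` key and the per-agent nesting below are assumptions for illustration, not a schema this diff confirms:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "openai/gpt-5.5", // daily-driver default (canonical ref)
      },
    },
    // Hypothetical sub-agent override — key name and shape are illustrative only.
    coder: {
      model: {
        primary: "anthropic/claude-opus-4-6", // different default for coding tasks
      },
    },
  },
}
```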
@@ -2355,9 +2356,9 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
<Accordion title="How do I configure fast mode for GPT 5.5?">
Use either a session toggle or a config default:
-- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5` or `openai-codex/gpt-5.5`.
+- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5`.
- **Per model default:** set `agents.defaults.models["openai/gpt-5.5"].params.fastMode` to `true`.
-- **Codex OAuth too:** if you also use `openai-codex/gpt-5.5`, set the same flag there.
+- **Legacy aliases:** older `openai-codex/gpt-*` entries can keep their own params, but new configs should put params on `openai/gpt-*`.
Example:
@@ -2371,11 +2372,6 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
fastMode: true,
},
},
"openai-codex/gpt-5.5": {
params: {
fastMode: true,
},
},
},
},
},
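For reference, after this deletion the example block reduces to a single canonical entry; reconstructed from the surrounding context, it reads roughly:

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.5": {
          params: {
            fastMode: true, // per-model default for /fast
          },
        },
      },
    },
  },
}
```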


@@ -681,7 +681,7 @@ Docker notes:
`agent` method:
- load the bundled `codex` plugin
- select `OPENCLAW_AGENT_RUNTIME=codex`
-- send a first gateway agent turn to `codex/gpt-5.5`
+- send a first gateway agent turn to `openai/gpt-5.5` with the Codex harness forced
- send a second turn to the same OpenClaw session and verify the app-server
thread can resume
- run `/codex status` and `/codex models` through the same gateway command
@@ -691,7 +691,7 @@ Docker notes:
denied so the agent asks back
- Test: `src/gateway/gateway-codex-harness.live.test.ts`
- Enable: `OPENCLAW_LIVE_CODEX_HARNESS=1`
-- Default model: `codex/gpt-5.5`
+- Default model: `openai/gpt-5.5`
- Optional image probe: `OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1`
- Optional MCP/tool probe: `OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1`
- Optional Guardian probe: `OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1`
@@ -708,7 +708,7 @@ OPENCLAW_LIVE_CODEX_HARNESS=1 \
OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1 \
OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1 \
OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1 \
-OPENCLAW_LIVE_CODEX_HARNESS_MODEL=codex/gpt-5.5 \
+OPENCLAW_LIVE_CODEX_HARNESS_MODEL=openai/gpt-5.5 \
pnpm test:live -- src/gateway/gateway-codex-harness.live.test.ts
```
@@ -731,7 +731,7 @@ Docker notes:
`OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=0` when you need a narrower debug
run.
- Docker also exports `OPENCLAW_AGENT_HARNESS_FALLBACK=none`, matching the live
-test config so `openai-codex/*` or PI fallback cannot hide a Codex harness
+test config so legacy aliases or PI fallback cannot hide a Codex harness
regression.
### Recommended live recipes
@@ -769,7 +769,7 @@ There is no fixed “CI model list” (live is opt-in), but these are the **reco
This is the “common models” run we expect to keep working:
- OpenAI (non-Codex): `openai/gpt-5.5` (optional: `openai/gpt-5.4-mini`)
-- OpenAI Codex: `openai-codex/gpt-5.5`
+- OpenAI Codex OAuth: `openai/gpt-5.5` (`openai-codex/gpt-*` remains a legacy alias)
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
- Google (Gemini API): `google/gemini-3.1-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
- Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
@@ -777,7 +777,7 @@ This is the “common models” run we expect to keep working:
- MiniMax: `minimax/MiniMax-M2.7`
Run gateway smoke with tools + image:
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,openai-codex/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
### Baseline: tool calling (Read + optional Exec)