docs(openai): canonicalize GPT model refs

This commit is contained in:
Peter Steinberger
2026-04-23 20:38:45 +01:00
parent 17830983ce
commit a8173276bf
14 changed files with 104 additions and 118 deletions


@@ -658,26 +658,27 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 </Accordion>
 <Accordion title="How does Codex auth work?">
-OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to `openai-codex/gpt-5.5` when appropriate. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
+OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). New model refs should use the canonical `openai/gpt-5.5` path; `openai-codex/gpt-*` remains a legacy compatibility alias. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
 </Accordion>
-<Accordion title="Why does ChatGPT GPT-5.5 not unlock openai/gpt-5.5 in OpenClaw?">
-OpenClaw treats the two routes separately:
-- `openai-codex/gpt-5.5` = ChatGPT/Codex OAuth
-- `openai/gpt-5.5` = direct OpenAI Platform API
-In OpenClaw, ChatGPT/Codex sign-in is wired to the `openai-codex/*` route,
-not the direct `openai/*` route. If you want the direct API path in
-OpenClaw, set `OPENAI_API_KEY` (or the equivalent OpenAI provider config).
-If you want ChatGPT/Codex sign-in in OpenClaw, use `openai-codex/*`.
+<Accordion title="Why does OpenClaw still mention openai-codex?">
+`openai-codex` is still the internal auth/profile provider id for ChatGPT/Codex OAuth. The model ref should be canonical OpenAI:
+- `openai/gpt-5.5` = canonical GPT-5.5 model ref
+- `openai-codex/gpt-5.5` = legacy compatibility alias
+- `openai-codex:...` = auth profile id, not a model ref
+If you want the direct OpenAI Platform billing/limit path, set
+`OPENAI_API_KEY`. If you want ChatGPT/Codex subscription auth, sign in with
+`openclaw models auth login --provider openai-codex` and keep model refs on
+`openai/*` in new configs.
 </Accordion>
 <Accordion title="Why can Codex OAuth limits differ from ChatGPT web?">
-`openai-codex/*` uses the Codex OAuth route, and its usable quota windows are
-OpenAI-managed and plan-dependent. In practice, those limits can differ from
-the ChatGPT website/app experience, even when both are tied to the same account.
+Codex OAuth uses OpenAI-managed, plan-dependent quota windows. In practice,
+those limits can differ from the ChatGPT website/app experience, even when
+both are tied to the same account.
 OpenClaw can show the currently visible provider usage/quota windows in
 `openclaw models status`, but it does not invent or normalize ChatGPT-web
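A hedged sketch of the two auth paths this hunk describes, in the docs' JSON5-style config. Only `OPENAI_API_KEY`, the `openclaw models auth login --provider openai-codex` command, and the model refs come from the diff; the surrounding shape is an assumption for illustration:

```json5
// Path A: direct OpenAI Platform billing/limits
//   export OPENAI_API_KEY=...   (or the equivalent OpenAI provider config)
// Path B: ChatGPT/Codex subscription auth
//   openclaw models auth login --provider openai-codex
// Either way, keep the model ref canonical in new configs:
{
  model: "openai/gpt-5.5", // not `openai-codex/gpt-5.5` (legacy alias)
}
```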
@@ -2344,8 +2345,8 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 <Accordion title="Can I use GPT 5.5 for daily tasks and Codex 5.5 for coding?">
 Yes. Set one as default and switch as needed:
-- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, `/model openai-codex/gpt-5.5` for coding with Codex OAuth.
-- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.5`, then switch to `openai-codex/gpt-5.5` when coding (or the other way around).
+- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, or keep the same model and switch auth/profile as needed.
+- **Default:** set `agents.defaults.model.primary` to `openai/gpt-5.5`.
 - **Sub-agents:** route coding tasks to sub-agents with a different default model.
 See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
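The default-plus-switch pattern in this hunk can be sketched as a config fragment (JSON5-style; `agents.defaults.model.primary` and the `/model` command are from the diff, the surrounding braces are assumed):

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "openai/gpt-5.5", // default for daily tasks
      },
    },
  },
}
// Per session, `/model gpt-5.5` switches without changing the default;
// sub-agents can carry a different default model for coding tasks.
```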
@@ -2355,9 +2356,9 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 <Accordion title="How do I configure fast mode for GPT 5.5?">
 Use either a session toggle or a config default:
-- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5` or `openai-codex/gpt-5.5`.
+- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5`.
 - **Per model default:** set `agents.defaults.models["openai/gpt-5.5"].params.fastMode` to `true`.
-- **Codex OAuth too:** if you also use `openai-codex/gpt-5.5`, set the same flag there.
+- **Legacy aliases:** older `openai-codex/gpt-*` entries can keep their own params, but new configs should put params on `openai/gpt-*`.
 Example:
@@ -2371,11 +2372,6 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
         fastMode: true,
       },
     },
-    "openai-codex/gpt-5.5": {
-      params: {
-        fastMode: true,
-      },
-    },
   },
 },
 },
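After this hunk, the fast-mode example is keyed only on the canonical ref. A sketch of the resulting fragment (the `agents.defaults.models["openai/gpt-5.5"].params.fastMode` path is from the FAQ text above; the outer nesting is reconstructed from the visible context and may differ from the full file):

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.5": {
          params: {
            fastMode: true, // config default; `/fast on` toggles it per session
          },
        },
      },
    },
  },
}
```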