docs(openai): document GPT-5.5 defaults

Peter Steinberger
2026-04-23 20:01:02 +01:00
parent cd5bc2fc93
commit 89051c6bf6
33 changed files with 128 additions and 126 deletions


@@ -658,14 +658,14 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
</Accordion>
<Accordion title="How does Codex auth work?">
-OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to `openai-codex/gpt-5.4` when appropriate. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
+OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to `openai-codex/gpt-5.5` when appropriate. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
</Accordion>
-<Accordion title="Why does ChatGPT GPT-5.4 not unlock openai/gpt-5.4 in OpenClaw?">
+<Accordion title="Why does ChatGPT GPT-5.5 not unlock openai/gpt-5.5 in OpenClaw?">
OpenClaw treats the two routes separately:
-- `openai-codex/gpt-5.4` = ChatGPT/Codex OAuth
-- `openai/gpt-5.4` = direct OpenAI Platform API
+- `openai-codex/gpt-5.5` = ChatGPT/Codex OAuth
+- `openai/gpt-5.5` = direct OpenAI Platform API
In OpenClaw, ChatGPT/Codex sign-in is wired to the `openai-codex/*` route,
not the direct `openai/*` route. If you want the direct API path in
@@ -2219,7 +2219,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
agents.defaults.model.primary
```
-Models are referenced as `provider/model` (example: `openai/gpt-5.4`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
+Models are referenced as `provider/model` (example: `openai/gpt-5.5`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
</Accordion>
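Since the provider-less fallback chain described above is a deprecated compatibility path, pinning an explicit `provider/model` sidesteps it entirely. A minimal config sketch using the `agents.defaults.model.primary` key from this hunk (same shape as the other config examples in this commit):

```json5
{
  agents: {
    defaults: {
      // Explicit provider/model: no alias lookup, no fallback resolution.
      model: { primary: "openai/gpt-5.5" },
    },
  },
}
```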
@@ -2341,23 +2341,23 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
</Accordion>
-<Accordion title="Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding?">
+<Accordion title="Can I use GPT 5.5 for daily tasks and Codex 5.5 for coding?">
Yes. Set one as default and switch as needed:
-- **Quick switch (per session):** `/model gpt-5.4` for daily tasks, `/model openai-codex/gpt-5.4` for coding with Codex OAuth.
-- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.4`, then switch to `openai-codex/gpt-5.4` when coding (or the other way around).
+- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, `/model openai-codex/gpt-5.5` for coding with Codex OAuth.
+- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.5`, then switch to `openai-codex/gpt-5.5` when coding (or the other way around).
- **Sub-agents:** route coding tasks to sub-agents with a different default model.
See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
</Accordion>
-<Accordion title="How do I configure fast mode for GPT 5.4?">
+<Accordion title="How do I configure fast mode for GPT 5.5?">
Use either a session toggle or a config default:
-- **Per session:** send `/fast on` while the session is using `openai/gpt-5.4` or `openai-codex/gpt-5.4`.
-- **Per model default:** set `agents.defaults.models["openai/gpt-5.4"].params.fastMode` to `true`.
-- **Codex OAuth too:** if you also use `openai-codex/gpt-5.4`, set the same flag there.
+- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5` or `openai-codex/gpt-5.5`.
+- **Per model default:** set `agents.defaults.models["openai/gpt-5.5"].params.fastMode` to `true`.
+- **Codex OAuth too:** if you also use `openai-codex/gpt-5.5`, set the same flag there.
Example:
@@ -2366,12 +2366,12 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
agents: {
defaults: {
models: {
-"openai/gpt-5.4": {
+"openai/gpt-5.5": {
params: {
fastMode: true,
},
},
-"openai-codex/gpt-5.4": {
+"openai-codex/gpt-5.5": {
params: {
fastMode: true,
},
@@ -2442,7 +2442,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
model: { primary: "minimax/MiniMax-M2.7" },
models: {
"minimax/MiniMax-M2.7": { alias: "minimax" },
-"openai/gpt-5.4": { alias: "gpt" },
+"openai/gpt-5.5": { alias: "gpt" },
},
},
},
@@ -2470,7 +2470,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
- `opus` → `anthropic/claude-opus-4-6`
- `sonnet` → `anthropic/claude-sonnet-4-6`
-- `gpt` → `openai/gpt-5.4`
+- `gpt` → `openai/gpt-5.5`
- `gpt-mini` → `openai/gpt-5.4-mini`
- `gpt-nano` → `openai/gpt-5.4-nano`
- `gemini` → `google/gemini-3.1-pro-preview`
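Each shorthand in the list above is declared per model in config. A minimal sketch following the same `alias` pattern as the `minimax` example earlier in this diff:

```json5
{
  agents: {
    defaults: {
      models: {
        // After this, `/model gpt` resolves to the full provider/model id.
        "openai/gpt-5.5": { alias: "gpt" },
      },
    },
  },
}
```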