docs: clarify OpenAI GPT-5.5 auth routes

Peter Steinberger
2026-04-23 23:49:12 +01:00
parent 367b9721b6
commit 14c4143723
25 changed files with 173 additions and 121 deletions

View File

@@ -235,7 +235,7 @@ Run an isolated agent turn:
curl -X POST http://127.0.0.1:18789/hooks/agent \
-H 'Authorization: Bearer SECRET' \
-H 'Content-Type: application/json' \
-d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.5"}'
-d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.4"}'
```
Fields: `message` (required), `name`, `agentId`, `wakeMode`, `deliver`, `channel`, `to`, `model`, `thinking`, `timeoutSeconds`.
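A sketch of the same call with more of the optional fields filled in (values illustrative; the field names come from the list above):
```bash
curl -X POST http://127.0.0.1:18789/hooks/agent \
  -H 'Authorization: Bearer SECRET' \
  -H 'Content-Type: application/json' \
  -d '{"message":"Summarize inbox","name":"Email","model":"openai-codex/gpt-5.5","thinking":"low","timeoutSeconds":120}'
```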

View File

@@ -38,7 +38,7 @@ openclaw config get browser.executablePath
openclaw config set browser.executablePath "/usr/bin/google-chrome"
openclaw config set agents.defaults.heartbeat.every "2h"
openclaw config set agents.list[0].tools.exec.node "node-id-or-name"
openclaw config set agents.defaults.models '{"openai/gpt-5.5":{}}' --strict-json --merge
openclaw config set agents.defaults.models '{"openai/gpt-5.4":{}}' --strict-json --merge
openclaw config set channels.discord.token --ref-provider default --ref-source env --ref-id DISCORD_BOT_TOKEN
openclaw config set secrets.providers.vaultfile --provider-source file --provider-path /etc/openclaw/secrets.json --provider-mode json
openclaw config unset plugins.entries.brave.config.webSearch.apiKey
@@ -115,7 +115,7 @@ you pass `--replace`.
Use `--merge` when adding entries to those maps:
```bash
openclaw config set agents.defaults.models '{"openai/gpt-5.5":{}}' --strict-json --merge
openclaw config set agents.defaults.models '{"openai/gpt-5.4":{}}' --strict-json --merge
openclaw config set models.providers.ollama.models '[{"id":"llama3.2","name":"Llama 3.2"}]' --strict-json --merge
```
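To confirm a merge landed without clobbering existing entries, read the map back — a sketch using the `config get` form shown above:
```bash
openclaw config get agents.defaults.models
```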

View File

@@ -16,7 +16,16 @@ For model selection rules, see [/concepts/models](/concepts/models).
- CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
- `models.providers.*.models[].contextWindow` is native model metadata; `contextTokens` is the effective runtime cap.
- Fallback rules, cooldown probes, and session-override persistence: [Model failover](/concepts/model-failover).
- OpenAI GPT model refs are canonical as `openai/<model>`. Legacy `openai-codex/<model>` and `codex/<model>` refs remain compatibility aliases for older configs and tests. For native Codex app-server execution, keep the model ref as `openai/gpt-*` and force `agents.defaults.embeddedHarness.runtime: "codex"` — see [Codex harness](/plugins/codex-harness).
- OpenAI-family routes are prefix-specific: `openai/<model>` uses the direct
OpenAI API-key provider in PI, `openai-codex/<model>` uses Codex OAuth in PI,
and `openai/<model>` plus `agents.defaults.embeddedHarness.runtime: "codex"`
uses the native Codex app-server harness. See [OpenAI](/providers/openai)
and [Codex harness](/plugins/codex-harness).
- GPT-5.5 is currently available through subscription/OAuth routes:
`openai-codex/gpt-5.5` in PI or `openai/gpt-5.5` with the Codex app-server
harness. The direct API-key route for `openai/gpt-5.5` will be supported once
OpenAI enables GPT-5.5 on the public API; until then use API-enabled models
such as `openai/gpt-5.4` for `OPENAI_API_KEY` setups (see the sketch below).
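A minimal sketch of pinning each PI route in config (the `config set` path mirrors the onboarding examples; values illustrative):
```bash
# Direct API-key route (current API-enabled model)
openclaw config set agents.defaults.model.primary openai/gpt-5.4
# Codex OAuth route through PI
openclaw config set agents.defaults.model.primary openai-codex/gpt-5.5
```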
## Plugin-owned provider behavior
@@ -55,7 +64,8 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
- Example models: `openai/gpt-5.5`, `openai/gpt-5.5-pro`
- Example models: `openai/gpt-5.4`, `openai/gpt-5.4-mini`
- Direct GPT-5.5 API support will land here once OpenAI exposes GPT-5.5 on the API
- CLI: `openclaw onboard --auth-choice openai-api-key`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
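For example, a sketch that pins SSE for one model via an additive merge (assumes the `params.transport` path from the bullet above):
```bash
openclaw config set agents.defaults.models '{"openai/gpt-5.4":{"params":{"transport":"sse"}}}' --strict-json --merge
```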
@@ -72,7 +82,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
}
```
@@ -97,22 +107,24 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- Canonical model ref: `openai/gpt-5.5`
- Legacy model refs: `openai-codex/gpt-*`, `codex/gpt-*`
- PI model ref: `openai-codex/gpt-5.5`
- Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.embeddedHarness.runtime: "codex"`
- Legacy model refs: `codex/gpt-*`
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- Override per PI model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- `params.serviceTier` is also forwarded on native Codex Responses requests (`chatgpt.com/backend-api`)
- Hidden OpenClaw attribution headers (`originator`, `version`,
`User-Agent`) are only attached on native Codex traffic to
`chatgpt.com/backend-api`, not generic OpenAI-compatible proxies
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
- `openai/gpt-5.5` keeps native `contextWindow = 1000000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
- `openai-codex/gpt-5.5` keeps native `contextWindow = 1000000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
- Current GPT-5.5 access uses this OAuth/subscription route until OpenAI enables GPT-5.5 on the public API.
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
}
```
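A sketch of the runtime-cap override mentioned above (the model `id` is assumed to be `gpt-5.5`; the cap value is illustrative):
```bash
openclaw config set models.providers.openai-codex.models '[{"id":"gpt-5.5","contextTokens":200000}]' --strict-json --merge
```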

View File

@@ -70,7 +70,7 @@ Provider configuration examples (including OpenCode) live in
Use additive writes when updating `agents.defaults.models` by hand:
```bash
openclaw config set agents.defaults.models '{"openai/gpt-5.5":{}}' --strict-json --merge
openclaw config set agents.defaults.models '{"openai/gpt-5.4":{}}' --strict-json --merge
```
`openclaw config set` protects model/provider maps from accidental clobbers. A
@@ -122,7 +122,7 @@ You can switch models for the current session without restarting:
/model
/model list
/model 3
/model openai/gpt-5.5
/model openai/gpt-5.4
/model status
```

View File

@@ -207,7 +207,7 @@ refs and write a judged Markdown report:
```bash
pnpm openclaw qa character-eval \
--model openai/gpt-5.5,thinking=xhigh \
--model openai-codex/gpt-5.5,thinking=xhigh \
--model openai/gpt-5.2,thinking=xhigh \
--model openai/gpt-5,thinking=xhigh \
--model anthropic/claude-opus-4-6,thinking=high \
@@ -215,7 +215,7 @@ pnpm openclaw qa character-eval \
--model zai/glm-5.1,thinking=high \
--model moonshot/kimi-k2.5,thinking=high \
--model google/gemini-3.1-pro-preview,thinking=high \
--judge-model openai/gpt-5.5,thinking=xhigh,fast \
--judge-model openai-codex/gpt-5.5,thinking=xhigh,fast \
--judge-model anthropic/claude-opus-4-6,thinking=high \
--blind-judge-models \
--concurrency 16 \
@@ -247,12 +247,12 @@ Candidate and judge model runs both default to concurrency 16. Lower
`--concurrency` or `--judge-concurrency` when provider limits or local gateway
pressure make a run too noisy.
When no candidate `--model` is passed, the character eval defaults to
`openai/gpt-5.5`, `openai/gpt-5.2`, `openai/gpt-5`, `anthropic/claude-opus-4-6`,
`openai-codex/gpt-5.5`, `openai/gpt-5.4`, `openai/gpt-5.2`, `anthropic/claude-opus-4-6`,
`anthropic/claude-sonnet-4-6`, `zai/glm-5.1`,
`moonshot/kimi-k2.5`, and
`google/gemini-3.1-pro-preview`.
When no `--judge-model` is passed, the judges default to
`openai/gpt-5.5,thinking=xhigh,fast` and
`openai-codex/gpt-5.5,thinking=xhigh,fast` and
`anthropic/claude-opus-4-6,thinking=high`.
## Related docs

View File

@@ -234,7 +234,7 @@ Save to `~/.openclaw/openclaw.json` and you can DM the bot from that number.
userTimezone: "America/Chicago",
model: {
primary: "anthropic/claude-sonnet-4-6",
fallbacks: ["anthropic/claude-opus-4-6", "openai/gpt-5.5"],
fallbacks: ["anthropic/claude-opus-4-6", "openai/gpt-5.4"],
},
imageModel: {
primary: "openrouter/anthropic/claude-sonnet-4-6",
@@ -242,7 +242,7 @@ Save to `~/.openclaw/openclaw.json` and you can DM the bot from that number.
models: {
"anthropic/claude-opus-4-6": { alias: "opus" },
"anthropic/claude-sonnet-4-6": { alias: "sonnet" },
"openai/gpt-5.5": { alias: "gpt" },
"openai/gpt-5.4": { alias: "gpt" },
},
skills: ["github", "weather"], // inherited by agents that omit list[].skills
thinkingDefault: "low",

View File

@@ -1234,7 +1234,7 @@ Time format in system prompt. Default: `auto` (OS preference).
- `pdfMaxPages`: default maximum pages considered by extraction fallback mode in the `pdf` tool.
- `verboseDefault`: default verbose level for agents. Values: `"off"`, `"on"`, `"full"`. Default: `"off"`.
- `elevatedDefault`: default elevated-output level for agents. Values: `"off"`, `"on"`, `"ask"`, `"full"`. Default: `"on"`.
- `model.primary`: format `provider/model` (e.g. `openai/gpt-5.5`). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit `provider/model`). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
- `model.primary`: format `provider/model` (e.g. `openai/gpt-5.4` for API-key access or `openai-codex/gpt-5.5` for Codex OAuth). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit `provider/model`). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
- `models`: the configured model catalog and allowlist for `/model`. Each entry can include `alias` (shortcut) and `params` (provider-specific, for example `temperature`, `maxTokens`, `cacheRetention`, `context1m`).
- Safe edits: use `openclaw config set agents.defaults.models '<json>' --strict-json --merge` to add entries. `config set` refuses replacements that would remove existing allowlist entries unless you pass `--replace`.
- Provider-scoped configure/onboarding flows merge selected provider models into this map and preserve unrelated providers already configured.
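A sketch of that safe-edit pattern — adding one aliased entry without replacing the existing allowlist (alias value illustrative):
```bash
openclaw config set agents.defaults.models '{"openai-codex/gpt-5.5":{"alias":"gpt"}}' --strict-json --merge
```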
@@ -1274,16 +1274,16 @@ Codex app-server harness.
**Built-in alias shorthands** (only apply when the model is in `agents.defaults.models`):
| Alias | Model |
| ------------------- | -------------------------------------- |
| `opus` | `anthropic/claude-opus-4-6` |
| `sonnet` | `anthropic/claude-sonnet-4-6` |
| `gpt` | `openai/gpt-5.5` |
| `gpt-mini` | `openai/gpt-5.4-mini` |
| `gpt-nano` | `openai/gpt-5.4-nano` |
| `gemini` | `google/gemini-3.1-pro-preview` |
| `gemini-flash` | `google/gemini-3-flash-preview` |
| `gemini-flash-lite` | `google/gemini-3.1-flash-lite-preview` |
| Alias | Model |
| ------------------- | -------------------------------------------------- |
| `opus` | `anthropic/claude-opus-4-6` |
| `sonnet` | `anthropic/claude-sonnet-4-6` |
| `gpt` | `openai/gpt-5.4` or configured Codex OAuth GPT-5.5 |
| `gpt-mini` | `openai/gpt-5.4-mini` |
| `gpt-nano` | `openai/gpt-5.4-nano` |
| `gemini` | `google/gemini-3.1-pro-preview` |
| `gemini-flash` | `google/gemini-3-flash-preview` |
| `gemini-flash-lite` | `google/gemini-3.1-flash-lite-preview` |
Your configured aliases always win over defaults.
@@ -2251,7 +2251,7 @@ Further restrict tools for specific providers or models. Order: base profile →
profile: "coding",
byProvider: {
"google-antigravity": { profile: "minimal" },
"openai/gpt-5.5": { allow: ["group:fs", "sessions_list"] },
"openai/gpt-5.4": { allow: ["group:fs", "sessions_list"] },
},
},
}

View File

@@ -135,11 +135,11 @@ is skipped when a candidate contains redacted secret placeholders such as `***`.
defaults: {
model: {
primary: "anthropic/claude-sonnet-4-6",
fallbacks: ["openai/gpt-5.5"],
fallbacks: ["openai/gpt-5.4"],
},
models: {
"anthropic/claude-sonnet-4-6": { alias: "Sonnet" },
"openai/gpt-5.5": { alias: "GPT" },
"openai/gpt-5.4": { alias: "GPT" },
},
},
},

View File

@@ -169,7 +169,7 @@ This is the highest-leverage compatibility set for self-hosted frontends and too
Use `x-openclaw-model`.
Examples:
`x-openclaw-model: openai/gpt-5.5`
`x-openclaw-model: openai/gpt-5.4`
`x-openclaw-model: gpt-5.5`
If you omit it, the selected agent runs with its normal configured model choice.
@@ -237,7 +237,7 @@ Streaming:
curl -N http://127.0.0.1:18789/v1/chat/completions \
-H 'Authorization: Bearer YOUR_TOKEN' \
-H 'Content-Type: application/json' \
-H 'x-openclaw-model: openai/gpt-5.5' \
-H 'x-openclaw-model: openai/gpt-5.4' \
-d '{
"model": "openclaw/research",
"stream": true,

View File

@@ -65,7 +65,7 @@ Rules of thumb:
- If `allow` is non-empty, everything else is treated as blocked.
- Tool policy is the hard stop: `/exec` cannot override a denied `exec` tool.
- `/exec` only changes session defaults for authorized senders; it does not grant tool access.
Provider tool keys accept either `provider` (e.g. `google-antigravity`) or `provider/model` (e.g. `openai/gpt-5.5`).
Provider tool keys accept either `provider` (e.g. `google-antigravity`) or `provider/model` (e.g. `openai/gpt-5.4`).
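As a sketch — assuming the `byProvider` map from the earlier example lives under `tools` — a `provider/model` key narrows tools like this:
```bash
openclaw config set tools.byProvider '{"openai/gpt-5.4":{"allow":["group:fs","sessions_list"]}}' --strict-json --merge
```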
### Tool groups (shorthands)

View File

@@ -656,20 +656,29 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
</Accordion>
<Accordion title="How does Codex auth work?">
OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). New model refs should use the canonical `openai/gpt-5.5` path; `openai-codex/gpt-*` remains a legacy compatibility alias. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Use
`openai-codex/gpt-5.5` for Codex OAuth through the default PI runner. Use
`openai/gpt-5.4` for current direct OpenAI API-key access. GPT-5.5 direct
API-key access will be supported once OpenAI enables it on the public API; today
GPT-5.5 uses subscription/OAuth via `openai-codex/gpt-5.5` or native Codex
app-server runs with `openai/gpt-5.5` and `embeddedHarness.runtime: "codex"`.
See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
</Accordion>
<Accordion title="Why does OpenClaw still mention openai-codex?">
`openai-codex` is still the internal auth/profile provider id for ChatGPT/Codex OAuth. The model ref should be canonical OpenAI:
`openai-codex` is the provider and auth-profile id for ChatGPT/Codex OAuth.
It is also the explicit PI model prefix for Codex OAuth:
- `openai/gpt-5.5` = canonical GPT-5.5 model ref
- `openai-codex/gpt-5.5` = legacy compatibility alias
- `openai/gpt-5.4` = current direct OpenAI API-key route in PI
- `openai/gpt-5.5` = future direct API-key route once OpenAI enables GPT-5.5 on the API
- `openai-codex/gpt-5.5` = Codex OAuth route in PI
- `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` = native Codex app-server route
- `openai-codex:...` = auth profile id, not a model ref
If you want the direct OpenAI Platform billing/limit path, set
`OPENAI_API_KEY`. If you want ChatGPT/Codex subscription auth, sign in with
`openclaw models auth login --provider openai-codex` and keep model refs on
`openai/*` in new configs.
`openclaw models auth login --provider openai-codex` and use
`openai-codex/*` model refs for PI runs.
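A minimal sketch of the subscription path end to end:
```bash
openclaw models auth login --provider openai-codex
openclaw config set agents.defaults.model.primary openai-codex/gpt-5.5
```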
</Accordion>
@@ -2218,7 +2227,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
agents.defaults.model.primary
```
Models are referenced as `provider/model` (example: `openai/gpt-5.5`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
Models are referenced as `provider/model` (example: `openai/gpt-5.4` or `openai-codex/gpt-5.5`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
</Accordion>
@@ -2343,10 +2352,13 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
<Accordion title="Can I use GPT 5.5 for daily tasks and Codex 5.5 for coding?">
Yes. Set one as default and switch as needed:
- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, or keep the same model and switch auth/profile as needed.
- **Default:** set `agents.defaults.model.primary` to `openai/gpt-5.5`.
- **Quick switch (per session):** `/model openai/gpt-5.4` for current direct OpenAI API-key tasks or `/model openai-codex/gpt-5.5` for GPT-5.5 Codex OAuth tasks.
- **Default:** set `agents.defaults.model.primary` to `openai/gpt-5.4` for API-key usage or `openai-codex/gpt-5.5` for GPT-5.5 Codex OAuth usage.
- **Sub-agents:** route coding tasks to sub-agents with a different default model.
Direct API-key access for `openai/gpt-5.5` will be supported once OpenAI enables
GPT-5.5 on the public API. Until then GPT-5.5 is subscription/OAuth-only.
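A sketch of that split setup (the `fallbacks` write assumes the JSON-array form used for other list values):
```bash
openclaw config set agents.defaults.model.primary openai/gpt-5.4
openclaw config set agents.defaults.model.fallbacks '["openai-codex/gpt-5.5"]' --strict-json
```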
See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
</Accordion>
@@ -2354,9 +2366,8 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
<Accordion title="How do I configure fast mode for GPT 5.5?">
Use either a session toggle or a config default:
- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5`.
- **Per model default:** set `agents.defaults.models["openai/gpt-5.5"].params.fastMode` to `true`.
- **Legacy aliases:** older `openai-codex/gpt-*` entries can keep their own params, but new configs should put params on `openai/gpt-*`.
- **Per session:** send `/fast on` while the session is using `openai/gpt-5.4` or `openai-codex/gpt-5.5`.
- **Per model default:** set `agents.defaults.models["openai/gpt-5.4"].params.fastMode` or `agents.defaults.models["openai-codex/gpt-5.5"].params.fastMode` to `true`.
Example:
@@ -2365,7 +2376,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
agents: {
defaults: {
models: {
"openai/gpt-5.5": {
"openai/gpt-5.4": {
params: {
fastMode: true,
},
@@ -2436,7 +2447,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
model: { primary: "minimax/MiniMax-M2.7" },
models: {
"minimax/MiniMax-M2.7": { alias: "minimax" },
"openai/gpt-5.5": { alias: "gpt" },
"openai/gpt-5.4": { alias: "gpt" },
},
},
},
@@ -2464,7 +2475,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
- `opus` → `anthropic/claude-opus-4-6`
- `sonnet` → `anthropic/claude-sonnet-4-6`
- `gpt` → `openai/gpt-5.5`
- `gpt` → `openai/gpt-5.4` for API-key setups, or `openai-codex/gpt-5.5` when configured for Codex OAuth
- `gpt-mini` → `openai/gpt-5.4-mini`
- `gpt-nano` → `openai/gpt-5.4-nano`
- `gemini` → `google/gemini-3.1-pro-preview`

View File

@@ -516,7 +516,7 @@ Live tests are split into two layers so we can isolate failures:
- How to select models:
- `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet 4.6+, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.7, Grok 4)
- `OPENCLAW_LIVE_MODELS=all` is an alias for the modern allowlist
- or `OPENCLAW_LIVE_MODELS="openai/gpt-5.5,anthropic/claude-opus-4-6,..."` (comma allowlist)
- or `OPENCLAW_LIVE_MODELS="openai/gpt-5.4,openai-codex/gpt-5.5,anthropic/claude-opus-4-6,..."` (comma allowlist)
- Modern/all sweeps default to a curated high-signal cap; set `OPENCLAW_LIVE_MAX_MODELS=0` for an exhaustive modern sweep or a positive number for a smaller cap.
- How to select providers:
- `OPENCLAW_LIVE_PROVIDERS="google,google-antigravity,google-gemini-cli"` (comma allowlist)
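For example, a narrow one-model sweep (sketch; the test path matches the recipes below):
```bash
OPENCLAW_LIVE_MODELS="openai-codex/gpt-5.5" pnpm test:live src/agents/models.profiles.live.test.ts
```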
@@ -702,8 +702,9 @@ Docker notes:
- Optional Guardian probe: `OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1`
- The smoke sets `OPENCLAW_AGENT_HARNESS_FALLBACK=none` so a broken Codex
harness cannot pass by silently falling back to PI.
- Auth: `OPENAI_API_KEY` from the shell/profile, plus optional copied
`~/.codex/auth.json` and `~/.codex/config.toml`
- Auth: Codex app-server auth from the local Codex subscription login. Docker
smokes can also provide `OPENAI_API_KEY` for non-Codex probes when applicable,
plus optional copied `~/.codex/auth.json` and `~/.codex/config.toml`.
Local recipe:
@@ -744,13 +745,13 @@ Docker notes:
Narrow, explicit allowlists are fastest and least flaky:
- Single model, direct (no gateway):
- `OPENCLAW_LIVE_MODELS="openai/gpt-5.5" pnpm test:live src/agents/models.profiles.live.test.ts`
- `OPENCLAW_LIVE_MODELS="openai/gpt-5.4" pnpm test:live src/agents/models.profiles.live.test.ts`
- Single model, gateway smoke:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Tool calling across several providers:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4,openai-codex/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Google focus (Gemini API key + Antigravity):
- Gemini (API key): `OPENCLAW_LIVE_GATEWAY_MODELS="google/gemini-3-flash-preview" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
@@ -773,8 +774,8 @@ There is no fixed “CI model list” (live is opt-in), but these are the **reco
This is the “common models” run we expect to keep working:
- OpenAI (non-Codex): `openai/gpt-5.5` (optional: `openai/gpt-5.4-mini`)
- OpenAI Codex OAuth: `openai/gpt-5.5` (`openai-codex/gpt-*` remains a legacy alias)
- OpenAI (non-Codex): `openai/gpt-5.4` (optional: `openai/gpt-5.4-mini`)
- OpenAI Codex OAuth: `openai-codex/gpt-5.5`
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
- Google (Gemini API): `google/gemini-3.1-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
- Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
@@ -782,13 +783,13 @@ This is the “common models” run we expect to keep working:
- MiniMax: `minimax/MiniMax-M2.7`
Run gateway smoke with tools + image:
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4,openai-codex/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
### Baseline: tool calling (Read + optional Exec)
Pick at least one per provider family:
- OpenAI: `openai/gpt-5.5` (or `openai/gpt-5.4-mini`)
- OpenAI: `openai/gpt-5.4` (or `openai/gpt-5.4-mini`)
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
- Google: `google/gemini-3-flash-preview` (or `google/gemini-3.1-pro-preview`)
- Z.AI (GLM): `zai/glm-4.7`

View File

@@ -156,7 +156,7 @@ read_when:
"defaults": {
"model": {
"primary": "anthropic/claude-opus-4-6",
"fallbacks": ["anthropic/claude-sonnet-4-6", "openai/gpt-5.5"]
"fallbacks": ["anthropic/claude-sonnet-4-6", "openai/gpt-5.4"]
},
"maxConcurrent": 4
},

View File

@@ -422,7 +422,7 @@ File-attachment extraction behavior:
When media understanding runs, `/status` includes a short summary line:
```
📎 Media: image ok (openai/gpt-5.5) · audio skipped (maxBytes)
📎 Media: image ok (openai/gpt-5.4) · audio skipped (maxBytes)
```
This shows per-capability outcomes and the chosen provider/model when applicable.

View File

@@ -37,15 +37,25 @@ the harness for compatibility.
## Pick the right model prefix
OpenClaw now keeps OpenAI GPT model refs canonical as `openai/*`:
OpenAI-family routes are prefix-specific. Use `openai-codex/*` when you want
Codex OAuth through PI; use `openai/*` when you want direct OpenAI API access or
when you are forcing the native Codex app-server harness:
| Model ref | Runtime path | Use when |
| ----------------------------------------------------- | -------------------------------------------- | ----------------------------------------------------------------------- |
| `openai/gpt-5.5` | OpenAI provider through OpenClaw/PI plumbing | You want direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
| Model ref | Runtime path | Use when |
| ----------------------------------------------------- | -------------------------------------------- | ------------------------------------------------------------------------- |
| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through OpenClaw/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
| `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
Legacy `openai-codex/gpt-*` and `codex/gpt-*` refs remain accepted as
compatibility aliases, but new docs/config examples should use `openai/gpt-*`.
GPT-5.5 is currently subscription/OAuth-only in OpenClaw. Use
`openai-codex/gpt-5.5` for PI OAuth, or `openai/gpt-5.5` with the Codex
app-server harness. Direct API-key access for `openai/gpt-5.5` will be supported
once OpenAI enables GPT-5.5 on the public API.
Legacy `codex/gpt-*` refs remain accepted as compatibility aliases. New PI
Codex OAuth configs should use `openai-codex/gpt-*`; new native app-server
harness configs should use `openai/gpt-*` plus `embeddedHarness.runtime:
"codex"`.
Use `/status` to confirm the effective harness for the current session. If the
selection is surprising, enable debug logging for the `agents/harness` subsystem

View File

@@ -124,9 +124,8 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
The bundled Codex plugin follows this pattern:
- provider id: `codex`
- user model refs: canonical `openai/gpt-5.5` plus
`embeddedHarness.runtime: "codex"`; legacy `codex/gpt-*` refs remain accepted
for compatibility
- user model refs: `openai/gpt-5.5` plus `embeddedHarness.runtime: "codex"`;
legacy `codex/gpt-*` refs remain accepted for compatibility
- harness id: `codex`
- auth: synthetic provider availability, because the Codex harness owns the
native Codex login/session
@@ -158,9 +157,10 @@ into the OpenClaw transcript.
The bundled `codex` harness is the native Codex mode for embedded OpenClaw
agent turns. Enable the bundled `codex` plugin first, and include `codex` in
`plugins.allow` if your config uses a restrictive allowlist. New configs should
use `openai/gpt-*` with `embeddedHarness.runtime: "codex"`. Legacy
`openai-codex/*` and `codex/*` model refs remain compatibility aliases.
`plugins.allow` if your config uses a restrictive allowlist. Native app-server
configs should use `openai/gpt-*` with `embeddedHarness.runtime: "codex"`.
Use `openai-codex/*` for Codex OAuth through PI instead. Legacy `codex/*`
model refs remain compatibility aliases for the native harness.
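If your config uses a restrictive allowlist, a sketch of admitting the bundled harness (assumes `--merge` works for this list as it does for model maps):
```bash
openclaw config set plugins.allow '["codex"]' --strict-json --merge
```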
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,

View File

@@ -7,27 +7,37 @@ read_when:
title: "OpenAI"
---
OpenAI provides developer APIs for GPT models. OpenClaw supports two auth routes behind the same canonical OpenAI model refs:
OpenAI provides developer APIs for GPT models. OpenClaw supports three OpenAI-family routes. The model prefix selects the route:
- **API key** — direct OpenAI Platform access with usage-based billing (`openai/*` models)
- **Codex subscription** — ChatGPT/Codex sign-in with subscription access. The internal auth/provider id is `openai-codex`, but new model refs should still use `openai/*`.
- **Codex subscription through PI** — ChatGPT/Codex sign-in with subscription access (`openai-codex/*` models)
- **Codex app-server harness** — native Codex app-server execution (`openai/*` models plus `agents.defaults.embeddedHarness.runtime: "codex"`)
OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw.
<Note>
GPT-5.5 is currently available in OpenClaw through subscription/OAuth routes:
`openai-codex/gpt-5.5` with the PI runner, or `openai/gpt-5.5` with the
Codex app-server harness. Direct API-key access for `openai/gpt-5.5` will be
supported once OpenAI enables GPT-5.5 on the public API; until then use an
API-enabled model such as `openai/gpt-5.4` for `OPENAI_API_KEY` setups.
</Note>
## OpenClaw feature coverage
| OpenAI capability | OpenClaw surface | Status |
| ------------------------- | ----------------------------------------- | ------------------------------------------------------ |
| Chat / Responses | `openai/<model>` model provider | Yes |
| Codex subscription models | `openai/<model>` with `openai-codex` auth | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | `image_generate` | Yes |
| Videos | `video_generate` | Yes |
| Text-to-speech | `messages.tts.provider: "openai"` / `tts` | Yes |
| Batch speech-to-text | `tools.media.audio` / media understanding | Yes |
| Streaming speech-to-text | Voice Call `streaming.provider: "openai"` | Yes |
| Realtime voice | Voice Call `realtime.provider: "openai"` | Yes |
| Embeddings | memory embedding provider | Yes |
| OpenAI capability | OpenClaw surface | Status |
| ------------------------- | ------------------------------------------------------ | ------------------------------------------------------ |
| Chat / Responses | `openai/<model>` model provider | Yes |
| Codex subscription models | `openai-codex/<model>` with `openai-codex` OAuth | Yes |
| Codex app-server harness | `openai/<model>` with `embeddedHarness.runtime: codex` | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | `image_generate` | Yes |
| Videos | `video_generate` | Yes |
| Text-to-speech | `messages.tts.provider: "openai"` / `tts` | Yes |
| Batch speech-to-text | `tools.media.audio` / media understanding | Yes |
| Streaming speech-to-text | Voice Call `streaming.provider: "openai"` | Yes |
| Realtime voice | Voice Call `realtime.provider: "openai"` | Yes |
| Embeddings | memory embedding provider | Yes |
## Getting started
@@ -63,11 +73,14 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Route | Auth |
|-----------|-------|------|
| `openai/gpt-5.5` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.5-pro` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.4` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.4-mini` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.5` | Future direct API route once OpenAI enables GPT-5.5 on the API | `OPENAI_API_KEY` |
<Note>
`openai-codex/*` remains accepted as a legacy compatibility alias, but new configs should use `openai/*`.
`openai/*` is the direct OpenAI API-key route unless you explicitly force
the Codex app-server harness. GPT-5.5 itself is currently
subscription/OAuth-only; use `openai-codex/*` for Codex OAuth through the
default PI runner.
</Note>
### Config example
@@ -75,7 +88,7 @@ Choose your preferred auth method and follow the setup steps.
```json5
{
env: { OPENAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
}
```
@@ -108,7 +121,7 @@ Choose your preferred auth method and follow the setup steps.
</Step>
<Step title="Set the default model">
```bash
openclaw config set agents.defaults.model.primary openai/gpt-5.5
openclaw config set agents.defaults.model.primary openai-codex/gpt-5.5
```
</Step>
<Step title="Verify the model is available">
@@ -122,17 +135,19 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Route | Auth |
|-----------|-------|------|
| `openai/gpt-5.5` | ChatGPT/Codex OAuth | Codex sign-in |
| `openai-codex/gpt-5.5` | ChatGPT/Codex OAuth through PI | Codex sign-in |
| `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` | Codex app-server harness | Codex app-server auth |
<Note>
`openai-codex/*` and `codex/*` model refs are legacy compatibility aliases. Keep using the `openai-codex` provider id for auth/profile commands.
Keep using the `openai-codex` provider id for auth/profile commands. The
`openai-codex/*` model prefix is also the explicit PI route for Codex OAuth.
</Note>
### Config example
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
}
```
@@ -154,7 +169,7 @@ Choose your preferred auth method and follow the setup steps.
OpenClaw treats model metadata and the runtime context cap as separate values.
For `openai/gpt-5.5` through Codex OAuth:
For `openai-codex/gpt-5.5` through Codex OAuth:
- Native `contextWindow`: `1000000`
- Default runtime `contextTokens` cap: `272000`
@@ -262,7 +277,7 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov
## GPT-5 prompt contribution
OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai-codex/gpt-5.5`, `openai/gpt-5.4`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `embeddedHarness.runtime: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
@@ -554,7 +569,10 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.5": {
"openai/gpt-5.4": {
params: { transport: "auto" },
},
"openai-codex/gpt-5.5": {
params: { transport: "auto" },
},
},
@@ -570,7 +588,7 @@ the Server-side compaction accordion below.
</Accordion>
<Accordion title="WebSocket warm-up">
OpenClaw enables WebSocket warm-up by default for `openai/*` to reduce first-turn latency.
OpenClaw enables WebSocket warm-up by default for `openai/*` and `openai-codex/*` to reduce first-turn latency.
```json5
// Disable warm-up
@@ -578,7 +596,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.5": {
"openai/gpt-5.4": {
params: { openaiWsWarmup: false },
},
},
@@ -590,7 +608,7 @@ the Server-side compaction accordion below.
</Accordion>
<Accordion title="Fast mode">
OpenClaw exposes a shared fast-mode toggle for `openai/*`:
OpenClaw exposes a shared fast-mode toggle for `openai/*` and `openai-codex/*`:
- **Chat/UI:** `/fast status|on|off`
- **Config:** `agents.defaults.models["<provider>/<model>"].params.fastMode`
@@ -602,7 +620,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.5": { params: { fastMode: true } },
"openai/gpt-5.4": { params: { fastMode: true } },
},
},
},
@@ -623,7 +641,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.5": { params: { serviceTier: "priority" } },
"openai/gpt-5.4": { params: { serviceTier: "priority" } },
},
},
},
@@ -669,7 +687,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.5": {
"openai/gpt-5.4": {
params: {
responsesServerCompaction: true,
responsesCompactThreshold: 120000,
@@ -687,7 +705,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.5": {
"openai/gpt-5.4": {
params: { responsesServerCompaction: false },
},
},

View File

@@ -171,7 +171,7 @@ title: Image generation roundtrip
surface: image
tags: [media, image, roundtrip]
models:
primary: openai/gpt-5.5
primary: openai/gpt-5.4
requires:
tools: [image_generate]
plugins: [openai, qa-channel]

View File

@@ -32,11 +32,11 @@ For a high-level overview, see [Onboarding (CLI)](/start/wizard).
- **Anthropic API key**: preferred Anthropic assistant choice in onboarding/configure.
- **Anthropic setup-token**: still available in onboarding/configure, though OpenClaw now prefers Claude CLI reuse when available.
- **OpenAI Code (Codex) subscription (OAuth)**: browser flow; paste the `code#state`.
- Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
- Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or already OpenAI-family.
- **OpenAI Code (Codex) subscription (device pairing)**: browser pairing flow with a short-lived device code.
- Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
- Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or already OpenAI-family.
- **OpenAI API key**: uses `OPENAI_API_KEY` if present or prompts for a key, then stores it in auth profiles.
- Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset, `openai/*`, or `openai-codex/*`.
- Sets `agents.defaults.model` to `openai/gpt-5.4` when model is unset, `openai/*`, or `openai-codex/*`.
- **xAI (Grok) API key**: prompts for `XAI_API_KEY` and configures xAI as a model provider.
- **OpenCode**: prompts for `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`, get it at https://opencode.ai/auth) and lets you pick the Zen or Go catalog.
- **Ollama**: offers **Cloud + Local**, **Cloud only**, or **Local only** first. `Cloud only` prompts for `OLLAMA_API_KEY` and uses `https://ollama.com`; the host-backed modes prompt for the Ollama base URL, discover available models, and auto-pull the selected local model when needed; `Cloud + Local` also checks whether that Ollama host is signed in for cloud access.
@@ -182,7 +182,7 @@ Use this reference page for flag semantics and step ordering.
```bash
openclaw agents add work \
--workspace ~/.openclaw/workspace-work \
--model openai/gpt-5.5 \
--model openai/gpt-5.4 \
--bind whatsapp:biz \
--non-interactive \
--json

View File

@@ -201,7 +201,7 @@ sessions, and auth profiles. Running without `--workspace` launches the wizard.
```bash
openclaw agents add work \
--workspace ~/.openclaw/workspace-work \
--model openai/gpt-5.5 \
--model openai/gpt-5.4 \
--bind whatsapp:biz \
--non-interactive \
--json

View File

@@ -130,19 +130,19 @@ What you set:
<Accordion title="OpenAI Code subscription (OAuth)">
Browser flow; paste `code#state`.
Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or already OpenAI-family.
</Accordion>
<Accordion title="OpenAI Code subscription (device pairing)">
Browser pairing flow with a short-lived device code.
Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or already OpenAI-family.
</Accordion>
<Accordion title="OpenAI API key">
Uses `OPENAI_API_KEY` if present or prompts for a key, then stores the credential in auth profiles.
Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset, `openai/*`, or `openai-codex/*`.
Sets `agents.defaults.model` to `openai/gpt-5.4` when model is unset, `openai/*`, or `openai-codex/*`.
</Accordion>
<Accordion title="xAI (Grok) API key">

View File

@@ -481,7 +481,7 @@ Notes:
| `/acp close` | Close session and unbind thread targets. | `/acp close` |
| `/acp status` | Show backend, mode, state, runtime options, capabilities. | `/acp status` |
| `/acp set-mode` | Set runtime mode for target session. | `/acp set-mode plan` |
| `/acp set` | Generic runtime config option write. | `/acp set model openai/gpt-5.5` |
| `/acp set` | Generic runtime config option write. | `/acp set model openai/gpt-5.4` |
| `/acp cwd` | Set runtime working directory override. | `/acp cwd /Users/user/Projects/repo` |
| `/acp permissions` | Set approval policy profile. | `/acp permissions strict` |
| `/acp timeout` | Set runtime timeout (seconds). | `/acp timeout 120` |

View File

@@ -53,7 +53,7 @@ without writing custom OpenClaw code for each workflow.
"defaultProvider": "openai-codex",
"defaultModel": "gpt-5.5",
"defaultAuthProfileId": "main",
"allowedModels": ["openai/gpt-5.5"],
"allowedModels": ["openai/gpt-5.4"],
"maxTokens": 800,
"timeoutMs": 30000
}

View File

@@ -205,7 +205,7 @@ The filtering order is:
Each level can further restrict tools, but cannot grant back denied tools from earlier levels.
If `agents.list[].tools.sandbox.tools` is set, it replaces `tools.sandbox.tools` for that agent.
If `agents.list[].tools.profile` is set, it overrides `tools.profile` for that agent.
Provider tool keys accept either `provider` (e.g. `google-antigravity`) or `provider/model` (e.g. `openai/gpt-5.5`).
Provider tool keys accept either `provider` (e.g. `google-antigravity`) or `provider/model` (e.g. `openai/gpt-5.4`).
Tool policies support `group:*` shorthands that expand to multiple tools. See [Tool groups](/gateway/sandbox-vs-tool-policy-vs-elevated#tool-groups-shorthands) for the full list.
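As a sketch, a per-agent override using the `agents.list[]` path style shown elsewhere in these docs:
```bash
openclaw config set 'agents.list[0].tools.profile' minimal
```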

View File

@@ -241,7 +241,7 @@ Examples:
/model
/model list
/model 3
/model openai/gpt-5.5
/model openai/gpt-5.4
/model opus@anthropic:default
/model status
```