| summary | read_when | title |
|---|---|---|
| Use OpenAI via API keys or Codex subscription in OpenClaw | | OpenAI |
OpenAI
OpenAI provides developer APIs for GPT models. Codex supports ChatGPT sign-in for subscription access or API key sign-in for usage-based access. Codex cloud requires ChatGPT sign-in. OpenAI explicitly supports subscription OAuth usage in external tools/workflows like OpenClaw.
Option A: OpenAI API key (OpenAI Platform)
Best for: direct API access and usage-based billing. Get your API key from the OpenAI dashboard.
CLI setup
openclaw onboard --auth-choice openai-api-key
# or non-interactive
openclaw onboard --openai-api-key "$OPENAI_API_KEY"
Config snippet
{
env: { OPENAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
}
OpenAI's current API model docs list gpt-5.4 and gpt-5.4-pro for direct
OpenAI API usage. OpenClaw forwards both through the openai/* Responses path.
Option B: OpenAI Code (Codex) subscription
Best for: using ChatGPT/Codex subscription access instead of an API key. Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or API key sign-in.
CLI setup (Codex OAuth)
# Run Codex OAuth in the wizard
openclaw onboard --auth-choice openai-codex
# Or run OAuth directly
openclaw models auth login --provider openai-codex
Config snippet (Codex subscription)
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
}
OpenAI's current Codex docs list gpt-5.4 as the current Codex model. OpenClaw
maps that to openai-codex/gpt-5.4 for ChatGPT/Codex OAuth usage.
Transport default
OpenClaw uses pi-ai for model streaming. For both openai/* and
openai-codex/*, the default transport is "auto" (WebSocket-first, then SSE
fallback).
You can set agents.defaults.models.<provider/model>.params.transport:
- "sse": force SSE
- "websocket": force WebSocket
- "auto": try WebSocket, then fall back to SSE
For openai/* (Responses API), OpenClaw also enables WebSocket warm-up by
default (openaiWsWarmup: true) when WebSocket transport is used.
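The "auto" rule above can be sketched as a small selection function. This is illustrative only; resolveTransport and its arguments are hypothetical names, not OpenClaw's actual internals:

```typescript
// Illustrative sketch of transport selection. An explicit "sse" or
// "websocket" preference always wins; "auto" prefers WebSocket and falls
// back to SSE when the WebSocket connection cannot be established.
type Transport = "sse" | "websocket";
type TransportPreference = Transport | "auto";

function resolveTransport(
  preference: TransportPreference,
  wsConnected: boolean,
): Transport {
  if (preference !== "auto") return preference;
  return wsConnected ? "websocket" : "sse";
}
```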
Example: set the transport explicitly for a model:
{
agents: {
defaults: {
model: { primary: "openai-codex/gpt-5.4" },
models: {
"openai-codex/gpt-5.4": {
params: {
transport: "auto",
},
},
},
},
},
}
OpenAI WebSocket warm-up
OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
openai/* to reduce first-turn latency when using WebSocket transport.
Disable warm-up
{
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
params: {
openaiWsWarmup: false,
},
},
},
},
},
}
Enable warm-up explicitly
{
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
params: {
openaiWsWarmup: true,
},
},
},
},
},
}
OpenAI priority processing
OpenAI's API exposes priority processing via service_tier=priority. In
OpenClaw, set agents.defaults.models["openai/<model>"].params.serviceTier to
pass that field through on direct openai/* Responses requests.
{
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
params: {
serviceTier: "priority",
},
},
},
},
},
}
Supported values are auto, default, flex, and priority.
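As a sketch of the pass-through (hypothetical helper names, not OpenClaw's internals), a configured serviceTier simply becomes the service_tier field on the Responses request body:

```typescript
// Hypothetical sketch: merge a configured serviceTier into a Responses
// request body. When no tier is configured, the field is omitted so the
// API-side default applies.
type ServiceTier = "auto" | "default" | "flex" | "priority";

function buildResponsesBody(
  model: string,
  input: string,
  serviceTier?: ServiceTier,
): Record<string, unknown> {
  const body: Record<string, unknown> = { model, input };
  if (serviceTier !== undefined) body.service_tier = serviceTier;
  return body;
}
```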
OpenAI Responses server-side compaction
For direct OpenAI Responses models (openai/* using api: "openai-responses" with
baseUrl on api.openai.com), OpenClaw now auto-enables OpenAI server-side
compaction payload hints:
- Forces store: true (unless model compat sets supportsStore: false)
- Injects context_management: [{ type: "compaction", compact_threshold: ... }]
By default, compact_threshold is 70% of model contextWindow (or 80000
when unavailable).
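That default can be expressed as follows (a sketch of the stated rule, not OpenClaw's actual code):

```typescript
// Default compact_threshold per the rule above: 70% of the model's
// contextWindow when known, otherwise 80000 tokens.
function defaultCompactThreshold(contextWindow?: number): number {
  return contextWindow !== undefined
    ? Math.floor(contextWindow * 0.7)
    : 80000;
}
```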
Enable server-side compaction explicitly
Use this when you want to force context_management injection on compatible
Responses models (for example Azure OpenAI Responses):
{
agents: {
defaults: {
models: {
"azure-openai-responses/gpt-5.4": {
params: {
responsesServerCompaction: true,
},
},
},
},
},
}
Enable with a custom threshold
{
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
params: {
responsesServerCompaction: true,
responsesCompactThreshold: 120000,
},
},
},
},
},
}
Disable server-side compaction
{
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
params: {
responsesServerCompaction: false,
},
},
},
},
},
}
responsesServerCompaction only controls context_management injection.
Direct OpenAI Responses models still force store: true unless compat sets
supportsStore: false.
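Putting the two rules together, here is a sketch of how the payload hints compose (hypothetical names; OpenClaw's real implementation may differ):

```typescript
// Sketch of the two independent rules above: store is forced on unless
// compat opts out, and context_management is injected only when
// server-side compaction is enabled.
interface ModelCompat {
  supportsStore?: boolean;
}
interface ModelParams {
  responsesServerCompaction?: boolean;
  responsesCompactThreshold?: number;
}

function applyResponsesHints(
  body: Record<string, unknown>,
  compat: ModelCompat,
  params: ModelParams,
): Record<string, unknown> {
  if (compat.supportsStore !== false) body.store = true;
  if (params.responsesServerCompaction) {
    body.context_management = [
      {
        type: "compaction",
        compact_threshold: params.responsesCompactThreshold ?? 80000,
      },
    ];
  }
  return body;
}
```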
Notes
- Model refs always use provider/model (see /concepts/models).
- Auth details + reuse rules are in /concepts/oauth.