---
summary: Use OpenAI via API keys or Codex subscription in OpenClaw
read_when:
  - You want to use OpenAI models in OpenClaw
  - You want Codex subscription auth instead of API keys
title: OpenAI
---

# OpenAI

OpenAI provides developer APIs for GPT models. Codex supports ChatGPT sign-in for subscription access or API key sign-in for usage-based access. Codex cloud requires ChatGPT sign-in. OpenAI explicitly supports subscription OAuth usage in external tools/workflows like OpenClaw.

## Option A: OpenAI API key (OpenAI Platform)

Best for: direct API access and usage-based billing. Get your API key from the OpenAI dashboard.

### CLI setup

```bash
openclaw onboard --auth-choice openai-api-key
# or non-interactive
openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```

### Config snippet

```json5
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
}
```
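As an aside, model refs like `"openai/gpt-5.2"` combine a provider prefix and a model id, split on the first slash. A minimal sketch of that format (hypothetical helper, not OpenClaw's actual parser):

```typescript
// Splits a "provider/model" ref at the first slash, so model ids that
// themselves contain dashes or dots pass through unchanged.
function splitModelRef(ref: string): { provider: string; model: string } {
  const slash = ref.indexOf("/");
  if (slash < 0) {
    throw new Error(`invalid model ref: ${ref}`);
  }
  return { provider: ref.slice(0, slash), model: ref.slice(slash + 1) };
}
```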

## Option B: OpenAI Code (Codex) subscription

Best for: using ChatGPT/Codex subscription access instead of an API key. Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or API key sign-in.

### CLI setup (Codex OAuth)

```bash
# Run Codex OAuth in the wizard
openclaw onboard --auth-choice openai-codex

# Or run OAuth directly
openclaw models auth login --provider openai-codex
```

### Config snippet (Codex subscription)

```json5
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
}
```

## Transport default

OpenClaw uses `pi-ai` for model streaming. For both `openai/*` and `openai-codex/*`, the default transport is `"auto"` (WebSocket-first, then SSE fallback).

You can set `agents.defaults.models.<provider/model>.params.transport`:

  • "sse": force SSE
  • "websocket": force WebSocket
  • "auto": try WebSocket, then fall back to SSE

For `openai/*` (Responses API), OpenClaw also enables WebSocket warm-up by default (`openaiWsWarmup: true`) when WebSocket transport is used.

### Example: per-model transport override

```json5
{
  agents: {
    defaults: {
      model: { primary: "openai-codex/gpt-5.3-codex" },
      models: {
        "openai-codex/gpt-5.3-codex": {
          params: {
            transport: "auto",
          },
        },
      },
    },
  },
}
```

## OpenAI WebSocket warm-up

OpenAI docs describe warm-up as optional. OpenClaw enables it by default for `openai/*` to reduce first-turn latency when using WebSocket transport.

### Disable warm-up

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.2": {
          params: {
            openaiWsWarmup: false,
          },
        },
      },
    },
  },
}
```

### Enable warm-up explicitly

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.2": {
          params: {
            openaiWsWarmup: true,
          },
        },
      },
    },
  },
}
```

## OpenAI Responses server-side compaction

For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with a `baseUrl` on `api.openai.com`), OpenClaw auto-enables OpenAI server-side compaction payload hints:

- Forces `store: true` (unless model compat sets `supportsStore: false`)
- Injects `context_management: [{ type: "compaction", compact_threshold: ... }]`

By default, `compact_threshold` is 70% of the model's `contextWindow` (or 80000 when the context window is unavailable).
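That default can be sketched as a small rule (hypothetical helper with assumed names, not OpenClaw's implementation):

```typescript
// Default compact_threshold: 70% of the model's context window,
// or a fixed 80000 when the context window is unknown.
function defaultCompactThreshold(contextWindow?: number): number {
  if (contextWindow === undefined || contextWindow <= 0) {
    return 80000;
  }
  return Math.floor(contextWindow * 0.7);
}
```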

### Enable server-side compaction explicitly

Use this when you want to force `context_management` injection on compatible Responses models (for example, Azure OpenAI Responses):

```json5
{
  agents: {
    defaults: {
      models: {
        "azure-openai-responses/gpt-5.2": {
          params: {
            responsesServerCompaction: true,
          },
        },
      },
    },
  },
}
```

### Enable with a custom threshold

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.2": {
          params: {
            responsesServerCompaction: true,
            responsesCompactThreshold: 120000,
          },
        },
      },
    },
  },
}
```

### Disable server-side compaction

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.2": {
          params: {
            responsesServerCompaction: false,
          },
        },
      },
    },
  },
}
```

`responsesServerCompaction` only controls `context_management` injection. Direct OpenAI Responses models still force `store: true` unless compat sets `supportsStore: false`.

## Notes