Mirror of https://github.com/openclaw/openclaw.git (synced 2026-05-06 17:00:50 +00:00)
docs: update OpenAI GPT-5.5 API guidance
@@ -10,7 +10,7 @@ The CI runs on every push to `main` and every pull request. It uses smart scopin
 QA Lab has dedicated CI lanes outside the main smart-scoped workflow. The
 `Parity gate` workflow runs on matching PR changes and manual dispatch; it
-builds the private QA runtime and compares the mock GPT-5.4 and Opus 4.6
+builds the private QA runtime and compares the mock GPT-5.5 and Opus 4.6
 agentic packs. The `QA-Lab - All Lanes` workflow runs nightly on `main` and on
 manual dispatch; it fans out the mock parity gate, live Matrix lane, and live
 Telegram lane as parallel jobs. The live jobs use the `qa-live-shared`
 
@@ -30,9 +30,9 @@ Reference for **LLM/model providers** (not chat channels like WhatsApp/Telegram)
 `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
 Legacy `claude-cli/*`, `google-gemini-cli/*`, and `codex-cli/*` refs migrate
 back to canonical provider refs with the runtime recorded separately.
-- GPT-5.5 is available through `openai-codex/gpt-5.5` in PI, the native
-  Codex app-server harness, and the public OpenAI API when the bundled PI
-  catalog exposes `openai/gpt-5.5` for your install.
+- GPT-5.5 is available through `openai/gpt-5.5` for direct API-key traffic,
+  `openai-codex/gpt-5.5` in PI for Codex OAuth, and the native Codex
+  app-server harness when `embeddedHarness.runtime: "codex"` is set.
 
 ## Plugin-owned provider behavior
 
@@ -71,10 +71,9 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**
 - Provider: `openai`
 - Auth: `OPENAI_API_KEY`
 - Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
-- Example models: `openai/gpt-5.5`, `openai/gpt-5.4`, `openai/gpt-5.4-mini`
-  - GPT-5.5 direct API support depends on the bundled PI catalog version for
-    your install; verify with `openclaw models list --provider openai` before
-    using `openai/gpt-5.5` without the Codex app-server runtime.
+- Example models: `openai/gpt-5.5`, `openai/gpt-5.4-mini`
+  - Verify account/model availability with `openclaw models list --provider openai`
+    if a specific install or API key behaves differently.
 - CLI: `openclaw onboard --auth-choice openai-api-key`
 - Default transport is `auto` (WebSocket-first, SSE fallback)
 - Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
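The rotation variables in the hunk above are plain environment variables, so they can also live in the config `env` block. A minimal sketch, assuming the `env` block accepts these variables the same way it accepts `OPENAI_API_KEY` in the other config examples in this commit; all key values are placeholders:

```json5
{
  env: {
    // Placeholder keys — the rotation pool plus the single live override.
    OPENAI_API_KEY: "sk-primary...",
    OPENAI_API_KEYS: "sk-a...,sk-b...",
    OPENCLAW_LIVE_OPENAI_KEY: "sk-live...",
  },
  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
}
```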
@@ -91,7 +90,7 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**
 
 ```json5
 {
-  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
+  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
 }
 ```
 
@@ -238,7 +238,7 @@ refs and write a judged Markdown report:
 
 ```bash
 pnpm openclaw qa character-eval \
-  --model openai/gpt-5.4,thinking=medium,fast \
+  --model openai/gpt-5.5,thinking=medium,fast \
   --model openai/gpt-5.2,thinking=xhigh \
   --model openai/gpt-5,thinking=xhigh \
   --model anthropic/claude-opus-4-6,thinking=high \
@@ -246,7 +246,7 @@ pnpm openclaw qa character-eval \
   --model zai/glm-5.1,thinking=high \
   --model moonshot/kimi-k2.5,thinking=high \
   --model google/gemini-3.1-pro-preview,thinking=high \
-  --judge-model openai/gpt-5.4,thinking=xhigh,fast \
+  --judge-model openai/gpt-5.5,thinking=xhigh,fast \
   --judge-model anthropic/claude-opus-4-6,thinking=high \
   --blind-judge-models \
   --concurrency 16 \
@@ -263,7 +263,7 @@ Use `--blind-judge-models` when comparing providers: the judge prompt still gets
 every transcript and run status, but candidate refs are replaced with neutral
 labels such as `candidate-01`; the report maps rankings back to real refs after
 parsing.
-Candidate runs default to `high` thinking, with `medium` for GPT-5.4 and `xhigh`
+Candidate runs default to `high` thinking, with `medium` for GPT-5.5 and `xhigh`
 for older OpenAI eval refs that support it. Override a specific candidate inline with
 `--model provider/model,thinking=<level>`. `--thinking <level>` still sets a
 global fallback, and the older `--model-thinking <provider/model=level>` form is
@@ -278,12 +278,12 @@ Candidate and judge model runs both default to concurrency 16. Lower
 `--concurrency` or `--judge-concurrency` when provider limits or local gateway
 pressure make a run too noisy.
 When no candidate `--model` is passed, the character eval defaults to
-`openai/gpt-5.4`, `openai/gpt-5.2`, `openai/gpt-5`, `anthropic/claude-opus-4-6`,
+`openai/gpt-5.5`, `openai/gpt-5.2`, `openai/gpt-5`, `anthropic/claude-opus-4-6`,
 `anthropic/claude-sonnet-4-6`, `zai/glm-5.1`,
 `moonshot/kimi-k2.5`, and
 `google/gemini-3.1-pro-preview` when no `--model` is passed.
 When no `--judge-model` is passed, the judges default to
-`openai/gpt-5.4,thinking=xhigh,fast` and
+`openai/gpt-5.5,thinking=xhigh,fast` and
 `anthropic/claude-opus-4-6,thinking=high`.
 
 ## Related docs
@@ -363,7 +363,7 @@ Time format in system prompt. Default: `auto` (OS preference).
 - `pdfMaxPages`: default maximum pages considered by extraction fallback mode in the `pdf` tool.
 - `verboseDefault`: default verbose level for agents. Values: `"off"`, `"on"`, `"full"`. Default: `"off"`.
 - `elevatedDefault`: default elevated-output level for agents. Values: `"off"`, `"on"`, `"ask"`, `"full"`. Default: `"on"`.
-- `model.primary`: format `provider/model` (e.g. `openai/gpt-5.4` for API-key access or `openai-codex/gpt-5.5` for Codex OAuth). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit `provider/model`). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
+- `model.primary`: format `provider/model` (e.g. `openai/gpt-5.5` for API-key access or `openai-codex/gpt-5.5` for Codex OAuth). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit `provider/model`). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
 - `models`: the configured model catalog and allowlist for `/model`. Each entry can include `alias` (shortcut) and `params` (provider-specific, for example `temperature`, `maxTokens`, `cacheRetention`, `context1m`, `responsesServerCompaction`, `responsesCompactThreshold`, `extra_body`/`extraBody`).
 - Safe edits: use `openclaw config set agents.defaults.models '<json>' --strict-json --merge` to add entries. `config set` refuses replacements that would remove existing allowlist entries unless you pass `--replace`.
 - Provider-scoped configure/onboarding flows merge selected provider models into this map and preserve unrelated providers already configured.
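To make the allowlist-preserving edit in the hunk above concrete: the JSON you would pass to `openclaw config set agents.defaults.models '<json>' --strict-json --merge` might look like the following sketch. The entry shape (`alias`, `params`) comes from the doc itself; the parameter values are illustrative only, not recommendations:

```json5
{
  "openai/gpt-5.5": {
    // alias is the /model shortcut; params are provider-specific.
    alias: "gpt",
    params: { temperature: 0.7, maxTokens: 8192 },
  },
}
```

Because `--merge` is used, existing entries in the map are preserved; a full replacement would additionally require `--replace`.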
@@ -406,16 +406,16 @@ Codex app-server harness. For the mental model, see
 
 **Built-in alias shorthands** (only apply when the model is in `agents.defaults.models`):
 
-| Alias | Model |
-| ------------------- | -------------------------------------------------- |
-| `opus` | `anthropic/claude-opus-4-6` |
-| `sonnet` | `anthropic/claude-sonnet-4-6` |
-| `gpt` | `openai/gpt-5.4` or configured Codex OAuth GPT-5.5 |
-| `gpt-mini` | `openai/gpt-5.4-mini` |
-| `gpt-nano` | `openai/gpt-5.4-nano` |
-| `gemini` | `google/gemini-3.1-pro-preview` |
-| `gemini-flash` | `google/gemini-3-flash-preview` |
-| `gemini-flash-lite` | `google/gemini-3.1-flash-lite-preview` |
+| Alias | Model |
+| ------------------- | ------------------------------------------ |
+| `opus` | `anthropic/claude-opus-4-6` |
+| `sonnet` | `anthropic/claude-sonnet-4-6` |
+| `gpt` | `openai/gpt-5.5` or `openai-codex/gpt-5.5` |
+| `gpt-mini` | `openai/gpt-5.4-mini` |
+| `gpt-nano` | `openai/gpt-5.4-nano` |
+| `gemini` | `google/gemini-3.1-pro-preview` |
+| `gemini-flash` | `google/gemini-3-flash-preview` |
+| `gemini-flash-lite` | `google/gemini-3.1-flash-lite-preview` |
 
 Your configured aliases always win over defaults.
 
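Since configured aliases always win over the built-ins, remapping a shorthand like `gpt` is a one-line config change. A sketch, assuming the same `models` map shown in this commit's other config examples:

```json5
{
  agents: {
    defaults: {
      models: {
        // A user-defined alias shadows the built-in `gpt` shorthand,
        // pointing it at the Codex OAuth route instead.
        "openai-codex/gpt-5.5": { alias: "gpt" },
      },
    },
  },
}
```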
@@ -595,10 +595,9 @@ and troubleshooting see the main [FAQ](/help/faq).
 <Accordion title="How does Codex auth work?">
 OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Use
 `openai-codex/gpt-5.5` for Codex OAuth through the default PI runner. Use
-`openai/gpt-5.4` for current direct OpenAI API-key access. GPT-5.5 direct
-API-key access is supported once OpenAI enables it on the public API; today
-GPT-5.5 uses subscription/OAuth via `openai-codex/gpt-5.5` or native Codex
-app-server runs with `openai/gpt-5.5` and `embeddedHarness.runtime: "codex"`.
+`openai/gpt-5.5` for direct OpenAI API-key access. GPT-5.5 can also use
+subscription/OAuth via `openai-codex/gpt-5.5` or native Codex app-server
+runs with `openai/gpt-5.5` and `embeddedHarness.runtime: "codex"`.
 See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
 </Accordion>
 
@@ -606,8 +605,7 @@ and troubleshooting see the main [FAQ](/help/faq).
 `openai-codex` is the provider and auth-profile id for ChatGPT/Codex OAuth.
 It is also the explicit PI model prefix for Codex OAuth:
 
-- `openai/gpt-5.4` = current direct OpenAI API-key route in PI
-- `openai/gpt-5.5` = future direct API-key route once OpenAI enables GPT-5.5 on the API
+- `openai/gpt-5.5` = current direct OpenAI API-key route in PI
 - `openai-codex/gpt-5.5` = Codex OAuth route in PI
 - `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` = native Codex app-server route
 - `openai-codex:...` = auth profile id, not a model ref
 
@@ -21,7 +21,7 @@ troubleshooting, see the main [FAQ](/help/faq).
 agents.defaults.model.primary
 ```
 
-Models are referenced as `provider/model` (example: `openai/gpt-5.4` or `openai-codex/gpt-5.5`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
+Models are referenced as `provider/model` (example: `openai/gpt-5.5` or `openai-codex/gpt-5.5`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
 
 </Accordion>
 
@@ -146,13 +146,10 @@ troubleshooting, see the main [FAQ](/help/faq).
 <Accordion title="Can I use GPT 5.5 for daily tasks and Codex 5.5 for coding?">
 Yes. Set one as default and switch as needed:
 
-- **Quick switch (per session):** `/model openai/gpt-5.4` for current direct OpenAI API-key tasks or `/model openai-codex/gpt-5.5` for GPT-5.5 Codex OAuth tasks.
-- **Default:** set `agents.defaults.model.primary` to `openai/gpt-5.4` for API-key usage or `openai-codex/gpt-5.5` for GPT-5.5 Codex OAuth usage.
+- **Quick switch (per session):** `/model openai/gpt-5.5` for current direct OpenAI API-key tasks or `/model openai-codex/gpt-5.5` for GPT-5.5 Codex OAuth tasks.
+- **Default:** set `agents.defaults.model.primary` to `openai/gpt-5.5` for API-key usage or `openai-codex/gpt-5.5` for GPT-5.5 Codex OAuth usage.
 - **Sub-agents:** route coding tasks to sub-agents with a different default model.
 
-Direct API-key access for `openai/gpt-5.5` is supported once OpenAI enables
-GPT-5.5 on the public API. Until then GPT-5.5 is subscription/OAuth-only.
-
 See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
 
 </Accordion>
@@ -160,8 +157,8 @@ troubleshooting, see the main [FAQ](/help/faq).
 <Accordion title="How do I configure fast mode for GPT 5.5?">
 Use either a session toggle or a config default:
 
-- **Per session:** send `/fast on` while the session is using `openai/gpt-5.4` or `openai-codex/gpt-5.5`.
-- **Per model default:** set `agents.defaults.models["openai/gpt-5.4"].params.fastMode` or `agents.defaults.models["openai-codex/gpt-5.5"].params.fastMode` to `true`.
+- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5` or `openai-codex/gpt-5.5`.
+- **Per model default:** set `agents.defaults.models["openai/gpt-5.5"].params.fastMode` or `agents.defaults.models["openai-codex/gpt-5.5"].params.fastMode` to `true`.
 
 Example:
 
@@ -170,7 +167,7 @@ troubleshooting, see the main [FAQ](/help/faq).
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.4": {
+        "openai/gpt-5.5": {
           params: {
             fastMode: true,
           },
@@ -241,7 +238,7 @@ troubleshooting, see the main [FAQ](/help/faq).
       model: { primary: "minimax/MiniMax-M2.7" },
       models: {
         "minimax/MiniMax-M2.7": { alias: "minimax" },
-        "openai/gpt-5.4": { alias: "gpt" },
+        "openai/gpt-5.5": { alias: "gpt" },
       },
     },
   },
@@ -269,7 +266,7 @@ troubleshooting, see the main [FAQ](/help/faq).
 
 - `opus` → `anthropic/claude-opus-4-6`
 - `sonnet` → `anthropic/claude-sonnet-4-6`
-- `gpt` → `openai/gpt-5.4` for API-key setups, or `openai-codex/gpt-5.5` when configured for Codex OAuth
+- `gpt` → `openai/gpt-5.5` for API-key setups, or `openai-codex/gpt-5.5` when configured for Codex OAuth
 - `gpt-mini` → `openai/gpt-5.4-mini`
 - `gpt-nano` → `openai/gpt-5.4-nano`
 - `gemini` → `google/gemini-3.1-pro-preview`
 
@@ -23,17 +23,17 @@ changing config.
 
 | Goal | Use | Notes |
 | --------------------------------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------- |
-| Direct API-key billing | `openai/gpt-5.4` | Set `OPENAI_API_KEY` or run OpenAI API-key onboarding. |
+| Direct API-key billing | `openai/gpt-5.5` | Set `OPENAI_API_KEY` or run OpenAI API-key onboarding. |
 | GPT-5.5 with ChatGPT/Codex subscription auth | `openai-codex/gpt-5.5` | Default PI route for Codex OAuth. Best first choice for subscription setups. |
-| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` plus `embeddedHarness.runtime: "codex"` | Uses the Codex app-server harness, not the public OpenAI API route. |
+| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` plus `embeddedHarness.runtime: "codex"` | Forces the Codex app-server harness for that model ref. |
 | Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
 
 <Note>
-GPT-5.5 is currently available in OpenClaw through subscription/OAuth routes:
-`openai-codex/gpt-5.5` with the PI runner, or `openai/gpt-5.5` with the
-Codex app-server harness. Direct API-key access for `openai/gpt-5.5` is
-supported once OpenAI enables GPT-5.5 on the public API; until then use an
-API-enabled model such as `openai/gpt-5.4` for `OPENAI_API_KEY` setups.
+GPT-5.5 is available through both direct OpenAI Platform API-key access and
+subscription/OAuth routes. Use `openai/gpt-5.5` for direct `OPENAI_API_KEY`
+traffic, `openai-codex/gpt-5.5` for Codex OAuth through PI, or
+`openai/gpt-5.5` with `embeddedHarness.runtime: "codex"` for the native Codex
+app-server harness.
 </Note>
 
 <Note>
@@ -93,16 +93,14 @@ Choose your preferred auth method and follow the setup steps.
 
 | Model ref | Route | Auth |
 |-----------|-------|------|
-| `openai/gpt-5.4` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
+| `openai/gpt-5.5` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
 | `openai/gpt-5.4-mini` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
-| `openai/gpt-5.5` | Future direct API route once OpenAI enables GPT-5.5 on the API | `OPENAI_API_KEY` |
 
 <Note>
 `openai/*` is the direct OpenAI API-key route unless you explicitly force
-the Codex app-server harness. GPT-5.5 itself is currently subscription/OAuth
-only; use `openai-codex/*` for Codex OAuth through the default PI runner, or
-use `openai/gpt-5.5` with `embeddedHarness.runtime: "codex"` for native
-Codex app-server execution.
+the Codex app-server harness. Use `openai-codex/*` for Codex OAuth through
+the default PI runner, or use `openai/gpt-5.5` with
+`embeddedHarness.runtime: "codex"` for native Codex app-server execution.
 </Note>
 
 ### Config example
@@ -110,7 +108,7 @@ Choose your preferred auth method and follow the setup steps.
 ```json5
 {
   env: { OPENAI_API_KEY: "sk-..." },
-  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
+  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
 }
 ```
 
@@ -311,7 +309,7 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov
 
 ## GPT-5 prompt contribution
 
-OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai-codex/gpt-5.5`, `openai/gpt-5.4`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
+OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai-codex/gpt-5.5`, `openai/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
 
 The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `embeddedHarness.runtime: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
 
@@ -603,7 +601,7 @@ the Server-side compaction accordion below.
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.4": {
+        "openai/gpt-5.5": {
           params: { transport: "auto" },
         },
         "openai-codex/gpt-5.5": {
@@ -630,7 +628,7 @@ the Server-side compaction accordion below.
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.4": {
+        "openai/gpt-5.5": {
           params: { openaiWsWarmup: false },
         },
       },
@@ -654,7 +652,7 @@ the Server-side compaction accordion below.
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.4": { params: { fastMode: true } },
+        "openai/gpt-5.5": { params: { fastMode: true } },
       },
     },
   },
@@ -675,7 +673,7 @@ the Server-side compaction accordion below.
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.4": { params: { serviceTier: "priority" } },
+        "openai/gpt-5.5": { params: { serviceTier: "priority" } },
       },
     },
   },
@@ -723,7 +721,7 @@ the Server-side compaction accordion below.
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.4": {
+        "openai/gpt-5.5": {
           params: {
             responsesServerCompaction: true,
             responsesCompactThreshold: 120000,
@@ -741,7 +739,7 @@ the Server-side compaction accordion below.
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.4": {
+        "openai/gpt-5.5": {
           params: { responsesServerCompaction: false },
         },
       },
 
@@ -36,7 +36,7 @@ For a high-level overview, see [Onboarding (CLI)](/start/wizard).
 - **OpenAI Code (Codex) subscription (device pairing)**: browser pairing flow with a short-lived device code.
   - Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or already OpenAI-family.
 - **OpenAI API key**: uses `OPENAI_API_KEY` if present or prompts for a key, then stores it in auth profiles.
-  - Sets `agents.defaults.model` to `openai/gpt-5.4` when model is unset, `openai/*`, or `openai-codex/*`.
+  - Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset, `openai/*`, or `openai-codex/*`.
 - **xAI (Grok) API key**: prompts for `XAI_API_KEY` and configures xAI as a model provider.
 - **OpenCode**: prompts for `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`, get it at https://opencode.ai/auth) and lets you pick the Zen or Go catalog.
 - **Ollama**: offers **Cloud + Local**, **Cloud only**, or **Local only** first. `Cloud only` prompts for `OLLAMA_API_KEY` and uses `https://ollama.com`; the host-backed modes prompt for the Ollama base URL, discover available models, and auto-pull the selected local model when needed; `Cloud + Local` also checks whether that Ollama host is signed in for cloud access.
@@ -182,7 +182,7 @@ Use this reference page for flag semantics and step ordering.
 ```bash
 openclaw agents add work \
   --workspace ~/.openclaw/workspace-work \
-  --model openai/gpt-5.4 \
+  --model openai/gpt-5.5 \
   --bind whatsapp:biz \
   --non-interactive \
   --json
 
@@ -204,7 +204,7 @@ sessions, and auth profiles. Running without `--workspace` launches the wizard.
 ```bash
 openclaw agents add work \
   --workspace ~/.openclaw/workspace-work \
-  --model openai/gpt-5.4 \
+  --model openai/gpt-5.5 \
   --bind whatsapp:biz \
   --non-interactive \
   --json
 
@@ -142,7 +142,7 @@ What you set:
 <Accordion title="OpenAI API key">
 Uses `OPENAI_API_KEY` if present or prompts for a key, then stores the credential in auth profiles.
 
-Sets `agents.defaults.model` to `openai/gpt-5.4` when model is unset, `openai/*`, or `openai-codex/*`.
+Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset, `openai/*`, or `openai-codex/*`.
 
 </Accordion>
 <Accordion title="xAI (Grok) API key">