fix(models): expose codex runtime context caps

Peter Steinberger committed 2026-04-25 07:37:56 +01:00
parent 3d554aefdf
commit 2ff7eb36cf
18 changed files with 240 additions and 46 deletions


@@ -30,11 +30,9 @@ Reference for **LLM/model providers** (not chat channels like WhatsApp/Telegram)
`google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
Legacy `claude-cli/*`, `google-gemini-cli/*`, and `codex-cli/*` refs migrate
back to canonical provider refs with the runtime recorded separately.
- GPT-5.5 is currently available through subscription/OAuth routes:
`openai-codex/gpt-5.5` in PI or `openai/gpt-5.5` with the Codex app-server
harness. The direct API-key route for `openai/gpt-5.5` will be supported once
OpenAI enables GPT-5.5 on the public API; until then, use API-enabled models
such as `openai/gpt-5.4` for `OPENAI_API_KEY` setups.
- GPT-5.5 is available through `openai-codex/gpt-5.5` in PI, through the
  native Codex app-server harness, and through the public OpenAI API when the
  bundled PI catalog exposes `openai/gpt-5.5` for your install (see the config
  sketch below).
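
To make the route choice concrete, here is a minimal sketch of how the two
GPT-5.5 refs could sit side by side in config. It reuses the
`agents.defaults.models` map documented further down; the empty per-model
objects are placeholders, not a complete schema.

```json5
{
  agents: {
    defaults: {
      models: {
        // Codex OAuth/subscription route (PI)
        "openai-codex/gpt-5.5": {},
        // Direct API-key route; only meaningful once your bundled PI
        // catalog exposes openai/gpt-5.5 (verify before relying on it)
        "openai/gpt-5.5": {},
      },
    },
  },
}
```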
## Plugin-owned provider behavior
@@ -73,10 +71,10 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
- Example models: `openai/gpt-5.4`, `openai/gpt-5.4-mini`
- GPT-5.5 direct API support arrives here once OpenAI exposes GPT-5.5 on the API
- Verify direct API availability with `openclaw models list --provider openai`
before using `openai/gpt-5.5` without the Codex app-server runtime
- Example models: `openai/gpt-5.5`, `openai/gpt-5.4`, `openai/gpt-5.4-mini`
- GPT-5.5 direct API support depends on the bundled PI catalog version for
your install; verify with `openclaw models list --provider openai` before
using `openai/gpt-5.5` without the Codex app-server runtime.
- CLI: `openclaw onboard --auth-choice openai-api-key`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`), as sketched below
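
A minimal sketch of that per-model transport override, assuming nothing beyond
the key path quoted above; `gpt-5.4` is just an illustrative model id.

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            // pin SSE instead of the default "auto"
            // (WebSocket-first with SSE fallback)
            transport: "sse",
          },
        },
      },
    },
  },
}
```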
@@ -133,9 +131,9 @@ OpenClaw ships with the piai catalog. These providers require **no**
`User-Agent`) are only attached on native Codex traffic to
`chatgpt.com/backend-api`, not generic OpenAI-compatible proxies
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
- `openai-codex/gpt-5.5` keeps native `contextWindow = 1000000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
- `openai-codex/gpt-5.5` uses the Codex catalog's native `contextWindow = 400000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens` (see the sketch below)
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
- Current GPT-5.5 access uses this OAuth/subscription route until OpenAI enables GPT-5.5 on the public API.
- Use `openai-codex/gpt-5.5` when you want the Codex OAuth/subscription route; use `openai/gpt-5.5` when your API-key setup and local catalog expose the public API route.
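
A minimal sketch of that runtime-cap override, with the key path copied from
the bullet above. The `id` field and the `320000` value are illustrative
assumptions; the default cap is 272000 and the catalog window is 400000.

```json5
{
  models: {
    providers: {
      "openai-codex": {
        models: [
          {
            // assumed id field matching openai-codex/gpt-5.5
            id: "gpt-5.5",
            // raise the runtime cap above the 272000 default
            // (stays within the 400000 catalog contextWindow)
            contextTokens: 320000,
          },
        ],
      },
    },
  },
}
```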
```json5
{