mirror of https://github.com/openclaw/openclaw.git
synced 2026-03-12 07:20:45 +00:00
feat(openai): add websocket warm-up with configurable toggle
@@ -68,6 +68,9 @@ You can set `agents.defaults.models.<provider/model>.params.transport`:
- `"websocket"`: force WebSocket
- `"auto"`: try WebSocket, then fall back to SSE

For `openai/*` (Responses API), OpenClaw also enables WebSocket warm-up by
default (`openaiWsWarmup: true`) when WebSocket transport is used.

```json5
{
  agents: {
@@ -85,6 +88,47 @@ You can set `agents.defaults.models.<provider/model>.params.transport`:
  }
```

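As a sketch of how the transport toggle is applied per model (the `openai/gpt-5` model key here is an illustrative assumption; substitute your configured model):

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5": {
          params: {
            // try WebSocket first, fall back to SSE if it fails
            transport: "auto",
          },
        },
      },
    },
  },
}
```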
### OpenAI WebSocket warm-up

OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
`openai/*` to reduce first-turn latency when using WebSocket transport.
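A minimal sketch combining the two settings, assuming the same per-model layout as the examples in this section (model key `openai/gpt-5` is illustrative):

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5": {
          params: {
            transport: "websocket",
            // openaiWsWarmup defaults to true for openai/* when
            // WebSocket transport is used, so this line is optional
            openaiWsWarmup: true,
          },
        },
      },
    },
  },
}
```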
### Disable warm-up

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5": {
          params: {
            openaiWsWarmup: false,
          },
        },
      },
    },
  },
}
```

### Enable warm-up explicitly

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5": {
          params: {
            openaiWsWarmup: true,
          },
        },
      },
    },
  },
}
```

### OpenAI Responses server-side compaction

For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with