fix(agents): auto-enable OpenAI Responses server-side compaction (#16930, #22441, #25088)

Landed from contributor PRs #16930, #22441, and #25088.

Co-authored-by: liweiguang <codingpunk@gmail.com>
Co-authored-by: EdwardWu7 <wuzhiyuan7@gmail.com>
Co-authored-by: MoerAI <friendnt@g.skku.edu>
Author: Peter Steinberger
Date:   2026-02-27 16:14:49 +00:00
parent  6675aacb5e
commit  8da3a9a92d

5 changed files with 277 additions and 14 deletions

@@ -83,6 +83,39 @@ OpenClaw uses `pi-ai` for model streaming. For `openai-codex/*` models you can s
}
```
### OpenAI Responses server-side compaction

For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with a
`baseUrl` on `api.openai.com`), OpenClaw now auto-enables OpenAI server-side
compaction payload hints:

- Forces `store: true` (unless model compat sets `supportsStore: false`)
- Injects `context_management: [{ type: "compaction", compact_threshold: ... }]`

By default, `compact_threshold` is `70%` of the model's `contextWindow` (or `80000`
when the context window is unavailable).
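The default-threshold rule can be sketched as follows (`defaultCompactThreshold` is an illustrative name for this write-up, not an actual OpenClaw export):

```typescript
// Sketch of the documented default: 70% of the model's context window,
// falling back to 80000 tokens when the window is unknown.
function defaultCompactThreshold(contextWindow?: number): number {
  if (contextWindow === undefined) return 80000;
  return Math.floor(contextWindow * 0.7);
}

// e.g. a 272000-token window yields a 190400-token threshold
```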
You can override per model:
```json5
{
agents: {
defaults: {
models: {
"openai/gpt-5": {
params: {
responsesServerCompaction: true,
responsesCompactThreshold: 120000,
},
},
},
},
},
}
```
Set `responsesServerCompaction: false` to disable this injection for a model.
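With the override above applied, the outgoing Responses request body would carry roughly these extra fields (a sketch of the injected shape described in the bullets, not a verbatim request capture):

```json5
{
  // injected by OpenClaw for eligible openai/* Responses models
  store: true,
  context_management: [
    { type: "compaction", compact_threshold: 120000 },
  ],
  // ...rest of the Responses payload (model, input, etc.)
}
```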
## Notes
- Model refs always use `provider/model` (see [/concepts/models](/concepts/models)).