fix: allow memory flush model override

This commit is contained in:
Peter Steinberger
2026-04-28 05:25:42 +01:00
parent dc3df62e67
commit 540cbe24be
18 changed files with 186 additions and 3 deletions


@@ -132,7 +132,23 @@ By default, compaction runs silently. Set `notifyUser` to show brief status mess
### Memory flush
Before compaction, OpenClaw can run a **silent memory flush** turn to store durable notes to disk. Set `agents.defaults.compaction.memoryFlush.model` to run this housekeeping turn on a local model instead of the active conversation model:
```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "memoryFlush": {
          "model": "ollama/qwen3:8b"
        }
      }
    }
  }
}
```
The memory-flush model override is exact and does not inherit the active session fallback chain. See [Memory](/concepts/memory) for details and config.
## Pluggable compaction providers


@@ -110,6 +110,26 @@ Before [compaction](/concepts/compaction) summarizes your conversation, OpenClaw
runs a silent turn that reminds the agent to save important context to memory
files. This is on by default — you do not need to configure anything.
To keep that housekeeping turn on a local model, set an exact memory-flush model
override:
```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "memoryFlush": {
          "model": "ollama/qwen3:8b"
        }
      }
    }
  }
}
```
The override applies only to the memory-flush turn and does not inherit the
active session fallback chain.
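To make the "exact override" rule concrete, here is a minimal sketch of how such a resolution step could work. This is an illustration only, not OpenClaw's actual implementation; the type and function names are hypothetical.

```typescript
// Hypothetical shape for the relevant slice of config.
type CompactionConfig = {
  memoryFlush?: { model?: string };
};

// Resolve the model chain for the memory-flush turn.
// sessionChain is the active conversation model followed by its fallbacks.
function resolveFlushModels(
  cfg: CompactionConfig,
  sessionChain: string[],
): string[] {
  const override = cfg.memoryFlush?.model;
  // An exact override produces a one-entry chain: the session's
  // fallback models are deliberately not inherited.
  if (override) return [override];
  // With no override, the flush turn simply uses the session's chain.
  return sessionChain;
}

// The override wins outright, with no fallbacks appended.
console.log(
  resolveFlushModels(
    { memoryFlush: { model: "ollama/qwen3:8b" } },
    ["anthropic/claude-opus", "openai/gpt-4.1"],
  ),
);
```

The practical consequence of a one-entry chain: if the overridden model is unavailable, the flush turn fails rather than silently falling back to a remote model, which is the point when the goal is keeping this traffic local.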
<Tip>
The memory flush prevents context loss during compaction. If your agent has
important facts in the conversation that are not yet written to a file, they