mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 19:10:58 +00:00
fix: keep local embedding batches from flooding providers
This commit is contained in:
@@ -1,4 +1,4 @@
-0b0d796bceddfb9e2929518ba84af626da7f5d75c392a217041f36e850c4e74f config-baseline.json
-271fdf1d6652927e0fc160a6f25276bf6dccb8f1b27fab15e0fc2620e8cacab4 config-baseline.core.json
+3b9a8841973205560a5396e7a18d301852941a95a561900984ad618e69a99d05 config-baseline.json
+089ab9493c8482687f19da89d37e069fc402543696c92e6e3be86072c1e48c68 config-baseline.core.json
 7cd9c908f066c143eab2a201efbc9640f483ab28bba92ddeca1d18cc2b528bc3 config-baseline.channel.json
 17eb3f8887193579ff32e35f9bd520ba2bd6049e52ab18855c5d41fcbf195d83 config-baseline.plugin.json
@@ -135,6 +135,11 @@ earlier conversations. This is opt-in via
**Only keyword matches?** Your embedding provider may not be configured. Check
`openclaw memory status --deep`.

**Local embeddings time out?** `ollama`, `lmstudio`, and `local` use a longer
inline batch timeout by default. If the host is simply slow, set
`agents.defaults.memorySearch.sync.embeddingBatchTimeoutSeconds` and rerun
`openclaw memory index --force`.

**CJK text not found?** Rebuild the FTS index with
`openclaw memory index --force`.
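As a sketch of what setting that override looks like, the config fragment below nests `embeddingBatchTimeoutSeconds` under the dotted key path given above; the surrounding file structure is assumed from that path, and `1200` is an arbitrary illustrative value (double the local default):

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "sync": {
          "embeddingBatchTimeoutSeconds": 1200
        }
      }
    }
  }
}
```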
@@ -219,6 +219,17 @@ to an existing local file. `hf:` and HTTP(S) model references can still be used
explicitly with `provider: "local"`, but they do not make `auto` select local
before the model is available on disk.

### Inline embedding timeout

| Key                                 | Type     | Default          | Description                                                              |
| ----------------------------------- | -------- | ---------------- | ------------------------------------------------------------------------ |
| `sync.embeddingBatchTimeoutSeconds` | `number` | provider default | Override the timeout for inline embedding batches during memory indexing |

When unset, the provider default applies: 600 seconds for local/self-hosted
providers such as `local`, `ollama`, and `lmstudio`, and 120 seconds for hosted
providers.

Increase this when local CPU-bound embedding batches are healthy but slow.
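The default-selection rule above can be sketched as a small helper. This is a hypothetical illustration, not openclaw's actual code; the function and constant names are invented:

```typescript
// Hypothetical sketch of the timeout-default rule described above.
// Names are illustrative, not openclaw's actual API.
const LOCAL_PROVIDERS = new Set(["local", "ollama", "lmstudio"]);

function resolveEmbeddingBatchTimeoutSeconds(
  provider: string,
  configured?: number, // sync.embeddingBatchTimeoutSeconds, if set
): number {
  if (configured !== undefined) return configured; // explicit override wins
  // Local/self-hosted providers get the longer inline batch timeout.
  return LOCAL_PROVIDERS.has(provider) ? 600 : 120;
}
```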
---
## Hybrid search config
@@ -347,6 +358,10 @@ Prevents re-embedding unchanged text during reindex or transcript updates.
Available for `openai`, `gemini`, and `voyage`. OpenAI batch is typically
fastest and cheapest for large backfills.

This is separate from `sync.embeddingBatchTimeoutSeconds`, which controls inline
embedding calls used by local/self-hosted providers and hosted providers when
provider batch APIs are not active.
---
## Session memory search (experimental)