fix(memory): cap ollama non-batch embedding concurrency

Peter Steinberger
2026-04-28 00:33:53 +01:00
parent 5de3196a60
commit 802f13ac15
15 changed files with 103 additions and 14 deletions


@@ -1,4 +1,4 @@
-5ffabe5ff76d8e4a0d121e89f74f84917b919447e63bf12e0e5b0e4c0211d451 config-baseline.json
-7dcb21e47ddd5de98e2af1ecbc41e11ac0c5742819c359e6d851fbc39c0226e9 config-baseline.core.json
+0f57fb6d20b9d300c4325b227e49f17f04349b0f3c27dd218397fe7a3b5001dc config-baseline.json
+9d1815981dc3f89d1dfdc72f0a4723d4fd5efca8e5b8a1a1cbf6a053c50c937d config-baseline.core.json
 c4f07c228d4f07e7afafa5b600b4a80f5b26aaed7267c7287a64d04a527be8e8 config-baseline.channel.json
 6938050627f0d120109d2045b4300aa8b508b35132542db434033ed0fe3e2b3a config-baseline.plugin.json


@@ -885,7 +885,13 @@ For the full setup and behavior details, see [Ollama Web Search](/tools/ollama-s
 {
   agents: {
     defaults: {
-      memorySearch: { provider: "ollama" },
+      memorySearch: {
+        provider: "ollama",
+        remote: {
+          // Default for Ollama. Raise on larger hosts if reindexing is too slow.
+          nonBatchConcurrency: 1,
+        },
+      },
     },
   },
 }
@@ -899,10 +905,11 @@ For the full setup and behavior details, see [Ollama Web Search](/tools/ollama-s
     defaults: {
       memorySearch: {
         provider: "ollama",
-        model: "nomic-embed-text",
         remote: {
           baseUrl: "http://gpu-box.local:11434",
+          model: "nomic-embed-text",
           apiKey: "ollama-local",
+          nonBatchConcurrency: 2,
         },
       },
     },


@@ -386,6 +386,7 @@ Prevents re-embedding unchanged text during reindex or transcript updates.
 | Key                           | Type      | Default | Description                |
 | ----------------------------- | --------- | ------- | -------------------------- |
+| `remote.nonBatchConcurrency`  | `number`  | `4`     | Parallel inline embeddings |
 | `remote.batch.enabled`        | `boolean` | `false` | Enable batch embedding API |
 | `remote.batch.concurrency`    | `number`  | `2`     | Parallel batch jobs        |
 | `remote.batch.wait`           | `boolean` | `true`  | Wait for batch completion  |
@@ -394,7 +395,9 @@ Prevents re-embedding unchanged text during reindex or transcript updates.
 Available for `openai`, `gemini`, and `voyage`. OpenAI batch is typically fastest and cheapest for large backfills.
-This is separate from `sync.embeddingBatchTimeoutSeconds`, which controls inline embedding calls used by local/self-hosted providers and hosted providers when provider batch APIs are not active.
+`remote.nonBatchConcurrency` controls inline embedding calls used by local/self-hosted providers and hosted providers when provider batch APIs are not active. Ollama defaults to `1` for non-batch indexing to avoid overwhelming smaller local hosts; set a higher value on larger machines.
+This is separate from `sync.embeddingBatchTimeoutSeconds`, which controls the timeout for inline embedding calls.
---
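For context, the cap this commit introduces amounts to a bounded worker pool over inline embedding calls: at most `nonBatchConcurrency` requests are in flight at once. A minimal TypeScript sketch of that pattern (the helper name and shape are illustrative, not the project's actual code):

```typescript
// Illustrative sketch of how remote.nonBatchConcurrency bounds work:
// run async jobs with at most `limit` in flight. mapWithConcurrency is
// a hypothetical helper, not code from this repository.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker synchronously claims the next index, then awaits its job,
  // so no two workers ever process the same item.
  const worker = async (): Promise<void> => {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  };
  const workers = Array.from(
    { length: Math.max(1, Math.min(limit, items.length)) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

With `limit: 1` the calls are fully serialized, which is why the Ollama default of `1` keeps a small local host from being saturated during a reindex; raising the limit trades host load for wall-clock time.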