fix(memory): keep llama runtime optional (#71425)

* fix(memory): keep llama runtime optional

* fix(memory): harden optional llama runtime guard
Vincent Koc
2026-04-25 00:09:12 -07:00
committed by GitHub
parent 4005a4f731
commit 9895ecead3
10 changed files with 69 additions and 746 deletions
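For orientation, the sketch below shows what an optional-runtime guard like the one named in the second commit message usually looks like; the function name, return type, and install hint are hypothetical and are not taken from OpenClaw's source:

```ts
// Hypothetical sketch of an optional llama runtime guard (not OpenClaw's actual code).
// The node-llama-cpp package is loaded lazily, so installs that omit it keep working
// and callers can fall back to keyword-only search when it is missing.
export async function loadLlamaRuntime(): Promise<unknown> {
  try {
    // Resolves only when the optional package is installed next to OpenClaw,
    // e.g. via `npm install node-llama-cpp`.
    return await import("node-llama-cpp");
  } catch {
    // Runtime not installed: signal the caller to keep the local provider disabled.
    return null;
  }
}
```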


@@ -38,8 +38,9 @@ To set a provider explicitly:
 Without an embedding provider, only keyword search is available.
-To force the built-in local embedding provider, point `local.modelPath` at a
-GGUF file:
+To force the built-in local embedding provider, install the optional
+`node-llama-cpp` runtime package next to OpenClaw, then point `local.modelPath`
+at a GGUF file:
 ```json5
 {
@@ -66,7 +67,7 @@ GGUF file:
 | Voyage | `voyage` | Yes | |
 | Mistral | `mistral` | Yes | |
 | Ollama | `ollama` | No | Local, set explicitly |
-| Local | `local` | Yes (first) | GGUF model, ~0.6 GB download |
+| Local | `local` | Yes (first) | Optional `node-llama-cpp` runtime |
 Auto-detection picks the first provider whose API key can be resolved, in the
 order shown. Set `memorySearch.provider` to override.
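For context, a forced-local configuration built from the snippets above might look like the sketch below; the exact nesting of `local.modelPath` under `memorySearch` and the model path itself are assumptions for illustration, not content from the edited docs:

```json5
{
  memorySearch: {
    provider: "local", // skip auto-detection and force the local provider
    local: {
      // Path to a GGUF embedding model; only used once the optional
      // node-llama-cpp runtime package is installed next to OpenClaw.
      modelPath: "~/models/embeddings.gguf",
    },
  },
}
```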


@@ -15,7 +15,8 @@ binary, and can index content beyond your workspace memory files.
 - **Reranking and query expansion** for better recall.
 - **Index extra directories** -- project docs, team notes, anything on disk.
 - **Index session transcripts** -- recall earlier conversations.
-- **Fully local** -- runs via Bun + node-llama-cpp, auto-downloads GGUF models.
+- **Fully local** -- runs with the optional node-llama-cpp runtime package and
+  auto-downloads GGUF models.
 - **Automatic fallback** -- if QMD is unavailable, OpenClaw falls back to the
   builtin engine seamlessly.


@@ -29,8 +29,8 @@ explicitly:
 }
 ```
-For local embeddings with no API key, use `provider: "local"` (requires
-node-llama-cpp).
+For local embeddings with no API key, install the optional `node-llama-cpp`
+runtime package next to OpenClaw and use `provider: "local"`.
 ## Supported providers