mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 15:10:52 +00:00
fix(memory): resolve custom embedding provider ids
@@ -29,6 +29,10 @@ explicitly:
}
```

For multi-endpoint setups, `provider` can also be a custom `models.providers.<id>` entry, such as `ollama-5080`, when that provider sets `api: "ollama"` or another embedding adapter owner.

For local embeddings with no API key, install the optional `node-llama-cpp` runtime package next to OpenClaw and use `provider: "local"`.
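A minimal sketch of that local setup (assuming `node-llama-cpp` is already installed; the `agents.defaults.memorySearch` path comes from this page's config schema):

```json5
// Sketch: local embeddings with no API key.
// Assumes the optional node-llama-cpp runtime is installed next to OpenClaw.
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "local",
      },
    },
  },
}
```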
@@ -25,7 +25,7 @@ Ollama provider config uses `baseUrl` as the canonical key. OpenClaw also accept

Remote public hosts and Ollama Cloud (`https://ollama.com`) require a real credential through `OLLAMA_API_KEY`, an auth profile, or the provider's `apiKey`.

</Accordion>

<Accordion title="Custom provider ids">

Custom provider ids that set `api: "ollama"` follow the same rules. For example, an `ollama-remote` provider that points at a private LAN Ollama host can use `apiKey: "ollama-local"` and sub-agents will resolve that marker through the Ollama provider hook instead of treating it as a missing credential. Memory search can also set `agents.defaults.memorySearch.provider` to that custom provider id so embeddings use the matching Ollama endpoint.

</Accordion>

<Accordion title="Memory embedding scope">

When Ollama is used for memory embeddings, bearer auth is scoped to the host where it was declared:
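As a sketch, the `ollama-remote` provider described above might be declared like this (the provider name and LAN address are illustrative, not values from the docs):

```json5
// Sketch only: a custom provider id that sets api: "ollama".
{
  models: {
    providers: {
      "ollama-remote": {
        api: "ollama",
        // Illustrative private LAN host; substitute your own.
        baseUrl: "http://10.0.0.42:11434",
        // Marker resolved through the Ollama provider hook,
        // not treated as a missing credential.
        apiKey: "ollama-local",
      },
    },
  },
}
```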
@@ -46,12 +46,12 @@ See [Active Memory](/concepts/active-memory) for the activation model, plugin-ow
|
||||
|
||||
## Provider selection
|
||||
|
||||
| Key | Type | Default | Description |
|
||||
| ---------- | --------- | ---------------- | -------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `provider` | `string` | auto-detected | Embedding adapter ID: `bedrock`, `deepinfra`, `gemini`, `github-copilot`, `local`, `mistral`, `ollama`, `openai`, `voyage` |
|
||||
| `model` | `string` | provider default | Embedding model name |
|
||||
| `fallback` | `string` | `"none"` | Fallback adapter ID when the primary fails |
|
||||
| `enabled` | `boolean` | `true` | Enable or disable memory search |
|
||||
| Key | Type | Default | Description |
|
||||
| ---------- | --------- | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `provider` | `string` | auto-detected | Embedding adapter ID such as `bedrock`, `deepinfra`, `gemini`, `github-copilot`, `local`, `mistral`, `ollama`, `openai`, or `voyage`; may also be a configured `models.providers.<id>` whose `api` points at one of those adapters |
|
||||
| `model` | `string` | provider default | Embedding model name |
|
||||
| `fallback` | `string` | `"none"` | Fallback adapter ID when the primary fails |
|
||||
| `enabled` | `boolean` | `true` | Enable or disable memory search |
|
||||
|
||||
### Auto-detection order
|
||||
|
||||
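For illustration, the keys in the table combine like this (the specific primary/fallback pairing here is an assumption, not a recommendation):

```json5
// Sketch: explicit adapter with a fallback.
{
  agents: {
    defaults: {
      memorySearch: {
        enabled: true,
        provider: "openai", // primary embedding adapter
        fallback: "local",  // adapter tried if the primary fails
      },
    },
  },
}
```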
@@ -86,6 +86,33 @@ When `provider` is not set, OpenClaw selects the first available:

`ollama` is supported but not auto-detected (set it explicitly).

### Custom provider ids

`memorySearch.provider` can point at a custom `models.providers.<id>` entry. OpenClaw resolves that provider's `api` owner for the embedding adapter while preserving the custom provider id for endpoint, auth, and model-prefix handling. This lets multi-GPU or multi-host setups dedicate memory embeddings to a specific local endpoint:

```json5
{
  models: {
    providers: {
      "ollama-5080": {
        api: "ollama",
        baseUrl: "http://gpu-box.local:11435",
        apiKey: "ollama-local",
        models: [{ id: "qwen3-embedding:0.6b" }],
      },
    },
  },
  agents: {
    defaults: {
      memorySearch: {
        provider: "ollama-5080",
        model: "qwen3-embedding:0.6b",
      },
    },
  },
}
```
### API key resolution

Remote embeddings require an API key. Bedrock uses the AWS SDK default credential chain instead (instance roles, SSO, access keys).