mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 11:10:45 +00:00
fix(models): default local custom providers to completions
@@ -432,7 +432,7 @@ OpenClaw uses the built-in model catalog. Add custom providers via `models.provi
- Safe edits: use `openclaw config set models.providers.<id> '<json>' --strict-json --merge` or `openclaw config set models.providers.<id>.models '<json-array>' --strict-json --merge` for additive updates. `config set` refuses destructive replacements unless you pass `--replace`.
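For instance, an additive merge payload for a hypothetical provider id `local-llm` (the id and model values here are illustrative, not from the catalog) could look like:

```json
{
  "models": [
    { "id": "my-model", "contextWindow": 32768 }
  ]
}
```

Passed via `openclaw config set models.providers.local-llm '<json>' --strict-json --merge`, this adds the model entry without replacing the provider's other fields.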
</Accordion>
<Accordion title="Provider connection and auth">
-- `models.providers.*.api`: request adapter (`openai-completions`, `openai-responses`, `anthropic-messages`, `google-generative-ai`, etc). For self-hosted `/v1/chat/completions` backends such as MLX, vLLM, SGLang, and most OpenAI-compatible local servers, use `openai-completions`. Use `openai-responses` only when the backend supports `/v1/responses`.
+- `models.providers.*.api`: request adapter (`openai-completions`, `openai-responses`, `anthropic-messages`, `google-generative-ai`, etc). For self-hosted `/v1/chat/completions` backends such as MLX, vLLM, SGLang, and most OpenAI-compatible local servers, use `openai-completions`. A custom provider with `baseUrl` but no `api` defaults to `openai-completions`; set `openai-responses` only when the backend supports `/v1/responses`.
- `models.providers.*.apiKey`: provider credential (prefer SecretRef/env substitution).
- `models.providers.*.auth`: auth strategy (`api-key`, `token`, `oauth`, `aws-sdk`).
- `models.providers.*.contextWindow`: default native context window for models under this provider when the model entry does not set `contextWindow`.
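As one hedged sketch, the provider-level fields above might combine into a hypothetical entry like this (provider id, URL, and the `${...}` env-substitution form are illustrative assumptions, not catalog values):

```json
{
  "local-llm": {
    "baseUrl": "http://127.0.0.1:8080/v1",
    "api": "openai-completions",
    "auth": "api-key",
    "apiKey": "${LOCAL_LLM_API_KEY}",
    "contextWindow": 32768
  }
}
```

This is the shape merged under `models.providers`; a model entry that sets its own `contextWindow` overrides the provider default.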
@@ -451,7 +451,7 @@ OpenClaw uses the built-in model catalog. Add custom providers via `models.provi
- `request.auth`: auth strategy override. Modes: `"provider-default"` (use provider's built-in auth), `"authorization-bearer"` (with `token`), `"header"` (with `headerName`, `value`, optional `prefix`).
- `request.proxy`: HTTP proxy override. Modes: `"env-proxy"` (use `HTTP_PROXY`/`HTTPS_PROXY` env vars), `"explicit-proxy"` (with `url`). Both modes accept an optional `tls` sub-object.
- `request.tls`: TLS override for direct connections. Fields: `ca`, `cert`, `key`, `passphrase` (all accept SecretRef), `serverName`, `insecureSkipVerify`.
-- `request.allowPrivateNetwork`: when `true`, allow HTTPS to `baseUrl` when DNS resolves to private, CGNAT, or similar ranges, via the provider HTTP fetch guard (operator opt-in for trusted self-hosted OpenAI-compatible endpoints). WebSocket uses the same `request` for headers/TLS but not that fetch SSRF gate. Default `false`.
+- `request.allowPrivateNetwork`: when `true`, allow HTTPS to `baseUrl` when DNS resolves to private, CGNAT, or similar ranges, via the provider HTTP fetch guard (operator opt-in for trusted self-hosted OpenAI-compatible endpoints). Loopback model-provider stream URLs such as `localhost`, `127.0.0.1`, and `[::1]` are allowed automatically unless this is explicitly set to `false`; LAN, tailnet, and private DNS hosts still require opt-in. WebSocket uses the same `request` for headers/TLS but not that fetch SSRF gate. Default `false`.
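As an illustrative sketch of how these `request` overrides can sit together (the header name, proxy URL, and file paths are invented, and the `mode` discriminator key is an assumption inferred from the mode strings listed above):

```json
{
  "request": {
    "auth": { "mode": "header", "headerName": "X-Api-Key", "value": "${PROXY_API_KEY}" },
    "proxy": { "mode": "explicit-proxy", "url": "http://proxy.internal:3128" },
    "tls": { "ca": "/etc/ssl/private-ca.pem", "serverName": "llm.internal" },
    "allowPrivateNetwork": true
  }
}
```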
</Accordion>
<Accordion title="Model catalog entries">
@@ -151,6 +151,11 @@ endpoint and model ID:
}
```
If `api` is omitted on a custom provider with a `baseUrl`, OpenClaw defaults to
`openai-completions`. Loopback endpoints such as `127.0.0.1` are trusted
automatically; LAN, tailnet, and private DNS endpoints still need
`request.allowPrivateNetwork: true`.
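A minimal loopback provider can therefore rely on both defaults (the provider and model ids here are illustrative):

```json
{
  "models": {
    "providers": {
      "local": {
        "baseUrl": "http://127.0.0.1:8080/v1",
        "models": [{ "id": "my-model" }]
      }
    }
  }
}
```

No `api` key is needed (it defaults to `openai-completions`), and no `request.allowPrivateNetwork`, since `127.0.0.1` is loopback.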
The `models.providers.<id>.models[].id` value is provider-local. Do not
include the provider prefix there. For example, an MLX server started with
`mlx_lm.server --model mlx-community/Qwen3-30B-A3B-6bit` should use this
@@ -189,7 +189,7 @@ Use the LM Studio host's reachable address, keep `/v1`, and make sure LM Studio
}
```
-Unlike generic OpenAI-compatible providers, `lmstudio` automatically trusts its configured local/private endpoint for guarded model requests. If you use a custom provider id instead of `lmstudio`, set `models.providers.<id>.request.allowPrivateNetwork: true` explicitly.
+Unlike generic OpenAI-compatible providers, `lmstudio` automatically trusts its configured local/private endpoint for guarded model requests. Custom loopback provider IDs such as `localhost` or `127.0.0.1` are also trusted automatically; for LAN, tailnet, or private DNS custom provider IDs, set `models.providers.<id>.request.allowPrivateNetwork: true` explicitly.
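For a LAN-hosted custom provider, the explicit opt-in could look like this sketch (the id and address are illustrative):

```json
{
  "models": {
    "providers": {
      "lan-openai-compat": {
        "baseUrl": "http://192.168.1.50:1234/v1",
        "request": { "allowPrivateNetwork": true }
      }
    }
  }
}
```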
## Related