fix: let lmstudio skip native preload
@@ -1,4 +1,4 @@
-051884bad7339a302ecb75e5f61831b1726c6f0360de27485aac76097570c808 config-baseline.json
-80e6e8dce647aef2d1310de55a81d27de52cca47fc24bd7ad81b80f43a72b84c config-baseline.core.json
+94f7879b0771e81973c0749c719c19283fdc26e0e42fe6536f8ee563be6a44e5 config-baseline.json
+a38ea77d2f0f0188f14ce0e3a8a564ff80e51415849359042f51921eb01ec2d9 config-baseline.core.json
 eab8a85eefa2792fb8b98a07698e5ec31ff0b6f8af6222767e8049dcc5c4f529 config-baseline.channel.json
 6bd6c72b17801072b2d3285c82f4c21adcc95f0edffc1e6f64e767d0a07b678f config-baseline.plugin.json
@@ -555,7 +555,7 @@ Then set a model (replace with one of the IDs returned by `http://localhost:1234
 }
 ```
 
-OpenClaw uses LM Studio's native `/api/v1/models` and `/api/v1/models/load` for discovery + auto-load, with `/v1/chat/completions` for inference by default. See [/providers/lmstudio](/providers/lmstudio) for setup and troubleshooting.
+OpenClaw uses LM Studio's native `/api/v1/models` and `/api/v1/models/load` for discovery and auto-load, with `/v1/chat/completions` for inference by default. If you want LM Studio's JIT loading, TTL, and auto-evict to own the model lifecycle, set `models.providers.lmstudio.params.preload: false`. See [/providers/lmstudio](/providers/lmstudio) for setup and troubleshooting.
 
 ### Ollama
 
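An editor's aside on the hunk above: the added sentence points at two native LM Studio endpoints. Here is a minimal TypeScript sketch of what that discovery and auto-load handshake could look like. The endpoint paths are the ones the docs name; the payload and response shapes (the `{ model: id }` load body, scanning the listing for the id) are assumptions to check against LM Studio's REST API reference, not OpenClaw's actual implementation.

```typescript
// Hedged sketch of discovery + auto-load against LM Studio's native API.
// Paths come from the docs above; payload/response shapes are assumptions.
const base = "http://localhost:1234";

async function ensureLoaded(modelId: string): Promise<void> {
  // Discovery: list the models LM Studio knows about.
  const listing = await (await fetch(`${base}/api/v1/models`)).json();

  // ASSUMPTION: the listing mentions the model id somewhere in its JSON.
  if (!JSON.stringify(listing).includes(modelId)) {
    throw new Error(`model ${modelId} not found in LM Studio`);
  }

  // Auto-load: ask LM Studio to load the model ahead of inference.
  // ASSUMPTION: the load endpoint accepts a JSON body like { model: <id> }.
  await fetch(`${base}/api/v1/models/load`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: modelId }),
  });
}

ensureLoaded("qwen/qwen3.5-9b").catch(console.error);
```

This is roughly the step that `params.preload: false` skips, which is why the setting hands lifecycle control back to LM Studio.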
@@ -176,7 +176,22 @@ If setup reports HTTP 401, verify your API key:
 
 ### Just-in-time model loading
 
-LM Studio supports just-in-time (JIT) model loading, where models are loaded on first request. Make sure you have this enabled to avoid 'Model not loaded' errors.
+LM Studio supports just-in-time (JIT) model loading, where models are loaded on first request. OpenClaw preloads models through LM Studio's native load endpoint by default, which helps when JIT is disabled. To let LM Studio's JIT, idle TTL, and auto-evict behavior own the model lifecycle, disable OpenClaw's preload step:
 
+```json5
+{
+  models: {
+    providers: {
+      lmstudio: {
+        baseUrl: "http://localhost:1234/v1",
+        api: "openai-completions",
+        params: { preload: false },
+        models: [{ id: "qwen/qwen3.5-9b" }],
+      },
+    },
+  },
+}
+```
+
 ### LAN or tailnet LM Studio host
 
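One more illustrative note: with `preload: false`, it is the first inference request that makes LM Studio JIT-load the model. A quick way to exercise that path by hand is a plain OpenAI-style chat call against the `/v1/chat/completions` endpoint the docs mention; this generic sketch reuses the placeholder model id from the example above:

```typescript
// With preload: false, the first chat request triggers LM Studio's JIT load.
// Plain OpenAI-compatible call; the model id is the example's placeholder.
const res = await fetch("http://localhost:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen/qwen3.5-9b", // use an id returned by /api/v1/models
    messages: [{ role: "user", content: "ping" }],
  }),
});
const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```

Expect the first call to stall while the model loads; later calls should respond quickly until LM Studio's idle TTL evicts the model again.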