---
summary: "Run OpenClaw on local LLMs (LM Studio, vLLM, LiteLLM, custom OpenAI endpoints)"
read_when:
- You want to serve models from your own GPU box
- You are wiring LM Studio or an OpenAI-compatible proxy
- You need the safest local model guidance
title: "Local models"
---
Local is doable, but OpenClaw expects a large context window and strong defenses against prompt injection. Small cards truncate context and weaken those defenses. Aim high: **≥2 maxed-out Mac Studios or an equivalent GPU rig (~$30k+)**. A single **24 GB** GPU works only for lighter prompts, with higher latency. Use the **largest / full-size model variant you can run**; aggressively quantized or “small” checkpoints raise prompt-injection risk (see [Security](/gateway/security)).
If you want the lowest-friction local setup, start with [LM Studio](/providers/lmstudio) or [Ollama](/providers/ollama) and `openclaw onboard`. This page is the opinionated guide for higher-end local stacks and custom OpenAI-compatible local servers.
## Recommended: LM Studio + large local model (Responses API)
This is the best current local stack. Load a large model in LM Studio (for example, a full-size Qwen, DeepSeek, or Llama build), enable the local server (default `http://127.0.0.1:1234`), and use the Responses API to keep reasoning separate from the final text.
```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/my-local-model" },
      models: {
        "anthropic/claude-opus-4-6": { alias: "Opus" },
        "lmstudio/my-local-model": { alias: "Local" },
      },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
**Setup checklist**
- Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
- In LM Studio, download the **largest model build available** (avoid “small”/heavily quantized variants), start the server, and confirm `http://127.0.0.1:1234/v1/models` lists the model (quick check below).
- Replace `my-local-model` with the actual model ID shown in LM Studio.
- Keep the model loaded; cold-load adds startup latency.
- Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
- For WhatsApp, stick to the Responses API so only final text is sent.
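A quick smoke test before wiring the config; the response should be a JSON list whose entries include the model ID you put in `models[].id`:
```bash
# Should return a JSON model list that includes your model ID.
curl -s http://127.0.0.1:1234/v1/models
```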
Keep hosted models configured even when running local; use `models.mode: "merge"` so fallbacks stay available.
### Hybrid config: hosted primary, local fallback
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-sonnet-4-6",
        fallbacks: ["lmstudio/my-local-model", "anthropic/claude-opus-4-6"],
      },
      models: {
        "anthropic/claude-sonnet-4-6": { alias: "Sonnet" },
        "lmstudio/my-local-model": { alias: "Local" },
        "anthropic/claude-opus-4-6": { alias: "Opus" },
      },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
### Local-first with hosted safety net
Swap the primary and fallback order; keep the same providers block and `models.mode: "merge"` so you can fall back to Sonnet or Opus when the local box is down.
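A minimal sketch of the swapped model block (the `lmstudio` provider config is unchanged from the hybrid example above):
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "lmstudio/my-local-model",
        fallbacks: ["anthropic/claude-sonnet-4-6", "anthropic/claude-opus-4-6"],
      },
    },
  },
}
```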
### Regional hosting / data routing
- Hosted MiniMax/Kimi/GLM variants also exist on OpenRouter with region-pinned endpoints (e.g., US-hosted). Pick the regional variant there to keep traffic in your chosen jurisdiction while still using `models.mode: "merge"` for Anthropic/OpenAI fallbacks.
- Local-only remains the strongest privacy path; hosted regional routing is the middle ground when you need provider features but want control over data flow.
## Other OpenAI-compatible local proxies
vLLM, LiteLLM, OAI-proxy, or custom gateways work if they expose an OpenAI-style `/v1` endpoint. Replace the provider block above with your endpoint and model ID:
```json5
{
  models: {
    mode: "merge",
    providers: {
      local: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "sk-local",
        api: "openai-responses",
        timeoutSeconds: 300,
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 120000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
Keep `models.mode: "merge"` so hosted models stay available as fallbacks.
Use `models.providers.<id>.timeoutSeconds` for slow local or remote model
servers before raising `agents.defaults.timeoutSeconds`. The provider timeout
applies only to model HTTP requests; it covers the connect, header, and
body-streaming phases as well as the total guarded-fetch abort.
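For example, a sketch contrasting the two knobs (the 300/600 values are illustrative, not recommendations):
```json5
{
  // Per-provider timeout: applies only to model HTTP requests for this provider.
  models: {
    providers: {
      local: { timeoutSeconds: 300 },
    },
  },
  // Agent-wide timeout: raise this only if the provider timeout alone is not enough.
  // agents: { defaults: { timeoutSeconds: 600 } },
}
```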
<Note>
For custom OpenAI-compatible providers, persisting a non-secret local marker such as `apiKey: "ollama-local"` is accepted when `baseUrl` resolves to loopback, a private LAN, `.local`, or a bare hostname. OpenClaw treats it as a valid local credential instead of reporting a missing key. Use a real value for any provider that accepts a public hostname.
</Note>
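For example, a sketch assuming a vLLM server on a private LAN address (the provider ID, host, and model ID are placeholders):
```json5
{
  models: {
    mode: "merge",
    providers: {
      "vllm-lan": {
        // Private LAN address, so the non-secret local marker is accepted as a credential.
        baseUrl: "http://192.168.1.50:8000/v1",
        apiKey: "ollama-local",
        api: "openai-responses",
        models: [
          {
            id: "my-local-model",
            name: "LAN Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 120000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```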
Behavior notes for local/proxied `/v1` backends:
- OpenClaw treats these as proxy-style OpenAI-compatible routes, not native
OpenAI endpoints.
- Native OpenAI-only request shaping does not apply here: no `service_tier`,
no Responses `store`, no OpenAI reasoning-compat payload shaping, and no
prompt-cache hints.
- Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
are not injected on these custom proxy URLs.
Compatibility notes for stricter OpenAI-compatible backends:
- Some servers accept only string `messages[].content` on Chat Completions, not
structured content-part arrays. Set
`models.providers.<provider>.models[].compat.requiresStringContent: true` for
those endpoints (see the compat sketch after this list).
- Some local models emit standalone bracketed tool requests as text, such as
`[tool_name]` followed by JSON and `[END_TOOL_REQUEST]`. OpenClaw promotes
those into real tool calls only when the name exactly matches a registered
tool for the turn; otherwise the block is treated as unsupported text and is
hidden from user-visible replies.
- If a model emits JSON, XML, or ReAct-style text that looks like a tool call
but the provider did not emit a structured invocation, OpenClaw leaves it as
text and logs a warning with the run id, provider/model, detected pattern, and
tool name when available. Treat that as provider/model tool-call
incompatibility, not a completed tool run.
- If tools appear as assistant text instead of running, for example raw JSON,
XML, ReAct syntax, or an empty `tool_calls` array in the provider response,
first verify the server is using a tool-call-capable chat template/parser. For
OpenAI-compatible Chat Completions backends whose parser works only when tool
use is forced, set a per-model request override instead of relying on text
parsing:
```json5
{
  agents: {
    defaults: {
      models: {
        "local/my-local-model": {
          params: {
            extra_body: {
              tool_choice: "required",
            },
          },
        },
      },
    },
  },
}
```
Use this only for models/sessions where every normal turn should call a tool.
It overrides OpenClaw's default proxy value of `tool_choice: "auto"`.
Replace `local/my-local-model` with the exact provider/model ref shown by
`openclaw models list`.
```bash
openclaw config set agents.defaults.models '{"local/my-local-model":{"params":{"extra_body":{"tool_choice":"required"}}}}' --strict-json --merge
```
- Some smaller or stricter local backends are unstable with OpenClaw's full
agent-runtime prompt shape, especially when tool schemas are included. If the
backend works for tiny direct `/v1/chat/completions` calls but fails on normal
OpenClaw agent turns, first try
`agents.defaults.experimental.localModelLean: true` to drop heavyweight
default tools like `browser`, `cron`, and `message`; this is an experimental
flag, not a stable default-mode setting. See
[Experimental Features](/concepts/experimental-features). If that still fails, try
`models.providers.<provider>.models[].compat.supportsTools: false` (also shown in the compat sketch after this list).
- If the backend still fails only on larger OpenClaw runs, the remaining issue
is usually upstream model/server capacity or a backend bug, not OpenClaw's
transport layer.
## Troubleshooting
- Gateway can reach the proxy? `curl http://127.0.0.1:1234/v1/models`.
- LM Studio model unloaded? Reload; cold start is a common “hanging” cause.
- Local server says `terminated`, `ECONNRESET`, or closes the stream mid-turn?
OpenClaw records a low-cardinality `model.call.error.failureKind` plus the
OpenClaw process RSS/heap snapshot in diagnostics. For LM Studio/Ollama
memory pressure, match that timestamp against the server log or macOS crash /
jetsam log to confirm whether the model server was killed.
- OpenClaw warns when the detected context window is below **32k** and blocks below **16k**. If you hit that preflight, raise the server/model context limit or choose a larger model.
- Context errors? Lower `contextWindow` or raise your server limit.
- OpenAI-compatible server returns `messages[].content ... expected a string`?
Add `compat.requiresStringContent: true` on that model entry.
- Direct tiny `/v1/chat/completions` calls work, but `openclaw infer model run`
fails on Gemma or another local model? Disable tool schemas first with
`compat.supportsTools: false`, then retest. If the server still crashes only
on larger OpenClaw prompts, treat it as an upstream server/model limitation.
- Tool calls show up as raw JSON/XML/ReAct text, or the provider returns an
empty `tool_calls` array? Do not add a proxy that blindly converts assistant
text into tool execution. Fix the server chat template/parser first. If the
model only works when tool use is forced, add the per-model
`params.extra_body.tool_choice: "required"` override above and use that model
entry only for sessions where a tool call is expected on every turn.
- Safety: local models skip provider-side filters; keep agents narrow and compaction on to limit the prompt-injection blast radius.
## Related
- [Configuration reference](/gateway/configuration-reference)
- [Model failover](/concepts/model-failover)