---
summary: Run OpenClaw on local LLMs (LM Studio, vLLM, LiteLLM, custom OpenAI endpoints)
read_when:
  - You want to serve models from your own GPU box
  - You are wiring LM Studio or an OpenAI-compatible proxy
  - You need the safest local model guidance
title: Local Models
---

Local models

Local is doable, but OpenClaw expects a large context window and strong defenses against prompt injection. Small cards force truncated context and weaker safety behavior. Aim high: ≥2 maxed-out Mac Studios or an equivalent GPU rig (~$30k+). A single 24 GB GPU works only for lighter prompts, with higher latency. Use the largest / full-size model variant you can run; aggressively quantized or “small” checkpoints raise prompt-injection risk (see Security).

If you want the lowest-friction local setup, start with Ollama and openclaw onboard. This page is the opinionated guide for higher-end local stacks and custom OpenAI-compatible local servers.
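
If you take that route, the flow is roughly as follows (a sketch: the model name is a placeholder and the exact onboarding prompts depend on your setup):

ollama pull <large-model>   # placeholder: any large model your hardware can actually hold
openclaw onboard            # wizard-driven setup; picking the Ollama-backed option here is the assumed flow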

Best current local stack. Load a large model in LM Studio (for example, a full-size Qwen, DeepSeek, or Llama build), enable the local server (default http://127.0.0.1:1234), and use the Responses API so reasoning stays separate from the final text.

{
  agents: {
    defaults: {
      model: { primary: "lmstudio/my-local-model" },
      models: {
        "anthropic/claude-opus-4-6": { alias: "Opus" },
        "lmstudio/my-local-model": { alias: "Local" },
      },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

Setup checklist

  • Install LM Studio: https://lmstudio.ai
  • In LM Studio, download the largest model build you can run (avoid “small”/heavily quantized variants), start the server, and confirm that http://127.0.0.1:1234/v1/models lists it (see the curl check after this list).
  • Replace my-local-model with the actual model ID shown in LM Studio.
  • Keep the model loaded; cold-load adds startup latency.
  • Adjust contextWindow/maxTokens if your LM Studio build differs.
  • For WhatsApp, stick to the Responses API so only the final text is sent.
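
A quick way to confirm the server is reachable and to grab the exact model ID is to query the models endpoint (assuming LM Studio returns the usual OpenAI-style list; drop the jq part if you don't have jq installed):

# List served models; copy the id into the lmstudio provider block above
curl http://127.0.0.1:1234/v1/models | jq '.data[].id'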

Keep hosted models configured even when running local; use models.mode: "merge" so fallbacks stay available.

Hybrid config: hosted primary, local fallback

{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-sonnet-4-6",
        fallbacks: ["lmstudio/my-local-model", "anthropic/claude-opus-4-6"],
      },
      models: {
        "anthropic/claude-sonnet-4-6": { alias: "Sonnet" },
        "lmstudio/my-local-model": { alias: "Local" },
        "anthropic/claude-opus-4-6": { alias: "Opus" },
      },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

Local-first with hosted safety net

Swap the primary and fallback order; keep the same providers block and models.mode: "merge" so you can fall back to Sonnet or Opus when the local box is down.
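
For illustration, only the model block changes relative to the hybrid example above; the aliases, the providers block, and models.mode: "merge" stay exactly as they are:

{
  agents: {
    defaults: {
      model: {
        primary: "lmstudio/my-local-model",
        fallbacks: ["anthropic/claude-sonnet-4-6", "anthropic/claude-opus-4-6"],
      },
    },
  },
}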

Regional hosting / data routing

  • Hosted MiniMax/Kimi/GLM variants also exist on OpenRouter with region-pinned endpoints (e.g., US-hosted). Pick the regional variant there to keep traffic in your chosen jurisdiction while still using models.mode: "merge" for Anthropic/OpenAI fallbacks.
  • Local-only remains the strongest privacy path; hosted regional routing is the middle ground when you need provider features but want control over data flow.

Other OpenAI-compatible local proxies

vLLM, LiteLLM, OAI-proxy, or custom gateways work if they expose an OpenAI-style /v1 endpoint. Replace the provider block above with your endpoint and model ID:

{
  models: {
    mode: "merge",
    providers: {
      local: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "sk-local",
        api: "openai-responses",
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 120000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

Keep models.mode: "merge" so hosted models stay available as fallbacks.
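
As a concrete example, a vLLM server that matches the provider block above could be launched roughly like this (a sketch: the model name is a placeholder and flag names can differ between vLLM versions):

# Serves an OpenAI-compatible API at http://127.0.0.1:8000/v1
vllm serve <your-model-id> \
  --port 8000 \
  --api-key sk-local \
  --max-model-len 120000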

Troubleshooting

  • Can the gateway reach the proxy? Run curl http://127.0.0.1:1234/v1/models.
  • LM Studio model unloaded? Reload; cold start is a common “hanging” cause.
  • Context errors? Lower contextWindow or raise your server limit.
  • Safety: local models skip provider-side filters; keep agents narrow and compaction on to limit prompt injection blast radius.
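
If /v1/models responds but requests still hang, a minimal chat-completions call is a quick smoke test (LM Studio and most OpenAI-compatible servers accept this; swap in your actual model ID):

curl http://127.0.0.1:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-local-model", "messages": [{"role": "user", "content": "ping"}]}'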