From 3dbf31885cfd4a3d433a919b20a6e9d05aac97ee Mon Sep 17 00:00:00 2001
From: Vincent Koc
Date: Sat, 2 May 2026 14:42:40 -0700
Subject: [PATCH] docs(experimental): explain local model lean mode for operators

---
 docs/concepts/experimental-features.md | 52 +++++++++++++++++++++++---
 1 file changed, 46 insertions(+), 6 deletions(-)

diff --git a/docs/concepts/experimental-features.md b/docs/concepts/experimental-features.md
index f8d96aaa6f0..93bec91c322 100644
--- a/docs/concepts/experimental-features.md
+++ b/docs/concepts/experimental-features.md
@@ -29,13 +29,53 @@ Treat them differently from normal config:
 
 ## Local model lean mode
 
-`agents.defaults.experimental.localModelLean: true` is a pressure-release valve
-for weaker local-model setups. It trims heavyweight default tools like
-`browser`, `cron`, and `message` so the prompt shape is smaller and less brittle
-for small-context or stricter OpenAI-compatible backends.
+`agents.defaults.experimental.localModelLean: true` is a pressure-release valve for weaker local-model setups. When it is on, OpenClaw drops three default tools — `browser`, `cron`, and `message` — from the agent's tool surface for every turn. Nothing else changes.
 
-That is intentionally **not** the normal path. If your backend handles the full
-runtime cleanly, leave this off.
+### Why these three tools
+
+These three tools have the largest descriptions and the most parameter shapes in the default OpenClaw runtime. On a small-context or stricter OpenAI-compatible backend, that is the difference between:
+
+- Tool schemas fitting cleanly in the prompt vs. crowding out conversation history.
+- The model picking the right tool vs. emitting malformed tool calls because there are too many similar-looking schemas.
+- The Chat Completions adapter staying inside the server's structured-output limits vs. tripping a 400 on tool-call payload size.
+
+Removing them does not silently rewire OpenClaw — it just makes the tool list shorter. The model still has `read`, `write`, `edit`, `exec`, `apply_patch`, web search/fetch (when configured), memory, and session/agent tools available.
+
+### When to turn it on
+
+Enable lean mode when you have already proved the model can talk to the Gateway but full agent turns misbehave. The typical signal chain is:
+
+1. `openclaw infer model run --gateway --model --prompt "Reply with exactly: pong"` succeeds.
+2. A normal agent turn fails with malformed tool calls, oversized prompts, or the model ignoring its tools.
+3. Toggling `localModelLean: true` clears the failure.
+
+### When to leave it off
+
+If your backend handles the full default runtime cleanly, leave this off. Lean mode is a workaround, not a default. It exists because some local stacks need a smaller tool surface to behave; hosted models and well-resourced local rigs do not.
+
+Lean mode also does not replace `tools.profile`, `tools.allow`/`tools.deny`, or the model `compat.supportsTools: false` escape hatch. If you need a permanent narrower tool surface for a specific agent, prefer those stable knobs over the experimental flag.
+
+### Enable
+
+```json5
+{
+  agents: {
+    defaults: {
+      experimental: {
+        localModelLean: true,
+      },
+    },
+  },
+}
+```
+
+Restart the Gateway after changing the flag, then confirm the trimmed tool list with:
+
+```bash
+openclaw status --deep
+```
+
+The deep status output lists the active agent tools; `browser`, `cron`, and `message` should be absent when lean mode is on.
 
 ## Experimental does not mean hidden
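
---

Reviewer note: the patch describes lean mode as a plain subtraction of three names from the default tool list, with nothing else changing. A minimal Python sketch of that documented behavior may help readers of this patch; the function name, and any tool names beyond those the doc itself lists, are illustrative assumptions, not OpenClaw's actual internals:

```python
# Illustrative sketch of the documented `localModelLean` behavior only;
# this is not OpenClaw's real implementation.
LEAN_EXCLUDED = {"browser", "cron", "message"}  # the three trimmed defaults

# Hypothetical default tool surface; names beyond the doc's examples are made up.
DEFAULT_TOOLS = [
    "read", "write", "edit", "exec", "apply_patch",
    "browser", "cron", "message", "memory",
]

def active_tools(tools, local_model_lean=False):
    """Return the tool surface offered to the model for a turn.

    With lean mode off, the list is unchanged; with it on, only the three
    heavyweight defaults are dropped ("nothing else changes").
    """
    if not local_model_lean:
        return list(tools)
    return [t for t in tools if t not in LEAN_EXCLUDED]

print(active_tools(DEFAULT_TOOLS, local_model_lean=True))
```

The point of the sketch is the invariant operators should verify with `openclaw status --deep`: with the flag on, exactly `browser`, `cron`, and `message` disappear and every other tool survives.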