Configuration — agents
Agent-scoped configuration keys for agent defaults, multi-agent routing, session, messages, and talk config, under agents.*, multiAgent.*, session.*, messages.*, and talk.*. For channels, tools, gateway runtime, and other top-level keys, see the Configuration reference.
Agent defaults
agents.defaults.workspace
Default: ~/.openclaw/workspace.
{
agents: { defaults: { workspace: "~/.openclaw/workspace" } },
}
agents.defaults.repoRoot
Optional repository root shown in the system prompt's Runtime line. If unset, OpenClaw auto-detects by walking upward from the workspace.
{
agents: { defaults: { repoRoot: "~/Projects/openclaw" } },
}
agents.defaults.skills
Optional default skill allowlist for agents that do not set
agents.list[].skills.
{
agents: {
defaults: { skills: ["github", "weather"] },
list: [
{ id: "writer" }, // inherits github, weather
{ id: "docs", skills: ["docs-search"] }, // replaces defaults
{ id: "locked-down", skills: [] }, // no skills
],
},
}
- Omit agents.defaults.skills for unrestricted skills by default.
- Omit agents.list[].skills to inherit the defaults.
- Set agents.list[].skills: [] for no skills.
- A non-empty agents.list[].skills list is the final set for that agent; it does not merge with defaults.
agents.defaults.skipBootstrap
Disables automatic creation of workspace bootstrap files (AGENTS.md, SOUL.md, TOOLS.md, IDENTITY.md, USER.md, HEARTBEAT.md, BOOTSTRAP.md).
{
agents: { defaults: { skipBootstrap: true } },
}
agents.defaults.contextInjection
Controls when workspace bootstrap files are injected into the system prompt. Default: "always".
- "continuation-skip": safe continuation turns (after a completed assistant response) skip workspace bootstrap re-injection, reducing prompt size. Heartbeat runs and post-compaction retries still rebuild context.
- "never": disables workspace bootstrap and context-file injection on every turn. Use this only for agents that fully own their prompt lifecycle (custom context engines, native runtimes that build their own context, or specialized bootstrap-free workflows). Heartbeat and compaction-recovery turns also skip injection.
{
agents: { defaults: { contextInjection: "continuation-skip" } },
}
agents.defaults.bootstrapMaxChars
Max characters per workspace bootstrap file before truncation. Default: 12000.
{
agents: { defaults: { bootstrapMaxChars: 12000 } },
}
agents.defaults.bootstrapTotalMaxChars
Max total characters injected across all workspace bootstrap files. Default: 60000.
{
agents: { defaults: { bootstrapTotalMaxChars: 60000 } },
}
agents.defaults.bootstrapPromptTruncationWarning
Controls agent-visible warning text when bootstrap context is truncated.
Default: "once".
- "off": never inject warning text into the system prompt.
- "once": inject the warning once per unique truncation signature (recommended).
- "always": inject the warning on every run when truncation exists.
{
agents: { defaults: { bootstrapPromptTruncationWarning: "once" } }, // off | once | always
}
Context budget ownership map
OpenClaw has multiple high-volume prompt/context budgets, and they are intentionally split by subsystem instead of all flowing through one generic knob.
- agents.defaults.bootstrapMaxChars / agents.defaults.bootstrapTotalMaxChars: normal workspace bootstrap injection.
- agents.defaults.startupContext.*: one-shot /new and /reset startup prelude, including recent daily memory/*.md files.
- skills.limits.*: the compact skills list injected into the system prompt.
- agents.defaults.contextLimits.*: bounded runtime excerpts and injected runtime-owned blocks.
- memory.qmd.limits.*: indexed memory-search snippet and injection sizing.
Use the matching per-agent override only when one agent needs a different budget:
- agents.list[].skillsLimits.maxSkillsPromptChars
- agents.list[].contextLimits.*
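As a hedged sketch (values illustrative, not recommendations), the separate budget owners above sit side by side in config like this:

```json5
{
  agents: {
    defaults: {
      bootstrapTotalMaxChars: 60000,                // workspace bootstrap injection
      startupContext: { maxTotalChars: 2800 },      // /new and /reset startup prelude
      contextLimits: { toolResultMaxChars: 16000 }, // runtime-owned context blocks
    },
  },
  skills: {
    limits: { maxSkillsPromptChars: 18000 },        // compact skills list in the system prompt
  },
}
```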
agents.defaults.startupContext
Controls the first-turn startup prelude injected on bare /new and /reset
runs.
{
agents: {
defaults: {
startupContext: {
enabled: true,
applyOn: ["new", "reset"],
dailyMemoryDays: 2,
maxFileBytes: 16384,
maxFileChars: 1200,
maxTotalChars: 2800,
},
},
},
}
agents.defaults.contextLimits
Shared defaults for bounded runtime context surfaces.
{
agents: {
defaults: {
contextLimits: {
memoryGetMaxChars: 12000,
memoryGetDefaultLines: 120,
toolResultMaxChars: 16000,
postCompactionMaxChars: 1800,
},
},
},
}
- memoryGetMaxChars: default memory_get excerpt cap before truncation metadata and the continuation notice are added.
- memoryGetDefaultLines: default memory_get line window when lines is omitted.
- toolResultMaxChars: live tool-result cap used for persisted results and overflow recovery.
- postCompactionMaxChars: AGENTS.md excerpt cap used during post-compaction refresh injection.
agents.list[].contextLimits
Per-agent override for the shared contextLimits knobs. Omitted fields inherit
from agents.defaults.contextLimits.
{
agents: {
defaults: {
contextLimits: {
memoryGetMaxChars: 12000,
toolResultMaxChars: 16000,
},
},
list: [
{
id: "tiny-local",
contextLimits: {
memoryGetMaxChars: 6000,
toolResultMaxChars: 8000,
},
},
],
},
}
skills.limits.maxSkillsPromptChars
Global cap for the compact skills list injected into the system prompt. This
does not affect reading SKILL.md files on demand.
{
skills: {
limits: {
maxSkillsPromptChars: 18000,
},
},
}
agents.list[].skillsLimits.maxSkillsPromptChars
Per-agent override for the skills prompt budget.
{
agents: {
list: [
{
id: "tiny-local",
skillsLimits: {
maxSkillsPromptChars: 6000,
},
},
],
},
}
agents.defaults.imageMaxDimensionPx
Max pixel size for the longest image side in transcript/tool image blocks before provider calls.
Default: 1200.
Lower values usually reduce vision-token usage and request payload size for screenshot-heavy runs. Higher values preserve more visual detail.
{
agents: { defaults: { imageMaxDimensionPx: 1200 } },
}
agents.defaults.userTimezone
Timezone for system prompt context (not message timestamps). Falls back to host timezone.
{
agents: { defaults: { userTimezone: "America/Chicago" } },
}
agents.defaults.timeFormat
Time format in system prompt. Default: auto (OS preference).
{
agents: { defaults: { timeFormat: "auto" } }, // auto | 12 | 24
}
agents.defaults.model
Default model routing and the configured model catalog, shown below together with the media-model and runtime defaults that sit alongside it under agents.defaults.
{
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-6": { alias: "opus" },
"minimax/MiniMax-M2.7": { alias: "minimax" },
},
model: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.7"],
},
imageModel: {
primary: "openrouter/qwen/qwen-2.5-vl-72b-instruct:free",
fallbacks: ["openrouter/google/gemini-2.0-flash-vision:free"],
},
imageGenerationModel: {
primary: "openai/gpt-image-2",
fallbacks: ["google/gemini-3.1-flash-image-preview"],
},
videoGenerationModel: {
primary: "qwen/wan2.6-t2v",
fallbacks: ["qwen/wan2.6-i2v"],
},
pdfModel: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["openai/gpt-5.4-mini"],
},
params: { cacheRetention: "long" }, // global default provider params
agentRuntime: {
id: "pi", // pi | auto | registered harness id, e.g. codex
fallback: "pi", // pi | none
},
pdfMaxBytesMb: 10,
pdfMaxPages: 20,
thinkingDefault: "low",
verboseDefault: "off",
elevatedDefault: "on",
timeoutSeconds: 600,
mediaMaxMb: 5,
contextTokens: 200000,
maxConcurrent: 3,
},
},
}
- model: accepts either a string ("provider/model") or an object ({ primary, fallbacks }).
  - String form sets only the primary model.
  - Object form sets primary plus ordered failover models.
- imageModel: accepts either a string ("provider/model") or an object ({ primary, fallbacks }).
  - Used by the image tool path as its vision-model config.
  - Also used as fallback routing when the selected/default model cannot accept image input.
- imageGenerationModel: accepts either a string ("provider/model") or an object ({ primary, fallbacks }).
  - Used by the shared image-generation capability and any future tool/plugin surface that generates images.
  - Typical values: google/gemini-3.1-flash-image-preview for native Gemini image generation, fal/fal-ai/flux/dev for fal, openai/gpt-image-2 for OpenAI Images, or openai/gpt-image-1.5 for transparent-background OpenAI PNG/WebP output.
  - If you select a provider/model directly, configure matching provider auth too (for example GEMINI_API_KEY or GOOGLE_API_KEY for google/*, OPENAI_API_KEY or OpenAI Codex OAuth for openai/gpt-image-2 and openai/gpt-image-1.5, FAL_KEY for fal/*).
  - If omitted, image_generate can still infer an auth-backed provider default. It tries the current default provider first, then the remaining registered image-generation providers in provider-id order.
- musicGenerationModel: accepts either a string ("provider/model") or an object ({ primary, fallbacks }).
  - Used by the shared music-generation capability and the built-in music_generate tool.
  - Typical values: google/lyria-3-clip-preview, google/lyria-3-pro-preview, or minimax/music-2.6.
  - If omitted, music_generate can still infer an auth-backed provider default. It tries the current default provider first, then the remaining registered music-generation providers in provider-id order.
  - If you select a provider/model directly, configure the matching provider auth/API key too.
- videoGenerationModel: accepts either a string ("provider/model") or an object ({ primary, fallbacks }).
  - Used by the shared video-generation capability and the built-in video_generate tool.
  - Typical values: qwen/wan2.6-t2v, qwen/wan2.6-i2v, qwen/wan2.6-r2v, qwen/wan2.6-r2v-flash, or qwen/wan2.7-r2v.
  - If omitted, video_generate can still infer an auth-backed provider default. It tries the current default provider first, then the remaining registered video-generation providers in provider-id order.
  - If you select a provider/model directly, configure the matching provider auth/API key too.
  - The bundled Qwen video-generation provider supports up to 1 output video, 1 input image, 4 input videos, 10 seconds duration, and provider-level size, aspectRatio, resolution, audio, and watermark options.
- pdfModel: accepts either a string ("provider/model") or an object ({ primary, fallbacks }).
  - Used by the pdf tool for model routing.
  - If omitted, the PDF tool falls back to imageModel, then to the resolved session/default model.
- pdfMaxBytesMb: default PDF size limit for the pdf tool when maxBytesMb is not passed at call time.
- pdfMaxPages: default maximum pages considered by extraction fallback mode in the pdf tool.
- verboseDefault: default verbose level for agents. Values: "off", "on", "full". Default: "off".
- elevatedDefault: default elevated-output level for agents. Values: "off", "on", "ask", "full". Default: "on".
- model.primary: format provider/model (e.g. openai/gpt-5.5 for API-key access or openai-codex/gpt-5.5 for Codex OAuth). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit provider/model). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
- models: the configured model catalog and allowlist for /model. Each entry can include alias (shortcut) and params (provider-specific, for example temperature, maxTokens, cacheRetention, context1m, responsesServerCompaction, responsesCompactThreshold, chat_template_kwargs, extra_body/extraBody).
  - Safe edits: use openclaw config set agents.defaults.models '<json>' --strict-json --merge to add entries. config set refuses replacements that would remove existing allowlist entries unless you pass --replace.
  - Provider-scoped configure/onboarding flows merge selected provider models into this map and preserve unrelated providers already configured.
  - For direct OpenAI Responses models, server-side compaction is enabled automatically. Use params.responsesServerCompaction: false to stop injecting context_management, or params.responsesCompactThreshold to override the threshold. See OpenAI server-side compaction.
- params: global default provider parameters applied to all models. Set at agents.defaults.params (e.g. { cacheRetention: "long" }).
- params merge precedence (config): agents.defaults.params (global base) is overridden by agents.defaults.models["provider/model"].params (per-model), then agents.list[].params (matching agent id) overrides by key. See Prompt Caching for details.
- params.extra_body / params.extraBody: advanced pass-through JSON merged into api: "openai-completions" request bodies for OpenAI-compatible proxies. If it collides with generated request keys, the extra body wins; non-native completions routes still strip the OpenAI-only store field afterward.
- params.chat_template_kwargs: vLLM/OpenAI-compatible chat-template arguments merged into top-level api: "openai-completions" request bodies. For vllm/nemotron-3-* with thinking off, OpenClaw automatically sends enable_thinking: false and force_nonempty_content: true; explicit chat_template_kwargs override those defaults, and extra_body.chat_template_kwargs still has final precedence.
- params.preserveThinking: Z.AI-only opt-in for preserved thinking. When enabled and thinking is on, OpenClaw sends thinking.clear_thinking: false and replays prior reasoning_content; see Z.AI thinking and preserved thinking.
- agentRuntime: default low-level agent runtime policy. An omitted id defaults to OpenClaw Pi. Use id: "pi" to force the built-in PI harness, id: "auto" to let registered plugin harnesses claim supported models, a registered harness id such as id: "codex", or a supported CLI backend alias such as id: "claude-cli". Set fallback: "none" to disable automatic PI fallback. Explicit plugin runtimes such as codex fail closed by default unless you set fallback: "pi" in the same override scope. Keep model refs canonical as provider/model; select Codex, Claude CLI, Gemini CLI, and other execution backends through runtime config instead of legacy runtime provider prefixes. See Agent runtimes for how this differs from provider/model selection.
- Config writers that mutate these fields (for example /models set, /models set-image, and fallback add/remove commands) save canonical object form and preserve existing fallback lists when possible.
- maxConcurrent: max parallel agent runs across sessions (each session is still serialized). Default: 4.
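To make the string-versus-object distinction above concrete, a minimal sketch (model ids taken from the example above):

```json5
// String form: sets only the primary model.
{ agents: { defaults: { model: "anthropic/claude-opus-4-6" } } }

// Object form: primary plus ordered failover models.
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["minimax/MiniMax-M2.7"],
      },
    },
  },
}
```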
agents.defaults.agentRuntime
agentRuntime controls which low-level executor runs agent turns. Most deployments should keep the default OpenClaw Pi runtime. Switch runtimes only when a trusted plugin provides a native harness, such as the bundled Codex app-server harness, or when you want a supported CLI backend such as Claude CLI. For the mental model, see Agent runtimes.
{
agents: {
defaults: {
model: "openai/gpt-5.5",
agentRuntime: {
id: "codex",
fallback: "none",
},
},
},
}
- id: "auto", "pi", a registered plugin harness id, or a supported CLI backend alias. The bundled Codex plugin registers codex; the bundled Anthropic plugin provides the claude-cli CLI backend.
- fallback: "pi" or "none". With id: "auto", an omitted fallback defaults to "pi" so old configs can keep using PI when no plugin harness claims a run. In explicit plugin runtime mode, such as id: "codex", an omitted fallback defaults to "none" so a missing harness fails instead of silently using PI. Runtime overrides do not inherit fallback from a broader scope; set fallback: "pi" alongside the explicit runtime when you intentionally want that compatibility fallback. Selected plugin harness failures always surface directly.
- Environment overrides: OPENCLAW_AGENT_RUNTIME=<id|auto|pi> overrides id; OPENCLAW_AGENT_HARNESS_FALLBACK=pi|none overrides fallback for that process.
- For Codex-only deployments, set model: "openai/gpt-5.5" and agentRuntime.id: "codex". You may also set agentRuntime.fallback: "none" explicitly for readability; it is the default for explicit plugin runtimes.
- For Claude CLI deployments, prefer model: "anthropic/claude-opus-4-7" plus agentRuntime.id: "claude-cli". Legacy claude-cli/claude-opus-4-7 model refs still work for compatibility, but new config should keep provider/model selection canonical and put the execution backend in agentRuntime.id.
- Older runtime-policy keys are rewritten to agentRuntime by openclaw doctor --fix.
- Harness choice is pinned per session id after the first embedded run. Config/env changes affect new or reset sessions, not an existing transcript. Legacy sessions with transcript history but no recorded pin are treated as PI-pinned.
- /status reports the effective runtime, for example Runtime: OpenClaw Pi Default or Runtime: OpenAI Codex.
- This only controls text agent-turn execution. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.
Built-in alias shorthands (only apply when the model is in agents.defaults.models):
| Alias | Model |
|---|---|
| opus | anthropic/claude-opus-4-6 |
| sonnet | anthropic/claude-sonnet-4-6 |
| gpt | openai/gpt-5.5 or openai-codex/gpt-5.5 |
| gpt-mini | openai/gpt-5.4-mini |
| gpt-nano | openai/gpt-5.4-nano |
| gemini | google/gemini-3.1-pro-preview |
| gemini-flash | google/gemini-3-flash-preview |
| gemini-flash-lite | google/gemini-3.1-flash-lite-preview |
Your configured aliases always win over defaults.
Z.AI GLM-4.x models automatically enable thinking mode unless you set --thinking off or define agents.defaults.models["zai/<model>"].params.thinking yourself.
Z.AI models enable tool_stream by default for tool call streaming. Set agents.defaults.models["zai/<model>"].params.tool_stream to false to disable it.
Anthropic Claude 4.6 models default to adaptive thinking when no explicit thinking level is set.
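For example, the Z.AI tool_stream opt-out described above can be set per model; a sketch using a hypothetical zai/<model> id:

```json5
{
  agents: {
    defaults: {
      models: {
        // "zai/glm-4.7" is a hypothetical id; substitute your actual zai/<model>.
        "zai/glm-4.7": {
          params: { tool_stream: false }, // disable tool-call streaming
        },
      },
    },
  },
}
```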
agents.defaults.cliBackends
Optional CLI backends for text-only fallback runs (no tool calls). Useful as a backup when API providers fail.
{
agents: {
defaults: {
cliBackends: {
"codex-cli": {
command: "/opt/homebrew/bin/codex",
},
"my-cli": {
command: "my-cli",
args: ["--json"],
output: "json",
modelArg: "--model",
sessionArg: "--session",
sessionMode: "existing",
systemPromptArg: "--system",
// Or use systemPromptFileArg when the CLI accepts a prompt file flag.
systemPromptWhen: "first",
imageArg: "--image",
imageMode: "repeat",
},
},
},
},
}
- CLI backends are text-first; tools are always disabled.
- Sessions are supported when sessionArg is set.
- Image pass-through is supported when imageArg accepts file paths.
agents.defaults.systemPromptOverride
Replace the entire OpenClaw-assembled system prompt with a fixed string. Set at the default level (agents.defaults.systemPromptOverride) or per agent (agents.list[].systemPromptOverride). Per-agent values take precedence; an empty or whitespace-only value is ignored. Useful for controlled prompt experiments.
{
agents: {
defaults: {
systemPromptOverride: "You are a helpful assistant.",
},
},
}
agents.defaults.promptOverlays
Provider-independent prompt overlays applied by model family. GPT-5-family model ids receive the shared behavior contract across providers; personality controls only the friendly interaction-style layer.
{
agents: {
defaults: {
promptOverlays: {
gpt5: {
personality: "friendly", // friendly | on | off
},
},
},
},
}
- "friendly" (default) and "on" enable the friendly interaction-style layer.
- "off" disables only the friendly layer; the tagged GPT-5 behavior contract remains enabled.
- Legacy plugins.entries.openai.config.personality is still read when this shared setting is unset.
agents.defaults.heartbeat
Periodic heartbeat runs.
{
agents: {
defaults: {
heartbeat: {
every: "30m", // 0m disables
model: "openai/gpt-5.4-mini",
includeReasoning: false,
includeSystemPromptSection: true, // default: true; false omits the Heartbeat section from the system prompt
lightContext: false, // default: false; true keeps only HEARTBEAT.md from workspace bootstrap files
isolatedSession: false, // default: false; true runs each heartbeat in a fresh session (no conversation history)
session: "main",
to: "+15555550123",
directPolicy: "allow", // allow (default) | block
target: "none", // default: none | options: last | whatsapp | telegram | discord | ...
prompt: "Read HEARTBEAT.md if it exists...",
ackMaxChars: 300,
suppressToolErrorWarnings: false,
timeoutSeconds: 45,
},
},
},
}
- every: duration string (ms/s/m/h). Default: 30m (API-key auth) or 1h (OAuth auth). Set to 0m to disable.
- includeSystemPromptSection: when false, omits the Heartbeat section from the system prompt and skips HEARTBEAT.md injection into bootstrap context. Default: true.
- suppressToolErrorWarnings: when true, suppresses tool error warning payloads during heartbeat runs.
- timeoutSeconds: maximum time in seconds allowed for a heartbeat agent turn before it is aborted. Leave unset to use agents.defaults.timeoutSeconds.
- directPolicy: direct/DM delivery policy. allow (default) permits direct-target delivery. block suppresses direct-target delivery and emits reason=dm-blocked.
- lightContext: when true, heartbeat runs use lightweight bootstrap context and keep only HEARTBEAT.md from workspace bootstrap files.
- isolatedSession: when true, each heartbeat runs in a fresh session with no prior conversation history. Same isolation pattern as cron sessionTarget: "isolated". Reduces per-heartbeat token cost from ~100K to ~2-5K tokens.
- Per-agent: set agents.list[].heartbeat. When any agent defines heartbeat, only those agents run heartbeats.
- Heartbeats run full agent turns; shorter intervals burn more tokens.
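A per-agent heartbeat override, as described above, might look like this (the agent id is illustrative):

```json5
{
  agents: {
    list: [
      {
        id: "ops", // hypothetical agent id; only agents that define heartbeat run heartbeats
        heartbeat: {
          every: "1h",
          target: "last",
        },
      },
    ],
  },
}
```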
agents.defaults.compaction
{
agents: {
defaults: {
compaction: {
mode: "safeguard", // default | safeguard
provider: "my-provider", // id of a registered compaction provider plugin (optional)
timeoutSeconds: 900,
reserveTokensFloor: 24000,
keepRecentTokens: 50000,
identifierPolicy: "strict", // strict | off | custom
identifierInstructions: "Preserve deployment IDs, ticket IDs, and host:port pairs exactly.", // used when identifierPolicy=custom
qualityGuard: { enabled: true, maxRetries: 1 },
postCompactionSections: ["Session Startup", "Red Lines"], // [] disables reinjection
model: "openrouter/anthropic/claude-sonnet-4-6", // optional compaction-only model override
truncateAfterCompaction: true, // rotate to a smaller successor JSONL after compaction
maxActiveTranscriptBytes: "20mb", // optional preflight local compaction trigger
notifyUser: true, // send brief notices when compaction starts and completes (default: false)
memoryFlush: {
enabled: true,
softThresholdTokens: 6000,
systemPrompt: "Session nearing compaction. Store durable memories now.",
prompt: "Write any lasting notes to memory/YYYY-MM-DD.md; reply with the exact silent token NO_REPLY if nothing to store.",
},
},
},
},
}
- mode: default or safeguard (chunked summarization for long histories). See Compaction.
- provider: id of a registered compaction provider plugin. When set, the provider's summarize() is called instead of built-in LLM summarization. Falls back to built-in on failure. Setting a provider forces mode: "safeguard". See Compaction.
- timeoutSeconds: maximum seconds allowed for a single compaction operation before OpenClaw aborts it. Default: 900.
- keepRecentTokens: Pi cut-point budget for keeping the most recent transcript tail verbatim. Manual /compact honors this when explicitly set; otherwise manual compaction is a hard checkpoint.
- identifierPolicy: strict (default), off, or custom. strict prepends built-in opaque-identifier retention guidance during compaction summarization.
- identifierInstructions: optional custom identifier-preservation text used when identifierPolicy=custom.
- qualityGuard: retry-on-malformed-output checks for safeguard summaries. Enabled by default in safeguard mode; set enabled: false to skip the audit.
- postCompactionSections: optional AGENTS.md H2/H3 section names to re-inject after compaction. Defaults to ["Session Startup", "Red Lines"]; set [] to disable reinjection. When unset or explicitly set to that default pair, older Every Session/Safety headings are also accepted as a legacy fallback.
- model: optional provider/model-id override for compaction summarization only. Use this when the main session should keep one model but compaction summaries should run on another; when unset, compaction uses the session's primary model.
- maxActiveTranscriptBytes: optional byte threshold (number or strings like "20mb") that triggers normal local compaction before a run when the active JSONL grows past the threshold. Requires truncateAfterCompaction so successful compaction can rotate to a smaller successor transcript. Disabled when unset or 0.
- notifyUser: when true, sends brief notices to the user when compaction starts and when it completes (for example, "Compacting context..." and "Compaction complete"). Disabled by default to keep compaction silent.
- memoryFlush: silent agentic turn before auto-compaction to store durable memories. Skipped when the workspace is read-only.
agents.defaults.contextPruning
Prunes old tool results from in-memory context before sending to the LLM. Does not modify session history on disk.
{
agents: {
defaults: {
contextPruning: {
mode: "cache-ttl", // off | cache-ttl
ttl: "1h", // duration (ms/s/m/h), default unit: minutes
keepLastAssistants: 3,
softTrimRatio: 0.3,
hardClearRatio: 0.5,
minPrunableToolChars: 50000,
softTrim: { maxChars: 4000, headChars: 1500, tailChars: 1500 },
hardClear: { enabled: true, placeholder: "[Old tool result content cleared]" },
tools: { deny: ["browser", "canvas"] },
},
},
},
}
- mode: "cache-ttl" enables pruning passes. ttl controls how often pruning can run again (after the last cache touch).
- Pruning soft-trims oversized tool results first, then hard-clears older tool results if needed.
  - Soft-trim keeps the beginning + end and inserts ... in the middle.
  - Hard-clear replaces the entire tool result with the placeholder.
Notes:
- Image blocks are never trimmed/cleared.
- Ratios are character-based (approximate), not exact token counts.
- If fewer than keepLastAssistants assistant messages exist, pruning is skipped.
See Session Pruning for behavior details.
Block streaming
{
agents: {
defaults: {
blockStreamingDefault: "off", // on | off
blockStreamingBreak: "text_end", // text_end | message_end
blockStreamingChunk: { minChars: 800, maxChars: 1200 },
blockStreamingCoalesce: { idleMs: 1000 },
humanDelay: { mode: "natural" }, // off | natural | custom (use minMs/maxMs)
},
},
}
- Non-Telegram channels require explicit *.blockStreaming: true to enable block replies.
- Channel overrides: channels.<channel>.blockStreamingCoalesce (and per-account variants). Signal/Slack/Discord/Google Chat default to minChars: 1500.
- humanDelay: randomized pause between block replies. natural = 800–2500 ms. Per-agent override: agents.list[].humanDelay.
See Streaming for behavior + chunking details.
Typing indicators
{
agents: {
defaults: {
typingMode: "instant", // never | instant | thinking | message
typingIntervalSeconds: 6,
},
},
}
- Defaults: instant for direct chats/mentions, message for unmentioned group chats.
- Per-session overrides: session.typingMode, session.typingIntervalSeconds.
See Typing Indicators.
agents.defaults.sandbox
Optional sandboxing for the embedded agent. See Sandboxing for the full guide.
{
agents: {
defaults: {
sandbox: {
mode: "non-main", // off | non-main | all
backend: "docker", // docker | ssh | openshell
scope: "agent", // session | agent | shared
workspaceAccess: "none", // none | ro | rw
workspaceRoot: "~/.openclaw/sandboxes",
docker: {
image: "openclaw-sandbox:bookworm-slim",
containerPrefix: "openclaw-sbx-",
workdir: "/workspace",
readOnlyRoot: true,
tmpfs: ["/tmp", "/var/tmp", "/run"],
network: "none",
user: "1000:1000",
capDrop: ["ALL"],
env: { LANG: "C.UTF-8" },
setupCommand: "apt-get update && apt-get install -y git curl jq",
pidsLimit: 256,
memory: "1g",
memorySwap: "2g",
cpus: 1,
ulimits: {
nofile: { soft: 1024, hard: 2048 },
nproc: 256,
},
seccompProfile: "/path/to/seccomp.json",
apparmorProfile: "openclaw-sandbox",
dns: ["1.1.1.1", "8.8.8.8"],
extraHosts: ["internal.service:10.0.0.5"],
binds: ["/home/user/source:/source:rw"],
},
ssh: {
target: "user@gateway-host:22",
command: "ssh",
workspaceRoot: "/tmp/openclaw-sandboxes",
strictHostKeyChecking: true,
updateHostKeys: true,
identityFile: "~/.ssh/id_ed25519",
certificateFile: "~/.ssh/id_ed25519-cert.pub",
knownHostsFile: "~/.ssh/known_hosts",
// SecretRefs / inline contents also supported:
// identityData: { source: "env", provider: "default", id: "SSH_IDENTITY" },
// certificateData: { source: "env", provider: "default", id: "SSH_CERTIFICATE" },
// knownHostsData: { source: "env", provider: "default", id: "SSH_KNOWN_HOSTS" },
},
browser: {
enabled: false,
image: "openclaw-sandbox-browser:bookworm-slim",
network: "openclaw-sandbox-browser",
cdpPort: 9222,
cdpSourceRange: "172.21.0.1/32",
vncPort: 5900,
noVncPort: 6080,
headless: false,
enableNoVnc: true,
allowHostControl: false,
autoStart: true,
autoStartTimeoutMs: 12000,
},
prune: {
idleHours: 24,
maxAgeDays: 7,
},
},
},
},
tools: {
sandbox: {
tools: {
allow: [
"exec",
"process",
"read",
"write",
"edit",
"apply_patch",
"sessions_list",
"sessions_history",
"sessions_send",
"sessions_spawn",
"session_status",
],
deny: ["browser", "canvas", "nodes", "cron", "discord", "gateway"],
},
},
},
}
Backend:
- docker: local Docker runtime (default)
- ssh: generic SSH-backed remote runtime
- openshell: OpenShell runtime
When backend: "openshell" is selected, runtime-specific settings move to
plugins.entries.openshell.config.
SSH backend config:
- target: SSH target in user@host[:port] form
- command: SSH client command (default: ssh)
- workspaceRoot: absolute remote root used for per-scope workspaces
- identityFile / certificateFile / knownHostsFile: existing local files passed to OpenSSH
- identityData / certificateData / knownHostsData: inline contents or SecretRefs that OpenClaw materializes into temp files at runtime
- strictHostKeyChecking / updateHostKeys: OpenSSH host-key policy knobs
SSH auth precedence:
- identityData wins over identityFile
- certificateData wins over certificateFile
- knownHostsData wins over knownHostsFile
- SecretRef-backed *Data values are resolved from the active secrets runtime snapshot before the sandbox session starts
SSH backend behavior:
- seeds the remote workspace once after create or recreate
- then keeps the remote SSH workspace canonical
- routes exec, file tools, and media paths over SSH
- does not sync remote changes back to the host automatically
- does not support sandbox browser containers
Workspace access:
- none: per-scope sandbox workspace under ~/.openclaw/sandboxes
- ro: sandbox workspace at /workspace, agent workspace mounted read-only at /agent
- rw: agent workspace mounted read/write at /workspace
Scope:
- session: per-session container + workspace
- agent: one container + workspace per agent (default)
- shared: shared container and workspace (no cross-session isolation)
OpenShell plugin config:
{
plugins: {
entries: {
openshell: {
enabled: true,
config: {
mode: "mirror", // mirror | remote
from: "openclaw",
remoteWorkspaceDir: "/sandbox",
remoteAgentWorkspaceDir: "/agent",
gateway: "lab", // optional
gatewayEndpoint: "https://lab.example", // optional
policy: "strict", // optional OpenShell policy id
providers: ["openai"], // optional
autoProviders: true,
timeoutSeconds: 120,
},
},
},
},
}
OpenShell mode:
- mirror: seed remote from local before exec, sync back after exec; local workspace stays canonical
- remote: seed remote once when the sandbox is created, then keep the remote workspace canonical
In remote mode, host-local edits made outside OpenClaw are not synced into the sandbox automatically after the seed step.
Transport is SSH into the OpenShell sandbox, but the plugin owns sandbox lifecycle and optional mirror sync.
setupCommand runs once after container creation (via sh -lc). Needs network egress, writable root, root user.
Containers default to network: "none" — set to "bridge" (or a custom bridge network) if the agent needs outbound access.
"host" is blocked. "container:<id>" is blocked by default unless you explicitly set
sandbox.docker.dangerouslyAllowContainerNamespaceJoin: true (break-glass).
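To grant outbound access as described above, switch the sandbox network in config; a minimal sketch:

```json5
{
  agents: {
    defaults: {
      sandbox: {
        docker: { network: "bridge" }, // default is "none"; "host" is blocked
      },
    },
  },
}
```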
Inbound attachments are staged into media/inbound/* in the active workspace.
docker.binds mounts additional host directories; global and per-agent binds are merged.
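As a combined sketch of the Docker-level knobs above (the apt packages and bind path are illustrative, and the `binds` entry shape is an assumption based on Docker's `host:container[:mode]` convention):

```json5
{
  sandbox: {
    docker: {
      network: "bridge", // default is "none"; "host" is always blocked
      // runs once after container creation, via `sh -lc`
      setupCommand: "apt-get update && apt-get install -y ripgrep",
      binds: ["~/datasets:/data:ro"], // assumed host:container[:mode] form
    },
  },
}
```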
Sandboxed browser (sandbox.browser.enabled): Chromium + CDP in a container. noVNC URL injected into system prompt. Does not require browser.enabled in openclaw.json.
noVNC observer access uses VNC auth by default and OpenClaw emits a short-lived token URL (instead of exposing the password in the shared URL).
- `allowHostControl: false` (default) blocks sandboxed sessions from targeting the host browser.
- `network` defaults to `openclaw-sandbox-browser` (dedicated bridge network). Set to `bridge` only when you explicitly want global bridge connectivity.
- `cdpSourceRange` optionally restricts CDP ingress at the container edge to a CIDR range (for example `172.21.0.1/32`).
- `sandbox.browser.binds` mounts additional host directories into the sandbox browser container only. When set (including `[]`), it replaces `docker.binds` for the browser container.
- Launch defaults are defined in `scripts/sandbox-browser-entrypoint.sh` and tuned for container hosts:
  - `--remote-debugging-address=127.0.0.1`
  - `--remote-debugging-port=<derived from OPENCLAW_BROWSER_CDP_PORT>`
  - `--user-data-dir=${HOME}/.chrome`
  - `--no-first-run`
  - `--no-default-browser-check`
  - `--disable-3d-apis`
  - `--disable-gpu`
  - `--disable-software-rasterizer`
  - `--disable-dev-shm-usage`
  - `--disable-background-networking`
  - `--disable-features=TranslateUI`
  - `--disable-breakpad`
  - `--disable-crash-reporter`
  - `--renderer-process-limit=2`
  - `--no-zygote`
  - `--metrics-recording-only`
  - `--disable-extensions` (default enabled)
  - plus `--no-sandbox` when `noSandbox` is enabled.
- `--disable-3d-apis`, `--disable-software-rasterizer`, and `--disable-gpu` are enabled by default and can be disabled with `OPENCLAW_BROWSER_DISABLE_GRAPHICS_FLAGS=0` if WebGL/3D usage requires it.
- `OPENCLAW_BROWSER_DISABLE_EXTENSIONS=0` re-enables extensions if your workflow depends on them.
- `--renderer-process-limit=2` can be changed with `OPENCLAW_BROWSER_RENDERER_PROCESS_LIMIT=<N>`; set `0` to use Chromium's default process limit.
- Defaults are the container image baseline; use a custom browser image with a custom entrypoint to change container defaults.
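A hedged sandbox-browser config sketch using only the keys named above (the CIDR value is illustrative):

```json5
{
  sandbox: {
    browser: {
      enabled: true,
      allowHostControl: false, // default; keeps sandboxed sessions off the host browser
      network: "openclaw-sandbox-browser", // default dedicated bridge network
      cdpSourceRange: "172.21.0.1/32", // restrict CDP ingress at the container edge
      binds: [], // when set, replaces docker.binds for the browser container
    },
  },
}
```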
Browser sandboxing and sandbox.docker.binds are Docker-only.
Build images:
scripts/sandbox-setup.sh # main sandbox image
scripts/sandbox-browser-setup.sh # optional browser image
agents.list (per-agent overrides)
Use agents.list[].tts to give an agent its own TTS provider, voice, model,
style, or auto-TTS mode. The agent block deep-merges over global
messages.tts, so shared credentials can stay in one place while individual
agents override only the voice or provider fields they need. The active agent's
override applies to automatic spoken replies, /tts audio, /tts status, and
the tts agent tool. See Text-to-speech
for provider examples and precedence.
{
agents: {
list: [
{
id: "main",
default: true,
name: "Main Agent",
workspace: "~/.openclaw/workspace",
agentDir: "~/.openclaw/agents/main/agent",
model: "anthropic/claude-opus-4-6", // or { primary, fallbacks }
thinkingDefault: "high", // per-agent thinking level override
reasoningDefault: "on", // per-agent reasoning visibility override
fastModeDefault: false, // per-agent fast mode override
agentRuntime: { id: "auto", fallback: "pi" },
params: { cacheRetention: "none" }, // overrides matching defaults.models params by key
tts: {
providers: {
elevenlabs: { voiceId: "EXAVITQu4vr4xnSDxMaL" },
},
},
skills: ["docs-search"], // replaces agents.defaults.skills when set
identity: {
name: "Samantha",
theme: "helpful sloth",
emoji: "🦥",
avatar: "avatars/samantha.png",
},
groupChat: { mentionPatterns: ["@openclaw"] },
sandbox: { mode: "off" },
runtime: {
type: "acp",
acp: {
agent: "codex",
backend: "acpx",
mode: "persistent",
cwd: "/workspace/openclaw",
},
},
subagents: { allowAgents: ["*"] },
tools: {
profile: "coding",
allow: ["browser"],
deny: ["canvas"],
elevated: { enabled: true },
},
},
],
},
}
- `id`: stable agent id (required).
- `default`: when multiple are set, first wins (warning logged). If none set, first list entry is default.
- `model`: string form overrides `primary` only; object form `{ primary, fallbacks }` overrides both (`[]` disables global fallbacks). Cron jobs that only override `primary` still inherit default fallbacks unless you set `fallbacks: []`.
- `params`: per-agent stream params merged over the selected model entry in `agents.defaults.models`. Use this for agent-specific overrides like `cacheRetention`, `temperature`, or `maxTokens` without duplicating the whole model catalog.
- `tts`: optional per-agent text-to-speech overrides. The block deep-merges over `messages.tts`, so keep shared provider credentials and fallback policy in `messages.tts` and set only persona-specific values such as provider, voice, model, style, or auto mode here.
- `skills`: optional per-agent skill allowlist. If omitted, the agent inherits `agents.defaults.skills` when set; an explicit list replaces defaults instead of merging, and `[]` means no skills.
- `thinkingDefault`: optional per-agent default thinking level (`off | minimal | low | medium | high | xhigh | adaptive | max`). Overrides `agents.defaults.thinkingDefault` for this agent when no per-message or session override is set. The selected provider/model profile controls which values are valid; for Google Gemini, `adaptive` keeps provider-owned dynamic thinking (`thinkingLevel` omitted on Gemini 3/3.1, `thinkingBudget: -1` on Gemini 2.5).
- `reasoningDefault`: optional per-agent default reasoning visibility (`on | off | stream`). Applies when no per-message or session reasoning override is set.
- `fastModeDefault`: optional per-agent default for fast mode (`true | false`). Applies when no per-message or session fast-mode override is set.
- `agentRuntime`: optional per-agent low-level runtime policy override. Use `{ id: "codex" }` to make one agent Codex-only while other agents keep the default PI fallback in `auto` mode.
- `runtime`: optional per-agent runtime descriptor. Use `type: "acp"` with `runtime.acp` defaults (`agent`, `backend`, `mode`, `cwd`) when the agent should default to ACP harness sessions.
- `identity.avatar`: workspace-relative path, `http(s)` URL, or `data:` URI. `identity` derives defaults: `ackReaction` from `emoji`, `mentionPatterns` from `name`/`emoji`.
- `subagents.allowAgents`: allowlist of agent ids for `sessions_spawn` (`["*"]` = any; default: same agent only).
  - Sandbox inheritance guard: if the requester session is sandboxed, `sessions_spawn` rejects targets that would run unsandboxed.
- `subagents.requireAgentId`: when true, block `sessions_spawn` calls that omit `agentId` (forces explicit profile selection; default: false).
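For example, the `model` object form can pin a single model and opt out of global fallbacks (the agent id is illustrative):

```json5
{
  agents: {
    list: [
      {
        id: "pinned",
        // object form overrides both primary and fallbacks;
        // [] disables global fallbacks instead of inheriting them
        model: { primary: "anthropic/claude-opus-4-6", fallbacks: [] },
      },
    ],
  },
}
```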
Multi-agent routing
Run multiple isolated agents inside one Gateway. See Multi-Agent.
{
agents: {
list: [
{ id: "home", default: true, workspace: "~/.openclaw/workspace-home" },
{ id: "work", workspace: "~/.openclaw/workspace-work" },
],
},
bindings: [
{ agentId: "home", match: { channel: "whatsapp", accountId: "personal" } },
{ agentId: "work", match: { channel: "whatsapp", accountId: "biz" } },
],
}
Binding match fields
- `type` (optional): `route` for normal routing (a missing type defaults to `route`), `acp` for persistent ACP conversation bindings.
- `match.channel` (required)
- `match.accountId` (optional; `*` = any account; omitted = default account)
- `match.peer` (optional; `{ kind: direct|group|channel, id }`)
- `match.guildId` / `match.teamId` (optional; channel-specific)
- `acp` (optional; only for `type: "acp"`): `{ mode, label, cwd, backend }`
Deterministic match order:

1. `match.peer`
2. `match.guildId`
3. `match.teamId`
4. `match.accountId` (exact, no peer/guild/team)
5. `match.accountId: "*"` (channel-wide)
6. Default agent

Within each tier, the first matching `bindings` entry wins.
For `type: "acp"` entries, OpenClaw resolves by exact conversation identity (`match.channel` + account + `match.peer.id`) and does not use the route-binding tier order above.
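As an illustration of the tiers (agent ids and the peer id are hypothetical), a peer-scoped binding beats a channel-wide wildcard even when it is listed second:

```json5
{
  bindings: [
    // tier: channel-wide wildcard
    { agentId: "work", match: { channel: "slack", accountId: "*" } },
    // tier: exact peer match — wins for DMs from U123 despite being listed later
    { agentId: "home", match: { channel: "slack", peer: { kind: "direct", id: "U123" } } },
  ],
}
```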
Per-agent access profiles
{
agents: {
list: [
{
id: "personal",
workspace: "~/.openclaw/workspace-personal",
sandbox: { mode: "off" },
},
],
},
}
{
agents: {
list: [
{
id: "family",
workspace: "~/.openclaw/workspace-family",
sandbox: { mode: "all", scope: "agent", workspaceAccess: "ro" },
tools: {
allow: [
"read",
"sessions_list",
"sessions_history",
"sessions_send",
"sessions_spawn",
"session_status",
],
deny: ["write", "edit", "apply_patch", "exec", "process", "browser"],
},
},
],
},
}
{
agents: {
list: [
{
id: "public",
workspace: "~/.openclaw/workspace-public",
sandbox: { mode: "all", scope: "agent", workspaceAccess: "none" },
tools: {
allow: [
"sessions_list",
"sessions_history",
"sessions_send",
"sessions_spawn",
"session_status",
"whatsapp",
"telegram",
"slack",
"discord",
"gateway",
],
deny: [
"read",
"write",
"edit",
"apply_patch",
"exec",
"process",
"browser",
"canvas",
"nodes",
"cron",
"gateway",
"image",
],
},
},
],
},
}
See Multi-Agent Sandbox & Tools for precedence details.
Session
{
session: {
scope: "per-sender",
dmScope: "main", // main | per-peer | per-channel-peer | per-account-channel-peer
identityLinks: {
alice: ["telegram:123456789", "discord:987654321012345678"],
},
reset: {
mode: "daily", // daily | idle
atHour: 4,
idleMinutes: 60,
},
resetByType: {
thread: { mode: "daily", atHour: 4 },
direct: { mode: "idle", idleMinutes: 240 },
group: { mode: "idle", idleMinutes: 120 },
},
resetTriggers: ["/new", "/reset"],
store: "~/.openclaw/agents/{agentId}/sessions/sessions.json",
parentForkMaxTokens: 100000, // skip parent-thread fork above this token count (0 disables)
maintenance: {
mode: "warn", // warn | enforce
pruneAfter: "30d",
maxEntries: 500,
rotateBytes: "10mb",
resetArchiveRetention: "30d", // duration or false
maxDiskBytes: "500mb", // optional hard budget
highWaterBytes: "400mb", // optional cleanup target
},
threadBindings: {
enabled: true,
idleHours: 24, // default inactivity auto-unfocus in hours (`0` disables)
maxAgeHours: 0, // default hard max age in hours (`0` disables)
},
mainKey: "main", // legacy (runtime always uses "main")
agentToAgent: { maxPingPongTurns: 5 },
sendPolicy: {
rules: [{ action: "deny", match: { channel: "discord", chatType: "group" } }],
default: "allow",
},
},
}
- `scope`: base session grouping strategy for group-chat contexts.
  - `per-sender` (default): each sender gets an isolated session within a channel context.
  - `global`: all participants in a channel context share a single session (use only when shared context is intended).
- `dmScope`: how DMs are grouped.
  - `main`: all DMs share the main session.
  - `per-peer`: isolate by sender id across channels.
  - `per-channel-peer`: isolate per channel + sender (recommended for multi-user inboxes).
  - `per-account-channel-peer`: isolate per account + channel + sender (recommended for multi-account).
- `identityLinks`: map canonical ids to provider-prefixed peers for cross-channel session sharing.
- `reset`: primary reset policy. `daily` resets at `atHour` local time; `idle` resets after `idleMinutes`. When both are configured, whichever expires first wins. Daily reset freshness uses the session row's `sessionStartedAt`; idle reset freshness uses `lastInteractionAt`. Background/system-event writes such as heartbeat, cron wakeups, exec notifications, and gateway bookkeeping can update `updatedAt`, but they do not keep daily/idle sessions fresh.
- `resetByType`: per-type overrides (`direct`, `group`, `thread`). Legacy `dm` is accepted as an alias for `direct`.
- `parentForkMaxTokens`: max parent-session `totalTokens` allowed when creating a forked thread session (default `100000`).
  - If parent `totalTokens` is above this value, OpenClaw starts a fresh thread session instead of inheriting parent transcript history.
  - Set `0` to disable this guard and always allow parent forking.
- `mainKey`: legacy field. Runtime always uses `"main"` for the main direct-chat bucket.
- `agentToAgent.maxPingPongTurns`: maximum reply-back turns between agents during agent-to-agent exchanges (integer, range: `0`–`5`). `0` disables ping-pong chaining.
- `sendPolicy`: match by `channel`, `chatType` (`direct` | `group` | `channel`, with legacy `dm` alias), `keyPrefix`, or `rawKeyPrefix`. First deny wins.
- `maintenance`: session-store cleanup + retention controls.
  - `mode`: `warn` emits warnings only; `enforce` applies cleanup.
  - `pruneAfter`: age cutoff for stale entries (default `30d`).
  - `maxEntries`: maximum number of entries in `sessions.json` (default `500`).
  - `rotateBytes`: rotate `sessions.json` when it exceeds this size (default `10mb`).
  - `resetArchiveRetention`: retention for `*.reset.<timestamp>` transcript archives. Defaults to `pruneAfter`; set `false` to disable.
  - `maxDiskBytes`: optional sessions-directory disk budget. In `warn` mode it logs warnings; in `enforce` mode it removes oldest artifacts/sessions first.
  - `highWaterBytes`: optional target after budget cleanup. Defaults to `80%` of `maxDiskBytes`.
- `threadBindings`: global defaults for thread-bound session features.
  - `enabled`: master default switch (providers can override; Discord uses `channels.discord.threadBindings.enabled`).
  - `idleHours`: default inactivity auto-unfocus in hours (`0` disables; providers can override).
  - `maxAgeHours`: default hard max age in hours (`0` disables; providers can override).
Messages
{
messages: {
responsePrefix: "🦞", // or "auto"
ackReaction: "👀",
ackReactionScope: "group-mentions", // group-mentions | group-all | direct | all
removeAckAfterReply: false,
queue: {
mode: "collect", // steer | followup | collect | steer-backlog | steer+backlog | queue | interrupt
debounceMs: 1000,
cap: 20,
drop: "summarize", // old | new | summarize
byChannel: {
whatsapp: "collect",
telegram: "collect",
},
},
inbound: {
debounceMs: 2000, // 0 disables
byChannel: {
whatsapp: 5000,
slack: 1500,
},
},
},
}
Response prefix
Per-channel/account overrides: channels.<channel>.responsePrefix, channels.<channel>.accounts.<id>.responsePrefix.
Resolution (most specific wins): account → channel → global. "" disables and stops cascade. "auto" derives [{identity.name}].
Template variables:

| Variable | Description | Example |
| --- | --- | --- |
| `{model}` | Short model name | `claude-opus-4-6` |
| `{modelFull}` | Full model identifier | `anthropic/claude-opus-4-6` |
| `{provider}` | Provider name | `anthropic` |
| `{thinkingLevel}` | Current thinking level | `high`, `low`, `off` |
| `{identity.name}` | Agent identity name | (same as `"auto"`) |
Variables are case-insensitive. {think} is an alias for {thinkingLevel}.
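For instance (channel choice illustrative), a global templated prefix with one channel opting out of the cascade:

```json5
{
  messages: { responsePrefix: "[{identity.name} · {model}]" },
  channels: {
    // "" disables the prefix for WhatsApp and stops the cascade
    whatsapp: { responsePrefix: "" },
  },
}
```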
Ack reaction
- Defaults to the active agent's `identity.emoji`, otherwise `"👀"`. Set `""` to disable.
- Per-channel overrides: `channels.<channel>.ackReaction`, `channels.<channel>.accounts.<id>.ackReaction`.
- Resolution order: account → channel → `messages.ackReaction` → identity fallback.
- Scope: `group-mentions` (default), `group-all`, `direct`, `all`.
- `removeAckAfterReply`: removes the ack after reply on reaction-capable channels such as Slack, Discord, Telegram, WhatsApp, and BlueBubbles.
- `messages.statusReactions.enabled`: enables lifecycle status reactions on Slack, Discord, and Telegram. On Slack and Discord, unset keeps status reactions enabled when ack reactions are active. On Telegram, set it explicitly to `true` to enable lifecycle status reactions.
Inbound debounce
Batches rapid text-only messages from the same sender into a single agent turn. Media/attachments flush immediately. Control commands bypass debouncing.
TTS (text-to-speech)
{
messages: {
tts: {
auto: "always", // off | always | inbound | tagged
mode: "final", // final | all
provider: "elevenlabs",
summaryModel: "openai/gpt-4.1-mini",
modelOverrides: { enabled: true },
maxTextLength: 4000,
timeoutMs: 30000,
prefsPath: "~/.openclaw/settings/tts.json",
providers: {
elevenlabs: {
apiKey: "elevenlabs_api_key",
baseUrl: "https://api.elevenlabs.io",
voiceId: "voice_id",
modelId: "eleven_multilingual_v2",
seed: 42,
applyTextNormalization: "auto",
languageCode: "en",
voiceSettings: {
stability: 0.5,
similarityBoost: 0.75,
style: 0.0,
useSpeakerBoost: true,
speed: 1.0,
},
},
microsoft: {
voice: "en-US-AvaMultilingualNeural",
lang: "en-US",
outputFormat: "audio-24khz-48kbitrate-mono-mp3",
},
openai: {
apiKey: "openai_api_key",
baseUrl: "https://api.openai.com/v1",
model: "gpt-4o-mini-tts",
voice: "alloy",
},
},
},
},
}
- `auto` controls the default auto-TTS mode: `off`, `always`, `inbound`, or `tagged`. `/tts on|off` can override local prefs, and `/tts status` shows the effective state.
- `summaryModel` overrides `agents.defaults.model.primary` for auto-summary.
- `modelOverrides` is enabled by default; `modelOverrides.allowProvider` defaults to `false` (opt-in).
- API keys fall back to `ELEVENLABS_API_KEY`/`XI_API_KEY` and `OPENAI_API_KEY`.
- Bundled speech providers are plugin-owned. If `plugins.allow` is set, include each TTS provider plugin you want to use, for example `microsoft` for Edge TTS. The legacy `edge` provider id is accepted as an alias for `microsoft`.
- `providers.openai.baseUrl` overrides the OpenAI TTS endpoint. Resolution order is config, then `OPENAI_TTS_BASE_URL`, then `https://api.openai.com/v1`.
- When `providers.openai.baseUrl` points to a non-OpenAI endpoint, OpenClaw treats it as an OpenAI-compatible TTS server and relaxes model/voice validation.
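If `plugins.allow` is in use, a minimal Edge-TTS setup might look like this sketch (the plugin entry follows the note above; voice values come from the earlier config example):

```json5
{
  plugins: { allow: ["microsoft"] }, // include each TTS provider plugin you use
  messages: {
    tts: {
      provider: "microsoft",
      providers: {
        microsoft: { voice: "en-US-AvaMultilingualNeural", lang: "en-US" },
      },
    },
  },
}
```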
Talk
Defaults for Talk mode (macOS/iOS/Android).
{
talk: {
provider: "elevenlabs",
providers: {
elevenlabs: {
voiceId: "elevenlabs_voice_id",
voiceAliases: {
Clawd: "EXAVITQu4vr4xnSDxMaL",
Roger: "CwhRBWXzGAHq8TQ4Fs17",
},
modelId: "eleven_v3",
outputFormat: "mp3_44100_128",
apiKey: "elevenlabs_api_key",
},
mlx: {
modelId: "mlx-community/Soprano-80M-bf16",
},
system: {},
},
speechLocale: "ru-RU",
silenceTimeoutMs: 1500,
interruptOnSpeech: true,
},
}
- `talk.provider` must match a key in `talk.providers` when multiple Talk providers are configured.
- Legacy flat Talk keys (`talk.voiceId`, `talk.voiceAliases`, `talk.modelId`, `talk.outputFormat`, `talk.apiKey`) are compatibility-only and are auto-migrated into `talk.providers.<provider>`.
- Voice IDs fall back to `ELEVENLABS_VOICE_ID` or `SAG_VOICE_ID`.
- `providers.*.apiKey` accepts plaintext strings or SecretRef objects. The `ELEVENLABS_API_KEY` fallback applies only when no Talk API key is configured.
- `providers.*.voiceAliases` lets Talk directives use friendly names.
- `providers.mlx.modelId` selects the Hugging Face repo used by the macOS local MLX helper. If omitted, macOS uses `mlx-community/Soprano-80M-bf16`.
- macOS MLX playback runs through the bundled `openclaw-mlx-tts` helper when present, or an executable on `PATH`; `OPENCLAW_MLX_TTS_BIN` overrides the helper path for development.
- `speechLocale` sets the BCP 47 locale id used by iOS/macOS Talk speech recognition. Leave unset to use the device default.
- `silenceTimeoutMs` controls how long Talk mode waits after user silence before it sends the transcript. Unset keeps the platform default pause window (700 ms on macOS and Android, 900 ms on iOS).
Related
- Configuration reference — all other config keys
- Configuration — common tasks and quick setup
- Configuration examples