diff --git a/docs/providers/chutes.md b/docs/providers/chutes.md
index 0a080ff73e8..fcc91358cd4 100644
--- a/docs/providers/chutes.md
+++ b/docs/providers/chutes.md
@@ -13,44 +13,58 @@ read_when:
OpenAI-compatible API. OpenClaw supports both browser OAuth and direct API-key
auth for the bundled `chutes` provider.
-- Provider: `chutes`
-- API: OpenAI-compatible
-- Base URL: `https://llm.chutes.ai/v1`
-- Auth:
- - OAuth via `openclaw onboard --auth-choice chutes`
- - API key via `openclaw onboard --auth-choice chutes-api-key`
- - Runtime env vars: `CHUTES_API_KEY`, `CHUTES_OAUTH_TOKEN`
+| Property | Value |
+| -------- | ---------------------------- |
+| Provider | `chutes` |
+| API | OpenAI-compatible |
+| Base URL | `https://llm.chutes.ai/v1` |
+| Auth | OAuth or API key (see below) |
-## Quick start
+## Getting started
-### OAuth
+
+
+
+
+ ```bash
+ openclaw onboard --auth-choice chutes
+ ```
+ OpenClaw launches the browser flow locally, or shows a URL + redirect-paste
+ flow on remote/headless hosts. OAuth tokens auto-refresh through OpenClaw auth
+ profiles.
+
+
+ After onboarding, the default model is set to
+ `chutes/zai-org/GLM-4.7-TEE` and the bundled Chutes catalog is
+ registered.
+
+
+
+
+
+
+ Create a key at
+ [chutes.ai/settings/api-keys](https://chutes.ai/settings/api-keys).
+
+
+ ```bash
+ openclaw onboard --auth-choice chutes-api-key
+ ```
+
+
+ After onboarding, the default model is set to
+ `chutes/zai-org/GLM-4.7-TEE` and the bundled Chutes catalog is
+ registered.
+
+
+
+
-```bash
-openclaw onboard --auth-choice chutes
-```
-
-OpenClaw launches the browser flow locally, or shows a URL + redirect-paste
-flow on remote/headless hosts. OAuth tokens auto-refresh through OpenClaw auth
-profiles.
-
-Optional OAuth overrides:
-
-- `CHUTES_CLIENT_ID`
-- `CHUTES_CLIENT_SECRET`
-- `CHUTES_OAUTH_REDIRECT_URI`
-- `CHUTES_OAUTH_SCOPES`
-
-### API key
-
-```bash
-openclaw onboard --auth-choice chutes-api-key
-```
-
-Get your key at
-[chutes.ai/settings/api-keys](https://chutes.ai/settings/api-keys).
-
-Both auth paths register the bundled Chutes catalog and set the default model
-to `chutes/zai-org/GLM-4.7-TEE`.
+
+Both auth paths register the bundled Chutes catalog and set the default model to
+`chutes/zai-org/GLM-4.7-TEE`. Runtime environment variables: `CHUTES_API_KEY`,
+`CHUTES_OAUTH_TOKEN`.
+
## Discovery behavior
@@ -60,25 +74,28 @@ back to a bundled static catalog so onboarding and startup still work.
## Default aliases
-OpenClaw also registers three convenience aliases for the bundled Chutes
-catalog:
+OpenClaw registers three convenience aliases for the bundled Chutes catalog:
-- `chutes-fast` -> `chutes/zai-org/GLM-4.7-FP8`
-- `chutes-pro` -> `chutes/deepseek-ai/DeepSeek-V3.2-TEE`
-- `chutes-vision` -> `chutes/chutesai/Mistral-Small-3.2-24B-Instruct-2506`
+| Alias | Target model |
+| --------------- | ----------------------------------------------------- |
+| `chutes-fast` | `chutes/zai-org/GLM-4.7-FP8` |
+| `chutes-pro` | `chutes/deepseek-ai/DeepSeek-V3.2-TEE` |
+| `chutes-vision` | `chutes/chutesai/Mistral-Small-3.2-24B-Instruct-2506` |
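+
+Assuming the aliases resolve like ordinary model refs, a minimal config sketch
+pinning one as the default (using the `agents.defaults.model.primary` shape
+shown elsewhere in these docs):
+
+```json5
+{
+  agents: {
+    defaults: {
+      model: { primary: "chutes-fast" },
+    },
+  },
+}
+```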
## Built-in starter catalog
-The bundled fallback catalog includes current Chutes refs such as:
+The bundled fallback catalog includes current Chutes refs:
-- `chutes/zai-org/GLM-4.7-TEE`
-- `chutes/zai-org/GLM-5-TEE`
-- `chutes/deepseek-ai/DeepSeek-V3.2-TEE`
-- `chutes/deepseek-ai/DeepSeek-R1-0528-TEE`
-- `chutes/moonshotai/Kimi-K2.5-TEE`
-- `chutes/chutesai/Mistral-Small-3.2-24B-Instruct-2506`
-- `chutes/Qwen/Qwen3-Coder-Next-TEE`
-- `chutes/openai/gpt-oss-120b-TEE`
+| Model ref |
+| ----------------------------------------------------- |
+| `chutes/zai-org/GLM-4.7-TEE` |
+| `chutes/zai-org/GLM-5-TEE` |
+| `chutes/deepseek-ai/DeepSeek-V3.2-TEE` |
+| `chutes/deepseek-ai/DeepSeek-R1-0528-TEE` |
+| `chutes/moonshotai/Kimi-K2.5-TEE` |
+| `chutes/chutesai/Mistral-Small-3.2-24B-Instruct-2506` |
+| `chutes/Qwen/Qwen3-Coder-Next-TEE` |
+| `chutes/openai/gpt-oss-120b-TEE` |
## Config example
@@ -96,8 +113,42 @@ The bundled fallback catalog includes current Chutes refs such as:
}
```
-## Notes
+
+
+ You can customize the OAuth flow with optional environment variables:
-- OAuth help and redirect-app requirements: [Chutes OAuth docs](https://chutes.ai/docs/sign-in-with-chutes/overview)
-- API-key and OAuth discovery both use the same `chutes` provider id.
-- Chutes models are registered as `chutes/`.
+ | Variable | Purpose |
+ | -------- | ------- |
+ | `CHUTES_CLIENT_ID` | Custom OAuth client ID |
+ | `CHUTES_CLIENT_SECRET` | Custom OAuth client secret |
+ | `CHUTES_OAUTH_REDIRECT_URI` | Custom redirect URI |
+ | `CHUTES_OAUTH_SCOPES` | Custom OAuth scopes |
+
+ See the [Chutes OAuth docs](https://chutes.ai/docs/sign-in-with-chutes/overview)
+ for redirect-app requirements and help.
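+
+  For example, a `.env`-style sketch (all values are hypothetical
+  placeholders):
+
+  ```
+  CHUTES_CLIENT_ID=my-client-id
+  CHUTES_CLIENT_SECRET=my-client-secret
+  CHUTES_OAUTH_REDIRECT_URI=http://localhost:8765/callback
+  ```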
+
+
+
+
+ - API-key and OAuth discovery both use the same `chutes` provider id.
+  - Chutes models are registered as `chutes/<model>`.
+ - If discovery fails at startup, the bundled static catalog is used automatically.
+
+
+
+## Related
+
+
+
+ Provider rules, model refs, and failover behavior.
+
+
+ Full config schema including provider settings.
+
+
+ Chutes dashboard and API docs.
+
+
+ Create and manage Chutes API keys.
+
+
diff --git a/docs/providers/deepgram.md b/docs/providers/deepgram.md
index b7a21fa6f13..e86ebbc38a8 100644
--- a/docs/providers/deepgram.md
+++ b/docs/providers/deepgram.md
@@ -15,79 +15,128 @@ When enabled, OpenClaw uploads the audio file to Deepgram and injects the transc
into the reply pipeline (`{{Transcript}}` + `[Audio]` block). This is **not streaming**;
it uses the pre-recorded transcription endpoint.
-Website: [https://deepgram.com](https://deepgram.com)
-Docs: [https://developers.deepgram.com](https://developers.deepgram.com)
+| Detail | Value |
+| ------------- | ---------------------------------------------------------- |
+| Website | [deepgram.com](https://deepgram.com) |
+| Docs | [developers.deepgram.com](https://developers.deepgram.com) |
+| Auth | `DEEPGRAM_API_KEY` |
+| Default model | `nova-3` |
-## Quick start
+## Getting started
-1. Set your API key:
+
+
+ Add your Deepgram API key to the environment:
-```
-DEEPGRAM_API_KEY=dg_...
-```
+ ```
+ DEEPGRAM_API_KEY=dg_...
+ ```
-2. Enable the provider:
-
-```json5
-{
- tools: {
- media: {
- audio: {
- enabled: true,
- models: [{ provider: "deepgram", model: "nova-3" }],
- },
- },
- },
-}
-```
-
-## Options
-
-- `model`: Deepgram model id (default: `nova-3`)
-- `language`: language hint (optional)
-- `tools.media.audio.providerOptions.deepgram.detect_language`: enable language detection (optional)
-- `tools.media.audio.providerOptions.deepgram.punctuate`: enable punctuation (optional)
-- `tools.media.audio.providerOptions.deepgram.smart_format`: enable smart formatting (optional)
-
-Example with language:
-
-```json5
-{
- tools: {
- media: {
- audio: {
- enabled: true,
- models: [{ provider: "deepgram", model: "nova-3", language: "en" }],
- },
- },
- },
-}
-```
-
-Example with Deepgram options:
-
-```json5
-{
- tools: {
- media: {
- audio: {
- enabled: true,
- providerOptions: {
- deepgram: {
- detect_language: true,
- punctuate: true,
- smart_format: true,
+
+
+ ```json5
+ {
+ tools: {
+ media: {
+ audio: {
+ enabled: true,
+ models: [{ provider: "deepgram", model: "nova-3" }],
},
},
- models: [{ provider: "deepgram", model: "nova-3" }],
},
- },
- },
-}
-```
+ }
+ ```
+
+
+ Send an audio message through any connected channel. OpenClaw transcribes it
+ via Deepgram and injects the transcript into the reply pipeline.
+
+
+
+## Configuration options
+
+| Option | Path | Description |
+| ----------------- | ------------------------------------------------------------ | ------------------------------------- |
+| `model` | `tools.media.audio.models[].model` | Deepgram model id (default: `nova-3`) |
+| `language` | `tools.media.audio.models[].language` | Language hint (optional) |
+| `detect_language` | `tools.media.audio.providerOptions.deepgram.detect_language` | Enable language detection (optional) |
+| `punctuate` | `tools.media.audio.providerOptions.deepgram.punctuate` | Enable punctuation (optional) |
+| `smart_format` | `tools.media.audio.providerOptions.deepgram.smart_format` | Enable smart formatting (optional) |
+
+
+
+ ```json5
+ {
+ tools: {
+ media: {
+ audio: {
+ enabled: true,
+ models: [{ provider: "deepgram", model: "nova-3", language: "en" }],
+ },
+ },
+ },
+ }
+ ```
+
+
+ ```json5
+ {
+ tools: {
+ media: {
+ audio: {
+ enabled: true,
+ providerOptions: {
+ deepgram: {
+ detect_language: true,
+ punctuate: true,
+ smart_format: true,
+ },
+ },
+ models: [{ provider: "deepgram", model: "nova-3" }],
+ },
+ },
+ },
+ }
+ ```
+
+
## Notes
-- Authentication follows the standard provider auth order; `DEEPGRAM_API_KEY` is the simplest path.
-- Override endpoints or headers with `tools.media.audio.baseUrl` and `tools.media.audio.headers` when using a proxy.
-- Output follows the same audio rules as other providers (size caps, timeouts, transcript injection).
+
+
+ Authentication follows the standard provider auth order. `DEEPGRAM_API_KEY` is
+ the simplest path.
+
+
+ Override endpoints or headers with `tools.media.audio.baseUrl` and
+ `tools.media.audio.headers` when using a proxy.
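+
+  A minimal sketch of such an override (the proxy URL and header are
+  hypothetical placeholders):
+
+  ```json5
+  {
+    tools: {
+      media: {
+        audio: {
+          enabled: true,
+          baseUrl: "https://my-proxy.example.com",
+          headers: { "X-Proxy-Auth": "example-token" },
+        },
+      },
+    },
+  }
+  ```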
+
+
+ Output follows the same audio rules as other providers (size caps, timeouts,
+ transcript injection).
+
+
+
+
+Deepgram transcription is **pre-recorded only** (not real-time streaming). OpenClaw
+uploads the complete audio file and waits for the full transcript before injecting
+it into the conversation.
+
+
+## Related
+
+
+
+ Audio, image, and video processing pipeline overview.
+
+
+ Full config reference including media tool settings.
+
+
+ Common issues and debugging steps.
+
+
+ Frequently asked questions about OpenClaw setup.
+
+
diff --git a/docs/providers/synthetic.md b/docs/providers/synthetic.md
index 58a4ec8989b..140fb6c658a 100644
--- a/docs/providers/synthetic.md
+++ b/docs/providers/synthetic.md
@@ -8,23 +8,42 @@ title: "Synthetic"
# Synthetic
-Synthetic exposes Anthropic-compatible endpoints. OpenClaw registers it as the
-`synthetic` provider and uses the Anthropic Messages API.
+[Synthetic](https://synthetic.new) exposes Anthropic-compatible endpoints.
+OpenClaw registers it as the `synthetic` provider and uses the Anthropic
+Messages API.
-## Quick setup
+| Property | Value |
+| -------- | ------------------------------------- |
+| Provider | `synthetic` |
+| Auth | `SYNTHETIC_API_KEY` |
+| API | Anthropic Messages |
+| Base URL | `https://api.synthetic.new/anthropic` |
-1. Set `SYNTHETIC_API_KEY` (or run the wizard below).
-2. Run onboarding:
+## Getting started
-```bash
-openclaw onboard --auth-choice synthetic-api-key
-```
+
+
+ Obtain a `SYNTHETIC_API_KEY` from your Synthetic account, or let the
+ onboarding wizard prompt you for one.
+
+
+ ```bash
+ openclaw onboard --auth-choice synthetic-api-key
+ ```
+
+
+  After onboarding, the default model is set to:
+ ```
+ synthetic/hf:MiniMaxAI/MiniMax-M2.5
+ ```
+
+
-The default model is set to:
-
-```
-synthetic/hf:MiniMaxAI/MiniMax-M2.5
-```
+
+OpenClaw's Anthropic client appends `/v1` to the base URL automatically, so use
+`https://api.synthetic.new/anthropic` (not `/anthropic/v1`). If Synthetic
+changes its base URL, override `models.providers.synthetic.baseUrl`.
+
## Config example
@@ -61,41 +80,77 @@ synthetic/hf:MiniMaxAI/MiniMax-M2.5
}
```
-Note: OpenClaw's Anthropic client appends `/v1` to the base URL, so use
-`https://api.synthetic.new/anthropic` (not `/anthropic/v1`). If Synthetic changes
-its base URL, override `models.providers.synthetic.baseUrl`.
-
## Model catalog
-All models below use cost `0` (input/output/cache).
+All Synthetic models use cost `0` (input/output/cache).
| Model ID | Context window | Max tokens | Reasoning | Input |
| ------------------------------------------------------ | -------------- | ---------- | --------- | ------------ |
-| `hf:MiniMaxAI/MiniMax-M2.5` | 192000 | 65536 | false | text |
-| `hf:moonshotai/Kimi-K2-Thinking` | 256000 | 8192 | true | text |
-| `hf:zai-org/GLM-4.7` | 198000 | 128000 | false | text |
-| `hf:deepseek-ai/DeepSeek-R1-0528` | 128000 | 8192 | false | text |
-| `hf:deepseek-ai/DeepSeek-V3-0324` | 128000 | 8192 | false | text |
-| `hf:deepseek-ai/DeepSeek-V3.1` | 128000 | 8192 | false | text |
-| `hf:deepseek-ai/DeepSeek-V3.1-Terminus` | 128000 | 8192 | false | text |
-| `hf:deepseek-ai/DeepSeek-V3.2` | 159000 | 8192 | false | text |
-| `hf:meta-llama/Llama-3.3-70B-Instruct` | 128000 | 8192 | false | text |
-| `hf:meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 524000 | 8192 | false | text |
-| `hf:moonshotai/Kimi-K2-Instruct-0905` | 256000 | 8192 | false | text |
-| `hf:moonshotai/Kimi-K2.5` | 256000 | 8192 | true | text + image |
-| `hf:openai/gpt-oss-120b` | 128000 | 8192 | false | text |
-| `hf:Qwen/Qwen3-235B-A22B-Instruct-2507` | 256000 | 8192 | false | text |
-| `hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256000 | 8192 | false | text |
-| `hf:Qwen/Qwen3-VL-235B-A22B-Instruct` | 250000 | 8192 | false | text + image |
-| `hf:zai-org/GLM-4.5` | 128000 | 128000 | false | text |
-| `hf:zai-org/GLM-4.6` | 198000 | 128000 | false | text |
-| `hf:zai-org/GLM-5` | 256000 | 128000 | true | text + image |
-| `hf:deepseek-ai/DeepSeek-V3` | 128000 | 8192 | false | text |
-| `hf:Qwen/Qwen3-235B-A22B-Thinking-2507` | 256000 | 8192 | true | text |
+| `hf:MiniMaxAI/MiniMax-M2.5` | 192,000 | 65,536 | no | text |
+| `hf:moonshotai/Kimi-K2-Thinking` | 256,000 | 8,192 | yes | text |
+| `hf:zai-org/GLM-4.7` | 198,000 | 128,000 | no | text |
+| `hf:deepseek-ai/DeepSeek-R1-0528` | 128,000 | 8,192 | no | text |
+| `hf:deepseek-ai/DeepSeek-V3-0324` | 128,000 | 8,192 | no | text |
+| `hf:deepseek-ai/DeepSeek-V3.1` | 128,000 | 8,192 | no | text |
+| `hf:deepseek-ai/DeepSeek-V3.1-Terminus` | 128,000 | 8,192 | no | text |
+| `hf:deepseek-ai/DeepSeek-V3.2` | 159,000 | 8,192 | no | text |
+| `hf:meta-llama/Llama-3.3-70B-Instruct` | 128,000 | 8,192 | no | text |
+| `hf:meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 524,000 | 8,192 | no | text |
+| `hf:moonshotai/Kimi-K2-Instruct-0905` | 256,000 | 8,192 | no | text |
+| `hf:moonshotai/Kimi-K2.5` | 256,000 | 8,192 | yes | text + image |
+| `hf:openai/gpt-oss-120b` | 128,000 | 8,192 | no | text |
+| `hf:Qwen/Qwen3-235B-A22B-Instruct-2507` | 256,000 | 8,192 | no | text |
+| `hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256,000 | 8,192 | no | text |
+| `hf:Qwen/Qwen3-VL-235B-A22B-Instruct` | 250,000 | 8,192 | no | text + image |
+| `hf:zai-org/GLM-4.5` | 128,000 | 128,000 | no | text |
+| `hf:zai-org/GLM-4.6` | 198,000 | 128,000 | no | text |
+| `hf:zai-org/GLM-5` | 256,000 | 128,000 | yes | text + image |
+| `hf:deepseek-ai/DeepSeek-V3` | 128,000 | 8,192 | no | text |
+| `hf:Qwen/Qwen3-235B-A22B-Thinking-2507` | 256,000 | 8,192 | yes | text |
-## Notes
+
+Model refs use the form `synthetic/<model>`. Use
+`openclaw models list --provider synthetic` to see all models available on your
+account.
+
-- Model refs use `synthetic/`.
-- If you enable a model allowlist (`agents.defaults.models`), add every model you
- plan to use.
-- See [Model providers](/concepts/model-providers) for provider rules.
+
+
+ If you enable a model allowlist (`agents.defaults.models`), add every
+ Synthetic model you plan to use. Models not in the allowlist will be hidden
+ from the agent.
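+
+  A sketch, assuming the allowlist takes a list of model refs (the model IDs
+  here come from the catalog above):
+
+  ```json5
+  {
+    agents: {
+      defaults: {
+        models: [
+          "synthetic/hf:MiniMaxAI/MiniMax-M2.5",
+          "synthetic/hf:zai-org/GLM-4.7",
+        ],
+      },
+    },
+  }
+  ```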
+
+
+
+ If Synthetic changes its API endpoint, override the base URL in your config:
+
+ ```json5
+ {
+ models: {
+ providers: {
+ synthetic: {
+ baseUrl: "https://new-api.synthetic.new/anthropic",
+ },
+ },
+ },
+ }
+ ```
+
+ Remember that OpenClaw appends `/v1` automatically.
+
+
+
+
+## Related
+
+
+
+ Provider rules, model refs, and failover behavior.
+
+
+ Full config schema including provider settings.
+
+
+ Synthetic dashboard and API docs.
+
+
diff --git a/docs/providers/together.md b/docs/providers/together.md
index 42898f4e08a..01a5e778e46 100644
--- a/docs/providers/together.md
+++ b/docs/providers/together.md
@@ -8,34 +8,42 @@ read_when:
# Together AI
-The [Together AI](https://together.ai) provides access to leading open-source models including Llama, DeepSeek, Kimi, and more through a unified API.
+[Together AI](https://together.ai) provides access to leading open-source
+models including Llama, DeepSeek, Kimi, and more through a unified API.
-- Provider: `together`
-- Auth: `TOGETHER_API_KEY`
-- API: OpenAI-compatible
-- Base URL: `https://api.together.xyz/v1`
+| Property | Value |
+| -------- | ----------------------------- |
+| Provider | `together` |
+| Auth | `TOGETHER_API_KEY` |
+| API | OpenAI-compatible |
+| Base URL | `https://api.together.xyz/v1` |
-## Quick start
+## Getting started
-1. Set the API key (recommended: store it for the Gateway):
+
+
+ Create an API key at
+ [api.together.ai/settings/api-keys](https://api.together.ai/settings/api-keys).
+
+
+ ```bash
+ openclaw onboard --auth-choice together-api-key
+ ```
+
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ model: { primary: "together/moonshotai/Kimi-K2.5" },
+ },
+ },
+ }
+ ```
+
+
-```bash
-openclaw onboard --auth-choice together-api-key
-```
-
-2. Set a default model:
-
-```json5
-{
- agents: {
- defaults: {
- model: { primary: "together/moonshotai/Kimi-K2.5" },
- },
- },
-}
-```
-
-## Non-interactive example
+### Non-interactive example
```bash
openclaw onboard --non-interactive \
@@ -44,17 +52,14 @@ openclaw onboard --non-interactive \
--together-api-key "$TOGETHER_API_KEY"
```
-This will set `together/moonshotai/Kimi-K2.5` as the default model.
-
-## Environment note
-
-If the Gateway runs as a daemon (launchd/systemd), make sure `TOGETHER_API_KEY`
-is available to that process (for example, in `~/.openclaw/.env` or via
-`env.shellEnv`).
+
+The onboarding preset sets `together/moonshotai/Kimi-K2.5` as the default
+model.
+
## Built-in catalog
-OpenClaw currently ships this bundled Together catalog:
+OpenClaw ships this bundled Together catalog:
| Model ref | Name | Input | Context | Notes |
| ------------------------------------------------------------ | -------------------------------------- | ----------- | ---------- | -------------------------------- |
@@ -67,16 +72,16 @@ OpenClaw currently ships this bundled Together catalog:
| `together/deepseek-ai/DeepSeek-R1` | DeepSeek R1 | text | 131,072 | Reasoning model |
| `together/moonshotai/Kimi-K2-Instruct-0905` | Kimi K2-Instruct 0905 | text | 262,144 | Secondary Kimi text model |
-The onboarding preset sets `together/moonshotai/Kimi-K2.5` as the default model.
-
## Video generation
The bundled `together` plugin also registers video generation through the
shared `video_generate` tool.
-- Default video model: `together/Wan-AI/Wan2.2-T2V-A14B`
-- Modes: text-to-video and single-image reference flows
-- Supports `aspectRatio` and `resolution`
+| Property | Value |
+| -------------------- | ------------------------------------- |
+| Default video model | `together/Wan-AI/Wan2.2-T2V-A14B` |
+| Modes | text-to-video, single-image reference |
+| Supported parameters | `aspectRatio`, `resolution` |
To use Together as the default video provider:
@@ -92,5 +97,46 @@ To use Together as the default video provider:
}
```
-See [Video Generation](/tools/video-generation) for the shared tool
-parameters, provider selection, and failover behavior.
+
+See [Video Generation](/tools/video-generation) for the shared tool parameters,
+provider selection, and failover behavior.
+
+
+
+
+ If the Gateway runs as a daemon (launchd/systemd), make sure
+ `TOGETHER_API_KEY` is available to that process (for example, in
+ `~/.openclaw/.env` or via `env.shellEnv`).
+
+
+ Keys set only in your interactive shell are not visible to daemon-managed
+ gateway processes. Use `~/.openclaw/.env` or `env.shellEnv` config for
+ persistent availability.
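+
+  For example, a `~/.openclaw/.env` sketch (the key value is a placeholder):
+
+  ```
+  TOGETHER_API_KEY=tok_example_123
+  ```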
+
+
+
+
+
+ - Verify your key works: `openclaw models list --provider together`
+ - If models are not appearing, confirm the API key is set in the correct
+ environment for your Gateway process.
+  - Model refs use the form `together/<model>`.
+
+
+
+## Related
+
+
+
+ Provider rules, model refs, and failover behavior.
+
+
+ Shared video generation tool parameters and provider selection.
+
+
+ Full config schema including provider settings.
+
+
+ Together AI dashboard, API docs, and pricing.
+
+
diff --git a/docs/providers/volcengine.md b/docs/providers/volcengine.md
index 50ff9c33a74..e0afa94628a 100644
--- a/docs/providers/volcengine.md
+++ b/docs/providers/volcengine.md
@@ -12,31 +12,46 @@ The Volcengine provider gives access to Doubao models and third-party models
hosted on Volcano Engine, with separate endpoints for general and coding
workloads.
-- Providers: `volcengine` (general) + `volcengine-plan` (coding)
-- Auth: `VOLCANO_ENGINE_API_KEY`
-- API: OpenAI-compatible
+| Detail | Value |
+| --------- | --------------------------------------------------- |
+| Providers | `volcengine` (general) + `volcengine-plan` (coding) |
+| Auth | `VOLCANO_ENGINE_API_KEY` |
+| API | OpenAI-compatible |
-## Quick start
+## Getting started
-1. Set the API key:
+
+
+ Run interactive onboarding:
-```bash
-openclaw onboard --auth-choice volcengine-api-key
-```
+ ```bash
+ openclaw onboard --auth-choice volcengine-api-key
+ ```
-2. Set a default model:
+  This registers both the general (`volcengine`) and coding
+  (`volcengine-plan`) providers from a single API key.
-```json5
-{
- agents: {
- defaults: {
- model: { primary: "volcengine-plan/ark-code-latest" },
- },
- },
-}
-```
+
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ model: { primary: "volcengine-plan/ark-code-latest" },
+ },
+ },
+ }
+ ```
+
+
+ ```bash
+ openclaw models list --provider volcengine
+ openclaw models list --provider volcengine-plan
+ ```
+
+
-## Non-interactive example
+
+For non-interactive setup (CI, scripting), pass the key directly:
```bash
openclaw onboard --non-interactive \
@@ -45,6 +60,8 @@ openclaw onboard --non-interactive \
--volcengine-api-key "$VOLCANO_ENGINE_API_KEY"
```
+
+
## Providers and endpoints
| Provider | Endpoint | Use case |
@@ -52,43 +69,75 @@ openclaw onboard --non-interactive \
| `volcengine` | `ark.cn-beijing.volces.com/api/v3` | General models |
| `volcengine-plan` | `ark.cn-beijing.volces.com/api/coding/v3` | Coding models |
-Both providers are configured from a single API key. Setup registers both
-automatically.
+
+Both providers are configured from a single API key. Setup registers both
+automatically.
+
## Available models
-General provider (`volcengine`):
+
+
+ | Model ref | Name | Input | Context |
+ | -------------------------------------------- | ------------------------------- | ----------- | ------- |
+ | `volcengine/doubao-seed-1-8-251228` | Doubao Seed 1.8 | text, image | 256,000 |
+ | `volcengine/doubao-seed-code-preview-251028` | doubao-seed-code-preview-251028 | text, image | 256,000 |
+ | `volcengine/kimi-k2-5-260127` | Kimi K2.5 | text, image | 256,000 |
+ | `volcengine/glm-4-7-251222` | GLM 4.7 | text, image | 200,000 |
+ | `volcengine/deepseek-v3-2-251201` | DeepSeek V3.2 | text, image | 128,000 |
+
+
+ | Model ref | Name | Input | Context |
+ | ------------------------------------------------- | ------------------------ | ----- | ------- |
+ | `volcengine-plan/ark-code-latest` | Ark Coding Plan | text | 256,000 |
+ | `volcengine-plan/doubao-seed-code` | Doubao Seed Code | text | 256,000 |
+ | `volcengine-plan/glm-4.7` | GLM 4.7 Coding | text | 200,000 |
+ | `volcengine-plan/kimi-k2-thinking` | Kimi K2 Thinking | text | 256,000 |
+ | `volcengine-plan/kimi-k2.5` | Kimi K2.5 Coding | text | 256,000 |
+ | `volcengine-plan/doubao-seed-code-preview-251028` | Doubao Seed Code Preview | text | 256,000 |
+
+
-| Model ref | Name | Input | Context |
-| -------------------------------------------- | ------------------------------- | ----------- | ------- |
-| `volcengine/doubao-seed-1-8-251228` | Doubao Seed 1.8 | text, image | 256,000 |
-| `volcengine/doubao-seed-code-preview-251028` | doubao-seed-code-preview-251028 | text, image | 256,000 |
-| `volcengine/kimi-k2-5-260127` | Kimi K2.5 | text, image | 256,000 |
-| `volcengine/glm-4-7-251222` | GLM 4.7 | text, image | 200,000 |
-| `volcengine/deepseek-v3-2-251201` | DeepSeek V3.2 | text, image | 128,000 |
+## Advanced notes
-Coding provider (`volcengine-plan`):
+
+
+ `openclaw onboard --auth-choice volcengine-api-key` currently sets
+ `volcengine-plan/ark-code-latest` as the default model while also registering
+ the general `volcengine` catalog.
+
-| Model ref | Name | Input | Context |
-| ------------------------------------------------- | ------------------------ | ----- | ------- |
-| `volcengine-plan/ark-code-latest` | Ark Coding Plan | text | 256,000 |
-| `volcengine-plan/doubao-seed-code` | Doubao Seed Code | text | 256,000 |
-| `volcengine-plan/glm-4.7` | GLM 4.7 Coding | text | 200,000 |
-| `volcengine-plan/kimi-k2-thinking` | Kimi K2 Thinking | text | 256,000 |
-| `volcengine-plan/kimi-k2.5` | Kimi K2.5 Coding | text | 256,000 |
-| `volcengine-plan/doubao-seed-code-preview-251028` | Doubao Seed Code Preview | text | 256,000 |
+
+ During onboarding/configure model selection, the Volcengine auth choice prefers
+ both `volcengine/*` and `volcengine-plan/*` rows. If those models are not
+ loaded yet, OpenClaw falls back to the unfiltered catalog instead of showing an
+ empty provider-scoped picker.
+
-`openclaw onboard --auth-choice volcengine-api-key` currently sets
-`volcengine-plan/ark-code-latest` as the default model while also registering
-the general `volcengine` catalog.
+
+ If the Gateway runs as a daemon (launchd/systemd), make sure
+ `VOLCANO_ENGINE_API_KEY` is available to that process (for example, in
+ `~/.openclaw/.env` or via `env.shellEnv`).
+
+
-During onboarding/configure model selection, the Volcengine auth choice prefers
-both `volcengine/*` and `volcengine-plan/*` rows. If those models are not
-loaded yet, OpenClaw falls back to the unfiltered catalog instead of showing an
-empty provider-scoped picker.
+
+When running OpenClaw as a background service, environment variables set in your
+interactive shell are not automatically inherited. See the daemon note above.
+
-## Environment note
+## Related
-If the Gateway runs as a daemon (launchd/systemd), make sure
-`VOLCANO_ENGINE_API_KEY` is available to that process (for example, in
-`~/.openclaw/.env` or via `env.shellEnv`).
+
+
+ Choosing providers, model refs, and failover behavior.
+
+
+ Full config reference for agents, models, and providers.
+
+
+ Common issues and debugging steps.
+
+
+ Frequently asked questions about OpenClaw setup.
+
+