diff --git a/docs/providers/alibaba.md b/docs/providers/alibaba.md
index 27f420e1858..2496f1f3ae9 100644
--- a/docs/providers/alibaba.md
+++ b/docs/providers/alibaba.md
@@ -16,57 +16,101 @@ Alibaba Model Studio / DashScope.
- Also accepted: `DASHSCOPE_API_KEY`, `QWEN_API_KEY`
- API: DashScope / Model Studio async video generation
-## Quick start
+## Getting started
-1. Set an API key:
-
-```bash
-openclaw onboard --auth-choice qwen-standard-api-key
-```
-
-2. Set a default video model:
-
-```json5
-{
- agents: {
- defaults: {
- videoGenerationModel: {
- primary: "alibaba/wan2.6-t2v",
+
+1. Set an API key:
+ ```bash
+ openclaw onboard --auth-choice qwen-standard-api-key
+ ```
+
+2. Set a default video model:
+ ```json5
+ {
+ agents: {
+ defaults: {
+ videoGenerationModel: {
+ primary: "alibaba/wan2.6-t2v",
+ },
+ },
},
- },
- },
-}
-```
+ }
+ ```
+
+3. List available models:
+ ```bash
+ openclaw models list --provider alibaba
+ ```
+
+
+
+
+Any of the accepted auth keys (`MODELSTUDIO_API_KEY`, `DASHSCOPE_API_KEY`, `QWEN_API_KEY`) will work. The `qwen-standard-api-key` onboarding choice configures the shared DashScope credential.
+
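+If you prefer configuring the key directly instead of running onboarding, you can set it in the `env` block of your OpenClaw config, mirroring the pattern used in the other provider docs (the key value here is a placeholder):
+
+```json5
+{
+  env: { MODELSTUDIO_API_KEY: "sk-..." },
+}
+```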
## Built-in Wan models
The bundled `alibaba` provider currently registers:
-- `alibaba/wan2.6-t2v`
-- `alibaba/wan2.6-i2v`
-- `alibaba/wan2.6-r2v`
-- `alibaba/wan2.6-r2v-flash`
-- `alibaba/wan2.7-r2v`
+| Model ref | Mode |
+| -------------------------- | ------------------------- |
+| `alibaba/wan2.6-t2v` | Text-to-video |
+| `alibaba/wan2.6-i2v` | Image-to-video |
+| `alibaba/wan2.6-r2v` | Reference-to-video |
+| `alibaba/wan2.6-r2v-flash` | Reference-to-video (fast) |
+| `alibaba/wan2.7-r2v` | Reference-to-video |
## Current limits
-- Up to **1** output video per request
-- Up to **1** input image
-- Up to **4** input videos
-- Up to **10 seconds** duration
-- Supports `size`, `aspectRatio`, `resolution`, `audio`, and `watermark`
-- Reference image/video mode currently requires **remote http(s) URLs**
+| Parameter | Limit |
+| --------------------- | --------------------------------------------------------- |
+| Output videos | Up to **1** per request |
+| Input images | Up to **1** |
+| Input videos | Up to **4** |
+| Duration | Up to **10 seconds** |
+| Supported controls | `size`, `aspectRatio`, `resolution`, `audio`, `watermark` |
+| Reference image/video | Remote `http(s)` URLs only |
-## Relationship to Qwen
+
+Reference image/video mode currently requires **remote http(s) URLs**. Local file paths are not supported for reference inputs.
+
-The bundled `qwen` provider also uses Alibaba-hosted DashScope endpoints for
-Wan video generation. Use:
+## Advanced configuration
-- `qwen/...` when you want the canonical Qwen provider surface
-- `alibaba/...` when you want the direct vendor-owned Wan video surface
+
+### Relationship to Qwen
+ The bundled `qwen` provider also uses Alibaba-hosted DashScope endpoints for
+ Wan video generation. Use:
+
+ - `qwen/...` when you want the canonical Qwen provider surface
+ - `alibaba/...` when you want the direct vendor-owned Wan video surface
+
+ See the [Qwen provider docs](/providers/qwen) for more detail.
+
+
+
+### Auth key precedence
+ OpenClaw checks for auth keys in this order:
+
+ 1. `MODELSTUDIO_API_KEY` (preferred)
+ 2. `DASHSCOPE_API_KEY`
+ 3. `QWEN_API_KEY`
+
+ Any of these will authenticate the `alibaba` provider.
+
+
+
## Related
-- [Video Generation](/tools/video-generation)
-- [Qwen](/providers/qwen)
-- [Configuration Reference](/gateway/configuration-reference#agent-defaults)
+
+- [Video Generation](/tools/video-generation): Shared video tool parameters and provider selection.
+- [Qwen](/providers/qwen): Qwen provider setup and DashScope integration.
+- [Configuration Reference](/gateway/configuration-reference#agent-defaults): Agent defaults and model configuration.
+
diff --git a/docs/providers/cloudflare-ai-gateway.md b/docs/providers/cloudflare-ai-gateway.md
index 392a611e705..1d189420514 100644
--- a/docs/providers/cloudflare-ai-gateway.md
+++ b/docs/providers/cloudflare-ai-gateway.md
@@ -10,35 +10,55 @@ read_when:
Cloudflare AI Gateway sits in front of provider APIs and lets you add analytics, caching, and controls. For Anthropic, OpenClaw uses the Anthropic Messages API through your Gateway endpoint.
-- Provider: `cloudflare-ai-gateway`
-- Base URL: `https://gateway.ai.cloudflare.com/v1///anthropic`
-- Default model: `cloudflare-ai-gateway/claude-sonnet-4-5`
-- API key: `CLOUDFLARE_AI_GATEWAY_API_KEY` (your provider API key for requests through the Gateway)
+| Property | Value |
+| ------------- | ---------------------------------------------------------------------------------------- |
+| Provider | `cloudflare-ai-gateway` |
+| Base URL      | `https://gateway.ai.cloudflare.com/v1/<account-id>/<gateway-id>/anthropic`               |
+| Default model | `cloudflare-ai-gateway/claude-sonnet-4-5` |
+| API key | `CLOUDFLARE_AI_GATEWAY_API_KEY` (your provider API key for requests through the Gateway) |
-For Anthropic models, use your Anthropic API key.
+
+For Anthropic models routed through Cloudflare AI Gateway, use your **Anthropic API key** as the provider key.
+
-## Quick start
+## Getting started
-1. Set the provider API key and Gateway details:
+
+
+ Run onboarding and choose the Cloudflare AI Gateway auth option:
-```bash
-openclaw onboard --auth-choice cloudflare-ai-gateway-api-key
-```
+ ```bash
+ openclaw onboard --auth-choice cloudflare-ai-gateway-api-key
+ ```
-2. Set a default model:
+ This prompts for your account ID, gateway ID, and API key.
-```json5
-{
- agents: {
- defaults: {
- model: { primary: "cloudflare-ai-gateway/claude-sonnet-4-5" },
- },
- },
-}
-```
+
+
+ Add the model to your OpenClaw config:
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ model: { primary: "cloudflare-ai-gateway/claude-sonnet-4-5" },
+ },
+ },
+ }
+ ```
+
+
+ List available models:
+ ```bash
+ openclaw models list --provider cloudflare-ai-gateway
+ ```
+
+
## Non-interactive example
+For scripted or CI setups, pass all values on the command line:
+
```bash
openclaw onboard --non-interactive \
--mode local \
@@ -48,24 +68,49 @@ openclaw onboard --non-interactive \
--cloudflare-ai-gateway-api-key "$CLOUDFLARE_AI_GATEWAY_API_KEY"
```
-## Authenticated gateways
+## Advanced configuration
-If you enabled Gateway authentication in Cloudflare, add the `cf-aig-authorization` header (this is in addition to your provider API key).
+
+### Authenticated gateways
+ If you enabled Gateway authentication in Cloudflare, add the `cf-aig-authorization` header. This is **in addition to** your provider API key.
-```json5
-{
- models: {
- providers: {
- "cloudflare-ai-gateway": {
- headers: {
- "cf-aig-authorization": "Bearer ",
+ ```json5
+ {
+ models: {
+ providers: {
+ "cloudflare-ai-gateway": {
+ headers: {
+ "cf-aig-authorization": "Bearer <token>",
+ },
+ },
},
},
- },
- },
-}
-```
+ }
+ ```
-## Environment note
+
+ The `cf-aig-authorization` header authenticates with the Cloudflare Gateway itself, while the provider API key (for example, your Anthropic key) authenticates with the upstream provider.
+
-If the Gateway runs as a daemon (launchd/systemd), make sure `CLOUDFLARE_AI_GATEWAY_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).
+
+
+### Environment note
+ If the Gateway runs as a daemon (launchd/systemd), make sure `CLOUDFLARE_AI_GATEWAY_API_KEY` is available to that process.
+
+
+ A key sitting only in `~/.profile` will not help a launchd/systemd daemon unless that environment is imported there as well. Set the key in `~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can read it.
+
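+ For example, an entry in `~/.openclaw/.env` (the value is a placeholder) that a daemonized Gateway can read:
+
+ ```bash
+ # ~/.openclaw/.env
+ CLOUDFLARE_AI_GATEWAY_API_KEY=...
+ ```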
+
+
+
+
+## Related
+
+- Choosing providers, model refs, and failover behavior.
+- General troubleshooting and FAQ.
+
diff --git a/docs/providers/fireworks.md b/docs/providers/fireworks.md
index 92ceaa467f5..f9704ef96ea 100644
--- a/docs/providers/fireworks.md
+++ b/docs/providers/fireworks.md
@@ -7,26 +7,38 @@ read_when:
# Fireworks
-[Fireworks](https://fireworks.ai) exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw now includes a bundled Fireworks provider plugin.
+[Fireworks](https://fireworks.ai) exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw includes a bundled Fireworks provider plugin.
-- Provider: `fireworks`
-- Auth: `FIREWORKS_API_KEY`
-- API: OpenAI-compatible chat/completions
-- Base URL: `https://api.fireworks.ai/inference/v1`
-- Default model: `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo`
+| Property | Value |
+| ------------- | ------------------------------------------------------ |
+| Provider | `fireworks` |
+| Auth | `FIREWORKS_API_KEY` |
+| API | OpenAI-compatible chat/completions |
+| Base URL | `https://api.fireworks.ai/inference/v1` |
+| Default model | `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` |
-## Quick start
+## Getting started
-Set up Fireworks auth through onboarding:
+
+ Set up Fireworks auth through onboarding:
+ ```bash
+ openclaw onboard --auth-choice fireworks-api-key
+ ```
-```bash
-openclaw onboard --auth-choice fireworks-api-key
-```
+ This stores your Fireworks key in OpenClaw config and sets the Fire Pass starter model as the default.
-This stores your Fireworks key in OpenClaw config and sets the Fire Pass starter model as the default.
+
+ List available models:
+ ```bash
+ openclaw models list --provider fireworks
+ ```
+
+
## Non-interactive example
+For scripted or CI setups, pass all values on the command line:
+
```bash
openclaw onboard --non-interactive \
--mode local \
@@ -36,24 +48,20 @@ openclaw onboard --non-interactive \
--accept-risk
```
-## Environment note
-
-If the Gateway runs outside your interactive shell, make sure `FIREWORKS_API_KEY`
-is available to that process too. A key sitting only in `~/.profile` will not
-help a launchd/systemd daemon unless that environment is imported there as well.
-
## Built-in catalog
| Model ref | Name | Input | Context | Max output | Notes |
| ------------------------------------------------------ | --------------------------- | ---------- | ------- | ---------- | ------------------------------------------ |
| `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` | Kimi K2.5 Turbo (Fire Pass) | text,image | 256,000 | 256,000 | Default bundled starter model on Fireworks |
+
+If Fireworks publishes a newer model such as a fresh Qwen or Gemma release, you can switch to it directly by using its Fireworks model id without waiting for a bundled catalog update.
+
+
## Custom Fireworks model ids
OpenClaw accepts dynamic Fireworks model ids too. Use the exact model or router id shown by Fireworks and prefix it with `fireworks/`.
-Example:
-
```json5
{
agents: {
@@ -66,4 +74,34 @@ Example:
}
```
-If Fireworks publishes a newer model such as a fresh Qwen or Gemma release, you can switch to it directly by using its Fireworks model id without waiting for a bundled catalog update.
+
+### Model ref format
+ Every Fireworks model ref in OpenClaw starts with `fireworks/` followed by the exact id or router path from the Fireworks platform. For example:
+
+ - Router model: `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo`
+ - Direct model: `fireworks/accounts/fireworks/models/<model-id>`
+
+ OpenClaw strips the `fireworks/` prefix when building the API request and sends the remaining path to the Fireworks endpoint.
+
+
+
+### Environment note
+ If the Gateway runs outside your interactive shell, make sure `FIREWORKS_API_KEY` is available to that process too.
+
+
+ A key sitting only in `~/.profile` will not help a launchd/systemd daemon unless that environment is imported there as well. Set the key in `~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can read it.
+
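+ Concretely, an entry in `~/.openclaw/.env` (placeholder value) makes the key visible to a launchd/systemd-managed process:
+
+ ```bash
+ # ~/.openclaw/.env
+ FIREWORKS_API_KEY=...
+ ```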
+
+
+
+
+## Related
+
+- Choosing providers, model refs, and failover behavior.
+- General troubleshooting and FAQ.
+
diff --git a/docs/providers/mistral.md b/docs/providers/mistral.md
index bda3bfb3e15..f4241ad14b1 100644
--- a/docs/providers/mistral.md
+++ b/docs/providers/mistral.md
@@ -12,22 +12,42 @@ OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and
audio transcription via Voxtral in media understanding.
Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).
-## CLI setup
+- Provider: `mistral`
+- Auth: `MISTRAL_API_KEY`
+- API: Mistral Chat Completions (`https://api.mistral.ai/v1`)
-```bash
-openclaw onboard --auth-choice mistral-api-key
-# or non-interactive
-openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
-```
+## Getting started
-## Config snippet (LLM provider)
+
+
+ Create an API key in the [Mistral Console](https://console.mistral.ai/).
+
+ Run onboarding:
+ ```bash
+ openclaw onboard --auth-choice mistral-api-key
+ ```
-```json5
-{
- env: { MISTRAL_API_KEY: "sk-..." },
- agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
-}
-```
+ Or pass the key directly:
+
+ ```bash
+ openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
+ ```
+
+
+ Set the key and a default model in your config:
+ ```json5
+ {
+ env: { MISTRAL_API_KEY: "sk-..." },
+ agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
+ }
+ ```
+
+ List available models:
+ ```bash
+ openclaw models list --provider mistral
+ ```
+
+
## Built-in LLM catalog
@@ -43,7 +63,9 @@ OpenClaw currently ships this bundled Mistral catalog:
| `mistral/devstral-medium-latest` | text | 262,144 | 32,768 | Devstral 2 |
| `mistral/magistral-small` | text | 128,000 | 40,000 | Reasoning-enabled |
-## Config snippet (audio transcription with Voxtral)
+## Audio transcription (Voxtral)
+
+Use Voxtral for audio transcription through the media understanding pipeline.
```json5
{
@@ -58,22 +80,55 @@ OpenClaw currently ships this bundled Mistral catalog:
}
```
-## Adjustable reasoning (`mistral-small-latest`)
+
+The media transcription path uses `/v1/audio/transcriptions`. The default audio model for Mistral is `voxtral-mini-latest`.
+
-`mistral/mistral-small-latest` maps to Mistral Small 4 and supports [adjustable reasoning](https://docs.mistral.ai/capabilities/reasoning/adjustable) on the Chat Completions API via `reasoning_effort` (`none` minimizes extra thinking in the output; `high` surfaces full thinking traces before the final answer).
+## Advanced configuration
-OpenClaw maps the session **thinking** level to Mistral’s API:
+
+### Adjustable reasoning (`mistral-small-latest`)
+ `mistral/mistral-small-latest` maps to Mistral Small 4 and supports [adjustable reasoning](https://docs.mistral.ai/capabilities/reasoning/adjustable) on the Chat Completions API via `reasoning_effort` (`none` minimizes extra thinking in the output; `high` surfaces full thinking traces before the final answer).
-- **off** / **minimal** → `none`
-- **low** / **medium** / **high** / **xhigh** / **adaptive** → `high`
+ OpenClaw maps the session **thinking** level to Mistral's API:
-Other bundled Mistral catalog models do not use this parameter; keep using `magistral-*` models when you want Mistral’s native reasoning-first behavior.
+ | OpenClaw thinking level | Mistral `reasoning_effort` |
+ | ------------------------------------------------ | -------------------------- |
+ | **off** / **minimal** | `none` |
+ | **low** / **medium** / **high** / **xhigh** / **adaptive** | `high` |
-## Notes
+
+ Other bundled Mistral catalog models do not use this parameter. Keep using `magistral-*` models when you want Mistral's native reasoning-first behavior.
+
-- Mistral auth uses `MISTRAL_API_KEY`.
-- Provider base URL defaults to `https://api.mistral.ai/v1`.
-- Onboarding default model is `mistral/mistral-large-latest`.
-- Media-understanding default audio model for Mistral is `voxtral-mini-latest`.
-- Media transcription path uses `/v1/audio/transcriptions`.
-- Memory embeddings path uses `/v1/embeddings` (default model: `mistral-embed`).
+
+
+### Memory embeddings
+ Mistral can serve memory embeddings via `/v1/embeddings` (default model: `mistral-embed`).
+
+ ```json5
+ {
+ memorySearch: { provider: "mistral" },
+ }
+ ```
+
+
+
+### Notes
+ - Mistral auth uses `MISTRAL_API_KEY`.
+ - Provider base URL defaults to `https://api.mistral.ai/v1`.
+ - Onboarding default model is `mistral/mistral-large-latest`.
+
+
+
+## Related
+
+- Choosing providers, model refs, and failover behavior.
+- Audio transcription setup and provider selection.
+
diff --git a/docs/providers/zai.md b/docs/providers/zai.md
index 91a478fb28a..1dc3234d37e 100644
--- a/docs/providers/zai.md
+++ b/docs/providers/zai.md
@@ -12,64 +12,142 @@ Z.AI is the API platform for **GLM** models. It provides REST APIs for GLM and u
for authentication. Create your API key in the Z.AI console. OpenClaw uses the `zai` provider
with a Z.AI API key.
-## CLI setup
+- Provider: `zai`
+- Auth: `ZAI_API_KEY`
+- API: Z.AI Chat Completions (Bearer auth)
-```bash
-# Generic API-key setup with endpoint auto-detection
-openclaw onboard --auth-choice zai-api-key
+## Getting started
-# Coding Plan Global, recommended for Coding Plan users
-openclaw onboard --auth-choice zai-coding-global
+
+### Endpoint auto-detection (recommended)
+ **Best for:** most users. OpenClaw detects the matching Z.AI endpoint from the key and applies the correct base URL automatically.
-# Coding Plan CN (China region), recommended for Coding Plan users
-openclaw onboard --auth-choice zai-coding-cn
+
+
+ ```bash
+ openclaw onboard --auth-choice zai-api-key
+ ```
+
+ Or set the key and a default model in config:
+ ```json5
+ {
+ env: { ZAI_API_KEY: "sk-..." },
+ agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
+ }
+ ```
+
+ List available models:
+ ```bash
+ openclaw models list --provider zai
+ ```
+
+
-# General API
-openclaw onboard --auth-choice zai-global
+
-# General API CN (China region)
-openclaw onboard --auth-choice zai-cn
-```
+### Explicit regional endpoints
+ **Best for:** users who want to force a specific Coding Plan or general API surface.
-## Config snippet
+
+ Choose the auth option for your plan and region:
+ ```bash
+ # Coding Plan Global (recommended for Coding Plan users)
+ openclaw onboard --auth-choice zai-coding-global
-```json5
-{
- env: { ZAI_API_KEY: "sk-..." },
- agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
-}
-```
+ # Coding Plan CN (China region)
+ openclaw onboard --auth-choice zai-coding-cn
-`zai-api-key` lets OpenClaw detect the matching Z.AI endpoint from the key and
-apply the correct base URL automatically. Use the explicit regional choices when
-you want to force a specific Coding Plan or general API surface.
+ # General API
+ openclaw onboard --auth-choice zai-global
+
+ # General API CN (China region)
+ openclaw onboard --auth-choice zai-cn
+ ```
+
+
+ ```json5
+ {
+ env: { ZAI_API_KEY: "sk-..." },
+ agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
+ }
+ ```
+
+
+ ```bash
+ openclaw models list --provider zai
+ ```
+
+
+
+
+
## Bundled GLM catalog
OpenClaw currently seeds the bundled `zai` provider with:
-- `glm-5.1`
-- `glm-5`
-- `glm-5-turbo`
-- `glm-5v-turbo`
-- `glm-4.7`
-- `glm-4.7-flash`
-- `glm-4.7-flashx`
-- `glm-4.6`
-- `glm-4.6v`
-- `glm-4.5`
-- `glm-4.5-air`
-- `glm-4.5-flash`
-- `glm-4.5v`
+| Model ref | Notes |
+| -------------------- | ------------- |
+| `zai/glm-5.1` | Default model |
+| `zai/glm-5` | |
+| `zai/glm-5-turbo` | |
+| `zai/glm-5v-turbo` | |
+| `zai/glm-4.7` | |
+| `zai/glm-4.7-flash` | |
+| `zai/glm-4.7-flashx` | |
+| `zai/glm-4.6` | |
+| `zai/glm-4.6v` | |
+| `zai/glm-4.5` | |
+| `zai/glm-4.5-air` | |
+| `zai/glm-4.5-flash` | |
+| `zai/glm-4.5v` | |
-## Notes
+
+GLM models are available as `zai/<model-id>` (example: `zai/glm-5`). The default bundled model ref is `zai/glm-5.1`.
+
-- GLM models are available as `zai/` (example: `zai/glm-5`).
-- Default bundled model ref: `zai/glm-5.1`
-- Unknown `glm-5*` ids still forward-resolve on the bundled provider path by
- synthesizing provider-owned metadata from the `glm-4.7` template when the id
- matches the current GLM-5 family shape.
-- `tool_stream` is enabled by default for Z.AI tool-call streaming. Set
- `agents.defaults.models["zai/"].params.tool_stream` to `false` to disable it.
-- See [/providers/glm](/providers/glm) for the model family overview.
-- Z.AI uses Bearer auth with your API key.
+## Advanced configuration
+
+
+### GLM-5 forward resolution
+ Unknown `glm-5*` ids still forward-resolve on the bundled provider path by
+ synthesizing provider-owned metadata from the `glm-4.7` template when the id
+ matches the current GLM-5 family shape.
+
+
+### Tool-call streaming
+ `tool_stream` is enabled by default for Z.AI tool-call streaming. To disable it:
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ models: {
+ "zai/<model-id>": {
+ params: { tool_stream: false },
+ },
+ },
+ },
+ },
+ }
+ ```
+
+
+
+### Notes
+ - Z.AI uses Bearer auth with your API key.
+ - The `zai-api-key` onboarding choice auto-detects the matching Z.AI endpoint from the key prefix.
+ - Use the explicit regional choices (`zai-coding-global`, `zai-coding-cn`, `zai-global`, `zai-cn`) when you want to force a specific API surface.
+
+
+
+## Related
+
+- [GLM](/providers/glm): Model family overview for GLM.
+- Choosing providers, model refs, and failover behavior.
+