diff --git a/docs/providers/arcee.md b/docs/providers/arcee.md
index a085fa47d3d..a7fdee17582 100644
--- a/docs/providers/arcee.md
+++ b/docs/providers/arcee.md
@@ -12,58 +12,89 @@ read_when:
Arcee AI models can be accessed directly via the Arcee platform or through [OpenRouter](/providers/openrouter).
-- Provider: `arcee`
-- Auth: `ARCEEAI_API_KEY` (direct) or `OPENROUTER_API_KEY` (via OpenRouter)
-- API: OpenAI-compatible
-- Base URL: `https://api.arcee.ai/api/v1` (direct) or `https://openrouter.ai/api/v1` (OpenRouter)
+| Property | Value |
+| -------- | ------------------------------------------------------------------------------------- |
+| Provider | `arcee` |
+| Auth | `ARCEEAI_API_KEY` (direct) or `OPENROUTER_API_KEY` (via OpenRouter) |
+| API | OpenAI-compatible |
+| Base URL | `https://api.arcee.ai/api/v1` (direct) or `https://openrouter.ai/api/v1` (OpenRouter) |
-## Quick start
+## Getting started
-1. Get an API key from [Arcee AI](https://chat.arcee.ai/) or [OpenRouter](https://openrouter.ai/keys).
+
+
+
+
+ Create an API key at [Arcee AI](https://chat.arcee.ai/).
+
+
+ ```bash
+ openclaw onboard --auth-choice arceeai-api-key
+ ```
+
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ model: { primary: "arcee/trinity-large-thinking" },
+ },
+ },
+ }
+ ```
+
+
+
-2. Set the API key (recommended: store it for the Gateway):
+
+
+
+ Create an API key at [OpenRouter](https://openrouter.ai/keys).
+
+
+ ```bash
+ openclaw onboard --auth-choice arceeai-openrouter
+ ```
+
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ model: { primary: "arcee/trinity-large-thinking" },
+ },
+ },
+ }
+ ```
-```bash
-# Direct (Arcee platform)
-openclaw onboard --auth-choice arceeai-api-key
+ The same model refs work for both direct and OpenRouter setups (for example `arcee/trinity-large-thinking`).
+
+
-# Via OpenRouter
-openclaw onboard --auth-choice arceeai-openrouter
-```
+
+
-3. Set a default model:
+## Non-interactive setup
-```json5
-{
- agents: {
- defaults: {
- model: { primary: "arcee/trinity-large-thinking" },
- },
- },
-}
-```
+
+
+ ```bash
+ openclaw onboard --non-interactive \
+ --mode local \
+ --auth-choice arceeai-api-key \
+ --arceeai-api-key "$ARCEEAI_API_KEY"
+ ```
+
-## Non-interactive example
-
-```bash
-# Direct (Arcee platform)
-openclaw onboard --non-interactive \
- --mode local \
- --auth-choice arceeai-api-key \
- --arceeai-api-key "$ARCEEAI_API_KEY"
-
-# Via OpenRouter
-openclaw onboard --non-interactive \
- --mode local \
- --auth-choice arceeai-openrouter \
- --openrouter-api-key "$OPENROUTER_API_KEY"
-```
-
-## Environment note
-
-If the Gateway runs as a daemon (launchd/systemd), make sure `ARCEEAI_API_KEY`
-(or `OPENROUTER_API_KEY`) is available to that process (for example, in
-`~/.openclaw/.env` or via `env.shellEnv`).
+
+ ```bash
+ openclaw onboard --non-interactive \
+ --mode local \
+ --auth-choice arceeai-openrouter \
+ --openrouter-api-key "$OPENROUTER_API_KEY"
+ ```
+
+
## Built-in catalog
@@ -75,13 +106,41 @@ OpenClaw currently ships this bundled Arcee catalog:
| `arcee/trinity-large-preview` | Trinity Large Preview | text | 128K | $0.25 / $1.00 | General-purpose; 400B params, 13B active |
| `arcee/trinity-mini` | Trinity Mini 26B | text | 128K | $0.045 / $0.15 | Fast and cost-efficient; function calling |
-The same model refs work for both direct and OpenRouter setups (for example `arcee/trinity-large-thinking`).
-
+
The onboarding preset sets `arcee/trinity-large-thinking` as the default model.
+
## Supported features
-- Streaming
-- Tool use / function calling
-- Structured output (JSON mode and JSON schema)
-- Extended thinking (Trinity Large Thinking)
+| Feature | Supported |
+| --------------------------------------------- | ---------------------------- |
+| Streaming | Yes |
+| Tool use / function calling | Yes |
+| Structured output (JSON mode and JSON schema) | Yes |
+| Extended thinking | Yes (Trinity Large Thinking) |
+
+
+
+ If the Gateway runs as a daemon (launchd/systemd), make sure `ARCEEAI_API_KEY`
+ (or `OPENROUTER_API_KEY`) is available to that process (for example, in
+ `~/.openclaw/.env` or via `env.shellEnv`).
+
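+
+For the direct setup, a minimal `~/.openclaw/.env` looks like this (the key value is a placeholder):
+
+```bash
+# ~/.openclaw/.env (read by the Gateway daemon)
+ARCEEAI_API_KEY=your-arcee-key
+```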
+
+
+ When using Arcee models via OpenRouter, the same `arcee/*` model refs apply.
+ OpenClaw handles routing transparently based on your auth choice. See the
+ [OpenRouter provider docs](/providers/openrouter) for OpenRouter-specific
+ configuration details.
+
+
+
+## Related
+
+
+
+ Access Arcee models and many others through a single API key.
+
+
+ Choosing providers, model refs, and failover behavior.
+
+
diff --git a/docs/providers/github-copilot.md b/docs/providers/github-copilot.md
index 3b01947253f..f18942f904f 100644
--- a/docs/providers/github-copilot.md
+++ b/docs/providers/github-copilot.md
@@ -8,73 +8,107 @@ title: "GitHub Copilot"
# GitHub Copilot
-## What is GitHub Copilot?
-
GitHub Copilot is GitHub's AI coding assistant. It provides access to Copilot
models for your GitHub account and plan. OpenClaw can use Copilot as a model
provider in two different ways.
## Two ways to use Copilot in OpenClaw
-### 1) Built-in GitHub Copilot provider (`github-copilot`)
+
+
+ Use the native device-login flow to obtain a GitHub token, then exchange it for
+ Copilot API tokens when OpenClaw runs. This is the **default** and simplest path
+ because it does not require VS Code.
-Use the native device-login flow to obtain a GitHub token, then exchange it for
-Copilot API tokens when OpenClaw runs. This is the **default** and simplest path
-because it does not require VS Code.
+
+
+ ```bash
+ openclaw models auth login-github-copilot
+ ```
-### 2) Copilot Proxy plugin (`copilot-proxy`)
+ You will be prompted to visit a URL and enter a one-time code. Keep the
+ terminal open until it completes.
+
+
+ ```bash
+ openclaw models set github-copilot/gpt-4o
+ ```
-Use the **Copilot Proxy** VS Code extension as a local bridge. OpenClaw talks to
-the proxy’s `/v1` endpoint and uses the model list you configure there. Choose
-this when you already run Copilot Proxy in VS Code or need to route through it.
-You must enable the plugin and keep the VS Code extension running.
+ Or in config:
-Use GitHub Copilot as a model provider (`github-copilot`). The login command runs
-the GitHub device flow, saves an auth profile, and updates your config to use that
-profile.
+ ```json5
+ {
+ agents: { defaults: { model: { primary: "github-copilot/gpt-4o" } } },
+ }
+ ```
+
+
-## CLI setup
-
-```bash
-openclaw models auth login-github-copilot
-```
-
-You'll be prompted to visit a URL and enter a one-time code. Keep the terminal
-open until it completes.
-
-### Optional flags
+
+
+
+ Use the **Copilot Proxy** VS Code extension as a local bridge. OpenClaw talks to
+ the proxy's `/v1` endpoint and uses the model list you configure there.
+
+
+ Choose this when you already run Copilot Proxy in VS Code or need to route
+ through it. You must enable the plugin and keep the VS Code extension running.
+
+
+
+
+
+## Optional flags
+
+| Flag | Description |
+| --------------- | --------------------------------------------------- |
+| `--yes` | Skip the confirmation prompt |
+| `--set-default` | Also apply the provider's recommended default model |
```bash
+# Skip confirmation
openclaw models auth login-github-copilot --yes
-```
-To also apply the provider's recommended default model in one step, use the
-generic auth command instead:
-
-```bash
+# Login and set the default model in one step
openclaw models auth login --provider github-copilot --method device --set-default
```
-## Set a default model
+
+
+ The device-login flow requires an interactive TTY. Run it directly in a
+ terminal, not in a non-interactive script or CI pipeline.
+
-```bash
-openclaw models set github-copilot/gpt-4o
-```
+
+ Copilot model availability depends on your GitHub plan. If a model is
+ rejected, try another ID (for example `github-copilot/gpt-4.1`).
+
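+
+  For example, pinning that fallback ID uses the same config shape shown earlier on this page:
+
+  ```json5
+  {
+    agents: { defaults: { model: { primary: "github-copilot/gpt-4.1" } } },
+  }
+  ```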
-### Config snippet
+
+ Claude model IDs use the Anthropic Messages transport automatically. GPT,
+ o-series, and Gemini models keep the OpenAI Responses transport. OpenClaw
+ selects the correct transport based on the model ref.
+
-```json5
-{
- agents: { defaults: { model: { primary: "github-copilot/gpt-4o" } } },
-}
-```
+
+ The login stores a GitHub token in the auth profile store and exchanges it
+ for a Copilot API token when OpenClaw runs. You do not need to manage the
+ token manually.
+
+
-## Notes
-- Requires an interactive TTY; run it directly in a terminal.
-- Copilot model availability depends on your plan; if a model is rejected, try
- another ID (for example `github-copilot/gpt-4.1`).
-- Claude model IDs use the Anthropic Messages transport automatically; GPT, o-series,
- and Gemini models keep the OpenAI Responses transport.
-- The login stores a GitHub token in the auth profile store and exchanges it for a
- Copilot API token when OpenClaw runs.
+## Related
+
+
+
+ Choosing providers, model refs, and failover behavior.
+
+
+ Auth details and credential reuse rules.
+
+
diff --git a/docs/providers/kilocode.md b/docs/providers/kilocode.md
index 51d984062ea..378d9d3684e 100644
--- a/docs/providers/kilocode.md
+++ b/docs/providers/kilocode.md
@@ -11,25 +11,73 @@ read_when:
Kilo Gateway provides a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.
-## Getting an API key
+| Property | Value |
+| -------- | ---------------------------------- |
+| Provider | `kilocode` |
+| Auth | `KILOCODE_API_KEY` |
+| API | OpenAI-compatible |
+| Base URL | `https://api.kilo.ai/api/gateway/` |
-1. Go to [app.kilo.ai](https://app.kilo.ai)
-2. Sign in or create an account
-3. Navigate to API Keys and generate a new key
+## Getting started
-## CLI setup
+
+
+ Go to [app.kilo.ai](https://app.kilo.ai), sign in or create an account, then navigate to API Keys and generate a new key.
+
+
+ ```bash
+ openclaw onboard --auth-choice kilocode-api-key
+ ```
-```bash
-openclaw onboard --auth-choice kilocode-api-key
-```
+ Or set the environment variable directly:
-Or set the environment variable:
+ ```bash
+ export KILOCODE_API_KEY="" # pragma: allowlist secret
+ ```
-```bash
-export KILOCODE_API_KEY="" # pragma: allowlist secret
-```
+
+
+ ```bash
+ openclaw models list --provider kilocode
+ ```
+
+
-## Config snippet
+## Default model
+
+The default model is `kilocode/kilo/auto`, a provider-owned smart-routing
+model managed by Kilo Gateway.
+
+
+OpenClaw treats `kilocode/kilo/auto` as the stable default ref, but does not
+publish a source-backed task-to-upstream-model mapping for that route. Exact
+upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway, not
+hard-coded in OpenClaw.
+
+
+## Available models
+
+OpenClaw dynamically discovers available models from the Kilo Gateway at startup. Use
+`/models kilocode` to see the full list of models available with your account.
+
+Any model available on the gateway can be used with the `kilocode/` prefix:
+
+| Model ref | Notes |
+| -------------------------------------- | ---------------------------------- |
+| `kilocode/kilo/auto` | Default — smart routing |
+| `kilocode/anthropic/claude-sonnet-4` | Anthropic via Kilo |
+| `kilocode/openai/gpt-5.4` | OpenAI via Kilo |
+| `kilocode/google/gemini-3-pro-preview` | Google via Kilo |
+| ...and many more | Use `/models kilocode` to list all |
+
+
+At startup, OpenClaw queries `GET https://api.kilo.ai/api/gateway/models` and merges
+discovered models ahead of the static fallback catalog. The bundled fallback always
+includes `kilocode/kilo/auto` (`Kilo Auto`) with `input: ["text", "image"]`,
+`reasoning: true`, `contextWindow: 1000000`, and `maxTokens: 128000`.
+
+
+## Config example
```json5
{
@@ -42,48 +90,47 @@ export KILOCODE_API_KEY="" # pragma: allowlist secret
}
```
-## Default model
+
+
+ Kilo Gateway is documented in source as OpenRouter-compatible, so it stays on
+ the proxy-style OpenAI-compatible path rather than native OpenAI request shaping.
-The default model is `kilocode/kilo/auto`, a provider-owned smart-routing
-model managed by Kilo Gateway.
+ - Gemini-backed Kilo refs stay on the proxy-Gemini path, so OpenClaw keeps
+ Gemini thought-signature sanitation there without enabling native Gemini
+ replay validation or bootstrap rewrites.
+ - Kilo Gateway uses a Bearer token with your API key under the hood.
-OpenClaw treats `kilocode/kilo/auto` as the stable default ref, but does not
-publish a source-backed task-to-upstream-model mapping for that route.
+
-## Available models
+
+ Kilo's shared stream wrapper adds the provider app header and normalizes
+ proxy reasoning payloads for supported concrete model refs.
-OpenClaw dynamically discovers available models from the Kilo Gateway at startup. Use
-`/models kilocode` to see the full list of models available with your account.
+
+  `kilocode/kilo/auto` and other model refs without proxy reasoning support skip
+  reasoning injection. If you need reasoning support, use a concrete model ref
+  such as `kilocode/anthropic/claude-sonnet-4`.
+
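+
+If you need reasoning, you can pin a concrete model ref as the default. This is a sketch reusing the `agents.defaults.model` shape from the other provider pages in these docs:
+
+```json5
+{
+  agents: {
+    defaults: {
+      model: { primary: "kilocode/anthropic/claude-sonnet-4" },
+    },
+  },
+}
+```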
-Any model available on the gateway can be used with the `kilocode/` prefix:
+
-```
-kilocode/kilo/auto (default - smart routing)
-kilocode/anthropic/claude-sonnet-4
-kilocode/openai/gpt-5.4
-kilocode/google/gemini-3-pro-preview
-...and many more
-```
+
+ - If model discovery fails at startup, OpenClaw falls back to the bundled static catalog containing `kilocode/kilo/auto`.
+ - Confirm your API key is valid and that your Kilo account has the desired models enabled.
+ - When the Gateway runs as a daemon, ensure `KILOCODE_API_KEY` is available to that process (for example in `~/.openclaw/.env` or via `env.shellEnv`).
+
+
-## Notes
+## Related
-- Model refs are `kilocode/` (e.g., `kilocode/anthropic/claude-sonnet-4`).
-- Default model: `kilocode/kilo/auto`
-- Base URL: `https://api.kilo.ai/api/gateway/`
-- Bundled fallback catalog always includes `kilocode/kilo/auto` (`Kilo Auto`) with
- `input: ["text", "image"]`, `reasoning: true`, `contextWindow: 1000000`,
- and `maxTokens: 128000`
-- At startup, OpenClaw tries `GET https://api.kilo.ai/api/gateway/models` and
- merges discovered models ahead of the static fallback catalog
-- Exact upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway,
- not hard-coded in OpenClaw
-- Kilo Gateway is documented in source as OpenRouter-compatible, so it stays on
- the proxy-style OpenAI-compatible path rather than native OpenAI request shaping
-- Gemini-backed Kilo refs stay on the proxy-Gemini path, so OpenClaw keeps
- Gemini thought-signature sanitation there without enabling native Gemini
- replay validation or bootstrap rewrites.
-- Kilo's shared stream wrapper adds the provider app header and normalizes
- proxy reasoning payloads for supported concrete model refs. `kilocode/kilo/auto`
- and other proxy-reasoning-unsupported hints skip that reasoning injection.
-- For more model/provider options, see [/concepts/model-providers](/concepts/model-providers).
-- Kilo Gateway uses a Bearer token with your API key under the hood.
+
+
+ Choosing providers, model refs, and failover behavior.
+
+
+ Full OpenClaw configuration reference.
+
+
+ Kilo Gateway dashboard, API keys, and account management.
+
+
diff --git a/docs/providers/qianfan.md b/docs/providers/qianfan.md
index b87d949299e..bf00d7fc668 100644
--- a/docs/providers/qianfan.md
+++ b/docs/providers/qianfan.md
@@ -6,31 +6,51 @@ read_when:
title: "Qianfan"
---
-# Qianfan Provider Guide
+# Qianfan
-Qianfan is Baidu's MaaS platform, provides a **unified API** that routes requests to many models behind a single
+Qianfan is Baidu's MaaS platform, providing a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.
-## Prerequisites
+| Property | Value |
+| -------- | --------------------------------- |
+| Provider | `qianfan` |
+| Auth | `QIANFAN_API_KEY` |
+| API | OpenAI-compatible |
+| Base URL | `https://qianfan.baidubce.com/v2` |
-1. A Baidu Cloud account with Qianfan API access
-2. An API key from the Qianfan console
-3. OpenClaw installed on your system
+## Getting started
-## Getting Your API Key
+
+
+ Sign up or log in at the [Qianfan Console](https://console.bce.baidu.com/qianfan/ais/console/apiKey) and ensure you have Qianfan API access enabled.
+
+
+ Create a new application or select an existing one, then generate an API key. The key format is `bce-v3/ALTAK-...`.
+
+
+ ```bash
+ openclaw onboard --auth-choice qianfan-api-key
+ ```
+
+
+ ```bash
+ openclaw models list --provider qianfan
+ ```
+
+
-1. Visit the [Qianfan Console](https://console.bce.baidu.com/qianfan/ais/console/apiKey)
-2. Create a new application or select an existing one
-3. Generate an API key (format: `bce-v3/ALTAK-...`)
-4. Copy the API key for use with OpenClaw
+## Available models
-## CLI setup
+| Model ref | Input | Context | Max output | Reasoning | Notes |
+| ------------------------------------ | ----------- | ------- | ---------- | --------- | ------------- |
+| `qianfan/deepseek-v3.2` | text | 98,304 | 32,768 | Yes | Default model |
+| `qianfan/ernie-5.0-thinking-preview` | text, image | 119,000 | 64,000 | Yes | Multimodal |
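+
+To use the multimodal model instead of the default, set it as the primary model. This sketch reuses the `agents.defaults.model` shape from the other provider pages in these docs:
+
+```json5
+{
+  agents: {
+    defaults: {
+      model: { primary: "qianfan/ernie-5.0-thinking-preview" },
+    },
+  },
+}
+```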
-```bash
-openclaw onboard --auth-choice qianfan-api-key
-```
+
+The default bundled model ref is `qianfan/deepseek-v3.2`.
+
-## Config snippet
+## Config example
```json5
{
@@ -74,17 +94,40 @@ openclaw onboard --auth-choice qianfan-api-key
}
```
-## Notes
+
+
+ Qianfan runs through the OpenAI-compatible transport path, not native OpenAI request shaping. This means standard OpenAI SDK features work, but provider-specific parameters may not be forwarded.
+
-- Default bundled model ref: `qianfan/deepseek-v3.2`
-- Default base URL: `https://qianfan.baidubce.com/v2`
-- Bundled catalog currently includes `deepseek-v3.2` and `ernie-5.0-thinking-preview`
-- Add or override `models.providers.qianfan` only when you need custom base URL or model metadata
-- Qianfan runs through the OpenAI-compatible transport path, not native OpenAI request shaping
+
+ The bundled catalog currently includes `deepseek-v3.2` and `ernie-5.0-thinking-preview`. Add or override `models.providers.qianfan` only when you need a custom base URL or model metadata.
-## Related Documentation
+
+ Model refs use the `qianfan/` prefix (for example `qianfan/deepseek-v3.2`).
+
-- [OpenClaw Configuration](/gateway/configuration)
-- [Model Providers](/concepts/model-providers)
-- [Agent Setup](/concepts/agent)
-- [Qianfan API Documentation](https://cloud.baidu.com/doc/qianfan-api/s/3m7of64lb)
+
+
+
+ - Ensure your API key starts with `bce-v3/ALTAK-` and has Qianfan API access enabled in the Baidu Cloud console.
+ - If models are not listed, confirm your account has the Qianfan service activated.
+ - The default base URL is `https://qianfan.baidubce.com/v2`. Only change it if you use a custom endpoint or proxy.
+
+
+
+## Related
+
+
+
+ Choosing providers, model refs, and failover behavior.
+
+
+ Full OpenClaw configuration reference.
+
+
+ Configuring agent defaults and model assignments.
+
+
+ Official Qianfan API documentation.
+
+
diff --git a/docs/providers/xiaomi.md b/docs/providers/xiaomi.md
index 0d2c8c8b724..3c098728bef 100644
--- a/docs/providers/xiaomi.md
+++ b/docs/providers/xiaomi.md
@@ -9,31 +9,53 @@ title: "Xiaomi MiMo"
# Xiaomi MiMo
Xiaomi MiMo is the API platform for **MiMo** models. OpenClaw uses the Xiaomi
-OpenAI-compatible endpoint with API-key authentication. Create your API key in the
-[Xiaomi MiMo console](https://platform.xiaomimimo.com/#/console/api-keys), then configure the
-bundled `xiaomi` provider with that key.
+OpenAI-compatible endpoint with API-key authentication.
-## Built-in catalog
+| Property | Value |
+| -------- | ------------------------------- |
+| Provider | `xiaomi` |
+| Auth | `XIAOMI_API_KEY` |
+| API | OpenAI-compatible |
+| Base URL | `https://api.xiaomimimo.com/v1` |
-- Base URL: `https://api.xiaomimimo.com/v1`
-- API: `openai-completions`
-- Authorization: `Bearer $XIAOMI_API_KEY`
+## Getting started
-| Model ref | Input | Context | Max output | Notes |
-| ---------------------- | ----------- | --------- | ---------- | ---------------------------- |
-| `xiaomi/mimo-v2-flash` | text | 262,144 | 8,192 | Default model |
-| `xiaomi/mimo-v2-pro` | text | 1,048,576 | 32,000 | Reasoning-enabled |
-| `xiaomi/mimo-v2-omni` | text, image | 262,144 | 32,000 | Reasoning-enabled multimodal |
+
+
+ Create an API key in the [Xiaomi MiMo console](https://platform.xiaomimimo.com/#/console/api-keys).
+
+
+ ```bash
+ openclaw onboard --auth-choice xiaomi-api-key
+ ```
-## CLI setup
+ Or pass the key directly:
-```bash
-openclaw onboard --auth-choice xiaomi-api-key
-# or non-interactive
-openclaw onboard --auth-choice xiaomi-api-key --xiaomi-api-key "$XIAOMI_API_KEY"
-```
+ ```bash
+ openclaw onboard --auth-choice xiaomi-api-key --xiaomi-api-key "$XIAOMI_API_KEY"
+ ```
-## Config snippet
+
+
+ ```bash
+ openclaw models list --provider xiaomi
+ ```
+
+
+
+## Available models
+
+| Model ref | Input | Context | Max output | Reasoning | Notes |
+| ---------------------- | ----------- | --------- | ---------- | --------- | ------------- |
+| `xiaomi/mimo-v2-flash` | text | 262,144 | 8,192 | No | Default model |
+| `xiaomi/mimo-v2-pro` | text | 1,048,576 | 32,000 | Yes | Large context |
+| `xiaomi/mimo-v2-omni` | text, image | 262,144 | 32,000 | Yes | Multimodal |
+
+
+The default model ref is `xiaomi/mimo-v2-flash`.
+
+
+## Config example
```json5
{
@@ -81,9 +103,43 @@ openclaw onboard --auth-choice xiaomi-api-key --xiaomi-api-key "$XIAOMI_API_KEY"
}
```
-## Notes
+
+
+ The `xiaomi` provider is injected automatically when `XIAOMI_API_KEY` is set in your environment or an auth profile exists. You do not need to manually configure the provider unless you want to override model metadata or the base URL.
+
-- Default model ref: `xiaomi/mimo-v2-flash`.
-- Additional built-in models: `xiaomi/mimo-v2-pro`, `xiaomi/mimo-v2-omni`.
-- The provider is injected automatically when `XIAOMI_API_KEY` is set (or an auth profile exists).
-- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
+
+ - **mimo-v2-flash** — lightweight and fast, ideal for general-purpose text tasks. No reasoning support.
+ - **mimo-v2-pro** — supports reasoning with a 1M token context window for long-document workloads.
+ - **mimo-v2-omni** — reasoning-enabled multimodal model that accepts both text and image inputs.
+
+
+ All models use the `xiaomi/` prefix (for example `xiaomi/mimo-v2-pro`).
+
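+
+For long-document workloads, for example, you can pin the 1M-context model as the default. This sketch reuses the `agents.defaults.model` shape from the other provider pages in these docs:
+
+```json5
+{
+  agents: {
+    defaults: {
+      model: { primary: "xiaomi/mimo-v2-pro" },
+    },
+  },
+}
+```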
+
+
+
+
+ - If models do not appear, confirm `XIAOMI_API_KEY` is set and valid.
+ - When the Gateway runs as a daemon, ensure the key is available to that process (for example in `~/.openclaw/.env` or via `env.shellEnv`).
+
+
+  Keys set only in your interactive shell are not visible to daemon-managed Gateway processes. Use `~/.openclaw/.env` or the `env.shellEnv` config for persistent availability.
+
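+
+  For example, a minimal `~/.openclaw/.env` (the key value is a placeholder):
+
+  ```bash
+  # ~/.openclaw/.env (read by the Gateway daemon)
+  XIAOMI_API_KEY=your-mimo-key
+  ```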
+
+
+
+
+## Related
+
+
+
+ Choosing providers, model refs, and failover behavior.
+
+
+ Full OpenClaw configuration reference.
+
+
+ Xiaomi MiMo dashboard and API key management.
+
+