docs(providers): improve qianfan, xiaomi, kilocode, arcee, github-copilot with Mintlify components

This commit is contained in:
Vincent Koc
2026-04-12 11:28:11 +01:00
parent 362e48d876
commit 4d3ce427ad
5 changed files with 437 additions and 198 deletions

View File

@@ -12,58 +12,89 @@ read_when:
Arcee AI models can be accessed directly via the Arcee platform or through [OpenRouter](/providers/openrouter).
| Property | Value |
| -------- | ------------------------------------------------------------------------------------- |
| Provider | `arcee` |
| Auth | `ARCEEAI_API_KEY` (direct) or `OPENROUTER_API_KEY` (via OpenRouter) |
| API | OpenAI-compatible |
| Base URL | `https://api.arcee.ai/api/v1` (direct) or `https://openrouter.ai/api/v1` (OpenRouter) |
## Getting started
<Tabs>
<Tab title="Direct (Arcee platform)">
<Steps>
<Step title="Get an API key">
Create an API key at [Arcee AI](https://chat.arcee.ai/).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice arceeai-api-key
```
</Step>
<Step title="Set a default model">
```json5
{
agents: {
defaults: {
model: { primary: "arcee/trinity-large-thinking" },
},
},
}
```
</Step>
</Steps>
</Tab>
<Tab title="Via OpenRouter">
<Steps>
<Step title="Get an API key">
Create an API key at [OpenRouter](https://openrouter.ai/keys).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice arceeai-openrouter
```
</Step>
<Step title="Set a default model">
```json5
{
agents: {
defaults: {
model: { primary: "arcee/trinity-large-thinking" },
},
},
}
```
</Step>
</Steps>
</Tab>
</Tabs>
## Non-interactive setup
<Tabs>
<Tab title="Direct (Arcee platform)">
```bash
openclaw onboard --non-interactive \
--mode local \
--auth-choice arceeai-api-key \
--arceeai-api-key "$ARCEEAI_API_KEY"
```
</Tab>
<Tab title="Via OpenRouter">
```bash
openclaw onboard --non-interactive \
--mode local \
--auth-choice arceeai-openrouter \
--openrouter-api-key "$OPENROUTER_API_KEY"
```
</Tab>
</Tabs>
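If the Gateway later runs as a daemon (launchd/systemd), it will not inherit your interactive shell environment. A minimal sketch of persisting the key in the `~/.openclaw/.env` file the Gateway reads, assuming `ARCEEAI_API_KEY` is already exported in your shell:

```shell
# Persist the key for daemon-managed Gateway processes, which do not
# inherit your interactive shell environment. "replace-me" is a placeholder.
mkdir -p ~/.openclaw
printf 'ARCEEAI_API_KEY=%s\n' "${ARCEEAI_API_KEY:-replace-me}" >> ~/.openclaw/.env
```

The same pattern works for `OPENROUTER_API_KEY` when you route through OpenRouter.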
## Built-in catalog
@@ -75,13 +106,41 @@ OpenClaw currently ships this bundled Arcee catalog:
| `arcee/trinity-large-preview` | Trinity Large Preview | text | 128K | $0.25 / $1.00 | General-purpose; 400B params, 13B active |
| `arcee/trinity-mini` | Trinity Mini 26B | text | 128K | $0.045 / $0.15 | Fast and cost-efficient; function calling |
The same model refs work for both direct and OpenRouter setups (for example `arcee/trinity-large-thinking`).
<Tip>
The onboarding preset sets `arcee/trinity-large-thinking` as the default model.
</Tip>
## Supported features
| Feature | Supported |
| --------------------------------------------- | ---------------------------- |
| Streaming | Yes |
| Tool use / function calling | Yes |
| Structured output (JSON mode and JSON schema) | Yes |
| Extended thinking | Yes (Trinity Large Thinking) |
<AccordionGroup>
<Accordion title="Environment note">
If the Gateway runs as a daemon (launchd/systemd), make sure `ARCEEAI_API_KEY`
(or `OPENROUTER_API_KEY`) is available to that process (for example, in
`~/.openclaw/.env` or via `env.shellEnv`).
</Accordion>
<Accordion title="OpenRouter routing">
When using Arcee models via OpenRouter, the same `arcee/*` model refs apply.
OpenClaw handles routing transparently based on your auth choice. See the
[OpenRouter provider docs](/providers/openrouter) for OpenRouter-specific
configuration details.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="OpenRouter" href="/providers/openrouter" icon="shuffle">
Access Arcee models and many others through a single API key.
</Card>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
</CardGroup>

View File

@@ -8,73 +8,107 @@ title: "GitHub Copilot"
# GitHub Copilot
## What is GitHub Copilot?
GitHub Copilot is GitHub's AI coding assistant. It provides access to Copilot
models for your GitHub account and plan. OpenClaw can use Copilot as a model
provider in two different ways.
## Two ways to use Copilot in OpenClaw
<Tabs>
<Tab title="Built-in provider (github-copilot)">
Use the native device-login flow to obtain a GitHub token, then exchange it for
Copilot API tokens when OpenClaw runs. This is the **default** and simplest path
because it does not require VS Code.
<Steps>
<Step title="Run the login command">
```bash
openclaw models auth login-github-copilot
```
You will be prompted to visit a URL and enter a one-time code. Keep the
terminal open until it completes.
</Step>
<Step title="Set a default model">
```bash
openclaw models set github-copilot/gpt-4o
```
Or in config:
```json5
{
agents: { defaults: { model: { primary: "github-copilot/gpt-4o" } } },
}
```
</Step>
</Steps>
</Tab>
<Tab title="Copilot Proxy plugin (copilot-proxy)">
Use the **Copilot Proxy** VS Code extension as a local bridge. OpenClaw talks to
the proxy's `/v1` endpoint and uses the model list you configure there.
<Note>
Choose this when you already run Copilot Proxy in VS Code or need to route
through it. You must enable the plugin and keep the VS Code extension running.
</Note>
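As a rough, hypothetical sketch of what enabling the bridge might look like in config — the `plugins` key names and the local port here are assumptions, not documented values; check the plugin docs for the real schema:

```json5
{
  plugins: {
    // Hypothetical keys — verify against the Copilot Proxy plugin docs.
    "copilot-proxy": {
      enabled: true,
      // The VS Code extension's local /v1 endpoint; port is an assumption.
      baseUrl: "http://127.0.0.1:3000/v1",
    },
  },
}
```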
</Tab>
</Tabs>
## Optional flags
| Flag | Description |
| --------------- | --------------------------------------------------- |
| `--yes` | Skip the confirmation prompt |
| `--set-default` | Also apply the provider's recommended default model |
```bash
# Skip confirmation
openclaw models auth login-github-copilot --yes
```
To also apply the provider's recommended default model in one step, use the
generic auth command instead:
```bash
# Login and set the default model in one step
openclaw models auth login --provider github-copilot --method device --set-default
```
<AccordionGroup>
<Accordion title="Interactive TTY required">
The device-login flow requires an interactive TTY. Run it directly in a
terminal, not in a non-interactive script or CI pipeline.
</Accordion>
<Accordion title="Model availability depends on your plan">
Copilot model availability depends on your GitHub plan. If a model is
rejected, try another ID (for example `github-copilot/gpt-4.1`).
</Accordion>
<Accordion title="Transport selection">
Claude model IDs use the Anthropic Messages transport automatically. GPT,
o-series, and Gemini models keep the OpenAI Responses transport. OpenClaw
selects the correct transport based on the model ref.
</Accordion>
<Accordion title="Token storage">
The login stores a GitHub token in the auth profile store and exchanges it
for a Copilot API token when OpenClaw runs. You do not need to manage the
token manually.
</Accordion>
</AccordionGroup>
<Warning>
Requires an interactive TTY. Run the login command directly in a terminal, not
inside a headless script or CI job.
</Warning>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="OAuth and auth" href="/gateway/authentication" icon="key">
Auth details and credential reuse rules.
</Card>
</CardGroup>

View File

@@ -11,25 +11,73 @@ read_when:
Kilo Gateway provides a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.
| Property | Value |
| -------- | ---------------------------------- |
| Provider | `kilocode` |
| Auth | `KILOCODE_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://api.kilo.ai/api/gateway/` |
## Getting started
<Steps>
<Step title="Create an account">
Go to [app.kilo.ai](https://app.kilo.ai), sign in or create an account, then navigate to API Keys and generate a new key.
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice kilocode-api-key
```
Or set the environment variable:
```bash
export KILOCODE_API_KEY="<your-kilocode-api-key>" # pragma: allowlist secret
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider kilocode
```
</Step>
</Steps>
## Default model
The default model is `kilocode/kilo/auto`, a provider-owned smart-routing
model managed by Kilo Gateway.
<Note>
OpenClaw treats `kilocode/kilo/auto` as the stable default ref, but does not
publish a source-backed task-to-upstream-model mapping for that route. Exact
upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway, not
hard-coded in OpenClaw.
</Note>
## Available models
OpenClaw dynamically discovers available models from the Kilo Gateway at startup. Use
`/models kilocode` to see the full list of models available with your account.
Any model available on the gateway can be used with the `kilocode/` prefix:
| Model ref | Notes |
| -------------------------------------- | ---------------------------------- |
| `kilocode/kilo/auto` | Default — smart routing |
| `kilocode/anthropic/claude-sonnet-4` | Anthropic via Kilo |
| `kilocode/openai/gpt-5.4` | OpenAI via Kilo |
| `kilocode/google/gemini-3-pro-preview` | Google via Kilo |
| ...and many more | Use `/models kilocode` to list all |
<Tip>
At startup, OpenClaw queries `GET https://api.kilo.ai/api/gateway/models` and merges
discovered models ahead of the static fallback catalog. The bundled fallback always
includes `kilocode/kilo/auto` (`Kilo Auto`) with `input: ["text", "image"]`,
`reasoning: true`, `contextWindow: 1000000`, and `maxTokens: 128000`.
</Tip>
## Config example
```json5
{
@@ -42,48 +90,47 @@ export KILOCODE_API_KEY="<your-kilocode-api-key>" # pragma: allowlist secret
}
```
<AccordionGroup>
<Accordion title="Transport and compatibility">
Kilo Gateway is documented in source as OpenRouter-compatible, so it stays on
the proxy-style OpenAI-compatible path rather than native OpenAI request shaping.
- Gemini-backed Kilo refs stay on the proxy-Gemini path, so OpenClaw keeps
Gemini thought-signature sanitation there without enabling native Gemini
replay validation or bootstrap rewrites.
- Kilo Gateway uses a Bearer token with your API key under the hood.
</Accordion>
<Accordion title="Stream wrapper and reasoning">
Kilo's shared stream wrapper adds the provider app header and normalizes
proxy reasoning payloads for supported concrete model refs.
<Warning>
`kilocode/kilo/auto` and other proxy-reasoning-unsupported hints skip reasoning
injection. If you need reasoning support, use a concrete model ref such as
`kilocode/anthropic/claude-sonnet-4`.
</Warning>
</Accordion>
<Accordion title="Troubleshooting">
- If model discovery fails at startup, OpenClaw falls back to the bundled static catalog containing `kilocode/kilo/auto`.
- Confirm your API key is valid and that your Kilo account has the desired models enabled.
- When the Gateway runs as a daemon, ensure `KILOCODE_API_KEY` is available to that process (for example in `~/.openclaw/.env` or via `env.shellEnv`).
</Accordion>
</AccordionGroup>
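Following the warning above, a config sketch that pins a concrete model ref when you need proxy reasoning support (any concrete ref from `/models kilocode` works; the agent-defaults shape matches the other provider guides):

```json5
{
  agents: {
    defaults: {
      // A concrete ref instead of kilocode/kilo/auto, which skips
      // reasoning injection.
      model: { primary: "kilocode/anthropic/claude-sonnet-4" },
    },
  },
}
```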
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration" icon="gear">
Full OpenClaw configuration reference.
</Card>
<Card title="Kilo Gateway" href="https://app.kilo.ai" icon="arrow-up-right-from-square">
Kilo Gateway dashboard, API keys, and account management.
</Card>
</CardGroup>

View File

@@ -6,31 +6,51 @@ read_when:
title: "Qianfan"
---
# Qianfan
Qianfan is Baidu's Model-as-a-Service (MaaS) platform, providing a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.
| Property | Value |
| -------- | --------------------------------- |
| Provider | `qianfan` |
| Auth | `QIANFAN_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://qianfan.baidubce.com/v2` |
## Getting started
<Steps>
<Step title="Create a Baidu Cloud account">
Sign up or log in at the [Qianfan Console](https://console.bce.baidu.com/qianfan/ais/console/apiKey) and ensure you have Qianfan API access enabled.
</Step>
<Step title="Generate an API key">
Create a new application or select an existing one, then generate an API key. The key format is `bce-v3/ALTAK-...`.
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice qianfan-api-key
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider qianfan
```
</Step>
</Steps>
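Before onboarding, it can help to sanity-check the key format mentioned above. A small sketch; the placeholder value is illustrative only, not a real key:

```shell
# Qianfan keys documented above start with "bce-v3/ALTAK-".
QIANFAN_API_KEY="${QIANFAN_API_KEY:-bce-v3/ALTAK-example}"  # placeholder
case "$QIANFAN_API_KEY" in
  bce-v3/ALTAK-*) KEY_FORMAT=ok ;;
  *)              KEY_FORMAT=unexpected ;;
esac
echo "key format: $KEY_FORMAT"
```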
## Available models
| Model ref | Input | Context | Max output | Reasoning | Notes |
| ------------------------------------ | ----------- | ------- | ---------- | --------- | ------------- |
| `qianfan/deepseek-v3.2` | text | 98,304 | 32,768 | Yes | Default model |
| `qianfan/ernie-5.0-thinking-preview` | text, image | 119,000 | 64,000 | Yes | Multimodal |
<Tip>
The default bundled model ref is `qianfan/deepseek-v3.2`. You only need to override `models.providers.qianfan` when you need a custom base URL or model metadata.
</Tip>
## Config example
```json5
{
@@ -74,17 +94,40 @@ openclaw onboard --auth-choice qianfan-api-key
}
```
<AccordionGroup>
<Accordion title="Transport and compatibility">
Qianfan runs through the OpenAI-compatible transport path, not native OpenAI request shaping. This means standard OpenAI SDK features work, but provider-specific parameters may not be forwarded.
</Accordion>
<Accordion title="Catalog and overrides">
The bundled catalog currently includes `deepseek-v3.2` and `ernie-5.0-thinking-preview`. Add or override `models.providers.qianfan` only when you need a custom base URL or model metadata.
<Note>
Model refs use the `qianfan/` prefix (for example `qianfan/deepseek-v3.2`).
</Note>
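A minimal override sketch for the custom-endpoint case — the `baseUrl` key name is an assumption here; check the configuration reference for the exact provider override schema:

```json5
{
  models: {
    providers: {
      qianfan: {
        // Assumed key name; the value shown is the documented default endpoint.
        baseUrl: "https://qianfan.baidubce.com/v2",
      },
    },
  },
}
```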
</Accordion>
<Accordion title="Troubleshooting">
- Ensure your API key starts with `bce-v3/ALTAK-` and has Qianfan API access enabled in the Baidu Cloud console.
- If models are not listed, confirm your account has the Qianfan service activated.
- The default base URL is `https://qianfan.baidubce.com/v2`. Only change it if you use a custom endpoint or proxy.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration" icon="gear">
Full OpenClaw configuration reference.
</Card>
<Card title="Agent setup" href="/concepts/agent" icon="robot">
Configuring agent defaults and model assignments.
</Card>
<Card title="Qianfan API docs" href="https://cloud.baidu.com/doc/qianfan-api/s/3m7of64lb" icon="arrow-up-right-from-square">
Official Qianfan API documentation.
</Card>
</CardGroup>

View File

@@ -9,31 +9,53 @@ title: "Xiaomi MiMo"
# Xiaomi MiMo
Xiaomi MiMo is the API platform for **MiMo** models. OpenClaw uses the Xiaomi
OpenAI-compatible endpoint with API-key authentication.
| Property | Value |
| -------- | ------------------------------- |
| Provider | `xiaomi` |
| Auth | `XIAOMI_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://api.xiaomimimo.com/v1` |
## Getting started
<Steps>
<Step title="Get an API key">
Create an API key in the [Xiaomi MiMo console](https://platform.xiaomimimo.com/#/console/api-keys).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice xiaomi-api-key
```
Or pass the key directly:
```bash
openclaw onboard --auth-choice xiaomi-api-key --xiaomi-api-key "$XIAOMI_API_KEY"
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider xiaomi
```
</Step>
</Steps>
## Available models
| Model ref | Input | Context | Max output | Reasoning | Notes |
| ---------------------- | ----------- | --------- | ---------- | --------- | ------------- |
| `xiaomi/mimo-v2-flash` | text | 262,144 | 8,192 | No | Default model |
| `xiaomi/mimo-v2-pro` | text | 1,048,576 | 32,000 | Yes | Large context |
| `xiaomi/mimo-v2-omni` | text, image | 262,144 | 32,000 | Yes | Multimodal |
<Tip>
The default model ref is `xiaomi/mimo-v2-flash`. The provider is injected automatically when `XIAOMI_API_KEY` is set or an auth profile exists.
</Tip>
## Config example
```json5
{
@@ -81,9 +103,43 @@ openclaw onboard --auth-choice xiaomi-api-key --xiaomi-api-key "$XIAOMI_API_KEY"
}
```
<AccordionGroup>
<Accordion title="Auto-injection behavior">
The `xiaomi` provider is injected automatically when `XIAOMI_API_KEY` is set in your environment or an auth profile exists. You do not need to manually configure the provider unless you want to override model metadata or the base URL.
</Accordion>
<Accordion title="Model details">
- **mimo-v2-flash** — lightweight and fast, ideal for general-purpose text tasks. No reasoning support.
- **mimo-v2-pro** — supports reasoning with a 1M token context window for long-document workloads.
- **mimo-v2-omni** — reasoning-enabled multimodal model that accepts both text and image inputs.
<Note>
All models use the `xiaomi/` prefix (for example `xiaomi/mimo-v2-pro`).
</Note>
</Accordion>
<Accordion title="Troubleshooting">
- If models do not appear, confirm `XIAOMI_API_KEY` is set and valid.
- When the Gateway runs as a daemon, ensure the key is available to that process (for example in `~/.openclaw/.env` or via `env.shellEnv`).
<Warning>
Keys set only in your interactive shell are not visible to daemon-managed gateway processes. Use `~/.openclaw/.env` or `env.shellEnv` config for persistent availability.
</Warning>
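One way to do that, sketched under the assumption that `XIAOMI_API_KEY` is already exported in your interactive shell:

```shell
# Append the key to the env file the Gateway reads, so daemon-managed
# processes (launchd/systemd) can see it. "replace-me" is a placeholder.
mkdir -p ~/.openclaw
printf 'XIAOMI_API_KEY=%s\n' "${XIAOMI_API_KEY:-replace-me}" >> ~/.openclaw/.env
```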
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration" icon="gear">
Full OpenClaw configuration reference.
</Card>
<Card title="Xiaomi MiMo console" href="https://platform.xiaomimimo.com" icon="arrow-up-right-from-square">
Xiaomi MiMo dashboard and API key management.
</Card>
</CardGroup>