docs(providers): improve mistral, zai, alibaba, cloudflare-ai-gateway, fireworks with Mintlify components

This commit is contained in:
Vincent Koc
2026-04-12 11:31:43 +01:00
parent 4d3ce427ad
commit 0d9eca0e1a
5 changed files with 427 additions and 167 deletions

View File

@@ -16,57 +16,101 @@ Alibaba Model Studio / DashScope.
- Also accepted: `DASHSCOPE_API_KEY`, `QWEN_API_KEY`
- API: DashScope / Model Studio async video generation
## Getting started
<Steps>
<Step title="Set an API key">
```bash
openclaw onboard --auth-choice qwen-standard-api-key
```
</Step>
<Step title="Set a default video model">
```json5
{
agents: {
defaults: {
videoGenerationModel: {
primary: "alibaba/wan2.6-t2v",
},
},
},
}
```
</Step>
<Step title="Verify the provider is available">
```bash
openclaw models list --provider alibaba
```
</Step>
</Steps>
<Note>
Any of the accepted auth keys (`MODELSTUDIO_API_KEY`, `DASHSCOPE_API_KEY`, `QWEN_API_KEY`) will work. The `qwen-standard-api-key` onboarding choice configures the shared DashScope credential.
</Note>
## Built-in Wan models
The bundled `alibaba` provider currently registers:
| Model ref | Mode |
| -------------------------- | ------------------------- |
| `alibaba/wan2.6-t2v` | Text-to-video |
| `alibaba/wan2.6-i2v` | Image-to-video |
| `alibaba/wan2.6-r2v` | Reference-to-video |
| `alibaba/wan2.6-r2v-flash` | Reference-to-video (fast) |
| `alibaba/wan2.7-r2v` | Reference-to-video |
## Current limits
| Parameter | Limit |
| --------------------- | --------------------------------------------------------- |
| Output videos | Up to **1** per request |
| Input images | Up to **1** |
| Input videos | Up to **4** |
| Duration | Up to **10 seconds** |
| Supported controls | `size`, `aspectRatio`, `resolution`, `audio`, `watermark` |
| Reference image/video | Remote `http(s)` URLs only |
<Warning>
Reference image/video mode currently requires **remote http(s) URLs**. Local file paths are not supported for reference inputs.
</Warning>
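As a sketch of how these limits and controls might appear in practice, the following config pins an image-to-video model and passes a few of the supported controls. The `params` placement and key names are assumptions for illustration; check the video generation tool docs for the exact schema.

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "alibaba/wan2.6-i2v",
        // Hypothetical placement: key names mirror the supported controls above
        params: { resolution: "1080p", aspectRatio: "16:9", audio: true, watermark: false },
      },
    },
  },
}
```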
## Advanced configuration
<AccordionGroup>
<Accordion title="Relationship to Qwen">
The bundled `qwen` provider also uses Alibaba-hosted DashScope endpoints for
Wan video generation. Use:
- `qwen/...` when you want the canonical Qwen provider surface
- `alibaba/...` when you want the direct vendor-owned Wan video surface
See the [Qwen provider docs](/providers/qwen) for more detail.
</Accordion>
<Accordion title="Auth key priority">
OpenClaw checks for auth keys in this order:
1. `MODELSTUDIO_API_KEY` (preferred)
2. `DASHSCOPE_API_KEY`
3. `QWEN_API_KEY`
Any of these will authenticate the `alibaba` provider.
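The lookup order above can be sketched in shell (illustration only; OpenClaw performs this check internally):

```shell
# Illustrative sketch of the documented key lookup order
pick_alibaba_key() {
  if [ -n "${MODELSTUDIO_API_KEY:-}" ]; then echo "MODELSTUDIO_API_KEY"
  elif [ -n "${DASHSCOPE_API_KEY:-}" ]; then echo "DASHSCOPE_API_KEY"
  elif [ -n "${QWEN_API_KEY:-}" ]; then echo "QWEN_API_KEY"
  else echo "none"
  fi
}
```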
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Video generation" href="/tools/video-generation" icon="video">
Shared video tool parameters and provider selection.
</Card>
<Card title="Qwen" href="/providers/qwen" icon="microchip">
Qwen provider setup and DashScope integration.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference#agent-defaults" icon="gear">
Agent defaults and model configuration.
</Card>
</CardGroup>

View File

@@ -10,35 +10,55 @@ read_when:
Cloudflare AI Gateway sits in front of provider APIs and lets you add analytics, caching, and controls. For Anthropic, OpenClaw uses the Anthropic Messages API through your Gateway endpoint.
| Property | Value |
| ------------- | ---------------------------------------------------------------------------------------- |
| Provider | `cloudflare-ai-gateway` |
| Base URL | `https://gateway.ai.cloudflare.com/v1/<account_id>/<gateway_id>/anthropic` |
| Default model | `cloudflare-ai-gateway/claude-sonnet-4-5` |
| API key | `CLOUDFLARE_AI_GATEWAY_API_KEY` (your provider API key for requests through the Gateway) |
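The base URL is assembled from your Cloudflare account and gateway IDs; for example (placeholder values):

```shell
# Build the per-gateway Anthropic endpoint from your Cloudflare IDs (placeholders)
account_id="your-account-id"
gateway_id="your-gateway-id"
base_url="https://gateway.ai.cloudflare.com/v1/${account_id}/${gateway_id}/anthropic"
echo "$base_url"
```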
<Note>
For Anthropic models routed through Cloudflare AI Gateway, use your **Anthropic API key** as the provider key.
</Note>
## Getting started
<Steps>
<Step title="Set the provider API key and Gateway details">
Run onboarding and choose the Cloudflare AI Gateway auth option:
```bash
openclaw onboard --auth-choice cloudflare-ai-gateway-api-key
```
This prompts for your account ID, gateway ID, and API key.
</Step>
<Step title="Set a default model">
Add the model to your OpenClaw config:
```json5
{
agents: {
defaults: {
model: { primary: "cloudflare-ai-gateway/claude-sonnet-4-5" },
},
},
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider cloudflare-ai-gateway
```
</Step>
</Steps>
## Non-interactive example
For scripted or CI setups, pass all values on the command line:
```bash
openclaw onboard --non-interactive \
--mode local \
@@ -48,24 +68,49 @@ openclaw onboard --non-interactive \
--cloudflare-ai-gateway-api-key "$CLOUDFLARE_AI_GATEWAY_API_KEY"
```
## Advanced configuration
<AccordionGroup>
<Accordion title="Authenticated gateways">
If you enabled Gateway authentication in Cloudflare, add the `cf-aig-authorization` header. This is **in addition to** your provider API key.
```json5
{
models: {
providers: {
"cloudflare-ai-gateway": {
headers: {
"cf-aig-authorization": "Bearer <cloudflare-ai-gateway-token>",
},
},
},
},
}
```
<Tip>
The `cf-aig-authorization` header authenticates with the Cloudflare Gateway itself, while the provider API key (for example, your Anthropic key) authenticates with the upstream provider.
</Tip>
</Accordion>
<Accordion title="Environment note">
If the Gateway runs as a daemon (launchd/systemd), make sure `CLOUDFLARE_AI_GATEWAY_API_KEY` is available to that process.
<Warning>
A key sitting only in `~/.profile` will not help a launchd/systemd daemon unless that environment is imported there as well. Set the key in `~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can read it.
</Warning>
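One way to persist the key, assuming the gateway daemon loads `~/.openclaw/.env` as described above:

```shell
# Append the key to the env file read by the gateway daemon
env_file="$HOME/.openclaw/.env"
mkdir -p "$(dirname "$env_file")"
printf 'CLOUDFLARE_AI_GATEWAY_API_KEY=%s\n' "sk-example" >> "$env_file"
```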
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Troubleshooting" href="/help/troubleshooting" icon="wrench">
General troubleshooting and FAQ.
</Card>
</CardGroup>

View File

@@ -7,26 +7,38 @@ read_when:
# Fireworks
[Fireworks](https://fireworks.ai) exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw includes a bundled Fireworks provider plugin.
| Property | Value |
| ------------- | ------------------------------------------------------ |
| Provider | `fireworks` |
| Auth | `FIREWORKS_API_KEY` |
| API | OpenAI-compatible chat/completions |
| Base URL | `https://api.fireworks.ai/inference/v1` |
| Default model | `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` |
## Getting started
<Steps>
<Step title="Set up Fireworks auth through onboarding">
```bash
openclaw onboard --auth-choice fireworks-api-key
```
This stores your Fireworks key in OpenClaw config and sets the Fire Pass starter model as the default.
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider fireworks
```
</Step>
</Steps>
## Non-interactive example
For scripted or CI setups, pass all values on the command line:
```bash
openclaw onboard --non-interactive \
--mode local \
@@ -36,24 +48,20 @@ openclaw onboard --non-interactive \
--accept-risk
```
## Built-in catalog
| Model ref | Name | Input | Context | Max output | Notes |
| ------------------------------------------------------ | --------------------------- | ---------- | ------- | ---------- | ------------------------------------------ |
| `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` | Kimi K2.5 Turbo (Fire Pass) | text,image | 256,000 | 256,000 | Default bundled starter model on Fireworks |
<Tip>
If Fireworks publishes a newer model such as a fresh Qwen or Gemma release, you can switch to it directly by using its Fireworks model id without waiting for a bundled catalog update.
</Tip>
## Custom Fireworks model ids
OpenClaw accepts dynamic Fireworks model ids too. Use the exact model or router id shown by Fireworks and prefix it with `fireworks/`.
Example:
```json5
{
agents: {
@@ -66,4 +74,34 @@ Example:
}
```
<AccordionGroup>
<Accordion title="How model id prefixing works">
Every Fireworks model ref in OpenClaw starts with `fireworks/` followed by the exact id or router path from the Fireworks platform. For example:
- Router model: `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo`
- Direct model: `fireworks/accounts/fireworks/models/<model-name>`
OpenClaw strips the `fireworks/` prefix when building the API request and sends the remaining path to the Fireworks endpoint.
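The prefix handling can be illustrated with plain shell parameter expansion (a sketch; the real logic lives in the provider plugin):

```shell
# Strip the provider prefix to recover the Fireworks-side model path
ref="fireworks/accounts/fireworks/routers/kimi-k2p5-turbo"
model_path="${ref#fireworks/}"
echo "$model_path"   # → accounts/fireworks/routers/kimi-k2p5-turbo
```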
</Accordion>
<Accordion title="Environment note">
If the Gateway runs outside your interactive shell, make sure `FIREWORKS_API_KEY` is available to that process too.
<Warning>
A key sitting only in `~/.profile` will not help a launchd/systemd daemon unless that environment is imported there as well. Set the key in `~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can read it.
</Warning>
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Troubleshooting" href="/help/troubleshooting" icon="wrench">
General troubleshooting and FAQ.
</Card>
</CardGroup>

View File

@@ -12,22 +12,42 @@ OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and
audio transcription via Voxtral in media understanding.
Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).
- Provider: `mistral`
- Auth: `MISTRAL_API_KEY`
- API: Mistral Chat Completions (`https://api.mistral.ai/v1`)
## Getting started
<Steps>
<Step title="Get your API key">
Create an API key in the [Mistral Console](https://console.mistral.ai/).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice mistral-api-key
```
Or pass the key directly:
```bash
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
```
</Step>
<Step title="Set a default model">
```json5
{
env: { MISTRAL_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider mistral
```
</Step>
</Steps>
## Built-in LLM catalog
@@ -43,7 +63,9 @@ OpenClaw currently ships this bundled Mistral catalog:
| `mistral/devstral-medium-latest` | text | 262,144 | 32,768 | Devstral 2 |
| `mistral/magistral-small` | text | 128,000 | 40,000 | Reasoning-enabled |
## Audio transcription (Voxtral)
Use Voxtral for audio transcription through the media understanding pipeline.
```json5
{
@@ -58,22 +80,55 @@ OpenClaw currently ships this bundled Mistral catalog:
}
```
<Tip>
The media transcription path uses `/v1/audio/transcriptions`. The default audio model for Mistral is `voxtral-mini-latest`.
</Tip>
## Advanced configuration
<AccordionGroup>
<Accordion title="Adjustable reasoning (mistral-small-latest)">
`mistral/mistral-small-latest` maps to Mistral Small 4 and supports [adjustable reasoning](https://docs.mistral.ai/capabilities/reasoning/adjustable) on the Chat Completions API via `reasoning_effort` (`none` minimizes extra thinking in the output; `high` surfaces full thinking traces before the final answer).
OpenClaw maps the session **thinking** level to Mistral's API:
| OpenClaw thinking level | Mistral `reasoning_effort` |
| ------------------------------------------------ | -------------------------- |
| **off** / **minimal** | `none` |
| **low** / **medium** / **high** / **xhigh** / **adaptive** | `high` |
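The mapping above can be sketched as a simple lookup (illustration only; OpenClaw applies this internally when calling Mistral):

```shell
# Sketch of the thinking-level to reasoning_effort mapping
thinking_to_effort() {
  case "$1" in
    off|minimal) echo "none" ;;
    low|medium|high|xhigh|adaptive) echo "high" ;;
    *) echo "unknown" ;;
  esac
}
```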
<Note>
Other bundled Mistral catalog models do not use this parameter. Keep using `magistral-*` models when you want Mistral's native reasoning-first behavior.
</Note>
</Accordion>
<Accordion title="Memory embeddings">
Mistral can serve memory embeddings via `/v1/embeddings` (default model: `mistral-embed`).
```json5
{
memorySearch: { provider: "mistral" },
}
```
</Accordion>
<Accordion title="Auth and base URL">
- Mistral auth uses `MISTRAL_API_KEY`.
- Provider base URL defaults to `https://api.mistral.ai/v1`.
- Onboarding default model is `mistral/mistral-large-latest`.
- Mistral uses Bearer auth with your API key.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Media understanding" href="/tools/media-understanding" icon="microphone">
Audio transcription setup and provider selection.
</Card>
</CardGroup>

View File

@@ -12,64 +12,142 @@ Z.AI is the API platform for **GLM** models. It provides REST APIs for GLM and u
for authentication. Create your API key in the Z.AI console. OpenClaw uses the `zai` provider
with a Z.AI API key.
- Provider: `zai`
- Auth: `ZAI_API_KEY`
- API: Z.AI Chat Completions (Bearer auth)
## Getting started
<Tabs>
<Tab title="Auto-detect endpoint">
**Best for:** most users. OpenClaw detects the matching Z.AI endpoint from the key and applies the correct base URL automatically.
<Steps>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice zai-api-key
```
</Step>
<Step title="Set a default model">
```json5
{
env: { ZAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider zai
```
</Step>
</Steps>
</Tab>
<Tab title="Explicit regional endpoint">
**Best for:** users who want to force a specific Coding Plan or general API surface.
<Steps>
<Step title="Pick the right onboarding choice">
```bash
# Coding Plan Global (recommended for Coding Plan users)
openclaw onboard --auth-choice zai-coding-global
# Coding Plan CN (China region)
openclaw onboard --auth-choice zai-coding-cn
# General API
openclaw onboard --auth-choice zai-global
# General API CN (China region)
openclaw onboard --auth-choice zai-cn
```
</Step>
<Step title="Set a default model">
```json5
{
env: { ZAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider zai
```
</Step>
</Steps>
</Tab>
</Tabs>
## Bundled GLM catalog
OpenClaw currently seeds the bundled `zai` provider with:
| Model ref | Notes |
| -------------------- | ------------- |
| `zai/glm-5.1` | Default model |
| `zai/glm-5` | |
| `zai/glm-5-turbo` | |
| `zai/glm-5v-turbo` | |
| `zai/glm-4.7` | |
| `zai/glm-4.7-flash` | |
| `zai/glm-4.7-flashx` | |
| `zai/glm-4.6` | |
| `zai/glm-4.6v` | |
| `zai/glm-4.5` | |
| `zai/glm-4.5-air` | |
| `zai/glm-4.5-flash` | |
| `zai/glm-4.5v` | |
<Tip>
GLM models are available as `zai/<model>` (example: `zai/glm-5`). The default bundled model ref is `zai/glm-5.1`.
</Tip>
## Advanced configuration
<AccordionGroup>
<Accordion title="Forward-resolving unknown GLM-5 models">
Unknown `glm-5*` ids still forward-resolve on the bundled provider path by
synthesizing provider-owned metadata from the `glm-4.7` template when the id
matches the current GLM-5 family shape.
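As a hedged illustration of the family-shape check (the real matching rules live in the bundled provider and may be stricter than a bare prefix match):

```shell
# Hypothetical sketch: glm-5* ids fall back to metadata synthesized from glm-4.7
resolve_glm() {
  case "$1" in
    glm-5*) echo "synthesized-from-glm-4.7" ;;
    *) echo "not-forward-resolved" ;;
  esac
}
```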
</Accordion>
<Accordion title="Tool-call streaming">
`tool_stream` is enabled by default for Z.AI tool-call streaming. To disable it:
```json5
{
agents: {
defaults: {
models: {
"zai/<model>": {
params: { tool_stream: false },
},
},
},
},
}
```
</Accordion>
<Accordion title="Auth details">
- Z.AI uses Bearer auth with your API key.
- The `zai-api-key` onboarding choice auto-detects the matching Z.AI endpoint from the key prefix.
- Use the explicit regional choices (`zai-coding-global`, `zai-coding-cn`, `zai-global`, `zai-cn`) when you want to force a specific API surface.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="GLM model family" href="/providers/glm" icon="microchip">
Model family overview for GLM.
</Card>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
</CardGroup>