docs(providers): improve chutes, synthetic, together, volcengine, deepgram with Mintlify components

This commit is contained in:
Vincent Koc
2026-04-12 11:24:24 +01:00
parent e7076617f9
commit 4081603ad5
5 changed files with 501 additions and 251 deletions


@@ -13,44 +13,58 @@ read_when:
OpenAI-compatible API. OpenClaw supports both browser OAuth and direct API-key
auth for the bundled `chutes` provider.
| Property | Value |
| -------- | ---------------------------- |
| Provider | `chutes` |
| API | OpenAI-compatible |
| Base URL | `https://llm.chutes.ai/v1` |
| Auth | OAuth or API key (see below) |
## Getting started
<Tabs>
<Tab title="OAuth">
<Steps>
<Step title="Run the OAuth onboarding flow">
```bash
openclaw onboard --auth-choice chutes
```
OpenClaw launches the browser flow locally, or shows a URL + redirect-paste
flow on remote/headless hosts. OAuth tokens auto-refresh through OpenClaw auth
profiles.
</Step>
<Step title="Verify the default model">
After onboarding, the default model is set to
`chutes/zai-org/GLM-4.7-TEE` and the bundled Chutes catalog is
registered.
</Step>
</Steps>
</Tab>
<Tab title="API key">
<Steps>
<Step title="Get an API key">
Create a key at
[chutes.ai/settings/api-keys](https://chutes.ai/settings/api-keys).
</Step>
<Step title="Run the API key onboarding flow">
```bash
openclaw onboard --auth-choice chutes-api-key
```
</Step>
<Step title="Verify the default model">
After onboarding, the default model is set to
`chutes/zai-org/GLM-4.7-TEE` and the bundled Chutes catalog is
registered.
</Step>
</Steps>
</Tab>
</Tabs>
<Note>
Both auth paths register the bundled Chutes catalog and set the default model to
`chutes/zai-org/GLM-4.7-TEE`. Runtime environment variables: `CHUTES_API_KEY`,
`CHUTES_OAUTH_TOKEN`.
</Note>
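Because the endpoint is OpenAI-compatible, you can also call it directly outside OpenClaw. A minimal request-building sketch in Python; the wire-level model id (the ref without the `chutes/` prefix) and the env-var fallback order here are assumptions, not confirmed behavior:

```python
import os

BASE_URL = "https://llm.chutes.ai/v1"

def chat_request(model: str, prompt: str) -> tuple[str, dict, dict]:
    """Build an OpenAI-compatible chat completion request for Chutes.

    Returns (url, headers, body); send it with any HTTP client.
    """
    # Assumption: prefer the API key, fall back to the OAuth token.
    token = os.environ.get("CHUTES_API_KEY") or os.environ.get("CHUTES_OAUTH_TOKEN")
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # wire-level id, e.g. "zai-org/GLM-4.7-TEE" (assumed)
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body
```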
## Discovery behavior
@@ -60,25 +74,28 @@ back to a bundled static catalog so onboarding and startup still work.
## Default aliases
OpenClaw registers three convenience aliases for the bundled Chutes catalog:
| Alias | Target model |
| --------------- | ----------------------------------------------------- |
| `chutes-fast` | `chutes/zai-org/GLM-4.7-FP8` |
| `chutes-pro` | `chutes/deepseek-ai/DeepSeek-V3.2-TEE` |
| `chutes-vision` | `chutes/chutesai/Mistral-Small-3.2-24B-Instruct-2506` |
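A hypothetical config sketch, assuming aliases can be used anywhere a model ref is accepted (for example, as the default model):

```json5
{
  agents: {
    defaults: {
      model: { primary: "chutes-fast" },
    },
  },
}
```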
## Built-in starter catalog
The bundled fallback catalog includes current Chutes refs:
| Model ref |
| ----------------------------------------------------- |
| `chutes/zai-org/GLM-4.7-TEE` |
| `chutes/zai-org/GLM-5-TEE` |
| `chutes/deepseek-ai/DeepSeek-V3.2-TEE` |
| `chutes/deepseek-ai/DeepSeek-R1-0528-TEE` |
| `chutes/moonshotai/Kimi-K2.5-TEE` |
| `chutes/chutesai/Mistral-Small-3.2-24B-Instruct-2506` |
| `chutes/Qwen/Qwen3-Coder-Next-TEE` |
| `chutes/openai/gpt-oss-120b-TEE` |
## Config example
@@ -96,8 +113,42 @@ The bundled fallback catalog includes current Chutes refs such as:
}
```
<AccordionGroup>
<Accordion title="OAuth overrides">
You can customize the OAuth flow with optional environment variables:
| Variable | Purpose |
| -------- | ------- |
| `CHUTES_CLIENT_ID` | Custom OAuth client ID |
| `CHUTES_CLIENT_SECRET` | Custom OAuth client secret |
| `CHUTES_OAUTH_REDIRECT_URI` | Custom redirect URI |
| `CHUTES_OAUTH_SCOPES` | Custom OAuth scopes |
See the [Chutes OAuth docs](https://chutes.ai/docs/sign-in-with-chutes/overview)
for redirect-app requirements and help.
</Accordion>
<Accordion title="Notes">
- API-key and OAuth discovery both use the same `chutes` provider id.
- Chutes models are registered as `chutes/<model-id>`.
- If discovery fails at startup, the bundled static catalog is used automatically.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model providers" href="/concepts/model-providers" icon="layers">
Provider rules, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config schema including provider settings.
</Card>
<Card title="Chutes" href="https://chutes.ai" icon="arrow-up-right-from-square">
Chutes dashboard and API docs.
</Card>
<Card title="Chutes API keys" href="https://chutes.ai/settings/api-keys" icon="key">
Create and manage Chutes API keys.
</Card>
</CardGroup>


@@ -15,79 +15,128 @@ When enabled, OpenClaw uploads the audio file to Deepgram and injects the transc
into the reply pipeline (`{{Transcript}}` + `[Audio]` block). This is **not streaming**;
it uses the pre-recorded transcription endpoint.
| Detail | Value |
| ------------- | ---------------------------------------------------------- |
| Website | [deepgram.com](https://deepgram.com) |
| Docs | [developers.deepgram.com](https://developers.deepgram.com) |
| Auth | `DEEPGRAM_API_KEY` |
| Default model | `nova-3` |
## Getting started
<Steps>
<Step title="Set your API key">
Add your Deepgram API key to the environment:
```
DEEPGRAM_API_KEY=dg_...
```
</Step>
<Step title="Enable the audio provider">
```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "deepgram", model: "nova-3" }],
      },
    },
  },
}
```
</Step>
<Step title="Send a voice note">
Send an audio message through any connected channel. OpenClaw transcribes it
via Deepgram and injects the transcript into the reply pipeline.
</Step>
</Steps>
## Configuration options
| Option | Path | Description |
| ----------------- | ------------------------------------------------------------ | ------------------------------------- |
| `model` | `tools.media.audio.models[].model` | Deepgram model id (default: `nova-3`) |
| `language` | `tools.media.audio.models[].language` | Language hint (optional) |
| `detect_language` | `tools.media.audio.providerOptions.deepgram.detect_language` | Enable language detection (optional) |
| `punctuate` | `tools.media.audio.providerOptions.deepgram.punctuate` | Enable punctuation (optional) |
| `smart_format` | `tools.media.audio.providerOptions.deepgram.smart_format` | Enable smart formatting (optional) |
<Tabs>
<Tab title="With language hint">
```json5
{
tools: {
media: {
audio: {
enabled: true,
models: [{ provider: "deepgram", model: "nova-3", language: "en" }],
},
},
},
}
```
</Tab>
<Tab title="With Deepgram options">
```json5
{
tools: {
media: {
audio: {
enabled: true,
providerOptions: {
deepgram: {
detect_language: true,
punctuate: true,
smart_format: true,
},
},
models: [{ provider: "deepgram", model: "nova-3" }],
},
},
},
}
```
</Tab>
</Tabs>
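The language hint and provider options can also be combined in one config; a sketch of that combination (not a verified example from the source docs):

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        providerOptions: {
          deepgram: {
            punctuate: true,
            smart_format: true,
          },
        },
        models: [{ provider: "deepgram", model: "nova-3", language: "en" }],
      },
    },
  },
}
```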
<AccordionGroup>
<Accordion title="Authentication">
Authentication follows the standard provider auth order. `DEEPGRAM_API_KEY` is
the simplest path.
</Accordion>
<Accordion title="Proxy and custom endpoints">
Override endpoints or headers with `tools.media.audio.baseUrl` and
`tools.media.audio.headers` when using a proxy.
</Accordion>
<Accordion title="Output behavior">
Output follows the same audio rules as other providers (size caps, timeouts,
transcript injection).
</Accordion>
</AccordionGroup>
<Note>
Deepgram transcription is **pre-recorded only** (not real-time streaming). OpenClaw
uploads the complete audio file and waits for the full transcript before injecting
it into the conversation.
</Note>
## Related
<CardGroup cols={2}>
<Card title="Media tools" href="/tools/media" icon="photo-film">
Audio, image, and video processing pipeline overview.
</Card>
<Card title="Configuration" href="/configuration" icon="gear">
Full config reference including media tool settings.
</Card>
<Card title="Troubleshooting" href="/help/troubleshooting" icon="wrench">
Common issues and debugging steps.
</Card>
<Card title="FAQ" href="/help/faq" icon="circle-question">
Frequently asked questions about OpenClaw setup.
</Card>
</CardGroup>


@@ -8,23 +8,42 @@ title: "Synthetic"
# Synthetic
[Synthetic](https://synthetic.new) exposes Anthropic-compatible endpoints.
OpenClaw registers it as the `synthetic` provider and uses the Anthropic
Messages API.
| Property | Value |
| -------- | ------------------------------------- |
| Provider | `synthetic` |
| Auth | `SYNTHETIC_API_KEY` |
| API | Anthropic Messages |
| Base URL | `https://api.synthetic.new/anthropic` |
## Getting started
<Steps>
<Step title="Get an API key">
Obtain a `SYNTHETIC_API_KEY` from your Synthetic account, or let the
onboarding wizard prompt you for one.
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice synthetic-api-key
```
</Step>
<Step title="Verify the default model">
After onboarding the default model is set to:
```
synthetic/hf:MiniMaxAI/MiniMax-M2.5
```
</Step>
</Steps>
<Warning>
OpenClaw's Anthropic client appends `/v1` to the base URL automatically, so use
`https://api.synthetic.new/anthropic` (not `/anthropic/v1`). If Synthetic
changes its base URL, override `models.providers.synthetic.baseUrl`.
</Warning>
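The base-URL rule boils down to a simple join; an illustrative sketch (not OpenClaw's actual client code) of how the effective endpoint is derived:

```python
def anthropic_endpoint(base_url: str, path: str = "messages") -> str:
    """Sketch of the rule: OpenClaw appends /v1 to the configured base URL.

    Configuring ".../anthropic/v1" would therefore produce a doubled "/v1/v1" path.
    """
    return f"{base_url.rstrip('/')}/v1/{path}"
```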
## Config example
@@ -61,41 +80,77 @@ synthetic/hf:MiniMaxAI/MiniMax-M2.5
}
```
## Model catalog
All Synthetic models use cost `0` (input/output/cache).
| Model ID | Context window | Max tokens | Reasoning | Input |
| ------------------------------------------------------ | -------------- | ---------- | --------- | ------------ |
| `hf:MiniMaxAI/MiniMax-M2.5` | 192,000 | 65,536 | no | text |
| `hf:moonshotai/Kimi-K2-Thinking` | 256,000 | 8,192 | yes | text |
| `hf:zai-org/GLM-4.7` | 198,000 | 128,000 | no | text |
| `hf:deepseek-ai/DeepSeek-R1-0528` | 128,000 | 8,192 | no | text |
| `hf:deepseek-ai/DeepSeek-V3-0324` | 128,000 | 8,192 | no | text |
| `hf:deepseek-ai/DeepSeek-V3.1` | 128,000 | 8,192 | no | text |
| `hf:deepseek-ai/DeepSeek-V3.1-Terminus` | 128,000 | 8,192 | no | text |
| `hf:deepseek-ai/DeepSeek-V3.2` | 159,000 | 8,192 | no | text |
| `hf:meta-llama/Llama-3.3-70B-Instruct` | 128,000 | 8,192 | no | text |
| `hf:meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 524,000 | 8,192 | no | text |
| `hf:moonshotai/Kimi-K2-Instruct-0905` | 256,000 | 8,192 | no | text |
| `hf:moonshotai/Kimi-K2.5` | 256,000 | 8,192 | yes | text + image |
| `hf:openai/gpt-oss-120b` | 128,000 | 8,192 | no | text |
| `hf:Qwen/Qwen3-235B-A22B-Instruct-2507` | 256,000 | 8,192 | no | text |
| `hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256,000 | 8,192 | no | text |
| `hf:Qwen/Qwen3-VL-235B-A22B-Instruct` | 250,000 | 8,192 | no | text + image |
| `hf:zai-org/GLM-4.5` | 128,000 | 128,000 | no | text |
| `hf:zai-org/GLM-4.6` | 198,000 | 128,000 | no | text |
| `hf:zai-org/GLM-5` | 256,000 | 128,000 | yes | text + image |
| `hf:deepseek-ai/DeepSeek-V3` | 128,000 | 8,192 | no | text |
| `hf:Qwen/Qwen3-235B-A22B-Thinking-2507` | 256,000 | 8,192 | yes | text |
<Tip>
Model refs use the form `synthetic/<modelId>`. Use
`openclaw models list --provider synthetic` to see all models available on your
account.
</Tip>
<AccordionGroup>
<Accordion title="Model allowlist">
If you enable a model allowlist (`agents.defaults.models`), add every
Synthetic model you plan to use. Models not in the allowlist will be hidden
from the agent.
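For example, a sketch assuming `agents.defaults.models` takes an array of model refs:

```json5
{
  agents: {
    defaults: {
      models: [
        "synthetic/hf:MiniMaxAI/MiniMax-M2.5",
        "synthetic/hf:zai-org/GLM-4.6",
      ],
    },
  },
}
```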
</Accordion>
<Accordion title="Base URL override">
If Synthetic changes its API endpoint, override the base URL in your config:
```json5
{
models: {
providers: {
synthetic: {
baseUrl: "https://new-api.synthetic.new/anthropic",
},
},
},
}
```
Remember that OpenClaw appends `/v1` automatically.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model providers" href="/concepts/model-providers" icon="layers">
Provider rules, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config schema including provider settings.
</Card>
<Card title="Synthetic" href="https://synthetic.new" icon="arrow-up-right-from-square">
Synthetic dashboard and API docs.
</Card>
</CardGroup>


@@ -8,34 +8,42 @@ read_when:
# Together AI
[Together AI](https://together.ai) provides access to leading open-source
models including Llama, DeepSeek, Kimi, and more through a unified API.
| Property | Value |
| -------- | ----------------------------- |
| Provider | `together` |
| Auth | `TOGETHER_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://api.together.xyz/v1` |
## Getting started
<Steps>
<Step title="Get an API key">
Create an API key at
[api.together.ai/settings/api-keys](https://api.together.ai/settings/api-keys).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice together-api-key
```
</Step>
<Step title="Set a default model">
```json5
{
agents: {
defaults: {
model: { primary: "together/moonshotai/Kimi-K2.5" },
},
},
}
```
</Step>
</Steps>
### Non-interactive example
```bash
openclaw onboard --non-interactive \
@@ -44,17 +52,14 @@ openclaw onboard --non-interactive \
--together-api-key "$TOGETHER_API_KEY"
```
<Note>
The onboarding preset sets `together/moonshotai/Kimi-K2.5` as the default
model.
</Note>
## Built-in catalog
OpenClaw ships this bundled Together catalog:
| Model ref | Name | Input | Context | Notes |
| ------------------------------------------------------------ | -------------------------------------- | ----------- | ---------- | -------------------------------- |
@@ -67,16 +72,16 @@ OpenClaw currently ships this bundled Together catalog:
| `together/deepseek-ai/DeepSeek-R1` | DeepSeek R1 | text | 131,072 | Reasoning model |
| `together/moonshotai/Kimi-K2-Instruct-0905` | Kimi K2-Instruct 0905 | text | 262,144 | Secondary Kimi text model |
## Video generation
The bundled `together` plugin also registers video generation through the
shared `video_generate` tool.
| Property | Value |
| -------------------- | ------------------------------------- |
| Default video model | `together/Wan-AI/Wan2.2-T2V-A14B` |
| Modes | text-to-video, single-image reference |
| Supported parameters | `aspectRatio`, `resolution` |
To use Together as the default video provider:
@@ -92,5 +97,46 @@ To use Together as the default video provider:
}
```
<Tip>
See [Video Generation](/tools/video-generation) for the shared tool parameters,
provider selection, and failover behavior.
</Tip>
<AccordionGroup>
<Accordion title="Environment note">
If the Gateway runs as a daemon (launchd/systemd), make sure
`TOGETHER_API_KEY` is available to that process (for example, in
`~/.openclaw/.env` or via `env.shellEnv`).
<Warning>
Keys set only in your interactive shell are not visible to daemon-managed
gateway processes. Use `~/.openclaw/.env` or `env.shellEnv` config for
persistent availability.
</Warning>
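For example, a minimal `~/.openclaw/.env` entry (placeholder value shown):

```
TOGETHER_API_KEY=your-together-api-key
```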
</Accordion>
<Accordion title="Troubleshooting">
- Verify your key works: `openclaw models list --provider together`
- If models are not appearing, confirm the API key is set in the correct
environment for your Gateway process.
- Model refs use the form `together/<model-id>`.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model providers" href="/concepts/model-providers" icon="layers">
Provider rules, model refs, and failover behavior.
</Card>
<Card title="Video generation" href="/tools/video-generation" icon="video">
Shared video generation tool parameters and provider selection.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config schema including provider settings.
</Card>
<Card title="Together AI" href="https://together.ai" icon="arrow-up-right-from-square">
Together AI dashboard, API docs, and pricing.
</Card>
</CardGroup>


@@ -12,31 +12,46 @@ The Volcengine provider gives access to Doubao models and third-party models
hosted on Volcano Engine, with separate endpoints for general and coding
workloads.
| Detail | Value |
| --------- | --------------------------------------------------- |
| Providers | `volcengine` (general) + `volcengine-plan` (coding) |
| Auth | `VOLCANO_ENGINE_API_KEY` |
| API | OpenAI-compatible |
## Getting started
<Steps>
<Step title="Set the API key">
Run interactive onboarding:
```bash
openclaw onboard --auth-choice volcengine-api-key
```
This registers both the general (`volcengine`) and coding (`volcengine-plan`) providers from a single API key.
</Step>
<Step title="Set a default model">
```json5
{
agents: {
defaults: {
model: { primary: "volcengine-plan/ark-code-latest" },
},
},
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider volcengine
openclaw models list --provider volcengine-plan
```
</Step>
</Steps>
## Non-interactive example
<Tip>
For non-interactive setup (CI, scripting), pass the key directly:
```bash
openclaw onboard --non-interactive \
@@ -45,6 +60,8 @@ openclaw onboard --non-interactive \
--volcengine-api-key "$VOLCANO_ENGINE_API_KEY"
```
</Tip>
## Providers and endpoints
| Provider | Endpoint | Use case |
@@ -52,43 +69,75 @@ openclaw onboard --non-interactive \
| `volcengine` | `ark.cn-beijing.volces.com/api/v3` | General models |
| `volcengine-plan` | `ark.cn-beijing.volces.com/api/coding/v3` | Coding models |
<Note>
Both providers are configured from a single API key. Setup registers both automatically.
</Note>
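The provider-to-endpoint split can be illustrated with a small routing sketch in Python; this is not OpenClaw's actual code, and the `https` scheme is an assumption:

```python
# Endpoints from the table above; scheme assumed.
ENDPOINTS = {
    "volcengine": "https://ark.cn-beijing.volces.com/api/v3",
    "volcengine-plan": "https://ark.cn-beijing.volces.com/api/coding/v3",
}

def endpoint_for(model_ref: str) -> str:
    """Resolve the Ark endpoint from a ref like 'volcengine-plan/ark-code-latest'."""
    provider = model_ref.split("/", 1)[0]
    return ENDPOINTS[provider]
```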
## Available models
<Tabs>
<Tab title="General (volcengine)">
| Model ref | Name | Input | Context |
| -------------------------------------------- | ------------------------------- | ----------- | ------- |
| `volcengine/doubao-seed-1-8-251228` | Doubao Seed 1.8 | text, image | 256,000 |
| `volcengine/doubao-seed-code-preview-251028` | doubao-seed-code-preview-251028 | text, image | 256,000 |
| `volcengine/kimi-k2-5-260127` | Kimi K2.5 | text, image | 256,000 |
| `volcengine/glm-4-7-251222` | GLM 4.7 | text, image | 200,000 |
| `volcengine/deepseek-v3-2-251201` | DeepSeek V3.2 | text, image | 128,000 |
</Tab>
<Tab title="Coding (volcengine-plan)">
| Model ref | Name | Input | Context |
| ------------------------------------------------- | ------------------------ | ----- | ------- |
| `volcengine-plan/ark-code-latest` | Ark Coding Plan | text | 256,000 |
| `volcengine-plan/doubao-seed-code` | Doubao Seed Code | text | 256,000 |
| `volcengine-plan/glm-4.7` | GLM 4.7 Coding | text | 200,000 |
| `volcengine-plan/kimi-k2-thinking` | Kimi K2 Thinking | text | 256,000 |
| `volcengine-plan/kimi-k2.5` | Kimi K2.5 Coding | text | 256,000 |
| `volcengine-plan/doubao-seed-code-preview-251028` | Doubao Seed Code Preview | text | 256,000 |
</Tab>
</Tabs>
## Advanced notes
<AccordionGroup>
<Accordion title="Default model after onboarding">
`openclaw onboard --auth-choice volcengine-api-key` currently sets
`volcengine-plan/ark-code-latest` as the default model while also registering
the general `volcengine` catalog.
</Accordion>
<Accordion title="Model picker fallback behavior">
During onboarding/configure model selection, the Volcengine auth choice prefers
both `volcengine/*` and `volcengine-plan/*` rows. If those models are not
loaded yet, OpenClaw falls back to the unfiltered catalog instead of showing an
empty provider-scoped picker.
</Accordion>
<Accordion title="Environment variables for daemon processes">
If the Gateway runs as a daemon (launchd/systemd), make sure
`VOLCANO_ENGINE_API_KEY` is available to that process (for example, in
`~/.openclaw/.env` or via `env.shellEnv`).
</Accordion>
</AccordionGroup>
<Warning>
When running OpenClaw as a background service, environment variables set in your
interactive shell are not automatically inherited. See the daemon note above.
</Warning>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration" href="/configuration" icon="gear">
Full config reference for agents, models, and providers.
</Card>
<Card title="Troubleshooting" href="/help/troubleshooting" icon="wrench">
Common issues and debugging steps.
</Card>
<Card title="FAQ" href="/help/faq" icon="circle-question">
Frequently asked questions about OpenClaw setup.
</Card>
</CardGroup>