docs(providers): document cerebras setup

Peter Steinberger
2026-04-27 10:22:07 +01:00
parent 4de235f908
commit 1b81f75654
4 changed files with 102 additions and 5 deletions


@@ -317,7 +317,7 @@ See [/providers/kilocode](/providers/kilocode) for setup details.
Uses the xAI Responses path. `/fast` or `params.fastMode: true` rewrites `grok-3`, `grok-3-mini`, `grok-4`, and `grok-4-0709` to their `*-fast` variants. `tool_stream` defaults on; disable via `agents.defaults.models["xai/<model>"].params.tool_stream=false`.
</Accordion>
<Accordion title="Cerebras">
-GLM models use `zai-glm-4.7` / `zai-glm-4.6`; OpenAI-compatible base URL is `https://api.cerebras.ai/v1`.
+Ships as the bundled `cerebras` provider plugin. GLM uses `zai-glm-4.7`; OpenAI-compatible base URL is `https://api.cerebras.ai/v1`.
</Accordion>
</AccordionGroup>
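
The per-model `tool_stream` opt-out described in the xAI accordion would look like this in config (a sketch; `grok-4` is an example model key, substitute your own):

```json5
{
  agents: {
    defaults: {
      models: {
        // tool_stream defaults on for xAI; set false per model to disable
        "xai/grok-4": { params: { tool_stream: false } },
      },
    },
  },
}
```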


@@ -475,7 +475,9 @@ OpenClaw uses the built-in model catalog. Add custom providers via `models.provi
### Provider examples
<AccordionGroup>
-<Accordion title="Cerebras (GLM 4.6 / 4.7)">
+<Accordion title="Cerebras (GLM 4.7 / GPT OSS)">
+The bundled `cerebras` provider plugin can configure this via `openclaw onboard --auth-choice cerebras-api-key`. Use explicit provider config only when overriding defaults.
```json5
{
env: { CEREBRAS_API_KEY: "sk-..." },
@@ -483,11 +485,11 @@ OpenClaw uses the built-in model catalog. Add custom providers via `models.provi
defaults: {
model: {
primary: "cerebras/zai-glm-4.7",
-fallbacks: ["cerebras/zai-glm-4.6"],
+fallbacks: ["cerebras/gpt-oss-120b"],
},
models: {
"cerebras/zai-glm-4.7": { alias: "GLM 4.7 (Cerebras)" },
-"cerebras/zai-glm-4.6": { alias: "GLM 4.6 (Cerebras)" },
+"cerebras/gpt-oss-120b": { alias: "GPT OSS 120B (Cerebras)" },
},
},
},
@@ -500,7 +502,7 @@ OpenClaw uses the built-in model catalog. Add custom providers via `models.provi
api: "openai-completions",
models: [
{ id: "zai-glm-4.7", name: "GLM 4.7 (Cerebras)" },
-{ id: "zai-glm-4.6", name: "GLM 4.6 (Cerebras)" },
+{ id: "gpt-oss-120b", name: "GPT OSS 120B (Cerebras)" },
],
},
},


@@ -0,0 +1,94 @@
---
summary: "Cerebras setup (auth + model selection)"
title: "Cerebras"
read_when:
- You want to use Cerebras with OpenClaw
- You need the Cerebras API key env var or CLI auth choice
---
[Cerebras](https://www.cerebras.ai) provides high-speed OpenAI-compatible inference.

| Property | Value |
| -------- | ---------------------------- |
| Provider | `cerebras` |
| Auth | `CEREBRAS_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://api.cerebras.ai/v1` |
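
Because the endpoint is OpenAI-compatible, any OpenAI-style client or plain `curl` works against it; a minimal sketch (the model name is one example from the catalog on this page):

```shell
# Sketch: call the OpenAI-compatible chat-completions endpoint directly.
BASE_URL="https://api.cerebras.ai/v1"
BODY='{"model":"zai-glm-4.7","messages":[{"role":"user","content":"Hi"}]}'

# Only send the request when a key is actually configured.
if [ -n "${CEREBRAS_API_KEY:-}" ]; then
  curl -sf "$BASE_URL/chat/completions" \
    -H "Authorization: Bearer $CEREBRAS_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$BODY"
fi
```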
## Getting Started
<Steps>
<Step title="Get an API key">
Create an API key in the [Cerebras Cloud Console](https://cloud.cerebras.ai).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice cerebras-api-key
```
</Step>
<Step title="Verify models are available">
```bash
openclaw models list --provider cerebras
```
</Step>
</Steps>
### Non-Interactive Setup
```bash
openclaw onboard --non-interactive \
--mode local \
--auth-choice cerebras-api-key \
--cerebras-api-key "$CEREBRAS_API_KEY"
```
## Built-In Catalog
OpenClaw ships a static Cerebras catalog for the public OpenAI-compatible endpoint:

| Model ref | Name | Notes |
| ----------------------------------------- | -------------------- | -------------------------------------- |
| `cerebras/zai-glm-4.7` | Z.ai GLM 4.7 | Default model; preview reasoning model |
| `cerebras/gpt-oss-120b` | GPT OSS 120B | Production reasoning model |
| `cerebras/qwen-3-235b-a22b-instruct-2507` | Qwen 3 235B Instruct | Preview non-reasoning model |
| `cerebras/llama3.1-8b` | Llama 3.1 8B | Production speed-focused model |
<Warning>
Cerebras marks `zai-glm-4.7` and `qwen-3-235b-a22b-instruct-2507` as preview models; `llama3.1-8b` and `qwen-3-235b-a22b-instruct-2507` are also scheduled for deprecation on May 27, 2026. Check Cerebras' supported-models page before relying on them in production.
</Warning>
## Manual Config
The bundled plugin usually means you only need the API key. Use explicit
`models.providers.cerebras` config when you want to override model metadata:
```json5
{
env: { CEREBRAS_API_KEY: "sk-..." },
agents: {
defaults: {
model: { primary: "cerebras/zai-glm-4.7" },
},
},
models: {
mode: "merge",
providers: {
cerebras: {
baseUrl: "https://api.cerebras.ai/v1",
apiKey: "${CEREBRAS_API_KEY}",
api: "openai-completions",
models: [
{ id: "zai-glm-4.7", name: "Z.ai GLM 4.7" },
{ id: "gpt-oss-120b", name: "GPT OSS 120B" },
],
},
},
},
}
```
<Note>
If the Gateway runs as a daemon (launchd/systemd), make sure `CEREBRAS_API_KEY`
is available to that process, for example in `~/.openclaw/.env` or through
`env.shellEnv`.
</Note>
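
One way to handle the daemon case is appending the key to that env file, sketched here (the `~/.openclaw/.env` path is the one from the note; adjust for your install):

```shell
# Sketch: persist CEREBRAS_API_KEY in ~/.openclaw/.env so a daemonized
# Gateway can read it. Path and file format per the note above.
ENV_FILE="${ENV_FILE:-$HOME/.openclaw/.env}"
mkdir -p "$(dirname "$ENV_FILE")"
# Append only if no CEREBRAS_API_KEY entry exists yet.
grep -q '^CEREBRAS_API_KEY=' "$ENV_FILE" 2>/dev/null \
  || printf 'CEREBRAS_API_KEY=%s\n' "$CEREBRAS_API_KEY" >> "$ENV_FILE"
```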


@@ -33,6 +33,7 @@ Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugi
- [Arcee AI (Trinity models)](/providers/arcee)
- [Azure Speech](/providers/azure-speech)
- [BytePlus (International)](/concepts/model-providers#byteplus-international)
+- [Cerebras](/providers/cerebras)
- [Chutes](/providers/chutes)
- [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
- [ComfyUI](/providers/comfy)