---
summary: Cerebras setup (auth + model selection)
title: Cerebras
---
Cerebras provides high-speed, OpenAI-compatible inference on custom hardware. OpenClaw ships a bundled Cerebras provider plugin with a static four-model catalog.

| Property | Value |
|---|---|
| Provider id | cerebras |
| Plugin | bundled, enabledByDefault: true |
| Auth env var | CEREBRAS_API_KEY |
| Onboarding flag | --auth-choice cerebras-api-key |
| Direct CLI flag | --cerebras-api-key <key> |
| API | OpenAI-compatible (openai-completions) |
| Base URL | https://api.cerebras.ai/v1 |
| Default model | cerebras/zai-glm-4.7 |
## Getting started
Create an API key in the [Cerebras Cloud Console](https://cloud.cerebras.ai), then register it with OpenClaw:

<CodeGroup>

```bash Onboarding
openclaw onboard --auth-choice cerebras-api-key
```

```bash Direct flag
openclaw onboard --non-interactive \
  --auth-choice cerebras-api-key \
  --cerebras-api-key "$CEREBRAS_API_KEY"
```

```bash Environment variable
export CEREBRAS_API_KEY=csk-...
```

</CodeGroup>
Then confirm the provider resolves:

```bash
openclaw models list --provider cerebras
```
The list should include all four bundled models. If `CEREBRAS_API_KEY` is unresolved, `openclaw models status --json` reports the missing credential under `auth.unusableProfiles`.
## Non-interactive setup

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice cerebras-api-key \
  --cerebras-api-key "$CEREBRAS_API_KEY"
```
## Built-in catalog
OpenClaw ships a static Cerebras catalog that mirrors the public OpenAI-compatible endpoint. All four models share a 128k context and 8,192 max-output tokens.
| Model ref | Name | Reasoning | Notes |
|---|---|---|---|
| cerebras/zai-glm-4.7 | Z.ai GLM 4.7 | yes | Default model; preview reasoning model |
| cerebras/gpt-oss-120b | GPT OSS 120B | yes | Production reasoning model |
| cerebras/qwen-3-235b-a22b-instruct-2507 | Qwen 3 235B Instruct | no | Preview non-reasoning model |
| cerebras/llama3.1-8b | Llama 3.1 8B | no | Production speed-focused model |
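The Reasoning column is the usual reason to pin a non-default model. A minimal config sketch, reusing the `agents.defaults.model` shape from the manual config on this page, that makes the production reasoning model primary:

```json5
{
  agents: {
    defaults: {
      model: { primary: "cerebras/gpt-oss-120b" },
    },
  },
}
```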
## Manual config
The bundled plugin usually means you only need the API key. Use an explicit `models.providers.cerebras` block when you want to override model metadata or run in `mode: "merge"` against the static catalog:
```json5
{
  env: { CEREBRAS_API_KEY: "csk-..." },
  agents: {
    defaults: {
      model: { primary: "cerebras/zai-glm-4.7" },
    },
  },
  models: {
    mode: "merge",
    providers: {
      cerebras: {
        baseUrl: "https://api.cerebras.ai/v1",
        apiKey: "${CEREBRAS_API_KEY}",
        api: "openai-completions",
        models: [
          { id: "zai-glm-4.7", name: "Z.ai GLM 4.7" },
          { id: "gpt-oss-120b", name: "GPT OSS 120B" },
        ],
      },
    },
  },
}
```
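The `${CEREBRAS_API_KEY}` placeholder is resolved from the environment when the config loads. As an illustration of that substitution pattern (a sketch, not OpenClaw's actual implementation):

```python
import os
import re


def resolve_env_placeholders(value: str) -> str:
    """Replace ${VAR} placeholders with environment values (empty string if unset)."""
    return re.sub(
        r"\$\{([A-Z0-9_]+)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )


os.environ["CEREBRAS_API_KEY"] = "csk-example"
print(resolve_env_placeholders("${CEREBRAS_API_KEY}"))  # csk-example
```

Keeping the literal placeholder in config (rather than pasting the key) means the key never lands in version control.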