openclaw/docs/providers/cerebras.md
Vincent Koc 58c706451e docs(providers): rewrite Cerebras, Groq, and SGLang with code-verified setup
Cerebras (docs/providers/cerebras.md): rewrote against
extensions/cerebras/openclaw.plugin.json. Added a complete properties
summary, CodeGroup for onboarding/direct-flag/env, a Reasoning column on
the four-model catalog table (Z.ai GLM 4.7 and GPT OSS 120B are
reasoning-capable; Qwen 3 235B and Llama 3.1 8B are not), and a
CardGroup of related links.

Groq (docs/providers/groq.md): expanded the catalog from 4 hand-picked
entries to all 18 bundled models from extensions/groq/openclaw.plugin.json
with model refs, reasoning flags, input modalities, and context windows.
Removed a stale 'Mixtral 8x7B' row that does not exist in the bundled
catalog. Surfaced the audio media-understanding contract (whisper-large-v3-turbo,
auto priority 20) as a properties table and explained the per-model
reasoning_effort mapping for qwen/qwen3-32b vs the GPT OSS reasoning
models. Added an onboarding CodeGroup so the API-key step does not skip
'openclaw onboard --auth-choice groq-api-key'.

SGLang (docs/providers/sglang.md): added a properties summary table at
the top, including the Qwen/Qwen3-8B model placeholder from
extensions/sglang/defaults.ts, the supportsStreamingUsage runtime flag,
and the modelPricing.external: false setting. Clarified that the
onboarding choice id is bare 'sglang' (custom method) rather than the
'-api-key' suffix used by other providers, matching the manifest.
2026-05-05 16:58:01 -07:00


---
summary: Cerebras setup (auth + model selection)
title: Cerebras
read_when:
  - You want to use Cerebras with OpenClaw
  - You need the Cerebras API key env var or CLI auth choice
---

Cerebras provides high-speed OpenAI-compatible inference on custom hardware. OpenClaw includes a bundled Cerebras provider plugin with a static four-model catalog.

| Property | Value |
| --- | --- |
| Provider id | `cerebras` |
| Plugin | bundled, `enabledByDefault: true` |
| Auth env var | `CEREBRAS_API_KEY` |
| Onboarding flag | `--auth-choice cerebras-api-key` |
| Direct CLI flag | `--cerebras-api-key <key>` |
| API | OpenAI-compatible (`openai-completions`) |
| Base URL | `https://api.cerebras.ai/v1` |
| Default model | `cerebras/zai-glm-4.7` |

## Getting started

Create an API key in the [Cerebras Cloud Console](https://cloud.cerebras.ai), then hand it to OpenClaw in whichever form fits your workflow:

<CodeGroup>

```bash Onboarding
openclaw onboard --auth-choice cerebras-api-key
```

```bash Direct flag
openclaw onboard --non-interactive \
  --auth-choice cerebras-api-key \
  --cerebras-api-key "$CEREBRAS_API_KEY"
```

```bash Env var
export CEREBRAS_API_KEY=csk-...
```

</CodeGroup>
Verify the provider resolves:

```bash
openclaw models list --provider cerebras
```
The list should include all four bundled models. If `CEREBRAS_API_KEY` is unresolved, `openclaw models status --json` reports the missing credential under `auth.unusableProfiles`.
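
In setup scripts, you can check for the variable up front instead of waiting for an unusable-profile report. A sketch of a generic guard (only the `CEREBRAS_API_KEY` name comes from this doc; the pattern itself is plain shell):

```bash
# Warn early if the Cerebras key is missing from the environment.
# Only the CEREBRAS_API_KEY name comes from this doc; the guard is a
# generic shell pattern, not an OpenClaw feature.
if [ -z "${CEREBRAS_API_KEY:-}" ]; then
  echo "CEREBRAS_API_KEY is not set; export it or pass --cerebras-api-key" >&2
else
  echo "CEREBRAS_API_KEY is set"
fi
```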

## Non-interactive setup

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice cerebras-api-key \
  --cerebras-api-key "$CEREBRAS_API_KEY"
```

## Built-in catalog

OpenClaw ships a static Cerebras catalog that mirrors the public OpenAI-compatible endpoint. All four models share a 128k context and 8,192 max-output tokens.

| Model ref | Name | Reasoning | Notes |
| --- | --- | --- | --- |
| `cerebras/zai-glm-4.7` | Z.ai GLM 4.7 | yes | Default model; preview reasoning model |
| `cerebras/gpt-oss-120b` | GPT OSS 120B | yes | Production reasoning model |
| `cerebras/qwen-3-235b-a22b-instruct-2507` | Qwen 3 235B Instruct | no | Preview non-reasoning model |
| `cerebras/llama3.1-8b` | Llama 3.1 8B | no | Production speed-focused model |

Cerebras marks `zai-glm-4.7` and `qwen-3-235b-a22b-instruct-2507` as preview models, and `llama3.1-8b` plus `qwen-3-235b-a22b-instruct-2507` are documented for deprecation on May 27, 2026. Check Cerebras' supported-models page before relying on them for production workloads.
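
Given those preview and deprecation notes, you may prefer to pin agents to a production model. A minimal sketch (the config shape mirrors the manual-config example later on this page; the model choice is illustrative):

```json5
{
  agents: {
    defaults: {
      // Pin the default away from the preview model to the production
      // reasoning model (illustrative choice, not a recommendation).
      model: { primary: "cerebras/gpt-oss-120b" },
    },
  },
}
```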

## Manual config

The bundled plugin usually means you only need the API key. Use an explicit `models.providers.cerebras` config when you want to override model metadata or run in `mode: "merge"` against the static catalog:

```json5
{
  env: { CEREBRAS_API_KEY: "csk-..." },
  agents: {
    defaults: {
      model: { primary: "cerebras/zai-glm-4.7" },
    },
  },
  models: {
    mode: "merge",
    providers: {
      cerebras: {
        baseUrl: "https://api.cerebras.ai/v1",
        apiKey: "${CEREBRAS_API_KEY}",
        api: "openai-completions",
        models: [
          { id: "zai-glm-4.7", name: "Z.ai GLM 4.7" },
          { id: "gpt-oss-120b", name: "GPT OSS 120B" },
        ],
      },
    },
  },
}
```
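
A shell `export` only covers interactive sessions; one common option for a service is an env file. A sketch (the `~/.openclaw/.env` path comes from this doc; the key value is a placeholder, and exactly how the file is loaded depends on your OpenClaw install):

```ini
# ~/.openclaw/.env (path referenced in this doc; key value is a placeholder)
CEREBRAS_API_KEY=csk-your-key-here
```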
If the Gateway runs as a daemon (launchd, systemd, Docker), make sure `CEREBRAS_API_KEY` is available to that process, for example in `~/.openclaw/.env` or through `env.shellEnv`. A key that sits only in `~/.profile` will not reach a managed service unless the environment is imported separately.

## Related

- Choosing providers, model refs, and failover behavior
- Reasoning effort levels for the two reasoning-capable Cerebras models
- Agent defaults and model configuration
- Auth profiles, switching models, and resolving "no profile" errors