---
summary: Groq setup (auth + model selection)
title: Groq
read_when:
  - You want to use Groq with OpenClaw
  - You need the API key env var or CLI auth choice
---

Groq provides ultra-fast inference on open-source models (Llama, Gemma, Mistral, and more) using custom LPU hardware. OpenClaw connects to Groq through its OpenAI-compatible API.

| Property | Value |
| --- | --- |
| Provider | `groq` |
| Auth | `GROQ_API_KEY` |
| API | OpenAI-compatible |

## Getting started

Create an API key at [console.groq.com/keys](https://console.groq.com/keys) and export it:

```bash
export GROQ_API_KEY="gsk_..."
```

Then select a Groq model:

```json5
{
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.3-70b-versatile" },
    },
  },
}
```

## Config file example

```json5
{
  env: { GROQ_API_KEY: "gsk_..." },
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.3-70b-versatile" },
    },
  },
}
```

## Built-in catalog

OpenClaw ships a manifest-backed Groq catalog for fast provider-filtered model listing. Run `openclaw models list --all --provider groq` to see the bundled rows, or check [console.groq.com/docs/models](https://console.groq.com/docs/models).

| Model | Notes |
| --- | --- |
| Llama 3.3 70B Versatile | General-purpose, large context |
| Llama 3.1 8B Instant | Fast, lightweight |
| Gemma 2 9B | Compact, efficient |
| Mixtral 8x7B | MoE architecture, strong reasoning |

## Reasoning models

OpenClaw maps its shared `/think` levels to Groq's model-specific `reasoning_effort` values. For `qwen/qwen3-32b`, disabled thinking sends `none` and enabled thinking sends `default`. For Groq GPT-OSS reasoning models, OpenClaw sends `low`, `medium`, or `high`; disabled thinking omits `reasoning_effort` because those models do not support a disabled value.
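To make the mapping concrete, here is a sketch of the request bodies that would result (illustrative only: the GPT-OSS model ID is an example, and other request fields such as `messages` are omitted):

```json5
// qwen/qwen3-32b with thinking disabled:
{ model: "qwen/qwen3-32b", reasoning_effort: "none", /* messages, ... */ }

// A GPT-OSS reasoning model at /think high:
{ model: "openai/gpt-oss-120b", reasoning_effort: "high", /* messages, ... */ }

// A GPT-OSS model with thinking disabled: reasoning_effort is omitted entirely.
{ model: "openai/gpt-oss-120b", /* messages, ... */ }
```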

## Audio transcription

Groq also provides fast Whisper-based audio transcription. When configured as a media-understanding provider, OpenClaw uses Groq's `whisper-large-v3-turbo` model to transcribe voice messages through the shared `tools.media.audio` surface.

```json5
{
  tools: {
    media: {
      audio: {
        models: [{ provider: "groq" }],
      },
    },
  },
}
```
| Property | Value |
| --- | --- |
| Shared config path | `tools.media.audio` |
| Default base URL | `https://api.groq.com/openai/v1` |
| Default model | `whisper-large-v3-turbo` |
| API endpoint | OpenAI-compatible `/audio/transcriptions` |

If the Gateway runs as a daemon (launchd/systemd), make sure `GROQ_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).
<Warning>
Keys set only in your interactive shell are not visible to daemon-managed
gateway processes. Use `~/.openclaw/.env` or `env.shellEnv` config for
persistent availability.
</Warning>
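For the daemon case, a minimal sketch of persisting the key (the `~/.openclaw/.env` path comes from the note above; the append-once guard is just one approach):

```shell
# Sketch: persist the key where a daemon-managed gateway process can read it.
# Replace gsk_... with your real key.
mkdir -p ~/.openclaw
grep -q '^GROQ_API_KEY=' ~/.openclaw/.env 2>/dev/null || \
  echo 'GROQ_API_KEY=gsk_...' >> ~/.openclaw/.env
```

Running it a second time leaves the file unchanged, so it is safe to include in a setup script.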
## Related

- Choosing providers, model refs, and failover behavior
- Full config schema including provider and audio settings
- Groq dashboard, API docs, and pricing
- Official Groq model catalog