---
summary: "Groq setup (auth + model selection)"
title: Groq
---
Groq provides ultra-fast inference on open-source models (Llama, Gemma, Mistral, and more) using custom LPU hardware. OpenClaw connects to Groq through its OpenAI-compatible API.
| Property | Value |
|---|---|
| Provider | groq |
| Auth | GROQ_API_KEY |
| API | OpenAI-compatible |
## Getting started
Create an API key at [console.groq.com/keys](https://console.groq.com/keys).

```bash
export GROQ_API_KEY="gsk_..."
```

```json5
{
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.3-70b-versatile" },
    },
  },
}
```

## Config file example
```json5
{
  env: { GROQ_API_KEY: "gsk_..." },
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.3-70b-versatile" },
    },
  },
}
```
## Built-in catalog
OpenClaw ships a manifest-backed Groq catalog for fast provider-filtered model
listing. Run `openclaw models list --all --provider groq` to see the bundled
rows, or check
[console.groq.com/docs/models](https://console.groq.com/docs/models).
| Model | Notes |
|---|---|
| Llama 3.3 70B Versatile | General-purpose, large context |
| Llama 3.1 8B Instant | Fast, lightweight |
| Gemma 2 9B | Compact, efficient |
| Mixtral 8x7B | MoE architecture, strong reasoning |
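For example, a minimal config sketch selecting the lighter Instant model as the default. The `groq/llama-3.1-8b-instant` id is inferred from the catalog table above, not confirmed; verify it with `openclaw models list --all --provider groq`:

```json5
{
  agents: {
    defaults: {
      // Model id inferred from the catalog table; confirm before relying on it
      model: { primary: "groq/llama-3.1-8b-instant" },
    },
  },
}
```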
## Reasoning models
OpenClaw maps its shared `/think` levels to Groq's model-specific
`reasoning_effort` values. For `qwen/qwen3-32b`, disabled thinking sends
`none` and enabled thinking sends `default`. For Groq GPT-OSS reasoning models,
OpenClaw sends `low`, `medium`, or `high`; disabled thinking omits
`reasoning_effort` because those models do not support a disabled value.
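The mapping above can be sketched as follows. This is an illustration of the described behavior only, not OpenClaw's actual internals; the function and type names (`mapReasoningEffort`, `ThinkLevel`) and the GPT-OSS model id in the comments are invented for the example:

```typescript
// Illustrative sketch of the /think → reasoning_effort mapping described above.
type ThinkLevel = "off" | "low" | "medium" | "high";

function mapReasoningEffort(model: string, level: ThinkLevel): string | undefined {
  if (model === "qwen/qwen3-32b") {
    // qwen3-32b: disabled thinking sends "none", enabled sends "default"
    return level === "off" ? "none" : "default";
  }
  // GPT-OSS reasoning models have no disabled value: omit the field entirely
  if (level === "off") return undefined;
  // Otherwise pass "low" | "medium" | "high" through unchanged
  return level;
}
```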
## Audio transcription
Groq also provides fast Whisper-based audio transcription. When configured as a
media-understanding provider, OpenClaw uses Groq's `whisper-large-v3-turbo`
model to transcribe voice messages through the shared `tools.media.audio`
surface.
```json5
{
  tools: {
    media: {
      audio: {
        models: [{ provider: "groq" }],
      },
    },
  },
}
```
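If you want to pin the transcription model explicitly rather than rely on the default, a hedged sketch: the `model` key alongside `provider` in a `models` entry is an assumption here, and the value simply mirrors the model named above:

```json5
{
  tools: {
    media: {
      audio: {
        // Assumed entry shape: explicit model pin next to the provider
        models: [{ provider: "groq", model: "whisper-large-v3-turbo" }],
      },
    },
  },
}
```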
<Warning>
Keys set only in your interactive shell are not visible to daemon-managed
gateway processes. Use `~/.openclaw/.env` or `env.shellEnv` config for
persistent availability.
</Warning>