---
summary: Use Mistral models and Voxtral transcription with OpenClaw
read_when:
title: Mistral
---

# Mistral
OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and audio transcription via Voxtral in media understanding. Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).
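A minimal sketch of pointing memory embeddings at Mistral; this assumes `memorySearch` accepts a `provider` key as described, with the default embedding model left implicit:

```json5
{
  // Route memory-embedding requests to Mistral (sketch; key shape assumed).
  memorySearch: { provider: "mistral" },
}
```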
## CLI setup

```shell
openclaw onboard --auth-choice mistral-api-key
# or non-interactive
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
```
## Config snippet (LLM provider)

```json5
{
  env: { MISTRAL_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
}
```
## Config snippet (audio transcription with Voxtral)

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```
## Notes

- Mistral auth uses `MISTRAL_API_KEY`.
- Provider base URL defaults to `https://api.mistral.ai/v1`.
- Onboarding default model is `mistral/mistral-large-latest`.
- Media-understanding default audio model for Mistral is `voxtral-mini-latest`.
- Media transcription path uses `/v1/audio/transcriptions`.
- Memory embeddings path uses `/v1/embeddings` (default model: `mistral-embed`).
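To sanity-check the transcription endpoint outside OpenClaw, it can be exercised directly with `curl`. This is a sketch: it requires a live `MISTRAL_API_KEY`, and the audio file name is a placeholder.

```shell
# Sketch: POST an audio file to the transcription path listed above.
# "note.ogg" is a placeholder; substitute any supported audio file.
curl -s https://api.mistral.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -F model="voxtral-mini-latest" \
  -F file=@note.ogg
```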