---
summary: Use Mistral models and Voxtral transcription with OpenClaw
read_when:
  - You want to use Mistral models in OpenClaw
  - You need Mistral API key onboarding and model refs
title: Mistral
---

# Mistral

OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and audio transcription via Voxtral in media understanding. Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).
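For the memory-embeddings case, a minimal sketch of the config block might look like the following. Only `memorySearch.provider` is confirmed by this page; the commented-out `model` key is an assumption for illustrating an override of the default embedding model:

```json5
{
  memorySearch: {
    provider: "mistral", // documented: route memory embeddings through Mistral
    // model: "mistral-embed", // hypothetical override; mistral-embed is the default
  },
}
```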

## CLI setup

```bash
openclaw onboard --auth-choice mistral-api-key
# or non-interactive
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
```

## Config snippet (LLM provider)

```json5
{
  env: { MISTRAL_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
}
```

## Config snippet (audio transcription with Voxtral)

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```
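The two snippets compose into a single config. This merged sketch uses only the keys shown above, combining text/image routing and Voxtral transcription in one file:

```json5
{
  // API key for all Mistral-backed features
  env: { MISTRAL_API_KEY: "sk-..." },
  // Default text/image model routed through Mistral
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
  tools: {
    media: {
      audio: {
        // Transcribe audio with Voxtral
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```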

## Notes

- Mistral auth uses `MISTRAL_API_KEY`.
- The provider base URL defaults to `https://api.mistral.ai/v1`.
- The onboarding default model is `mistral/mistral-large-latest`.
- The media-understanding default audio model for Mistral is `voxtral-mini-latest`.
- Media transcription uses the `/v1/audio/transcriptions` path.
- Memory embeddings use the `/v1/embeddings` path (default model: `mistral-embed`).
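The last two paths are the standard public Mistral API routes, so as a sanity check outside OpenClaw you can call them directly with `curl`. This is a sketch assuming a valid `MISTRAL_API_KEY` in the environment; `sample.wav` is a placeholder file name, and the request bodies follow the public Mistral API, not OpenClaw:

```shell
# Embeddings (default model: mistral-embed)
curl -s "https://api.mistral.ai/v1/embeddings" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-embed", "input": ["hello world"]}'

# Audio transcription via Voxtral (multipart upload; sample.wav is a placeholder)
curl -s "https://api.mistral.ai/v1/audio/transcriptions" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -F model="voxtral-mini-latest" \
  -F file=@sample.wav
```

If either call returns an auth error, fix the key before debugging the OpenClaw config.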