openclaw/docs/providers/mistral.md
Neerav Makwana b9179ee4b6 Docs: match Greptile wording for magistral-* line
Made-with: Cursor
2026-04-07 12:52:47 +05:30


---
summary: Use Mistral models and Voxtral transcription with OpenClaw
read_when:
  - You want to use Mistral models in OpenClaw
  - You need Mistral API key onboarding and model refs
title: Mistral
---

# Mistral

OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and audio transcription via Voxtral in media understanding. Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).
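A minimal sketch of enabling Mistral for memory embeddings; only `memorySearch.provider` is documented on this page, so the snippet assumes no further keys are required (the default embedding model, `mistral-embed`, is listed under Notes):

```json5
{
  memorySearch: { provider: "mistral" },
}
```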

## CLI setup

```bash
openclaw onboard --auth-choice mistral-api-key
# or non-interactive
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
```

## Config snippet (LLM provider)

```json5
{
  env: { MISTRAL_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
}
```

## Built-in LLM catalog

OpenClaw currently ships this bundled Mistral catalog:

| Model ref | Input | Context | Max output | Notes |
| --- | --- | --- | --- | --- |
| `mistral/mistral-large-latest` | text, image | 262,144 | 16,384 | Default model |
| `mistral/mistral-medium-2508` | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| `mistral/mistral-small-latest` | text, image | 128,000 | 16,384 | Mistral Small 4; adjustable reasoning via API `reasoning_effort` |
| `mistral/pixtral-large-latest` | text, image | 128,000 | 32,768 | Pixtral |
| `mistral/codestral-latest` | text | 256,000 | 4,096 | Coding |
| `mistral/devstral-medium-latest` | text | 262,144 | 32,768 | Devstral 2 |
| `mistral/magistral-small` | text | 128,000 | 40,000 | Reasoning-enabled |
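Any model ref from the catalog can be placed in the `primary` slot of the LLM provider config shown earlier. For example, routing to the coding model (a sketch reusing that same config shape):

```json5
{
  agents: { defaults: { model: { primary: "mistral/codestral-latest" } } },
}
```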

## Config snippet (audio transcription with Voxtral)

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```

## Adjustable reasoning (`mistral-small-latest`)

`mistral/mistral-small-latest` maps to Mistral Small 4 and supports adjustable reasoning on the Chat Completions API via `reasoning_effort` (`none` minimizes extra thinking in the output; `high` surfaces full thinking traces before the final answer).

OpenClaw maps the session thinking level to Mistral's API:

- `off` / `minimal` → `none`
- `low` / `medium` / `high` / `xhigh` / `adaptive` → `high`

Other bundled Mistral catalog models do not use this parameter; keep using `magistral-*` models when you want Mistral's native reasoning-first behavior.
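The level-to-effort mapping above can be sketched as a small lookup table. This is a hypothetical helper for illustration, not OpenClaw's actual code; the level names and effort values are taken from the mapping list above.

```python
# Hypothetical helper illustrating how OpenClaw's session thinking levels
# map onto Mistral's reasoning_effort parameter, per the docs above.
THINKING_TO_REASONING_EFFORT = {
    "off": "none",
    "minimal": "none",
    "low": "high",
    "medium": "high",
    "high": "high",
    "xhigh": "high",
    "adaptive": "high",
}

def reasoning_effort_for(level: str) -> str:
    """Return the reasoning_effort value to send for a session thinking level."""
    return THINKING_TO_REASONING_EFFORT[level]
```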

## Notes

- Mistral auth uses `MISTRAL_API_KEY`.
- Provider base URL defaults to `https://api.mistral.ai/v1`.
- Onboarding default model is `mistral/mistral-large-latest`.
- Media-understanding default audio model for Mistral is `voxtral-mini-latest`.
- Media transcription path uses `/v1/audio/transcriptions`.
- Memory embeddings path uses `/v1/embeddings` (default model: `mistral-embed`).
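OpenClaw makes these calls internally, but as a sketch of what the endpoint paths look like when invoked directly (request shapes assume Mistral's public API; `recording.mp3` is a placeholder filename):

```bash
# Memory embeddings (default model: mistral-embed)
curl -s "https://api.mistral.ai/v1/embeddings" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-embed", "input": ["hello world"]}'

# Audio transcription via Voxtral (multipart upload)
curl -s "https://api.mistral.ai/v1/audio/transcriptions" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -F "model=voxtral-mini-latest" \
  -F "file=@recording.mp3"
```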