
---
summary: Use MiniMax models in OpenClaw
read_when:
  - You want MiniMax models in OpenClaw
  - You need MiniMax setup guidance
title: MiniMax
---

MiniMax

OpenClaw's MiniMax provider defaults to MiniMax M2.7 and keeps MiniMax M2.5 in the catalog for compatibility.

Model lineup

  • MiniMax-M2.7: default hosted text model.
  • MiniMax-M2.7-highspeed: faster M2.7 text tier.
  • MiniMax-M2.5: previous text model, still available in the MiniMax catalog.
  • MiniMax-M2.5-highspeed: faster M2.5 text tier.
  • MiniMax-VL-01: vision model for text + image inputs.

Choose a setup

MiniMax Coding Plan (OAuth)

Best for: quick setup with the MiniMax Coding Plan via OAuth; no API key required.

Enable the bundled OAuth plugin and authenticate:

openclaw plugins enable minimax  # skip if the plugin is already enabled
openclaw gateway restart  # only needed if the gateway is already running
openclaw onboard --auth-choice minimax-portal

You will be prompted to select an endpoint:

  • Global - International users (api.minimax.io)
  • CN - Users in China (api.minimaxi.com)

See the MiniMax plugin README for details.
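As a sketch of what the endpoint choice amounts to, the two options map to the API hosts listed above. The dictionary and function below are illustrative, not part of OpenClaw:

```python
# Illustrative only: map the onboarding endpoint choice to its API host.
# The hosts come from the endpoint list above; the helper is hypothetical.
ENDPOINT_HOSTS = {
    "global": "https://api.minimax.io",   # international users
    "cn": "https://api.minimaxi.com",     # users in China
}

def endpoint_host(choice: str) -> str:
    """Return the MiniMax API host for a given endpoint choice."""
    key = choice.strip().lower()
    if key not in ENDPOINT_HOSTS:
        raise ValueError(
            f"unknown endpoint {choice!r}; expected one of {sorted(ENDPOINT_HOSTS)}"
        )
    return ENDPOINT_HOSTS[key]
```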

MiniMax M2.7 (API key)

Best for: hosted MiniMax with Anthropic-compatible API.

Configure via CLI:

  • Run openclaw configure
  • Select Model/auth
  • Choose a MiniMax auth option

Or add the provider manually in openclaw.json:
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.7" } } },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.7",
            name: "MiniMax M2.7",
            reasoning: true,
            input: ["text"],
            cost: { input: 0.3, output: 1.2, cacheRead: 0.03, cacheWrite: 0.12 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
          {
            id: "MiniMax-M2.7-highspeed",
            name: "MiniMax M2.7 Highspeed",
            reasoning: true,
            input: ["text"],
            cost: { input: 0.3, output: 1.2, cacheRead: 0.03, cacheWrite: 0.12 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
          {
            id: "MiniMax-M2.5",
            name: "MiniMax M2.5",
            reasoning: true,
            input: ["text"],
            cost: { input: 0.3, output: 1.2, cacheRead: 0.03, cacheWrite: 0.12 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
          {
            id: "MiniMax-M2.5-highspeed",
            name: "MiniMax M2.5 Highspeed",
            reasoning: true,
            input: ["text"],
            cost: { input: 0.3, output: 1.2, cacheRead: 0.03, cacheWrite: 0.12 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
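The cost fields above can be used for rough spend estimates. A minimal sketch, assuming the values are USD per million tokens (the unit is not stated in the config itself, so verify against MiniMax's published pricing):

```python
# Rough per-request cost estimate, assuming cost values are USD per million
# tokens (an assumption; confirm against MiniMax's published pricing).
COST = {"input": 0.3, "output": 1.2, "cacheRead": 0.03, "cacheWrite": 0.12}

def estimate_usd(input_tokens: int, output_tokens: int,
                 cache_read_tokens: int = 0) -> float:
    """Estimate request cost from token counts and per-million-token rates."""
    return (
        input_tokens * COST["input"]
        + output_tokens * COST["output"]
        + cache_read_tokens * COST["cacheRead"]
    ) / 1_000_000

# A 100k-token prompt with an 8,192-token completion:
print(round(estimate_usd(100_000, 8_192), 4))  # → 0.0398
```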

MiniMax M2.7 as fallback (example)

Best for: keeping your strongest latest-generation model as primary and failing over to MiniMax M2.7. The example below uses Opus as a concrete primary; swap in your preferred latest-generation model.

{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": { alias: "primary" },
        "minimax/MiniMax-M2.7": { alias: "minimax" },
      },
      model: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["minimax/MiniMax-M2.7"],
      },
    },
  },
}
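Conceptually, fallback selection tries the primary model first and walks down the list on failure. A simplified sketch of that behavior (an illustration of the concept, not OpenClaw's actual implementation):

```python
# Simplified fallback loop: try the primary model, then each fallback in
# order. Illustrative only — not OpenClaw internals.
def complete_with_fallback(prompt, models, call):
    """models: ordered list of model refs, primary first.
    call: function(model_ref, prompt) returning text or raising on failure."""
    errors = {}
    for ref in models:
        try:
            return call(ref, prompt)
        except Exception as exc:  # a real client would catch specific errors
            errors[ref] = exc
    raise RuntimeError(f"all models failed: {errors}")

# The chain from the config above: Opus primary, MiniMax M2.7 fallback.
chain = ["anthropic/claude-opus-4-6", "minimax/MiniMax-M2.7"]
```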

Optional: Local via LM Studio (manual)

Best for: local inference with LM Studio. We have seen strong results with MiniMax M2.5 on powerful hardware (e.g. a desktop/server) using LM Studio's local server.

Configure manually via openclaw.json:

{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.5-gs32" },
      models: { "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" } },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.5-gs32",
            name: "MiniMax M2.5 GS32",
            reasoning: true,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

Configure via openclaw configure

Use the interactive config wizard to set MiniMax without editing JSON:

  1. Run openclaw configure.
  2. Select Model/auth.
  3. Choose a MiniMax auth option.
  4. Pick your default model when prompted.

Configuration options

  • models.providers.minimax.baseUrl: prefer https://api.minimax.io/anthropic (Anthropic-compatible); https://api.minimax.io/v1 is optional for OpenAI-compatible payloads.
  • models.providers.minimax.api: prefer anthropic-messages; openai-completions is optional for OpenAI-compatible payloads.
  • models.providers.minimax.apiKey: MiniMax API key (MINIMAX_API_KEY).
  • models.providers.minimax.models: define id, name, reasoning, contextWindow, maxTokens, cost.
  • agents.defaults.models: alias models you want in the allowlist.
  • models.mode: keep merge if you want to add MiniMax alongside built-ins.
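The `${MINIMAX_API_KEY}` placeholder is resolved from the environment when the config is loaded. A sketch of how that kind of substitution typically works (the regex and function are illustrative, not OpenClaw's loader):

```python
import os
import re

# Illustrative config-loading step: expand ${VAR} placeholders from the
# environment. OpenClaw's actual loader may behave differently.
_PLACEHOLDER = re.compile(r"\$\{([A-Za-z0-9_]+)\}")

def expand_env(value: str) -> str:
    """Replace each ${VAR} with os.environ[VAR], raising if VAR is unset."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _PLACEHOLDER.sub(repl, value)
```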

Notes

  • Model refs are minimax/<model>.
  • Default text model: MiniMax-M2.7.
  • Alternate text models: MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed.
  • Coding Plan usage API: https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains (requires a coding plan key).
  • Update pricing values in models.json if you need exact cost tracking.
  • Referral link for MiniMax Coding Plan (10% off): https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link
  • See /concepts/model-providers for provider rules.
  • Use openclaw models list and openclaw models set minimax/MiniMax-M2.7 to switch.
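Model refs follow the `provider/model` shape noted above, and the model part is case-sensitive. A small parsing sketch (a hypothetical helper, not an OpenClaw API):

```python
# Hypothetical helper: split a model ref like "minimax/MiniMax-M2.7" into
# (provider, model). The model id keeps its exact case, which matters for
# MiniMax ids.
def parse_model_ref(ref: str) -> tuple[str, str]:
    provider, sep, model = ref.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model', got {ref!r}")
    return provider, model
```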

Troubleshooting

"Unknown model: minimax/MiniMax-M2.7"

This usually means the MiniMax provider isn't configured (no provider entry and no MiniMax auth profile or env key found). A fix for this detection ships in 2026.1.12 (unreleased at the time of writing). To resolve it:

  • Upgrade to 2026.1.12 (or run from source main) and restart the gateway, or
  • Run openclaw configure and select a MiniMax auth option, or
  • Add the models.providers.minimax block manually, or
  • Set MINIMAX_API_KEY (or a MiniMax auth profile) so the provider can be injected.

Model ids are case-sensitive; use them exactly as listed:

  • minimax/MiniMax-M2.7
  • minimax/MiniMax-M2.7-highspeed
  • minimax/MiniMax-M2.5
  • minimax/MiniMax-M2.5-highspeed

Then recheck with:

openclaw models list