docs: tighten subscription guidance and update MiniMax M2.5 refs

This commit is contained in:
Peter Steinberger
2026-03-03 00:02:25 +00:00
parent 3e1ec5ad8b
commit 6b85ec3022
54 changed files with 272 additions and 245 deletions


@@ -40,7 +40,7 @@ New install? Start here: [Getting started](https://docs.openclaw.ai/start/gettin
- **[OpenAI](https://openai.com/)** (ChatGPT/Codex)
Model note: while any model is supported, I strongly recommend **Anthropic Pro/Max (100/200) + Opus 4.6** for long-context strength and better prompt-injection resistance. See [Onboarding](https://docs.openclaw.ai/start/onboarding).
Model note: while many providers/models are supported, for the best experience and lower prompt-injection risk use the strongest latest-generation model available to you. See [Onboarding](https://docs.openclaw.ai/start/onboarding).
## Models (selection + auth)


@@ -828,7 +828,7 @@ Tip: when calling `config.set`/`config.apply`/`config.patch` directly, pass `bas
See [/concepts/models](/concepts/models) for fallback behavior and scanning strategy.
Preferred Anthropic auth (setup-token):
Anthropic setup-token (supported):
```bash
claude setup-token
@@ -836,6 +836,10 @@ openclaw models auth setup-token --provider anthropic
openclaw models status
```
Policy note: this is technical compatibility. Anthropic has blocked some
subscription usage outside Claude Code in the past; verify current Anthropic
terms before relying on setup-token in production.
### `models` (root)
`openclaw models` is an alias for `models status`.


@@ -77,3 +77,4 @@ Notes:
- `setup-token` prompts for a setup-token value (generate it with `claude setup-token` on any machine).
- `paste-token` accepts a token string generated elsewhere or from automation.
- Anthropic policy note: setup-token support is technical compatibility. Anthropic has blocked some subscription usage outside Claude Code in the past, so verify current terms before using it broadly.


@@ -60,6 +60,8 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Optional rotation: `ANTHROPIC_API_KEYS`, `ANTHROPIC_API_KEY_1`, `ANTHROPIC_API_KEY_2`, plus `OPENCLAW_LIVE_ANTHROPIC_KEY` (single override)
- Example model: `anthropic/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice token` (paste setup-token) or `openclaw models auth paste-token --provider anthropic`
- Policy note: setup-token support is technical compatibility; Anthropic has blocked some subscription usage outside Claude Code in the past. Verify current Anthropic terms and decide based on your risk tolerance.
- Recommendation: Anthropic API key auth is the safer, recommended path over subscription setup-token auth.
```json5
{
@@ -75,6 +77,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
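The per-model transport override above can be sketched as follows (a hedged example; the model id `gpt-5.3-codex` is illustrative and should match a model in your catalog):

```json5
{
  agents: {
    defaults: {
      models: {
        // Force SSE for one Codex model; others keep the "auto" default.
        "openai-codex/gpt-5.3-codex": { params: { transport: "sse" } },
      },
    },
  },
}
```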
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
```json5
{
@@ -307,13 +310,13 @@ Synthetic provides Anthropic-compatible models behind the `synthetic` provider:
- Provider: `synthetic`
- Auth: `SYNTHETIC_API_KEY`
- Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.1`
- Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.5`
- CLI: `openclaw onboard --auth-choice synthetic-api-key`
```json5
{
agents: {
defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" } },
defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.5" } },
},
models: {
mode: "merge",
@@ -322,7 +325,7 @@ Synthetic provides Anthropic-compatible models behind the `synthetic` provider:
baseUrl: "https://api.synthetic.new/anthropic",
apiKey: "${SYNTHETIC_API_KEY}",
api: "anthropic-messages",
models: [{ id: "hf:MiniMaxAI/MiniMax-M2.1", name: "MiniMax M2.1" }],
models: [{ id: "hf:MiniMaxAI/MiniMax-M2.5", name: "MiniMax M2.5" }],
},
},
},
@@ -396,8 +399,8 @@ Example (OpenAI-compatible):
{
agents: {
defaults: {
model: { primary: "lmstudio/minimax-m2.1-gs32" },
models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
model: { primary: "lmstudio/minimax-m2.5-gs32" },
models: { "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" } },
},
},
models: {
@@ -408,8 +411,8 @@ Example (OpenAI-compatible):
api: "openai-completions",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1",
id: "minimax-m2.5-gs32",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },


@@ -28,10 +28,11 @@ Related:
- `agents.defaults.imageModel` is used **only when** the primary model can't accept images.
- Per-agent defaults can override `agents.defaults.model` via `agents.list[].model` plus bindings (see [/concepts/multi-agent](/concepts/multi-agent)).
## Quick model picks (anecdotal)
## Quick model policy
- **GLM**: a bit better for coding/tool calling.
- **MiniMax**: better for writing and vibes.
- Set your primary to the strongest latest-generation model available to you.
- Use fallbacks for cost/latency-sensitive tasks and lower-stakes chat.
- For tool-enabled agents or untrusted inputs, avoid older/weaker model tiers.
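As a hedged sketch of this policy (model ids are illustrative, taken from examples elsewhere in these docs), the config could look like:

```json5
{
  agents: {
    defaults: {
      model: {
        // Strongest model as primary; cheaper model as an error fallback.
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["minimax/MiniMax-M2.5"],
      },
    },
  },
}
```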
## Setup wizard (recommended)
@@ -42,8 +43,7 @@ openclaw onboard
```
It can set up model + auth for common providers, including **OpenAI Code (Codex)
subscription** (OAuth) and **Anthropic** (API key recommended; `claude
setup-token` also supported).
subscription** (OAuth) and **Anthropic** (API key or `claude setup-token`).
## Config keys (overview)
@@ -160,7 +160,9 @@ JSON includes `auth.oauth` (warn window + profiles) and `auth.providers`
(effective auth per provider).
Use `--check` for automation (exit `1` when missing/expired, `2` when expiring).
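A minimal automation sketch built on the documented `--check` exit codes (the `openclaw` function below is a stub that only simulates those exit codes for illustration; replace it with the real CLI):

```bash
#!/usr/bin/env bash
# Stub standing in for the real CLI: the documented exit codes are
# 0 = ok, 1 = missing/expired, 2 = expiring soon.
openclaw() { return 2; }

openclaw models status --check
case $? in
  0) echo "auth ok" ;;
  1) echo "auth missing or expired" >&2 ;;
  2) echo "auth expiring soon" >&2 ;;
esac
```

In real automation, drop the stub and let a nonzero exit fail the job or trigger a re-auth alert.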
Preferred Anthropic auth is the Claude Code CLI setup-token (run anywhere; paste on the gateway host if needed):
Auth choice is provider/account dependent. For always-on gateway hosts, API keys are usually the most predictable; subscription token flows are also supported.
Example (Anthropic setup-token):
```bash
claude setup-token


@@ -10,7 +10,9 @@ title: "OAuth"
# OAuth
OpenClaw supports “subscription auth” via OAuth for providers that offer it (notably **OpenAI Codex (ChatGPT OAuth)**). For Anthropic subscriptions, use the **setup-token** flow. This page explains:
OpenClaw supports “subscription auth” via OAuth for providers that offer it (notably **OpenAI Codex (ChatGPT OAuth)**). For Anthropic subscriptions, use the **setup-token** flow. Anthropic subscription use outside Claude Code has been restricted for some users in the past, so treat it as a user-choice risk and verify current Anthropic policy yourself. OpenAI Codex OAuth is explicitly supported for use in external tools like OpenClaw. This page explains:
For Anthropic in production, API key auth is the safer recommended path over subscription setup-token auth.
- how the OAuth **token exchange** works (PKCE)
- where tokens are **stored** (and why)
@@ -54,6 +56,12 @@ For static secret refs and runtime snapshot activation behavior, see [Secrets Ma
## Anthropic setup-token (subscription auth)
<Warning>
Anthropic setup-token support is technical compatibility, not a policy guarantee.
Anthropic has blocked some subscription usage outside Claude Code in the past.
Decide for yourself whether to use subscription auth, and verify Anthropic's current terms.
</Warning>
Run `claude setup-token` on any machine, then paste it into OpenClaw:
```bash
@@ -76,7 +84,7 @@ openclaw models status
OpenClaw's interactive login flows are implemented in `@mariozechner/pi-ai` and wired into the wizards/commands.
### Anthropic (Claude Pro/Max) setup-token
### Anthropic setup-token
Flow shape:
@@ -88,6 +96,8 @@ The wizard path is `openclaw onboard` → auth choice `setup-token` (Anthropic).
### OpenAI Codex (ChatGPT OAuth)
OpenAI Codex OAuth is explicitly supported for use outside the Codex CLI, including OpenClaw workflows.
Flow shape (PKCE):
1. generate PKCE verifier/challenge + random `state`


@@ -462,7 +462,7 @@ const needsNonImageSanitize =
"id": "anthropic/claude-opus-4.6",
"name": "Anthropic: Claude Opus 4.6"
},
{ "id": "minimax/minimax-m2.1:free", "name": "Minimax: Minimax M2.1" }
{ "id": "minimax/minimax-m2.5:free", "name": "Minimax: Minimax M2.5" }
]
}
}


@@ -8,23 +8,26 @@ title: "Authentication"
# Authentication
OpenClaw supports OAuth and API keys for model providers. For Anthropic
accounts, we recommend using an **API key**. For Claude subscription access,
use the long-lived token created by `claude setup-token`.
OpenClaw supports OAuth and API keys for model providers. For always-on gateway
hosts, API keys are usually the most predictable option. Subscription/OAuth
flows are also supported when they match your provider account model.
See [/concepts/oauth](/concepts/oauth) for the full OAuth flow and storage
layout.
For SecretRef-based auth (`env`/`file`/`exec` providers), see [Secrets Management](/gateway/secrets).
## Recommended Anthropic setup (API key)
## Recommended setup (API key, any provider)
If you're using Anthropic directly, use an API key.
If you're running a long-lived gateway, start with an API key for your chosen
provider.
For Anthropic specifically, API key auth is the safe path and is recommended
over subscription setup-token auth.
1. Create an API key in the Anthropic Console.
1. Create an API key in your provider console.
2. Put it on the **gateway host** (the machine running `openclaw gateway`).
```bash
export ANTHROPIC_API_KEY="..."
export <PROVIDER>_API_KEY="..."
openclaw models status
```
@@ -33,7 +36,7 @@ openclaw models status
```bash
cat >> ~/.openclaw/.env <<'EOF'
ANTHROPIC_API_KEY=...
<PROVIDER>_API_KEY=...
EOF
```
@@ -52,8 +55,8 @@ See [Help](/help) for details on env inheritance (`env.shellEnv`,
## Anthropic: setup-token (subscription auth)
For Anthropic, the recommended path is an **API key**. If you're using a Claude
subscription, the setup-token flow is also supported. Run it on the **gateway host**:
If you're using a Claude subscription, the setup-token flow is supported. Run
it on the **gateway host**:
```bash
claude setup-token
@@ -79,6 +82,12 @@ This credential is only authorized for use with Claude Code and cannot be used f
…use an Anthropic API key instead.
<Warning>
Anthropic setup-token support is technical compatibility only. Anthropic has blocked
some subscription usage outside Claude Code in the past. Use it only if you decide
the policy risk is acceptable, and verify Anthropic's current terms yourself.
</Warning>
Manual token entry (any provider; writes `auth-profiles.json` + updates config):
```bash
@@ -164,5 +173,5 @@ is missing, rerun `claude setup-token` and paste the token again.
## Requirements
- Claude Max or Pro subscription (for `claude setup-token`)
- Anthropic subscription account (for `claude setup-token`)
- Claude Code CLI installed (`claude` command available)


@@ -527,7 +527,13 @@ Only enable direct mutable name/email/nick matching with each channel's `dangero
}
```
### Anthropic subscription + API key, MiniMax fallback
### Anthropic setup-token + API key, MiniMax fallback
<Warning>
Anthropic setup-token usage outside Claude Code has been restricted for some
users in the past. Treat this as user-choice risk and verify current Anthropic
terms before depending on subscription auth.
</Warning>
```json5
{
@@ -560,7 +566,7 @@ Only enable direct mutable name/email/nick matching with each channel's `dangero
workspace: "~/.openclaw/workspace",
model: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
fallbacks: ["minimax/MiniMax-M2.5"],
},
},
}
@@ -597,7 +603,7 @@ Only enable direct mutable name/email/nick matching with each channel's `dangero
{
agent: {
workspace: "~/.openclaw/workspace",
model: { primary: "lmstudio/minimax-m2.1-gs32" },
model: { primary: "lmstudio/minimax-m2.5-gs32" },
},
models: {
mode: "merge",
@@ -608,8 +614,8 @@ Only enable direct mutable name/email/nick matching with each channel's `dangero
api: "openai-responses",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1 GS32",
id: "minimax-m2.5-gs32",
name: "MiniMax M2.5 GS32",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },


@@ -825,11 +825,11 @@ Time format in system prompt. Default: `auto` (OS preference).
defaults: {
models: {
"anthropic/claude-opus-4-6": { alias: "opus" },
"minimax/MiniMax-M2.1": { alias: "minimax" },
"minimax/MiniMax-M2.5": { alias: "minimax" },
},
model: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
fallbacks: ["minimax/MiniMax-M2.5"],
},
imageModel: {
primary: "openrouter/qwen/qwen-2.5-vl-72b-instruct:free",
@@ -1895,7 +1895,7 @@ Notes:
agents: {
defaults: {
subagents: {
model: "minimax/MiniMax-M2.1",
model: "minimax/MiniMax-M2.5",
maxConcurrent: 1,
runTimeoutSeconds: 900,
archiveAfterMinutes: 60,
@@ -2111,8 +2111,8 @@ Anthropic-compatible, built-in provider. Shortcut: `openclaw onboard --auth-choi
env: { SYNTHETIC_API_KEY: "sk-..." },
agents: {
defaults: {
model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" },
models: { "synthetic/hf:MiniMaxAI/MiniMax-M2.1": { alias: "MiniMax M2.1" } },
model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.5" },
models: { "synthetic/hf:MiniMaxAI/MiniMax-M2.5": { alias: "MiniMax M2.5" } },
},
},
models: {
@@ -2124,8 +2124,8 @@ Anthropic-compatible, built-in provider. Shortcut: `openclaw onboard --auth-choi
api: "anthropic-messages",
models: [
{
id: "hf:MiniMaxAI/MiniMax-M2.1",
name: "MiniMax M2.1",
id: "hf:MiniMaxAI/MiniMax-M2.5",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
@@ -2143,15 +2143,15 @@ Base URL should omit `/v1` (Anthropic client appends it). Shortcut: `openclaw on
</Accordion>
<Accordion title="MiniMax M2.1 (direct)">
<Accordion title="MiniMax M2.5 (direct)">
```json5
{
agents: {
defaults: {
model: { primary: "minimax/MiniMax-M2.1" },
model: { primary: "minimax/MiniMax-M2.5" },
models: {
"minimax/MiniMax-M2.1": { alias: "Minimax" },
"minimax/MiniMax-M2.5": { alias: "Minimax" },
},
},
},
@@ -2164,8 +2164,8 @@ Base URL should omit `/v1` (Anthropic client appends it). Shortcut: `openclaw on
api: "anthropic-messages",
models: [
{
id: "MiniMax-M2.1",
name: "MiniMax M2.1",
id: "MiniMax-M2.5",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
@@ -2185,7 +2185,7 @@ Set `MINIMAX_API_KEY`. Shortcut: `openclaw onboard --auth-choice minimax-api`.
<Accordion title="Local models (LM Studio)">
See [Local Models](/gateway/local-models). TL;DR: run MiniMax M2.1 via LM Studio Responses API on serious hardware; keep hosted models merged for fallback.
See [Local Models](/gateway/local-models). TL;DR: run MiniMax M2.5 via LM Studio Responses API on serious hardware; keep hosted models merged for fallback.
</Accordion>


@@ -11,18 +11,18 @@ title: "Local Models"
Local is doable, but OpenClaw expects large context + strong defenses against prompt injection. Small cards truncate context and leak safety. Aim high: **≥2 maxed-out Mac Studios or equivalent GPU rig (~$30k+)**. A single **24 GB** GPU works only for lighter prompts with higher latency. Use the **largest / full-size model variant you can run**; aggressively quantized or “small” checkpoints raise prompt-injection risk (see [Security](/gateway/security)).
## Recommended: LM Studio + MiniMax M2.1 (Responses API, full-size)
## Recommended: LM Studio + MiniMax M2.5 (Responses API, full-size)
Best current local stack. Load MiniMax M2.1 in LM Studio, enable the local server (default `http://127.0.0.1:1234`), and use Responses API to keep reasoning separate from final text.
Best current local stack. Load MiniMax M2.5 in LM Studio, enable the local server (default `http://127.0.0.1:1234`), and use Responses API to keep reasoning separate from final text.
```json5
{
agents: {
defaults: {
model: { primary: "lmstudio/minimax-m2.1-gs32" },
model: { primary: "lmstudio/minimax-m2.5-gs32" },
models: {
"anthropic/claude-opus-4-6": { alias: "Opus" },
"lmstudio/minimax-m2.1-gs32": { alias: "Minimax" },
"lmstudio/minimax-m2.5-gs32": { alias: "Minimax" },
},
},
},
@@ -35,8 +35,8 @@ Best current local stack. Load MiniMax M2.1 in LM Studio, enable the local serve
api: "openai-responses",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1 GS32",
id: "minimax-m2.5-gs32",
name: "MiniMax M2.5 GS32",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
@@ -53,7 +53,7 @@ Best current local stack. Load MiniMax M2.1 in LM Studio, enable the local serve
**Setup checklist**
- Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
- In LM Studio, download the **largest MiniMax M2.1 build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
- In LM Studio, download the **largest MiniMax M2.5 build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
- Keep the model loaded; cold-load adds startup latency.
- Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
- For WhatsApp, stick to Responses API so only final text is sent.
@@ -68,11 +68,11 @@ Keep hosted models configured even when running local; use `models.mode: "merge"
defaults: {
model: {
primary: "anthropic/claude-sonnet-4-5",
fallbacks: ["lmstudio/minimax-m2.1-gs32", "anthropic/claude-opus-4-6"],
fallbacks: ["lmstudio/minimax-m2.5-gs32", "anthropic/claude-opus-4-6"],
},
models: {
"anthropic/claude-sonnet-4-5": { alias: "Sonnet" },
"lmstudio/minimax-m2.1-gs32": { alias: "MiniMax Local" },
"lmstudio/minimax-m2.5-gs32": { alias: "MiniMax Local" },
"anthropic/claude-opus-4-6": { alias: "Opus" },
},
},
@@ -86,8 +86,8 @@ Keep hosted models configured even when running local; use `models.mode: "merge"
api: "openai-responses",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1 GS32",
id: "minimax-m2.5-gs32",
name: "MiniMax M2.5 GS32",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },


@@ -516,7 +516,7 @@ Even with strong system prompts, **prompt injection is not solved**. System prom
- Run sensitive tool execution in a sandbox; keep secrets out of the agents reachable filesystem.
- Note: sandboxing is opt-in. If sandbox mode is off, exec runs on the gateway host even though `tools.exec.host` defaults to `sandbox`, and host exec does not require approvals unless you set `host=gateway` and configure exec approvals.
- Limit high-risk tools (`exec`, `browser`, `web_fetch`, `web_search`) to trusted agents or explicit allowlists.
- **Model choice matters:** older/legacy models can be less robust against prompt injection and tool misuse. Prefer modern, instruction-hardened models for any bot with tools. We recommend Anthropic Opus 4.6 (or the latest Opus) because it's strong at recognizing prompt injections (see [“A step forward on safety”](https://www.anthropic.com/news/claude-opus-4-5)).
- **Model choice matters:** older/legacy models can be less robust against prompt injection and tool misuse. Prefer the strongest latest-generation, instruction-hardened model available for any bot with tools.
Red flags to treat as untrusted:
@@ -570,7 +570,7 @@ Prompt injection resistance is **not** uniform across model tiers. Smaller/cheap
Recommendations:
- **Use the latest generation, best-tier model** for any bot that can run tools or touch files/networks.
- **Avoid weaker tiers** (for example, Sonnet or Haiku) for tool-enabled agents or untrusted inboxes.
- **Avoid older/weaker tiers** for tool-enabled agents or untrusted inboxes.
- If you must use a smaller model, **reduce blast radius** (read-only tools, strong sandboxing, minimal filesystem access, strict allowlists).
- When running small models, **enable sandboxing for all sessions** and **disable web_search/web_fetch/browser** unless inputs are tightly controlled.
- For chat-only personal assistants with trusted input and no tools, smaller models are usually fine.


@@ -147,7 +147,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
- [How do I switch models on the fly (without restarting)?](#how-do-i-switch-models-on-the-fly-without-restarting)
- [Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding](#can-i-use-gpt-52-for-daily-tasks-and-codex-53-for-coding)
- [Why do I see "Model … is not allowed" and then no reply?](#why-do-i-see-model-is-not-allowed-and-then-no-reply)
- [Why do I see "Unknown model: minimax/MiniMax-M2.1"?](#why-do-i-see-unknown-model-minimaxminimaxm21)
- [Why do I see "Unknown model: minimax/MiniMax-M2.5"?](#why-do-i-see-unknown-model-minimaxminimaxm25)
- [Can I use MiniMax as my default and OpenAI for complex tasks?](#can-i-use-minimax-as-my-default-and-openai-for-complex-tasks)
- [Are opus / sonnet / gpt built-in shortcuts?](#are-opus-sonnet-gpt-builtin-shortcuts)
- [How do I define/override model shortcuts (aliases)?](#how-do-i-defineoverride-model-shortcuts-aliases)
@@ -688,7 +688,7 @@ Docs: [Update](/cli/update), [Updating](/install/updating).
`openclaw onboard` is the recommended setup path. In **local mode** it walks you through:
- **Model/auth setup** (Anthropic **setup-token** recommended for Claude subscriptions, OpenAI Codex OAuth supported, API keys optional, LM Studio local models supported)
- **Model/auth setup** (provider OAuth/setup-token flows and API keys supported, plus local model options such as LM Studio)
- **Workspace** location + bootstrap files
- **Gateway settings** (bind/port/auth/tailscale)
- **Providers** (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
@@ -703,6 +703,10 @@ No. You can run OpenClaw with **API keys** (Anthropic/OpenAI/others) or with
**local-only models** so your data stays on your device. Subscriptions (Claude
Pro/Max or OpenAI Codex) are optional ways to authenticate those providers.
If you choose Anthropic subscription auth, decide for yourself whether to use it:
Anthropic has blocked some subscription usage outside Claude Code in the past.
OpenAI Codex OAuth is explicitly supported for external tools like OpenClaw.
Docs: [Anthropic](/providers/anthropic), [OpenAI](/providers/openai),
[Local models](/gateway/local-models), [Models](/concepts/models).
@@ -712,9 +716,9 @@ Yes. You can authenticate with a **setup-token**
instead of an API key. This is the subscription path.
Claude Pro/Max subscriptions **do not include an API key**, so this is the
correct approach for subscription accounts. Important: you must verify with
Anthropic that this usage is allowed under their subscription policy and terms.
If you want the most explicit, supported path, use an Anthropic API key.
technical path for subscription accounts. But this is your decision: Anthropic
has blocked some subscription usage outside Claude Code in the past.
If you want the clearest and safest supported path for production, use an Anthropic API key.
### How does Anthropic setup-token auth work
@@ -734,12 +738,15 @@ Copy the token it prints, then choose **Anthropic token (paste setup-token)** in
Yes - via **setup-token**. OpenClaw no longer reuses Claude Code CLI OAuth tokens; use a setup-token or an Anthropic API key. Generate the token anywhere and paste it on the gateway host. See [Anthropic](/providers/anthropic) and [OAuth](/concepts/oauth).
Note: Claude subscription access is governed by Anthropic's terms. For production or multi-user workloads, API keys are usually the safer choice.
Important: this is technical compatibility, not a policy guarantee. Anthropic
has blocked some subscription usage outside Claude Code in the past.
You need to decide whether to use it and verify Anthropic's current terms.
For production or multi-user workloads, Anthropic API key auth is the safer, recommended choice.
### Why am I seeing HTTP 429 rate_limit_error from Anthropic
That means your **Anthropic quota/rate limit** is exhausted for the current window. If you
use a **Claude subscription** (setup-token or Claude Code OAuth), wait for the window to
use a **Claude subscription** (setup-token), wait for the window to
reset or upgrade your plan. If you use an **Anthropic API key**, check the Anthropic Console
for usage/billing and raise limits as needed.
@@ -763,8 +770,9 @@ OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). The wizar
### Do you support OpenAI subscription auth Codex OAuth
Yes. OpenClaw fully supports **OpenAI Code (Codex) subscription OAuth**. The onboarding wizard
can run the OAuth flow for you.
Yes. OpenClaw fully supports **OpenAI Code (Codex) subscription OAuth**.
OpenAI explicitly allows subscription OAuth usage in external tools/workflows
like OpenClaw. The onboarding wizard can run the OAuth flow for you.
See [OAuth](/concepts/oauth), [Model providers](/concepts/model-providers), and [Wizard](/start/wizard).
@@ -781,7 +789,7 @@ This stores OAuth tokens in auth profiles on the gateway host. Details: [Model p
### Is a local model OK for casual chats
Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** MiniMax M2.1 build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security).
Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** MiniMax M2.5 build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security).
### How do I keep hosted model traffic in a specific region
@@ -2028,12 +2036,11 @@ Models are referenced as `provider/model` (example: `anthropic/claude-opus-4-6`)
### What model do you recommend
**Recommended default:** `anthropic/claude-opus-4-6`.
**Good alternative:** `anthropic/claude-sonnet-4-5`.
**Reliable (less character):** `openai/gpt-5.2` - nearly as good as Opus, just less personality.
**Budget:** `zai/glm-4.7`.
**Recommended default:** use the strongest latest-generation model available in your provider stack.
**For tool-enabled or untrusted-input agents:** prioritize model strength over cost.
**For routine/low-stakes chat:** use cheaper fallback models and route by agent role.
MiniMax M2.1 has its own docs: [MiniMax](/providers/minimax) and
MiniMax M2.5 has its own docs: [MiniMax](/providers/minimax) and
[Local models](/gateway/local-models).
Rule of thumb: use the **best model you can afford** for high-stakes work, and a cheaper
@@ -2077,8 +2084,9 @@ Docs: [Models](/concepts/models), [Configure](/cli/configure), [Config](/cli/con
### What do OpenClaw, Flawd, and Krill use for models
- **OpenClaw + Flawd:** Anthropic Opus (`anthropic/claude-opus-4-6`) - see [Anthropic](/providers/anthropic).
- **Krill:** MiniMax M2.1 (`minimax/MiniMax-M2.1`) - see [MiniMax](/providers/minimax).
- These deployments can differ and may change over time; there is no fixed provider recommendation.
- Check the current runtime setting on each gateway with `openclaw models status`.
- For security-sensitive/tool-enabled agents, use the strongest latest-generation model available.
### How do I switch models on the fly without restarting
@@ -2156,8 +2164,8 @@ Fix checklist:
1. Upgrade to **2026.1.12** (or run from source `main`), then restart the gateway.
2. Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key
exists in env/auth profiles so the provider can be injected.
3. Use the exact model id (case-sensitive): `minimax/MiniMax-M2.1` or
`minimax/MiniMax-M2.1-lightning`.
3. Use the exact model id (case-sensitive): `minimax/MiniMax-M2.5` or
`minimax/MiniMax-M2.5-Lightning`.
4. Run:
```bash
@@ -2180,9 +2188,9 @@ Fallbacks are for **errors**, not "hard tasks," so use `/model` or a separate ag
env: { MINIMAX_API_KEY: "sk-...", OPENAI_API_KEY: "sk-..." },
agents: {
defaults: {
model: { primary: "minimax/MiniMax-M2.1" },
model: { primary: "minimax/MiniMax-M2.5" },
models: {
"minimax/MiniMax-M2.1": { alias: "minimax" },
"minimax/MiniMax-M2.5": { alias: "minimax" },
"openai/gpt-5.2": { alias: "gpt" },
},
},


@@ -136,7 +136,7 @@ Live tests are split into two layers so we can isolate failures:
- `pnpm test:live` (or `OPENCLAW_LIVE_TEST=1` if invoking Vitest directly)
- Set `OPENCLAW_LIVE_MODELS=modern` (or `all`, alias for modern) to actually run this suite; otherwise it skips to keep `pnpm test:live` focused on gateway smoke
- How to select models:
- `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.1, Grok 4)
- `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.5, Grok 4)
- `OPENCLAW_LIVE_MODELS=all` is an alias for the modern allowlist
- or `OPENCLAW_LIVE_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,..."` (comma allowlist)
- How to select providers:
@@ -167,7 +167,7 @@ Live tests are split into two layers so we can isolate failures:
- How to enable:
- `pnpm test:live` (or `OPENCLAW_LIVE_TEST=1` if invoking Vitest directly)
- How to select models:
- Default: modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.1, Grok 4)
- Default: modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.5, Grok 4)
- `OPENCLAW_LIVE_GATEWAY_MODELS=all` is an alias for the modern allowlist
- Or set `OPENCLAW_LIVE_GATEWAY_MODELS="provider/model"` (or comma list) to narrow
- How to select providers (avoid “OpenRouter everything”):
@@ -251,7 +251,7 @@ Narrow, explicit allowlists are fastest and least flaky:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Tool calling across several providers:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/minimax-m2.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Google focus (Gemini API key + Antigravity):
- Gemini (API key): `OPENCLAW_LIVE_GATEWAY_MODELS="google/gemini-3-flash-preview" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
@@ -280,10 +280,10 @@ This is the “common models” run we expect to keep working:
- Google (Gemini API): `google/gemini-3-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
- Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
- Z.AI (GLM): `zai/glm-4.7`
- MiniMax: `minimax/minimax-m2.1`
- MiniMax: `minimax/minimax-m2.5`
Run gateway smoke with tools + image:
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.3-codex,anthropic/claude-opus-4-6,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.3-codex,anthropic/claude-opus-4-6,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
### Baseline: tool calling (Read + optional Exec)
@@ -293,7 +293,7 @@ Pick at least one per provider family:
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-5`)
- Google: `google/gemini-3-flash-preview` (or `google/gemini-3-pro-preview`)
- Z.AI (GLM): `zai/glm-4.7`
- MiniMax: `minimax/minimax-m2.1`
- MiniMax: `minimax/minimax-m2.5`
Optional additional coverage (nice to have):

View File

@@ -54,7 +54,7 @@ OpenClaw is a **self-hosted gateway** that connects your favorite chat apps —
- **Agent-native**: built for coding agents with tool use, sessions, memory, and multi-agent routing
- **Open source**: MIT licensed, community-driven
**What do you need?** Node 22+, an API key (Anthropic recommended), and 5 minutes.
**What do you need?** Node 22+, an API key from your chosen provider, and 5 minutes. For best quality and security, use the strongest latest-generation model available.
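If you want to sanity-check the Node requirement first, here is a minimal preflight (a sketch; assumes `node` is on your PATH):

```shell
# Check the installed Node.js major version; OpenClaw needs Node 22+.
node_major="$(node --version | sed 's/^v//' | cut -d. -f1)"
if [ "$node_major" -ge 22 ]; then
  echo "Node $node_major OK"
else
  echo "Node $node_major is too old; install Node 22+"
fi
```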
## How it works

View File

@@ -15,7 +15,7 @@ read_when:
- [flyctl CLI](https://fly.io/docs/hands-on/install-flyctl/) installed
- Fly.io account (free tier works)
- Model auth: Anthropic API key (or other provider keys)
- Model auth: API key for your chosen model provider
- Channel credentials: Discord bot token, Telegram token, etc.
## Beginner quick path

View File

@@ -23,7 +23,7 @@ What I need you to do:
1. Check if Determinate Nix is installed (if not, install it)
2. Create a local flake at ~/code/openclaw-local using templates/agent-first/flake.nix
3. Help me create a Telegram bot (@BotFather) and get my chat ID (@userinfobot)
4. Set up secrets (bot token, Anthropic key) - plain files at ~/.secrets/ is fine
4. Set up secrets (bot token, model provider API key) - plain files at ~/.secrets/ is fine
5. Fill in the template placeholders and run home-manager switch
6. Verify: launchd running, bot responds to messages

View File

@@ -199,24 +199,13 @@ If you omit `capabilities`, the entry is eligible for the list it appears in.
| Audio | OpenAI, Groq, Deepgram, Google, Mistral | Provider transcription (Whisper/Deepgram/Gemini/Voxtral). |
| Video | Google (Gemini API) | Provider video understanding. |
## Recommended providers
## Model selection guidance
**Image**
- Prefer your active model if it supports images.
- Good defaults: `openai/gpt-5.2`, `anthropic/claude-opus-4-6`, `google/gemini-3-pro-preview`.
**Audio**
- `openai/gpt-4o-mini-transcribe`, `groq/whisper-large-v3-turbo`, `deepgram/nova-3`, or `mistral/voxtral-mini-latest`.
- CLI fallback: `whisper-cli` (whisper-cpp) or `whisper`.
- Prefer the strongest latest-generation model available for each media capability when quality and safety matter.
- For tool-enabled agents handling untrusted inputs, avoid older/weaker media models.
- Keep at least one fallback per capability for availability (quality model + faster/cheaper model).
- CLI fallbacks (`whisper-cli`, `whisper`, `gemini`) are useful when provider APIs are unavailable.
- `parakeet-mlx` note: with `--output-dir`, OpenClaw reads `<output-dir>/<media-basename>.txt` when output format is `txt` (or unspecified); non-`txt` formats fall back to stdout.
- Deepgram setup: [Deepgram (audio transcription)](/providers/deepgram).
**Video**
- `google/gemini-3-flash-preview` (fast), `google/gemini-3-pro-preview` (richer).
- CLI fallback: `gemini` CLI (supports `read_file` on video/audio).
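The `parakeet-mlx` note above boils down to a simple path rule; a sketch (the paths are illustrative, not OpenClaw's actual code):

```shell
# With --output-dir and txt (or unspecified) output format, the transcript is
# read from <output-dir>/<media-basename>.txt; other formats fall back to stdout.
media="/media/standup-call.wav"
outdir="/tmp/transcripts"
base="$(basename "${media%.*}")"
transcript="${outdir}/${base}.txt"
echo "$transcript"   # -> /tmp/transcripts/standup-call.txt
```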
## Attachment policy

View File

@@ -1,9 +1,9 @@
---
summary: "Use Claude Max/Pro subscription as an OpenAI-compatible API endpoint"
summary: "Community proxy to expose Claude subscription credentials as an OpenAI-compatible endpoint"
read_when:
- You want to use Claude Max subscription with OpenAI-compatible tools
- You want a local API server that wraps Claude Code CLI
- You want to save money by using subscription instead of API keys
- You want to evaluate subscription-based vs API-key-based Anthropic access
title: "Claude Max API Proxy"
---
@@ -11,6 +11,12 @@ title: "Claude Max API Proxy"
**claude-max-api-proxy** is a community tool that exposes your Claude Max/Pro subscription as an OpenAI-compatible API endpoint. This allows you to use your subscription with any tool that supports the OpenAI API format.
<Warning>
This path is technical compatibility only. Anthropic has blocked some subscription
usage outside Claude Code in the past. You must decide for yourself whether to use
it and verify Anthropic's current terms before relying on it.
</Warning>
## Why Use This?
| Approach | Cost | Best For |
@@ -18,7 +24,7 @@ title: "Claude Max API Proxy"
| Anthropic API | Pay per token (~$15/M input, $75/M output for Opus) | Production apps, high volume |
| Claude Max subscription | $200/month flat | Personal use, development, unlimited usage |
If you have a Claude Max subscription and want to use it with OpenAI-compatible tools, this proxy can save you significant money.
If you have a Claude Max subscription and want to use it with OpenAI-compatible tools, this proxy may reduce cost for some workflows. API keys remain the clearer policy path for production use.
## How It Works

View File

@@ -56,7 +56,7 @@ Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugi
## Community tools
- [Claude Max API Proxy](/providers/claude-max-api-proxy) - Use Claude Max/Pro subscription as an OpenAI-compatible API endpoint
- [Claude Max API Proxy](/providers/claude-max-api-proxy) - Community proxy for Claude subscription credentials (verify Anthropic policy/terms before use)
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,
see [Model providers](/concepts/model-providers).

View File

@@ -1,5 +1,5 @@
---
summary: "Use MiniMax M2.1 in OpenClaw"
summary: "Use MiniMax M2.5 in OpenClaw"
read_when:
- You want MiniMax models in OpenClaw
- You need MiniMax setup guidance
@@ -8,15 +8,15 @@ title: "MiniMax"
# MiniMax
MiniMax is an AI company that builds the **M2/M2.1** model family. The current
coding-focused release is **MiniMax M2.1** (December 23, 2025), built for
MiniMax is an AI company that builds the **M2/M2.5** model family. The current
coding-focused release is **MiniMax M2.5** (December 23, 2025), built for
real-world complex tasks.
Source: [MiniMax M2.1 release note](https://www.minimax.io/news/minimax-m21)
Source: [MiniMax M2.5 release note](https://www.minimax.io/news/minimax-m21)
## Model overview (M2.1)
## Model overview (M2.5)
MiniMax highlights these improvements in M2.1:
MiniMax highlights these improvements in M2.5:
- Stronger **multi-language coding** (Rust, Java, Go, C++, Kotlin, Objective-C, TS/JS).
- Better **web/app development** and aesthetic output quality (including native mobile).
@@ -27,13 +27,13 @@ MiniMax highlights these improvements in M2.1:
Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
- Higher-quality **dialogue and technical writing** outputs.
## MiniMax M2.1 vs MiniMax M2.1 Lightning
## MiniMax M2.5 vs MiniMax M2.5 Lightning
- **Speed:** Lightning is the “fast” variant in MiniMax's pricing docs.
- **Cost:** Pricing shows the same input cost, but Lightning has higher output cost.
- **Coding plan routing:** The Lightning back-end isn't directly available on the MiniMax
coding plan. MiniMax auto-routes most requests to Lightning, but falls back to the
regular M2.1 back-end during traffic spikes.
regular M2.5 back-end during traffic spikes.
## Choose a setup
@@ -56,7 +56,7 @@ You will be prompted to select an endpoint:
See [MiniMax OAuth plugin README](https://github.com/openclaw/openclaw/tree/main/extensions/minimax-portal-auth) for details.
### MiniMax M2.1 (API key)
### MiniMax M2.5 (API key)
**Best for:** hosted MiniMax with Anthropic-compatible API.
@@ -64,12 +64,12 @@ Configure via CLI:
- Run `openclaw configure`
- Select **Model/auth**
- Choose **MiniMax M2.1**
- Choose **MiniMax M2.5**
```json5
{
env: { MINIMAX_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.5" } } },
models: {
mode: "merge",
providers: {
@@ -79,8 +79,8 @@ Configure via CLI:
api: "anthropic-messages",
models: [
{
id: "MiniMax-M2.1",
name: "MiniMax M2.1",
id: "MiniMax-M2.5",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
@@ -94,9 +94,10 @@ Configure via CLI:
}
```
### MiniMax M2.1 as fallback (Opus primary)
### MiniMax M2.5 as fallback (example)
**Best for:** keep Opus 4.6 as primary, fail over to MiniMax M2.1.
**Best for:** keep your strongest latest-generation model as primary, fail over to MiniMax M2.5.
The example below uses Opus as a concrete primary; swap in your preferred latest-generation model.
```json5
{
@@ -104,12 +105,12 @@ Configure via CLI:
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-6": { alias: "opus" },
"minimax/MiniMax-M2.1": { alias: "minimax" },
"anthropic/claude-opus-4-6": { alias: "primary" },
"minimax/MiniMax-M2.5": { alias: "minimax" },
},
model: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
fallbacks: ["minimax/MiniMax-M2.5"],
},
},
},
@@ -119,7 +120,7 @@ Configure via CLI:
### Optional: Local via LM Studio (manual)
**Best for:** local inference with LM Studio.
We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
We have seen strong results with MiniMax M2.5 on powerful hardware (e.g. a
desktop/server) using LM Studio's local server.
Configure manually via `openclaw.json`:
@@ -128,8 +129,8 @@ Configure manually via `openclaw.json`:
{
agents: {
defaults: {
model: { primary: "lmstudio/minimax-m2.1-gs32" },
models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
model: { primary: "lmstudio/minimax-m2.5-gs32" },
models: { "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" } },
},
},
models: {
@@ -141,8 +142,8 @@ Configure manually via `openclaw.json`:
api: "openai-responses",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1 GS32",
id: "minimax-m2.5-gs32",
name: "MiniMax M2.5 GS32",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
@@ -162,7 +163,7 @@ Use the interactive config wizard to set MiniMax without editing JSON:
1. Run `openclaw configure`.
2. Select **Model/auth**.
3. Choose **MiniMax M2.1**.
3. Choose **MiniMax M2.5**.
4. Pick your default model when prompted.
## Configuration options
@@ -181,25 +182,25 @@ Use the interactive config wizard to set MiniMax without editing JSON:
- Update pricing values in `models.json` if you need exact cost tracking.
- Referral link for MiniMax Coding Plan (10% off): [https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link](https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link)
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
- Use `openclaw models list` and `openclaw models set minimax/MiniMax-M2.1` to switch.
- Use `openclaw models list` and `openclaw models set minimax/MiniMax-M2.5` to switch.
## Troubleshooting
### “Unknown model: minimax/MiniMax-M2.1
### “Unknown model: minimax/MiniMax-M2.5
This usually means the **MiniMax provider isn't configured** (no provider entry
and no MiniMax auth profile/env key found). A fix for this detection is in
**2026.1.12** (unreleased at the time of writing). Fix by:
- Upgrading to **2026.1.12** (or run from source `main`), then restarting the gateway.
- Running `openclaw configure` and selecting **MiniMax M2.1**, or
- Running `openclaw configure` and selecting **MiniMax M2.5**, or
- Adding the `models.providers.minimax` block manually, or
- Setting `MINIMAX_API_KEY` (or a MiniMax auth profile) so the provider can be injected.
Make sure the model id matches exactly (ids are **case-sensitive**):
- `minimax/MiniMax-M2.1`
- `minimax/MiniMax-M2.1-lightning`
- `minimax/MiniMax-M2.5`
- `minimax/MiniMax-M2.5-Lightning`
Then recheck with:
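A likely recheck, using the model commands mentioned above (sketch; output will vary by setup):

```shell
openclaw models status
openclaw models list | grep -i minimax
```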

View File

@@ -10,6 +10,7 @@ title: "OpenAI"
OpenAI provides developer APIs for GPT models. Codex supports **ChatGPT sign-in** for subscription
access or **API key** sign-in for usage-based access. Codex cloud requires ChatGPT sign-in.
OpenAI explicitly supports subscription OAuth usage in external tools/workflows like OpenClaw.
## Option A: OpenAI API key (OpenAI Platform)

View File

@@ -23,7 +23,7 @@ openclaw onboard --auth-choice synthetic-api-key
The default model is set to:
```
synthetic/hf:MiniMaxAI/MiniMax-M2.1
synthetic/hf:MiniMaxAI/MiniMax-M2.5
```
## Config example
@@ -33,8 +33,8 @@ synthetic/hf:MiniMaxAI/MiniMax-M2.1
env: { SYNTHETIC_API_KEY: "sk-..." },
agents: {
defaults: {
model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" },
models: { "synthetic/hf:MiniMaxAI/MiniMax-M2.1": { alias: "MiniMax M2.1" } },
model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.5" },
models: { "synthetic/hf:MiniMaxAI/MiniMax-M2.5": { alias: "MiniMax M2.5" } },
},
},
models: {
@@ -46,8 +46,8 @@ synthetic/hf:MiniMaxAI/MiniMax-M2.1
api: "anthropic-messages",
models: [
{
id: "hf:MiniMaxAI/MiniMax-M2.1",
name: "MiniMax M2.1",
id: "hf:MiniMaxAI/MiniMax-M2.5",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
@@ -71,7 +71,7 @@ All models below use cost `0` (input/output/cache).
| Model ID | Context window | Max tokens | Reasoning | Input |
| ------------------------------------------------------ | -------------- | ---------- | --------- | ------------ |
| `hf:MiniMaxAI/MiniMax-M2.1` | 192000 | 65536 | false | text |
| `hf:MiniMaxAI/MiniMax-M2.5` | 192000 | 65536 | false | text |
| `hf:moonshotai/Kimi-K2-Thinking` | 256000 | 8192 | true | text |
| `hf:zai-org/GLM-4.7` | 198000 | 128000 | false | text |
| `hf:deepseek-ai/DeepSeek-R1-0528` | 128000 | 8192 | false | text |

View File

@@ -158,7 +158,7 @@ openclaw models list | grep venice
| `grok-41-fast` | Grok 4.1 Fast | 262k | Reasoning, vision |
| `grok-code-fast-1` | Grok Code Fast 1 | 262k | Reasoning, code |
| `kimi-k2-thinking` | Kimi K2 Thinking | 262k | Reasoning |
| `minimax-m21` | MiniMax M2.1 | 202k | Reasoning |
| `minimax-m21` | MiniMax M2.5 | 202k | Reasoning |
## Model Discovery

View File

@@ -30,7 +30,7 @@ For a high-level overview, see [Onboarding Wizard](/start/wizard).
- Full reset (also removes workspace)
</Step>
<Step title="Model/Auth">
- **Anthropic API key (recommended)**: uses `ANTHROPIC_API_KEY` if present or prompts for a key, then saves it for daemon use.
- **Anthropic API key**: uses `ANTHROPIC_API_KEY` if present or prompts for a key, then saves it for daemon use.
- **Anthropic OAuth (Claude Code CLI)**: on macOS the wizard checks Keychain item "Claude Code-credentials" (choose "Always Allow" so launchd starts don't block); on Linux/Windows it reuses `~/.claude/.credentials.json` if present.
- **Anthropic token (paste setup-token)**: run `claude setup-token` on any machine, then paste the token (you can name it; blank = default).
- **OpenAI Code (Codex) subscription (Codex CLI)**: if `~/.codex/auth.json` exists, the wizard can reuse it.
@@ -44,7 +44,7 @@ For a high-level overview, see [Onboarding Wizard](/start/wizard).
- More detail: [Vercel AI Gateway](/providers/vercel-ai-gateway)
- **Cloudflare AI Gateway**: prompts for Account ID, Gateway ID, and `CLOUDFLARE_AI_GATEWAY_API_KEY`.
- More detail: [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
- **MiniMax M2.1**: config is auto-written.
- **MiniMax M2.5**: config is auto-written.
- More detail: [MiniMax](/providers/minimax)
- **Synthetic (Anthropic-compatible)**: prompts for `SYNTHETIC_API_KEY`.
- More detail: [Synthetic](/providers/synthetic)
@@ -52,7 +52,7 @@ For a high-level overview, see [Onboarding Wizard](/start/wizard).
- **Kimi Coding**: config is auto-written.
- More detail: [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
- **Skip**: no auth configured yet.
- Pick a default model from detected options (or enter provider/model manually).
- Pick a default model from detected options (or enter provider/model manually). For best quality and lower prompt-injection risk, choose the strongest latest-generation model available in your provider stack.
- Wizard runs a model check and warns if the configured model is unknown or missing auth.
- API key storage mode defaults to plaintext auth-profile values. Use `--secret-input-mode ref` to store env-backed refs instead (for example `keyRef: { source: "env", provider: "default", id: "OPENAI_API_KEY" }`).
- OAuth credentials live in `~/.openclaw/credentials/oauth.json`; auth profiles live in `~/.openclaw/agents/<agentId>/agent/auth-profiles.json` (API keys + OAuth).

View File

@@ -116,7 +116,7 @@ What you set:
## Auth and model options
<AccordionGroup>
<Accordion title="Anthropic API key (recommended)">
<Accordion title="Anthropic API key">
Uses `ANTHROPIC_API_KEY` if present or prompts for a key, then saves it for daemon use.
</Accordion>
<Accordion title="Anthropic OAuth (Claude Code CLI)">
@@ -163,7 +163,7 @@ What you set:
Prompts for account ID, gateway ID, and `CLOUDFLARE_AI_GATEWAY_API_KEY`.
More detail: [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway).
</Accordion>
<Accordion title="MiniMax M2.1">
<Accordion title="MiniMax M2.5">
Config is auto-written.
More detail: [MiniMax](/providers/minimax).
</Accordion>

View File

@@ -64,9 +64,9 @@ The wizard starts with **QuickStart** (defaults) vs **Advanced** (full control).
**Local mode (default)** walks you through these steps:
1. **Model/Auth**Anthropic API key (recommended), OpenAI, or Custom Provider
1. **Model/Auth**choose any supported provider/auth flow (API key, OAuth, or setup-token), including Custom Provider
(OpenAI-compatible, Anthropic-compatible, or Unknown auto-detect). Pick a default model.
Security note: if this agent will run tools or process webhook/hooks content, prefer a strong modern model tier and keep tool policy strict. Weaker model tiers are easier to prompt-inject.
Security note: if this agent will run tools or process webhook/hooks content, prefer the strongest latest-generation model available and keep tool policy strict. Weaker/older tiers are easier to prompt-inject.
For non-interactive runs, `--secret-input-mode ref` stores env-backed refs in auth profiles instead of plaintext API key values.
In non-interactive `ref` mode, the provider env var must be set; passing inline key flags without that env var fails fast.
In interactive runs, choosing secret reference mode lets you point at either an environment variable or a configured provider ref (`file` or `exec`), with a fast preflight validation before saving.

View File

@@ -85,13 +85,13 @@ function createOAuthHandler(region: MiniMaxRegion) {
api: "anthropic-messages",
models: [
buildModelDefinition({
id: "MiniMax-M2.1",
name: "MiniMax M2.1",
id: "MiniMax-M2.5",
name: "MiniMax M2.5",
input: ["text"],
}),
buildModelDefinition({
id: "MiniMax-M2.5",
name: "MiniMax M2.5",
id: "MiniMax-M2.5-Lightning",
name: "MiniMax M2.5 Lightning",
input: ["text"],
reasoning: true,
}),
@@ -102,8 +102,10 @@ function createOAuthHandler(region: MiniMaxRegion) {
agents: {
defaults: {
models: {
[modelRef("MiniMax-M2.1")]: { alias: "minimax-m2.1" },
[modelRef("MiniMax-M2.5")]: { alias: "minimax-m2.5" },
[modelRef("MiniMax-M2.5-Lightning")]: {
alias: "minimax-m2.5-lightning",
},
},
},
},

View File

@@ -22,7 +22,7 @@ const CODEX_MODELS = [
];
const GOOGLE_PREFIXES = ["gemini-3"];
const ZAI_PREFIXES = ["glm-5", "glm-4.7", "glm-4.7-flash", "glm-4.7-flashx"];
const MINIMAX_PREFIXES = ["minimax-m2.1", "minimax-m2.5"];
const MINIMAX_PREFIXES = ["minimax-m2.5"];
const XAI_PREFIXES = ["grok-4"];
function matchesPrefix(id: string, prefixes: string[]): boolean {

View File

@@ -4,7 +4,7 @@ import { isTruthyEnvValue } from "../infra/env.js";
const MINIMAX_KEY = process.env.MINIMAX_API_KEY ?? "";
const MINIMAX_BASE_URL = process.env.MINIMAX_BASE_URL?.trim() || "https://api.minimax.io/anthropic";
const MINIMAX_MODEL = process.env.MINIMAX_MODEL?.trim() || "MiniMax-M2.1";
const MINIMAX_MODEL = process.env.MINIMAX_MODEL?.trim() || "MiniMax-M2.5";
const LIVE = isTruthyEnvValue(process.env.MINIMAX_LIVE_TEST) || isTruthyEnvValue(process.env.LIVE);
const describeLive = LIVE && MINIMAX_KEY ? describe : describe.skip;
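The gating above can be mirrored in shell to see when the suite runs (illustrative only; the flag check is simplified to `"1"`, while the real `isTruthyEnvValue` helper accepts other truthy values):

```shell
# Live MiniMax tests run only when a truthy live flag AND an API key are set.
MINIMAX_LIVE_TEST="1"
MINIMAX_API_KEY="sk-test"
LIVE=""
if [ -n "$MINIMAX_API_KEY" ] && { [ "$MINIMAX_LIVE_TEST" = "1" ] || [ "$LIVE" = "1" ]; }; then
  mode="run"
else
  mode="skip"
fi
echo "$mode"
```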

View File

@@ -185,7 +185,7 @@ describe("normalizeModelCompat", () => {
describe("isModernModelRef", () => {
it("excludes opencode minimax variants from modern selection", () => {
expect(isModernModelRef({ provider: "opencode", id: "minimax-m2.1" })).toBe(false);
expect(isModernModelRef({ provider: "opencode", id: "minimax-m2.5" })).toBe(false);
});

View File

@@ -147,8 +147,8 @@ describe("models-config", () => {
api: "anthropic-messages",
models: [
{
id: "MiniMax-M2.1",
name: "MiniMax M2.1",
id: "MiniMax-M2.5",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },

View File

@@ -58,7 +58,7 @@ type ModelsConfig = NonNullable<OpenClawConfig["models"]>;
export type ProviderConfig = NonNullable<ModelsConfig["providers"]>[string];
const MINIMAX_PORTAL_BASE_URL = "https://api.minimax.io/anthropic";
const MINIMAX_DEFAULT_MODEL_ID = "MiniMax-M2.1";
const MINIMAX_DEFAULT_MODEL_ID = "MiniMax-M2.5";
const MINIMAX_DEFAULT_VISION_MODEL_ID = "MiniMax-VL-01";
const MINIMAX_DEFAULT_CONTEXT_WINDOW = 200000;
const MINIMAX_DEFAULT_MAX_TOKENS = 8192;
@@ -585,16 +585,6 @@ function buildMinimaxProvider(): ProviderConfig {
api: "anthropic-messages",
authHeader: true,
models: [
buildMinimaxTextModel({
id: MINIMAX_DEFAULT_MODEL_ID,
name: "MiniMax M2.1",
reasoning: false,
}),
buildMinimaxTextModel({
id: "MiniMax-M2.1-lightning",
name: "MiniMax M2.1 Lightning",
reasoning: false,
}),
buildMinimaxModel({
id: MINIMAX_DEFAULT_VISION_MODEL_ID,
name: "MiniMax VL 01",
@@ -623,12 +613,12 @@ function buildMinimaxPortalProvider(): ProviderConfig {
models: [
buildMinimaxTextModel({
id: MINIMAX_DEFAULT_MODEL_ID,
name: "MiniMax M2.1",
reasoning: false,
name: "MiniMax M2.5",
reasoning: true,
}),
buildMinimaxTextModel({
id: "MiniMax-M2.5",
name: "MiniMax M2.5",
id: "MiniMax-M2.5-Lightning",
name: "MiniMax M2.5 Lightning",
reasoning: true,
}),
],

View File

@@ -98,7 +98,7 @@ describe("models-config", () => {
providerKey: "minimax",
expectedBaseUrl: "https://api.minimax.io/anthropic",
expectedApiKeyRef: "MINIMAX_API_KEY",
expectedModelIds: ["MiniMax-M2.1", "MiniMax-VL-01"],
expectedModelIds: ["MiniMax-M2.5", "MiniMax-VL-01"],
});
});
});
@@ -111,7 +111,7 @@ describe("models-config", () => {
providerKey: "synthetic",
expectedBaseUrl: "https://api.synthetic.new/anthropic",
expectedApiKeyRef: "SYNTHETIC_API_KEY",
expectedModelIds: ["hf:MiniMaxAI/MiniMax-M2.1"],
expectedModelIds: ["hf:MiniMaxAI/MiniMax-M2.5"],
});
});
});

View File

@@ -199,11 +199,11 @@ describe("openclaw-tools: subagents (sessions_spawn model + thinking)", () => {
await expectSpawnUsesConfiguredModel({
config: {
session: { mainKey: "main", scope: "per-sender" },
agents: { defaults: { subagents: { model: "minimax/MiniMax-M2.1" } } },
agents: { defaults: { subagents: { model: "minimax/MiniMax-M2.5" } } },
},
runId: "run-default-model",
callId: "call-default-model",
expectedModel: "minimax/MiniMax-M2.1",
expectedModel: "minimax/MiniMax-M2.5",
});
});
@@ -220,7 +220,7 @@ describe("openclaw-tools: subagents (sessions_spawn model + thinking)", () => {
config: {
session: { mainKey: "main", scope: "per-sender" },
agents: {
defaults: { subagents: { model: "minimax/MiniMax-M2.1" } },
defaults: { subagents: { model: "minimax/MiniMax-M2.5" } },
list: [{ id: "research", subagents: { model: "opencode/claude" } }],
},
},
@@ -235,7 +235,7 @@ describe("openclaw-tools: subagents (sessions_spawn model + thinking)", () => {
config: {
session: { mainKey: "main", scope: "per-sender" },
agents: {
defaults: { model: { primary: "minimax/MiniMax-M2.1" } },
defaults: { model: { primary: "minimax/MiniMax-M2.5" } },
list: [{ id: "research", model: { primary: "opencode/claude" } }],
},
},

View File

@@ -363,7 +363,7 @@ describe("applyExtraParamsToAgent", () => {
agent,
undefined,
"siliconflow",
"Pro/MiniMaxAI/MiniMax-M2.1",
"Pro/MiniMaxAI/MiniMax-M2.5",
undefined,
"off",
);
@@ -371,7 +371,7 @@ describe("applyExtraParamsToAgent", () => {
const model = {
api: "openai-completions",
provider: "siliconflow",
id: "Pro/MiniMaxAI/MiniMax-M2.1",
id: "Pro/MiniMaxAI/MiniMax-M2.5",
} as Model<"openai-completions">;
const context: Context = { messages: [] };
void agent.streamFn?.(model, context, {});

View File

@@ -1,7 +1,7 @@
import type { ModelDefinitionConfig } from "../config/types.js";
export const SYNTHETIC_BASE_URL = "https://api.synthetic.new/anthropic";
export const SYNTHETIC_DEFAULT_MODEL_ID = "hf:MiniMaxAI/MiniMax-M2.1";
export const SYNTHETIC_DEFAULT_MODEL_ID = "hf:MiniMaxAI/MiniMax-M2.5";
export const SYNTHETIC_DEFAULT_MODEL_REF = `synthetic/${SYNTHETIC_DEFAULT_MODEL_ID}`;
export const SYNTHETIC_DEFAULT_COST = {
input: 0,
@@ -13,7 +13,7 @@ export const SYNTHETIC_DEFAULT_COST = {
export const SYNTHETIC_MODEL_CATALOG = [
{
id: SYNTHETIC_DEFAULT_MODEL_ID,
name: "MiniMax M2.1",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
contextWindow: 192000,

View File

@@ -113,7 +113,7 @@ function createMinimaxImageConfig(): OpenClawConfig {
return {
agents: {
defaults: {
model: { primary: "minimax/MiniMax-M2.1" },
model: { primary: "minimax/MiniMax-M2.5" },
imageModel: { primary: "minimax/MiniMax-VL-01" },
},
},
@@ -212,7 +212,7 @@ describe("image tool implicit imageModel config", () => {
vi.stubEnv("OPENAI_API_KEY", "openai-test");
vi.stubEnv("ANTHROPIC_API_KEY", "anthropic-test");
const cfg: OpenClawConfig = {
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.5" } } },
};
expect(resolveImageModelConfigForTool({ cfg, agentDir })).toEqual({
primary: "minimax/MiniMax-VL-01",
@@ -272,7 +272,7 @@ describe("image tool implicit imageModel config", () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {
model: { primary: "minimax/MiniMax-M2.1" },
model: { primary: "minimax/MiniMax-M2.5" },
imageModel: { primary: "openai/gpt-5-mini" },
},
},
@@ -529,7 +529,7 @@ describe("image tool implicit imageModel config", () => {
vi.stubEnv("OPENAI_API_KEY", "openai-test");
const cfg: OpenClawConfig = {
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.5" } } },
};
const tool = requireImageTool(createImageTool({ config: cfg, agentDir, sandbox }));
@@ -605,7 +605,7 @@ describe("image tool implicit imageModel config", () => {
const cfg: OpenClawConfig = {
agents: {
defaults: {
model: { primary: "minimax/MiniMax-M2.1" },
model: { primary: "minimax/MiniMax-M2.5" },
imageModel: { primary: "minimax/MiniMax-VL-01" },
},
},
@@ -673,7 +673,7 @@ describe("image tool MiniMax VLM routing", () => {
const agentDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-minimax-vlm-"));
vi.stubEnv("MINIMAX_API_KEY", "minimax-test");
const cfg: OpenClawConfig = {
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.5" } } },
};
const tool = requireImageTool(createImageTool({ config: cfg, agentDir }));
return { fetch, tool };

View File

@@ -276,7 +276,7 @@ export const VENICE_MODEL_CATALOG = [
},
{
id: "minimax-m21",
name: "MiniMax M2.1 (via Venice)",
name: "MiniMax M2.5 (via Venice)",
reasoning: true,
input: ["text"],
contextWindow: 202752,

View File

@@ -183,7 +183,7 @@ describe("directive behavior", () => {
primary: "anthropic/claude-opus-4-5",
fallbacks: ["openai/gpt-4.1-mini"],
},
imageModel: { primary: "minimax/MiniMax-M2.1" },
imageModel: { primary: "minimax/MiniMax-M2.5" },
models: undefined,
},
});
@@ -206,7 +206,7 @@ describe("directive behavior", () => {
models: {
"anthropic/claude-opus-4-5": {},
"openai/gpt-4.1-mini": {},
"minimax/MiniMax-M2.1": { alias: "minimax" },
"minimax/MiniMax-M2.5": { alias: "minimax" },
},
},
extra: {
@@ -216,14 +216,14 @@ describe("directive behavior", () => {
minimax: {
baseUrl: "https://api.minimax.io/anthropic",
api: "anthropic-messages",
models: [{ id: "MiniMax-M2.1", name: "MiniMax M2.1" }],
models: [{ id: "MiniMax-M2.5", name: "MiniMax M2.5" }],
},
},
},
},
});
expect(configOnlyProviderText).toContain("Models (minimax");
expect(configOnlyProviderText).toContain("minimax/MiniMax-M2.1");
expect(configOnlyProviderText).toContain("minimax/MiniMax-M2.5");
const missingAuthText = await runModelDirectiveText(home, "/model list", {
defaults: {

View File

@@ -119,12 +119,12 @@ describe("directive behavior", () => {
config: {
agents: {
defaults: {
model: { primary: "minimax/MiniMax-M2.1" },
model: { primary: "minimax/MiniMax-M2.5" },
workspace: path.join(home, "openclaw"),
models: {
"minimax/MiniMax-M2.1": {},
"minimax/MiniMax-M2.1-lightning": {},
"lmstudio/minimax-m2.1-gs32": {},
"minimax/MiniMax-M2.5": {},
"minimax/MiniMax-M2.5-Lightning": {},
"lmstudio/minimax-m2.5-gs32": {},
},
},
},
@@ -135,29 +135,29 @@ describe("directive behavior", () => {
baseUrl: "https://api.minimax.io/anthropic",
apiKey: "sk-test",
api: "anthropic-messages",
models: [makeModelDefinition("MiniMax-M2.1", "MiniMax M2.1")],
models: [makeModelDefinition("MiniMax-M2.5", "MiniMax M2.5")],
},
lmstudio: {
baseUrl: "http://127.0.0.1:1234/v1",
apiKey: "lmstudio",
api: "openai-responses",
models: [makeModelDefinition("minimax-m2.1-gs32", "MiniMax M2.1 GS32")],
models: [makeModelDefinition("minimax-m2.5-gs32", "MiniMax M2.5 GS32")],
},
},
},
},
},
{
body: "/model minimax/m2.1",
body: "/model minimax/m2.5",
storePath: path.join(home, "sessions-provider-fuzzy.json"),
config: {
agents: {
defaults: {
model: { primary: "minimax/MiniMax-M2.1" },
model: { primary: "minimax/MiniMax-M2.5" },
workspace: path.join(home, "openclaw"),
models: {
"minimax/MiniMax-M2.1": {},
"minimax/MiniMax-M2.1-lightning": {},
"minimax/MiniMax-M2.5": {},
"minimax/MiniMax-M2.5-Lightning": {},
},
},
},
@@ -169,8 +169,8 @@ describe("directive behavior", () => {
apiKey: "sk-test",
api: "anthropic-messages",
models: [
makeModelDefinition("MiniMax-M2.1", "MiniMax M2.1"),
makeModelDefinition("MiniMax-M2.1-lightning", "MiniMax M2.1 Lightning"),
makeModelDefinition("MiniMax-M2.5", "MiniMax M2.5"),
makeModelDefinition("MiniMax-M2.5-Lightning", "MiniMax M2.5 Lightning"),
],
},
},

View File

@@ -80,7 +80,7 @@ const modelCatalogMocks = vi.hoisted(() => ({
{ provider: "openai", id: "gpt-4.1-mini", name: "GPT-4.1 mini" },
{ provider: "openai", id: "gpt-5.2", name: "GPT-5.2" },
{ provider: "openai-codex", id: "gpt-5.2", name: "GPT-5.2 (Codex)" },
{ provider: "minimax", id: "MiniMax-M2.1", name: "MiniMax M2.1" },
{ provider: "minimax", id: "MiniMax-M2.5", name: "MiniMax M2.5" },
]),
resetModelCatalogCacheForTest: vi.fn(),
}));

View File

@@ -19,7 +19,7 @@ vi.mock("../../agents/session-write-lock.js", () => ({
vi.mock("../../agents/model-catalog.js", () => ({
loadModelCatalog: vi.fn(async () => [
{ provider: "minimax", id: "m2.1", name: "M2.1" },
{ provider: "minimax", id: "m2.5", name: "M2.5" },
{ provider: "openai", id: "gpt-4o-mini", name: "GPT-4o mini" },
]),
}));
@@ -921,7 +921,7 @@ describe("applyResetModelOverride", () => {
});
expect(sessionEntry.providerOverride).toBe("minimax");
expect(sessionEntry.modelOverride).toBe("m2.1");
expect(sessionEntry.modelOverride).toBe("m2.5");
expect(sessionCtx.BodyStripped).toBe("summarize");
});

View File

@@ -132,7 +132,7 @@ export async function applyAuthChoiceMiniMax(
if (params.authChoice === "minimax") {
await applyProviderDefaultModel({
defaultModel: "lmstudio/minimax-m2.1-gs32",
defaultModel: "lmstudio/minimax-m2.5-gs32",
applyDefaultConfig: applyMinimaxConfig,
applyProviderConfig: applyMinimaxProviderConfig,
});

View File

@@ -1230,7 +1230,7 @@ describe("applyAuthChoice", () => {
profileId: "minimax-portal:default",
baseUrl: "https://api.minimax.io/anthropic",
api: "anthropic-messages",
defaultModel: "minimax-portal/MiniMax-M2.1",
defaultModel: "minimax-portal/MiniMax-M2.5",
apiKey: "minimax-oauth",
selectValue: "oauth",
},

View File

@@ -78,7 +78,7 @@ function createApplyAuthChoiceConfig(includeMinimaxProvider = false) {
minimax: {
baseUrl: "https://api.minimax.io/anthropic",
api: "anthropic-messages",
-models: [{ id: "MiniMax-M2.1", name: "MiniMax M2.1" }],
+models: [{ id: "MiniMax-M2.5", name: "MiniMax M2.5" }],
},
}
: {}),
@@ -117,7 +117,7 @@ describe("promptAuthConfig", () => {
"minimax/minimax-m2.5:free",
]);
expect(result.models?.providers?.minimax?.models?.map((model) => model.id)).toEqual([
-"MiniMax-M2.1",
+"MiniMax-M2.5",
]);
});
});

View File

@@ -239,7 +239,7 @@ export function applySyntheticProviderConfig(cfg: OpenClawConfig): OpenClawConfi
const models = { ...cfg.agents?.defaults?.models };
models[SYNTHETIC_DEFAULT_MODEL_REF] = {
...models[SYNTHETIC_DEFAULT_MODEL_REF],
-alias: models[SYNTHETIC_DEFAULT_MODEL_REF]?.alias ?? "MiniMax M2.1",
+alias: models[SYNTHETIC_DEFAULT_MODEL_REF]?.alias ?? "MiniMax M2.5",
};
const providers = { ...cfg.models?.providers };

View File

@@ -25,9 +25,9 @@ export function applyMinimaxProviderConfig(cfg: OpenClawConfig): OpenClawConfig
...models["anthropic/claude-opus-4-6"],
alias: models["anthropic/claude-opus-4-6"]?.alias ?? "Opus",
};
-models["lmstudio/minimax-m2.1-gs32"] = {
-...models["lmstudio/minimax-m2.1-gs32"],
-alias: models["lmstudio/minimax-m2.1-gs32"]?.alias ?? "Minimax",
+models["lmstudio/minimax-m2.5-gs32"] = {
+...models["lmstudio/minimax-m2.5-gs32"],
+alias: models["lmstudio/minimax-m2.5-gs32"]?.alias ?? "Minimax",
};
const providers = { ...cfg.models?.providers };
@@ -38,8 +38,8 @@ export function applyMinimaxProviderConfig(cfg: OpenClawConfig): OpenClawConfig
api: "openai-responses",
models: [
buildMinimaxModelDefinition({
-id: "minimax-m2.1-gs32",
-name: "MiniMax M2.1 GS32",
+id: "minimax-m2.5-gs32",
+name: "MiniMax M2.5 GS32",
reasoning: false,
cost: MINIMAX_LM_STUDIO_COST,
contextWindow: 196608,
@@ -86,7 +86,7 @@ export function applyMinimaxHostedProviderConfig(
export function applyMinimaxConfig(cfg: OpenClawConfig): OpenClawConfig {
const next = applyMinimaxProviderConfig(cfg);
-return applyAgentDefaultModelPrimary(next, "lmstudio/minimax-m2.1-gs32");
+return applyAgentDefaultModelPrimary(next, "lmstudio/minimax-m2.5-gs32");
}
export function applyMinimaxHostedConfig(

View File

@@ -17,7 +17,7 @@ export {
export const DEFAULT_MINIMAX_BASE_URL = "https://api.minimax.io/v1";
export const MINIMAX_API_BASE_URL = "https://api.minimax.io/anthropic";
export const MINIMAX_CN_API_BASE_URL = "https://api.minimaxi.com/anthropic";
-export const MINIMAX_HOSTED_MODEL_ID = "MiniMax-M2.1";
+export const MINIMAX_HOSTED_MODEL_ID = "MiniMax-M2.5";
export const MINIMAX_HOSTED_MODEL_REF = `minimax/${MINIMAX_HOSTED_MODEL_ID}`;
export const DEFAULT_MINIMAX_CONTEXT_WINDOW = 200000;
export const DEFAULT_MINIMAX_MAX_TOKENS = 8192;
@@ -89,11 +89,6 @@ export const ZAI_DEFAULT_COST = {
};
const MINIMAX_MODEL_CATALOG = {
-"MiniMax-M2.1": { name: "MiniMax M2.1", reasoning: false },
-"MiniMax-M2.1-lightning": {
-name: "MiniMax M2.1 Lightning",
-reasoning: false,
-},
"MiniMax-M2.5": { name: "MiniMax M2.5", reasoning: true },
"MiniMax-M2.5-Lightning": { name: "MiniMax M2.5 Lightning", reasoning: true },
} as const;

View File

@@ -371,7 +371,7 @@ describe("applyMinimaxApiConfig", () => {
});
it("does not set reasoning for non-reasoning models", () => {
-const cfg = applyMinimaxApiConfig({}, "MiniMax-M2.1");
+const cfg = applyMinimaxApiConfig({}, "MiniMax-M2.5");
expect(cfg.models?.providers?.minimax?.models[0]?.reasoning).toBe(false);
});
@@ -381,7 +381,7 @@ describe("applyMinimaxApiConfig", () => {
agents: {
defaults: {
models: {
-"minimax/MiniMax-M2.1": {
+"minimax/MiniMax-M2.5": {
alias: "MiniMax",
params: { custom: "value" },
},
@@ -389,9 +389,9 @@ describe("applyMinimaxApiConfig", () => {
},
},
},
-"MiniMax-M2.1",
+"MiniMax-M2.5",
);
-expect(cfg.agents?.defaults?.models?.["minimax/MiniMax-M2.1"]).toMatchObject({
+expect(cfg.agents?.defaults?.models?.["minimax/MiniMax-M2.5"]).toMatchObject({
alias: "Minimax",
params: { custom: "value" },
});
@@ -514,8 +514,8 @@ describe("primary model defaults", () => {
it("sets correct primary model", () => {
const configCases = [
{
-getConfig: () => applyMinimaxApiConfig({}, "MiniMax-M2.1-lightning"),
-primaryModel: "minimax/MiniMax-M2.1-lightning",
+getConfig: () => applyMinimaxApiConfig({}, "MiniMax-M2.5-Lightning"),
+primaryModel: "minimax/MiniMax-M2.5-Lightning",
},
{
getConfig: () => applyZaiConfig({}, { modelId: "glm-5" }),
@@ -645,8 +645,8 @@ describe("provider alias defaults", () => {
it("adds expected alias for provider defaults", () => {
const aliasCases = [
{
-applyConfig: () => applyMinimaxApiConfig({}, "MiniMax-M2.1"),
-modelRef: "minimax/MiniMax-M2.1",
+applyConfig: () => applyMinimaxApiConfig({}, "MiniMax-M2.5"),
+modelRef: "minimax/MiniMax-M2.5",
alias: "Minimax",
},
{

View File

@@ -131,8 +131,8 @@ describe("config identity defaults", () => {
api: "anthropic-messages",
models: [
{
-id: "MiniMax-M2.1",
-name: "MiniMax M2.1",
+id: "MiniMax-M2.5",
+name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: {

View File

@@ -225,7 +225,7 @@ describe("provider usage loading", () => {
remains_time: 600,
current_interval_total_count: 120,
current_interval_usage_count: 30,
-model_name: "MiniMax-M2.1",
+model_name: "MiniMax-M2.5",
},
],
},

View File

@@ -98,7 +98,7 @@ describe("tui session actions", () => {
sessions: [
{
key: "agent:main:main",
-model: "Minimax-M2.1",
+model: "Minimax-M2.5",
modelProvider: "minimax",
},
],
@@ -106,7 +106,7 @@ describe("tui session actions", () => {
await second;
-expect(state.sessionInfo.model).toBe("Minimax-M2.1");
+expect(state.sessionInfo.model).toBe("Minimax-M2.5");
expect(updateAutocompleteProvider).toHaveBeenCalledTimes(2);
expect(updateFooter).toHaveBeenCalledTimes(2);
expect(requestRender).toHaveBeenCalledTimes(2);

View File

@@ -26,7 +26,7 @@ export function isReasoningTagProvider(provider: string | undefined | null): boo
return true;
}
-// Handle Minimax (M2.1 is chatty/reasoning-like)
+// Handle Minimax (M2.5 is chatty/reasoning-like)
if (normalized.includes("minimax")) {
return true;
}