docs(models): refresh minimax kimi glm provider docs

Author: Peter Steinberger
Date: 2026-03-03 00:40:06 +00:00
parent 77ecef1fde
commit 86090b0ff2
5 changed files with 37 additions and 21 deletions

---

@@ -124,7 +124,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Provider: `zai`
- Auth: `ZAI_API_KEY`
-- Example model: `zai/glm-4.7`
+- Example model: `zai/glm-5`
- CLI: `openclaw onboard --auth-choice zai-api-key`
- Aliases: `z.ai/*` and `z-ai/*` normalize to `zai/*`
@@ -178,14 +178,14 @@ Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider:
Kimi K2 model IDs:
-{/*moonshot-kimi-k2-model-refs:start*/ && null}
+{/* moonshot-kimi-k2-model-refs:start */ && null}
- `moonshot/kimi-k2.5`
- `moonshot/kimi-k2-0905-preview`
- `moonshot/kimi-k2-turbo-preview`
- `moonshot/kimi-k2-thinking`
- `moonshot/kimi-k2-thinking-turbo`
-{/*moonshot-kimi-k2-model-refs:end*/ && null}
+{/* moonshot-kimi-k2-model-refs:end */ && null}
```json5
{

---

@@ -148,7 +148,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
- [How do I switch models on the fly (without restarting)?](#how-do-i-switch-models-on-the-fly-without-restarting)
- [Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding](#can-i-use-gpt-52-for-daily-tasks-and-codex-53-for-coding)
- [Why do I see "Model … is not allowed" and then no reply?](#why-do-i-see-model-is-not-allowed-and-then-no-reply)
-- [Why do I see "Unknown model: minimax/MiniMax-M2.5"?](#why-do-i-see-unknown-model-minimaxminimaxm21)
+- [Why do I see "Unknown model: minimax/MiniMax-M2.5"?](#why-do-i-see-unknown-model-minimaxminimaxm25)
- [Can I use MiniMax as my default and OpenAI for complex tasks?](#can-i-use-minimax-as-my-default-and-openai-for-complex-tasks)
- [Are opus / sonnet / gpt built-in shortcuts?](#are-opus-sonnet-gpt-builtin-shortcuts)
- [How do I define/override model shortcuts (aliases)?](#how-do-i-defineoverride-model-shortcuts-aliases)
@@ -2173,7 +2173,7 @@ Model "provider/model" is not allowed. Use /model to list available models.
That error is returned **instead of** a normal reply. Fix: add the model to
`agents.defaults.models`, remove the allowlist, or pick a model from `/model list`.
-### Why do I see "Unknown model: minimax/MiniMax-M2.1"?
+### Why do I see "Unknown model: minimax/MiniMax-M2.5"?
This means the **provider isn't configured** (no MiniMax provider config or auth
profile was found), so the model can't be resolved. A fix for this detection is
@@ -2185,7 +2185,7 @@ Fix checklist:
2. Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key
exists in env/auth profiles so the provider can be injected.
3. Use the exact model id (case-sensitive): `minimax/MiniMax-M2.5` or
-`minimax/MiniMax-M2.5-Lightning`.
+`minimax/MiniMax-M2.5-highspeed` (legacy: `minimax/MiniMax-M2.5-Lightning`).
4. Run:
```bash
@@ -2288,8 +2288,8 @@ Z.AI (GLM models):
{
  agents: {
    defaults: {
-      model: { primary: "zai/glm-4.7" },
-      models: { "zai/glm-4.7": {} },
+      model: { primary: "zai/glm-5" },
+      models: { "zai/glm-5": {} },
    },
  },
  env: { ZAI_API_KEY: "..." },

---

@@ -12,7 +12,7 @@ MiniMax is an AI company that builds the **M2/M2.5** model family. The current
coding-focused release is **MiniMax M2.5** (December 23, 2025), built for
real-world complex tasks.
-Source: [MiniMax M2.5 release note](https://www.minimax.io/news/minimax-m21)
+Source: [MiniMax M2.5 release note](https://www.minimax.io/news/minimax-m25)
## Model overview (M2.5)
@@ -27,13 +27,12 @@ MiniMax highlights these improvements in M2.5:
Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
- Higher-quality **dialogue and technical writing** outputs.
-## MiniMax M2.5 vs MiniMax M2.5 Lightning
-- **Speed:** Lightning is the “fast” variant in MiniMax's pricing docs.
-- **Cost:** Pricing shows the same input cost, but Lightning has higher output cost.
-- **Coding plan routing:** The Lightning back-end isn't directly available on the MiniMax
-coding plan. MiniMax auto-routes most requests to Lightning, but falls back to the
-regular M2.5 back-end during traffic spikes.
+## MiniMax M2.5 vs MiniMax M2.5 Highspeed
+- **Speed:** `MiniMax-M2.5-highspeed` is the official fast tier in MiniMax docs.
+- **Cost:** MiniMax pricing lists the same input cost and a higher output cost for highspeed.
+- **Compatibility:** OpenClaw still accepts legacy `MiniMax-M2.5-Lightning` configs, but prefer
+`MiniMax-M2.5-highspeed` for new setups.
## Choose a setup
@@ -81,9 +80,18 @@ Configure via CLI:
{
  id: "MiniMax-M2.5",
  name: "MiniMax M2.5",
-  reasoning: false,
+  reasoning: true,
  input: ["text"],
-  cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
+  cost: { input: 0.3, output: 1.2, cacheRead: 0.03, cacheWrite: 0.12 },
+  contextWindow: 200000,
+  maxTokens: 8192,
+},
+{
+  id: "MiniMax-M2.5-highspeed",
+  name: "MiniMax M2.5 Highspeed",
+  reasoning: true,
+  input: ["text"],
+  cost: { input: 0.3, output: 1.2, cacheRead: 0.03, cacheWrite: 0.12 },
  contextWindow: 200000,
  maxTokens: 8192,
},
@@ -178,6 +186,7 @@ Use the interactive config wizard to set MiniMax without editing JSON:
## Notes
- Model refs are `minimax/<model>`.
+- Recommended model IDs: `MiniMax-M2.5` and `MiniMax-M2.5-highspeed`.
- Coding Plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding plan key).
- Update pricing values in `models.json` if you need exact cost tracking.
- Referral link for MiniMax Coding Plan (10% off): [https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link](https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link)
@@ -200,7 +209,8 @@ and no MiniMax auth profile/env key found). A fix for this detection is in
Make sure the model id is **case-sensitive**:
- `minimax/MiniMax-M2.5`
-- `minimax/MiniMax-M2.5-Lightning`
+- `minimax/MiniMax-M2.5-highspeed`
+- `minimax/MiniMax-M2.5-Lightning` (legacy)
Then recheck with:

---

@@ -15,14 +15,14 @@ Kimi Coding with `kimi-coding/k2p5`.
Current Kimi K2 model IDs:
-{/*moonshot-kimi-k2-ids:start*/ && null}
+{/* moonshot-kimi-k2-ids:start */ && null}
- `kimi-k2.5`
- `kimi-k2-0905-preview`
- `kimi-k2-turbo-preview`
- `kimi-k2-thinking`
- `kimi-k2-thinking-turbo`
-{/*moonshot-kimi-k2-ids:end*/ && null}
+{/* moonshot-kimi-k2-ids:end */ && null}
```bash
openclaw onboard --auth-choice moonshot-api-key

---

@@ -1,4 +1,4 @@
-export const MOONSHOT_KIMI_K2_DEFAULT_ID = "kimi-k2-0905-preview";
+export const MOONSHOT_KIMI_K2_DEFAULT_ID = "kimi-k2.5";
export const MOONSHOT_KIMI_K2_CONTEXT_WINDOW = 256000;
export const MOONSHOT_KIMI_K2_MAX_TOKENS = 8192;
export const MOONSHOT_KIMI_K2_INPUT = ["text"] as const;
@@ -10,6 +10,12 @@ export const MOONSHOT_KIMI_K2_COST = {
} as const;
export const MOONSHOT_KIMI_K2_MODELS = [
+  {
+    id: "kimi-k2.5",
+    name: "Kimi K2.5",
+    alias: "Kimi K2.5",
+    reasoning: false,
+  },
  {
    id: "kimi-k2-0905-preview",
    name: "Kimi K2 0905 Preview",