Mirror of https://github.com/openclaw/openclaw.git (synced 2026-03-30 19:32:27 +00:00)
docs: remove stale MiniMax M2.5 refs and add image generation docs

After the M2.7-only catalog trim (#54487), update 10 docs files:

- Replace removed M2.5/VL-01 model references across FAQ, wizard, config reference, local-models, and provider pages
- Make the local-models guide model-agnostic (generic LM Studio placeholder)
- Add an image-01 generation section to minimax.md
- Leave third-party catalogs (Synthetic, Venice) unchanged
@@ -247,7 +247,7 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**
 - Example model: `kilocode/anthropic/claude-opus-4.6`
 - CLI: `openclaw onboard --kilocode-api-key <key>`
 - Base URL: `https://api.kilo.ai/api/gateway/`
-- Expanded built-in catalog includes GLM-5 Free, MiniMax M2.5 Free, GPT-5.2, Gemini 3 Pro Preview, Gemini 3 Flash Preview, Grok Code Fast 1, and Kimi K2.5.
+- Expanded built-in catalog includes GLM-5 Free, MiniMax M2.7 Free, GPT-5.2, Gemini 3 Pro Preview, Gemini 3 Flash Preview, Grok Code Fast 1, and Kimi K2.5.

 See [/providers/kilocode](/providers/kilocode) for setup details.

@@ -538,8 +538,8 @@ Example (OpenAI‑compatible):
 {
   agents: {
     defaults: {
-      model: { primary: "lmstudio/minimax-m2.5-gs32" },
-      models: { "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" } },
+      model: { primary: "lmstudio/my-local-model" },
+      models: { "lmstudio/my-local-model": { alias: "Local" } },
     },
   },
   models: {
@@ -550,8 +550,8 @@ Example (OpenAI‑compatible):
       api: "openai-completions",
       models: [
         {
-          id: "minimax-m2.5-gs32",
-          name: "MiniMax M2.5",
+          id: "my-local-model",
+          name: "Local Model",
           reasoning: false,
           input: ["text"],
           cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },

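The `cost` entry in the hunk above prices the local model at zero for every token class. A minimal sketch of how a per-request price would fall out of such a table; the field names mirror the config, but the per-million-token interpretation is an assumption for illustration, not OpenClaw's actual accounting code:

```python
def request_cost(cost: dict, input_tokens: int, output_tokens: int) -> float:
    """Price one request from a per-million-token cost table."""
    return (cost["input"] * input_tokens + cost["output"] * output_tokens) / 1_000_000

# The local-model entry zeroes every rate, so any request prices at 0.
local = {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0}
print(request_cost(local, 5000, 800))  # → 0.0
```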
@@ -617,7 +617,7 @@ terms before depending on subscription auth.
 {
   agent: {
     workspace: "~/.openclaw/workspace",
-    model: { primary: "lmstudio/minimax-m2.5-gs32" },
+    model: { primary: "lmstudio/my-local-model" },
   },
   models: {
     mode: "merge",
@@ -628,8 +628,8 @@ terms before depending on subscription auth.
       api: "openai-responses",
       models: [
         {
-          id: "minimax-m2.5-gs32",
-          name: "MiniMax M2.5 GS32",
+          id: "my-local-model",
+          name: "Local Model",
           reasoning: false,
           input: ["text"],
           cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },

@@ -2356,13 +2356,13 @@ Base URL should omit `/v1` (Anthropic client appends it). Shortcut: `openclaw on
 ```

 Set `MINIMAX_API_KEY`. Shortcut: `openclaw onboard --auth-choice minimax-api`.
-`MiniMax-M2.5` and `MiniMax-M2.5-highspeed` remain available if you prefer the older text models.
+The model catalog now defaults to M2.7 only.

 </Accordion>

 <Accordion title="Local models (LM Studio)">

-See [Local Models](/gateway/local-models). TL;DR: run MiniMax M2.5 via LM Studio Responses API on serious hardware; keep hosted models merged for fallback.
+See [Local Models](/gateway/local-models). TL;DR: run a large local model via LM Studio Responses API on serious hardware; keep hosted models merged for fallback.

 </Accordion>

@@ -13,34 +13,34 @@ Local is doable, but OpenClaw expects large context + strong defenses against pr

 If you want the lowest-friction local setup, start with [Ollama](/providers/ollama) and `openclaw onboard`. This page is the opinionated guide for higher-end local stacks and custom OpenAI-compatible local servers.

-## Recommended: LM Studio + MiniMax M2.5 (Responses API, full-size)
+## Recommended: LM Studio + large local model (Responses API)

-Best current local stack. Load MiniMax M2.5 in LM Studio, enable the local server (default `http://127.0.0.1:1234`), and use Responses API to keep reasoning separate from final text.
+Best current local stack. Load a large model in LM Studio (for example, a full-size Qwen, DeepSeek, or Llama build), enable the local server (default `http://127.0.0.1:1234`), and use Responses API to keep reasoning separate from final text.

 ```json5
 {
   agents: {
     defaults: {
-      model: { primary: "lmstudio/minimax-m2.5-gs32" },
+      model: { primary: "lmstudio/my-local-model" },
       models: {
         "anthropic/claude-opus-4-6": { alias: "Opus" },
-        "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" },
+        "lmstudio/my-local-model": { alias: "Local" },
       },
     },
   },
   models: {
     mode: "merge",
     providers: {
       lmstudio: {
         baseUrl: "http://127.0.0.1:1234/v1",
         apiKey: "lmstudio",
         api: "openai-responses",
         models: [
           {
-            id: "minimax-m2.5-gs32",
-            name: "MiniMax M2.5 GS32",
+            id: "my-local-model",
+            name: "Local Model",
             reasoning: false,
             input: ["text"],
             cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
             contextWindow: 196608,
             maxTokens: 8192,
@@ -55,7 +55,8 @@ Best current local stack. Load MiniMax M2.5 in LM Studio, enable the local serve
 **Setup checklist**

 - Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
-- In LM Studio, download the **largest MiniMax M2.5 build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
+- In LM Studio, download the **largest model build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
+- Replace `my-local-model` with the actual model ID shown in LM Studio.
 - Keep the model loaded; cold-load adds startup latency.
 - Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
 - For WhatsApp, stick to Responses API so only final text is sent.
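The checklist above has you confirm the model via `http://127.0.0.1:1234/v1/models`. A minimal sketch of reading the model ID out of that reply, with no network call; the payload here is a hand-written sample shaped like an OpenAI-compatible model list, and the ID is the docs' placeholder, not a real model:

```python
import json

# Hand-written sample shaped like an OpenAI-compatible /v1/models reply;
# the real IDs depend on what you have loaded in LM Studio.
sample = '{"object": "list", "data": [{"id": "my-local-model", "object": "model"}]}'

def model_ids(payload: str) -> list[str]:
    """Pull model IDs out of an OpenAI-compatible model-list response."""
    return [entry["id"] for entry in json.loads(payload)["data"]]

print(model_ids(sample))  # → ['my-local-model']
```

Whatever ID the server reports is the string that goes after `lmstudio/` in the config.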
@@ -70,11 +71,11 @@ Keep hosted models configured even when running local; use `models.mode: "merge"
     defaults: {
       model: {
         primary: "anthropic/claude-sonnet-4-6",
-        fallbacks: ["lmstudio/minimax-m2.5-gs32", "anthropic/claude-opus-4-6"],
+        fallbacks: ["lmstudio/my-local-model", "anthropic/claude-opus-4-6"],
       },
       models: {
         "anthropic/claude-sonnet-4-6": { alias: "Sonnet" },
-        "lmstudio/minimax-m2.5-gs32": { alias: "MiniMax Local" },
+        "lmstudio/my-local-model": { alias: "Local" },
         "anthropic/claude-opus-4-6": { alias: "Opus" },
       },
     },
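The `primary`/`fallbacks` pair in the hunk above means the agent tries the primary first and walks the fallback list in order when a model is unavailable. A rough sketch of that selection logic, under that assumption; this is illustrative, not OpenClaw's actual routing code:

```python
def pick_model(primary: str, fallbacks: list[str], is_available) -> str:
    """Return the first available model, preferring the primary."""
    for model in [primary, *fallbacks]:
        if is_available(model):
            return model
    raise RuntimeError("no configured model is available")

# Example: primary and the local model are down, so the chain lands on Opus.
up = {
    "anthropic/claude-sonnet-4-6": False,
    "lmstudio/my-local-model": False,
    "anthropic/claude-opus-4-6": True,
}
chosen = pick_model("anthropic/claude-sonnet-4-6",
                    ["lmstudio/my-local-model", "anthropic/claude-opus-4-6"],
                    lambda m: up[m])
print(chosen)  # → anthropic/claude-opus-4-6
```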
@@ -88,8 +89,8 @@ Keep hosted models configured even when running local; use `models.mode: "merge"
       api: "openai-responses",
       models: [
         {
-          id: "minimax-m2.5-gs32",
-          name: "MiniMax M2.5 GS32",
+          id: "my-local-model",
+          name: "Local Model",
           reasoning: false,
           input: ["text"],
           cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },

@@ -633,7 +633,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 </Accordion>

 <Accordion title="Is a local model OK for casual chats?">
-Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** MiniMax M2.5 build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security).
+Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** model build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security).
 </Accordion>

 <Accordion title="How do I keep hosted model traffic in a specific region?">

@@ -14,6 +14,29 @@ OpenClaw's MiniMax provider defaults to **MiniMax M2.7**.

 - `MiniMax-M2.7`: default hosted text model.
 - `MiniMax-M2.7-highspeed`: faster M2.7 text tier.
+- `image-01`: image generation model (text-to-image and image-to-image editing).
+
+## Image generation
+
+The MiniMax plugin registers the `image-01` model for the `image_generate` tool. It supports:
+
+- **Text-to-image generation** with aspect ratio control.
+- **Image-to-image editing** (subject reference) with aspect ratio control.
+- Supported aspect ratios: `1:1`, `16:9`, `4:3`, `3:2`, `2:3`, `3:4`, `9:16`, `21:9`.
+
+To use MiniMax for image generation, set it as the image generation provider:
+
+```json5
+{
+  agents: {
+    defaults: {
+      imageGenerationModel: { primary: "minimax/image-01" },
+    },
+  },
+}
+```
+
+The plugin uses the same `MINIMAX_API_KEY` or OAuth auth as the text models. No additional configuration is needed if MiniMax is already set up.

 ## Choose a setup

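Assembled with a text model, the image-generation setting added in the hunk above sits alongside `model` in the same `defaults` block. A sketch under the assumption that `minimax/MiniMax-M2.7` is the text-model reference (the image line is taken from the docs; the text line is illustrative):

```json5
{
  agents: {
    defaults: {
      model: { primary: "minimax/MiniMax-M2.7" },            // illustrative text primary
      imageGenerationModel: { primary: "minimax/image-01" }, // from the docs above
    },
  },
}
```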
@@ -20,7 +20,7 @@ background.
 ## Recommended: Model Studio (Alibaba Cloud Coding Plan)

 Use [Model Studio](/providers/modelstudio) for officially supported access to
-Qwen models (Qwen 3.5 Plus, GLM-4.7, Kimi K2.5, MiniMax M2.5, and more).
+Qwen models (Qwen 3.5 Plus, GLM-4.7, Kimi K2.5, and more).

 ```bash
 # Global endpoint

@@ -74,7 +74,7 @@ override with a custom `baseUrl` in config.
 - **qwen3-coder-plus**, **qwen3-coder-next** — Qwen coding models
 - **GLM-5** — GLM models via Alibaba
 - **Kimi K2.5** — Moonshot AI via Alibaba
-- **MiniMax-M2.5** — MiniMax via Alibaba
+- **MiniMax-M2.7** — MiniMax via Alibaba

 Some models (qwen3.5-plus, kimi-k2.5) support image input. Context windows range from 200K to 1M tokens.

@@ -46,7 +46,7 @@ For a high-level overview, see [Onboarding (CLI)](/start/wizard).
   - More detail: [Vercel AI Gateway](/providers/vercel-ai-gateway)
 - **Cloudflare AI Gateway**: prompts for Account ID, Gateway ID, and `CLOUDFLARE_AI_GATEWAY_API_KEY`.
   - More detail: [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
-- **MiniMax**: config is auto-written; hosted default is `MiniMax-M2.7` and `MiniMax-M2.5` stays available.
+- **MiniMax**: config is auto-written; hosted default is `MiniMax-M2.7`.
   - More detail: [MiniMax](/providers/minimax)
 - **Synthetic (Anthropic-compatible)**: prompts for `SYNTHETIC_API_KEY`.
   - More detail: [Synthetic](/providers/synthetic)

@@ -174,7 +174,7 @@ What you set:
 More detail: [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway).
 </Accordion>
 <Accordion title="MiniMax">
-Config is auto-written. Hosted default is `MiniMax-M2.7`; `MiniMax-M2.5` stays available.
+Config is auto-written. Hosted default is `MiniMax-M2.7`.
 More detail: [MiniMax](/providers/minimax).
 </Accordion>
 <Accordion title="Synthetic (Anthropic-compatible)">