Mirror of https://github.com/openclaw/openclaw.git, synced 2026-03-12 07:20:45 +00:00
docs: tighten subscription guidance and update MiniMax M2.5 refs
@@ -147,7 +147,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 - [How do I switch models on the fly (without restarting)?](#how-do-i-switch-models-on-the-fly-without-restarting)
 - [Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding](#can-i-use-gpt-52-for-daily-tasks-and-codex-53-for-coding)
 - [Why do I see "Model … is not allowed" and then no reply?](#why-do-i-see-model-is-not-allowed-and-then-no-reply)
-- [Why do I see "Unknown model: minimax/MiniMax-M2.1"?](#why-do-i-see-unknown-model-minimaxminimaxm21)
+- [Why do I see "Unknown model: minimax/MiniMax-M2.5"?](#why-do-i-see-unknown-model-minimaxminimaxm25)
 - [Can I use MiniMax as my default and OpenAI for complex tasks?](#can-i-use-minimax-as-my-default-and-openai-for-complex-tasks)
 - [Are opus / sonnet / gpt built-in shortcuts?](#are-opus-sonnet-gpt-builtin-shortcuts)
 - [How do I define/override model shortcuts (aliases)?](#how-do-i-defineoverride-model-shortcuts-aliases)
@@ -688,7 +688,7 @@ Docs: [Update](/cli/update), [Updating](/install/updating).
 
 `openclaw onboard` is the recommended setup path. In **local mode** it walks you through:
 
-- **Model/auth setup** (Anthropic **setup-token** recommended for Claude subscriptions, OpenAI Codex OAuth supported, API keys optional, LM Studio local models supported)
+- **Model/auth setup** (provider OAuth/setup-token flows and API keys supported, plus local model options such as LM Studio)
 - **Workspace** location + bootstrap files
 - **Gateway settings** (bind/port/auth/tailscale)
 - **Providers** (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
@@ -703,6 +703,10 @@ No. You can run OpenClaw with **API keys** (Anthropic/OpenAI/others) or with
 **local-only models** so your data stays on your device. Subscriptions (Claude
 Pro/Max or OpenAI Codex) are optional ways to authenticate those providers.
 
+If you choose Anthropic subscription auth, decide for yourself whether to use it:
+Anthropic has blocked some subscription usage outside Claude Code in the past.
+OpenAI Codex OAuth is explicitly supported for external tools like OpenClaw.
+
 Docs: [Anthropic](/providers/anthropic), [OpenAI](/providers/openai),
 [Local models](/gateway/local-models), [Models](/concepts/models).
 
@@ -712,9 +716,9 @@ Yes. You can authenticate with a **setup-token**
 instead of an API key. This is the subscription path.
 
 Claude Pro/Max subscriptions **do not include an API key**, so this is the
-correct approach for subscription accounts. Important: you must verify with
-Anthropic that this usage is allowed under their subscription policy and terms.
-If you want the most explicit, supported path, use an Anthropic API key.
+technical path for subscription accounts. But this is your decision: Anthropic
+has blocked some subscription usage outside Claude Code in the past.
+If you want the clearest and safest supported path for production, use an Anthropic API key.
 
 ### How does Anthropic setup-token auth work
 
@@ -734,12 +738,15 @@ Copy the token it prints, then choose **Anthropic token (paste setup-token)** in
 
 Yes - via **setup-token**. OpenClaw no longer reuses Claude Code CLI OAuth tokens; use a setup-token or an Anthropic API key. Generate the token anywhere and paste it on the gateway host. See [Anthropic](/providers/anthropic) and [OAuth](/concepts/oauth).
 
-Note: Claude subscription access is governed by Anthropic's terms. For production or multi-user workloads, API keys are usually the safer choice.
+Important: this is technical compatibility, not a policy guarantee. Anthropic
+has blocked some subscription usage outside Claude Code in the past.
+You need to decide whether to use it and verify Anthropic's current terms.
+For production or multi-user workloads, Anthropic API key auth is the safer, recommended choice.
 
 ### Why am I seeing HTTP 429 rate_limit_error from Anthropic
 
 That means your **Anthropic quota/rate limit** is exhausted for the current window. If you
-use a **Claude subscription** (setup-token or Claude Code OAuth), wait for the window to
+use a **Claude subscription** (setup-token), wait for the window to
 reset or upgrade your plan. If you use an **Anthropic API key**, check the Anthropic Console
 for usage/billing and raise limits as needed.
 
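The 429 guidance above can also be handled client-side. As a hedged sketch (a hypothetical helper, not part of OpenClaw or any provider SDK), exponential backoff on rate-limit errors looks like this:

```typescript
// Hedged sketch: retry a provider call on HTTP 429 with exponential backoff.
// `callModel` is a hypothetical stand-in for any model request function.
async function withBackoff<T>(
  callModel: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callModel();
    } catch (err: any) {
      // Only retry rate-limit errors, and only up to maxRetries times.
      if (err?.status !== 429 || attempt >= maxRetries) throw err;
      const delay = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Backoff only papers over a tight window, though; for sustained 429s the fix is still the plan/limit changes described above.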
@@ -763,8 +770,9 @@ OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). The wizar
 
 ### Do you support OpenAI subscription auth (Codex OAuth)?
 
-Yes. OpenClaw fully supports **OpenAI Code (Codex) subscription OAuth**. The onboarding wizard
-can run the OAuth flow for you.
+Yes. OpenClaw fully supports **OpenAI Code (Codex) subscription OAuth**.
+OpenAI explicitly allows subscription OAuth usage in external tools/workflows
+like OpenClaw. The onboarding wizard can run the OAuth flow for you.
 
 See [OAuth](/concepts/oauth), [Model providers](/concepts/model-providers), and [Wizard](/start/wizard).
 
@@ -781,7 +789,7 @@ This stores OAuth tokens in auth profiles on the gateway host. Details: [Model p
 
 ### Is a local model OK for casual chats
 
-Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** MiniMax M2.1 build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security).
+Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** MiniMax M2.5 build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security).
 
 ### How do I keep hosted model traffic in a specific region
 
@@ -2028,12 +2036,11 @@ Models are referenced as `provider/model` (example: `anthropic/claude-opus-4-6`)
 
 ### What model do you recommend
 
-**Recommended default:** `anthropic/claude-opus-4-6`.
-**Good alternative:** `anthropic/claude-sonnet-4-5`.
-**Reliable (less character):** `openai/gpt-5.2` - nearly as good as Opus, just less personality.
-**Budget:** `zai/glm-4.7`.
+**Recommended default:** use the strongest latest-generation model available in your provider stack.
+**For tool-enabled or untrusted-input agents:** prioritize model strength over cost.
+**For routine/low-stakes chat:** use cheaper fallback models and route by agent role.
 
-MiniMax M2.1 has its own docs: [MiniMax](/providers/minimax) and
+MiniMax M2.5 has its own docs: [MiniMax](/providers/minimax) and
 [Local models](/gateway/local-models).
 
 Rule of thumb: use the **best model you can afford** for high-stakes work, and a cheaper
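The role-based routing recommended in this hunk can be pictured as a small lookup. This is an illustrative sketch only; the role names and the choice of model ids here are assumptions, not OpenClaw defaults:

```typescript
// Hedged sketch: pick a model per agent role, following the rule of thumb
// above (strongest model for high-stakes work, cheaper model for routine chat).
// Role names and model ids are illustrative assumptions.
type Role = "high-stakes" | "routine";

function modelForRole(role: Role): string {
  switch (role) {
    case "high-stakes":
      return "anthropic/claude-opus-4-6"; // strongest available model
    case "routine":
      return "zai/glm-4.7"; // cheaper model for low-stakes chat
  }
}
```

In OpenClaw itself this routing is expressed through per-agent config rather than code, but the decision table is the same.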
@@ -2077,8 +2084,9 @@ Docs: [Models](/concepts/models), [Configure](/cli/configure), [Config](/cli/con
 
 ### What do OpenClaw, Flawd, and Krill use for models
 
-- **OpenClaw + Flawd:** Anthropic Opus (`anthropic/claude-opus-4-6`) - see [Anthropic](/providers/anthropic).
-- **Krill:** MiniMax M2.1 (`minimax/MiniMax-M2.1`) - see [MiniMax](/providers/minimax).
+- These deployments can differ and may change over time; there is no fixed provider recommendation.
+- Check the current runtime setting on each gateway with `openclaw models status`.
+- For security-sensitive/tool-enabled agents, use the strongest latest-generation model available.
 
 ### How do I switch models on the fly without restarting
 
@@ -2156,8 +2164,8 @@ Fix checklist:
 1. Upgrade to **2026.1.12** (or run from source `main`), then restart the gateway.
 2. Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key
    exists in env/auth profiles so the provider can be injected.
-3. Use the exact model id (case-sensitive): `minimax/MiniMax-M2.1` or
-   `minimax/MiniMax-M2.1-lightning`.
+3. Use the exact model id (case-sensitive): `minimax/MiniMax-M2.5` or
+   `minimax/MiniMax-M2.5-Lightning`.
 4. Run:
 
 ```bash
@@ -2180,9 +2188,9 @@ Fallbacks are for **errors**, not "hard tasks," so use `/model` or a separate ag
   env: { MINIMAX_API_KEY: "sk-...", OPENAI_API_KEY: "sk-..." },
   agents: {
     defaults: {
-      model: { primary: "minimax/MiniMax-M2.1" },
+      model: { primary: "minimax/MiniMax-M2.5" },
       models: {
-        "minimax/MiniMax-M2.1": { alias: "minimax" },
+        "minimax/MiniMax-M2.5": { alias: "minimax" },
         "openai/gpt-5.2": { alias: "gpt" },
       },
     },
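The alias table in the config above is essentially a shortcut-to-id lookup. As a hedged sketch (a hypothetical resolver, not OpenClaw's actual implementation), resolving `/model minimax` might work like this:

```typescript
// Hedged sketch: resolve a user-supplied shortcut to a full provider/model id.
// The alias entries mirror the config above; the resolver itself is hypothetical.
const aliases: Record<string, string> = {
  minimax: "minimax/MiniMax-M2.5",
  gpt: "openai/gpt-5.2",
};

function resolveModel(input: string): string {
  // Full `provider/model` ids pass through unchanged;
  // bare shortcuts go through the alias table.
  if (input.includes("/")) return input;
  const resolved = aliases[input];
  if (!resolved) throw new Error(`Unknown model shortcut: ${input}`);
  return resolved;
}
```

This is also why aliases and exact ids can coexist: an alias never shadows a fully qualified `provider/model` string.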
@@ -136,7 +136,7 @@ Live tests are split into two layers so we can isolate failures:
   - `pnpm test:live` (or `OPENCLAW_LIVE_TEST=1` if invoking Vitest directly)
   - Set `OPENCLAW_LIVE_MODELS=modern` (or `all`, alias for modern) to actually run this suite; otherwise it skips to keep `pnpm test:live` focused on gateway smoke
 - How to select models:
-  - `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.1, Grok 4)
+  - `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.5, Grok 4)
   - `OPENCLAW_LIVE_MODELS=all` is an alias for the modern allowlist
   - or `OPENCLAW_LIVE_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,..."` (comma allowlist)
 - How to select providers:
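The selection rules in this hunk (keyword `modern`/`all` vs. a comma allowlist, skip when unset) can be sketched as a tiny parser. This is illustrative only, not the project's actual test harness code, and the allowlist contents are abbreviated:

```typescript
// Hedged sketch: interpret OPENCLAW_LIVE_MODELS the way the docs describe.
// The real modern allowlist is longer; entries here are abbreviated.
const MODERN_ALLOWLIST = [
  "anthropic/claude-opus-4-6",
  "openai/gpt-5.2",
  "minimax/MiniMax-M2.5",
  // ...remaining modern models elided
];

function selectModels(envValue: string | undefined): string[] {
  if (!envValue) return []; // unset: the suite skips
  // "all" is documented as an alias for the modern allowlist.
  if (envValue === "modern" || envValue === "all") return MODERN_ALLOWLIST;
  // Otherwise treat the value as a comma-separated explicit allowlist.
  return envValue.split(",").map((m) => m.trim()).filter(Boolean);
}
```

Explicit comma allowlists bypass the keyword path entirely, which is why narrow lists are the fastest and least flaky way to run these suites.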
@@ -167,7 +167,7 @@ Live tests are split into two layers so we can isolate failures:
 - How to enable:
   - `pnpm test:live` (or `OPENCLAW_LIVE_TEST=1` if invoking Vitest directly)
 - How to select models:
-  - Default: modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.1, Grok 4)
+  - Default: modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.5, Grok 4)
   - `OPENCLAW_LIVE_GATEWAY_MODELS=all` is an alias for the modern allowlist
   - Or set `OPENCLAW_LIVE_GATEWAY_MODELS="provider/model"` (or comma list) to narrow
 - How to select providers (avoid “OpenRouter everything”):
@@ -251,7 +251,7 @@ Narrow, explicit allowlists are fastest and least flaky:
   - `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
 
 - Tool calling across several providers:
-  - `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
+  - `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/minimax-m2.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
 
 - Google focus (Gemini API key + Antigravity):
   - Gemini (API key): `OPENCLAW_LIVE_GATEWAY_MODELS="google/gemini-3-flash-preview" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
@@ -280,10 +280,10 @@ This is the “common models” run we expect to keep working:
 - Google (Gemini API): `google/gemini-3-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
 - Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
 - Z.AI (GLM): `zai/glm-4.7`
-- MiniMax: `minimax/minimax-m2.1`
+- MiniMax: `minimax/minimax-m2.5`
 
 Run gateway smoke with tools + image:
-`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.3-codex,anthropic/claude-opus-4-6,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
+`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.3-codex,anthropic/claude-opus-4-6,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
 
 ### Baseline: tool calling (Read + optional Exec)
 
@@ -293,7 +293,7 @@ Pick at least one per provider family:
 - Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-5`)
 - Google: `google/gemini-3-flash-preview` (or `google/gemini-3-pro-preview`)
 - Z.AI (GLM): `zai/glm-4.7`
-- MiniMax: `minimax/minimax-m2.1`
+- MiniMax: `minimax/minimax-m2.5`
 
 Optional additional coverage (nice to have):
 