Mirror of https://github.com/openclaw/openclaw.git (synced 2026-05-03 19:40:24 +00:00)
refactor(providers): remove core default and usage bias
@@ -163,7 +163,7 @@ Run an isolated agent turn:
 curl -X POST http://127.0.0.1:18789/hooks/agent \
   -H 'Authorization: Bearer SECRET' \
   -H 'Content-Type: application/json' \
-  -d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.2-mini"}'
+  -d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.4-mini"}'
 ```
 
 Fields: `message` (required), `name`, `agentId`, `wakeMode`, `deliver`, `channel`, `to`, `model`, `thinking`, `timeoutSeconds`.

@@ -99,7 +99,7 @@ You can switch models for the current session without restarting:
 /model
 /model list
 /model 3
-/model openai/gpt-5.2
+/model openai/gpt-5.4
 /model status
 ```

@@ -980,7 +980,7 @@ Time format in system prompt. Default: `auto` (OS preference).
 - `pdfMaxPages`: default maximum pages considered by extraction fallback mode in the `pdf` tool.
 - `verboseDefault`: default verbose level for agents. Values: `"off"`, `"on"`, `"full"`. Default: `"off"`.
 - `elevatedDefault`: default elevated-output level for agents. Values: `"off"`, `"on"`, `"ask"`, `"full"`. Default: `"on"`.
-- `model.primary`: format `provider/model` (e.g. `anthropic/claude-opus-4-6`). If you omit the provider, OpenClaw assumes `anthropic` (deprecated).
+- `model.primary`: format `provider/model` (e.g. `openai/gpt-5.4`). If you omit the provider, OpenClaw assumes the configured default provider (currently `openai`; deprecated fallback behavior, so prefer explicit `provider/model`).
 - `models`: the configured model catalog and allowlist for `/model`. Each entry can include `alias` (shortcut) and `params` (provider-specific, for example `temperature`, `maxTokens`, `cacheRetention`, `context1m`).
 - `params`: global default provider parameters applied to all models. Set at `agents.defaults.params` (e.g. `{ cacheRetention: "long" }`).
 - `params` merge precedence (config): `agents.defaults.params` (global base) is overridden by `agents.defaults.models["provider/model"].params` (per-model), then `agents.list[].params` (matching agent id) overrides by key. See [Prompt Caching](/reference/prompt-caching) for details.

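The merge precedence described in the last bullet of this hunk amounts to a key-wise object merge. Below is a minimal sketch of that order; `mergeParams` and the sample values are hypothetical illustrations, not code from the OpenClaw codebase:

```typescript
// Sketch of the documented params precedence:
// global base < per-model params < per-agent params, overriding key by key.
type Params = Record<string, unknown>;

function mergeParams(globalBase: Params, perModel: Params, perAgent: Params): Params {
  // Later spreads win per key, matching the documented precedence order.
  return { ...globalBase, ...perModel, ...perAgent };
}

const merged = mergeParams(
  { cacheRetention: "long", temperature: 0.7 }, // agents.defaults.params
  { temperature: 0.2 },                         // agents.defaults.models["provider/model"].params
  { maxTokens: 4096 },                          // agents.list[].params
);
// merged keeps cacheRetention from the global base, temperature from the
// per-model entry, and maxTokens from the per-agent entry.
```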
@@ -113,11 +113,11 @@ When validation fails:
 defaults: {
   model: {
     primary: "anthropic/claude-sonnet-4-6",
-    fallbacks: ["openai/gpt-5.2"],
+    fallbacks: ["openai/gpt-5.4"],
   },
   models: {
     "anthropic/claude-sonnet-4-6": { alias: "Sonnet" },
-    "openai/gpt-5.2": { alias: "GPT" },
+    "openai/gpt-5.4": { alias: "GPT" },
   },
 },

@@ -2057,7 +2057,7 @@ for usage/billing and raise limits as needed.
 agents.defaults.model.primary
 ```
 
-Models are referenced as `provider/model` (example: `anthropic/claude-opus-4-6`). If you omit the provider, OpenClaw currently assumes `anthropic` as a temporary deprecation fallback - but you should still **explicitly** set `provider/model`.
+Models are referenced as `provider/model` (example: `openai/gpt-5.4`). If you omit the provider, OpenClaw currently assumes the configured default provider (currently `openai`) as a temporary deprecation fallback - but you should still **explicitly** set `provider/model`.
 
 </Accordion>

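The `provider/model` reference format in this hunk, including the deprecated bare-name fallback, can be sketched as a small parser. `parseModelRef` is a hypothetical helper for illustration, not OpenClaw's actual implementation:

```typescript
// Hypothetical sketch: split a "provider/model" reference, applying the
// deprecated fallback to a configured default provider when none is given.
interface ModelRef {
  provider: string;
  model: string;
}

function parseModelRef(ref: string, defaultProvider = "openai"): ModelRef {
  const slash = ref.indexOf("/");
  if (slash === -1) {
    // Deprecated path: a bare model name falls back to the default provider.
    return { provider: defaultProvider, model: ref };
  }
  return { provider: ref.slice(0, slash), model: ref.slice(slash + 1) };
}

// Explicit references are preferred and unambiguous:
parseModelRef("anthropic/claude-opus-4-6");
// { provider: "anthropic", model: "claude-opus-4-6" }
```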
@@ -193,7 +193,7 @@ Live tests are split into two layers so we can isolate failures:
 - How to select models:
   - `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet 4.6+, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.7, Grok 4)
   - `OPENCLAW_LIVE_MODELS=all` is an alias for the modern allowlist
-  - or `OPENCLAW_LIVE_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,..."` (comma allowlist)
+  - or `OPENCLAW_LIVE_MODELS="openai/gpt-5.4,anthropic/claude-opus-4-6,..."` (comma allowlist)
 - How to select providers:
   - `OPENCLAW_LIVE_PROVIDERS="google,google-antigravity,google-gemini-cli"` (comma allowlist)
 - Where keys come from:
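The comma-allowlist convention used by `OPENCLAW_LIVE_MODELS` and `OPENCLAW_LIVE_PROVIDERS` above amounts to a simple split on commas. `parseAllowlist` below is a hypothetical sketch of that convention, not code from the project:

```typescript
// Hypothetical: turn an OPENCLAW_LIVE_MODELS-style value into an allowlist,
// tolerating whitespace around entries and an unset variable.
function parseAllowlist(value: string | undefined): string[] {
  if (!value) return [];
  return value
    .split(",")
    .map((entry) => entry.trim())
    .filter((entry) => entry.length > 0);
}

parseAllowlist("openai/gpt-5.4, anthropic/claude-opus-4-6");
// → ["openai/gpt-5.4", "anthropic/claude-opus-4-6"]
```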
@@ -356,13 +356,13 @@ Docker notes:
 Narrow, explicit allowlists are fastest and least flaky:
 
 - Single model, direct (no gateway):
-  - `OPENCLAW_LIVE_MODELS="openai/gpt-5.2" pnpm test:live src/agents/models.profiles.live.test.ts`
+  - `OPENCLAW_LIVE_MODELS="openai/gpt-5.4" pnpm test:live src/agents/models.profiles.live.test.ts`
 
 - Single model, gateway smoke:
-  - `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
+  - `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
 
 - Tool calling across several providers:
-  - `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
+  - `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
 
 - Google focus (Gemini API key + Antigravity):
   - Gemini (API key): `OPENCLAW_LIVE_GATEWAY_MODELS="google/gemini-3-flash-preview" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
@@ -385,7 +385,7 @@ There is no fixed “CI model list” (live is opt-in), but these are the **reco
 This is the “common models” run we expect to keep working:
 
-- OpenAI (non-Codex): `openai/gpt-5.2` (optional: `openai/gpt-5.1`)
+- OpenAI (non-Codex): `openai/gpt-5.4` (optional: `openai/gpt-5.4-mini`)
 - OpenAI Codex: `openai-codex/gpt-5.4`
 - Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
 - Google (Gemini API): `google/gemini-3.1-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
@@ -394,13 +394,13 @@ This is the “common models” run we expect to keep working:
 - MiniMax: `minimax/MiniMax-M2.7`
 
 Run gateway smoke with tools + image:
-`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.4,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
+`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4,openai-codex/gpt-5.4,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
 
 ### Baseline: tool calling (Read + optional Exec)
 
 Pick at least one per provider family:
 
-- OpenAI: `openai/gpt-5.2` (or `openai/gpt-5-mini`)
+- OpenAI: `openai/gpt-5.4` (or `openai/gpt-5.4-mini`)
 - Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
 - Google: `google/gemini-3-flash-preview` (or `google/gemini-3.1-pro-preview`)
 - Z.AI (GLM): `zai/glm-4.7`