Merge branch 'main' into qianfan

This commit is contained in:
ide-rea
2026-02-06 17:58:28 +08:00
committed by GitHub
413 changed files with 26165 additions and 6070 deletions

View File

@@ -31,7 +31,7 @@ openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
```json5
{
env: { ANTHROPIC_API_KEY: "sk-ant-..." },
-agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
+agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
@@ -54,7 +54,7 @@ Use the `cacheRetention` parameter in your model config:
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-5": {
"anthropic/claude-opus-4-6": {
params: { cacheRetention: "long" },
},
},
@@ -114,7 +114,7 @@ openclaw onboard --auth-choice setup-token
```json5
{
-agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
+agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```

View File

@@ -29,7 +29,7 @@ See [Venice AI](/providers/venice).
```json5
{
-agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
+agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```

View File

@@ -96,7 +96,7 @@ Configure via CLI:
### MiniMax M2.1 as fallback (Opus primary)
-**Best for:** keep Opus 4.5 as primary, fail over to MiniMax M2.1.
+**Best for:** keep Opus 4.6 as primary, fail over to MiniMax M2.1.
```json5
{
@@ -104,11 +104,11 @@ Configure via CLI:
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-5": { alias: "opus" },
"anthropic/claude-opus-4-6": { alias: "opus" },
"minimax/MiniMax-M2.1": { alias: "minimax" },
},
model: {
primary: "anthropic/claude-opus-4-5",
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
},
},

View File

@@ -27,7 +27,7 @@ See [Venice AI](/providers/venice).
```json5
{
-agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
+agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```

View File

@@ -17,6 +17,8 @@ Ollama is a local LLM runtime that makes it easy to run open-source models on yo
2. Pull a model:
```bash
ollama pull gpt-oss:20b
# or
ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
@@ -40,7 +42,7 @@ openclaw config set models.providers.ollama.apiKey "ollama-local"
{
agents: {
defaults: {
model: { primary: "ollama/llama3.3" },
model: { primary: "ollama/gpt-oss:20b" },
},
},
}
@@ -105,8 +107,8 @@ Use explicit config when:
api: "openai-completions",
models: [
{
id: "llama3.3",
name: "Llama 3.3",
id: "gpt-oss:20b",
name: "GPT-OSS 20B",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
@@ -148,8 +150,8 @@ Once configured, all your Ollama models are available:
agents: {
defaults: {
model: {
primary: "ollama/llama3.3",
fallback: ["ollama/qwen2.5-coder:32b"],
primary: "ollama/gpt-oss:20b",
fallbacks: ["ollama/llama3.3", "ollama/qwen2.5-coder:32b"],
},
},
},
@@ -170,6 +172,48 @@ ollama pull deepseek-r1:32b
Ollama is free and runs locally, so all model costs are set to $0.
### Streaming Configuration
Due to a [known issue](https://github.com/badlogic/pi-mono/issues/1205) in the underlying SDK with Ollama's response format, **streaming is disabled by default** for Ollama models. This prevents corrupted responses when using tool-capable models.
When streaming is disabled, responses are delivered all at once (non-streaming mode), which avoids the issue where interleaved content/reasoning deltas cause garbled output.
#### Re-enable Streaming (Advanced)
If you want to re-enable streaming for Ollama (this may cause issues with tool-capable models):
```json5
{
agents: {
defaults: {
models: {
"ollama/gpt-oss:20b": {
streaming: true,
},
},
},
},
}
```
#### Disable Streaming for Other Providers
You can also disable streaming for any provider if needed:
```json5
{
agents: {
defaults: {
models: {
"openai/gpt-4": {
streaming: false,
},
},
},
},
}
```
### Context windows
For auto-discovered models, OpenClaw uses the context window reported by Ollama when available; otherwise it defaults to `8192`. You can override `contextWindow` and `maxTokens` in an explicit provider config, as sketched below.
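A rough sketch of such an override, assuming it lives under the same `models.providers.ollama` block as the explicit config example above (other model fields omitted; the numbers are illustrative, not defaults):
```json5
{
  models: {
    providers: {
      ollama: {
        api: "openai-completions",
        models: [
          {
            id: "gpt-oss:20b",
            name: "GPT-OSS 20B",
            // Illustrative overrides replacing the auto-discovered context window and output cap.
            contextWindow: 32768,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```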
@@ -201,7 +245,8 @@ To add models:
```bash
ollama list # See what's installed
-ollama pull llama3.3 # Pull a model
+ollama pull gpt-oss:20b # Pull a tool-capable model
+ollama pull llama3.3 # Or another model
```
### Connection refused
@@ -216,6 +261,15 @@ ps aux | grep ollama
ollama serve
```
### Corrupted responses or tool names in output
If you see garbled responses containing tool names (such as `sessions_send` or `memory_get`) or fragmented text when using Ollama models, the cause is an upstream SDK issue with streaming responses. **Recent OpenClaw versions avoid this by default** by disabling streaming for Ollama models.
If you manually enabled streaming and experience this issue:
1. Remove the `streaming: true` configuration from your Ollama model entries, or
2. Explicitly set `streaming: false` for your Ollama models, as in the sketch below (see also [Streaming Configuration](#streaming-configuration))
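For example, a minimal sketch of option 2, reusing the `gpt-oss:20b` model id from the examples above:
```json5
{
  agents: {
    defaults: {
      models: {
        "ollama/gpt-oss:20b": {
          // Force non-streaming responses to avoid the upstream SDK streaming issue.
          streaming: false,
        },
      },
    },
  },
}
```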
## See Also
- [Model Providers](/concepts/model-providers) - Overview of all providers

View File

@@ -29,7 +29,7 @@ openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```json5
{
env: { OPENAI_API_KEY: "sk-..." },
-agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
+agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
}
```
@@ -52,7 +52,7 @@ openclaw models auth login --provider openai-codex
```json5
{
-agents: { defaults: { model: { primary: "openai-codex/gpt-5.2" } } },
+agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
}
```

View File

@@ -25,7 +25,7 @@ openclaw onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
```json5
{
env: { OPENCODE_API_KEY: "sk-..." },
-agents: { defaults: { model: { primary: "opencode/claude-opus-4-5" } } },
+agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}
```

View File

@@ -28,7 +28,7 @@ openclaw onboard --auth-choice ai-gateway-api-key
{
agents: {
defaults: {
model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.5" },
model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.6" },
},
},
}