mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 08:20:43 +00:00
fix(deepseek): expose V4 max thinking levels (#73008)
Merged via squash.
Prepared head SHA: ef561a59de
Co-authored-by: ai-hpc <183861985+ai-hpc@users.noreply.github.com>
Co-authored-by: hxy91819 <8814856+hxy91819@users.noreply.github.com>
Reviewed-by: @hxy91819
@@ -79,6 +79,8 @@ is available to that process (for example, in `~/.openclaw/.env` or via
V4 models support DeepSeek's `thinking` control. OpenClaw also replays
DeepSeek `reasoning_content` on follow-up turns so thinking sessions with tool
calls can continue.
Use `/think xhigh` or `/think max` with DeepSeek V4 models to request DeepSeek's
maximum `reasoning_effort`.
</Tip>
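
The level-to-effort mapping the tip describes can be sketched as follows. This is an illustrative TypeScript sketch, not OpenClaw source; the function name, the `ThinkLevel` type, and the `undefined`-means-omit convention are assumptions based on the documented behavior.

```typescript
// Sketch: map an OpenClaw /think level to DeepSeek's reasoning_effort value.
// Per the docs above, xhigh and max both request DeepSeek's maximum effort,
// other non-off levels map to "high", and off sends no reasoning payload.
type ThinkLevel = "off" | "low" | "medium" | "high" | "xhigh" | "max";

function deepseekReasoningEffort(level: ThinkLevel): "high" | "max" | undefined {
  if (level === "off") return undefined; // omit the reasoning payload entirely
  return level === "xhigh" || level === "max" ? "max" : "high";
}
```

Under this sketch, `/think xhigh` and `/think max` are indistinguishable on the wire to DeepSeek, which matches the "both map to `reasoning_effort: "max"`" wording below.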
## Thinking and tools
@@ -26,6 +26,7 @@ title: "Thinking levels"
- Anthropic Claude Opus 4.7 does not default to adaptive thinking. Its API effort default remains provider-owned unless you explicitly set a thinking level.
- Anthropic Claude Opus 4.7 maps `/think xhigh` to adaptive thinking plus `output_config.effort: "xhigh"`, because `/think` is a thinking directive and `xhigh` is the Opus 4.7 effort setting.
- Anthropic Claude Opus 4.7 also exposes `/think max`; it maps to the same provider-owned max effort path.
- DeepSeek V4 models expose `/think xhigh|max`; both map to DeepSeek `reasoning_effort: "max"` while lower non-off levels map to `high`.
- Ollama thinking-capable models expose `/think low|medium|high|max`; `max` maps to native `think: "high"` because Ollama's native API accepts `low`, `medium`, and `high` effort strings.
- OpenAI GPT models map `/think` through model-specific Responses API effort support. `/think off` sends `reasoning.effort: "none"` only when the target model supports it; otherwise OpenClaw omits the disabled reasoning payload instead of sending an unsupported value.
- Custom OpenAI-compatible catalog entries can opt into `/think xhigh` by setting `models.providers.<provider>.models[].compat.supportedReasoningEfforts` to include `"xhigh"`. This uses the same compat metadata that maps outbound OpenAI reasoning effort payloads, so menus, session validation, agent CLI, and `llm-task` agree with transport behavior.
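
As a sketch of the catalog opt-in described in the last bullet, a hypothetical provider entry might look like the fragment below. The provider id `my-compat-provider` and model id `example-reasoner` are invented for illustration; only the `models.providers.<provider>.models[].compat.supportedReasoningEfforts` key path comes from the text above.

```json
{
  "models": {
    "providers": {
      "my-compat-provider": {
        "models": [
          {
            "id": "example-reasoner",
            "compat": {
              "supportedReasoningEfforts": ["low", "medium", "high", "xhigh"]
            }
          }
        ]
      }
    }
  }
}
```

Including `"xhigh"` in the list is what makes `/think xhigh` appear in menus and pass session validation for that model, per the bullet above.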