mirror of https://github.com/openclaw/openclaw.git, synced 2026-05-06 14:10:51 +00:00
fix(providers): map native reasoning efforts
@@ -71,6 +71,14 @@ Use `openclaw models list --provider groq` for the most up-to-date list of
models available on your account.
</Tip>

## Reasoning models

OpenClaw maps its shared `/think` levels to Groq's model-specific
`reasoning_effort` values. For `qwen/qwen3-32b`, disabled thinking sends
`none` and enabled thinking sends `default`. For Groq GPT-OSS reasoning models,
OpenClaw sends `low`, `medium`, or `high`; disabled thinking omits
`reasoning_effort` because those models do not support a disabled value.

## Audio transcription

Groq also provides fast Whisper-based audio transcription. When configured as a
@@ -104,7 +104,7 @@ LM Studio is streaming-usage compatible. When it does not emit an OpenAI-shaped
`usage` object, OpenClaw recovers token counts from llama.cpp-style
`timings.prompt_n` / `timings.predicted_n` metadata instead.
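The fallback can be sketched like this. The function and the exact chunk shape are assumptions for illustration; only the `timings.prompt_n` / `timings.predicted_n` keys and the OpenAI-shaped `usage` object come from the text above.

```python
def recover_usage(final_chunk: dict) -> tuple[int, int]:
    """Return (prompt_tokens, completion_tokens) from a final streaming chunk."""
    usage = final_chunk.get("usage")
    if usage:
        # OpenAI-shaped usage object: use it directly.
        return usage["prompt_tokens"], usage["completion_tokens"]
    # llama.cpp-style servers report token counts under `timings` instead.
    timings = final_chunk.get("timings", {})
    return timings.get("prompt_n", 0), timings.get("predicted_n", 0)
```
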
Same streaming usage behavior applies to these OpenAI-compatible local backends:
- vLLM
- SGLang

@@ -114,6 +114,14 @@ Same behavior applies to these OpenAI-compatible local backends:
- TabbyAPI
- text-generation-webui

### Thinking compatibility

When LM Studio's `/api/v1/models` discovery reports model-specific reasoning
options, OpenClaw preserves those native values in model compat metadata. For
binary thinking models that advertise `allowed_options: ["off", "on"]`,
OpenClaw maps disabled thinking to `off` and enabled `/think` levels to `on`
instead of sending OpenAI-only values such as `low` or `medium`.

### Explicit configuration

```json5