mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 12:20:44 +00:00
docs(models): document required tool choice workaround
@@ -174,6 +174,31 @@ Compatibility notes for stricter OpenAI-compatible backends:
  text and logs a warning with the run id, provider/model, detected pattern, and
  tool name when available. Treat that as provider/model tool-call
  incompatibility, not a completed tool run.
- For OpenAI-compatible Chat Completions backends whose tool parser works only
  when tool use is forced, set a per-model request override instead of relying
  on text parsing:

  ```json5
  {
    agents: {
      defaults: {
        models: {
          "local/my-local-model": {
            params: {
              extra_body: {
                tool_choice: "required",
              },
            },
          },
        },
      },
    },
  }
  ```

  Use this only for models/sessions where every normal turn should call a tool.
  It overrides OpenClaw's default proxy value of `tool_choice: "auto"`.
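On the wire, this override amounts to forcing the `tool_choice` field of the Chat Completions request. A minimal direct sanity check against the backend (host, port, model name, and the example tool are all assumptions, not OpenClaw defaults) might look like:

```shell
# Hypothetical direct call: verify the backend returns structured tool_calls
# when tool_choice is "required". Adjust host/port/model to your setup.
curl -s http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "my-local-model",
    "messages": [{"role": "user", "content": "What is the weather in Oslo?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }],
    "tool_choice": "required"
  }'
```

A compatible backend should answer with a non-empty `choices[0].message.tool_calls` array rather than tool-call markup in `content`.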

- Some smaller or stricter local backends are unstable with OpenClaw's full
  agent-runtime prompt shape, especially when tool schemas are included. If the
  backend works for tiny direct `/v1/chat/completions` calls but fails on normal

@@ -168,6 +168,41 @@ Use explicit config when:

</Accordion>

<Accordion title="Qwen tool-call parser needs required">
First make sure vLLM was started with the right tool-call parser and chat
template for the model. For example, vLLM documents `hermes` for Qwen2.5
models and `qwen3_xml` for Qwen3-Coder models.
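As a sketch, a vLLM launch for a Qwen2.5 model with tool calling enabled (flags per vLLM's tool-calling docs; the model name is an example, not a recommendation) could look like:

```shell
# Serve a Qwen2.5-Coder model with tool calling enabled.
# --tool-call-parser selects the parser vLLM documents for this model family;
# --enable-auto-tool-choice lets the server emit structured tool_calls.
vllm serve Qwen/Qwen2.5-Coder-32B-Instruct \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```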

Some Qwen/vLLM combinations still return raw tool-call text or an empty
`tool_calls` array when the request uses `tool_choice: "auto"`, but return
structured tool calls when the request uses `tool_choice: "required"`. For
those model entries, force the OpenAI-compatible request field with
`params.extra_body`:

```json5
{
  agents: {
    defaults: {
      models: {
        "vllm/Qwen-Qwen2.5-Coder-32B-Instruct": {
          params: {
            extra_body: {
              tool_choice: "required",
            },
          },
        },
      },
    },
  },
}
```

This is an opt-in compatibility workaround. It makes every model turn with
tools require a tool call, so use it only for a dedicated local model entry
where that behavior is acceptable.

</Accordion>

<Accordion title="Custom base URL">
If your vLLM server runs on a non-default host or port, set `baseUrl` in the explicit provider config:
||||