fix: increase maxTokens for tool probe to support reasoning models

Closes #7521
Author: Jakob
Date: 2026-03-06 22:27:28 -05:00
Committed by: GitHub
parent a01978ba96
commit fa69f836c4
2 changed files with 2 additions and 1 deletion


@@ -220,6 +220,7 @@ Docs: https://docs.openclaw.ai
- Venice/default model refresh: switch the built-in Venice default to `kimi-k2-5`, update onboarding aliasing, and refresh Venice provider docs/recommendations to match the current private and anonymized catalog. (from #12964) Fixes #20156. Thanks @sabrinaaquino and @vincentkoc.
- Agents/skill API write pacing: add a global prompt guardrail that treats skill-driven external API writes as rate-limited by default, so runners prefer batched writes, avoid tight request loops, and respect `429`/`Retry-After`. Thanks @vincentkoc.
- Google Chat/multi-account webhook auth fallback: when `channels.googlechat.accounts.default` carries shared webhook audience/path settings (for example after config normalization), inherit those defaults for named accounts while preserving top-level and per-account overrides, so inbound webhook verification no longer fails silently for named accounts missing duplicated audience fields. Fixes #38369.
+- Models/tool probing: raise the tool-capability probe budget from 32 to 256 tokens so reasoning models that spend tokens on thinking before returning a required tool call are less likely to be misclassified as not supporting tools. (#7521) Thanks @jakobdylanc.
## 2026.3.2
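The `429`/`Retry-After` pacing described in the skill API write entry above can be sketched in TypeScript. All names here (`writeWithPacing`, the response shape) are hypothetical illustrations, not OpenClaw's actual runner API:

```typescript
// Hypothetical response shape for an external API write. In the real
// runner this would come from the HTTP client; here it is simplified
// to just the status code and an optional parsed Retry-After value.
interface WriteResponse {
  status: number;
  retryAfterSec?: number;
}

// On a 429, wait for the server-indicated Retry-After interval before
// retrying, instead of looping tightly; give up after maxAttempts.
async function writeWithPacing(
  send: () => Promise<WriteResponse>,
  maxAttempts = 3,
): Promise<number> {
  let status = 429;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await send();
    status = res.status;
    if (status !== 429) return status;
    // Default to a 1-second backoff if the server sent no Retry-After.
    const waitMs = (res.retryAfterSec ?? 1) * 1000;
    await new Promise<void>((resolve) => setTimeout(resolve, waitMs));
  }
  return status;
}
```

Batching writes before calling `send` further reduces the number of requests that can hit the rate limit in the first place.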


@@ -262,7 +262,7 @@ async function probeTool(
const message = await withTimeout(timeoutMs, (signal) =>
complete(model, context, {
apiKey,
-      maxTokens: 32,
+      maxTokens: 256,
temperature: 0,
toolChoice: "required",
signal,
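The misclassification this change fixes can be illustrated with a small sketch. The names below (`supportsTools`, `simulateCompletion`) are illustrative, not the actual `probeTool` internals: a reasoning model spends tokens on thinking before emitting the required tool call, so a 32-token cap can truncate the response before any tool call appears, and the probe would wrongly conclude the model has no tool support.

```typescript
// Simulate a completion under a token budget: the model "thinks" for
// thinkingTokens before producing a tool call costing toolCallTokens.
// If the combined cost exceeds maxTokens, the response is cut off
// before the tool call is emitted.
function simulateCompletion(
  maxTokens: number,
  thinkingTokens: number,
  toolCallTokens: number,
): { toolCalls: number } {
  const needed = thinkingTokens + toolCallTokens;
  return { toolCalls: needed <= maxTokens ? 1 : 0 };
}

// The probe classifies a model as tool-capable if at least one tool
// call fits within the budget (tool call cost of 8 tokens assumed).
function supportsTools(maxTokens: number, thinkingTokens: number): boolean {
  return simulateCompletion(maxTokens, thinkingTokens, 8).toolCalls > 0;
}

// A reasoning model that thinks for 100 tokens before calling the tool:
supportsTools(32, 100);  // false: misclassified under the old budget
supportsTools(256, 100); // true: the preamble fits under the new budget
```

Raising the budget trades a slightly more expensive probe for far fewer false negatives on reasoning models.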