mirror of
https://github.com/openclaw/openclaw.git
synced 2026-03-12 07:20:45 +00:00
fix: increase maxTokens for tool probe to support reasoning models
Closes #7521
@@ -220,6 +220,7 @@ Docs: https://docs.openclaw.ai
 - Venice/default model refresh: switch the built-in Venice default to `kimi-k2-5`, update onboarding aliasing, and refresh Venice provider docs/recommendations to match the current private and anonymized catalog. (from #12964) Fixes #20156. Thanks @sabrinaaquino and @vincentkoc.
 - Agents/skill API write pacing: add a global prompt guardrail that treats skill-driven external API writes as rate-limited by default, so runners prefer batched writes, avoid tight request loops, and respect `429`/`Retry-After`. Thanks @vincentkoc.
 - Google Chat/multi-account webhook auth fallback: when `channels.googlechat.accounts.default` carries shared webhook audience/path settings (for example after config normalization), inherit those defaults for named accounts while preserving top-level and per-account overrides, so inbound webhook verification no longer fails silently for named accounts missing duplicated audience fields. Fixes #38369.
+- Models/tool probing: raise the tool-capability probe budget from 32 to 256 tokens so reasoning models that spend tokens on thinking before returning a required tool call are less likely to be misclassified as not supporting tools. (#7521) Thanks @jakobdylanc.
 
 ## 2026.3.2
 
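The `429`/`Retry-After` handling described in the write-pacing entry can be sketched as below. This is a minimal illustration, not the OpenClaw implementation; `retryAfterMs` and `batch` are hypothetical helper names. `Retry-After` may carry either delta-seconds or an HTTP-date, and both forms are handled.

```typescript
// Hypothetical sketch: pace skill-driven API writes by batching them and,
// on a 429, waiting out the server's Retry-After hint before the next batch.

// Parse a Retry-After header value into a delay in milliseconds.
// Accepts delta-seconds ("2") or an HTTP-date; unparseable values yield 0.
export function retryAfterMs(header: string | null, nowMs: number = Date.now()): number {
  if (header === null) return 0;
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const dateMs = Date.parse(header);
  return Number.isNaN(dateMs) ? 0 : Math.max(0, dateMs - nowMs);
}

// Split pending writes into fixed-size batches instead of issuing a
// tight per-item request loop.
export function batch<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

A runner sketched this way would send one batch, and on a `429` sleep for `retryAfterMs(response.headers.get("retry-after"))` before retrying, rather than hammering the endpoint.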
@@ -262,7 +262,7 @@ async function probeTool(
   const message = await withTimeout(timeoutMs, (signal) =>
     complete(model, context, {
       apiKey,
-      maxTokens: 32,
+      maxTokens: 256,
       temperature: 0,
       toolChoice: "required",
       signal,
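The failure mode this fix addresses can be modeled in a few lines. This is an illustrative sketch, not the actual OpenClaw probe: the probe requests a required tool call under a hard token budget, and a reasoning model first spends tokens thinking; if the budget is exhausted before the tool-call payload fits, the truncated response contains no tool call and the model looks tool-incapable. `TOOL_CALL_TOKENS` is an assumed illustrative size.

```typescript
// Hypothetical model of the probe misclassification (not OpenClaw's code).

type ProbeOutcome = "supports-tools" | "misclassified-as-unsupported";

const TOOL_CALL_TOKENS = 8; // assumed size of a minimal tool-call payload

// The tool call is only emitted if the thinking tokens plus the tool-call
// payload fit inside the probe's maxTokens budget.
function probeOutcome(reasoningTokens: number, maxTokens: number): ProbeOutcome {
  const emitted = reasoningTokens + TOOL_CALL_TOKENS <= maxTokens;
  return emitted ? "supports-tools" : "misclassified-as-unsupported";
}
```

Under this model, a model that spends 50 tokens reasoning fails the old 32-token probe but passes the new 256-token one, which is the behavior change the diff above encodes.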