Mirror of https://github.com/openclaw/openclaw.git (synced 2026-04-24 23:51:48 +00:00)
Thinking-capable Ollama models default to think=true when the parameter is absent. When OpenClaw has thinking set to off, the request never included think=false, so models continued generating thinking tokens that the response parser then discarded, producing empty responses.

Wire onPayload into the Ollama stream path so payload wrappers can mutate the request body, and add an Ollama-specific wrapper that sets top-level think=false when thinkingLevel is off.

Fixes #46680, #50702, #50712

Co-Authored-By: SnowSky1 <126348592+snowsky1@users.noreply.github.com>
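The fix above can be sketched roughly as follows. This is a minimal illustration, not OpenClaw's actual code: the type names, the `makeThinkWrapper` helper, and the `applyOnPayload` hook are all hypothetical, standing in for the real onPayload wiring and the Ollama-specific payload wrapper.

```typescript
// Hypothetical shape of an Ollama chat request body. For thinking-capable
// models, an absent `think` field behaves like think=true.
type OllamaRequest = {
  model: string;
  messages: Array<{ role: string; content: string }>;
  think?: boolean;
};

// A payload wrapper gets a chance to mutate the request body before send.
type PayloadWrapper = (body: OllamaRequest) => OllamaRequest;

// Hypothetical Ollama-specific wrapper: when the configured thinkingLevel
// is "off", set top-level think=false explicitly; otherwise leave the
// body untouched so the model's default applies.
function makeThinkWrapper(thinkingLevel: string): PayloadWrapper {
  return (body) => {
    if (thinkingLevel === "off") {
      return { ...body, think: false };
    }
    return body;
  };
}

// Simplified stand-in for wiring onPayload into the stream path: run every
// registered wrapper over the request body before it is sent.
function applyOnPayload(
  body: OllamaRequest,
  wrappers: PayloadWrapper[],
): OllamaRequest {
  return wrappers.reduce((acc, wrap) => wrap(acc), body);
}
```

With thinking off, the wrapper now emits an explicit `think: false`, so the model skips thinking tokens instead of generating output that the parser would discard; with any other level, the field stays absent and the model's default behavior is unchanged.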
Ollama Provider
Bundled provider plugin for Ollama discovery and setup.