mirror of
https://github.com/openclaw/openclaw.git
synced 2026-03-23 07:51:33 +00:00
docs(agents): update steering semantics
@@ -36,6 +36,7 @@ Docs: https://docs.openclaw.ai
 - Z.AI/models: sync the bundled GLM catalog to current Pi metadata, including newer 4.5/4.6 model families, updated multimodal entries, and current pricing and token limits. Thanks @vincentkoc.
 - Mistral/models: sync the bundled default Mistral metadata to current Pi pricing so the built-in default no longer advertises zero-cost usage. Thanks @vincentkoc.
 - xAI/fast mode: map shared `/fast` and `params.fastMode` to the current xAI Grok fast model family so direct Grok runs can opt into the faster Pi-backed variants. Thanks @vincentkoc.
+- Agents/steering docs: update the embedded Pi steering docs and runner comments for the current upstream behavior, where queued steering is injected after the active assistant turn finishes its tool calls instead of skipping the remaining tools mid-turn. Thanks @vincentkoc.
 - Telegram/actions: add `topic-edit` for forum-topic renames and icon updates, sharing the same Telegram topic-edit transport used by the plugin runtime. (#47798) Thanks @obviyus.
 - Telegram/error replies: add a default-off `channels.telegram.silentErrorReplies` setting so bot error replies can be delivered silently across regular replies, native commands, and fallback sends. (#19776) Thanks @ImLukeF.
 - Doctor/refactor: start splitting doctor provider checks into `src/commands/doctor/providers/*` by extracting the Telegram first-run and group allowlist warnings into a provider-specific module, keeping the current setup guidance and warning behavior intact. Thanks @vincentkoc.
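The silent error-reply entry above might be enabled with a config fragment like the following. Only the `channels.telegram.silentErrorReplies` key path and its default-off behavior come from the changelog entry; the file format and surrounding structure are assumptions for illustration.

```json
{
  "channels": {
    "telegram": {
      "silentErrorReplies": true
    }
  }
}
```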
@@ -81,10 +81,10 @@ Legacy session folders from other tools are not read.
 ## Steering while streaming
 
 When queue mode is `steer`, inbound messages are injected into the current run.
-The queue is checked **after each tool call**; if a queued message is present,
-remaining tool calls from the current assistant message are skipped (error tool
-results with "Skipped due to queued user message."), then the queued user
-message is injected before the next assistant response.
+Queued steering is delivered **after the current assistant turn finishes
+executing its tool calls**, before the next LLM call. Steering no longer skips
+remaining tool calls from the current assistant message; it injects the queued
+message at the next model boundary instead.
 
 When queue mode is `followup` or `collect`, inbound messages are held until the
 current turn ends, then a new agent turn starts with the queued payloads. See
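The updated `steer` semantics in the docs hunk above can be sketched as a minimal run loop. All names here (`SteeringQueue`, `runTurn`, the `Message` shape) are hypothetical illustrations, not the actual pi-agent-core API; the point is the ordering: every tool call of the current assistant turn completes, and only then is the queued user message spliced in, before the next LLM call.

```typescript
// Hypothetical message and queue shapes for illustration only.
type Message = { role: "user" | "assistant" | "tool"; content: string };

class SteeringQueue {
  private queued: Message[] = [];
  steer(message: Message) {
    this.queued.push(message);
  }
  drain(): Message[] {
    const out = this.queued;
    this.queued = [];
    return out;
  }
}

function runTurn(
  toolCalls: string[],
  queue: SteeringQueue,
  transcript: Message[],
) {
  // 1. The active assistant turn finishes ALL of its tool calls first;
  //    queued steering is NOT injected mid-turn and skips nothing.
  for (const call of toolCalls) {
    transcript.push({ role: "tool", content: `result of ${call}` });
  }
  // 2. Only then is queued steering injected, at the next model boundary.
  transcript.push(...queue.drain());
}

const transcript: Message[] = [];
const queue = new SteeringQueue();
queue.steer({ role: "user", content: "actually, stop and summarize" });
runTurn(["read_file", "grep"], queue, transcript);
console.log(transcript.map((m) => m.role).join(","));
```

Under the old semantics described in the removed lines, the remaining tool calls would instead have been replaced by error tool results; here they all run, and the steering message lands last.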
@@ -226,7 +226,8 @@ function createYieldAbortedResponse(model: { api?: string; provider?: string; id
   };
 }
 
-// Queue a hidden steering message so pi-agent-core skips any remaining tool calls.
+// Queue a hidden steering message so pi-agent-core injects it before the next
+// LLM call once the current assistant turn finishes executing its tool calls.
 function queueSessionsYieldInterruptMessage(activeSession: {
   agent: { steer: (message: AgentMessage) => void };
 }) {
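The hunk above only shows the signature of `queueSessionsYieldInterruptMessage`; its body is truncated. Based solely on the `agent.steer(message: AgentMessage)` type it declares, a caller-side sketch might look like the following. The `AgentMessage` fields and the message text are assumptions, not the real implementation.

```typescript
// Assumed message shape; only the `steer` signature comes from the diff.
type AgentMessage = { role: "user"; content: string; hidden?: boolean };

interface ActiveSession {
  agent: { steer: (message: AgentMessage) => void };
}

function queueYieldInterrupt(activeSession: ActiveSession) {
  // Enqueue only: per the updated comment, injection happens at the next
  // model boundary, after the current turn finishes its tool calls.
  activeSession.agent.steer({
    role: "user",
    content: "Yield requested; wrap up after the current tool calls.",
    hidden: true,
  });
}

// Minimal in-memory agent used to observe what gets queued.
const queued: AgentMessage[] = [];
queueYieldInterrupt({ agent: { steer: (m) => queued.push(m) } });
console.log(queued.length, queued[0].hidden);
```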