* fix(cron): preserve all fields in announce delivery by removing summarization instruction
The delivery instruction appended to the cron agent prompt contained the word
'summary', causing LLMs to condense structured output non-deterministically and
drop fields on delivery. Replace it with 'response' and add an explicit
instruction to reproduce all fields exactly.
Fixes #58535
* chore(changelog): add cron announce entry
---------
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
* feat(gateway,ui): add Model Auth status card to Overview
Adds a new `models.authStatus` gateway endpoint that combines
`buildAuthHealthSummary()` (token expiry/status) with
`loadProviderUsageSummary()` (rate limits) into a single response
suitable for UI rendering. Strips credentials: only ships status,
expiry, remaining time, and rate-limit windows.
Adds a corresponding "Model Auth" card to the Overview dashboard
showing provider token status and rate limits at a glance. Attention
items are raised when OAuth tokens are expiring or expired.
Also catches the OAuth token-sink class of bug: if multiple profiles
exist per provider/account and tokens drift out of sync, this
surfaces immediately in the dashboard instead of silently falling
back to a different provider.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* CHANGELOG: note Model Auth status card on Overview
* UI/Overview: render Model Auth card during load with N/A placeholder
* models.authStatus: env-backed OAuth escape hatch + expectsOAuth missing signal
---------
Co-authored-by: Lobster <10343873+omarshahine@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove documentation fences from HEARTBEAT.md template
The HEARTBEAT.md template wrapped its content in markdown code fences
and a doc heading for display purposes. Since loadTemplate() only strips
YAML front matter, these artifacts leaked into generated workspace files,
causing isHeartbeatContentEffectivelyEmpty() to consider them non-empty
and triggering unnecessary API calls.
Remove the markdown fences and doc heading so the template produces
clean content after front-matter stripping.
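The interaction can be sketched as follows (a hypothetical
reconstruction of the two functions named above; the real
implementations in the OpenClaw source are more involved):

```typescript
// Hypothetical reconstruction of the behavior described above; the real
// loadTemplate() and isHeartbeatContentEffectivelyEmpty() are more
// involved.
function loadTemplate(raw: string): string {
  // Only YAML front matter is stripped; fences and headings survive.
  return raw.replace(/^---\n[\s\S]*?\n---\n/, "");
}

function isHeartbeatContentEffectivelyEmpty(content: string): boolean {
  // HTML comments and whitespace are ignored; anything else, including
  // a leftover fence or doc heading, counts as real content.
  return content.replace(/<!--[\s\S]*?-->/g, "").trim().length === 0;
}
```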
Closes #66284
* fix: guard against undefined event.content in cron agentTurn payload
When a cron job fires with an agentTurn payload, event.content is undefined.
parseFaceTags(undefined) returned undefined, which propagated to
userContent.startsWith("/"), causing a TypeError crash.
- Fix parseFaceTags and filterInternalMarkers to return "" for falsy input
instead of returning the falsy value itself
- Add null coalescing fallback at the gateway call site
- Add unit tests for undefined/null/empty string inputs
Closes #66283
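A minimal sketch of the guard described above (function names come from
the commit; the bodies and the tag syntax are illustrative, not the real
implementations):

```typescript
// Illustrative sketch; the real parseFaceTags/filterInternalMarkers do
// more than this. The key change: falsy input yields "" instead of the
// falsy value itself.
function parseFaceTags(content: string | undefined | null): string {
  if (!content) return "";
  return content.replace(/<face:[^>]+>/g, "").trim();
}

function filterInternalMarkers(content: string | undefined | null): string {
  if (!content) return "";
  return content.replace(/\[\[internal:[^\]]+\]\]/g, "");
}

// Gateway call site: a null-coalescing fallback keeps downstream string
// methods like startsWith("/") safe even if the helper regresses.
const event: { content?: string } = {}; // agentTurn payload without content
const userContent = parseFaceTags(event.content) ?? "";
userContent.startsWith("/"); // no TypeError
```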
* fix: address review — remove redundant guards, casts, and unrelated HEARTBEAT.md change
* fix: guard against undefined event.content in cron agentTurn payload (#66302) (thanks @xinmotlanthua)
---------
Co-authored-by: khanhkhanhlele <namkhanh2172@gmail.com>
Co-authored-by: sliverp <870080352@qq.com>
* fix(openrouter): handle reasoning_details field in Qwen3 stream parsing
Add support for the reasoning_details field returned by OpenRouter/Qwen3
models. Previously this field was not recognized, causing payloads=0 and
incomplete turn errors.
- Add reasoning_details handling in processOpenAICompletionsStream
- Extract text from reasoning_details array items with type reasoning.text
- Treat as thinking content, similar to other reasoning fields
- Add test case for reasoning_details handling
Fixes #66833
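The delta handling can be sketched like this (the shape is inferred from
the commit message; field names other than reasoning_details, type, and
text are assumptions):

```typescript
interface ReasoningDetail {
  type: string; // e.g. "reasoning.text"
  text?: string;
}

interface StreamDelta {
  content?: string;
  reasoning_details?: ReasoningDetail[];
}

// Collect thinking text from reasoning_details entries, treating them
// like other reasoning fields so the turn is not counted as empty
// (payloads=0).
function extractThinking(delta: StreamDelta): string {
  const details = delta.reasoning_details ?? [];
  return details
    .filter((d) => d.type === "reasoning.text" && typeof d.text === "string")
    .map((d) => d.text as string)
    .join("");
}
```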
* fix(openrouter): keep tool calls with reasoning_details
* fix: handle OpenRouter Qwen3 reasoning_details streams (#66905) (thanks @bladin)
* fix: preserve streamed tool calls with reasoning deltas (#66905) (thanks @bladin)
---------
Co-authored-by: bladin <bladin@users.noreply.github.com>
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
* fix(audio): restore allowPrivateNetwork for self-hosted STT endpoints
resolveProviderExecutionContext built the request object passed to
transcribeAudio using only sanitizeConfiguredProviderRequest on the
tool-level config and entry — which strips allowPrivateNetwork. The
provider-level request config (models.providers.*.request) was never
included in the merge, so allowPrivateNetwork:true was silently dropped.
Additionally, resolveProviderRequestPolicyConfig only read
allowPrivateNetwork from params.allowPrivateNetwork (a direct parameter)
and ignored params.request?.allowPrivateNetwork even when it was present.
Fix both gaps:
- runner.entries.ts: use mergeModelProviderRequestOverrides with
sanitizeConfiguredModelProviderRequest(providerConfig?.request) so
models.providers.*.request.allowPrivateNetwork flows through to the
media execution context
- provider-request-config.ts: fall back to
  params.request?.allowPrivateNetwork when params.allowPrivateNetwork
  is undefined
Fixes #66691. Regression introduced in v2026.4.14.
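The two gaps can be sketched as follows (types simplified; the real code
lives in provider-request-config.ts and runner.entries.ts, and the
helper names here are illustrative):

```typescript
interface ProviderRequestConfig {
  allowPrivateNetwork?: boolean;
}

interface PolicyParams {
  allowPrivateNetwork?: boolean;
  request?: ProviderRequestConfig;
}

// Gap 2: fall back to the nested request config when the direct
// parameter is undefined (?? skips only null/undefined, so an explicit
// false still wins).
function resolveAllowPrivateNetwork(params: PolicyParams): boolean {
  return (
    params.allowPrivateNetwork ?? params.request?.allowPrivateNetwork ?? false
  );
}

// Gap 1 (shape only): merge the provider-level request config under the
// tool-level request instead of dropping it.
function mergeRequestOverrides(
  toolRequest: ProviderRequestConfig,
  providerRequest?: ProviderRequestConfig,
): ProviderRequestConfig {
  return { ...providerRequest, ...toolRequest };
}
```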
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* test(media-understanding): assert allowPrivateNetwork flows through resolveProviderExecutionContext
Regression test for the bug where providerConfig.request.allowPrivateNetwork
was dropped when building the AudioTranscriptionRequest passed to media
providers. Verifies that setting allowPrivateNetwork in the provider config
reaches the provider's request object after the fix to use
mergeModelProviderRequestOverrides + sanitizeConfiguredModelProviderRequest.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* test(media-understanding): tighten allowPrivateNetwork regression types
* fix: restore allowPrivateNetwork for self-hosted STT endpoints (#66692) (thanks @jhsmith409)
---------
Co-authored-by: Jim Smith <jhsmith0@me.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
* fix: use process-scoped cache for Telegram command sync to fix missing menu after restart
Fixes openclaw#66714, openclaw#66682
Root cause: The command hash cache was persisted to disk across gateway
restarts. When the hash matched (commands unchanged), setMyCommands was
skipped entirely. But Telegram bot commands can be cleared by external
factors, so the cached state becomes stale after restart.
Fix: Replace file-based hash cache with a process-scoped Map. This preserves
the rapid-restart rate-limit protection within a single process, but ensures
commands are always re-registered after a gateway restart.
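A sketch of the process-scoped cache (the Map-keyed-by-bot shape is from
the commit; the hashing and function names are assumptions):

```typescript
import { createHash } from "node:crypto";

// Process-scoped cache: lives only as long as the gateway process, so a
// restart always re-registers commands with Telegram.
const commandHashCache = new Map<string, string>();

type BotCommand = { command: string; description: string };

function hashCommands(commands: BotCommand[]): string {
  return createHash("sha256").update(JSON.stringify(commands)).digest("hex");
}

// Returns true when setMyCommands should be called for this bot.
// A matching in-memory hash still skips the call, preserving the
// rapid-restart rate-limit protection within a single process.
function shouldSyncCommands(botId: string, commands: BotCommand[]): boolean {
  const hash = hashCommands(commands);
  if (commandHashCache.get(botId) === hash) return false;
  commandHashCache.set(botId, hash);
  return true;
}
```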
* fix(telegram): drop stale async command cache calls
* fix: keep Telegram command sync process-local (#66730) (thanks @nightq)
---------
Co-authored-by: nightq <zengwei@nightq.cn>
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
Adds an in-process startup catchup pass to the BlueBubbles channel that
queries BB Server for messages delivered since a persisted per-account
cursor and re-feeds each through the existing processMessage pipeline.
Fixes the missed-message hole documented in #66721: BB's WebhookService
is fire-and-forget on POST failure, and MessagePoller only re-fires
webhooks on BB-side reconnection events, not on webhook-receiver
recovery.
- New extensions/bluebubbles/src/catchup.ts with singleflight per
accountId, cursor persistence via the canonical state-paths
resolver, bounded query (perRunLimit + maxAgeMinutes), failure-held
cursor, truncation-aware page-boundary advancement, future-cursor
recovery, isFromMe filter (pre- and post-normalization).
- monitor.ts fires catchup as a background task after the webhook
target registers.
- config-schema.ts adds optional catchup block; accounts.ts adds
catchup to nestedObjectKeys for deep-merge per-account overrides.
- Dedupes against #66816's persistent inbound GUID cache.
- 22 scoped tests; full BB suite 411/411; pnpm check green; live E2E
on macOS 26.3 / BB Server 1.9.x recovered 3/3 missed messages.
Closes #66721.
Co-authored-by: Omar Shahine <omar@shahine.com>
BlueBubbles MessagePoller replays its ~1-week lookback window as new-message
webhooks after BB Server restart or reconnect. Add a persistent file-backed
GUID dedupe (TTL=7d) at the top of processMessage using createClaimableDedupe
from the Plugin SDK. Claim/finalize/release semantics ensure transient delivery
failures release the GUID so a later replay can retry.
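The claim/finalize/release flow can be sketched like this (the real
implementation is createClaimableDedupe from the Plugin SDK, file-backed
with a 7-day TTL; this in-memory version only illustrates the
semantics):

```typescript
type ClaimState = "claimed" | "finalized";

function createClaimableDedupe() {
  const guids = new Map<string, ClaimState>();
  return {
    // Returns false if the GUID was already claimed or finalized, so a
    // replayed webhook for the same message is dropped.
    claim(guid: string): boolean {
      if (guids.has(guid)) return false;
      guids.set(guid, "claimed");
      return true;
    },
    // Delivery succeeded: remember the GUID permanently (until TTL).
    finalize(guid: string): void {
      guids.set(guid, "finalized");
    },
    // Delivery failed transiently: release the claim so a later replay
    // can retry.
    release(guid: string): void {
      if (guids.get(guid) === "claimed") guids.delete(guid);
    },
  };
}
```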
Fixes #19176, #12053.
Co-authored-by: Omar Shahine <omar@shahine.com>
* fix(context-engine): pass deferred maintenance token budget
Thread tokenBudget through the after-turn runtime context so background
context-engine maintenance reuses the real model context window instead
of falling back to 128k. Also pass through a best-effort
currentTokenCount from the latest call total and make the runtime
context type explicit about both fields.
Regeneration-Prompt: |
OpenClaw already passed the real context token budget into direct
context-engine calls like afterTurn and assemble, but deferred maintain()
reused only the runtimeContext object, and that object did not carry
tokenBudget. Lossless Claw therefore fell back to 128k during background
maintenance, which made the budget trigger fire much more aggressively
than the live model context warranted. Thread the real contextTokenBudget
into buildAfterTurnRuntimeContext so deferred maintenance receives the
same budget, and pass a straightforward best-effort currentTokenCount
from the latest call total while the relevant data is already in scope.
Keep the change additive, update the runtime-context type, and cover the
background maintenance/runtime-context behavior with focused tests.
* fix(context-engine): use prompt usage for deferred maintenance
* Docs: add Anthropic max_tokens investigation memo
Regeneration-Prompt: |
Investigate the reported OpenClaw cron isolated-agent failure where an
Anthropic Haiku run returned "max_tokens: must be greater than or equal to 1".
Do not implement a fix yet. Inspect the cron isolated-agent execution path,
the embedded runner, extra param plumbing, Anthropic transport code, and any
model-selection or token-budget logic that could synthesize maxTokens = 0.
Produce a concise maintainer memo with concrete file references, explain why
cron itself is not the component setting maxTokens, identify the most likely
root cause, describe the smallest repro shape, and recommend the cleanest fix.
* openclaw-e82: guard Anthropic Messages maxTokens
Regeneration-Prompt: |
Fix the Anthropic Messages path so OpenClaw never sends max_tokens <= 0
to Anthropic. Match the positive-number guard already used by the
Anthropic Vertex transport, but keep the change scoped: validate token
limits in src/agents/anthropic-transport-stream.ts where transport
options are resolved and where the final payload is assembled, fall back
to the model limit when a runtime override is zero, fail locally when no
positive token budget exists, and drop non-positive maxTokens from
src/agents/pi-embedded-runner/extra-params.ts so hidden config params do
not leak through. Add focused regression coverage for both the transport
and extra-param forwarding path, and remove the earlier investigation memo
from the branch so the PR diff only contains the fix.
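The guard described above can be sketched as follows (the helper name is
illustrative; the real logic lives in anthropic-transport-stream.ts and
extra-params.ts):

```typescript
// Sketch of the max_tokens guard: floor fractional overrides, ignore
// non-positive values, fall back to the model limit, and fail locally
// rather than sending max_tokens <= 0 to Anthropic.
function resolveMaxTokens(
  override: number | undefined,
  modelLimit: number | undefined,
): number {
  const floored = override !== undefined ? Math.floor(override) : undefined;
  if (floored !== undefined && floored > 0) return floored;
  if (modelLimit !== undefined && modelLimit > 0) return modelLimit;
  throw new Error("no positive max_tokens budget available");
}
```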
* fix: scope Anthropic max token guard
* fix: document Anthropic max token guard
* fix: floor Anthropic max token overrides
* fix(mcp): harden loopback request guards
* fix(commit): block staged user log
* Revert pre-commit USER.md guard from this PR
Out of scope for the MCP loopback hardening — keep this PR
focused on the loopback request gate and the bearer-comparison
fix. The pre-commit worklog guard can land separately if
maintainers want it.
* changelog: note MCP loopback constant-time + Origin guard (#66665)
* fix(mcp): allow loopback flows that browsers flag as cross-site
The previous Sec-Fetch-Site early-return rejected legitimate local
browser callers like a UI hosted on http://localhost:<ui-port>
talking to MCP on http://127.0.0.1:<mcp-port> — browsers report
that host mismatch as cross-site even though both ends are
loopback. checkBrowserOrigin already authorizes those via its
local-loopback matcher (loopback peer + loopback Origin host),
so route every Origin-bearing request through that helper and
let it decide. Native MCP clients (no Origin header) continue to
short-circuit through to the bearer check unchanged.
Adds a regression test asserting that
origin: http://localhost:43123, sec-fetch-site: cross-site
from a loopback peer is accepted with a valid bearer.
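The routing decision can be sketched like this (checkBrowserOrigin's
internals are assumed; only the Origin-bearing vs. native split and the
loopback matcher are taken from the commit message):

```typescript
interface GuardInput {
  origin?: string; // Origin header, if present
  peerIsLoopback: boolean; // TCP peer is 127.0.0.0/8 or ::1
}

function isLoopbackHost(host: string): boolean {
  return host === "localhost" || host === "127.0.0.1" || host === "[::1]";
}

// Previously a sec-fetch-site: cross-site header short-circuited to a
// rejection. Now any Origin-bearing request is judged by the loopback
// matcher (loopback peer + loopback Origin host), and requests without
// an Origin header fall through to the bearer check unchanged.
function allowBrowserRequest(input: GuardInput): boolean {
  if (input.origin === undefined) return true; // native client: bearer decides
  if (!input.peerIsLoopback) return false;
  try {
    return isLoopbackHost(new URL(input.origin).hostname);
  } catch {
    return false; // malformed Origin
  }
}
```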
---------
Co-authored-by: Devin Robison <drobison@nvidia.com>