Adds an in-process startup catchup pass to the BlueBubbles channel that
queries BB Server for messages delivered since a persisted per-account
cursor and re-feeds each through the existing processMessage pipeline.
Fixes the missed-message hole documented in #66721: BB's WebhookService
is fire-and-forget on POST failure, and MessagePoller only re-fires
webhooks on BB-side reconnection events, not on webhook-receiver
recovery.
- New extensions/bluebubbles/src/catchup.ts with singleflight per
accountId, cursor persistence via the canonical state-paths
resolver, bounded query (perRunLimit + maxAgeMinutes), failure-held
cursor, truncation-aware page-boundary advancement, future-cursor
recovery, isFromMe filter (pre- and post-normalization); see the sketch after this list.
- monitor.ts fires catchup as a background task after the webhook
target registers.
- config-schema.ts adds optional catchup block; accounts.ts adds
catchup to nestedObjectKeys for deep-merge per-account overrides.
- Dedupes against #66816's persistent inbound GUID cache.
- 22 scoped tests; full BB suite 411/411; pnpm check green; live E2E
on macOS 26.3 / BB Server 1.9.x recovered 3/3 missed messages.
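A minimal sketch of the singleflight + persisted-cursor shape described above. Only processMessage and the perRunLimit/maxAgeMinutes option names come from this change; every other identifier, type, and default value here is an illustrative stand-in, not the real catchup.ts code.

```ts
// Illustrative only -- helper names, types, and defaults are assumptions.
interface CatchupDeps {
  loadCursor(accountId: string): Promise<number | undefined>;
  saveCursor(accountId: string, cursor: number): Promise<void>;
  fetchMessagesSince(since: number, limit: number): Promise<{
    messages: Array<{ guid: string; isFromMe: boolean; dateCreated: number }>;
    truncated: boolean;
  }>;
  processMessage(msg: { guid: string; isFromMe: boolean; dateCreated: number }): Promise<void>;
}

const inFlight = new Map<string, Promise<void>>();

export function runCatchup(
  accountId: string,
  deps: CatchupDeps,
  opts = { perRunLimit: 200, maxAgeMinutes: 60 * 24 }, // placeholder defaults
): Promise<void> {
  // Singleflight: at most one catchup pass per account at a time.
  const existing = inFlight.get(accountId);
  if (existing) return existing;

  const run = (async () => {
    const floor = Date.now() - opts.maxAgeMinutes * 60_000; // bounded lookback
    const cursor = await deps.loadCursor(accountId);
    // Future-cursor recovery: a cursor ahead of "now" is clamped back.
    const since = Math.max(Math.min(cursor ?? floor, Date.now()), floor);

    const page = await deps.fetchMessagesSince(since, opts.perRunLimit);
    let lastSeen = since;
    for (const msg of page.messages) {
      if (msg.isFromMe) continue;       // skip our own messages pre-normalization
      await deps.processMessage(msg);   // existing pipeline; GUID cache dedupes replays
      lastSeen = Math.max(lastSeen, msg.dateCreated);
    }

    // Failure-held cursor: if processing above threw, we never reach this save.
    // Truncation-aware advancement: a truncated page only advances to the last
    // message processed (the page boundary), so the next run resumes there.
    await deps.saveCursor(accountId, page.truncated ? lastSeen : Date.now());
  })().finally(() => inFlight.delete(accountId));

  inFlight.set(accountId, run);
  return run;
}
```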
Closes #66721.
Co-authored-by: Omar Shahine <omar@shahine.com>
Remove the old qa-lab-runtime shim now that qa-runtime is the only live
consumer seam. This leaves one tiny shared runtime facade instead of two
parallel names for the same private helper surface.
Introduce a tiny generic qa-runtime seam for shared live-lane helpers and
repoint qa-matrix to it. This keeps the qa-lab host split while removing
the host-owned runtime name from runner code.
Drop the old qa-lab-runtime shim/export now that nothing consumes it and
keep the plugin-sdk surface aligned with the new seam.
BlueBubbles MessagePoller replays its ~1-week lookback window as new-message
webhooks after BB Server restart or reconnect. Add a persistent file-backed
GUID dedupe (TTL=7d) at the top of processMessage using createClaimableDedupe
from the Plugin SDK. Claim/finalize/release semantics ensure transient delivery
failures release the GUID so a later replay can retry.
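A minimal sketch of the claim/finalize/release flow at the top of processMessage. The createClaimableDedupe name and semantics come from this change; the exact signature, option names, import path, and the deliver helper are assumptions for illustration.

```ts
// Sketch only -- signature, options, and import path are assumed, not the real SDK surface.
import { createClaimableDedupe } from "@openclaw/plugin-sdk"; // hypothetical import path

const dedupe = createClaimableDedupe({
  name: "bluebubbles-inbound-guids",
  ttlMs: 7 * 24 * 60 * 60 * 1000, // TTL = 7d, matching the poller's lookback window
});

async function processMessage(msg: { guid: string }): Promise<void> {
  // Already delivered (or currently in flight) -> drop the replayed webhook.
  const claim = await dedupe.claim(msg.guid);
  if (!claim) return;

  try {
    await deliver(msg);      // existing delivery path (placeholder name)
    await claim.finalize();  // success: remember the GUID for the TTL window
  } catch (err) {
    await claim.release();   // transient failure: a later replay may retry this GUID
    throw err;
  }
}

declare function deliver(msg: { guid: string }): Promise<void>; // stand-in for the real pipeline
```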
Fixes #19176, #12053.
Co-authored-by: Omar Shahine <omar@shahine.com>
* fix(context-engine): pass deferred maintenance token budget
Thread tokenBudget through the after-turn runtime context so background context-engine maintenance reuses the real model context window instead of falling back to 128k. Also pass through a best-effort currentTokenCount from the latest call total and make the runtime context type explicit about both fields.
Regeneration-Prompt: |
OpenClaw already passed the real context token budget into direct context-engine calls like afterTurn and assemble, but deferred maintain() reused only the runtimeContext object and that object did not carry tokenBudget. Lossless Claw therefore fell back to 128k during background maintenance, which made the budget trigger fire much more aggressively than the live model context warranted. Thread the real contextTokenBudget into buildAfterTurnRuntimeContext so deferred maintenance receives the same budget, and pass a straightforward best-effort currentTokenCount from the latest call total while the relevant data is already in scope. Keep the change additive, update the runtime-context type, and cover the background maintenance/runtime-context behavior with focused tests.
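A minimal sketch of the threading. buildAfterTurnRuntimeContext, tokenBudget, and currentTokenCount are named by this change; the surrounding parameter and field names are assumptions.

```ts
// Sketch only -- field and parameter names beyond the ones named in the commit are assumed.
interface AfterTurnRuntimeContext {
  sessionId: string;
  // Both fields optional so the change stays additive for existing callers.
  tokenBudget?: number;        // real model context window, not the 128k fallback
  currentTokenCount?: number;  // best-effort total from the latest call
}

function buildAfterTurnRuntimeContext(params: {
  sessionId: string;
  contextTokenBudget?: number;
  latestCallTotalTokens?: number;
}): AfterTurnRuntimeContext {
  return {
    sessionId: params.sessionId,
    tokenBudget: params.contextTokenBudget,
    currentTokenCount: params.latestCallTotalTokens,
  };
}

// Deferred maintenance then sees the same budget as direct afterTurn/assemble calls,
// e.g. contextEngine.maintain({ runtimeContext: buildAfterTurnRuntimeContext({ ... }) }).
```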
* fix(context-engine): use prompt usage for deferred maintenance
* Docs: add Anthropic max_tokens investigation memo
Regeneration-Prompt: |
Investigate the reported OpenClaw cron isolated-agent failure where an
Anthropic Haiku run returned "max_tokens: must be greater than or equal to 1".
Do not implement a fix yet. Inspect the cron isolated-agent execution path,
the embedded runner, extra param plumbing, Anthropic transport code, and any
model-selection or token-budget logic that could synthesize maxTokens = 0.
Produce a concise maintainer memo with concrete file references, explain why
cron itself is not the component setting maxTokens, identify the most likely
root cause, describe the smallest repro shape, and recommend the cleanest fix.
* openclaw-e82: guard Anthropic Messages maxTokens
Regeneration-Prompt: |
Fix the Anthropic Messages path so OpenClaw never sends max_tokens <= 0
to Anthropic. Match the positive-number guard already used by the
Anthropic Vertex transport, but keep the change scoped: validate token
limits in src/agents/anthropic-transport-stream.ts where transport
options are resolved and where the final payload is assembled, fall back
to the model limit when a runtime override is zero, fail locally when no
positive token budget exists, and drop non-positive maxTokens from
src/agents/pi-embedded-runner/extra-params.ts so hidden config params do
not leak through. Add focused regression coverage for both the transport
and extra-param forwarding path, and remove the earlier investigation memo
from the branch so the PR diff only contains the fix.
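A minimal sketch of the guard shape. resolveMaxTokens and sanitizeExtraParams are illustrative stand-ins, not the actual exports of anthropic-transport-stream.ts or extra-params.ts; only the fallback/fail-local/drop behavior comes from this change.

```ts
// Sketch only -- helper names are assumed; behavior mirrors the commit description.
function resolveMaxTokens(override: number | undefined, modelLimit: number | undefined): number {
  // A zero or negative runtime override falls back to the model limit.
  if (typeof override === "number" && Number.isFinite(override) && override >= 1) {
    return Math.floor(override);
  }
  if (typeof modelLimit === "number" && Number.isFinite(modelLimit) && modelLimit >= 1) {
    return Math.floor(modelLimit);
  }
  // Fail locally rather than sending max_tokens <= 0 to Anthropic.
  throw new Error("No positive max_tokens budget available for Anthropic Messages request");
}

// Extra-params side: drop non-positive maxTokens so hidden config cannot leak through.
function sanitizeExtraParams(params: Record<string, unknown>): Record<string, unknown> {
  const { maxTokens, ...rest } = params;
  return typeof maxTokens === "number" && maxTokens >= 1
    ? { ...rest, maxTokens: Math.floor(maxTokens) }
    : rest;
}
```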
* fix: scope Anthropic max token guard
* fix: document Anthropic max token guard
* fix: floor Anthropic max token overrides
Remove the stale install metadata from the private qa-channel package.
The runner still loads from the repo checkout, but it should not
advertise an npm install path we do not support.