Commit Graph

19725 Commits

Author SHA1 Message Date
Vincent Koc
7821fae05d test(types): fix perf test follow-up mocks 2026-04-15 10:36:41 +01:00
Vincent Koc
7320dfc1ff test(perf): speed up slow cron infra and secrets specs 2026-04-15 10:22:43 +01:00
Vincent Koc
f49d9bcae9 test(gateway): harden non-isolated channel mocks 2026-04-15 10:02:05 +01:00
Srinivas Pavan
fb4395c1fe fix(cron): preserve all fields in announce delivery by removing summarization instruction (#65638)
* fix(cron): preserve all fields in announce delivery by removing summarization instruction

The delivery instruction appended to the cron agent prompt contained the word
'summary', causing LLMs to condense structured output non-deterministically and
drop fields on delivery. Replace with 'response' and add explicit instruction
to reproduce all fields exactly.

Fixes #58535

* chore(changelog): add cron announce entry

---------

Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
2026-04-15 09:40:26 +01:00
Vincent Koc
ea4889ecdc fix(update): keep dist verify compat-safe 2026-04-15 09:39:18 +01:00
Vincent Koc
9e665e4328 fix(ts): use typed runtime semver helpers 2026-04-15 09:20:26 +01:00
Vincent Koc
7f35f76914 fix(update): harden dist inventory handling 2026-04-15 09:16:46 +01:00
Ayaan Zaidi
a1d4eb255a fix(inventory): omit qa-matrix dist artifacts 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
2791b00e72 fix(build): move compat sidecars into src 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
8b79141997 fix(update): infer legacy bundled sidecars 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
2a8226f8e2 fix(postinstall): reject dist symlink escapes 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
64f258fc49 fix(update): keep downgrade follow-ups in-process 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
60e2ccbd5b fix(update): preserve legacy downgrade verify 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
aaa6b05f3b fix(update): preserve legacy global verify 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
5e7306bcfc fix(update): filter dist inventory to packed files 2026-04-15 13:22:04 +05:30
Ayaan Zaidi
18d0af3a13 fix(update): verify packaged dist inventory 2026-04-15 13:22:04 +05:30
Chunyue Wang
6aa4515798 fix(context-engine): gracefully degrade to legacy engine on third-party plugin resolution failure (#66930)
Merged via squash.

Prepared head SHA: 969c67716c
Co-authored-by: openperf <80630709+openperf@users.noreply.github.com>
Reviewed-by: @openperf
2026-04-15 14:59:29 +08:00
Ivan Fofanov
732db75279 fix: classify "No conversation found" as session_expired (#65028)
Merged via squash.

Prepared head SHA: f429ba2de0
Co-authored-by: Ivan-Fn <1247214+Ivan-Fn@users.noreply.github.com>
Co-authored-by: altaywtf <9790196+altaywtf@users.noreply.github.com>
Reviewed-by: @altaywtf
2026-04-15 09:31:55 +03:00
Omar Shahine
507b718917 feat(ui): add Model Auth status card to Overview dashboard (#66211)
* feat(gateway,ui): add Model Auth status card to Overview

Adds a new `models.authStatus` gateway endpoint that combines
`buildAuthHealthSummary()` (token expiry/status) with
`loadProviderUsageSummary()` (rate limits) into a single response
suitable for UI rendering. Strips credentials: only ships status,
expiry, remaining time, and rate-limit windows.

Adds a corresponding "Model Auth" card to the Overview dashboard
showing provider token status and rate limits at a glance. Attention
items are raised when OAuth tokens are expiring or expired.

Also catches the OAuth token-sink class of bug: if multiple profiles
exist per provider/account and tokens drift out of sync, this
surfaces immediately in the dashboard instead of silently falling
back to a different provider.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* CHANGELOG: note Model Auth status card on Overview

* UI/Overview: render Model Auth card during load with N/A placeholder

* models.authStatus: env-backed OAuth escape hatch + expectsOAuth missing signal

---------

Co-authored-by: Lobster <10343873+omarshahine@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 22:40:42 -07:00
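The aggregation the commit describes (combine token health with usage summaries, strip credentials before the response reaches the UI) can be sketched as below. All type and field names here are illustrative assumptions, not the real `buildAuthHealthSummary()`/`loadProviderUsageSummary()` payload shapes:

```typescript
// Hypothetical sketch of the models.authStatus aggregation. Only the shape
// of the combination and the credential-stripping step are illustrated.

interface AuthHealth {
  provider: string;
  status: "ok" | "expiring" | "expired";
  expiresAt?: number;   // epoch ms (assumed field)
  accessToken?: string; // credential: must never reach the UI
}

interface UsageSummary {
  provider: string;
  remaining: number;
  windowSeconds: number;
}

interface AuthStatusCardEntry {
  provider: string;
  status: "ok" | "expiring" | "expired";
  expiresInMs?: number;
  rateLimit?: { remaining: number; windowSeconds: number };
}

// Combine token health with rate-limit usage, dropping credentials:
// the returned entries carry only status, expiry, remaining time, and
// rate-limit windows, as the commit message specifies.
function buildAuthStatusCard(
  health: AuthHealth[],
  usage: UsageSummary[],
  now: number,
): AuthStatusCardEntry[] {
  const usageByProvider = new Map(
    usage.map((u): [string, UsageSummary] => [u.provider, u]),
  );
  return health.map((h) => {
    const u = usageByProvider.get(h.provider);
    return {
      provider: h.provider,
      status: h.status,
      expiresInMs: h.expiresAt !== undefined ? h.expiresAt - now : undefined,
      rateLimit: u
        ? { remaining: u.remaining, windowSeconds: u.windowSeconds }
        : undefined,
      // h.accessToken is intentionally omitted here.
    };
  });
}
```

Keeping the merge keyed by provider also makes the multi-profile drift case visible: every provider entry shows up with its own status rather than being masked by a silent fallback.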
Mason Huang
3d2f51c0a4 CLI/plugins: stop forced-unsafe installs from falling back to hook packs (#58909)
Merged via squash.

Prepared head SHA: 7cf146efb6
Co-authored-by: hxy91819 <8814856+hxy91819@users.noreply.github.com>

Reviewed-by: @hxy91819
2026-04-15 13:23:17 +08:00
Roger Chien
2e2cbdd19d fix(onboard): crash at channel selection on globally installed CLI (#66736)
* fix(channels): resolve bundled channel catalog from dist/extensions/ in published installs

* refactor(channels): delegate bundled channel catalog loader to resolveBundledPluginsDir

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-15 11:08:01 +07:00
Peter Steinberger
ec7635256b build: refresh bundled channel metadata 2026-04-15 05:01:43 +01:00
Peter Steinberger
5ca65c84cc fix: type media private-network request flag 2026-04-15 04:58:11 +01:00
Gustavo Madeira Santana
8db4bb7583 Reply: preserve phased block metadata 2026-04-14 23:44:41 -04:00
Peter Steinberger
0bc4472b7e fix: remove stale media override import 2026-04-15 03:57:15 +01:00
bladin
e0bf756b50 fix: handle OpenRouter Qwen3 reasoning_details streams (#66905) (thanks @bladin)
* fix(openrouter): handle reasoning_details field in Qwen3 stream parsing

Add support for the reasoning_details field returned by OpenRouter/Qwen3
models. Previously this field was not recognized, causing payloads=0 and
incomplete turn errors.

- Add reasoning_details handling in processOpenAICompletionsStream
- Extract text from reasoning_details array items with type reasoning.text
- Treat as thinking content, similar to other reasoning fields
- Add test case for reasoning_details handling

Fixes #66833

* fix(openrouter): keep tool calls with reasoning_details

* fix: handle OpenRouter Qwen3 reasoning_details streams (#66905) (thanks @bladin)

* fix: preserve streamed tool calls with reasoning deltas (#66905) (thanks @bladin)

---------

Co-authored-by: bladin <bladin@users.noreply.github.com>
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
2026-04-15 08:15:58 +05:30
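The extraction step the commit describes (pull text out of `reasoning_details` items of type `reasoning.text` and treat it as thinking content) can be sketched as follows. This is not the actual `processOpenAICompletionsStream` code; the delta shape beyond the fields named in the commit is an assumption:

```typescript
// Minimal sketch of extracting thinking text from an OpenRouter/Qwen3
// reasoning_details delta, so the turn is not counted as empty
// (the payloads=0 / incomplete-turn failure mode the fix addresses).

interface ReasoningDetail {
  type: string; // e.g. "reasoning.text"
  text?: string;
}

interface StreamDelta {
  content?: string;
  reasoning_details?: ReasoningDetail[];
}

// Collect text from reasoning.text items; other detail types (e.g.
// encrypted reasoning) are skipped rather than treated as errors.
function extractThinkingText(delta: StreamDelta): string {
  return (delta.reasoning_details ?? [])
    .filter((d) => d.type === "reasoning.text" && typeof d.text === "string")
    .map((d) => d.text as string)
    .join("");
}
```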
Jim Smith
0c0463b2b7 fix: restore allowPrivateNetwork for self-hosted STT endpoints (#66692) (thanks @jhsmith409)
* fix(audio): restore allowPrivateNetwork for self-hosted STT endpoints

resolveProviderExecutionContext built the request object passed to
transcribeAudio using only sanitizeConfiguredProviderRequest on the
tool-level config and entry — which strips allowPrivateNetwork. The
provider-level request config (models.providers.*.request) was never
included in the merge, so allowPrivateNetwork:true was silently dropped.

Additionally, resolveProviderRequestPolicyConfig only read
allowPrivateNetwork from params.allowPrivateNetwork (a direct parameter)
and ignored params.request?.allowPrivateNetwork even when it was present.

Fix both gaps:
- runner.entries.ts: use mergeModelProviderRequestOverrides with
  sanitizeConfiguredModelProviderRequest(providerConfig?.request) so
  models.providers.*.request.allowPrivateNetwork flows through to the
  media execution context
- provider-request-config.ts: fall back to
  params.request?.allowPrivateNetwork when params.allowPrivateNetwork is
  undefined

Fixes #66691. Regression introduced in v2026.4.14.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test(media-understanding): assert allowPrivateNetwork flows through resolveProviderExecutionContext

Regression test for the bug where providerConfig.request.allowPrivateNetwork
was dropped when building the AudioTranscriptionRequest passed to media
providers. Verifies that setting allowPrivateNetwork in the provider config
reaches the provider's request object after the fix to use
mergeModelProviderRequestOverrides + sanitizeConfiguredModelProviderRequest.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test(media-understanding): tighten allowPrivateNetwork regression types

* fix: restore allowPrivateNetwork for self-hosted STT endpoints (#66692) (thanks @jhsmith409)

---------

Co-authored-by: Jim Smith <jhsmith0@me.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
2026-04-15 08:05:37 +05:30
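The two gaps this commit closes can be sketched together. Parameter names mirror the commit text (`params.allowPrivateNetwork`, `params.request.allowPrivateNetwork`), but the types and helper signatures are illustrative, not the real OpenClaw API:

```typescript
// Hedged sketch of the allowPrivateNetwork fix: (1) the policy resolver
// falls back to the nested request config instead of ignoring it, and
// (2) provider-level request overrides survive the merge with the
// tool-level config.

interface RequestConfig {
  allowPrivateNetwork?: boolean;
}

interface PolicyParams {
  allowPrivateNetwork?: boolean; // direct parameter
  request?: RequestConfig;       // nested request config
}

// After the fix: consult request.allowPrivateNetwork when the direct
// parameter is undefined; an explicit false still wins over the nested value.
function resolveAllowPrivateNetwork(params: PolicyParams): boolean {
  return (
    params.allowPrivateNetwork ??
    params.request?.allowPrivateNetwork ??
    false
  );
}

// Provider-level overrides (models.providers.*.request) merged over the
// tool-level config, so allowPrivateNetwork is no longer silently dropped.
function mergeRequestOverrides(
  toolLevel: RequestConfig,
  providerLevel: RequestConfig | undefined,
): RequestConfig {
  return { ...toolLevel, ...providerLevel };
}
```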
Serhii
ff4edd0559 fix: restore Telegram native auto defaults (#66843) (thanks @kashevk0)
* fix(config): restore Telegram native commands under auto defaults

* chore: trigger CI rerun

* test(config): split native auto-default regressions

* fix: restore Telegram native auto defaults (#66843) (thanks @kashevk0)

---------

Co-authored-by: Ayaan Zaidi <hi@obviy.us>
2026-04-15 07:46:35 +05:30
François Martin
734bb9c2e7 Telegram/documents: sanitize binary payloads to prevent prompt input inflation (#66877)
Merged via squash.

Prepared head SHA: 09a87c184f
Co-authored-by: martinfrancois <14319020+martinfrancois@users.noreply.github.com>
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras
2026-04-14 20:53:00 -04:00
Gustavo Madeira Santana
0c4e0d7030 memory: block dreaming self-ingestion (#66852)
Merged via squash.

Prepared head SHA: 4742656a0d
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras
2026-04-14 20:29:12 -04:00
Peter Steinberger
5702ab695b test(e2e): harden beta preflight failures 2026-04-15 01:27:07 +01:00
Vincent Koc
f3a5b96b62 test(gateway): harden runtime services delivery recovery assertion 2026-04-15 01:01:28 +01:00
Peter Steinberger
5ed9016914 fix: narrow a2ui bundle hash inputs 2026-04-15 00:46:40 +01:00
Peter Steinberger
956b04975d build: refresh A2UI bundle hash 2026-04-15 00:42:05 +01:00
Vincent Koc
97ee0c6fd3 perf(migrations): trim legacy migration and bind cold paths 2026-04-15 00:38:45 +01:00
Vincent Koc
16c949ed5f test(agents): trim hot replay approval suites 2026-04-15 00:29:09 +01:00
Vincent Koc
87ef32c937 perf(tests): avoid bundled channel cold-loads in hot paths 2026-04-15 00:11:43 +01:00
Vincent Koc
06715db218 test(gateway): avoid startup auth module reset churn 2026-04-14 23:59:30 +01:00
Vincent Koc
ed1dfe23d4 perf(commands): trim sessions cold-path imports 2026-04-14 23:59:30 +01:00
Josh Avant
1769fb2aa1 fix(secrets): align SecretRef inspect/strict behavior across preload/runtime paths (#66818)
* Config: add inspect/strict SecretRef string resolver

* CLI: pass resolved/source config snapshots to plugin preload

* Slack: keep HTTP route registration config-only

* Providers: normalize SecretRef handling for auth and web tools

* Secrets: add Exa web search target to registry and docs

* Telegram: resolve env SecretRef tokens at runtime

* Agents: resolve custom provider env SecretRef ids

* Providers: fail closed on blocked SecretRef fallback

* Telegram: enforce env SecretRef policy for runtime token refs

* Status/Providers/Telegram: tighten SecretRef preload and fallback handling

* Providers: enforce env SecretRef policy checks in fallback auth paths

* fix: add SecretRef lifecycle changelog entry (#66818) (thanks @joshavant)
2026-04-14 17:59:28 -05:00
Gustavo Madeira Santana
4491bdad76 QA: drop dead qa-lab-runtime shim
Remove the old qa-lab-runtime shim now that qa-runtime is the only live
consumer seam. This leaves one tiny shared runtime facade instead of two
parallel names for the same private helper surface.
2026-04-14 18:53:36 -04:00
Gustavo Madeira Santana
95be2c1605 QA: replace qa-lab-runtime with qa-runtime
Introduce a tiny generic qa-runtime seam for shared live-lane helpers and
repoint qa-matrix to it. This keeps the qa-lab host split while removing
the host-owned runtime name from runner code.

Drop the old qa-lab-runtime shim/export now that nothing consumes it and
keep the plugin-sdk surface aligned with the new seam.
2026-04-14 18:53:25 -04:00
Vincent Koc
59b5db5cbf test(perf): trim slow gateway, daemon, and command specs 2026-04-14 23:40:03 +01:00
Vincent Koc
d5b1329bf3 test(perf): speed up slow launchd and sessions specs 2026-04-14 23:34:09 +01:00
Peter Steinberger
e1e0120c0d test(live): skip codex html interruptions in modern sweep 2026-04-14 23:31:07 +01:00
Josh Lehman
75e7fc97f8 fix: preserve runtime token budget in deferred context-engine maintenance (#66820)
* fix(context-engine): pass deferred maintenance token budget

Thread tokenBudget through the after-turn runtime context so background context-engine maintenance reuses the real model context window instead of falling back to 128k. Also pass through a best-effort currentTokenCount from the latest call total and make the runtime context type explicit about both fields.

Regeneration-Prompt: |
  OpenClaw already passed the real context token budget into direct context-engine calls like afterTurn and assemble, but deferred maintain() reused only the runtimeContext object and that object did not carry tokenBudget. Lossless Claw therefore fell back to 128k during background maintenance, which made budget-trigger fire much more aggressively than the live model context warranted. Thread the real contextTokenBudget into buildAfterTurnRuntimeContext so deferred maintenance receives the same budget, and pass a straightforward best-effort currentTokenCount from the latest call total while the relevant data is already in scope. Keep the change additive, update the runtime-context type, and cover the background maintenance/runtime-context behavior with focused tests.

* fix(context-engine): use prompt usage for deferred maintenance
2026-04-14 15:30:37 -07:00
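The budget-threading fix above can be sketched as follows, under assumed names: before the fix, the runtime context handed to deferred `maintain()` omitted `tokenBudget`, so maintenance fell back to a 128k default regardless of the live model's window. The constant and both field names here follow the commit text; everything else is illustrative:

```typescript
// Sketch of the deferred-maintenance budget fix (assumed shapes).

const FALLBACK_TOKEN_BUDGET = 128_000; // the default the bug fell back to

interface AfterTurnRuntimeContext {
  tokenBudget?: number;       // real model context window, once threaded through
  currentTokenCount?: number; // best-effort, from the latest call total
}

// After the fix: the context carries the real budget, so background
// maintenance no longer sees the 128k fallback.
function buildAfterTurnRuntimeContext(
  contextTokenBudget: number,
  latestCallTotalTokens: number | undefined,
): AfterTurnRuntimeContext {
  return {
    tokenBudget: contextTokenBudget,
    currentTokenCount: latestCallTotalTokens,
  };
}

// Where the fallback still applies: only when no budget was provided at all.
function effectiveBudget(ctx: AfterTurnRuntimeContext): number {
  return ctx.tokenBudget ?? FALLBACK_TOKEN_BUDGET;
}
```

With the real budget in place, the budget-based maintenance trigger fires relative to the actual model window rather than the 128k default, which is why the commit notes it previously fired "much more aggressively" than warranted.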
Peter Steinberger
7026ddadba test(gateway): tolerate loaded hook enqueue timing 2026-04-14 23:05:18 +01:00
Josh Lehman
ef3ac6a58e fix: guard Anthropic Messages max tokens (#66664)
* Docs: add Anthropic max_tokens investigation memo

Regeneration-Prompt: |
  Investigate the reported OpenClaw cron isolated-agent failure where an
  Anthropic Haiku run returned "max_tokens: must be greater than or equal to 1".
  Do not implement a fix yet. Inspect the cron isolated-agent execution path,
  the embedded runner, extra param plumbing, Anthropic transport code, and any
  model-selection or token-budget logic that could synthesize maxTokens = 0.
  Produce a concise maintainer memo with concrete file references, explain why
  cron itself is not the component setting maxTokens, identify the most likely
  root cause, describe the smallest repro shape, and recommend the cleanest fix.

* openclaw-e82: guard Anthropic Messages maxTokens

Regeneration-Prompt: |
  Fix the Anthropic Messages path so OpenClaw never sends max_tokens <= 0
  to Anthropic. Match the positive-number guard already used by the
  Anthropic Vertex transport, but keep the change scoped: validate token
  limits in src/agents/anthropic-transport-stream.ts where transport
  options are resolved and where the final payload is assembled, fall back
  to the model limit when a runtime override is zero, fail locally when no
  positive token budget exists, and drop non-positive maxTokens from
  src/agents/pi-embedded-runner/extra-params.ts so hidden config params do
  not leak through. Add focused regression coverage for both the transport
  and extra-param forwarding path, and remove the earlier investigation memo
  from the branch so the PR diff only contains the fix.

* fix: scope Anthropic max token guard

* fix: document Anthropic max token guard

* fix: floor Anthropic max token overrides
2026-04-14 15:05:04 -07:00
Vincent Koc
9b25c8f8e1 perf(tests): trim plugin and gateway hot paths 2026-04-14 23:03:23 +01:00
Peter Steinberger
54cf4cd857 test(agents): isolate shared subagent state 2026-04-14 22:49:31 +01:00