Commit Graph

32374 Commits

Author SHA1 Message Date
Peter Steinberger
d4e1a790ab refactor: share media dimension parsing 2026-04-19 05:11:58 +01:00
Peter Steinberger
34abb441f6 refactor: share provider plugin id filtering 2026-04-19 05:09:57 +01:00
Peter Steinberger
984ecd98ca refactor: share legacy config migration pipeline 2026-04-19 05:06:58 +01:00
Peter Steinberger
528f296cfc refactor: share acp identity construction 2026-04-19 05:04:19 +01:00
Peter Steinberger
45381135df refactor: share conversation target normalization 2026-04-19 05:02:03 +01:00
Peter Steinberger
2a14f76964 refactor: share plugin validation diagnostics 2026-04-19 04:59:40 +01:00
Peter Steinberger
ca3e5ffd89 refactor: reduce subagent requester wrapper duplication 2026-04-19 04:52:51 +01:00
Peter Steinberger
9686e518bc test: share media generation reset helpers 2026-04-19 04:48:52 +01:00
Peter Steinberger
5ca33f7cb4 refactor: share model resolve fallback lookup 2026-04-19 04:46:11 +01:00
Peter Steinberger
dfe2e81829 refactor: share provider replay hook params 2026-04-19 04:40:22 +01:00
Peter Steinberger
3eed321081 refactor: share model allowlist parsing 2026-04-19 04:34:51 +01:00
Peter Steinberger
2a35ea4f07 test: share pi embedded helper setup 2026-04-19 04:31:01 +01:00
Peter Steinberger
efda761724 refactor: share cron flat recovery 2026-04-19 04:28:03 +01:00
Peter Steinberger
c63d6bf508 refactor: reuse codex search config types 2026-04-19 04:25:12 +01:00
Peter Steinberger
bcbb3de760 test: reuse run attempt fixture 2026-04-19 04:22:05 +01:00
Peter Steinberger
590474a9a4 test: share compact session fixture 2026-04-19 04:19:35 +01:00
Peter Steinberger
10e14bd5be test: reuse sanitize assistant fixture 2026-04-19 04:16:58 +01:00
Peter Steinberger
bfea6bebc9 test: share subagent cleanup lookup 2026-04-19 04:14:55 +01:00
Peter Steinberger
ab4eb5aa94 test: share anthropic cache payload fixture 2026-04-19 04:12:37 +01:00
Peter Steinberger
f5c49758fc test: share gateway exec allowlist fixture 2026-04-19 04:10:19 +01:00
Peter Steinberger
394c7a2357 test: share exec approval disabled fixture 2026-04-19 04:07:54 +01:00
Peter Steinberger
91ad6c2739 test: share mcp cache tool turn helper 2026-04-19 04:05:28 +01:00
Peter Steinberger
04697eca88 refactor: share channel action params 2026-04-19 04:03:16 +01:00
Peter Steinberger
1908967cfa test: share auth profile env cleanup 2026-04-19 04:00:36 +01:00
Peter Steinberger
f54cf74ef6 test: share BTW sanitized user assertion 2026-04-19 03:58:08 +01:00
Peter Steinberger
44166f7cfe test: share live model switch params 2026-04-19 03:55:35 +01:00
Peter Steinberger
6a87d6e814 test: share model fallback probe assertions 2026-04-19 03:52:57 +01:00
Peter Steinberger
0f871664c5 test: share bootstrap heartbeat fixture 2026-04-19 03:49:35 +01:00
Peter Steinberger
0a5515297e test: share skill auth config fixtures 2026-04-19 03:47:24 +01:00
Peter Steinberger
97a3089cec test: share unsafe skill scan fixture 2026-04-19 03:44:29 +01:00
Peter Steinberger
555f74cf67 test: share escaped bundled skill fixture 2026-04-19 03:42:19 +01:00
Peter Steinberger
9e93aa0c32 test: share ClawHub skill update assertion 2026-04-19 03:40:13 +01:00
Peter Steinberger
bf5b6cba70 test: share usage accumulator fixtures 2026-04-19 03:37:36 +01:00
stain lu
24b915ed41 fix: surface preserved stale session totals (#67695) (thanks @stainlu)
* fix(agents): preserve session totalTokens when provider omits usage data

Fixes #67667

When a provider (e.g. MiniMax via Anthropic endpoint) does not return
usage data in its API response, hasNonzeroUsage() is false and the
entire totalTokens update block in persistSessionAfterRun is skipped.
This resets totalTokens to undefined, causing /status to show 0%
context usage even after compaction has calculated real token counts.

The fix preserves the previous totalTokens value when the current run
has no usage data, marking it as stale (totalTokensFresh: false) so
display layers know it is from a prior run. This is strictly better
than null — the user sees the last known context usage instead of 0%.

* ci: retrigger after flaky gateway shutdown test

* test(agents): port totalTokens regression test to withTempSessionStore helper post-rebase

* fix(status): surface preserved stale session totals

* fix: surface preserved stale session totals (#67695) (thanks @stainlu)

---------

Co-authored-by: Ayaan Zaidi <hi@obviy.us>
2026-04-19 08:06:36 +05:30
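The preserve-on-missing-usage rule described in this commit message can be sketched as follows. This is a minimal illustration, not the project's actual code: `SessionTotals`, `nextTotals`, and the `usage` shape are hypothetical names standing in for the real `persistSessionAfterRun` logic; only `hasNonzeroUsage()` and the `totalTokens` / `totalTokensFresh` fields come from the message above.

```typescript
// Hypothetical types standing in for the real session-store shapes.
type Usage = { totalTokens?: number };
type SessionTotals = { totalTokens?: number; totalTokensFresh: boolean };

// True only when the provider actually reported a nonzero token count.
function hasNonzeroUsage(usage?: Usage): boolean {
  return !!usage && (usage.totalTokens ?? 0) > 0;
}

// Sketch of the fix: take fresh usage when present; otherwise keep the
// previous totalTokens and mark it stale instead of resetting to undefined.
function nextTotals(prev: SessionTotals, usage?: Usage): SessionTotals {
  if (hasNonzeroUsage(usage)) {
    return { totalTokens: usage!.totalTokens, totalTokensFresh: true };
  }
  return { totalTokens: prev.totalTokens, totalTokensFresh: false };
}
```

With this shape, a run against a provider that omits usage data leaves the last known `totalTokens` in place, so a status display can show the prior context usage (annotated as stale) rather than 0%.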
Peter Steinberger
8233ca6401 test: share sandbox docker create fixture 2026-04-19 03:35:20 +01:00
Peter Steinberger
bf2fbf071b test: share vertex ADC auth fixture 2026-04-19 03:32:49 +01:00
Peter Steinberger
199f4d78d9 test: share anthropic payload fixtures 2026-04-19 03:29:43 +01:00
stain lu
4da808da50 fix: scope nested agent lanes per target session (#67785) (thanks @stainlu)
* fix(agents): scope nested lane per target session to stop cross-agent blocking

* docs(agents): note per-session nested-lane lifecycle parity with session:* lanes

* refactor(agents): distill nested lane helpers

* fix: scope nested agent lanes per target session (#67785) (thanks @stainlu)

---------

Co-authored-by: Ayaan Zaidi <hi@obviy.us>
2026-04-19 07:58:55 +05:30
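The lane-scoping idea in this commit can be sketched as below, under stated assumptions: `LaneRegistry`, `run`, and `nestedLaneKey` are illustrative names, not the project's API. The point mirrors the commit subject: a single shared nested lane serializes all agents' nested work, while keying the lane by target session lets each session's queue drain independently.

```typescript
// Hypothetical promise-chain lane: work under one key runs serially,
// while distinct keys proceed independently of each other.
class LaneRegistry {
  private lanes = new Map<string, Promise<void>>();

  run(key: string, task: () => Promise<void>): Promise<void> {
    const tail = this.lanes.get(key) ?? Promise.resolve();
    const next = tail.then(task);
    // Swallow errors in the stored tail so one failure can't wedge the lane.
    this.lanes.set(key, next.catch(() => {}));
    return next;
  }
}

// The fix in a nutshell: scope the nested lane key per target session,
// so agent A's nested subagent work cannot block agent B's.
const nestedLaneKey = (targetSessionId: string) => `nested:${targetSessionId}`;
```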
Peter Steinberger
67bd9edd8b test: share cache trace memory fixture 2026-04-19 03:27:05 +01:00
Peter Steinberger
6de5f92835 test: share command delivery media fixture 2026-04-19 03:24:43 +01:00
Peter Steinberger
83a0f1fd52 test: share subagent cleanup decision fixture 2026-04-19 03:22:10 +01:00
Peter Steinberger
314654bd0f test: share auth profile env fixture 2026-04-19 03:19:57 +01:00
Peter Steinberger
22d99ee9df test: share models config env fixture 2026-04-19 03:17:36 +01:00
Peter Steinberger
8f92c0607c test: share transcript replay defaults fixture 2026-04-19 03:15:23 +01:00
Ayaan Zaidi
74f0dc87de fix: always send openai stream usage flag (#68746) (thanks @kagura-agent) 2026-04-19 07:44:48 +05:30
Ayaan Zaidi
43f6ffd0ae test: distill openai stream usage regression coverage 2026-04-19 07:44:48 +05:30
kagura-agent
c560793482 fix: always send stream_options.include_usage when streaming openai-completions
Backends like llama-cpp and LM Studio require stream_options: { include_usage: true }
in the request payload to report token usage in streaming responses.
buildOpenAICompletionsParams() previously gated this behind supportsUsageInStreaming
compat detection, which excluded non-standard and custom endpoints. The OpenAI SDK
sends this unconditionally, so we now do the same.

Fixes #68707
2026-04-19 07:44:48 +05:30
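The change described above amounts to removing a compat-detection gate around one request field. A minimal sketch, assuming a simplified parameter shape (the real `buildOpenAICompletionsParams()` takes more inputs; `CompletionsParams` here is illustrative):

```typescript
// Simplified stand-in for the OpenAI completions request payload.
type CompletionsParams = {
  model: string;
  stream: boolean;
  stream_options?: { include_usage: boolean };
};

function buildOpenAICompletionsParams(model: string, stream: boolean): CompletionsParams {
  const params: CompletionsParams = { model, stream };
  if (stream) {
    // Previously gated behind supportsUsageInStreaming compat detection.
    // Backends like llama-cpp and LM Studio need this to report token
    // usage, and the OpenAI SDK sends it unconditionally, so always set it.
    params.stream_options = { include_usage: true };
  }
  return params;
}
```

Non-streaming requests still omit `stream_options`, since the field only applies when `stream` is true.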
Peter Steinberger
1212412ff1 test: share context window model fixture 2026-04-19 03:12:59 +01:00
Peter Steinberger
a56aa6ccbe test: share model compat streaming fixture 2026-04-19 03:10:47 +01:00
Peter Steinberger
59032f63b1 test: share compact skill prompt fixture 2026-04-19 03:08:24 +01:00