* Auth: fix native model profile selection
Fix native `/model ...@profile` targeting so profile selections persist onto the intended session, and preserve explicit session auth-profile overrides even when stored auth order prefers another profile. Update the reply/session regressions to use placeholder example.test profile ids.
Regeneration-Prompt: |
Native `/model ...@profile` commands in chat were acknowledging the requested auth profile but later runs still used another account. Fix the target-session handling so native slash commands mutate the real chat session rather than a slash-session surrogate, and keep explicit session auth-profile overrides from being cleared just because stored provider order prefers another profile. Update the tests to cover the target-session path and the override-preservation behavior, and use placeholder profile ids instead of real email addresses in test fixtures.
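The target-session fix described above can be sketched roughly as follows. This is an illustrative model only: `Session`, `resolveTargetSession`, and `applyModelSelection` are hypothetical names, not the actual OpenClaw API; the point is that the command mutates the originating chat session, not a slash-session surrogate.

```typescript
// Hypothetical sketch: names and shapes are illustrative, not OpenClaw's API.
interface Session {
  id: string;
  modelOverride?: string;
  authProfileOverride?: string;
}

// Resolve which session a native slash command should mutate: the bug shape
// was that writes landed on a surrogate session created for the command, so
// the acknowledged profile never persisted onto the chat session.
function resolveTargetSession(
  chatSession: Session,
  slashSurrogate: Session | undefined,
): Session {
  return chatSession; // always mutate the real chat session
}

// Apply a `/model provider/model@profile` selection onto the target session.
function applyModelSelection(target: Session, spec: string): Session {
  const atIdx = spec.lastIndexOf("@");
  if (atIdx > 0) {
    return {
      ...target,
      modelOverride: spec.slice(0, atIdx),
      authProfileOverride: spec.slice(atIdx + 1),
    };
  }
  return { ...target, modelOverride: spec };
}
```

Under this model, a later run reads `authProfileOverride` from the chat session itself, so the selection survives past the acknowledging turn.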
* Auth: honor explicit user-locked profiles in runner
Allow an explicit user-selected auth profile to run even when per-agent auth-state order excludes it. Keep auth-state order for automatic selection and failover, and add an embedded runner regression that seeds stored order with one profile while verifying a different user-locked profile still executes.
Regeneration-Prompt: |
The remaining bug after fixing native `/model ...@profile` persistence was in the embedded runner itself. A user could explicitly select a valid auth profile for a provider, but the run still failed if per-agent auth-state order did not include that profile. Preserve the intended semantics by validating user-locked profiles directly for provider match and credential eligibility, then using them without requiring membership in resolved auto-order. Add a regression in the embedded auth-profile rotation suite where stored order only includes one OpenAI profile but a different user-locked profile is chosen and must still be used.
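The intended runner semantics above can be sketched as a small selection helper. All names here (`AuthProfile`, `pickRunProfile`) are hypothetical: the sketch only shows that an explicit user lock is validated directly for provider match and credential eligibility, while automatic selection and failover still follow the stored order.

```typescript
// Hypothetical sketch of the intended runner semantics; not OpenClaw's API.
interface AuthProfile {
  id: string;
  provider: string;
  hasCredentials: boolean;
}

function pickRunProfile(
  profiles: AuthProfile[],
  autoOrder: string[], // per-agent auth-state order; may exclude the lock
  provider: string,
  userLockedId?: string,
): AuthProfile | undefined {
  if (userLockedId) {
    const locked = profiles.find((p) => p.id === userLockedId);
    // Honor the explicit lock when it targets this provider and has usable
    // credentials, without requiring membership in the resolved auto-order.
    if (locked && locked.provider === provider && locked.hasCredentials) {
      return locked;
    }
    return undefined; // an invalid lock fails rather than silently swapping
  }
  // Automatic selection and failover still respect the stored order.
  for (const id of autoOrder) {
    const p = profiles.find((x) => x.id === id);
    if (p && p.provider === provider && p.hasCredentials) return p;
  }
  return undefined;
}
```

This mirrors the regression described above: stored order seeds only one profile, yet a different user-locked profile must still be chosen.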
* Changelog: note explicit auth profile selection fix
Add the required Unreleased changelog line covering this PR's fix: explicit auth-profile selections persist on the target session and are honored by the runner.
Regeneration-Prompt: |
The PR needed a mandatory CHANGELOG.md entry under Unreleased/Fixes. Add a concise user-facing line describing that native `/model ...@profile` selections now persist on the target session and explicit user-locked OpenAI Codex auth profiles are honored even when per-agent auth order excludes them, and include the PR number plus thanks attribution for the PR author.
* Context engine: plumb prompt cache runtime context
Add a typed prompt-cache payload to the context-engine runtime context and populate it from the embedded runner's resolved retention, last-call usage, cache-break observation, and cache-touch metadata. Also pass the same payload through the retry compaction runtime context when a run attempt already has it.
Regeneration-Prompt: |
Expose OpenClaw prompt-cache telemetry to context engines in a narrow,
additive way without changing compaction policy. Keep the public change on
the OpenClaw side only: add a typed promptCache payload to the context-engine
runtime context, thread it into afterTurn, and also into compact where the
existing run loop already has the data cheaply available.
Use OpenClaw's resolved cache retention, not raw config. Use last-call usage
for the new payload, not accumulated retry or tool-loop totals. Reuse the
existing prompt-cache observability result and tracked change causes instead
of inventing a new heuristic. If cache-touch metadata is already available
from the cache-TTL bookkeeping, include it; do not invent expiry timestamps
for providers where OpenClaw cannot know them confidently.
Keep the interface backward-compatible for engines that ignore the new field.
Add focused tests around the existing attempt/context-engine helpers and the
compaction runtime-context propagation path rather than broad new integration
coverage.
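A possible shape for the additive payload and its propagation is sketched below. Field names (`PromptCachePayload`, `retentionMs`, `lastCallUsage`, and so on) are assumptions for illustration, not the actual OpenClaw types; the sketch shows the constraints above: resolved retention rather than raw config, last-call usage rather than accumulated totals, and optional fields that are only populated when the run loop already has the data.

```typescript
// Hypothetical field names; illustrative only.
interface PromptCachePayload {
  retentionMs: number; // resolved cache retention, not raw config
  lastCallUsage: {
    // last call only, never accumulated retry or tool-loop totals
    inputTokens: number;
    cachedInputTokens: number;
  };
  cacheBreak?: { cause: string }; // reuses tracked change causes
  lastTouchedAtMs?: number; // only when cache-TTL bookkeeping has it
}

interface RuntimeContext {
  // Optional so engines that ignore the field stay backward-compatible.
  promptCache?: PromptCachePayload;
}

// Thread an already-available payload into a runtime context (e.g. the retry
// compaction path) without fabricating data the attempt does not have.
function withPromptCache(
  base: RuntimeContext,
  payload: PromptCachePayload | undefined,
): RuntimeContext {
  return payload ? { ...base, promptCache: payload } : base;
}
```

Leaving `promptCache` optional keeps the interface additive: existing engines type-check and run unchanged.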
* Agents: fix prompt-cache afterTurn usage
Regeneration-Prompt: |
Fix PR #62179 so context-engine prompt-cache metadata uses only the current attempt's usage. The review comment pointed out that early exits could reuse a prior turn's assistant usage when no new assistant message was produced. Restrict the prompt-cache lastCallUsage lookup to assistant messages added after prePromptMessageCount, and fall back to current-attempt usage totals instead of stale snapshot history. Also repair the PR's new context-engine test typings and add a regression test for the stale prior-turn case. Separately, doctor-state-integrity and config/talk had import-only breakage already present on origin/main; because it blocked build/check and the gateway-watch regression harness, include the minimum unblocking imports as well.
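The usage-lookup restriction above can be sketched as a small helper. The names (`lastCallUsageForAttempt`, `prePromptMessageCount`) follow the description but are otherwise hypothetical: scan only messages appended during the current attempt, and on an early exit with no new assistant message, return the attempt's own totals instead of a stale prior-turn snapshot.

```typescript
// Hypothetical sketch of the review fix; names are illustrative.
interface Usage {
  inputTokens: number;
  cachedInputTokens: number;
}
interface Message {
  role: "user" | "assistant";
  usage?: Usage;
}

function lastCallUsageForAttempt(
  messages: Message[],
  prePromptMessageCount: number, // message count before this attempt's prompt
  attemptTotals: Usage,
): Usage {
  // Only assistant messages added during the current attempt qualify.
  for (let i = messages.length - 1; i >= prePromptMessageCount; i--) {
    const m = messages[i];
    if (m.role === "assistant" && m.usage) return m.usage;
  }
  // Early exit: no new assistant message, so use current-attempt totals
  // rather than a prior turn's usage.
  return attemptTotals;
}
```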
* Agents: document prompt-cache context
* Agents: address prompt-cache review feedback
* Doctor: drop unused isRecord import