mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 08:00:42 +00:00
* fix(amazon-bedrock): inject cache points for application inference profile ARNs

  pi-ai's internal supportsPromptCaching checks model.id for specific Claude
  model name patterns (e.g. "-4-", "claude-3-7-sonnet"), which fails for
  application inference profile ARNs that don't contain the model name. This
  silently breaks prompt caching for Bedrock users who route through
  application inference profiles.

  Work around this by detecting when pi-ai would miss cache point injection
  (via a piAiWouldInjectCachePoints mirror) and patching the Converse API
  payload via onPayload to add cachePoint blocks to the system prompt and the
  last user message, matching the same format pi-ai uses natively.

  The fix is safe:
  - Checks for existing cache points to avoid double-injection
  - Respects cacheRetention: "none"
  - Defaults to "short" retention (matching pi-ai's default)
  - Becomes a no-op once upstream pi-mono#2925 is fixed

  Fixes #19279
  Upstream: https://github.com/badlogic/pi-mono/issues/2925

* fix(amazon-bedrock): tighten app-profile cache injection

---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
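The injection described above can be sketched roughly as follows. This is a minimal illustration, not the actual openclaw patch: the payload types, the `injectCachePoints` helper name, and the exact placement logic are assumptions; the `{ cachePoint: { type: "default" } }` content-block shape is the format the Bedrock Converse API uses for cache checkpoints.

```typescript
// Hypothetical sketch of the workaround: append cachePoint blocks to the
// system prompt and the last user message of a Converse-style payload,
// unless they are already present or caching is disabled.

type ContentBlock = { text?: string; cachePoint?: { type: string } };
type Message = { role: "user" | "assistant"; content: ContentBlock[] };
type ConversePayload = { system?: ContentBlock[]; messages: Message[] };

function hasCachePoint(blocks: ContentBlock[] | undefined): boolean {
  return (blocks ?? []).some((b) => b.cachePoint !== undefined);
}

function injectCachePoints(
  payload: ConversePayload,
  cacheRetention: "short" | "none" = "short", // "short" mirrors pi-ai's default
): ConversePayload {
  if (cacheRetention === "none") return payload; // respect the opt-out

  // Cache the system prompt, skipping if a cache point already exists
  // (keeps this a no-op once pi-ai injects the blocks itself).
  if (payload.system && !hasCachePoint(payload.system)) {
    payload.system.push({ cachePoint: { type: "default" } });
  }

  // Cache everything up to the last user message.
  const lastUser = [...payload.messages]
    .reverse()
    .find((m) => m.role === "user");
  if (lastUser && !hasCachePoint(lastUser.content)) {
    lastUser.content.push({ cachePoint: { type: "default" } });
  }

  return payload;
}
```

Because the function checks for existing cache points before appending, calling it on a payload that pi-ai has already processed changes nothing, which is what makes the workaround safe to leave in place after the upstream fix lands.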