Filter SwiftPM dependency build results from the manual macOS CodeQL shard before upload. Verified via a workflow sanity check, local jq filtering, and a profile=macos-security branch run (15m54s). PR CI shows the same unrelated extensions/memory-core timeout failure currently present on main.
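The actual verification used jq locally; as a rough illustration of the filtering step, here is a sketch in TypeScript that drops SARIF results located under SwiftPM's dependency checkout directory. The `.build/checkouts/` prefix and the minimal SARIF shapes are assumptions, not the shard's actual filter:

```typescript
// Minimal SARIF shapes used by the filter (real SARIF logs have many more fields).
interface SarifLocation {
  physicalLocation?: { artifactLocation?: { uri?: string } };
}
interface SarifResult { locations?: SarifLocation[] }
interface SarifLog { runs: { results?: SarifResult[] }[] }

// Drop results where any location points into the SwiftPM dependency
// checkout directory (".build/checkouts/" is an assumed path here).
function filterDependencyResults(log: SarifLog, prefix = ".build/checkouts/"): SarifLog {
  const inDeps = (r: SarifResult) =>
    (r.locations ?? []).some(
      (l) => l.physicalLocation?.artifactLocation?.uri?.includes(prefix) ?? false,
    );
  return {
    ...log,
    runs: log.runs.map((run) => ({
      ...run,
      results: (run.results ?? []).filter((r) => !inDeps(r)),
    })),
  };
}
```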
End-to-end testing on macOS + BlueBubbles + ElevenLabs walked through three CAF flavors before landing on the format Apple's Messages.app actually emits when a user records a native iMessage voice memo:
- PCM int16 @ 44.1 kHz CAF: BlueBubbles' internal `afconvert -f m4af -d aac` conversion fails; the original CAF reaches iMessage but renders with 0 s duration.
- AAC @ 22.05 kHz mono CAF: BlueBubbles' conversion succeeds and the server silently downgrades the delivery, sending the converted MP3 as a generic audio attachment.
- **Opus @ 24 kHz mono CAF**: byte-identical to the descriptor block Apple's Messages.app produces; BlueBubbles passes it through unchanged and iMessage renders a native voice-memo bubble with proper duration and waveform UI.
Adds an opt-in `tts.voice.preferAudioFileFormat` channel capability and a macOS `afconvert`-backed pre-transcode in the speech-core pipeline. BlueBubbles declares `preferAudioFileFormat: "caf"`. Other channels are unaffected. Falls back to the original buffer when the host platform, the source/target pair, or the transcoder process can't produce the preferred container — so non-Darwin hosts and unsupported provider combinations are unchanged.
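The fallback shape described above can be sketched roughly as follows. All names here (`maybePreTranscode`, `PreTranscodeOpts`, the supported source set) are assumptions for illustration, not the actual speech-core pipeline API:

```typescript
type TranscodeFn = (src: Uint8Array, target: "caf") => Promise<Uint8Array>;

interface PreTranscodeOpts {
  platform: string;                // host platform, e.g. "darwin"
  preferAudioFileFormat?: string;  // channel capability, e.g. "caf"
  sourceFormat: string;            // container from the TTS provider, e.g. "mp3"
  transcode: TranscodeFn;          // afconvert-backed on macOS
}

// Return the original buffer unless every precondition for the preferred
// container holds; any transcoder failure also falls back.
async function maybePreTranscode(src: Uint8Array, opts: PreTranscodeOpts): Promise<Uint8Array> {
  if (opts.preferAudioFileFormat !== "caf") return src; // channel didn't opt in
  if (opts.platform !== "darwin") return src;           // afconvert is macOS-only
  const supportedSources = new Set(["mp3", "wav", "m4a"]); // assumed source support
  if (!supportedSources.has(opts.sourceFormat)) return src;
  try {
    return await opts.transcode(src, "caf");
  } catch {
    return src; // transcoder process failed
  }
}
```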
Also adds a `caff` magic-byte sniff in `src/media/mime.ts` so the auto-reply host-local-media validator (which uses `file-type` and didn't recognize CAF natively) accepts the buffer instead of dropping it as "⚠️ Media failed."
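For reference, a CAF file opens with the four ASCII bytes `caff` followed by a big-endian uint16 file version. A minimal sniff in the spirit of the `src/media/mime.ts` addition (the function name and return shape here are assumptions):

```typescript
// Detect Apple Core Audio Format by its leading magic bytes.
// Header layout: "caff" (4 bytes), file version (uint16 BE), file flags (uint16 BE).
function sniffCaf(buf: Uint8Array): string | undefined {
  if (buf.length < 8) return undefined; // too short to hold the CAF header
  const magic = String.fromCharCode(buf[0], buf[1], buf[2], buf[3]);
  if (magic !== "caff") return undefined;
  return "audio/x-caf";
}
```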
Fixes #72506.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Remediate current-profile CodeQL findings covering file SecretRef id validation and release-workflow job permissions. Includes changelog credit. Thanks @vincentkoc.
Feishu config defaults groupPolicy to 'allowlist'. Inbound group handling read groupAllowFrom and called isFeishuGroupAllowed before resolveFeishuReplyPolicy was reached, so a config that only set channels.feishu.groups.<chat_id>.requireMention=false (with no groupAllowFrom) was rejected with 'group not in groupAllowFrom' before per-group requireMention could take effect.

Treat the explicit presence of a group entry under channels.feishu.groups as the operator's allowlist signal: if groupConfig is defined, skip the empty-allowlist rejection. resolveFeishuReplyPolicy still owns mention gating, and existing groupConfig.enabled=false / groupAllowFrom-driven rejections are preserved.

Adds a regression test that exercises the reporter's exact config shape and confirms inbound text reaches finalize/dispatch.
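The gating change can be sketched as a single predicate. Shapes and names below are hypothetical; the real logic is split between isFeishuGroupAllowed and resolveFeishuReplyPolicy:

```typescript
// Hypothetical shape of a per-group entry under channels.feishu.groups.
interface FeishuGroupConfig { enabled?: boolean; requireMention?: boolean }

function isGroupAllowed(
  chatId: string,
  groupAllowFrom: string[] | undefined,
  groupConfig: FeishuGroupConfig | undefined,
): boolean {
  if (groupConfig?.enabled === false) return false;          // explicit disable still wins
  if (groupAllowFrom?.length) return groupAllowFrom.includes(chatId); // allowlist still honored
  // New behavior: a per-group entry counts as the operator's allowlist
  // signal, so an empty/missing groupAllowFrom no longer rejects it.
  return groupConfig !== undefined;
}
```

Mention gating stays out of this predicate on purpose; it remains downstream where requireMention is resolved.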
The ACP dispatch path calls applyMediaUnderstanding without the agentDir
parameter. This prevents the media understanding pipeline from locating
agent-specific models.json and auth profiles, causing image understanding
to fail silently for non-visual models configured with a separate image
understanding model.
The non-ACP reply path (get-reply.ts) already passes agentDir correctly.
This aligns the ACP path with the same behavior.
Closes #55046
AI-assisted (built with Hermes orchestration).
Address review feedback on PR #72888. triggerInternalHook passes the
same event reference to all handlers sequentially. Mutating evt.context
leaks pluginConfig to subsequent handlers and causes cross-plugin
overwrites. Shallow-copy event and context instead.
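A sketch of the shallow-copy fix (names assumed); the key point is that the copy is made per invocation, so no handler observes another handler's injected config:

```typescript
interface HookEvent { type: string; context: Record<string, unknown> }

// Produce a per-handler view of the event: both the event and its context
// are shallow-copied, so writing pluginConfig never touches the shared object
// that triggerInternalHook passes to every handler in sequence.
function withPluginConfig(evt: HookEvent, pluginConfig: unknown): HookEvent {
  return { ...evt, context: { ...evt.context, pluginConfig } };
}
```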
When plugins register hooks via api.registerHook(), pluginConfig from
openclaw.json was not available in the hook event context. Plugins that
accessed ctx.pluginConfig or event.context.pluginConfig received
undefined, causing silent failures or fallback to defaults.
Changes:
- Add pluginConfig parameter to registerHook() function
- Wrap handler to inject pluginConfig into event.context before invocation
- Pass params.pluginConfig through createApi() call site
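The changes above can be sketched as a wrapper at registration time. Registry layout and names here are assumed, not the actual createApi() internals:

```typescript
type HookHandler = (evt: { type: string; context: Record<string, unknown> }) => void;

// registerHook takes the plugin's config at registration time and wraps the
// handler so each invocation sees pluginConfig in a shallow-copied context.
function makeRegisterHook(hooks: Map<string, HookHandler[]>) {
  return function registerHook(name: string, handler: HookHandler, pluginConfig?: unknown) {
    const wrapped: HookHandler = (evt) =>
      handler({ ...evt, context: { ...evt.context, pluginConfig } });
    const list = hooks.get(name) ?? [];
    list.push(wrapped);
    hooks.set(name, list);
  };
}
```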
Fixes #72880
Closes #72837. The 15s narrative-subagent timeout was empirically too
tight for warm-gateway runs across light, REM, and deep phases —
gpt-5.4-mini latency through OpenAI alone routinely brushes 12s+, so the
first sweep after a restart deterministically times out across all three
phases. 60s gives realistic LLM-call headroom while still capping the
worst case at one minute, preserving the original comment's "don't leave
parent cron running for minutes" constraint.
Test: updates the matching toMatchObject assertion in
dreaming-narrative.test.ts from 15_000 to 60_000.
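In the real code the cap is just a constant asserted by the test; as a generic illustration of the one-minute bound, a timeout helper of this shape (names assumed) enforces it:

```typescript
// Assumed constant name; the test asserts 60_000 via toMatchObject.
const NARRATIVE_SUBAGENT_TIMEOUT_MS = 60_000;

// Race a subagent promise against a timer so the worst case is bounded
// and the parent cron never sits waiting for minutes.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const t = setTimeout(() => reject(new Error(`subagent timed out after ${ms}ms`)), ms);
    p.then(
      (v) => { clearTimeout(t); resolve(v); },
      (e) => { clearTimeout(t); reject(e); },
    );
  });
}
```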