---
summary: "Message flow, sessions, queueing, and reasoning visibility"
read_when:
  - Explaining how inbound messages become replies
  - Clarifying sessions, queueing modes, or streaming behavior
  - Documenting reasoning visibility and usage implications
title: Messages
---

# Messages

This page ties together how OpenClaw handles inbound messages, sessions, queueing, streaming, and reasoning visibility.

## Message flow (high level)

```
Inbound message
  -> routing/bindings -> session key
  -> queue (if a run is active)
  -> agent run (streaming + tools)
  -> outbound replies (channel limits + chunking)
```

Key knobs live in configuration:

- `messages.*` for prefixes, queueing, and group behavior.
- `agents.defaults.*` for block streaming and chunking defaults.
- Channel overrides (`channels.whatsapp.*`, `channels.telegram.*`, etc.) for caps and streaming toggles.

See Configuration for full schema.
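As a sketch, a minimal config combining the three layers might look like this (keys are the ones named above; value shapes are illustrative, so check Configuration for the authoritative schema):

```json5
{
  messages: {
    inbound: { debounceMs: 2000 },              // messages.* knobs
  },
  agents: {
    defaults: { blockStreamingDefault: "off" }, // streaming/chunking defaults
  },
  channels: {
    whatsapp: { blockStreaming: true },         // per-channel override
  },
}
```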

## Inbound dedupe

Channels can redeliver the same message after reconnects. OpenClaw keeps a short-lived cache keyed by channel/account/peer/session/message id so duplicate deliveries do not trigger another agent run.
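A minimal sketch of such a cache follows. The class and method names are assumptions for illustration, not OpenClaw's actual internals; the composite key mirrors the channel/account/peer/session/message-id tuple described above.

```typescript
// Hypothetical short-lived dedupe cache (names assumed, not OpenClaw's API).
class InboundDedupe {
  private seen = new Map<string, number>(); // composite key -> expiry (ms)

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  /** Returns true the first time a message id is seen within the TTL window. */
  shouldProcess(
    channel: string,
    account: string,
    peer: string,
    session: string,
    messageId: string,
  ): boolean {
    const t = this.now();
    // Evict expired entries lazily so the cache stays short-lived.
    for (const [k, exp] of this.seen) if (exp <= t) this.seen.delete(k);
    const key = [channel, account, peer, session, messageId].join(":");
    if (this.seen.has(key)) return false; // duplicate delivery: skip agent run
    this.seen.set(key, t + this.ttlMs);
    return true;
  }
}
```

Keying on the full tuple means the same message id arriving on a different account or session is still treated as new, which matches the per-conversation scoping described above.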

## Inbound debouncing

Rapid consecutive messages from the same sender can be batched into a single agent turn via `messages.inbound`. Debouncing is scoped per channel + conversation and uses the most recent message for reply threading/IDs.

Config (global default + per-channel overrides):

```json5
{
  messages: {
    inbound: {
      debounceMs: 2000,
      byChannel: {
        whatsapp: 5000,
        slack: 1500,
        discord: 1500,
      },
    },
  },
}
```

Notes:

- Debounce applies to text-only messages; media/attachments flush immediately.
- Control commands bypass debouncing so they remain standalone. The exception is channels that explicitly opt in to same-sender DM coalescing (e.g. BlueBubbles `coalesceSameSenderDms`): there, DM commands wait inside the debounce window so a split-send payload can join the same agent turn.
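To illustrate the batching semantics (texts buffer, media flushes immediately, reply threading uses the most recent entry), here is a simplified sketch. All names are assumptions; the idle timer that would call `flush()` after the debounce window, and the control-command handling, are omitted.

```typescript
// Simplified debounce buffer (illustrative only, not OpenClaw's implementation).
interface Inbound {
  text: string;
  id: string;
  hasMedia?: boolean;
}

class Debouncer {
  private pending = new Map<string, Inbound[]>();

  /** Buffer a message; returns a batch only when it must flush immediately. */
  push(scope: string, msg: Inbound): Inbound[] | null {
    const batch = this.pending.get(scope) ?? [];
    batch.push(msg);
    if (msg.hasMedia) {
      // Media/attachments flush immediately.
      this.pending.delete(scope);
      return batch;
    }
    this.pending.set(scope, batch);
    return null; // still inside the debounce window
  }

  /** Called when the idle timer fires; threading uses the last entry. */
  flush(scope: string): Inbound[] {
    const batch = this.pending.get(scope) ?? [];
    this.pending.delete(scope);
    return batch;
  }
}
```

The scope string would encode channel + conversation (e.g. the `dm:<chat>:<sender>` key used by BlueBubbles DM coalescing), so different conversations never merge.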

## Sessions and devices

Sessions are owned by the gateway, not by clients.

- Direct chats collapse into the agent main session key.
- Groups/channels get their own session keys.
- The session store and transcripts live on the gateway host.

Multiple devices/channels can map to the same session, but history is not fully synced back to every client. Recommendation: use one primary device for long conversations to avoid divergent context. The Control UI and TUI always show the gateway-backed session transcript, so they are the source of truth.
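The collapsing rule can be sketched as a key-derivation function. The `agent:<id>:...` key format here is an assumption for illustration; only the behavior (direct chats collapse, groups get distinct keys) comes from this page.

```typescript
// Hypothetical session-key derivation; the real key format may differ.
interface Chat {
  kind: "direct" | "group";
  channel: string;
  id: string;
}

function sessionKeyFor(agentId: string, chat: Chat): string {
  return chat.kind === "direct"
    ? `agent:${agentId}:main` // all direct chats share the main session
    : `agent:${agentId}:${chat.channel}:${chat.id}`; // groups get their own keys
}
```

Because every direct chat maps to the same key, two devices DMing the same agent land in one session, which is why a single primary device avoids divergent context.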

Details: Session management.

## Inbound bodies and history context

OpenClaw separates the prompt body from the command body:

- `Body`: prompt text sent to the agent. This may include channel envelopes and optional history wrappers.
- `CommandBody`: raw user text for directive/command parsing.
- `RawBody`: legacy alias for `CommandBody` (kept for compatibility).

When a channel supplies history, it uses a shared wrapper:

- `[Chat messages since your last reply - for context]`
- `[Current message - respond to this]`

For non-direct chats (groups/channels/rooms), the current message body is prefixed with the sender label (same style used for history entries). This keeps real-time and queued/history messages consistent in the agent prompt.

History buffers are pending-only: they include group messages that did not trigger a run (for example, mention-gated messages) and exclude messages already in the session transcript.

Directive stripping only applies to the current message section so history remains intact. Channels that wrap history should set `CommandBody` (or `RawBody`) to the original message text and keep `Body` as the combined prompt. History buffers are configurable via `messages.groupChat.historyLimit` (global default) and per-channel overrides like `channels.slack.historyLimit` or `channels.telegram.accounts.<id>.historyLimit` (set `0` to disable).
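A simplified sketch of assembling `Body` from pending history plus the current message: the wrapper strings are the ones quoted above, while the helper name and signature are assumptions for illustration.

```typescript
// Illustrative Body assembly (not OpenClaw's actual code).
interface HistoryEntry {
  sender: string;
  text: string;
}

function buildBody(
  history: HistoryEntry[],
  sender: string,
  current: string,
  direct: boolean, // direct chats skip sender labels
): string {
  const parts: string[] = [];
  if (history.length > 0) {
    parts.push("[Chat messages since your last reply - for context]");
    parts.push(...history.map((e) => `${e.sender}: ${e.text}`));
  }
  parts.push("[Current message - respond to this]");
  // Non-direct chats prefix the sender label, matching the history entries.
  parts.push(direct ? current : `${sender}: ${current}`);
  return parts.join("\n");
}
```

`CommandBody` would remain the untouched user text alongside this combined `Body`, so directive parsing never sees the wrapper.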

## Queueing and followups

If a run is already active, inbound messages can be queued, steered into the current run, or collected for a followup turn.

- Configure via `messages.queue` (and `messages.queue.byChannel`).
- Modes: `interrupt`, `steer`, `followup`, `collect`, plus backlog variants.
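A hypothetical shape for this config follows; the `mode` key name is an assumption (this page only names `messages.queue` and its `byChannel` override), so consult Configuration for the actual schema:

```json5
{
  messages: {
    queue: {
      mode: "followup",     // interrupt | steer | followup | collect
      byChannel: {
        discord: "collect", // per-channel override
      },
    },
  },
}
```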

Details: Queueing.

## Streaming, chunking, and batching

Block streaming sends partial replies as the model produces text blocks. Chunking respects channel text limits and avoids splitting fenced code.

Key settings:

- `agents.defaults.blockStreamingDefault` (`on`|`off`, default `off`)
- `agents.defaults.blockStreamingBreak` (`text_end`|`message_end`)
- `agents.defaults.blockStreamingChunk` (`minChars`|`maxChars`|`breakPreference`)
- `agents.defaults.blockStreamingCoalesce` (idle-based batching)
- `agents.defaults.humanDelay` (human-like pause between block replies)
- Channel overrides: `*.blockStreaming` and `*.blockStreamingCoalesce` (non-Telegram channels require explicit `*.blockStreaming: true`)
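The "avoid splitting fenced code" behavior can be sketched as a limit-aware chunker that tracks fence state and only breaks outside a fence. This is an illustration of the idea, not OpenClaw's implementation; the function and variable names are assumptions.

```typescript
// Illustrative chunker: split on line boundaries under maxChars,
// but never inside a ``` fence (illustrative sketch only).
function chunkText(text: string, maxChars: number): string[] {
  const chunks: string[] = [];
  let current = "";
  let inFence = false;
  for (const line of text.split("\n")) {
    const candidate = current === "" ? line : current + "\n" + line;
    if (candidate.length > maxChars && current !== "" && !inFence) {
      // Over the limit and outside any fence: emit the buffer, start fresh.
      chunks.push(current);
      current = line;
    } else {
      current = candidate;
    }
    // Toggle fence state after the split decision, so a closing marker
    // stays attached to the chunk that contains its opening marker.
    if (line.trimStart().startsWith("```")) inFence = !inFence;
  }
  if (current !== "") chunks.push(current);
  return chunks;
}
```

The trade-off is that a fenced block longer than the channel limit produces an oversized chunk; a real implementation would need a fallback (e.g. re-fencing each piece) for that case.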

Details: Streaming + chunking.

## Reasoning visibility and tokens

OpenClaw can expose or hide model reasoning:

- `/reasoning on|off|stream` controls visibility.
- Reasoning content still counts toward token usage when produced by the model.
- Telegram supports streaming reasoning into the draft bubble.

Details: Thinking + reasoning directives and Token use.

## Prefixes, threading, and replies

Outbound message formatting is centralized in `messages`:

- `messages.responsePrefix`, `channels.<channel>.responsePrefix`, and `channels.<channel>.accounts.<id>.responsePrefix` (outbound prefix cascade), plus `channels.whatsapp.messagePrefix` (WhatsApp inbound prefix)
- Reply threading via `replyToMode` and per-channel defaults
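As a sketch, the prefix cascade could be configured like this (the keys are the ones listed above; the prefix values and the `work` account id are made-up examples, and "most specific level wins" is my reading of "cascade"):

```json5
{
  messages: {
    responsePrefix: "[openclaw] ",   // global outbound prefix
  },
  channels: {
    whatsapp: {
      responsePrefix: "[wa] ",       // per-channel override
      messagePrefix: "!oc",          // WhatsApp inbound prefix
      accounts: {
        work: { responsePrefix: "" } // per-account override
      },
    },
  },
}
```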

Details: Configuration and channel docs.

## Silent replies

The exact silent token `NO_REPLY` (or `no_reply`) means “do not deliver a user-visible reply”. OpenClaw resolves that behavior by conversation type:

- Direct conversations disallow silence by default and rewrite a bare silent reply to a short visible fallback.
- Groups/channels allow silence by default.
- Internal orchestration allows silence by default.

Defaults live under `agents.defaults.silentReply` and `agents.defaults.silentReplyRewrite`; `surfaces.<id>.silentReply` and `surfaces.<id>.silentReplyRewrite` can override them per surface.
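The per-conversation-type resolution above can be sketched as follows. This is a simplified model with assumed names; in the real system the defaults and the rewrite text come from the `silentReply`/`silentReplyRewrite` settings.

```typescript
// Simplified silent-reply resolution (names assumed, not OpenClaw's API).
type Conversation = "direct" | "group" | "internal";

function resolveSilentReply(
  kind: Conversation,
  reply: string,
  fallback: string, // visible text used when silence is disallowed
): string | null {
  const trimmed = reply.trim();
  const isSilent = trimmed === "NO_REPLY" || trimmed === "no_reply";
  if (!isSilent) return reply;
  // Direct chats disallow silence by default: rewrite to a visible fallback.
  if (kind === "direct") return fallback;
  return null; // groups/channels and internal orchestration stay silent
}
```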

- Streaming — real-time message delivery
- Retry — message delivery retry behavior
- Queue — message processing queue
- Channels — messaging platform integrations