fix(messages): keep group replies tool-only by default

Rewrites the always-on reply handling so group/channel rooms default to message-tool-only visible output, while `messages.groupChat.visibleReplies: "automatic"` preserves legacy auto-posting.

Thanks @scoootscooob.
Author: scoootscooob
Date: 2026-04-27 23:36:43 -07:00
Committed by: GitHub
Parent: e388f289bf
Commit: 3c636208b0
46 changed files with 684 additions and 63 deletions

@@ -113,6 +113,7 @@ Docs: https://docs.openclaw.ai
- Doctor/channels: suppress disabled bundled-plugin blocker warnings when a trusted external plugin owns the configured channel, so Lark/Feishu installs no longer get Feishu repair noise after switching to `openclaw-lark`. Fixes #56794. Thanks @wuji-tech-dev.
- CLI/status: show skipped fast-path memory checks as `not checked` and report active custom memory plugin runtime status from `status --json --all` without requiring built-in `agents.defaults.memorySearch`, so plugins such as memory-lancedb-pro and memory-cms no longer look unavailable when their own runtime is healthy. Fixes #56968. Thanks @Tony-ooo and @aderius.
- Gateway/channels: record and log unexpected clean channel monitor exits so channels that return without throwing no longer appear stopped with no error. Fixes #73099. Thanks @balaji1968-kingler.
- Discord/group chats: keep group/channel replies private by default unless the agent explicitly uses the message tool, so always-on rooms can lurk without leaking automatic final, block, preview, or status-reaction output; `messages.groupChat.visibleReplies: "automatic"` restores legacy auto-posting. (#73046) Thanks @scoootscooob.
- Plugins/package: force nested bundled-plugin runtime dependency installs out of inherited npm dry-run mode during prepack and package smoke checks, so packed installs materialize required plugin modules instead of reporting missing bundled files. Refs #73128. Thanks @Adam-Researchh.
- Discord: skip reaction events before REST channel fetch when notifications are off, guild reactions are disabled, or allowlist mode cannot match without channel overrides, reducing reconnect bursts that caused slow listener warnings. Fixes #73133. Thanks @isaacsummers.
- Channels/Telegram: centralize polling update tracking so accepted offsets remain durable across restarts, same-process handler failures can still retry, and slow offset writes cannot overwrite newer accepted watermarks. Refs #73115. Thanks @vdruts.

@@ -1,4 +1,4 @@
9caccd04afca25d18cfcc4a66bdc30c995f5ec51eaa764c076ce58c9af11a7bf config-baseline.json
8530c8fd54e04a2ab7f6704195f9959311e289ae122ebd8e27af236de435fef9 config-baseline.core.json
4fd357ae137b920586ce5760d461be586f4f9a94e49b73cad1f81110167cd9da config-baseline.json
f874cddd0744be277af58ef14261af7994aba669c642f613be10f92b095998ba config-baseline.core.json
a9f058ee9616e189dab7fc223e1207a49ae52b8490b8028935c9d0a2b16f81b2 config-baseline.channel.json
1f5592bfd141ba1e982ce31763a253c10afb080ab4ea2b6538299b114e29cee1 config-baseline.plugin.json

@@ -216,6 +216,8 @@ Once DMs are working, you can set up your Discord server as a full workspace whe
<Step title="Allow responses without @mention">
By default, your agent only responds in guild channels when @mentioned. For a private server, you probably want it to respond to every message.
In guild channels, normal assistant final replies stay private by default. Visible Discord output must be sent explicitly with the `message` tool, so the agent can lurk by default and only post when it decides a channel reply is useful.
<Tabs>
<Tab title="Ask your agent">
> "Allow my agent to respond on this server without having to be @mentioned"
@@ -237,6 +239,8 @@ Once DMs are working, you can set up your Discord server as a full workspace whe
}
```
To restore legacy automatic final replies for group/channel rooms, set `messages.groupChat.visibleReplies: "automatic"`.
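For example, as a minimal config fragment (merge into your existing settings rather than replacing them):

```json
{
  "messages": {
    "groupChat": {
      "visibleReplies": "automatic"
    }
  }
}
```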
</Tab>
</Tabs>

@@ -16,6 +16,7 @@ Default behavior:
- Groups are restricted (`groupPolicy: "allowlist"`).
- Replies require a mention unless you explicitly disable mention gating.
- Normal final replies in groups/channels are private by default. Visible room output uses the `message` tool.
Translation: allowlisted senders can trigger OpenClaw by mentioning it.
@@ -36,6 +37,25 @@ requireMention? yes -> mentioned? no -> store for context only
otherwise -> reply
```
## Visible replies
For group/channel rooms, OpenClaw defaults to `messages.groupChat.visibleReplies: "message_tool"`.
That means the agent still processes the turn and can update memory/session state, but its normal final answer is not automatically posted back into the room. To speak visibly, the agent uses `message(action=send)`.
This replaces the old pattern of forcing the model to answer `NO_REPLY` for most lurk-mode turns. In tool-only mode, doing nothing visible simply means not calling the message tool.
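Sketched schematically, a visible post in tool-only mode looks like the following (the envelope shape is illustrative; the tool name, `action`, and params match the message tool, but the exact call syntax depends on the agent runtime):

```json5
// Illustrative message-tool call for a visible room reply.
{
  tool: "message",
  action: "send",
  params: {
    // In message_tool_only mode the target defaults to the current
    // source channel; include `target` only to send somewhere else.
    message: "Build finished.",
  },
}
```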
To restore legacy automatic final replies for group/channel rooms:
```json5
{
messages: {
groupChat: {
visibleReplies: "automatic",
},
},
}
```
## Context visibility and allowlists
Two different controls are involved in group safety:

@@ -99,6 +99,52 @@ describe("handleDiscordMessageAction", () => {
);
});
it("falls back to Discord toolContext.currentChannelId for sends", async () => {
await handleDiscordMessageAction({
action: "send",
params: {
message: "hello",
},
cfg: {
channels: { discord: { token: "tok" } },
} as OpenClawConfig,
toolContext: {
currentChannelProvider: "discord",
currentChannelId: "channel:123",
},
});
expect(handleDiscordActionMock).toHaveBeenCalledWith(
expect.objectContaining({
action: "sendMessage",
to: "channel:123",
content: "hello",
}),
expect.any(Object),
expect.any(Object),
);
});
it("does not use another provider's current target for Discord sends", async () => {
await expect(
handleDiscordMessageAction({
action: "send",
params: {
message: "hello",
},
cfg: {
channels: { discord: { token: "tok" } },
} as OpenClawConfig,
toolContext: {
currentChannelProvider: "telegram",
currentChannelId: "channel:123",
},
}),
).rejects.toThrow(/channel target is required/i);
expect(handleDiscordActionMock).not.toHaveBeenCalled();
});
it("does not use another provider's current target for Discord reactions", async () => {
await expect(
handleDiscordMessageAction({

@@ -68,7 +68,13 @@ export async function handleDiscordMessageAction(
const resolveChannelId = () => resolveDiscordChannelId(readTarget());
if (action === "send") {
const to = readStringParam(params, "to", { required: true });
const to =
readStringParam(params, "to") ??
readStringParam(params, "target") ??
readCurrentDiscordTarget(ctx.toolContext);
if (!to) {
throw new Error("Discord channel target is required (use channel:<id>).");
}
const asVoice = readBooleanParam(params, "asVoice") === true;
const rawComponents =
buildDiscordPresentationComponents(normalizeMessagePresentation(params.presentation)) ??

@@ -110,6 +110,8 @@ type DispatchInboundParams = {
summary?: string;
title?: string;
}) => Promise<void> | void;
sourceReplyDeliveryMode?: "automatic" | "message_tool_only";
disableBlockStreaming?: boolean;
suppressDefaultToolProgressMessages?: boolean;
onCompactionStart?: () => Promise<void> | void;
onCompactionEnd?: () => Promise<void> | void;
@@ -217,6 +219,30 @@ async function createBaseContext(
return await createBaseDiscordMessageContext(...args);
}
async function createAutomaticSourceDeliveryContext(
overrides: Parameters<typeof createBaseDiscordMessageContext>[0] = {},
): Promise<Awaited<ReturnType<typeof createBaseDiscordMessageContext>>> {
const cfg = (overrides.cfg ?? {}) as {
messages?: {
groupChat?: Record<string, unknown>;
} & Record<string, unknown>;
} & Record<string, unknown>;
return await createBaseContext({
...overrides,
cfg: {
...cfg,
messages: {
...cfg.messages,
ackReaction: cfg.messages?.ackReaction ?? "👀",
groupChat: {
...cfg.messages?.groupChat,
visibleReplies: "automatic",
},
},
},
});
}
function createDirectMessageContextOverrides(
...args: Parameters<typeof createDiscordDirectMessageContextOverrides>
): ReturnType<typeof createDiscordDirectMessageContextOverrides> {
@@ -314,6 +340,12 @@ function getLastDispatchCtx():
return params?.ctx;
}
function getLastDispatchReplyOptions(): DispatchInboundParams["replyOptions"] | undefined {
const callArgs = dispatchInboundMessage.mock.calls.at(-1) as unknown[] | undefined;
const params = callArgs?.[0] as DispatchInboundParams | undefined;
return params?.replyOptions;
}
async function runProcessDiscordMessage(ctx: unknown): Promise<void> {
await processDiscordMessage(ctx as any);
}
@@ -421,7 +453,7 @@ describe("processDiscordMessage ack reactions", () => {
});
it("sends ack reactions for mention-gated guild messages when mentioned", async () => {
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
accountId: "ops",
shouldRequireMention: true,
effectiveWasMentioned: true,
@@ -443,7 +475,7 @@ describe("processDiscordMessage ack reactions", () => {
});
it("uses preflight-resolved messageChannelId when message.channelId is missing", async () => {
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
message: {
id: "m1",
timestamp: new Date().toISOString(),
@@ -482,7 +514,7 @@ describe("processDiscordMessage ack reactions", () => {
return { queuedFinal: true, counts: { final: 1, tool: 0, block: 0 } };
});
const ctx = await createBaseContext();
const ctx = await createAutomaticSourceDeliveryContext();
await runProcessDiscordMessage(ctx);
@@ -503,7 +535,7 @@ describe("processDiscordMessage ack reactions", () => {
return createNoQueuedDispatchResult();
});
const ctx = await createBaseContext();
const ctx = await createAutomaticSourceDeliveryContext();
await processDiscordMessage(ctx as any);
@@ -525,7 +557,7 @@ describe("processDiscordMessage ack reactions", () => {
return createNoQueuedDispatchResult();
});
const ctx = await createBaseContext();
const ctx = await createAutomaticSourceDeliveryContext();
const runPromise = processDiscordMessage(ctx as any);
await vi.advanceTimersByTimeAsync(30_001);
@@ -547,7 +579,7 @@ describe("processDiscordMessage ack reactions", () => {
return createNoQueuedDispatchResult();
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
cfg: {
messages: {
ackReaction: "👀",
@@ -573,7 +605,7 @@ describe("processDiscordMessage ack reactions", () => {
return createNoQueuedDispatchResult();
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
cfg: {
messages: {
ackReaction: "👀",
@@ -601,7 +633,7 @@ describe("processDiscordMessage ack reactions", () => {
return createNoQueuedDispatchResult();
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
cfg: {
messages: {
ackReaction: "👀",
@@ -630,7 +662,7 @@ describe("processDiscordMessage ack reactions", () => {
throw new Error("aborted");
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
abortSignal: abortController.signal,
cfg: {
messages: {
@@ -651,7 +683,7 @@ describe("processDiscordMessage ack reactions", () => {
});
it("removes the plain ack reaction when status reactions are disabled and removeAckAfterReply is enabled", async () => {
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
cfg: {
messages: {
ackReaction: "👀",
@@ -753,6 +785,85 @@ describe("processDiscordMessage session routing", () => {
});
});
it("marks always-on guild replies as message-tool-only and disables source streaming", async () => {
const ctx = await createBaseContext({
shouldRequireMention: false,
effectiveWasMentioned: false,
discordConfig: { streaming: "partial", blockStreaming: true },
route: BASE_CHANNEL_ROUTE,
});
await processDiscordMessage(ctx as any);
expect(getLastDispatchReplyOptions()).toMatchObject({
sourceReplyDeliveryMode: "message_tool_only",
disableBlockStreaming: true,
});
expect(createDiscordDraftStream).not.toHaveBeenCalled();
});
it("suppresses automatic status reactions for always-on guild replies", async () => {
const ctx = await createBaseContext({
shouldRequireMention: false,
effectiveWasMentioned: false,
ackReactionScope: "all",
cfg: {
messages: {
ackReaction: "👀",
ackReactionScope: "all",
statusReactions: {
timing: { debounceMs: 0 },
},
},
session: { store: "/tmp/openclaw-discord-process-test-sessions.json" },
},
route: BASE_CHANNEL_ROUTE,
});
await processDiscordMessage(ctx as any);
expect(getLastDispatchReplyOptions()?.sourceReplyDeliveryMode).toBe("message_tool_only");
expect(sendMocks.reactMessageDiscord).not.toHaveBeenCalled();
expect(sendMocks.removeReactionDiscord).not.toHaveBeenCalled();
});
it("defaults guild replies to message-tool-only source delivery", async () => {
await processDiscordMessage(
(await createBaseContext({
shouldRequireMention: true,
effectiveWasMentioned: true,
route: BASE_CHANNEL_ROUTE,
})) as any,
);
expect(getLastDispatchReplyOptions()?.sourceReplyDeliveryMode).toBe("message_tool_only");
dispatchInboundMessage.mockClear();
await processDiscordMessage(
(await createBaseContext({
shouldRequireMention: true,
effectiveWasMentioned: true,
cfg: {
messages: {
groupChat: {
visibleReplies: "automatic",
},
},
session: { store: "/tmp/openclaw-discord-process-test-sessions.json" },
},
route: BASE_CHANNEL_ROUTE,
})) as any,
);
expect(getLastDispatchReplyOptions()?.sourceReplyDeliveryMode).toBe("automatic");
dispatchInboundMessage.mockClear();
await processDiscordMessage(
(await createBaseContext({
...createDirectMessageContextOverrides(),
})) as any,
);
expect(getLastDispatchReplyOptions()?.sourceReplyDeliveryMode).toBeUndefined();
});
it("prefers bound session keys and sets MessageThreadId for bound thread messages", async () => {
const threadBindings = createThreadBindingManager({
cfg: {} as import("openclaw/plugin-sdk/config-types").OpenClawConfig,
@@ -830,7 +941,7 @@ describe("processDiscordMessage draft streaming", () => {
return { queuedFinal: true, counts: { final: 1, tool: 0, block: 0 } };
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
discordConfig,
});
@@ -838,7 +949,7 @@ describe("processDiscordMessage draft streaming", () => {
}
async function createBlockModeContext() {
return await createBaseContext({
return await createAutomaticSourceDeliveryContext({
cfg: {
messages: { ackReaction: "👀" },
session: { store: "/tmp/openclaw-discord-process-test-sessions.json" },
@@ -882,7 +993,7 @@ describe("processDiscordMessage draft streaming", () => {
return { queuedFinal: true, counts: { final: 1, tool: 0, block: 0 } };
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
cfg: {
messages: { ackReaction: "👀" },
session: { store: "/tmp/openclaw-discord-process-test-sessions.json" },
@@ -917,7 +1028,7 @@ describe("processDiscordMessage draft streaming", () => {
return { queuedFinal: true, counts: { final: 1, tool: 0, block: 0 } };
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
discordConfig: { streamMode: "partial", maxLinesPerMessage: 5 },
});
@@ -937,7 +1048,7 @@ describe("processDiscordMessage draft streaming", () => {
return { queuedFinal: true, counts: { final: 1, tool: 0, block: 0 } };
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
discordConfig: { streamMode: "partial", maxLinesPerMessage: 5 },
});
@@ -960,7 +1071,7 @@ describe("processDiscordMessage draft streaming", () => {
return { queuedFinal: true, counts: { final: 1, tool: 0, block: 0 } };
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
discordConfig: { streamMode: "partial", maxLinesPerMessage: 5 },
});
@@ -989,7 +1100,9 @@ describe("processDiscordMessage draft streaming", () => {
return { queuedFinal: true, counts: { final: 1, tool: 0, block: 0 } };
});
const ctx = await createBaseContext({ discordConfig: { streamMode: "off" } });
const ctx = await createAutomaticSourceDeliveryContext({
discordConfig: { streamMode: "off" },
});
await processDiscordMessage(ctx as any);
@@ -1030,7 +1143,7 @@ describe("processDiscordMessage draft streaming", () => {
return createNoQueuedDispatchResult();
});
const ctx = await createBaseContext({
const ctx = await createAutomaticSourceDeliveryContext({
discordConfig: { streamMode: "partial" },
});

@@ -206,6 +206,12 @@ export async function processDiscordMessage(
if (boundThreadId && typeof threadBindings.touchThread === "function") {
threadBindings.touchThread({ threadId: boundThreadId });
}
const sourceReplyDeliveryMode = isGuildMessage
? cfg.messages?.groupChat?.visibleReplies === "automatic"
? ("automatic" as const)
: ("message_tool_only" as const)
: undefined;
const sourceRepliesAreToolOnly = sourceReplyDeliveryMode === "message_tool_only";
const ackReaction = resolveAckReaction(cfg, route.agentId, {
channel: "discord",
accountId,
@@ -226,7 +232,7 @@ export async function processDiscordMessage(
shouldBypassMention,
}),
);
const shouldSendAckReaction = shouldAckReaction();
const shouldSendAckReaction = !sourceRepliesAreToolOnly && shouldAckReaction();
const statusReactionsEnabled =
shouldSendAckReaction && cfg.messages?.statusReactions?.enabled !== false;
const feedbackRest = createDiscordRestClient({
@@ -607,7 +613,8 @@ export async function processDiscordMessage(
const accountBlockStreamingEnabled =
resolveChannelStreamingBlockEnabled(discordConfig) ??
cfg.agents?.defaults?.blockStreamingDefault === "on";
const canStreamDraft = discordStreamMode !== "off" && !accountBlockStreamingEnabled;
const canStreamDraft =
!sourceRepliesAreToolOnly && discordStreamMode !== "off" && !accountBlockStreamingEnabled;
const draftReplyToMessageId = () => replyReference.peek();
const deliverChannelId = deliverTarget.startsWith("channel:")
? deliverTarget.slice("channel:".length)
@@ -954,11 +961,13 @@ export async function processDiscordMessage(
...replyOptions,
abortSignal,
skillFilter: channelConfig?.skills,
disableBlockStreaming:
disableBlockStreamingForDraft ??
(typeof resolvedBlockStreamingEnabled === "boolean"
? !resolvedBlockStreamingEnabled
: undefined),
sourceReplyDeliveryMode,
disableBlockStreaming: sourceRepliesAreToolOnly
? true
: (disableBlockStreamingForDraft ??
(typeof resolvedBlockStreamingEnabled === "boolean"
? !resolvedBlockStreamingEnabled
: undefined)),
onPartialReply: draftStream ? (payload) => updateDraftFromPartial(payload.text) : undefined,
onAssistantMessageStart: draftStream
? () => {

@@ -173,6 +173,7 @@ describe("monitorSlackProvider tool results", () => {
includeAckReactionConfig?: boolean;
replyToMode?: "off" | "all" | "first";
threadInheritParent?: boolean;
visibleReplies?: "automatic" | "message_tool";
}) {
const slackChannelConfig: Record<string, unknown> = {
dm: { enabled: true, policy: "open", allowFrom: ["*"] },
@@ -187,8 +188,16 @@ describe("monitorSlackProvider tool results", () => {
responsePrefix: "PFX",
ackReaction: "👀",
ackReactionScope: "group-mentions",
...(params.visibleReplies
? { groupChat: { visibleReplies: params.visibleReplies } }
: {}),
}
: { responsePrefix: "PFX" },
: {
responsePrefix: "PFX",
...(params?.visibleReplies
? { groupChat: { visibleReplies: params.visibleReplies } }
: {}),
},
channels: { slack: slackChannelConfig },
...(params?.bindings ? { bindings: params.bindings } : {}),
};
@@ -488,6 +497,9 @@ describe("monitorSlackProvider tool results", () => {
it("accepts channel messages without mention when channels.slack.requireMention is false", async () => {
slackTestState.config = {
messages: {
groupChat: { visibleReplies: "automatic" },
},
channels: {
slack: {
dm: { enabled: true, policy: "open", allowFrom: ["*"] },
@@ -523,6 +535,7 @@ describe("monitorSlackProvider tool results", () => {
includeAckReactionConfig: true,
groupPolicy: "open",
replyToMode: "off",
visibleReplies: "automatic",
});
await runChannelThreadReplyEvent();

@@ -398,6 +398,7 @@ export function buildRunClaudeCliAgentParams(params: RunClaudeCliAgentParams): R
runId: params.runId,
jobId: params.jobId,
extraSystemPrompt: params.extraSystemPrompt,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
silentReplyPromptMode: params.silentReplyPromptMode,
extraSystemPromptStatic: params.extraSystemPromptStatic,
ownerNumbers: params.ownerNumbers,

@@ -6,6 +6,7 @@ import type { AgentTool } from "@mariozechner/pi-agent-core";
import type { ImageContent } from "@mariozechner/pi-ai";
import { KeyedAsyncQueue } from "openclaw/plugin-sdk/keyed-async-queue";
import { isAcpRuntimeSpawnAvailable } from "../../acp/runtime/availability.js";
import type { SourceReplyDeliveryMode } from "../../auto-reply/get-reply-options.types.js";
import type { ThinkLevel } from "../../auto-reply/thinking.js";
import type { CliBackendConfig } from "../../config/types.js";
import type { OpenClawConfig } from "../../config/types.openclaw.js";
@@ -70,6 +71,7 @@ export function buildSystemPrompt(params: {
config?: OpenClawConfig;
defaultThinkLevel?: ThinkLevel;
extraSystemPrompt?: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
silentReplyPromptMode?: SilentReplyPromptMode;
ownerNumbers?: string[];
heartbeatPrompt?: string;
@@ -109,6 +111,7 @@ export function buildSystemPrompt(params: {
workspaceDir: params.workspaceDir,
defaultThinkLevel: params.defaultThinkLevel,
extraSystemPrompt: params.extraSystemPrompt,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
silentReplyPromptMode: params.silentReplyPromptMode,
ownerNumbers: params.ownerNumbers,
ownerDisplay: ownerDisplay.ownerDisplay,

@@ -302,6 +302,7 @@ export async function prepareCliRunContext(
config: params.config,
defaultThinkLevel: params.thinkLevel,
extraSystemPrompt,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
silentReplyPromptMode: params.silentReplyPromptMode,
ownerNumbers: params.ownerNumbers,
heartbeatPrompt,

@@ -1,4 +1,5 @@
import type { ImageContent } from "@mariozechner/pi-ai";
import type { SourceReplyDeliveryMode } from "../../auto-reply/get-reply-options.types.js";
import type { ReplyOperation } from "../../auto-reply/reply/reply-run-registry.js";
import type { ThinkLevel } from "../../auto-reply/thinking.js";
import type { CliSessionBinding } from "../../config/sessions.js";
@@ -28,6 +29,7 @@ export type RunCliAgentParams = {
runId: string;
jobId?: string;
extraSystemPrompt?: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
silentReplyPromptMode?: SilentReplyPromptMode;
/** Static portion of extraSystemPrompt (excluding per-message inbound metadata) for session reuse hashing. */
extraSystemPromptStatic?: string;

@@ -321,6 +321,7 @@ function buildCompactionContextEngineRuntimeContext(params: {
reasoningLevel: params.params.reasoningLevel,
bashElevated: params.params.bashElevated,
extraSystemPrompt: params.params.extraSystemPrompt,
sourceReplyDeliveryMode: params.params.sourceReplyDeliveryMode,
ownerNumbers: params.params.ownerNumbers,
}),
tokenBudget: params.contextTokenBudget,

@@ -778,6 +778,7 @@ export async function compactEmbeddedPiSessionDirect(
sourcePath: openClawReferences.sourcePath ?? undefined,
ttsHint,
promptMode,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
acpEnabled: isAcpRuntimeSpawnAvailable({
config: params.config,
sandboxed: sandboxInfo?.enabled === true,

@@ -1,3 +1,4 @@
import type { SourceReplyDeliveryMode } from "../../auto-reply/get-reply-options.types.js";
import type { ReasoningLevel, ThinkLevel } from "../../auto-reply/thinking.js";
import type { OpenClawConfig } from "../../config/types.openclaw.js";
import type { ContextEngine, ContextEngineRuntimeContext } from "../../context-engine/types.js";
@@ -66,6 +67,7 @@ export type CompactEmbeddedPiSessionParams = {
lane?: string;
enqueue?: CommandQueueEnqueueFn;
extraSystemPrompt?: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
ownerNumbers?: string[];
abortSignal?: AbortSignal;
/** Allow runtime plugins for this compaction to late-bind the gateway subagent. */

@@ -1,3 +1,4 @@
import type { SourceReplyDeliveryMode } from "../../auto-reply/get-reply-options.types.js";
import type { ReasoningLevel, ThinkLevel } from "../../auto-reply/thinking.js";
import type { OpenClawConfig } from "../../config/types.openclaw.js";
import type { ExecElevatedDefaults } from "../bash-tools.js";
@@ -24,6 +25,7 @@ export type EmbeddedCompactionRuntimeContext = {
reasoningLevel?: ReasoningLevel;
bashElevated?: ExecElevatedDefaults;
extraSystemPrompt?: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
ownerNumbers?: string[];
};
@@ -89,6 +91,7 @@ export function buildEmbeddedCompactionRuntimeContext(params: {
reasoningLevel?: ReasoningLevel;
bashElevated?: ExecElevatedDefaults;
extraSystemPrompt?: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
ownerNumbers?: string[];
}): EmbeddedCompactionRuntimeContext {
const resolved = resolveEmbeddedCompactionTarget({
@@ -118,6 +121,7 @@ export function buildEmbeddedCompactionRuntimeContext(params: {
reasoningLevel: params.reasoningLevel,
bashElevated: params.bashElevated,
extraSystemPrompt: params.extraSystemPrompt,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
ownerNumbers: params.ownerNumbers,
};
}

@@ -978,6 +978,7 @@ export async function runEmbeddedPiAgent(
onToolResult: params.onToolResult,
onAgentEvent: params.onAgentEvent,
extraSystemPrompt: params.extraSystemPrompt,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
inputProvenance: params.inputProvenance,
streamParams: params.streamParams,
ownerNumbers: params.ownerNumbers,
@@ -1155,6 +1156,7 @@ export async function runEmbeddedPiAgent(
reasoningLevel: params.reasoningLevel,
bashElevated: params.bashElevated,
extraSystemPrompt: params.extraSystemPrompt,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
ownerNumbers: params.ownerNumbers,
}),
...(attempt.promptCache ? { promptCache: attempt.promptCache } : {}),
@@ -1307,6 +1309,7 @@ export async function runEmbeddedPiAgent(
reasoningLevel: params.reasoningLevel,
bashElevated: params.bashElevated,
extraSystemPrompt: params.extraSystemPrompt,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
ownerNumbers: params.ownerNumbers,
}),
...(attempt.promptCache ? { promptCache: attempt.promptCache } : {}),

@@ -1132,6 +1132,7 @@ export async function runEmbeddedAttempt(
workspaceNotes: workspaceNotes?.length ? workspaceNotes : undefined,
reactionGuidance,
promptMode: effectivePromptMode,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
silentReplyPromptMode: params.silentReplyPromptMode,
acpEnabled: isAcpRuntimeSpawnAvailable({
config: params.config,

@@ -1,4 +1,5 @@
import type { ImageContent } from "@mariozechner/pi-ai";
import type { SourceReplyDeliveryMode } from "../../../auto-reply/get-reply-options.types.js";
import type { ReplyPayload } from "../../../auto-reply/reply-payload.js";
import type { ReplyOperation } from "../../../auto-reply/reply/reply-run-registry.js";
import type { ReasoningLevel, ThinkLevel, VerboseLevel } from "../../../auto-reply/thinking.js";
@@ -143,6 +144,7 @@ export type RunEmbeddedPiAgentParams = {
lane?: string;
enqueue?: CommandQueueEnqueueFn;
extraSystemPrompt?: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
silentReplyPromptMode?: SilentReplyPromptMode;
internalEvents?: AgentInternalEvent[];
inputProvenance?: InputProvenance;

@@ -1,5 +1,6 @@
import type { AgentTool } from "@mariozechner/pi-agent-core";
import type { AgentSession } from "@mariozechner/pi-coding-agent";
import type { SourceReplyDeliveryMode } from "../../auto-reply/get-reply-options.types.js";
import type { MemoryCitationsMode } from "../../config/types.memory.js";
import type { ResolvedTimeFormat } from "../date-time.js";
import type { EmbeddedContextFile } from "../pi-embedded-helpers.js";
@@ -32,6 +33,7 @@ export function buildEmbeddedSystemPrompt(params: {
promptMode?: PromptMode;
/** Controls the generic silent-reply section. Channel-aware prompts can set "none". */
silentReplyPromptMode?: SilentReplyPromptMode;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
/** Whether ACP-specific routing guidance should be included. Defaults to true. */
acpEnabled?: boolean;
/** Registered runtime slash/native command names such as `codex`. */
@@ -82,6 +84,7 @@ export function buildEmbeddedSystemPrompt(params: {
reactionGuidance: params.reactionGuidance,
promptMode: params.promptMode,
silentReplyPromptMode: params.silentReplyPromptMode,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
acpEnabled: params.acpEnabled,
nativeCommandNames: params.nativeCommandNames,
nativeCommandGuidanceLines: params.nativeCommandGuidanceLines,

@@ -758,6 +758,26 @@ describe("buildAgentSystemPrompt", () => {
expect(prompt).toContain("`style` can be `primary`, `success`, or `danger`");
});
it("describes message-tool-only source delivery without requiring target", () => {
const prompt = buildAgentSystemPrompt({
workspaceDir: "/tmp/openclaw",
toolNames: ["message"],
sourceReplyDeliveryMode: "message_tool_only",
runtimeInfo: {
channel: "discord",
},
});
expect(prompt).toContain("private by default for this source channel");
expect(prompt).toContain("use `message(action=send)` for visible channel output");
expect(prompt).toContain("The target defaults to the current source channel");
expect(prompt).toContain("final answers are private in this mode");
expect(prompt).not.toContain(
`respond with ONLY: ${SILENT_REPLY_TOKEN} (avoid duplicate replies)`,
);
expect(prompt).not.toContain("For `action=send`, include `target` and `message`.");
});
it("suppresses plain chat approval commands when inline approval UI is available", () => {
const prompt = buildAgentSystemPrompt({
workspaceDir: "/tmp/openclaw",

@@ -1,4 +1,5 @@
import { createHmac, createHash } from "node:crypto";
import type { SourceReplyDeliveryMode } from "../auto-reply/get-reply-options.types.js";
import type { ReasoningLevel, ThinkLevel } from "../auto-reply/thinking.js";
import { SILENT_REPLY_TOKEN } from "../auto-reply/tokens.js";
import { resolveChannelApprovalCapability } from "../channels/plugins/approvals.js";
@@ -339,10 +340,12 @@ function buildMessagingSection(params: {
inlineButtonsEnabled: boolean;
runtimeChannel?: string;
messageToolHints?: string[];
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
}) {
if (params.isMinimal) {
return [];
}
const messageToolOnly = params.sourceReplyDeliveryMode === "message_tool_only";
const hasSessionsSpawn = params.availableTools.has("sessions_spawn");
const hasSubagents = params.availableTools.has("subagents");
const subagentOrchestrationGuidance = hasSessionsSpawn
@@ -354,7 +357,9 @@ function buildMessagingSection(params: {
: "";
return [
"## Messaging",
"- Reply in current session → automatically routes to the source channel (Signal, Telegram, etc.)",
messageToolOnly
? "- Reply in current session → private by default for this source channel; use `message(action=send)` for visible channel output."
: "- Reply in current session → automatically routes to the source channel (Signal, Telegram, etc.)",
"- Cross-session messaging → use sessions_send(sessionKey, message)",
subagentOrchestrationGuidance,
`- Runtime-generated completion events may ask for a user update. Rewrite those in your normal assistant voice and send the update (do not forward raw internal metadata or default to ${SILENT_REPLY_TOKEN}).`,
@@ -364,9 +369,13 @@ function buildMessagingSection(params: {
"",
"### message tool",
"- Use `message` for proactive sends + channel actions (polls, reactions, etc.).",
"- For `action=send`, include `target` and `message`.",
messageToolOnly
? "- For `action=send`, include `message`. The target defaults to the current source channel; include `target` only when sending somewhere else."
: "- For `action=send`, include `target` and `message`.",
`- If multiple channels are configured, pass \`channel\` (${params.messageChannelOptions}).`,
`- If you use \`message\` (\`action=send\`) to deliver your user-visible reply, respond with ONLY: ${SILENT_REPLY_TOKEN} (avoid duplicate replies).`,
messageToolOnly
? "- If you use `message` (`action=send`) to deliver visible output, do not repeat that visible content in your final answer; final answers are private in this mode."
: `- If you use \`message\` (\`action=send\`) to deliver your user-visible reply, respond with ONLY: ${SILENT_REPLY_TOKEN} (avoid duplicate replies).`,
params.inlineButtonsEnabled
? "- Inline buttons supported. Use `action=send` with `buttons=[[{text,callback_data,style?}]]`; `style` can be `primary`, `success`, or `danger`."
: params.runtimeChannel
@@ -462,6 +471,7 @@ export function buildAgentSystemPrompt(params: {
promptMode?: PromptMode;
/** Controls the generic silent-reply section. Channel-aware prompts can set "none". */
silentReplyPromptMode?: SilentReplyPromptMode;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
/** Whether ACP-specific routing guidance should be included. Defaults to true. */
acpEnabled?: boolean;
/** Registered runtime slash/native command names such as `codex`. */
@@ -905,6 +915,7 @@ export function buildAgentSystemPrompt(params: {
inlineButtonsEnabled,
runtimeChannel,
messageToolHints: params.messageToolHints,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
}),
...buildVoiceSection({ isMinimal, ttsHint: params.ttsHint }),
];

View File

@@ -29,6 +29,8 @@ export type ReplyThreadingPolicy = {
implicitCurrentMessage?: "default" | "allow" | "deny";
};
export type SourceReplyDeliveryMode = "automatic" | "message_tool_only";
export type GetReplyOptions = {
/** Override run id for agent events (defaults to random UUID). */
runId?: string;
@@ -143,6 +145,12 @@ export type GetReplyOptions = {
/** Called when the actual model is selected (including after fallback).
* Use this to get model/provider/thinkLevel for responsePrefix template interpolation. */
onModelSelected?: (ctx: ModelSelectedContext) => void;
/**
* Controls whether normal assistant replies are automatically delivered to
* the source conversation. `message_tool_only` keeps final/block/preview
* output private; visible channel output must come from the message tool.
*/
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
disableBlockStreaming?: boolean;
/** Timeout for block reply delivery (ms). */
blockReplyTimeoutMs?: number;

View File

@@ -1228,6 +1228,7 @@ export async function runAgentTurnWithFallback(params: {
timeoutMs: params.followupRun.run.timeoutMs,
runId,
extraSystemPrompt: params.followupRun.run.extraSystemPrompt,
sourceReplyDeliveryMode: params.followupRun.run.sourceReplyDeliveryMode,
silentReplyPromptMode: params.followupRun.run.silentReplyPromptMode,
extraSystemPromptStatic: params.followupRun.run.extraSystemPromptStatic,
ownerNumbers: params.followupRun.run.ownerNumbers,
@@ -1353,6 +1354,7 @@ export async function runAgentTurnWithFallback(params: {
prompt: params.commandBody,
transcriptPrompt: params.transcriptCommandBody,
extraSystemPrompt: params.followupRun.run.extraSystemPrompt,
sourceReplyDeliveryMode: params.followupRun.run.sourceReplyDeliveryMode,
silentReplyPromptMode: params.followupRun.run.silentReplyPromptMode,
toolResultFormat: (() => {
const channel = resolveMessageChannel(

View File

@@ -73,6 +73,7 @@ export function buildEmbeddedRunBaseParams(params: {
silentExpected: params.run.silentExpected,
allowEmptyAssistantReplyAsSilent: params.run.allowEmptyAssistantReplyAsSilent,
silentReplyPromptMode: params.run.silentReplyPromptMode,
sourceReplyDeliveryMode: params.run.sourceReplyDeliveryMode,
provider: params.provider,
model: params.model,
...params.authProfile,

View File

@@ -227,6 +227,7 @@ async function runDispatch(params: {
images?: Array<{ data: string; mimeType: string }>;
ctxOverrides?: Record<string, unknown>;
sessionKeyOverride?: string;
sourceReplyDeliveryMode?: "automatic" | "message_tool_only";
}) {
const targetSessionKey = params.sessionKeyOverride ?? sessionKey;
return tryDispatchAcpReply({
@@ -242,6 +243,7 @@ async function runDispatch(params: {
sessionKey: targetSessionKey,
images: params.images,
inboundAudio: false,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
shouldRouteToOriginating: params.shouldRouteToOriginating ?? false,
...(params.shouldRouteToOriginating
? {
@@ -419,6 +421,22 @@ describe("tryDispatchAcpReply", () => {
expect(routeMocks.routeReply).toHaveBeenCalledWith(expect.objectContaining({ mirror: false }));
});
it("adds source delivery guidance to tool-only ACP turns", async () => {
setReadyAcpResolution();
await runDispatch({
bodyForAgent: "reply privately unless you send explicitly",
sourceReplyDeliveryMode: "message_tool_only",
});
expect(managerMocks.runTurn).toHaveBeenCalledTimes(1);
const call = managerMocks.runTurn.mock.calls[0]?.[0] as { text?: string } | undefined;
expect(call?.text).toContain("Source channel delivery is private by default");
expect(call?.text).toContain("message(action=send)");
expect(call?.text).toContain("The target defaults to the current source channel");
expect(call?.text).toContain("reply privately unless you send explicitly");
});
it("edits ACP tool lifecycle updates in place when supported", async () => {
setReadyAcpResolution();
mockToolLifecycleTurn("call-1");

View File

@@ -22,6 +22,7 @@ import {
} from "../../shared/string-coerce.js";
import { resolveStatusTtsSnapshot } from "../../tts/status-config.js";
import { resolveConfiguredTtsMode } from "../../tts/tts-config.js";
import type { SourceReplyDeliveryMode } from "../get-reply-options.types.js";
import type { FinalizedMsgContext } from "../templating.js";
import { createAcpReplyProjector } from "./acp-projector.js";
import {
@@ -113,6 +114,23 @@ function resolveAcpRequestId(ctx: FinalizedMsgContext): string {
return generateSecureUuid();
}
function resolveAcpTurnText(params: {
promptText: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
}): string {
if (params.sourceReplyDeliveryMode !== "message_tool_only") {
return params.promptText;
}
const guidance = prefixSystemMessage(
[
"Source channel delivery is private by default for this turn.",
"Normal ACP final output will not be automatically posted to the source channel.",
"To send visible output, use message(action=send). The target defaults to the current source channel.",
].join(" "),
);
return params.promptText ? `${guidance}\n\n${params.promptText}` : guidance;
}
async function hasBoundConversationForSession(params: {
cfg: OpenClawConfig;
sessionKey: string;
@@ -297,6 +315,7 @@ export async function tryDispatchAcpReply(params: {
sessionTtsAuto?: TtsAutoMode;
ttsChannel?: string;
suppressUserDelivery?: boolean;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
shouldRouteToOriginating: boolean;
originatingChannel?: string;
originatingTo?: string;
@@ -455,7 +474,10 @@ export async function tryDispatchAcpReply(params: {
await acpManager.runTurn({
cfg: params.cfg,
sessionKey: canonicalSessionKey,
text: promptText,
text: resolveAcpTurnText({
promptText,
sourceReplyDeliveryMode: params.sourceReplyDeliveryMode,
}),
attachments: attachments.length > 0 ? attachments : undefined,
mode: "prompt",
requestId: resolveAcpRequestId(params.ctx),

View File

@@ -482,6 +482,13 @@ vi.mock("../../tts/tts-config.js", () => ({
const noAbortResult = { handled: false, aborted: false } as const;
const emptyConfig = {} as OpenClawConfig;
const automaticGroupReplyConfig = {
messages: {
groupChat: {
visibleReplies: "automatic",
},
},
} as const satisfies OpenClawConfig;
let dispatchReplyFromConfig: typeof import("./dispatch-from-config.js").dispatchReplyFromConfig;
let resetInboundDedupe: typeof import("./inbound-dedupe.js").resetInboundDedupe;
let tryDispatchAcpReplyHook: typeof import("../../plugin-sdk/acp-runtime.js").tryDispatchAcpReplyHook;
@@ -1238,7 +1245,7 @@ describe("dispatchReplyFromConfig", () => {
it("routes media-only tool results when summaries are suppressed", async () => {
setNoAbort();
mocks.routeReply.mockClear();
const cfg = emptyConfig;
const cfg = automaticGroupReplyConfig;
const dispatcher = createDispatcher();
const ctx = buildTestCtx({
Provider: "slack",
@@ -1303,7 +1310,7 @@ describe("dispatchReplyFromConfig", () => {
it("suppresses group tool summaries but still forwards tool media", async () => {
setNoAbort();
const cfg = emptyConfig;
const cfg = automaticGroupReplyConfig;
const dispatcher = createDispatcher();
const ctx = buildTestCtx({
Provider: "telegram",
@@ -1342,7 +1349,7 @@ describe("dispatchReplyFromConfig", () => {
mediaUrls: undefined,
}),
);
const cfg = emptyConfig;
const cfg = automaticGroupReplyConfig;
const dispatcher = createDispatcher();
const ctx = buildTestCtx({
Provider: "webchat",
@@ -1376,7 +1383,7 @@ describe("dispatchReplyFromConfig", () => {
it("delivers tool summaries in forum topic sessions (group + IsForum)", async () => {
setNoAbort();
const cfg = emptyConfig;
const cfg = automaticGroupReplyConfig;
const dispatcher = createDispatcher();
const ctx = buildTestCtx({
Provider: "telegram",
@@ -1404,7 +1411,7 @@ describe("dispatchReplyFromConfig", () => {
it("delivers deterministic exec approval tool payloads in groups", async () => {
setNoAbort();
const cfg = emptyConfig;
const cfg = automaticGroupReplyConfig;
const dispatcher = createDispatcher();
const ctx = buildTestCtx({
Provider: "telegram",
@@ -1602,6 +1609,7 @@ describe("dispatchReplyFromConfig", () => {
setNoAbort();
const cfg = {
...emptyConfig,
messages: automaticGroupReplyConfig.messages,
agents: {
defaults: {
verboseDefault: "on",
@@ -3639,7 +3647,12 @@ describe("dispatchReplyFromConfig", () => {
return { text: "NO_REPLY" };
};
await dispatchReplyFromConfig({ ctx, cfg: emptyConfig, dispatcher, replyResolver });
await dispatchReplyFromConfig({
ctx,
cfg: automaticGroupReplyConfig,
dispatcher,
replyResolver,
});
expect(dispatcher.sendBlockReply).toHaveBeenCalledTimes(1);
expect(dispatcher.sendBlockReply).toHaveBeenCalledWith({
@@ -3855,6 +3868,14 @@ describe("before_dispatch hook", () => {
describe("sendPolicy deny — suppress delivery, not processing (#53328)", () => {
beforeEach(() => {
resetInboundDedupe();
sessionBindingMocks.resolveByConversation.mockReset();
sessionBindingMocks.resolveByConversation.mockReturnValue(null);
sessionBindingMocks.touch.mockReset();
hookMocks.registry.plugins = [];
hookMocks.runner.runInboundClaimForPluginOutcome.mockResolvedValue({
status: "no_handler",
});
hookMocks.runner.hasHooks.mockImplementation(
(hookName?: string) => hookName === "reply_dispatch",
);
@@ -4181,4 +4202,120 @@ describe("sendPolicy deny — suppress delivery, not processing (#53328)", () =>
// ...but no final reply is delivered.
expect(dispatcher.sendFinalReply).not.toHaveBeenCalled();
});
it("keeps message-tool-only source delivery private while still processing the turn", async () => {
setNoAbort();
sessionStoreMocks.currentEntry = {
sessionId: "s1",
updatedAt: 0,
sendPolicy: "allow",
};
const dispatcher = createDispatcher();
const callbacks = {
partial: vi.fn(),
reasoning: vi.fn(),
assistantStart: vi.fn(),
blockQueued: vi.fn(),
toolStart: vi.fn(),
itemEvent: vi.fn(),
planUpdate: vi.fn(),
toolResult: vi.fn(),
};
const replyResolver = vi.fn(async (_ctx: MsgContext, opts?: GetReplyOptions) => {
await opts?.onPartialReply?.({ text: "draft leak" });
await opts?.onReasoningStream?.({ text: "reasoning leak" });
await opts?.onAssistantMessageStart?.();
await opts?.onToolStart?.({ name: "lookup" });
await opts?.onItemEvent?.({ progressText: "working" });
await opts?.onPlanUpdate?.({ phase: "update", explanation: "planning" });
await opts?.onToolResult?.({ text: "tool output" });
await opts?.onBlockReply?.({ text: "streaming block" });
return { text: "final reply" } satisfies ReplyPayload;
});
const ctx = buildTestCtx({ SessionKey: "test:session" });
const result = await dispatchReplyFromConfig({
ctx,
cfg: emptyConfig,
dispatcher,
replyResolver,
replyOptions: {
sourceReplyDeliveryMode: "message_tool_only",
onPartialReply: callbacks.partial,
onReasoningStream: callbacks.reasoning,
onAssistantMessageStart: callbacks.assistantStart,
onBlockReplyQueued: callbacks.blockQueued,
onToolStart: callbacks.toolStart,
onItemEvent: callbacks.itemEvent,
onPlanUpdate: callbacks.planUpdate,
onToolResult: callbacks.toolResult,
},
});
expect(replyResolver).toHaveBeenCalledTimes(1);
expect(result.queuedFinal).toBe(false);
expect(dispatcher.sendFinalReply).not.toHaveBeenCalled();
expect(dispatcher.sendBlockReply).not.toHaveBeenCalled();
expect(dispatcher.sendToolResult).not.toHaveBeenCalled();
for (const callback of Object.values(callbacks)) {
expect(callback).not.toHaveBeenCalled();
}
expect(hookMocks.runner.runReplyDispatch).toHaveBeenCalledWith(
expect.objectContaining({
suppressUserDelivery: true,
sourceReplyDeliveryMode: "message_tool_only",
sendPolicy: "allow",
}),
expect.any(Object),
);
});
it("defaults group/channel turns to message-tool-only source delivery", async () => {
setNoAbort();
const dispatcher = createDispatcher();
const replyResolver = vi.fn(async (_ctx: MsgContext, opts?: GetReplyOptions) => {
expect(opts?.sourceReplyDeliveryMode).toBe("message_tool_only");
return { text: "final reply" } satisfies ReplyPayload;
});
const result = await dispatchReplyFromConfig({
ctx: buildTestCtx({
ChatType: "channel",
SessionKey: "test:discord:channel:C1",
}),
cfg: emptyConfig,
dispatcher,
replyResolver,
});
expect(replyResolver).toHaveBeenCalledTimes(1);
expect(result.queuedFinal).toBe(false);
expect(dispatcher.sendFinalReply).not.toHaveBeenCalled();
});
it("allows config to keep group/channel source delivery automatic", async () => {
setNoAbort();
const dispatcher = createDispatcher();
const replyResolver = vi.fn(async (_ctx: MsgContext, opts?: GetReplyOptions) => {
expect(opts?.sourceReplyDeliveryMode).toBe("automatic");
return { text: "final reply" } satisfies ReplyPayload;
});
const result = await dispatchReplyFromConfig({
ctx: buildTestCtx({
ChatType: "group",
WasMentioned: true,
SessionKey: "test:telegram:group:G1",
}),
cfg: automaticGroupReplyConfig,
dispatcher,
replyResolver,
});
expect(replyResolver).toHaveBeenCalledTimes(1);
expect(result.queuedFinal).toBe(true);
expect(dispatcher.sendFinalReply).toHaveBeenCalledWith(
expect.objectContaining({ text: "final reply" }),
);
});
});

View File

@@ -193,6 +193,23 @@ const resolveRoutedPolicyConversationType = (
return undefined;
};
function resolveSourceReplyDeliveryMode(params: {
cfg: OpenClawConfig;
ctx: FinalizedMsgContext;
requested?: "automatic" | "message_tool_only";
}): "automatic" | "message_tool_only" {
if (params.requested) {
return params.requested;
}
const chatType = normalizeChatType(params.ctx.ChatType);
if (chatType === "group" || chatType === "channel") {
return params.cfg.messages?.groupChat?.visibleReplies === "automatic"
? "automatic"
: "message_tool_only";
}
return "automatic";
}
const resolveSessionStoreLookup = (
ctx: FinalizedMsgContext,
cfg: OpenClawConfig,
@@ -574,10 +591,10 @@ export async function dispatchReplyFromConfig(
? toPluginConversationBinding(pluginOwnedBindingRecord)
: null;
// Resolve sendPolicy early so every outbound path below (plugin-binding
// notices, fast-abort, normal dispatch) honors suppressDelivery. Under
// sendPolicy: "deny" the agent still processes inbound, but no outbound
// reply/notice/indicator is allowed. See #53328.
// Resolve sendPolicy and automatic source-delivery suppression early so every
// outbound path below (plugin-binding notices, fast-abort, normal dispatch)
// honors them. The agent still processes inbound, but automatic
// replies/notices/indicators are blocked; explicit message tool sends remain
// available.
const sendPolicy = resolveSendPolicy({
cfg,
entry: sessionStoreEntry.entry,
@@ -591,7 +608,19 @@ export async function dispatchReplyFromConfig(
undefined,
chatType: sessionStoreEntry.entry?.chatType,
});
const suppressDelivery = sendPolicy === "deny";
const sendPolicyDenied = sendPolicy === "deny";
const sourceReplyDeliveryMode = resolveSourceReplyDeliveryMode({
cfg,
ctx,
requested: params.replyOptions?.sourceReplyDeliveryMode,
});
const suppressAutomaticSourceDelivery = sourceReplyDeliveryMode === "message_tool_only";
const suppressDelivery = sendPolicyDenied || suppressAutomaticSourceDelivery;
const deliverySuppressionReason = sendPolicyDenied
? "sendPolicy: deny"
: suppressAutomaticSourceDelivery
? "sourceReplyDeliveryMode: message_tool_only"
: "";
const suppressHookUserDelivery = suppressAcpChildUserDelivery || suppressDelivery;
let pluginFallbackReason:
@@ -603,11 +632,10 @@ export async function dispatchReplyFromConfig(
touchConversationBindingRecord(pluginOwnedBinding.bindingId);
if (suppressDelivery) {
// Plugin-bound inbound handlers typically emit outbound replies we
// cannot rewind. Under deny, skip the plugin claim entirely and fall
// through to normal (suppressed) agent processing so no delivery leaks
// via the plugin path. See #53328.
// cannot rewind. When automatic delivery is suppressed, skip the plugin
// claim and fall through to normal suppressed agent processing.
logVerbose(
`plugin-bound inbound skipped under sendPolicy: deny (plugin=${pluginOwnedBinding.pluginId} session=${sessionKey ?? "unknown"}); falling through to suppressed agent processing`,
`plugin-bound inbound skipped under ${deliverySuppressionReason} (plugin=${pluginOwnedBinding.pluginId} session=${sessionKey ?? "unknown"}); falling through to suppressed agent processing`,
);
} else {
logVerbose(
@@ -742,7 +770,7 @@ export async function dispatchReplyFromConfig(
}
} else {
logVerbose(
`dispatch-from-config: fast_abort reply suppressed by sendPolicy: deny (session=${sessionKey ?? "unknown"})`,
`dispatch-from-config: fast_abort reply suppressed by ${deliverySuppressionReason} (session=${sessionKey ?? "unknown"})`,
);
}
const counts = dispatcher.getQueuedCounts();
@@ -844,6 +872,7 @@ export async function dispatchReplyFromConfig(
sessionTtsAuto,
ttsChannel: deliveryChannel,
suppressUserDelivery: suppressHookUserDelivery,
sourceReplyDeliveryMode,
shouldRouteToOriginating,
originatingChannel: routeReplyChannel,
originatingTo: routeReplyTo,
@@ -868,11 +897,12 @@ export async function dispatchReplyFromConfig(
}
}
// When sendPolicy is "deny", we still let the agent process the inbound message
// (context, memory, tool calls) but suppress all outbound delivery.
// When automatic source delivery is suppressed, still let the agent process
// the inbound message (context, memory, tool calls) but suppress automatic
// outbound source delivery.
if (suppressDelivery) {
logVerbose(
`Delivery suppressed by send policy for session ${sessionStoreEntry.sessionKey ?? sessionKey ?? "unknown"} — agent will still process the message`,
`Delivery suppressed by ${deliverySuppressionReason} for session ${sessionStoreEntry.sessionKey ?? sessionKey ?? "unknown"} — agent will still process the message`,
);
}
@@ -1044,12 +1074,41 @@ export async function dispatchReplyFromConfig(
ctx,
{
...params.replyOptions,
sourceReplyDeliveryMode,
typingPolicy: typing.typingPolicy,
suppressTyping: typing.suppressTyping,
onPartialReply: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onPartialReply,
onReasoningStream: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onReasoningStream,
onReasoningEnd: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onReasoningEnd,
onAssistantMessageStart: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onAssistantMessageStart,
onBlockReplyQueued: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onBlockReplyQueued,
onToolStart: suppressAutomaticSourceDelivery ? undefined : params.replyOptions?.onToolStart,
onItemEvent: suppressAutomaticSourceDelivery ? undefined : params.replyOptions?.onItemEvent,
onCommandOutput: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onCommandOutput,
onCompactionStart: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onCompactionStart,
onCompactionEnd: suppressAutomaticSourceDelivery
? undefined
: params.replyOptions?.onCompactionEnd,
onToolResult: (payload: ReplyPayload) => {
const run = async () => {
markInboundDedupeReplayUnsafe();
await onToolResultFromReplyOptions?.(payload);
if (!suppressAutomaticSourceDelivery) {
await onToolResultFromReplyOptions?.(payload);
}
if (suppressDelivery) {
return;
}
@@ -1093,7 +1152,9 @@ export async function dispatchReplyFromConfig(
},
onPlanUpdate: async (payload) => {
markInboundDedupeReplayUnsafe();
await onPlanUpdateFromReplyOptions?.(payload);
if (!suppressAutomaticSourceDelivery) {
await onPlanUpdateFromReplyOptions?.(payload);
}
if (payload.phase !== "update" || suppressDefaultToolProgressMessages) {
return;
}
@@ -1101,7 +1162,9 @@ export async function dispatchReplyFromConfig(
},
onApprovalEvent: async (payload) => {
markInboundDedupeReplayUnsafe();
await onApprovalEventFromReplyOptions?.(payload);
if (!suppressAutomaticSourceDelivery) {
await onApprovalEventFromReplyOptions?.(payload);
}
if (payload.phase !== "requested" || suppressDefaultToolProgressMessages) {
return;
}
@@ -1117,7 +1180,9 @@ export async function dispatchReplyFromConfig(
},
onPatchSummary: async (payload) => {
markInboundDedupeReplayUnsafe();
await onPatchSummaryFromReplyOptions?.(payload);
if (!suppressAutomaticSourceDelivery) {
await onPatchSummaryFromReplyOptions?.(payload);
}
if (payload.phase !== "end" || suppressDefaultToolProgressMessages) {
return;
}
@@ -1181,7 +1246,9 @@ export async function dispatchReplyFromConfig(
assistantMessageIndex: payloadMetadata.assistantMessageIndex,
}
: context;
await params.replyOptions?.onBlockReplyQueued?.(visiblePayload, queuedContext);
if (!suppressAutomaticSourceDelivery) {
await params.replyOptions?.onBlockReplyQueued?.(visiblePayload, queuedContext);
}
const ttsPayload = await maybeApplyTtsToReplyPayload({
payload: visiblePayload,
cfg,
@@ -1221,6 +1288,7 @@ export async function dispatchReplyFromConfig(
sessionTtsAuto,
ttsChannel: deliveryChannel,
suppressUserDelivery: suppressHookUserDelivery,
sourceReplyDeliveryMode,
shouldRouteToOriginating,
originatingChannel: routeReplyChannel,
originatingTo: routeReplyTo,

View File
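For reviewers, the resolution rule `resolveSourceReplyDeliveryMode` introduces can be read as a small standalone sketch. This is an illustration only: the type and parameter shapes below are assumed from the hunks in this diff, whereas the real function takes the full `OpenClawConfig` and `FinalizedMsgContext` and normalizes the chat type first.

```typescript
// Sketch of the delivery-mode resolution added in this commit (shapes
// assumed from the diff, not imported from the actual modules).
type SourceReplyDeliveryMode = "automatic" | "message_tool_only";

interface GroupChatConfigSketch {
  // Config-side switch; note the config value is "message_tool", while the
  // runtime mode it maps to is "message_tool_only".
  visibleReplies?: "automatic" | "message_tool";
}

function resolveSourceReplyDeliveryMode(params: {
  groupChat?: GroupChatConfigSketch;
  chatType?: string;
  requested?: SourceReplyDeliveryMode;
}): SourceReplyDeliveryMode {
  // An explicit per-turn request (e.g. from replyOptions) always wins.
  if (params.requested) {
    return params.requested;
  }
  // Group/channel rooms now default to message-tool-only output unless the
  // config opts back into legacy auto-posting.
  if (params.chatType === "group" || params.chatType === "channel") {
    return params.groupChat?.visibleReplies === "automatic"
      ? "automatic"
      : "message_tool_only";
  }
  // DMs and everything else keep automatic source delivery.
  return "automatic";
}
```

The suppression flag derived from this mode then gates every automatic outbound path (final, block, preview, status callbacks) while leaving explicit `message(action=send)` untouched, which is how always-on rooms can lurk without leaking output.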

@@ -1362,6 +1362,30 @@ describe("createFollowupRunner messaging delivery and dedupe", () => {
expect(onBlockReply).toHaveBeenCalledWith(expect.objectContaining({ text: "hello world!" }));
});
it("keeps message-tool-only queued followup finals private", async () => {
const queued = baseQueuedRun("discord");
const { onBlockReply } = await runMessagingCase({
agentResult: { payloads: [{ text: "hello world!" }] },
queued: {
...queued,
originatingChannel: "discord",
originatingTo: "channel:C1",
run: {
...queued.run,
sourceReplyDeliveryMode: "message_tool_only",
},
} as FollowupRun,
});
expect(runEmbeddedPiAgentMock).toHaveBeenCalledWith(
expect.objectContaining({
sourceReplyDeliveryMode: "message_tool_only",
}),
);
expect(routeReplyMock).not.toHaveBeenCalled();
expect(onBlockReply).not.toHaveBeenCalled();
});
it("lets provider followup route hooks force dispatcher delivery", async () => {
resolveProviderFollowupFallbackRouteMock.mockReturnValue({
route: "dispatcher",

View File

@@ -306,6 +306,7 @@ export function createFollowupRunner(params: {
transcriptPrompt: queued.transcriptPrompt,
extraSystemPrompt: run.extraSystemPrompt,
silentReplyPromptMode: run.silentReplyPromptMode,
sourceReplyDeliveryMode: run.sourceReplyDeliveryMode,
ownerNumbers: run.ownerNumbers,
enforceFinalTag: run.enforceFinalTag,
allowEmptyAssistantReplyAsSilent: run.allowEmptyAssistantReplyAsSilent,
@@ -473,6 +474,13 @@ export function createFollowupRunner(params: {
}
}
if (run.sourceReplyDeliveryMode === "message_tool_only") {
logVerbose(
"followup queue: automatic source delivery suppressed by sourceReplyDeliveryMode: message_tool_only",
);
return;
}
await sendFollowupPayloads(finalPayloads, effectiveQueued, {
provider: providerUsed,
modelId: modelUsed,

View File

@@ -310,7 +310,7 @@ export async function runPreparedReply(
let currentSystemSent = systemSent;
const isFirstTurnInSession = isNewSession || !currentSystemSent;
const isGroupChat = sessionCtx.ChatType === "group";
const isGroupChat = sessionCtx.ChatType === "group" || sessionCtx.ChatType === "channel";
const wasMentioned = ctx.WasMentioned === true;
const isHeartbeat = opts?.isHeartbeat === true;
const { typingPolicy, suppressTyping } = resolveRunTypingPolicy({
@@ -343,6 +343,7 @@ export async function runPreparedReply(
const groupChatContext = isGroupChat
? buildGroupChatContext({
sessionCtx,
sourceReplyDeliveryMode: opts?.sourceReplyDeliveryMode,
silentReplyPolicy: silentReplySettings.policy,
silentReplyRewrite: silentReplySettings.rewrite,
silentToken: SILENT_REPLY_TOKEN,
@@ -400,7 +401,9 @@ export async function runPreparedReply(
}),
].filter(Boolean);
const silentReplyPromptMode: SilentReplyPromptMode =
directChatContext || groupChatContext ? "none" : "generic";
directChatContext || groupChatContext || opts?.sourceReplyDeliveryMode === "message_tool_only"
? "none"
: "generic";
const baseBody = sessionCtx.BodyStripped ?? sessionCtx.Body ?? "";
// Use CommandBody/RawBody for bare reset detection (clean message without structural context).
const rawBodyTrimmed = (ctx.CommandBody ?? ctx.RawBody ?? ctx.Body ?? "").trim();
@@ -854,6 +857,7 @@ export async function runPreparedReply(
ownerNumbers: command.ownerList.length > 0 ? command.ownerList : undefined,
inputProvenance: ctx.InputProvenance ?? sessionCtx.InputProvenance,
extraSystemPrompt: extraSystemPromptParts.join("\n\n") || undefined,
sourceReplyDeliveryMode: opts?.sourceReplyDeliveryMode,
silentReplyPromptMode,
extraSystemPromptStatic: extraSystemPromptStaticParts.join("\n\n"),
skipProviderRuntimeHints: useFastReplyRuntime,

View File

@@ -32,6 +32,17 @@ describe("group runtime loading", () => {
);
expect(groupChatContext).toContain("Minimize empty lines and use normal chat conventions");
expect(groupChatContext).toContain('reply with exactly "NO_REPLY"');
const toolOnlyContext = groups.buildGroupChatContext({
sessionCtx: { ChatType: "group", Provider: "discord" },
sourceReplyDeliveryMode: "message_tool_only",
silentReplyPolicy: "allow",
silentToken: "NO_REPLY",
});
expect(toolOnlyContext).toContain("Normal final replies are private");
expect(toolOnlyContext).toContain("message tool with action=send");
expect(toolOnlyContext).toContain("Be a good group participant");
expect(toolOnlyContext).toContain("do not call message(action=send)");
expect(toolOnlyContext).not.toContain('reply with exactly "NO_REPLY"');
expect(
groups.buildGroupIntro({
cfg: {} as OpenClawConfig,

View File

@@ -7,6 +7,7 @@ import {
normalizeOptionalString,
} from "../../shared/string-coerce.js";
import { isInternalMessageChannel } from "../../utils/message-channel.js";
import type { SourceReplyDeliveryMode } from "../get-reply-options.types.js";
import { normalizeGroupActivation } from "../group-activation.js";
import type { TemplateContext } from "../templating.js";
import { extractExplicitGroupId } from "./group-id.js";
@@ -219,17 +220,25 @@ function resolveProviderLabel(rawProvider: string | undefined): string {
export function buildGroupChatContext(params: {
sessionCtx: TemplateContext;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
silentReplyPolicy?: SilentReplyPolicy;
silentReplyRewrite?: boolean;
silentToken?: string;
}): string {
const providerLabel = resolveProviderLabel(params.sessionCtx.Provider);
const messageToolOnly = params.sourceReplyDeliveryMode === "message_tool_only";
const lines: string[] = [];
lines.push(`You are in a ${providerLabel} group chat.`);
lines.push(
"Your replies are automatically sent to this group chat. Do not use the message tool to send to this same group - just reply normally.",
);
if (messageToolOnly) {
lines.push(
"Normal final replies are private and are not automatically sent to this group chat. To post visible output here, use the message tool with action=send; the target defaults to this group chat.",
);
} else {
lines.push(
"Your replies are automatically sent to this group chat. Do not use the message tool to send to this same group - just reply normally.",
);
}
lines.push(
"Be a good group participant: mostly lurk and follow the conversation; reply only when directly addressed or you can add clear value. Emoji reactions are welcome when available.",
);
@@ -237,8 +246,14 @@ export function buildGroupChatContext(params: {
"Write like a human. Avoid Markdown tables. Minimize empty lines and use normal chat conventions, not document-style spacing. Don't type literal \\n sequences; use real line breaks sparingly.",
);
const canUseSilentReply =
!messageToolOnly &&
params.silentToken &&
(params.silentReplyPolicy !== "disallow" || params.silentReplyRewrite === true);
if (messageToolOnly) {
lines.push(
"If no visible group response is needed, do not call message(action=send). Your normal final answer stays private and will not be posted to the group.",
);
}
if (canUseSilentReply) {
if (params.silentReplyPolicy === "allow") {
lines.push(

View File

@@ -5,6 +5,7 @@ import type { SessionEntry } from "../../../config/sessions.js";
import type { OpenClawConfig } from "../../../config/types.openclaw.js";
import type { PromptImageOrderEntry } from "../../../media/prompt-image-order.js";
import type { InputProvenance } from "../../../sessions/input-provenance.js";
import type { SourceReplyDeliveryMode } from "../../get-reply-options.types.js";
import type { OriginatingChannelType } from "../../templating.js";
import type { ElevatedLevel, ReasoningLevel, ThinkLevel, VerboseLevel } from "../directives.js";
@@ -90,6 +91,7 @@ export type FollowupRun = {
ownerNumbers?: string[];
inputProvenance?: InputProvenance;
extraSystemPrompt?: string;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
silentReplyPromptMode?: SilentReplyPromptMode;
extraSystemPromptStatic?: string;
enforceFinalTag?: boolean;

View File

@@ -3,6 +3,7 @@ export type {
GetReplyOptions,
ModelSelectedContext,
ReplyThreadingPolicy,
SourceReplyDeliveryMode,
TypingPolicy,
} from "./get-reply-options.types.js";
export { getReplyPayloadMetadata, setReplyPayloadMetadata } from "./reply-payload.js";

View File

@@ -7214,6 +7214,10 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
exclusiveMinimum: 0,
maximum: 9007199254740991,
},
visibleReplies: {
type: "string",
enum: ["automatic", "message_tool"],
},
},
additionalProperties: false,
},
@@ -18854,6 +18858,13 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
description:
"Maximum number of prior group messages loaded as context per turn for group sessions. Use higher values for richer continuity, or lower values for faster and cheaper responses.",
},
visibleReplies: {
type: "string",
enum: ["automatic", "message_tool"],
title: "Group Visible Replies",
description:
'Controls visible group/channel replies. "message_tool" keeps normal final replies private and requires message(action=send) for room output; "automatic" posts normal replies as before.',
},
},
additionalProperties: false,
title: "Group Chat Rules",
@@ -28050,6 +28061,11 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
help: "Maximum number of prior group messages loaded as context per turn for group sessions. Use higher values for richer continuity, or lower values for faster and cheaper responses.",
tags: ["performance"],
},
"messages.groupChat.visibleReplies": {
label: "Group Visible Replies",
help: 'Controls visible group/channel replies. "message_tool" keeps normal final replies private and requires message(action=send) for room output; "automatic" posts normal replies as before.',
tags: ["advanced"],
},
"messages.queue": {
label: "Inbound Queue",
help: "Inbound message queue strategy used to buffer bursts before processing turns. Tune this for busy channels where sequential processing or batching behavior matters.",

View File

@@ -246,6 +246,7 @@ const TARGET_KEYS = [
"messages.groupChat",
"messages.groupChat.mentionPatterns",
"messages.groupChat.historyLimit",
"messages.groupChat.visibleReplies",
"messages.queue",
"messages.queue.mode",
"messages.queue.byChannel",

View File

@@ -1596,6 +1596,8 @@ export const FIELD_HELP: Record<string, string> = {
"Safe case-insensitive regex patterns used to detect explicit mentions/trigger phrases in group chats. Use precise patterns to reduce false positives in high-volume channels; invalid or unsafe nested-repetition patterns are ignored.",
"messages.groupChat.historyLimit":
"Maximum number of prior group messages loaded as context per turn for group sessions. Use higher values for richer continuity, or lower values for faster and cheaper responses.",
"messages.groupChat.visibleReplies":
'Controls visible group/channel replies. "message_tool" keeps normal final replies private and requires message(action=send) for room output; "automatic" posts normal replies as before.',
"messages.queue":
"Inbound message queue strategy used to buffer bursts before processing turns. Tune this for busy channels where sequential processing or batching behavior matters.",
"messages.queue.mode":

View File

@@ -819,6 +819,7 @@ export const FIELD_LABELS: Record<string, string> = {
"messages.groupChat": "Group Chat Rules",
"messages.groupChat.mentionPatterns": "Group Mention Patterns",
"messages.groupChat.historyLimit": "Group History Limit",
"messages.groupChat.visibleReplies": "Group Visible Replies",
"messages.queue": "Inbound Queue",
"messages.queue.mode": "Queue Mode",
"messages.queue.byChannel": "Queue Mode by Channel",

View File

@@ -4,6 +4,11 @@ import type { TtsConfig } from "./types.tts.js";
export type GroupChatConfig = {
mentionPatterns?: string[];
historyLimit?: number;
/**
* Controls how group/channel turns produce visible room replies.
* Default: "message_tool".
*/
visibleReplies?: "automatic" | "message_tool";
};
export type DmConfig = {

View File

@@ -393,6 +393,7 @@ export const GroupChatSchema = z
.object({
mentionPatterns: z.array(z.string()).optional(),
historyLimit: z.number().int().positive().optional(),
visibleReplies: z.enum(["automatic", "message_tool"]).optional(),
})
.strict()
.optional();
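Taken together, the `GroupChatConfig` type and the zod schema above imply a simple resolution rule: group rooms fall back to `"message_tool"` when the key is absent, while non-group chats keep auto-posting. A minimal sketch, assuming `SourceReplyDeliveryMode` mirrors the config enum; `resolveDeliveryMode` is an illustrative helper, not a function from this repo:

```typescript
type SourceReplyDeliveryMode = "automatic" | "message_tool";

type GroupChatConfig = {
  mentionPatterns?: string[];
  historyLimit?: number;
  visibleReplies?: SourceReplyDeliveryMode;
};

// Group/channel rooms default to tool-only output; direct chats keep
// automatic delivery regardless of the group setting.
function resolveDeliveryMode(
  isGroupRoom: boolean,
  groupChat?: GroupChatConfig,
): SourceReplyDeliveryMode {
  if (!isGroupRoom) return "automatic";
  return groupChat?.visibleReplies ?? "message_tool";
}

console.log(resolveDeliveryMode(true)); // -> message_tool
console.log(resolveDeliveryMode(true, { visibleReplies: "automatic" })); // -> automatic
```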

View File

@@ -90,6 +90,7 @@ export async function tryDispatchAcpReplyHook(
sessionTtsAuto: event.sessionTtsAuto,
ttsChannel: event.ttsChannel,
suppressUserDelivery: event.suppressUserDelivery,
sourceReplyDeliveryMode: event.sourceReplyDeliveryMode,
shouldRouteToOriginating: event.shouldRouteToOriginating,
originatingChannel: event.originatingChannel,
originatingTo: event.originatingTo,

View File

@@ -53,7 +53,11 @@ export type {
ReplyDispatcherWithTypingOptions,
} from "../auto-reply/reply/reply-dispatcher.js";
export { createReplyReferencePlanner } from "../auto-reply/reply/reply-reference.js";
export type { GetReplyOptions, BlockReplyContext } from "../auto-reply/get-reply-options.types.js";
export type {
GetReplyOptions,
BlockReplyContext,
SourceReplyDeliveryMode,
} from "../auto-reply/get-reply-options.types.js";
export type { ReplyPayload } from "./reply-payload.js";
export type { FinalizedMsgContext, MsgContext } from "../auto-reply/templating.js";
export { generateConversationLabel } from "../auto-reply/reply/conversation-label-generator.js";

View File

@@ -1,4 +1,5 @@
import type { AgentMessage } from "@mariozechner/pi-agent-core";
import type { SourceReplyDeliveryMode } from "../auto-reply/get-reply-options.types.js";
import type { ReplyPayload } from "../auto-reply/reply-payload.js";
import type {
ReplyDispatchKind,
@@ -356,6 +357,7 @@ export type PluginHookReplyDispatchEvent = {
sessionTtsAuto?: TtsAutoMode;
ttsChannel?: string;
suppressUserDelivery?: boolean;
sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
shouldRouteToOriginating: boolean;
originatingChannel?: string;
originatingTo?: string;
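Plugins receiving `PluginHookReplyDispatchEvent` can now branch on the new field. An illustrative consumer, with the event shape reduced to the fields shown in this diff; `shouldPostToRoom` is a hypothetical helper, not part of the plugin API:

```typescript
type SourceReplyDeliveryMode = "automatic" | "message_tool";

// Subset of PluginHookReplyDispatchEvent limited to the fields this diff adds
// or touches.
type ReplyDispatchEventSubset = {
  suppressUserDelivery?: boolean;
  sourceReplyDeliveryMode?: SourceReplyDeliveryMode;
  shouldRouteToOriginating: boolean;
};

function shouldPostToRoom(event: ReplyDispatchEventSubset): boolean {
  if (event.suppressUserDelivery) return false;
  // Under "message_tool", normal final replies stay private; only an
  // explicit message(action=send) call produces room output.
  return event.sourceReplyDeliveryMode !== "message_tool";
}

console.log(shouldPostToRoom({ shouldRouteToOriginating: true })); // -> true
console.log(
  shouldPostToRoom({
    shouldRouteToOriginating: true,
    sourceReplyDeliveryMode: "message_tool",
  }),
); // -> false
```

An absent `sourceReplyDeliveryMode` behaves like `"automatic"`, which keeps older events and non-group sessions posting as before.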