fix(embed): set lastBlockReplyText only after emitting block reply

When directive consume() returned null (e.g. a silent NO_REPLY chunk) or
the cleaned payload was empty, we still set lastBlockReplyText, so
message_end skipped its safety send even though no channel delivery had
occurred.

Fixes #77833.

Co-authored-by: Cursor <cursoragent@cursor.com>
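The ordering bug can be sketched as follows. This is a hypothetical, simplified model, not the real module: `consumeDirectives`, `handleTextEndBuggy`/`handleTextEndFixed`, and `handleMessageEnd` are illustrative names, and the NO_REPLY-stripping behavior is assumed for the sake of the example.

```typescript
type State = { lastBlockReplyText?: string };

// Stand-in for directive parsing: strips a NO_REPLY marker and returns null
// when nothing visible remains (assumed behavior, for illustration only).
function consumeDirectives(chunk: string): string | null {
  const cleaned = chunk.replace(/NO_REPLY/g, "").trim();
  return cleaned.length > 0 ? cleaned : null;
}

// Buggy ordering: the marker is set even when nothing is emitted.
function handleTextEndBuggy(state: State, chunk: string, emit: (t: string) => void): void {
  state.lastBlockReplyText = chunk; // set unconditionally (the bug)
  const cleaned = consumeDirectives(chunk);
  if (cleaned === null) return; // guard fires, but the marker is already set
  emit(cleaned);
}

// Fixed ordering: the marker is set only once a reply will actually go out.
function handleTextEndFixed(state: State, chunk: string, emit: (t: string) => void): void {
  const cleaned = consumeDirectives(chunk);
  if (cleaned === null) return;
  state.lastBlockReplyText = chunk;
  emit(cleaned);
}

// message_end safety delivery: resend the final text unless a block reply
// already went out during text_end.
function handleMessageEnd(state: State, finalText: string, emit: (t: string) => void): void {
  if (state.lastBlockReplyText !== undefined) return;
  emit(finalText);
}
```

With the buggy ordering, a silent NO_REPLY chunk sets the marker without emitting, so the safety send at message_end is skipped and the reply is dropped; with the fixed ordering, message_end still delivers.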
Author: Neerav Makwana, 2026-05-05 08:47:31 -04:00
Committed by: Ayaan Zaidi
parent 8faf91a2a8
commit cb8c94a8cb
3 changed files with 27 additions and 1 deletion


@@ -69,6 +69,7 @@ Docs: https://docs.openclaw.ai
 ### Fixes
+- Agents/embed: only mark `lastBlockReplyText` after a text_end block reply is actually emitted, so message_end keeps its safety delivery when directive parsing suppresses an earlier chunk (fixes dropped channel replies, including Telegram forum topics where logs showed skipped message_end sends). Fixes #77833.
 - TUI/sessions: bound the session picker to recent rows and use exact lookup-style refreshes for the active session, so dusty stores no longer make TUI hydrate weeks-old transcripts before becoming responsive. Thanks @vincentkoc.
 - Doctor/gateway: report recent supervisor restart handoffs in `openclaw doctor --deep`, using the installed service environment when available so service-managed clean exits are visible in guided diagnostics. Thanks @shakkernerd.
 - Gateway/status: show recent supervisor restart handoffs in `openclaw gateway status --deep`, including JSON details, so clean service-managed restarts are reported as restart handoffs instead of opaque stopped-service diagnostics. Thanks @shakkernerd.


@@ -125,6 +125,31 @@ describe("subscribeEmbeddedPiSession", () => {
     expect(subscription.assistantTexts).toEqual(["Hello block"]);
   });
+  it("message_end block-replies visible text when text_end streamed only silent NO_REPLY chunks", async () => {
+    const onBlockReply = vi.fn();
+    const { emit } = createTextEndBlockReplyHarness({ onBlockReply });
+    emit({ type: "message_start", message: { role: "assistant" } });
+    emitAssistantTextEnd({ emit, content: "NO_REPLY" });
+    await Promise.resolve();
+    expect(onBlockReply).not.toHaveBeenCalled();
+    emit({
+      type: "message_end",
+      message: {
+        role: "assistant",
+        content: [{ type: "text", text: "Final visible reply." }],
+      } as AssistantMessage,
+    });
+    await Promise.resolve();
+    await vi.waitFor(() => {
+      expect(onBlockReply).toHaveBeenCalledTimes(1);
+    });
+    expect(onBlockReply.mock.calls[0]?.[0]?.text).toBe("Final visible reply.");
+  });
+  it("does not duplicate when message_end flushes and a late text_end arrives", async () => {
+    const onBlockReply = vi.fn();
+    const { emit, subscription } = createTextEndBlockReplyHarness({ onBlockReply });


@@ -733,7 +733,6 @@ export function subscribeEmbeddedPiSession(params: SubscribeEmbeddedPiSessionPar
       return;
     }
-    state.lastBlockReplyText = chunk;
     pushAssistantText(chunk);
     if (!params.onBlockReply) {
       return;
@@ -754,6 +753,7 @@ export function subscribeEmbeddedPiSession(params: SubscribeEmbeddedPiSessionPar
     if (!cleanedText && (!mediaUrls || mediaUrls.length === 0) && !audioAsVoice) {
       return;
     }
+    state.lastBlockReplyText = chunk;
     emitBlockReply(
       {
         text: cleanedText,
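The second test above also pins down the dedupe direction of the same marker: once message_end has flushed a reply, a late text_end carrying the same text must not emit a second copy. A minimal sketch of that invariant, with hypothetical, simplified names (this is not the real `subscribeEmbeddedPiSession` code):

```typescript
type State = { lastBlockReplyText?: string };

// Late text_end: skip if this chunk was already delivered (assumed
// equality-based dedupe, for illustration only).
function handleTextEnd(state: State, chunk: string, emit: (t: string) => void): void {
  if (state.lastBlockReplyText === chunk) return; // already delivered
  state.lastBlockReplyText = chunk;
  emit(chunk);
}

// message_end flush: deliver the final text and record it so a late
// text_end with the same content is suppressed.
function handleMessageEnd(state: State, finalText: string, emit: (t: string) => void): void {
  if (state.lastBlockReplyText !== undefined) return;
  state.lastBlockReplyText = finalText;
  emit(finalText);
}
```

Under this sketch, a message_end flush followed by a late text_end with the same text results in exactly one emitted reply.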