* 'main' of https://github.com/openclaw/openclaw:
  feat: add crestodian local planner fallback
  fix(control-ui): clarify chat context details
  fix(telegram): keep polling watchdog active for wedged runner
This commit is contained in:
Vincent Koc
2026-04-25 02:22:02 -07:00
26 changed files with 1076 additions and 68 deletions

Binary file not shown (image, 38 KiB).

@@ -120,7 +120,7 @@ Telegraph style. Root rules only. Read scoped `AGENTS.md` before subtree work.
- Docs change with behavior/API. Use docs list/read_when hints; docs links per `docs/AGENTS.md`.
- Changelog user-facing only; pure test/internal usually no entry.
- Changelog placement: active version `### Changes`/`### Fixes`; at most one contributor mention, prefer `Thanks @user`.
- Changelog placement: active version `### Changes`/`### Fixes`; every added entry must include at least one `Thanks @author` attribution, using credited GitHub username(s).
## Git


@@ -17,7 +17,9 @@ Docs: https://docs.openclaw.ai
### Fixes
- Heartbeat: clamp oversized scheduler delays through the shared safe timer helper, preventing `every` values over Node's timeout cap from becoming a 1 ms crash loop. Fixes #71414. (#71478) Thanks @hclsys.
- Control UI/chat: collapse assistant token/model context details behind an explicit Context disclosure and show full dates in message footers, making historical transcript timing clear without noisy default metadata. (#71337) Thanks @BunsDev.
- Telegram: remove the startup persisted-offset `getUpdates` preflight so polling restarts do not self-conflict before the runner starts. Fixes #69304. (#69779) Thanks @chinar-amrutkar.
- Telegram: keep the polling stall watchdog active even when grammY reports the runner as not running while its task is still pending, so a rebuilt transport cannot leave `getUpdates` silent until a manual gateway restart. Fixes #69064. Thanks @LDLoeb.
- Browser/Playwright: ignore benign already-handled route races during guarded navigation so browser-page tasks no longer fail when Playwright tears down a route mid-flight. (#68708) Thanks @Steady-ai.
- Browser/downloads: seed managed Chrome profiles with OpenClaw download prefs and capture unmanaged click-triggered downloads under the guarded downloads directory, while explicit download waiters still own their target file. (#64558) Thanks @Pearcekieser.
- Browser/Chrome: stop passing redundant `--disable-setuid-sandbox` when `browser.noSandbox` is enabled; `--no-sandbox` remains the effective sandbox opt-out. (#67939) Thanks @sebykrueger.
@@ -50,7 +52,7 @@ Docs: https://docs.openclaw.ai
### Changes
- CLI/Crestodian: add a configless setup and repair helper for bare `openclaw`, typed config operations, agent handoff, audit logging, docs/source discovery, and guarded message-channel rescue.
- CLI/Crestodian: add a configless setup and repair helper for bare `openclaw`, typed config operations, local Claude/Codex planner fallback, agent handoff, audit logging, docs/source discovery, and guarded message-channel rescue.
- Gateway/nodes: add disabled-by-default `gateway.nodes.pairing.autoApproveCidrs` for first-time node pairing from explicit trusted CIDRs, while keeping operator/browser pairing and all upgrade flows manual. Fixes #60800. Thanks @sahilsatralkar.
- Browser: add viewport coordinate clicks for managed and existing-session automation, plus `openclaw browser click-coords` for CLI use. (#54452) Thanks @dluttz.
- Browser: add `browser.actionTimeoutMs` and use a 60s default action budget so healthy long browser waits do not fail at the client transport boundary. (#62589) Thanks @andyylin.
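The heartbeat fix above (routing oversized `every` delays through a shared safe timer helper) can be sketched as follows. Names are hypothetical, not OpenClaw's actual API; the underlying fact is that Node coerces `setTimeout` delays above `2**31 - 1` ms to 1 ms, which is what turned oversized schedules into a 1 ms crash loop.

```typescript
// Hypothetical safe-timer helper sketch. Node silently coerces setTimeout
// delays above 2**31 - 1 ms down to 1 ms, so oversized delays must be
// clamped to the cap before scheduling.
const MAX_TIMEOUT_MS = 2 ** 31 - 1; // Node's timer cap (~24.8 days)

export function clampTimerDelay(delayMs: number): number {
  // Negative or non-finite inputs collapse to 0; oversized inputs to the cap.
  if (!Number.isFinite(delayMs) || delayMs < 0) return 0;
  return Math.min(delayMs, MAX_TIMEOUT_MS);
}

export function setSafeTimeout(fn: () => void, delayMs: number): NodeJS.Timeout {
  return setTimeout(fn, clampTimerDelay(delayMs));
}
```

A clamped timer fires late (at the cap) rather than crashing in a tight loop; callers that need longer horizons can re-arm on expiry.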


@@ -105,7 +105,7 @@ Read-only operations can run immediately:
- show the audit-log path
Persistent operations require conversational approval in interactive mode unless
you pass `--yes` for a one-shot command:
you pass `--yes` for a direct command:
- write config
- run `config set`
@@ -153,14 +153,22 @@ model unset. Install or log into Codex/Claude Code, or expose
## Model-Assisted Planner
Crestodian always starts in deterministic mode. Once a valid OpenClaw model is
configured, local Crestodian can make one bounded model call for fuzzy commands
that the deterministic parser does not understand.
Crestodian always starts in deterministic mode. For fuzzy commands that the
deterministic parser does not understand, local Crestodian can make one bounded
planner turn through OpenClaw's normal runtime paths. It first uses the
configured OpenClaw model. If no configured model is usable yet, it can fall
back to local runtimes already present on the machine:
- Claude Code CLI: `claude-cli/claude-opus-4-7`
- Codex app-server harness: `openai/gpt-5.5` with `embeddedHarness.runtime: "codex"`
- Codex CLI: `codex-cli/gpt-5.5`
The model-assisted planner cannot mutate config directly. It must translate the
request into one of Crestodian's typed commands, then the normal approval and
audit rules apply. Crestodian prints the model it used and the interpreted
command before it runs anything.
command before it runs anything. Configless fallback planner turns are
temporary, tool-disabled where the runtime supports it, and use a temporary
workspace/session.
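The typed-command contract above means the planner's raw output must reduce to one compact JSON object with `reply` and `command` keys before anything runs. A minimal extraction sketch, with hypothetical names, assuming the plan is the first balanced `{...}` object and that braces do not appear inside its string values:

```typescript
// Sketch: pull the first balanced JSON object out of free-form planner text,
// e.g. 'thinking... {"reply":"Aye aye.","command":"restart gateway"}'.
type PlannerPlan = { reply: string; command: string };

export function parsePlanText(text: string): PlannerPlan | null {
  const start = text.indexOf("{");
  if (start === -1) return null;
  let depth = 0;
  for (let i = start; i < text.length; i++) {
    if (text[i] === "{") depth += 1;
    if (text[i] === "}") {
      depth -= 1;
      if (depth === 0) {
        try {
          const obj: unknown = JSON.parse(text.slice(start, i + 1));
          if (
            typeof obj === "object" && obj !== null &&
            typeof (obj as PlannerPlan).reply === "string" &&
            typeof (obj as PlannerPlan).command === "string"
          ) {
            return obj as PlannerPlan;
          }
        } catch {
          // Malformed JSON yields no plan.
        }
        return null;
      }
    }
  }
  return null;
}
```

Anything that fails this shape check is rejected outright, so the model never gets a path to config that bypasses the typed-command approval and audit rules.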
Message-channel rescue mode does not use the model-assisted planner. Remote
rescue stays deterministic so a broken or compromised normal agent path cannot
@@ -275,6 +283,19 @@ Remote rescue is covered by the Docker lane:
pnpm test:docker:crestodian-rescue
```
Configless local planner fallback is covered by:
```bash
pnpm test:docker:crestodian-planner
```
An opt-in live channel command-surface smoke checks `/crestodian status` plus a
persistent approval roundtrip through the rescue handler:
```bash
pnpm test:live:crestodian-rescue-channel
```
Fresh configless setup through Crestodian is covered by:
```bash


@@ -55,6 +55,14 @@ When debugging real providers/models (requires real creds):
Slack DM with `/codex bind`, exercises `/codex fast` and
`/codex permissions`, then verifies a plain reply and an image attachment
route through the native plugin binding instead of ACP.
- Crestodian rescue command smoke: `pnpm test:live:crestodian-rescue-channel`
- Opt-in belt-and-suspenders check for the message-channel rescue command
surface. It exercises `/crestodian status`, queues a persistent model
change, replies `/crestodian yes`, and verifies the audit/config write path.
- Crestodian planner Docker smoke: `pnpm test:docker:crestodian-planner`
- Runs Crestodian in a configless container with a fake Claude CLI on `PATH`
and verifies the fuzzy planner fallback translates into an audited typed
config write.
- Moonshot/Kimi cost smoke: with `MOONSHOT_API_KEY` set, run
`openclaw models list --provider moonshot --json`, then run an isolated
`openclaw agent --local --session-id live-kimi-cost --message 'Reply exactly: KIMI_LIVE_OK' --thinking off --json`


@@ -31,7 +31,6 @@ describe("TelegramPollingLivenessTracker", () => {
expect(
tracker.detectStall({
thresholdMs: POLL_STALL_THRESHOLD_MS,
runnerIsRunning: true,
}),
).toBeNull();
@@ -45,7 +44,6 @@ describe("TelegramPollingLivenessTracker", () => {
now = 120_001;
const stall = tracker.detectStall({
thresholdMs: POLL_STALL_THRESHOLD_MS,
runnerIsRunning: true,
});
expect(stall?.message).toContain("Polling stall detected (no completed getUpdates");
expect(stall?.message).toContain("inFlight=0 outcome=not-started");
@@ -54,7 +52,6 @@ describe("TelegramPollingLivenessTracker", () => {
expect(
tracker.detectStall({
thresholdMs: POLL_STALL_THRESHOLD_MS,
runnerIsRunning: true,
}),
).toBeNull();
});
@@ -69,7 +66,6 @@ describe("TelegramPollingLivenessTracker", () => {
now = 120_001;
const stall = tracker.detectStall({
thresholdMs: POLL_STALL_THRESHOLD_MS,
runnerIsRunning: true,
});
expect(stall?.message).toContain("active getUpdates stuck");


@@ -89,14 +89,7 @@ export class TelegramPollingLivenessTracker {
this.#inFlightGetUpdates = Math.max(0, this.#inFlightGetUpdates - 1);
}
detectStall(params: {
thresholdMs: number;
runnerIsRunning: boolean;
now?: number;
}): TelegramPollingStall | null {
if (!params.runnerIsRunning) {
return null;
}
detectStall(params: { thresholdMs: number; now?: number }): TelegramPollingStall | null {
const now = params.now ?? this.#now();
const activeElapsed =
this.#inFlightGetUpdates > 0 && this.#lastGetUpdatesStartedAt != null
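The effect of dropping `runnerIsRunning` can be seen in a reduced sketch of the stall check (shapes simplified and hypothetical): the decision now rests purely on elapsed time since the last completed `getUpdates`, so a runner that misreports itself as not running while its task is still pending can no longer suppress the watchdog.

```typescript
// Simplified stall check: time-based only, no runner self-report gate.
type StallParams = { thresholdMs: number; now?: number };

export function detectStall(
  lastCompletedGetUpdatesAt: number | null,
  trackingStartedAt: number,
  params: StallParams,
): string | null {
  const now = params.now ?? Date.now();
  // Measure from the last completed poll, or from tracker start if none yet.
  const sinceLast = now - (lastCompletedGetUpdatesAt ?? trackingStartedAt);
  if (sinceLast <= params.thresholdMs) return null;
  return `Polling stall detected (no completed getUpdates for ${sinceLast}ms)`;
}
```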


@@ -387,6 +387,60 @@ describe("TelegramPollingSession", () => {
}
});
it("forces a restart when the runner task is pending but reports not running", async () => {
const abort = new AbortController();
const firstRunnerStop = vi.fn(async () => undefined);
const secondRunnerStop = vi.fn(async () => undefined);
createTelegramBotMock.mockReturnValue(makeBot());
let firstTaskResolve: (() => void) | undefined;
const firstTask = new Promise<void>((resolve) => {
firstTaskResolve = resolve;
});
let cycle = 0;
runMock.mockImplementation(() => {
cycle += 1;
if (cycle === 1) {
return {
task: () => firstTask,
stop: async () => {
await firstRunnerStop();
firstTaskResolve?.();
},
isRunning: () => false,
};
}
return {
task: async () => {
abort.abort();
},
stop: secondRunnerStop,
isRunning: () => false,
};
});
const watchdogHarness = installPollingStallWatchdogHarness();
const log = vi.fn();
const session = createPollingSession({
abortSignal: abort.signal,
log,
});
try {
const runPromise = session.runUntilAbort();
const watchdog = await watchdogHarness.waitForWatchdog();
watchdog?.();
await runPromise;
expect(runMock).toHaveBeenCalledTimes(2);
expect(firstRunnerStop).toHaveBeenCalledTimes(1);
expect(log).toHaveBeenCalledWith(expect.stringContaining("Polling stall detected"));
} finally {
watchdogHarness.restore();
}
});
it("honors a custom polling stall threshold", async () => {
const abort = new AbortController();
const botStop = vi.fn(async () => undefined);


@@ -295,7 +295,6 @@ export class TelegramPollingSession {
const stall = liveness.detectStall({
thresholdMs: this.#stallThresholdMs,
runnerIsRunning: runner.isRunning(),
});
if (stall) {
this.#transportState.markDirty();


@@ -1479,6 +1479,7 @@
"test:docker:cleanup": "bash scripts/test-cleanup-docker.sh",
"test:docker:config-reload": "bash scripts/e2e/config-reload-source-docker.sh",
"test:docker:crestodian-first-run": "bash scripts/e2e/crestodian-first-run-docker.sh",
"test:docker:crestodian-planner": "bash scripts/e2e/crestodian-planner-docker.sh",
"test:docker:crestodian-rescue": "bash scripts/e2e/crestodian-rescue-docker.sh",
"test:docker:cron-mcp-cleanup": "bash scripts/e2e/cron-mcp-cleanup-docker.sh",
"test:docker:doctor-switch": "bash scripts/e2e/doctor-install-switch-docker.sh",
@@ -1543,6 +1544,7 @@
"test:live": "node scripts/test-live.mjs",
"test:live:cache": "bun scripts/check-live-cache.ts",
"test:live:codex-harness": "OPENCLAW_LIVE_CODEX_HARNESS=1 node scripts/test-live.mjs -- src/gateway/gateway-codex-harness.live.test.ts",
"test:live:crestodian-rescue-channel": "OPENCLAW_LIVE_CRESTODIAN_RESCUE_CHANNEL=1 node scripts/test-live.mjs -- src/crestodian/rescue-channel.live.test.ts",
"test:live:gateway-profiles": "node scripts/test-live.mjs -- src/gateway/gateway-models.profiles.live.test.ts",
"test:live:media": "node --import tsx scripts/test-live-media.ts",
"test:live:media:image": "node --import tsx scripts/test-live-media.ts image",


@@ -0,0 +1,122 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { clearConfigCache } from "../../src/config/config.js";
import type { OpenClawConfig } from "../../src/config/types.openclaw.js";
import { runCrestodian } from "../../src/crestodian/crestodian.js";
import type { RuntimeEnv } from "../../src/runtime.js";
function assert(condition: unknown, message: string): asserts condition {
if (!condition) {
throw new Error(message);
}
}
function createRuntime(): { runtime: RuntimeEnv; lines: string[] } {
const lines: string[] = [];
return {
lines,
runtime: {
log: (...args) => lines.push(args.join(" ")),
error: (...args) => lines.push(args.join(" ")),
exit: (code) => {
throw new Error(`exit ${code}`);
},
},
};
}
async function installFakeClaudeCli(fakeBinDir: string, promptLogPath: string): Promise<void> {
await fs.mkdir(fakeBinDir, { recursive: true });
const scriptPath = path.join(fakeBinDir, "claude");
await fs.writeFile(
scriptPath,
[
"#!/usr/bin/env bash",
"set -euo pipefail",
'if [[ "${1:-}" == "--version" ]]; then',
' echo "claude 99.0.0"',
" exit 0",
"fi",
"IFS= read -r prompt_line || true",
`printf '%s\\n' "$prompt_line" > ${JSON.stringify(promptLogPath)}`,
'node -e \'console.log(JSON.stringify({ type: "result", session_id: "fake-claude-session", result: JSON.stringify({ reply: "Fake Claude planner selected a typed model update.", command: "set default model openai/gpt-5.2" }), usage: { input_tokens: 1, output_tokens: 1 } }))\'',
].join("\n"),
{ mode: 0o755 },
);
await fs.chmod(scriptPath, 0o755);
}
async function main() {
const stateDir =
process.env.OPENCLAW_STATE_DIR ??
(await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-crestodian-planner-")));
const configPath = process.env.OPENCLAW_CONFIG_PATH ?? path.join(stateDir, "openclaw.json");
const fakeBinDir = path.join(stateDir, "fake-bin");
const promptLogPath = path.join(stateDir, "fake-claude-prompt.jsonl");
process.env.OPENCLAW_STATE_DIR = stateDir;
process.env.OPENCLAW_CONFIG_PATH = configPath;
process.env.PATH = `${fakeBinDir}:${process.env.PATH ?? ""}`;
await fs.rm(stateDir, { recursive: true, force: true });
await fs.mkdir(stateDir, { recursive: true });
await installFakeClaudeCli(fakeBinDir, promptLogPath);
clearConfigCache();
const runtime = createRuntime();
await runCrestodian(
{
message: "please make the default brain gpt five two",
yes: true,
interactive: false,
},
runtime.runtime,
);
const output = runtime.lines.join("\n");
assert(
output.includes("[crestodian] planner: claude-cli/claude-opus-4-7"),
"configless planner did not use Claude CLI fallback",
);
assert(
output.includes("Fake Claude planner selected a typed model update."),
"planner reply was not surfaced",
);
assert(
output.includes("[crestodian] interpreted: set default model openai/gpt-5.2"),
"planner command was not interpreted",
);
assert(
output.includes("[crestodian] done: config.setDefaultModel"),
"planned model update did not apply",
);
const promptLine = await fs.readFile(promptLogPath, "utf8");
assert(promptLine.includes("User request:"), "fake Claude CLI did not receive planner prompt");
assert(
promptLine.includes("OpenClaw docs:"),
"planner prompt did not include docs reference context",
);
const config = JSON.parse(await fs.readFile(configPath, "utf8")) as OpenClawConfig;
assert(
config.agents?.defaults?.model &&
typeof config.agents.defaults.model === "object" &&
"primary" in config.agents.defaults.model &&
config.agents.defaults.model.primary === "openai/gpt-5.2",
"planned default model was not written",
);
const auditPath = path.join(stateDir, "audit", "crestodian.jsonl");
const audit = (await fs.readFile(auditPath, "utf8")).trim();
assert(
audit.includes('"operation":"config.setDefaultModel"'),
"planned model update audit entry missing",
);
console.log("Crestodian planner Docker E2E passed");
process.exit(0);
}
main().catch((err) => {
console.error(err);
process.exit(1);
});


@@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
source "$ROOT_DIR/scripts/lib/docker-e2e-image.sh"
IMAGE_NAME="$(docker_e2e_resolve_image "openclaw-crestodian-planner-e2e" OPENCLAW_CRESTODIAN_PLANNER_E2E_IMAGE)"
CONTAINER_NAME="openclaw-crestodian-planner-e2e-$$"
RUN_LOG="$(mktemp -t openclaw-crestodian-planner-log.XXXXXX)"
cleanup() {
docker rm -f "$CONTAINER_NAME" >/dev/null 2>&1 || true
rm -f "$RUN_LOG"
}
trap cleanup EXIT
docker_e2e_build_or_reuse "$IMAGE_NAME" crestodian-planner
echo "Running in-container Crestodian planner fallback smoke..."
set +e
docker run --rm \
--name "$CONTAINER_NAME" \
-e "OPENCLAW_STATE_DIR=/tmp/openclaw-state" \
-e "OPENCLAW_CONFIG_PATH=/tmp/openclaw-state/openclaw.json" \
"$IMAGE_NAME" \
bash -lc "set -euo pipefail
node --import tsx scripts/e2e/crestodian-planner-docker-client.ts
" >"$RUN_LOG" 2>&1
status=${PIPESTATUS[0]}
set -e
if [ "$status" -ne 0 ]; then
echo "Docker Crestodian planner fallback smoke failed"
cat "$RUN_LOG"
exit "$status"
fi
cat "$RUN_LOG"
echo "OK"


@@ -229,6 +229,7 @@ const lanes = [
}),
lane("pi-bundle-mcp-tools", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:pi-bundle-mcp-tools"),
lane("crestodian-rescue", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:crestodian-rescue"),
lane("crestodian-planner", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:crestodian-planner"),
serviceLane(
"cron-mcp-cleanup",
"OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:cron-mcp-cleanup",


@@ -19,6 +19,7 @@ const {
runBeforeAgentReplyMock,
executePreparedCliRunMock,
prepareCliRunContextMock,
closeClaudeLiveSessionForContextMock,
} = vi.hoisted(() => ({
hasHooksMock: vi.fn<(hookName: string) => boolean>(() => false),
runBeforeAgentReplyMock: vi.fn<(event: unknown, ctx: unknown) => Promise<BeforeAgentReplyResult>>(
@@ -28,6 +29,7 @@ const {
text: "",
})),
prepareCliRunContextMock: vi.fn(),
closeClaudeLiveSessionForContextMock: vi.fn(),
}));
vi.mock("../plugins/hook-runner-global.js", () => ({
@@ -45,6 +47,10 @@ vi.mock("./cli-runner/execute.runtime.js", () => ({
executePreparedCliRun: executePreparedCliRunMock,
}));
vi.mock("./cli-runner/claude-live-session.js", () => ({
closeClaudeLiveSessionForContext: closeClaudeLiveSessionForContextMock,
}));
const baseRunParams = {
sessionId: "test-session",
sessionKey: "test-session-key",
@@ -86,6 +92,7 @@ beforeEach(() => {
prepareCliRunContextMock.mockImplementation(async (params) =>
makeStubContext(params as typeof baseRunParams & { trigger?: string }),
);
closeClaudeLiveSessionForContextMock.mockReset();
});
afterEach(() => {
@@ -164,4 +171,17 @@ describe("runCliAgent cron before_agent_reply seam", () => {
expect(runBeforeAgentReplyMock).not.toHaveBeenCalled();
expect(executePreparedCliRunMock).toHaveBeenCalled();
});
it("can close temporary CLI live sessions after a run", async () => {
const { runCliAgent } = await import("./cli-runner.js");
executePreparedCliRunMock.mockResolvedValue({ text: "real reply" });
await runCliAgent({ ...baseRunParams, cleanupCliLiveSessionOnRunEnd: true });
expect(executePreparedCliRunMock).toHaveBeenCalledTimes(1);
expect(closeClaudeLiveSessionForContextMock).toHaveBeenCalledTimes(1);
expect(closeClaudeLiveSessionForContextMock).toHaveBeenCalledWith(
await prepareCliRunContextMock.mock.results[0].value,
);
});
});


@@ -101,7 +101,15 @@ export async function runCliAgent(params: RunCliAgentParams): Promise<EmbeddedPi
}
const { prepareCliRunContext } = await import("./cli-runner/prepare.runtime.js");
const context = await prepareCliRunContext(params);
return runPreparedCliAgent(context);
try {
return await runPreparedCliAgent(context);
} finally {
if (params.cleanupCliLiveSessionOnRunEnd === true) {
const { closeClaudeLiveSessionForContext } =
await import("./cli-runner/claude-live-session.js");
closeClaudeLiveSessionForContext(context);
}
}
}
export async function runPreparedCliAgent(

View File

@@ -72,6 +72,15 @@ export function resetClaudeLiveSessionsForTest(): void {
liveSessionCreates.clear();
}
export function closeClaudeLiveSessionForContext(context: PreparedCliRunContext): void {
const key = buildClaudeLiveKey(context);
const session = liveSessions.get(key);
if (session) {
closeLiveSession(session, "restart");
}
liveSessionCreates.delete(key);
}
export function shouldUseClaudeLiveSession(context: PreparedCliRunContext): boolean {
return (
context.backendResolved.id === "claude-cli" &&


@@ -44,6 +44,12 @@ export type RunCliAgentParams = {
senderIsOwner?: boolean;
abortSignal?: AbortSignal;
replyOperation?: ReplyOperation;
/**
* Close any long-lived CLI live session created for this run after the run
* finishes. Intended for temporary helper calls that should not keep process
* handles alive after returning.
*/
cleanupCliLiveSessionOnRunEnd?: boolean;
};
export type CliPreparedBackend = {


@@ -188,6 +188,8 @@ describe("resolveAuthForTarget", () => {
it("redacts resolver internals from unresolved SecretRef diagnostics", async () => {
await withEnvAsync(
{
OPENCLAW_GATEWAY_PASSWORD: undefined,
OPENCLAW_GATEWAY_TOKEN: undefined,
MISSING_GATEWAY_TOKEN: undefined,
},
async () => {


@@ -0,0 +1,66 @@
import { describe, expect, it, vi } from "vitest";
const readConfigFileSnapshotMock = vi.hoisted(() => vi.fn());
const prepareSimpleCompletionModelForAgentMock = vi.hoisted(() => vi.fn());
vi.mock("../config/config.js", () => ({
readConfigFileSnapshot: readConfigFileSnapshotMock,
}));
vi.mock("../agents/simple-completion-runtime.js", () => ({
prepareSimpleCompletionModelForAgent: prepareSimpleCompletionModelForAgentMock,
completeWithPreparedSimpleCompletionModel: vi.fn(),
}));
const { planCrestodianCommandWithConfiguredModel } = await import("./assistant.js");
describe("Crestodian configured-model planner", () => {
it("skips the configured model path when no config file exists", async () => {
readConfigFileSnapshotMock.mockResolvedValue({
path: "/tmp/openclaw.json",
exists: false,
raw: null,
parsed: {},
sourceConfig: {},
resolved: {},
valid: true,
runtimeConfig: {},
config: {},
issues: [],
warnings: [],
});
await expect(
planCrestodianCommandWithConfiguredModel({
input: "please set up my model",
overview: {
config: {
path: "/tmp/openclaw.json",
exists: false,
valid: true,
issues: [],
hash: null,
},
agents: [],
defaultAgentId: "main",
tools: {
codex: { command: "codex", found: false },
claude: { command: "claude", found: false },
apiKeys: { openai: false, anthropic: false },
},
gateway: {
url: "ws://127.0.0.1:18789",
source: "local loopback",
reachable: false,
},
references: {
docsUrl: "https://docs.openclaw.ai",
sourceUrl: "https://github.com/openclaw/openclaw",
},
},
}),
).resolves.toBeNull();
expect(prepareSimpleCompletionModelForAgentMock).not.toHaveBeenCalled();
});
});


@@ -1,57 +1,51 @@
import { describe, expect, it } from "vitest";
import { describe, expect, it, vi } from "vitest";
import type { RunCliAgentParams } from "../agents/cli-runner/types.js";
import type { RunEmbeddedPiAgentParams } from "../agents/pi-embedded-runner/run/params.js";
import type { EmbeddedPiRunResult } from "../agents/pi-embedded.js";
import {
buildCrestodianAssistantUserPrompt,
planCrestodianCommandWithLocalRuntime,
parseCrestodianAssistantPlanText,
} from "./assistant.js";
import type { CrestodianOverview } from "./overview.js";
function overviewFixture(): CrestodianOverview {
function overview(overrides: Partial<CrestodianOverview["tools"]> = {}): CrestodianOverview {
return {
config: {
path: "/tmp/openclaw.json",
exists: true,
valid: true,
exists: false,
valid: false,
issues: [],
hash: "hash",
hash: null,
},
agents: [
{
id: "main",
name: "Main",
isDefault: true,
model: "openai/gpt-5.5",
workspace: "/tmp/main",
},
],
defaultAgentId: "main",
defaultModel: "openai/gpt-5.5",
agents: [],
defaultAgentId: "default",
tools: {
codex: { command: "codex", found: true, version: "codex 1.0.0" },
codex: { command: "codex", found: false },
claude: { command: "claude", found: false },
apiKeys: { openai: true, anthropic: false },
apiKeys: { openai: false, anthropic: false },
...overrides,
},
gateway: {
url: "ws://127.0.0.1:18200",
url: "ws://127.0.0.1:14567",
source: "local loopback",
reachable: false,
},
references: {
docsPath: "/tmp/openclaw/docs",
docsUrl: "https://docs.openclaw.ai",
sourcePath: "/tmp/openclaw",
sourceUrl: "https://github.com/openclaw/openclaw",
},
};
}
describe("parseCrestodianAssistantPlanText", () => {
it("extracts compact planner JSON", () => {
describe("Crestodian assistant", () => {
it("parses the first compact JSON command", () => {
expect(
parseCrestodianAssistantPlanText(
'tiny claw says {"reply":"I can restart it.","command":"restart gateway"}',
'thinking... {"reply":"Aye aye.","command":"restart gateway"}',
),
).toEqual({
reply: "I can restart it.",
reply: "Aye aye.",
command: "restart gateway",
});
});
@@ -60,13 +54,40 @@ describe("parseCrestodianAssistantPlanText", () => {
expect(parseCrestodianAssistantPlanText("I would edit config directly.")).toBeNull();
expect(parseCrestodianAssistantPlanText('{"reply":"missing command"}')).toBeNull();
});
});
describe("buildCrestodianAssistantUserPrompt", () => {
it("includes only operational summary context", () => {
it("includes only operational summary context in planner prompts", () => {
const prompt = buildCrestodianAssistantUserPrompt({
input: "fix my setup",
overview: overviewFixture(),
overview: {
...overview({
codex: { command: "codex", found: true, version: "codex 1.0.0" },
apiKeys: { openai: true, anthropic: false },
}),
config: {
path: "/tmp/openclaw.json",
exists: true,
valid: true,
issues: [],
hash: "hash",
},
agents: [
{
id: "main",
name: "Main",
isDefault: true,
model: "openai/gpt-5.5",
workspace: "/tmp/main",
},
],
defaultAgentId: "main",
defaultModel: "openai/gpt-5.5",
references: {
docsPath: "/tmp/openclaw/docs",
docsUrl: "https://docs.openclaw.ai",
sourcePath: "/tmp/openclaw",
sourceUrl: "https://github.com/openclaw/openclaw",
},
},
});
expect(prompt).toContain("User request: fix my setup");
@@ -76,4 +97,140 @@ describe("buildCrestodianAssistantUserPrompt", () => {
expect(prompt).toContain("OpenClaw docs: /tmp/openclaw/docs");
expect(prompt).toContain("OpenClaw source: /tmp/openclaw");
});
it("uses Claude CLI first for configless planning", async () => {
const runCliAgent = vi.fn(
async (_params: RunCliAgentParams): Promise<EmbeddedPiRunResult> => ({
payloads: [{ text: '{"reply":"Checking the shell.","command":"status"}' }],
meta: { durationMs: 0 },
}),
);
const runEmbeddedPiAgent = vi.fn();
await expect(
planCrestodianCommandWithLocalRuntime({
input: "what is going on",
overview: overview({
claude: { command: "claude", found: true },
codex: { command: "codex", found: true },
}),
deps: {
runCliAgent,
runEmbeddedPiAgent,
createTempDir: async () => "/tmp/crestodian-planner",
removeTempDir: async () => {},
},
}),
).resolves.toMatchObject({
command: "status",
reply: "Checking the shell.",
modelLabel: "claude-cli/claude-opus-4-7",
});
expect(runCliAgent).toHaveBeenCalledTimes(1);
const firstCliCall = runCliAgent.mock.calls[0][0];
expect(firstCliCall).toMatchObject({
provider: "claude-cli",
model: "claude-opus-4-7",
cleanupCliLiveSessionOnRunEnd: true,
});
expect(firstCliCall.config?.agents?.defaults?.cliBackends).toBeUndefined();
expect(firstCliCall.extraSystemPrompt).toContain("Do not use tools, shell commands");
expect(runEmbeddedPiAgent).not.toHaveBeenCalled();
});
it("falls back to Codex app-server when Claude CLI planning fails", async () => {
const runCliAgent = vi.fn(async () => {
throw new Error("claude unavailable");
});
const runEmbeddedPiAgent = vi.fn(
async (_params: RunEmbeddedPiAgentParams): Promise<EmbeddedPiRunResult> => ({
meta: {
durationMs: 0,
finalAssistantVisibleText: '{"reply":"Codex planner online.","command":"gateway status"}',
},
}),
);
await expect(
planCrestodianCommandWithLocalRuntime({
input: "is gateway alive",
overview: overview({
claude: { command: "claude", found: true },
codex: { command: "codex", found: true },
}),
deps: {
runCliAgent,
runEmbeddedPiAgent,
createTempDir: async () => "/tmp/crestodian-planner",
removeTempDir: async () => {},
},
}),
).resolves.toMatchObject({
command: "gateway status",
reply: "Codex planner online.",
modelLabel: "openai/gpt-5.5 via codex",
});
expect(runEmbeddedPiAgent).toHaveBeenCalledTimes(1);
const firstEmbeddedCall = runEmbeddedPiAgent.mock.calls[0][0];
expect(firstEmbeddedCall).toMatchObject({
provider: "openai",
model: "gpt-5.5",
agentHarnessId: "codex",
disableTools: true,
toolsAllow: [],
});
expect(firstEmbeddedCall.config).toMatchObject({
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
model: { primary: "openai/gpt-5.5" },
},
},
plugins: { entries: { codex: { enabled: true } } },
});
});
it("uses Codex CLI if the app-server planner is not usable", async () => {
const runCliAgent = vi.fn(async (params: RunCliAgentParams): Promise<EmbeddedPiRunResult> => {
if (params.provider === "codex-cli") {
return {
payloads: [{ text: '{"reply":"CLI fallback.","command":"models"}' }],
meta: { durationMs: 0 },
};
}
throw new Error("unexpected cli provider");
});
const runEmbeddedPiAgent = vi.fn(async () => {
throw new Error("codex app-server unavailable");
});
await expect(
planCrestodianCommandWithLocalRuntime({
input: "show models",
overview: overview({
codex: { command: "codex", found: true },
}),
deps: {
runCliAgent,
runEmbeddedPiAgent,
createTempDir: async () => "/tmp/crestodian-planner",
removeTempDir: async () => {},
},
}),
).resolves.toMatchObject({
command: "models",
reply: "CLI fallback.",
modelLabel: "codex-cli/gpt-5.5",
});
expect(runEmbeddedPiAgent).toHaveBeenCalledTimes(1);
expect(runCliAgent).toHaveBeenCalledTimes(1);
expect(runCliAgent.mock.calls[0][0]).toMatchObject({
provider: "codex-cli",
model: "gpt-5.5",
cleanupCliLiveSessionOnRunEnd: true,
});
});
});


@@ -1,3 +1,7 @@
import { randomUUID } from "node:crypto";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { resolveDefaultAgentId } from "../agents/agent-scope.js";
import { extractAssistantText } from "../agents/pi-embedded-utils.js";
import {
@@ -5,16 +9,20 @@ import {
prepareSimpleCompletionModelForAgent,
} from "../agents/simple-completion-runtime.js";
import { readConfigFileSnapshot } from "../config/config.js";
import type { OpenClawConfig } from "../config/types.openclaw.js";
import type { CrestodianOverview } from "./overview.js";
const CRESTODIAN_ASSISTANT_TIMEOUT_MS = 10_000;
const CRESTODIAN_ASSISTANT_MAX_TOKENS = 512;
const CRESTODIAN_CLAUDE_CLI_MODEL = "claude-opus-4-7";
const CRESTODIAN_CODEX_MODEL = "gpt-5.5";
const CRESTODIAN_ASSISTANT_SYSTEM_PROMPT = [
"You are Crestodian, OpenClaw's ring-zero setup helper.",
"Turn the user's request into exactly one safe OpenClaw Crestodian command.",
"Return only compact JSON with keys reply and command.",
"Do not invent commands. Do not claim a write was applied.",
"Do not use tools, shell commands, file edits, or network lookups; plan only from the supplied overview.",
"Use the provided OpenClaw docs/source references when the user's request needs behavior, config, or architecture details.",
"If local source is available, prefer inspecting it. Otherwise point to GitHub and strongly recommend reviewing source when docs are not enough.",
"Allowed commands:",
@@ -51,6 +59,30 @@ export type CrestodianAssistantPlanner = (params: {
overview: CrestodianOverview;
}) => Promise<CrestodianAssistantPlan | null>;
type RunCliAgentFn = typeof import("../agents/cli-runner.js").runCliAgent;
type RunEmbeddedPiAgentFn = typeof import("../agents/pi-embedded.js").runEmbeddedPiAgent;
export type CrestodianLocalRuntimePlannerDeps = {
runCliAgent?: RunCliAgentFn;
runEmbeddedPiAgent?: RunEmbeddedPiAgentFn;
createTempDir?: () => Promise<string>;
removeTempDir?: (dir: string) => Promise<void>;
};
type LocalPlannerCandidate = "claude-cli" | "codex-app-server" | "codex-cli";
export async function planCrestodianCommand(params: {
input: string;
overview: CrestodianOverview;
deps?: CrestodianLocalRuntimePlannerDeps;
}): Promise<CrestodianAssistantPlan | null> {
const configured = await planCrestodianCommandWithConfiguredModel(params);
if (configured) {
return configured;
}
return await planCrestodianCommandWithLocalRuntime(params);
}
export async function planCrestodianCommandWithConfiguredModel(params: {
input: string;
overview: CrestodianOverview;
@@ -60,7 +92,7 @@ export async function planCrestodianCommandWithConfiguredModel(params: {
return null;
}
const snapshot = await readConfigFileSnapshot();
if (!snapshot.exists || !snapshot.valid) {
return null;
}
const cfg = snapshot.runtimeConfig ?? snapshot.config;
@@ -113,6 +145,44 @@ export async function planCrestodianCommandWithConfiguredModel(params: {
}
}
export async function planCrestodianCommandWithLocalRuntime(params: {
input: string;
overview: CrestodianOverview;
deps?: CrestodianLocalRuntimePlannerDeps;
}): Promise<CrestodianAssistantPlan | null> {
const input = params.input.trim();
if (!input) {
return null;
}
const candidates = listLocalRuntimePlannerCandidates(params.overview);
if (candidates.length === 0) {
return null;
}
const prompt = buildCrestodianAssistantUserPrompt({
input,
overview: params.overview,
});
for (const candidate of candidates) {
try {
const rawText = await runLocalRuntimePlanner(candidate, {
prompt,
deps: params.deps,
});
const parsed = parseCrestodianAssistantPlanText(rawText);
if (parsed) {
return {
...parsed,
modelLabel: localRuntimePlannerLabel(candidate),
};
}
} catch {
// Try the next locally available runtime. Crestodian must keep booting.
}
}
return null;
}
export function buildCrestodianAssistantUserPrompt(params: {
input: string;
overview: CrestodianOverview;
@@ -193,3 +263,179 @@ function extractFirstJsonObject(text: string): string | null {
}
return text.slice(start, end + 1);
}
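The planner reply is parsed by scanning for the first balanced `{...}` object in the model text. A simplified standalone sketch of that extraction — an assumption, since the real `extractFirstJsonObject` may differ, and this version counts raw braces and would be fooled by braces inside JSON string values:

```typescript
// Simplified brace-depth scan for the first complete JSON object in free text.
function firstJsonObject(text: string): string | null {
  const start = text.indexOf("{");
  if (start < 0) {
    return null;
  }
  let depth = 0;
  for (let i = start; i < text.length; i++) {
    if (text[i] === "{") {
      depth++;
    } else if (text[i] === "}" && --depth === 0) {
      return text.slice(start, i + 1);
    }
  }
  return null; // Unbalanced braces: no complete object found.
}
```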
function listLocalRuntimePlannerCandidates(overview: CrestodianOverview): LocalPlannerCandidate[] {
const candidates: LocalPlannerCandidate[] = [];
if (overview.tools.claude.found) {
candidates.push("claude-cli");
}
if (overview.tools.codex.found) {
candidates.push("codex-app-server", "codex-cli");
}
return candidates;
}
function localRuntimePlannerLabel(candidate: LocalPlannerCandidate): string {
const labels: Record<LocalPlannerCandidate, string> = {
"claude-cli": `claude-cli/${CRESTODIAN_CLAUDE_CLI_MODEL}`,
"codex-app-server": `openai/${CRESTODIAN_CODEX_MODEL} via codex`,
"codex-cli": `codex-cli/${CRESTODIAN_CODEX_MODEL}`,
};
return labels[candidate];
}
async function runLocalRuntimePlanner(
candidate: LocalPlannerCandidate,
params: {
prompt: string;
deps?: CrestodianLocalRuntimePlannerDeps;
},
): Promise<string | undefined> {
const tempDir = await (params.deps?.createTempDir ?? createTempPlannerDir)();
try {
const runId = `crestodian-planner-${randomUUID()}`;
const sessionFile = path.join(tempDir, "session.jsonl");
const sessionId = `${runId}-session`;
const sessionKey = `temp:crestodian-planner:${runId}`;
switch (candidate) {
case "claude-cli": {
const runCli = params.deps?.runCliAgent ?? (await loadRunCliAgent());
const result = await runCli({
sessionId,
sessionKey,
agentId: "crestodian",
trigger: "manual",
sessionFile,
workspaceDir: tempDir,
config: buildCliPlannerConfig(tempDir, `claude-cli/${CRESTODIAN_CLAUDE_CLI_MODEL}`),
prompt: params.prompt,
provider: "claude-cli",
model: CRESTODIAN_CLAUDE_CLI_MODEL,
timeoutMs: CRESTODIAN_ASSISTANT_TIMEOUT_MS,
runId,
extraSystemPrompt: CRESTODIAN_ASSISTANT_SYSTEM_PROMPT,
extraSystemPromptStatic: CRESTODIAN_ASSISTANT_SYSTEM_PROMPT,
messageChannel: "crestodian",
messageProvider: "crestodian",
senderIsOwner: true,
cleanupCliLiveSessionOnRunEnd: true,
});
return extractPlannerResultText(result);
}
case "codex-app-server": {
const runEmbedded = params.deps?.runEmbeddedPiAgent ?? (await loadRunEmbeddedPiAgent());
const result = await runEmbedded({
sessionId,
sessionKey,
agentId: "crestodian",
trigger: "manual",
sessionFile,
workspaceDir: tempDir,
config: buildCodexAppServerPlannerConfig(tempDir),
prompt: params.prompt,
provider: "openai",
model: CRESTODIAN_CODEX_MODEL,
agentHarnessId: "codex",
disableTools: true,
toolsAllow: [],
timeoutMs: CRESTODIAN_ASSISTANT_TIMEOUT_MS,
runId,
extraSystemPrompt: CRESTODIAN_ASSISTANT_SYSTEM_PROMPT,
messageChannel: "crestodian",
messageProvider: "crestodian",
senderIsOwner: true,
cleanupBundleMcpOnRunEnd: true,
});
return extractPlannerResultText(result);
}
case "codex-cli": {
const runCli = params.deps?.runCliAgent ?? (await loadRunCliAgent());
const result = await runCli({
sessionId,
sessionKey,
agentId: "crestodian",
trigger: "manual",
sessionFile,
workspaceDir: tempDir,
config: buildCliPlannerConfig(tempDir, `codex-cli/${CRESTODIAN_CODEX_MODEL}`),
prompt: params.prompt,
provider: "codex-cli",
model: CRESTODIAN_CODEX_MODEL,
timeoutMs: CRESTODIAN_ASSISTANT_TIMEOUT_MS,
runId,
extraSystemPrompt: CRESTODIAN_ASSISTANT_SYSTEM_PROMPT,
extraSystemPromptStatic: CRESTODIAN_ASSISTANT_SYSTEM_PROMPT,
messageChannel: "crestodian",
messageProvider: "crestodian",
senderIsOwner: true,
cleanupCliLiveSessionOnRunEnd: true,
});
return extractPlannerResultText(result);
}
}
return undefined;
} finally {
await (params.deps?.removeTempDir ?? removeTempPlannerDir)(tempDir);
}
}
function buildCliPlannerConfig(workspaceDir: string, modelRef: string): OpenClawConfig {
return {
agents: {
defaults: {
workspace: workspaceDir,
model: { primary: modelRef },
},
},
};
}
function buildCodexAppServerPlannerConfig(workspaceDir: string): OpenClawConfig {
return {
agents: {
defaults: {
workspace: workspaceDir,
embeddedHarness: { runtime: "codex", fallback: "none" },
model: { primary: `openai/${CRESTODIAN_CODEX_MODEL}` },
},
},
plugins: {
entries: {
codex: { enabled: true },
},
},
};
}
async function createTempPlannerDir(): Promise<string> {
return await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-crestodian-planner-"));
}
async function removeTempPlannerDir(dir: string): Promise<void> {
await fs.rm(dir, { recursive: true, force: true });
}
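`runLocalRuntimePlanner` pairs `createTempPlannerDir` with `removeTempPlannerDir` in a `try`/`finally` so the scratch workspace is always cleaned up. The same lifecycle as a reusable helper — `withTempDir` is a hypothetical name, not part of the module above:

```typescript
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";

// Temp-dir lifecycle sketch: the finally block removes the directory
// even when the work callback throws.
async function withTempDir<T>(work: (dir: string) => Promise<T>): Promise<T> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "planner-sketch-"));
  try {
    return await work(dir);
  } finally {
    await fs.rm(dir, { recursive: true, force: true });
  }
}
```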
async function loadRunCliAgent(): Promise<RunCliAgentFn> {
return (await import("../agents/cli-runner.js")).runCliAgent;
}
async function loadRunEmbeddedPiAgent(): Promise<RunEmbeddedPiAgentFn> {
return (await import("../agents/pi-embedded.js")).runEmbeddedPiAgent;
}
function extractPlannerResultText(result: {
payloads?: Array<{ text?: string }>;
meta?: {
finalAssistantVisibleText?: string;
finalAssistantRawText?: string;
};
}): string | undefined {
return (
result.meta?.finalAssistantVisibleText ??
result.meta?.finalAssistantRawText ??
result.payloads
?.map((payload) => payload.text?.trim())
.filter(Boolean)
.join("\n")
);
}
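`extractPlannerResultText` prefers the final visible text, then the raw text, then the joined payload texts. A condensed sketch with hypothetical field names (`visible`/`raw`) in place of the real meta keys:

```typescript
// Condensed sketch of the result-text preference order (hypothetical fields).
type SketchResult = {
  payloads?: Array<{ text?: string }>;
  meta?: { visible?: string; raw?: string };
};

function resultText(result: SketchResult): string | undefined {
  return (
    result.meta?.visible ??
    result.meta?.raw ??
    result.payloads
      ?.map((payload) => payload.text?.trim())
      .filter(Boolean)
      .join("\n")
  );
}
```

One subtlety carried over from the original helper: when `payloads` is present but empty, `join` produces `""`, not `undefined`, so the nullish coalescing stops there.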


@@ -2,7 +2,7 @@ import { stdin as defaultStdin, stdout as defaultStdout } from "node:process";
import readline from "node:readline/promises";
import { defaultRuntime, writeRuntimeJson, type RuntimeEnv } from "../runtime.js";
import {
planCrestodianCommand,
type CrestodianAssistantPlan,
type CrestodianAssistantPlanner,
} from "./assistant.js";
@@ -76,7 +76,7 @@ async function resolveCrestodianOperation(
return operation;
}
const overview = await loadCrestodianOverview();
const planner = opts.planWithAssistant ?? planCrestodianCommand;
const plan = await planner({ input, overview });
if (!plan) {
return operation;


@@ -0,0 +1,108 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
import type { CommandContext } from "../auto-reply/reply/commands-types.js";
import { clearConfigCache } from "../config/config.js";
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { runCrestodianRescueMessage } from "./rescue-message.js";
const originalStateDir = process.env.OPENCLAW_STATE_DIR;
const originalConfigPath = process.env.OPENCLAW_CONFIG_PATH;
function truthy(value: string | undefined): boolean {
return /^(1|true|yes|on)$/i.test(value?.trim() ?? "");
}
const runLive =
truthy(process.env.OPENCLAW_LIVE_TEST) &&
truthy(process.env.OPENCLAW_LIVE_CRESTODIAN_RESCUE_CHANNEL);
const describeLive = runLive ? describe : describe.skip;
function commandContext(channel = process.env.OPENCLAW_LIVE_CRESTODIAN_CHANNEL ?? "whatsapp") {
return {
surface: channel,
channel,
channelId: channel,
ownerList: ["user:owner"],
senderIsOwner: true,
isAuthorizedSender: true,
senderId: "user:owner",
rawBodyNormalized: "/crestodian status",
commandBodyNormalized: "/crestodian status",
from: "user:owner",
to: "account:default",
} satisfies CommandContext;
}
async function runRescue(params: {
commandBody: string;
cfg: OpenClawConfig;
ctx?: CommandContext;
}) {
const ctx = params.ctx ?? commandContext();
return await runCrestodianRescueMessage({
cfg: params.cfg,
command: { ...ctx, commandBodyNormalized: params.commandBody },
commandBody: params.commandBody,
isGroup: false,
});
}
describeLive("Crestodian live rescue channel smoke", () => {
afterEach(() => {
clearConfigCache();
if (originalStateDir === undefined) {
delete process.env.OPENCLAW_STATE_DIR;
} else {
process.env.OPENCLAW_STATE_DIR = originalStateDir;
}
if (originalConfigPath === undefined) {
delete process.env.OPENCLAW_CONFIG_PATH;
} else {
process.env.OPENCLAW_CONFIG_PATH = originalConfigPath;
}
});
it("handles /crestodian status and a persistent approval roundtrip", async () => {
const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "crestodian-live-rescue-"));
const configPath = path.join(tempDir, "openclaw.json");
vi.stubEnv("OPENCLAW_STATE_DIR", tempDir);
vi.stubEnv("OPENCLAW_CONFIG_PATH", configPath);
await fs.writeFile(
configPath,
JSON.stringify(
{
meta: { lastTouchedVersion: "live-test", lastTouchedAt: new Date(0).toISOString() },
agents: { defaults: {} },
tools: { exec: { security: "full", ask: "off" } },
},
null,
2,
),
);
const cfg: OpenClawConfig = {
crestodian: { rescue: { enabled: true } },
tools: { exec: { security: "full", ask: "off" } },
};
await expect(runRescue({ commandBody: "/crestodian status", cfg })).resolves.toContain(
"[crestodian] done: status.check",
);
await expect(
runRescue({ commandBody: "/crestodian set default model openai/gpt-5.5", cfg }),
).resolves.toContain("Reply /crestodian yes to apply");
await expect(runRescue({ commandBody: "/crestodian yes", cfg })).resolves.toContain(
"Default model: openai/gpt-5.5",
);
const config = JSON.parse(await fs.readFile(configPath, "utf8")) as OpenClawConfig;
expect(config.agents?.defaults?.model).toMatchObject({ primary: "openai/gpt-5.5" });
const auditPath = path.join(tempDir, "audit", "crestodian.jsonl");
const auditLines = (await fs.readFile(auditPath, "utf8")).trim().split("\n");
expect(auditLines.some((line) => line.includes('"operation":"config.setDefaultModel"'))).toBe(
true,
);
});
});


@@ -46,7 +46,9 @@
.chat-group-footer {
display: flex;
gap: 8px;
row-gap: 5px;
align-items: center;
flex-wrap: wrap;
margin-top: 6px;
}
@@ -60,6 +62,7 @@
font-size: 11px;
color: var(--muted);
opacity: 0.7;
line-height: 1.2;
}
/* ── Group footer action buttons (TTS, delete) ── */
@@ -382,14 +385,81 @@ img.chat-avatar {
.msg-meta {
display: inline-flex;
align-items: center;
gap: 6px;
font-size: 11px;
line-height: 1;
color: var(--muted);
margin-top: 4px;
flex-wrap: wrap;
}
.msg-meta__summary {
list-style: none;
display: inline-flex;
align-items: center;
gap: 4px;
min-height: 22px;
padding: 2px 7px 2px 5px;
border: 1px solid var(--border);
border-radius: var(--radius-full);
background: color-mix(in srgb, var(--bg-hover, rgba(255, 255, 255, 0.08)) 65%, transparent);
cursor: pointer;
user-select: none;
transition:
border-color var(--duration-fast) ease-out,
background var(--duration-fast) ease-out,
color var(--duration-fast) ease-out;
}
.msg-meta__summary::-webkit-details-marker {
display: none;
}
.msg-meta__summary:hover,
.msg-meta__summary:focus-visible {
border-color: color-mix(in srgb, var(--accent) 40%, var(--border));
background: var(--bg-hover, rgba(255, 255, 255, 0.08));
color: var(--fg);
}
.msg-meta__summary:focus-visible {
outline: 2px solid var(--accent);
outline-offset: 2px;
}
.msg-meta__summary-icon {
display: inline-flex;
width: 12px;
height: 12px;
transition: transform 120ms ease-out;
}
.msg-meta__summary-icon svg {
width: 12px;
height: 12px;
fill: none;
stroke: currentColor;
stroke-width: 2;
}
.msg-meta[open] .msg-meta__summary-icon {
transform: rotate(90deg);
}
details.msg-meta:not([open]) .msg-meta__details {
display: none;
}
.msg-meta__details {
display: inline-flex;
align-items: center;
gap: 8px;
flex-wrap: wrap;
padding: 3px 7px;
border: 1px solid var(--border);
border-radius: var(--radius-full);
background: rgba(255, 255, 255, 0.03);
}
.msg-meta__tokens,
.msg-meta__cache,
.msg-meta__cost,


@@ -5,7 +5,9 @@ import { afterEach, describe, expect, it, vi } from "vitest";
import { getSafeLocalStorage } from "../../local-storage.ts";
import type { MessageGroup } from "../types/chat-types.ts";
import {
formatChatTimestampForDisplay,
renderMessageGroup,
renderStreamingGroup,
resolveAssistantTextAvatar,
resetAssistantAttachmentAvailabilityCacheForTest,
} from "./grouped-render.ts";
@@ -304,6 +306,10 @@ describe("grouped chat rendering", () => {
},
1_000_000,
);
const meta = cached.querySelector<HTMLDetailsElement>("details.msg-meta");
expect(meta).not.toBeNull();
expect(meta?.open).toBe(false);
expect(meta?.querySelector("summary")?.textContent).toContain("Context");
expect(cached.querySelector(".msg-meta__ctx")?.textContent).toBe("44% ctx");
expect(cached.textContent).toContain("R438.4k");
expect(cached.textContent).toContain("W307");
@@ -320,6 +326,34 @@ describe("grouped chat rendering", () => {
expect(outputHeavy.querySelector(".msg-meta__ctx")?.textContent).toBe("10% ctx");
});
it("renders full dates with message timestamps", () => {
const container = document.createElement("div");
const timestamp = Date.UTC(2026, 3, 24, 18, 30);
renderAssistantMessage(container, {
role: "assistant",
content: "Done",
timestamp,
});
const time = container.querySelector<HTMLTimeElement>(".chat-group-timestamp");
const display = formatChatTimestampForDisplay(timestamp);
expect(time).not.toBeNull();
expect(time?.dateTime).toBe(display.dateTime);
expect(time?.textContent?.trim()).toBe(display.label);
expect(time?.getAttribute("title")).toBe(display.title);
});
it("renders full dates with streaming timestamps", () => {
const container = document.createElement("div");
const timestamp = Date.UTC(2026, 3, 24, 18, 30);
render(renderStreamingGroup("Working", timestamp), container);
const time = container.querySelector<HTMLTimeElement>(".chat-group-timestamp");
expect(time?.textContent?.trim()).toBe(formatChatTimestampForDisplay(timestamp).label);
});
it("renders configured local user names and avatar variants", () => {
const renderUser = (opts: Partial<RenderMessageGroupOptions>) => {
const container = document.createElement("div");


@@ -49,6 +49,53 @@ type AssistantAttachmentAvailability =
const assistantAttachmentAvailabilityCache = new Map<string, AssistantAttachmentAvailability>();
const ASSISTANT_ATTACHMENT_UNAVAILABLE_RETRY_MS = 5_000;
export type ChatTimestampDisplay = {
label: string;
title: string;
dateTime: string;
};
export function formatChatTimestampForDisplay(timestamp: number): ChatTimestampDisplay {
const date = new Date(timestamp);
if (!Number.isFinite(date.getTime())) {
return {
label: "Unknown date",
title: "Unknown date",
dateTime: "",
};
}
return {
label: date.toLocaleString([], {
month: "short",
day: "numeric",
year: "numeric",
hour: "numeric",
minute: "2-digit",
}),
title: date.toLocaleString([], {
weekday: "long",
month: "long",
day: "numeric",
year: "numeric",
hour: "numeric",
minute: "2-digit",
second: "2-digit",
timeZoneName: "short",
}),
dateTime: date.toISOString(),
};
}
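The `Number.isFinite` guard in `formatChatTimestampForDisplay` runs before any formatting because `Date.prototype.toISOString` throws a `RangeError` on an invalid `Date`. A minimal standalone sketch of just that guard:

```typescript
// Guard invalid timestamps before toISOString(), which throws on invalid Dates.
function safeDateTime(timestamp: number): string {
  const date = new Date(timestamp);
  return Number.isFinite(date.getTime()) ? date.toISOString() : "";
}
```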
function renderChatTimestamp(timestamp: number) {
const display = formatChatTimestampForDisplay(timestamp);
return html`
<time class="chat-group-timestamp" datetime=${display.dateTime} title=${display.title}>
${display.label}
</time>
`;
}
export function resetAssistantAttachmentAvailabilityCacheForTest() {
assistantAttachmentAvailabilityCache.clear();
for (const blobUrl of managedImageBlobUrlResolvedCache.values()) {
@@ -238,10 +285,6 @@ export function renderStreamingGroup(
basePath?: string,
authToken?: string | null,
) {
const name = assistant?.name ?? "Assistant";
return html`
@@ -260,7 +303,7 @@ export function renderStreamingGroup(
)}
<div class="chat-group-footer">
<span class="chat-sender-name">${name}</span>
${renderChatTimestamp(startedAt)}
</div>
</div>
</div>
@@ -316,10 +359,6 @@ export function renderMessageGroup(
: normalizedRole === "tool"
? "tool"
: "other";
// Aggregate usage/cost/model across all messages in the group
const meta = extractGroupMeta(group, opts.contextWindow ?? null);
@@ -365,8 +404,7 @@ export function renderMessageGroup(
)}
<div class="chat-group-footer">
<span class="chat-sender-name">${who}</span>
${renderChatTimestamp(group.timestamp)} ${renderMessageMeta(meta)}
${normalizedRole === "assistant" && isTtsSupported() ? renderTtsButton(group) : nothing}
${opts.onDelete
? renderDeleteButton(opts.onDelete, normalizedRole === "user" ? "left" : "right")
@@ -495,7 +533,15 @@ function renderMessageMeta(meta: GroupMeta | null) {
return nothing;
}
return html`
<details class="msg-meta">
<summary class="msg-meta__summary" title="Show message context details">
<span class="msg-meta__summary-icon" aria-hidden="true">${icons.chevronRight}</span>
<span>Context</span>
</summary>
<span class="msg-meta__details">${parts}</span>
</details>
`;
}
function extractGroupText(group: MessageGroup): string {