Mirror of https://github.com/openclaw/openclaw.git, synced 2026-05-09 17:40:42 +00:00
Merge branch 'main' into fix/control-ui-sender-metadata-stream
@@ -6,14 +6,21 @@ Docs: https://docs.openclaw.ai

### Changes

- Control UI: read the Quick Settings exec policy badge from `tools.exec.security` instead of the non-schema `agents.defaults.exec.security` path, so configured `full`/`deny` values render accurately. Fixes #78311. Thanks @FriedBack.
- Control UI/usage: add transcript-backed historical lineage rollups for rotated logical sessions, with current-instance vs historical-lineage scope controls and long-range presets so usage history stays visible after restarts and updates. Fixes #50701. Thanks @dev-gideon-llc and @BunsDev.
- Agents/failover: harden state-aware lane suspension by persisting quota resume transitions, restoring configured lane concurrency, preserving non-quota failure reasons, and exporting model failover events through diagnostics OTLP. Thanks @BunsDev.
- Channels/streaming: make progress draft labels scroll away with other progress lines, render structured tool rows as compact emoji/title/details, show web-search queries from provider-native argument shapes, and skip empty Discord apply-patch starts until a patch summary exists. (#79146)
- Telegram: preserve the channel-specific 10-option poll cap in the unified outbound adapter so over-limit polls are rejected before send. (#78762) Thanks @obviyus.
- Slack: route handled top-level channel turns in implicit-conversation channels to thread-scoped sessions when Slack reply threading is enabled, keeping the root turn and later thread replies on one OpenClaw session. (#78522) Thanks @zeroth-blip.
- Telegram: re-probe the primary fetch transport after repeated sticky fallback success so transient IPv4 or pinned-IP fallback promotion can recover without a gateway restart. Fixes #77088. (#77157) Thanks @MkDev11.
- Runtime/install: raise the supported Node 22 floor to `22.16+` so native SQLite query handling can rely on the `node:sqlite` statement metadata API while continuing to recommend Node 24. (#78921)
- Discord/voice: make duplicate same-guild auto-join entries resolve to the last configured channel so moving an agent between voice channels does not keep joining the stale channel.
- Discord/voice: include a bounded one-line STT transcript preview in verbose voice logs so live voice debugging shows what speakers said before the agent reply.
- Codex app-server: pin the managed Codex harness and Codex CLI smoke package to `@openai/codex@0.129.0`, defer OpenClaw integration dynamic tools behind Codex tool search by default, and accept current Codex service-tier values so legacy `fast` settings survive the stable harness upgrade as `priority`.
- Codex app-server: default implicit local stdio app-server permissions to guardian when Codex system requirements disallow the YOLO approval, reviewer, or sandbox value, including hostname-scoped remote sandbox entries, avoiding turn-start failures on managed hosts that permit only reviewed approval or narrower sandboxes.
- Discord/voice: stream ElevenLabs TTS directly into Discord playback and send ElevenLabs latency optimization as the documented query parameter so spoken replies can start sooner.
- Discord/voice: keep TTS playback running when another user starts speaking, ignore new capture during playback to avoid feedback loops, and downgrade expected receive-stream aborts to verbose diagnostics.
- iMessage: expose native private-API message actions through `imsg rpc` for reactions, edits, unsends, replies, rich sends, attachments, and group management when `imsg status --json` reports the required bridge capabilities.
- Telegram: treat successful same-chat `message` tool outbound sends during an inbound telegram turn as delivered when deciding whether to emit the rewritten silent reply fallback (#78685). Thanks @neeravmakwana.
- Gateway/tasks: reconcile stale CLI run-context tasks whose live run context disappeared even when a child session row remains, and apply the default bounded reload deferral timeout to channel hot reloads so stale task records cannot block Discord/Slack/Telegram reloads forever.
- Gateway/sessions: keep session-store index writes atomic while skipping durable fsync inside the writer lock, reducing cron and channel-turn starvation on slow filesystems and addressing the session-store strand of #73655. Thanks @mmartoccia.
@@ -39,6 +46,7 @@ Docs: https://docs.openclaw.ai

- ACPX/Codex: preserve trusted Codex project declarations when launching isolated Codex ACP sessions, avoiding interactive trust prompts in headless runs. Thanks @Stedyclaw.
- ACPX/Codex: reap stale OpenClaw-owned ACPX/Codex ACP process trees on startup and after ACP session close, preventing orphaned harness processes from slowing the Gateway. Thanks @91wan.
- ACP bridge: implement stable session list, resume, and close handlers so ACP clients can page Gateway sessions, rebind existing sessions without replay, and close bridge sessions cleanly. Thanks @amknight.
- ACP bridge: replay complete ledger-backed ACP sessions on load, including user prompts, tool updates, session metadata, and usage snapshots, while keeping older sessions on the existing transcript fallback. Thanks @amknight.
- ACP sessions: allow parent agents to inspect and message their own spawned cross-agent ACP sessions without enabling broad agent-to-agent visibility. Thanks @barronlroth.
- Talk/voice: unify realtime relay, transcription relay, managed-room handoff, Voice Call, Google Meet, VoiceClaw, and native clients around a shared Talk session controller and add the Gateway-managed `talk.session.*` RPC surface.
- Diagnostics/Talk: export bounded Talk lifecycle/audio metrics and session recovery metrics through OpenTelemetry and Prometheus without exposing transcripts, audio payloads, room ids, turn ids, or session ids.
@@ -165,7 +173,11 @@ Docs: https://docs.openclaw.ai

### Fixes

- Gateway/macOS: `openclaw gateway stop` now uses `launchctl bootout` by default instead of unconditionally calling `launchctl disable`, so KeepAlive auto-recovery still works after unexpected crashes; use the new `--disable` flag to opt into the persistent-disable behavior when a manual stop should survive reboots. Fixes #77934. Thanks @bmoran1022.
- Gateway/macOS: `repairLaunchAgentBootstrap` no longer kickstarts an already-running LaunchAgent, preventing unnecessary service restarts and session disconnects when repair runs against a healthy gateway. Fixes #77428. Thanks @ramitrkar-hash.
- Gateway/macOS: `openclaw gateway stop --disable` now persists the LaunchAgent disable bit even after a previous bootout left the service not loaded, keeping the explicit stay-down path reliable. (#78412) Thanks @wdeveloper16.
- Control UI/chat: hide retired and non-public Google Gemini model IDs from chat model catalogs and route the bare `gemini-3-pro` alias to Gemini 3.1 Pro Preview instead of the shut-down Gemini 3 Pro Preview. Thanks @BunsDev.
- CLI/install: refuse state-mutating OpenClaw CLI runs as root by default, keep an explicit `OPENCLAW_ALLOW_ROOT=1` escape hatch for intentional root/container use, and update DigitalOcean setup guidance to run OpenClaw as a non-root user. Fixes #67478. Thanks @Jerry-Xin and @natechicago.
- Gateway/watch: leave `OPENCLAW_TRACE_SYNC_IO` disabled by default in `pnpm gateway:watch:raw` so watch mode avoids noisy Node sync-I/O stack traces unless explicitly requested.
- Codex app-server: close stdio stdin before force-killing the managed app-server, matching Codex single-client shutdown behavior and avoiding unsettled CLI exits after successful runs.
- CLI/Codex: dispose registered agent harnesses during short-lived CLI shutdown so successful Codex-backed `agent --local` runs do not leave app-server child processes alive.

@@ -593,6 +605,8 @@ Docs: https://docs.openclaw.ai

- Hooks/cron: log returned `/hooks/agent` isolated-run errors and failed cron jobs with cron diagnostic summaries, so rejected `payload.model` values are visible instead of looking like accepted-but-missing runs. Fixes #78597. (#78655) Thanks @kevinslin.
- Managed proxy/security: classify raw socket callsites and proxy runtime mutations in boundary checks so new direct egress or unmanaged proxy-state changes cannot land without explicit review. (#77126) Thanks @jesse-merhi.
- Channels/iMessage: surface the silent group-allowlist drop at default log level by emitting a one-time `warn` per account at monitor startup when `channels.imessage.groupPolicy: "allowlist"` is set without a `channels.imessage.groups` block, plus a one-time `warn` per `chat_id` when the runtime gate drops a specific group, naming the exact `channels.imessage.groups[...]` key to add to allow it. Fixes #78749. (#79190) Thanks @omarshahine.
- WhatsApp: stop Gateway-originated outbound echoes from advancing inbound activity in `openclaw channels status`, so outbound self-sends no longer look like handled inbound messages. Fixes #79056. (#79057) Thanks @ai-hpc and @bittoby.
- Gateway/nodes: preserve the live node registry session and invoke ownership when an older same-node WebSocket closes after reconnecting. (#78351) Thanks @samzong.

## 2026.5.3-1
@@ -86,6 +86,9 @@ Welcome to the lobster tank! 🦞

- **Mason Huang** - Stability, Security, Speed
  - GitHub: [@hxy91819](https://github.com/hxy91819) · X: [@chenjingtalk](https://x.com/chenjingtalk)

- **Maurice Niu** - ClawHub, Security, Stability, Data integrity
  - GitHub: [@momothemage](https://github.com/momothemage) · X: [@MomoPsicasso](https://x.com/MomoPsicasso)

## How to Contribute

1. **Bugs & small fixes** → Open a PR!
@@ -2243,6 +2243,9 @@ public struct SessionsUsageParams: Codable, Sendable {
    public let startdate: String?
    public let enddate: String?
    public let mode: AnyCodable?
    public let range: AnyCodable?
    public let groupby: AnyCodable?
    public let includehistorical: Bool?
    public let utcoffset: String?
    public let limit: Int?
    public let includecontextweight: Bool?

@@ -2252,6 +2255,9 @@ public struct SessionsUsageParams: Codable, Sendable {
        startdate: String?,
        enddate: String?,
        mode: AnyCodable?,
        range: AnyCodable?,
        groupby: AnyCodable?,
        includehistorical: Bool?,
        utcoffset: String?,
        limit: Int?,
        includecontextweight: Bool?)

@@ -2260,6 +2266,9 @@ public struct SessionsUsageParams: Codable, Sendable {
        self.startdate = startdate
        self.enddate = enddate
        self.mode = mode
        self.range = range
        self.groupby = groupby
        self.includehistorical = includehistorical
        self.utcoffset = utcoffset
        self.limit = limit
        self.includecontextweight = includecontextweight

@@ -2270,6 +2279,9 @@ public struct SessionsUsageParams: Codable, Sendable {
        case startdate = "startDate"
        case enddate = "endDate"
        case mode
        case range
        case groupby = "groupBy"
        case includehistorical = "includeHistorical"
        case utcoffset = "utcOffset"
        case limit
        case includecontextweight = "includeContextWeight"
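The CodingKeys above imply a camelCase JSON wire shape for sessions-usage requests. A minimal TypeScript sketch of that payload (the field names are taken from the CodingKeys; the `SessionsUsageWireParams` name and the example values are illustrative, not part of the API):

```typescript
// Wire shape implied by SessionsUsageParams' CodingKeys: the Swift
// lowercase property names serialize to the camelCase JSON keys below.
interface SessionsUsageWireParams {
  startDate?: string;          // from `case startdate = "startDate"`
  endDate?: string;
  mode?: unknown;              // AnyCodable? on the Swift side
  range?: unknown;
  groupBy?: unknown;
  includeHistorical?: boolean;
  utcOffset?: string;
  limit?: number;
  includeContextWeight?: boolean;
}

// Example payload: a bounded usage query including historical lineage.
const params: SessionsUsageWireParams = {
  startDate: "2026-04-09",
  endDate: "2026-05-09",
  includeHistorical: true,
  utcOffset: "+00:00",
  limit: 100,
};

console.log(Object.keys(params).join(","));
// → startDate,endDate,includeHistorical,utcOffset,limit
```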

@@ -1206,6 +1206,7 @@ Notes:

- Voice transcript turns derive owner status from Discord `allowFrom` (or `dm.allowFrom`); non-owner speakers cannot access owner-only tools (for example `gateway` and `cron`).
- Discord voice is opt-in for text-only configs; set `channels.discord.voice.enabled=true` (or keep an existing `channels.discord.voice` block) to enable `/vc` commands, the voice runtime, and the `GuildVoiceStates` gateway intent.
- `channels.discord.intents.voiceStates` can explicitly override voice-state intent subscription. Leave it unset for the intent to follow effective voice enablement.
- If `voice.autoJoin` has multiple entries for the same guild, OpenClaw joins the last configured channel for that guild.
- `voice.daveEncryption` and `voice.decryptionFailureTolerance` pass through to `@discordjs/voice` join options.
- `@discordjs/voice` defaults are `daveEncryption=true` and `decryptionFailureTolerance=24` if unset.
- `voice.connectTimeoutMs` controls the initial `@discordjs/voice` Ready wait for `/vc join` and auto-join attempts. Default: `30000`.
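The last-entry-wins rule for duplicate `voice.autoJoin` guild entries can be sketched as follows (a minimal illustration; the entry shape and the `resolveAutoJoin` helper are invented for this example, not OpenClaw internals):

```typescript
interface AutoJoinEntry {
  guildId: string;
  channelId: string;
}

// Later entries for the same guild override earlier ones, so moving an
// agent between voice channels only requires the last entry to be current.
function resolveAutoJoin(entries: AutoJoinEntry[]): Map<string, string> {
  const byGuild = new Map<string, string>();
  for (const { guildId, channelId } of entries) {
    byGuild.set(guildId, channelId); // Map.set replaces any prior value
  }
  return byGuild;
}

const resolved = resolveAutoJoin([
  { guildId: "g1", channelId: "voice-old" },
  { guildId: "g1", channelId: "voice-new" },
]);
console.log(resolved.get("g1")); // → voice-new
```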

@@ -930,6 +930,7 @@ Current Slack message actions include `send`, `upload-file`, `download-file`, `r

- With default `session.dmScope=main`, Slack DMs collapse to the agent's main session.
- Channel sessions: `agent:<agentId>:slack:channel:<channelId>`.
- Thread replies can create thread session suffixes (`:thread:<threadTs>`) when applicable.
- In channels where OpenClaw handles top-level messages without requiring an explicit mention, non-`off` `replyToMode` routes each handled root into `agent:<agentId>:slack:channel:<channelId>:thread:<rootTs>` so the visible Slack thread maps to one OpenClaw session from the first turn.
- `channels.slack.thread.historyScope` default is `thread`; `thread.inheritParent` default is `false`.
- `channels.slack.thread.initialHistoryLimit` controls how many existing thread messages are fetched when a new thread session starts (default `20`; set `0` to disable).
- `channels.slack.thread.requireExplicitMention` (default `false`): when `true`, suppress implicit thread mentions so the bot only responds to explicit `@bot` mentions inside threads, even when the bot already participated in the thread. Without this, replies in a bot-participated thread bypass `requireMention` gating.
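The session-key shapes above can be sketched as simple string builders (hypothetical helpers for illustration; only the key formats themselves come from these docs):

```typescript
// Build the documented Slack channel session key.
function slackChannelSessionKey(agentId: string, channelId: string): string {
  return `agent:${agentId}:slack:channel:${channelId}`;
}

// With reply threading enabled, each handled root turn gets a
// thread-scoped key derived from the channel key plus the root timestamp.
function slackThreadSessionKey(
  agentId: string,
  channelId: string,
  rootTs: string,
): string {
  return `${slackChannelSessionKey(agentId, channelId)}:thread:${rootTs}`;
}

console.log(slackThreadSessionKey("main", "C012345", "1715270400.000100"));
// → agent:main:slack:channel:C012345:thread:1715270400.000100
```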

@@ -44,7 +44,7 @@ Quick rule:

| `initialize`, `newSession`, `prompt`, `cancel` | Implemented | Core bridge flow over stdio to Gateway chat/send + abort. |
| `listSessions`, slash commands | Implemented | Session list works against Gateway session state with bounded cursor pagination and `cwd` filtering where Gateway session rows carry workspace metadata; commands are advertised via `available_commands_update`. |
| `resumeSession`, `closeSession` | Implemented | Resume rebinds an ACP session to an existing Gateway session without replaying history. Close cancels active bridge work, resolves pending prompts as cancelled, and releases bridge session state. |
| `loadSession` | Partial | Rebinds the ACP session to a Gateway session key and replays stored user/assistant text history. Tool/system history is not reconstructed yet. |
| `loadSession` | Partial | Rebinds the ACP session to a Gateway session key and replays ACP event-ledger history for bridge-created sessions. Older/no-ledger sessions fall back to stored user/assistant text. |
| Prompt content (`text`, embedded `resource`, images) | Partial | Text/resources are flattened into chat input; images become Gateway attachments. |
| Session modes | Partial | `session/set_mode` is supported and the bridge exposes initial Gateway-backed session controls for thought level, tool verbosity, reasoning, usage detail, and elevated actions. Broader ACP-native mode/config surfaces are still out of scope. |
| Session info and usage updates | Partial | The bridge emits `session_info_update` and best-effort `usage_update` notifications from cached Gateway session snapshots. Usage is approximate and only sent when Gateway token totals are marked fresh. |

@@ -56,9 +56,9 @@ Quick rule:

## Known Limitations

- `loadSession` replays stored user and assistant text history, but it does not reconstruct historic tool calls, system notices, or richer ACP-native event types.
- `loadSession` can replay complete ACP event-ledger history only for bridge-created sessions. Older/no-ledger sessions still use transcript fallback and do not reconstruct historic tool calls or system notices.
- If multiple ACP clients share the same Gateway session key, event and cancel routing are best-effort rather than strictly isolated per client. Prefer the default isolated `acp:<uuid>` sessions when you need clean editor-local
@@ -483,11 +483,13 @@ openclaw gateway restart

- `gateway status`: `--url`, `--token`, `--password`, `--timeout`, `--no-probe`, `--require-rpc`, `--deep`, `--json`
- `gateway install`: `--port`, `--runtime <node|bun>`, `--token`, `--wrapper <path>`, `--force`, `--json`
- `gateway restart`: `--safe`, `--force`, `--wait <duration>`, `--json`
- `gateway uninstall|start|stop`: `--json`
- `gateway uninstall|start`: `--json`
- `gateway stop`: `--disable`, `--json`

</Accordion>
<Accordion title="Lifecycle behavior">

- Use `gateway restart` to restart a managed service. Do not chain `gateway stop` and `gateway start` as a restart substitute; on macOS, `gateway stop` intentionally disables the LaunchAgent before stopping it.
- Use `gateway restart` to restart a managed service. Do not chain `gateway stop` and `gateway start` as a restart substitute.
- On macOS, `gateway stop` uses `launchctl bootout` by default, which removes the LaunchAgent from the current boot session without persisting a disable — KeepAlive auto-recovery remains active for future crashes and `gateway start` re-enables cleanly without a manual `launchctl enable`. Pass `--disable` to persistently suppress KeepAlive and RunAtLoad so the gateway does not respawn until the next explicit `gateway start`; use this when a manual stop should survive reboots or system restarts.
- `gateway restart --safe` asks the running Gateway to preflight active OpenClaw work and defer the restart until reply delivery, embedded runs, and task runs drain. `--safe` cannot be combined with `--force` or `--wait`.
- `gateway restart --wait 30s` overrides the configured restart drain budget for that restart. Bare numbers are milliseconds; units such as `s`, `m`, and `h` are accepted. `--wait 0` waits indefinitely.
- `gateway restart --force` skips the active-work drain and restarts immediately. Use it when an operator has already inspected the listed task blockers and wants the gateway back now.
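The `--wait` duration grammar (bare numbers are milliseconds; `s`, `m`, and `h` units are accepted) can be sketched like this. A hypothetical parser for illustration only; the real CLI's implementation and accepted unit list may differ:

```typescript
// Parse a --wait value: bare numbers are milliseconds; s/m/h multiply.
function parseWaitDuration(input: string): number {
  const match = /^(\d+(?:\.\d+)?)(ms|s|m|h)?$/.exec(input.trim());
  if (!match) throw new Error(`invalid duration: ${input}`);
  const value = Number(match[1]);
  const unit = match[2] ?? "ms"; // no unit → milliseconds
  const factors: Record<string, number> = {
    ms: 1,
    s: 1_000,
    m: 60_000,
    h: 3_600_000,
  };
  return value * factors[unit]; // 0 means "wait indefinitely" per the docs
}

console.log(parseWaitDuration("30s")); // → 30000
console.log(parseWaitDuration("500")); // → 500
```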

@@ -217,7 +217,9 @@ openclaw gateway restart
openclaw gateway stop
```

Use `openclaw gateway restart` for restarts. Do not chain `openclaw gateway stop` and `openclaw gateway start`; on macOS, `gateway stop` intentionally disables the LaunchAgent before stopping it.
Use `openclaw gateway restart` for restarts. Do not chain `openclaw gateway stop` and `openclaw gateway start` as a restart substitute.

On macOS, `gateway stop` uses `launchctl bootout` by default — this removes the LaunchAgent from the current boot session without persisting a disable, so KeepAlive auto-recovery still works after unexpected crashes and `gateway start` re-enables cleanly. To persistently suppress auto-respawn across reboots, pass `--disable`: `openclaw gateway stop --disable`.

LaunchAgent labels are `ai.openclaw.gateway` (default) or `ai.openclaw.<profile>` (named profile). `openclaw doctor` audits and repairs service config drift.
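The profile-to-label mapping above is mechanical and can be sketched as (an illustrative helper; the function name is invented, only the label formats come from the docs):

```typescript
// Derive the macOS LaunchAgent label for a gateway profile:
// the default profile maps to ai.openclaw.gateway, named profiles
// map to ai.openclaw.<profile>.
function launchAgentLabel(profile?: string): string {
  return profile ? `ai.openclaw.${profile}` : "ai.openclaw.gateway";
}

console.log(launchAgentLabel());       // → ai.openclaw.gateway
console.log(launchAgentLabel("work")); // → ai.openclaw.work
```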

@@ -50,9 +50,18 @@ DigitalOcean is the simplest paid VPS path. If you prefer cheaper or free option

# Install OpenClaw
curl -fsSL https://openclaw.ai/install.sh | bash

# Create the non-root user that will own OpenClaw state and services.
adduser openclaw
usermod -aG sudo openclaw
loginctl enable-linger openclaw

su - openclaw
openclaw --version
```

Use the root shell only for system bootstrap. Run OpenClaw commands as the non-root `openclaw` user so state lives under `/home/openclaw/.openclaw/` and the Gateway installs as that user's systemd service.

</Step>

<Step title="Run onboarding">

@@ -97,8 +106,8 @@ DigitalOcean is the simplest paid VPS path. If you prefer cheaper or free option

**Option B: Tailscale Serve**

```bash
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up
curl -fsSL https://tailscale.com/install.sh | sudo sh
sudo tailscale up
openclaw config set gateway.tailscale.mode serve
openclaw gateway restart
```

@@ -484,7 +484,13 @@ By default, OpenClaw starts local Codex harness sessions in YOLO mode:

`approvalPolicy: "never"`, `approvalsReviewer: "user"`, and `sandbox: "danger-full-access"`. This is the trusted local operator posture used for autonomous heartbeats: Codex can use shell and network tools without stopping on native approval prompts that nobody is around to answer.
stopping on native approval prompts that nobody is around to answer. On local stdio Codex app-server installs where Codex's system requirements file disallows the implicit YOLO approval, reviewer, or sandbox value, OpenClaw treats the implicit default as guardian instead and selects allowed guardian permissions so it does not send an override that Codex app-server will reject. Hostname-matching `[[remote_sandbox_config]]` entries in the same requirements file are honored for the sandbox default decision.

To opt in to Codex guardian-reviewed approvals, set `appServer.mode: "guardian"`:

@@ -635,22 +641,22 @@ Supported top-level Codex plugin fields:

Supported `appServer` fields:

| Field | Default | Meaning |
| --- | --- | --- |
| `transport` | `"stdio"` | `"stdio"` spawns Codex; `"websocket"` connects to `url`. |
| `command` | managed Codex binary | Executable for stdio transport. Leave unset to use the managed binary; set it only for an explicit override. |
| `args` | `["app-server", "--listen", "stdio://"]` | Arguments for stdio transport. |
| `url` | unset | WebSocket app-server URL. |
| `authToken` | unset | Bearer token for WebSocket transport. |
| `headers` | `{}` | Extra WebSocket headers. |
| `clearEnv` | `[]` | Extra environment variable names removed from the spawned stdio app-server process after OpenClaw builds its inherited environment. `CODEX_HOME` and `HOME` are reserved for OpenClaw's per-agent Codex isolation on local launches. |
| `requestTimeoutMs` | `60000` | Timeout for app-server control-plane calls. |
| `turnCompletionIdleTimeoutMs` | `60000` | Quiet window after a turn-scoped Codex app-server request while OpenClaw waits for `turn/completed`. Raise this for slow post-tool or status-only synthesis phases. |
| `mode` | `"yolo"` | Preset for YOLO or guardian-reviewed execution. |
| `approvalPolicy` | `"never"` | Native Codex approval policy sent to thread start/resume/turn. |
| `sandbox` | `"danger-full-access"` | Native Codex sandbox mode sent to thread start/resume. |
| `approvalsReviewer` | `"user"` | Use `"auto_review"` to let Codex review native approval prompts. `guardian_subagent` remains a legacy alias. |
| `serviceTier` | unset | Optional Codex app-server service tier. `"priority"` enables fast-mode routing, `"flex"` requests flex processing, `null` clears the override, and legacy `"fast"` is accepted as `"priority"`. |

| Field | Default | Meaning |
| --- | --- | --- |
| `transport` | `"stdio"` | `"stdio"` spawns Codex; `"websocket"` connects to `url`. |
| `command` | managed Codex binary | Executable for stdio transport. Leave unset to use the managed binary; set it only for an explicit override. |
| `args` | `["app-server", "--listen", "stdio://"]` | Arguments for stdio transport. |
| `url` | unset | WebSocket app-server URL. |
| `authToken` | unset | Bearer token for WebSocket transport. |
| `headers` | `{}` | Extra WebSocket headers. |
| `clearEnv` | `[]` | Extra environment variable names removed from the spawned stdio app-server process after OpenClaw builds its inherited environment. `CODEX_HOME` and `HOME` are reserved for OpenClaw's per-agent Codex isolation on local launches. |
| `requestTimeoutMs` | `60000` | Timeout for app-server control-plane calls. |
| `turnCompletionIdleTimeoutMs` | `60000` | Quiet window after a turn-scoped Codex app-server request while OpenClaw waits for `turn/completed`. Raise this for slow post-tool or status-only synthesis phases. |
| `mode` | `"yolo"` unless local Codex requirements disallow YOLO | Preset for YOLO or guardian-reviewed execution. Local stdio requirements that omit `danger-full-access`, `never` approval, or the `user` reviewer make the implicit default guardian. |
| `approvalPolicy` | `"never"` or an allowed guardian approval policy | Native Codex approval policy sent to thread start/resume/turn. Guardian defaults prefer `"on-request"` when allowed. |
| `sandbox` | `"danger-full-access"` or an allowed guardian sandbox | Native Codex sandbox mode sent to thread start/resume. Guardian defaults prefer `"workspace-write"` when allowed, otherwise `"read-only"`. |
| `approvalsReviewer` | `"user"` or an allowed guardian reviewer | Use `"auto_review"` to let Codex review native approval prompts when allowed, otherwise `guardian_subagent` or `user`. `guardian_subagent` remains a legacy alias. |
| `serviceTier` | unset | Optional Codex app-server service tier. `"priority"` enables fast-mode routing, `"flex"` requests flex processing, `null` clears the override, and legacy `"fast"` is accepted as `"priority"`. |
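The default-selection rules in the `mode`, `approvalPolicy`, and `sandbox` rows can be sketched as follows. This is an illustrative reimplementation under stated assumptions (the real selection also weighs the reviewer value and lives in OpenClaw's Codex plugin; the type and function names here are invented):

```typescript
type Sandbox = "danger-full-access" | "workspace-write" | "read-only";
type Approval = "never" | "on-request";

// Pick implicit defaults from what the local Codex requirements allow.
// YOLO needs danger-full-access plus never-approval; otherwise guardian
// defaults prefer on-request approval and workspace-write, falling back
// to read-only when workspace-write is disallowed.
function implicitDefaults(allowed: {
  sandboxes: Sandbox[];
  approvals: Approval[];
}) {
  const yoloAllowed =
    allowed.sandboxes.includes("danger-full-access") &&
    allowed.approvals.includes("never");
  if (yoloAllowed) {
    return { mode: "yolo", approvalPolicy: "never", sandbox: "danger-full-access" };
  }
  return {
    mode: "guardian",
    approvalPolicy: allowed.approvals.includes("on-request")
      ? "on-request"
      : allowed.approvals[0],
    sandbox: allowed.sandboxes.includes("workspace-write")
      ? "workspace-write"
      : "read-only",
  };
}

console.log(
  implicitDefaults({ sandboxes: ["workspace-write", "read-only"], approvals: ["on-request"] }),
);
// → { mode: 'guardian', approvalPolicy: 'on-request', sandbox: 'workspace-write' }
```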

OpenClaw-owned dynamic tool calls are bounded independently from `appServer.requestTimeoutMs`: each Codex `item/tool/call` request must receive
@@ -98,7 +98,10 @@ describe("embedded acpx plugin config", () => {
    });

    const server = resolved.mcpServers["openclaw-plugin-tools"];
    expect(server).toBeDefined();
    expect(server).toMatchObject({
      command: process.execPath,
      args: expect.any(Array),
    });
    expect(server.command).toBe(process.execPath);
    expect(Array.isArray(server.args)).toBe(true);
    expect(server.args?.length).toBeGreaterThan(0);

@@ -113,7 +116,10 @@ describe("embedded acpx plugin config", () => {
    });

    const server = resolved.mcpServers["openclaw-tools"];
    expect(server).toBeDefined();
    expect(server).toMatchObject({
      command: process.execPath,
      args: expect.any(Array),
    });
    expect(server.command).toBe(process.execPath);
    expect(Array.isArray(server.args)).toBe(true);
    expect(server.args?.length).toBeGreaterThan(0);
@@ -12,7 +12,7 @@ describe("acpx package manifest", () => {
      fs.readFileSync(new URL("../package.json", import.meta.url), "utf8"),
    ) as AcpxPackageManifest;

    expect(packageJson.dependencies?.acpx).toBeDefined();
    expect(packageJson.dependencies?.acpx).toEqual(expect.any(String));
    expect(packageJson.dependencies?.["@zed-industries/codex-acp"]).toBe("0.13.0");
    expect(packageJson.dependencies?.["@agentclientprotocol/claude-agent-acp"]).toBe("0.32.0");
    expect(packageJson.devDependencies?.["@agentclientprotocol/claude-agent-acp"]).toBeUndefined();
@@ -325,10 +325,15 @@ describe("createAcpxRuntimeService", () => {
    await service.start(ctx);

    const backend = getAcpRuntimeBackend("acpx");
    expect(backend?.runtime).toBeDefined();
    if (!backend) {
      throw new Error("expected ACPX runtime backend");
    }
    expect(backend.runtime).toMatchObject({
      ensureSession: expect.any(Function),
    });
    expect(acpxRuntimeConstructorMock).not.toHaveBeenCalled();

    await backend?.runtime.ensureSession({
    await backend.runtime.ensureSession({
      agent: "codex",
      mode: "oneshot",
      sessionKey: "agent:codex:acp:test",

@@ -509,7 +514,9 @@ describe("createAcpxRuntimeService", () => {
    await service.start(ctx);

    expect(probeAvailability).not.toHaveBeenCalled();
    expect(getAcpRuntimeBackend("acpx")).toBeTruthy();
    expect(getAcpRuntimeBackend("acpx")).toMatchObject({
      runtime: expect.any(Object),
    });

    await service.stop?.(ctx);
  });
@@ -156,6 +156,17 @@ describe("active-memory plugin", () => {
|
||||
vi
|
||||
.mocked(api.logger.warn)
|
||||
.mock.calls.some((call: unknown[]) => String(call[0]).includes(needle));
|
||||
const expectPrependContextResult = (result: unknown) => {
|
||||
expect(result).toMatchObject({
|
||||
prependContext: expect.any(String),
|
||||
});
|
||||
};
|
||||
const requireNonEmptyString = (value: unknown, message: string): string => {
|
||||
if (typeof value !== "string" || value.length === 0) {
|
||||
throw new Error(message);
|
||||
}
|
||||
return value;
|
||||
};
|
||||
|
||||
beforeEach(async () => {
|
||||
vi.clearAllMocks();
|
||||
@@ -931,7 +942,7 @@ describe("active-memory plugin", () => {
|
||||
);
|
||||
|
||||
expect(runEmbeddedPiAgent).toHaveBeenCalledTimes(1);
|
||||
expect(result).toBeDefined();
|
||||
expectPrependContextResult(result);
|
||||
});
|
||||
|
||||
it("skips sessions whose conversation id is in deniedChatIds even when chat type is allowed", async () => {
|
||||
@@ -1033,7 +1044,7 @@ describe("active-memory plugin", () => {
|
||||
);
|
||||
|
||||
expect(runEmbeddedPiAgent).toHaveBeenCalledTimes(1);
|
||||
expect(result).toBeDefined();
|
||||
expectPrependContextResult(result);
|
||||
});
|
||||
|
||||
it("matches per-peer direct session keys (agent:<id>:direct:<peer>)", async () => {
|
||||
@@ -1057,7 +1068,7 @@ describe("active-memory plugin", () => {
|
||||
);
|
||||
|
||||
expect(runEmbeddedPiAgent).toHaveBeenCalledTimes(1);
|
||||
expect(result).toBeDefined();
|
||||
expectPrependContextResult(result);
|
||||
});
|
||||
|
||||
it("matches per-account-channel-peer direct session keys (agent:<id>:<channel>:<account>:direct:<peer>)", async () => {
|
||||
@@ -1082,7 +1093,7 @@ describe("active-memory plugin", () => {
|
||||
);
|
||||
|
||||
expect(runEmbeddedPiAgent).toHaveBeenCalledTimes(1);
|
||||
expect(result).toBeDefined();
|
||||
expectPrependContextResult(result);
|
||||
});
|
||||
|
||||
it("strips :thread:<id> suffix before matching allowedChatIds (group)", async () => {
|
||||
@@ -1109,7 +1120,7 @@ describe("active-memory plugin", () => {
|
||||
);
|
||||
|
||||
expect(runEmbeddedPiAgent).toHaveBeenCalledTimes(1);
|
||||
expect(result).toBeDefined();
|
||||
expectPrependContextResult(result);
|
||||
});
|
||||
|
||||
it("strips :thread:<id> suffix before matching deniedChatIds (direct)", async () => {
|
||||
@@ -1630,13 +1641,13 @@ describe("active-memory plugin", () => {
const deprecationMessage = warnCalls
.map(([first]) => (typeof first === "string" ? first : ""))
.find((message) => message.includes("config.modelFallbackPolicy is deprecated"));
expect(deprecationMessage).toBeDefined();
const message = requireNonEmptyString(deprecationMessage, "deprecation warning missing");
// Positive: the warning describes chain-resolution last-resort behavior.
expect(deprecationMessage).toContain("chain-resolution");
expect(deprecationMessage).toContain("last-resort");
expect(message).toContain("chain-resolution");
expect(message).toContain("last-resort");
// Negative: the warning explicitly disclaims runtime failover, since
// that's the wrong mental model the previous wording invited.
expect(deprecationMessage).toMatch(/NOT a runtime failover/i);
expect(message).toMatch(/NOT a runtime failover/i);
});

it("does not use a built-in fallback model even when default-remote is configured", async () => {
@@ -1760,9 +1771,9 @@ describe("active-memory plugin", () => {
const debugLine = entries?.[0]?.lines.find((line) =>
line.startsWith("🔎 Active Memory Debug:"),
);
expect(debugLine).toBeDefined();
expect(debugLine).toContain("backend=qmd");
expect(debugLine).toContain("hits=3");
const line = requireNonEmptyString(debugLine, "active memory debug line missing");
expect(line).toContain("backend=qmd");
expect(line).toContain("hits=3");
});

it("replaces stale structured active-memory lines on a later empty run", async () => {
@@ -2033,6 +2044,7 @@ describe("active-memory plugin", () => {
it("returns partial transcript text on timeout when transcripts are temporary by default", async () => {
__testing.setMinimumTimeoutMsForTests(1);
__testing.setSetupGraceTimeoutMsForTests(0);
__testing.setTimeoutPartialDataGraceMsForTests(100);
api.pluginConfig = {
agents: ["main"],
timeoutMs: 250,
@@ -2299,9 +2311,9 @@ describe("active-memory plugin", () => {
maxBytes: 10 * 1024 * 1024,
});

expect(result).toBeTruthy();
expect(result?.length).toBeLessThanOrEqual(128);
expect(result).toContain("alpha beta gamma");
const partialText = requireNonEmptyString(result, "partial assistant text missing");
expect(partialText.length).toBeLessThanOrEqual(128);
expect(partialText).toContain("alpha beta gamma");
expect(readFileSpy).not.toHaveBeenCalled();
});

@@ -2953,9 +2965,9 @@ describe("active-memory plugin", () => {
.mocked(api.logger.info)
.mock.calls.map((call: unknown[]) => String(call[0]));
const startLine = infoLines.find((line: string) => line.includes(" start timeoutMs="));
expect(startLine).toBeTruthy();
expect(startLine && startLine.length < 500).toBe(true);
expect(startLine).toContain("...");
const line = requireNonEmptyString(startLine, "active memory start log line missing");
expect(line.length).toBeLessThan(500);
expect(line).toContain("...");
});

it("uses a canonical agent session key when only sessionId is available", async () => {

@@ -539,9 +539,10 @@ describe("amazon-bedrock provider plugin", () => {
const discovery = pluginJson.configSchema?.properties?.discovery;
const guardrail = pluginJson.configSchema?.properties?.guardrail;

expect(discovery).toBeDefined();
expect(discovery.type).toBe("object");
expect(discovery.additionalProperties).toBe(false);
expect(discovery).toMatchObject({
type: "object",
additionalProperties: false,
});
expect(discovery.properties.enabled).toEqual({ type: "boolean" });
expect(discovery.properties.region).toEqual({ type: "string" });
expect(discovery.properties.providerFilter).toEqual({
@@ -561,9 +562,10 @@ describe("amazon-bedrock provider plugin", () => {
minimum: 1,
});

expect(guardrail).toBeDefined();
expect(guardrail.type).toBe("object");
expect(guardrail.additionalProperties).toBe(false);
expect(guardrail).toMatchObject({
type: "object",
additionalProperties: false,
});

// Required fields
expect(guardrail.required).toEqual(["guardrailIdentifier", "guardrailVersion"]);

@@ -48,7 +48,7 @@ function createModelRegistry(models: ProviderRuntimeModel[]) {
}

describe("anthropic provider replay hooks", () => {
it("registers the claude-cli backend", async () => {
it("registers the claude-cli backend", () => {
const captured = capturePluginRegistration({ register: anthropicPlugin.register });

expect(captured.cliBackends).toContainEqual(
@@ -383,9 +383,11 @@ describe("anthropic provider replay hooks", () => {
const provider = await registerSingleProviderPlugin(anthropicPlugin);
const cliAuth = provider.auth.find((entry) => entry.id === "cli");

expect(cliAuth).toBeDefined();
if (!cliAuth) {
throw new Error("expected Anthropic CLI auth method");
}

const result = await cliAuth?.run({
const result = await cliAuth.run({
config: {},
} as never);


@@ -88,8 +88,7 @@ describe("anthropic stream wrappers", () => {
it("strips context-1m for Claude CLI or legacy token auth and warns", () => {
const warn = vi.spyOn(__testing.log, "warn").mockImplementation(() => undefined);
const headers = runWrapper("sk-ant-oat01-123");
expect(headers?.["anthropic-beta"]).toBeDefined();
expect(headers?.["anthropic-beta"]).toContain(OAUTH_BETA);
expect(headers?.["anthropic-beta"]).toEqual(expect.stringContaining(OAUTH_BETA));
expect(headers?.["anthropic-beta"]).not.toContain(CONTEXT_1M_BETA);
expect(warn).toHaveBeenCalledOnce();
});
@@ -97,8 +96,7 @@ describe("anthropic stream wrappers", () => {
it("keeps context-1m for API key auth", () => {
const warn = vi.spyOn(__testing.log, "warn").mockImplementation(() => undefined);
const headers = runWrapper("sk-ant-api-123");
expect(headers?.["anthropic-beta"]).toBeDefined();
expect(headers?.["anthropic-beta"]).toContain(CONTEXT_1M_BETA);
expect(headers?.["anthropic-beta"]).toEqual(expect.stringContaining(CONTEXT_1M_BETA));
expect(warn).not.toHaveBeenCalled();
});

@@ -165,80 +163,70 @@ describe("createAnthropicThinkingPrefillWrapper", () => {
});
});

describe("createAnthropicFastModeWrapper", () => {
function runFastModeWrapper(params: {
apiKey?: string;
provider?: string;
api?: string;
baseUrl?: string;
enabled?: boolean;
}): Record<string, unknown> | undefined {
return runPayloadWrapper(params, (base) =>
createAnthropicFastModeWrapper(base, params.enabled ?? true),
);
}
type ServiceTierWrapperParams = {
apiKey?: string;
provider?: string;
api?: string;
enabled?: boolean;
serviceTier?: "auto" | "standard_only";
};

it("does not inject service_tier for OAuth token", () => {
const payload = runFastModeWrapper({ apiKey: "sk-ant-oat01-test-token" });
const serviceTierWrapperCases: Array<{
name: string;
run: (params: ServiceTierWrapperParams) => Record<string, unknown> | undefined;
}> = [
{
name: "fast mode",
run: (params) =>
runPayloadWrapper(params, (base) =>
createAnthropicFastModeWrapper(base, params.enabled ?? true),
),
},
{
name: "explicit service tier",
run: (params) =>
runPayloadWrapper(params, (base) =>
createAnthropicServiceTierWrapper(base, params.serviceTier ?? "auto"),
),
},
];

describe("Anthropic service_tier payload wrappers", () => {
it.each(serviceTierWrapperCases)("$name skips service_tier for OAuth token", ({ run }) => {
const payload = run({ apiKey: "sk-ant-oat01-test-token" });
expect(payload?.service_tier).toBeUndefined();
});

it("injects service_tier for regular API keys", () => {
const payload = runFastModeWrapper({ apiKey: "sk-ant-api03-test-key" });
it.each(serviceTierWrapperCases)("$name injects service_tier for regular API keys", ({ run }) => {
const payload = run({ apiKey: "sk-ant-api03-test-key" });
expect(payload?.service_tier).toBe("auto");
});

it("injects service_tier=standard_only when disabled for API keys", () => {
const payload = runFastModeWrapper({ apiKey: "sk-ant-api03-test-key", enabled: false });
it.each(serviceTierWrapperCases)(
"$name does not inject service_tier for non-anthropic provider",
({ run }) => {
const payload = run({
apiKey: "sk-ant-api03-test-key",
provider: "openai",
api: "openai-completions",
});
expect(payload?.service_tier).toBeUndefined();
},
);

it("fast mode injects service_tier=standard_only when disabled for API keys", () => {
const payload = serviceTierWrapperCases[0].run({
apiKey: "sk-ant-api03-test-key",
enabled: false,
});
expect(payload?.service_tier).toBe("standard_only");
});

it("does not inject service_tier for non-anthropic provider", () => {
const payload = runFastModeWrapper({
apiKey: "sk-ant-api03-test-key",
provider: "openai",
api: "openai-completions",
});
expect(payload?.service_tier).toBeUndefined();
});
});

describe("createAnthropicServiceTierWrapper", () => {
function runServiceTierWrapper(params: {
apiKey?: string;
provider?: string;
api?: string;
serviceTier?: "auto" | "standard_only";
}): Record<string, unknown> | undefined {
return runPayloadWrapper(params, (base) =>
createAnthropicServiceTierWrapper(base, params.serviceTier ?? "auto"),
);
}

it("does not inject service_tier for OAuth token", () => {
const payload = runServiceTierWrapper({ apiKey: "sk-ant-oat01-test-token" });
expect(payload?.service_tier).toBeUndefined();
});

it("injects service_tier for regular API keys", () => {
const payload = runServiceTierWrapper({ apiKey: "sk-ant-api03-test-key" });
expect(payload?.service_tier).toBe("auto");
});

it("injects service_tier=standard_only for regular API keys", () => {
const payload = runServiceTierWrapper({
it("explicit service tier injects service_tier=standard_only for regular API keys", () => {
const payload = serviceTierWrapperCases[1].run({
apiKey: "sk-ant-api03-test-key",
serviceTier: "standard_only",
});
expect(payload?.service_tier).toBe("standard_only");
});

it("does not inject service_tier for non-anthropic provider", () => {
const payload = runServiceTierWrapper({
apiKey: "sk-ant-api03-test-key",
provider: "openai",
api: "openai-completions",
});
expect(payload?.service_tier).toBeUndefined();
});
});

@@ -775,8 +775,10 @@ describe("gateway bonjour advertiser", () => {
const disableLog = logger.warn.mock.calls.find(
(call) => typeof call[0] === "string" && call[0].includes("disabling advertiser after"),
);
expect(disableLog).toBeDefined();
expect(String(disableLog?.[0])).toMatch(/restarts within \d+ minutes/);
if (!disableLog) {
throw new Error("expected advertiser disable warning after repeated restarts");
}
expect(String(disableLog[0])).toMatch(/restarts within \d+ minutes/);

const advertiseCallsAtDisable = advertise.mock.calls.length;
const createServiceCallsAtDisable = createService.mock.calls.length;

@@ -211,7 +211,7 @@ describe("cdp.helpers", () => {
});

describe("fetchBrowserJson loopback auth (bridge auth registry)", () => {
it("falls back to per-port bridge auth when config auth is not available", async () => {
it("falls back to per-port bridge auth when config auth is not available", () => {
const port = 18765;
const getBridgeAuthForPort = vi.fn((candidate: number) =>
candidate === port ? { token: "registry-token" } : undefined,

@@ -455,9 +455,11 @@ describe("openCdpWebSocket option handling", () => {
it("clamps a non-finite handshakeTimeoutMs to the default", () => {
// Exercises the Number.isFinite false side of the handshake-timeout
// ternary in openCdpWebSocket.
const ws = openCdpWebSocket("ws://127.0.0.1:1/devtools/browser/X", {
const url = "ws://127.0.0.1:1/devtools/browser/X";
const ws = openCdpWebSocket(url, {
handshakeTimeoutMs: Number.NaN,
});
expect(ws.url).toBe(url);
// Ensure we don't leak the socket even though we never await it.
ws.once("error", () => {});
ws.close();
@@ -466,9 +468,11 @@ describe("openCdpWebSocket option handling", () => {
it("honours an explicit, finite handshakeTimeoutMs", () => {
// Exercises the truthy side of the handshake-timeout ternary: both
// typeof === "number" AND Number.isFinite must be true.
const ws = openCdpWebSocket("ws://127.0.0.1:1/devtools/browser/X", {
const url = "ws://127.0.0.1:1/devtools/browser/X";
const ws = openCdpWebSocket(url, {
handshakeTimeoutMs: 500,
});
expect(ws.url).toBe(url);
ws.once("error", () => {});
ws.close();
});
@@ -476,16 +480,20 @@ describe("openCdpWebSocket option handling", () => {
it("omits the direct-loopback agent for non-loopback targets", () => {
// Exercises the falsy side of `agent ? { agent } : {}` — the loopback
// agent helper returns undefined for non-loopback hosts.
const ws = openCdpWebSocket("ws://93.184.216.34:9222/devtools/browser/X");
const url = "ws://93.184.216.34:9222/devtools/browser/X";
const ws = openCdpWebSocket(url);
expect(ws.url).toBe(url);
ws.once("error", () => {});
ws.close();
});

it("injects custom headers when opts.headers is a non-empty object", () => {
// Exercises the truthy side of `Object.keys(headers).length ? ... : {}`.
const ws = openCdpWebSocket("ws://127.0.0.1:1/devtools/browser/X", {
const url = "ws://127.0.0.1:1/devtools/browser/X";
const ws = openCdpWebSocket(url, {
headers: { "X-Custom": "abc" },
});
expect(ws.url).toBe(url);
ws.once("error", () => {});
ws.close();
});

@@ -94,18 +94,25 @@ beforeEach(() => {
mockState.naturalViewport = { w: 1920, h: 1080, dpr: 1 };
});

function requireSentMessage(method: string) {
const message = sentMessages.find((m) => m.method === method);
if (!message) {
throw new Error(`expected ${method} CDP message`);
}
return message;
}

describe("CDP screenshot params", () => {
it("viewport screenshot omits fromSurface and captureBeyondViewport", async () => {
await captureScreenshot({ wsUrl: "ws://localhost:9222/devtools/page/X", format: "png" });

const call = sentMessages.find((m) => m.method === "Page.captureScreenshot");
expect(call).toBeDefined();
expect(call!.params).toMatchObject({
const call = requireSentMessage("Page.captureScreenshot");
expect(call.params).toMatchObject({
format: "png",
});
expect(call!.params).not.toHaveProperty("fromSurface");
expect(call!.params).not.toHaveProperty("captureBeyondViewport");
expect(call!.params).not.toHaveProperty("clip");
expect(call.params).not.toHaveProperty("fromSurface");
expect(call.params).not.toHaveProperty("captureBeyondViewport");
expect(call.params).not.toHaveProperty("clip");

const emulationCalls = sentMessages.filter(
(m) => m.method === "Emulation.setDeviceMetricsOverride",
@@ -152,10 +159,9 @@ describe("CDP screenshot params", () => {
});

// Clear is called first in the finally block
const clearCall = sentMessages.find((m) => m.method === "Emulation.clearDeviceMetricsOverride");
expect(clearCall).toBeDefined();
const captureCall = sentMessages.find((m) => m.method === "Page.captureScreenshot");
expect(captureCall?.params).toMatchObject({ captureBeyondViewport: true });
requireSentMessage("Emulation.clearDeviceMetricsOverride");
const captureCall = requireSentMessage("Page.captureScreenshot");
expect(captureCall.params).toMatchObject({ captureBeyondViewport: true });

// Viewport drifted after clear → re-apply saved dimensions
expect(secondSetCall.params).toMatchObject({
@@ -183,17 +189,15 @@ describe("CDP screenshot params", () => {
// Only the expand call — no re-apply after clear
expect(setCalls).toHaveLength(1);

const clearCall = sentMessages.find((m) => m.method === "Emulation.clearDeviceMetricsOverride");
expect(clearCall).toBeDefined();
requireSentMessage("Emulation.clearDeviceMetricsOverride");
});

it("fullPage viewport dimensions never shrink below current innerWidth/Height", async () => {
await captureScreenshot({ wsUrl: "ws://localhost:9222/devtools/page/X", fullPage: true });

const expandCall = sentMessages.find((m) => m.method === "Emulation.setDeviceMetricsOverride");
expect(expandCall).toBeDefined();
expect(Number(expandCall!.params!.width)).toBeGreaterThanOrEqual(800);
expect(Number(expandCall!.params!.height)).toBeGreaterThanOrEqual(600);
const expandCall = requireSentMessage("Emulation.setDeviceMetricsOverride");
expect(Number(expandCall.params?.width)).toBeGreaterThanOrEqual(800);
expect(Number(expandCall.params?.height)).toBeGreaterThanOrEqual(600);
});
});


@@ -68,7 +68,7 @@ describe("browser default executable detection", () => {
vi.mocked(os.homedir).mockReturnValue("/Users/test");
});

it("prefers default Chromium browser on macOS", async () => {
it("prefers default Chromium browser on macOS", () => {
mockMacDefaultBrowser("com.google.Chrome", "/Applications/Google Chrome.app");
mockChromeExecutableExists();

@@ -81,7 +81,7 @@ describe("browser default executable detection", () => {
expect(exe?.kind).toBe("chrome");
});

it("detects Edge via LaunchServices bundle ID (com.microsoft.edgemac)", async () => {
it("detects Edge via LaunchServices bundle ID (com.microsoft.edgemac)", () => {
const edgeExecutablePath = "/Applications/Microsoft Edge.app/Contents/MacOS/Microsoft Edge";
// macOS LaunchServices registers Edge as "com.microsoft.edgemac", which
// differs from the CFBundleIdentifier "com.microsoft.Edge" in the app's
@@ -127,7 +127,7 @@ describe("browser default executable detection", () => {
expect(exe?.kind).toBe("edge");
});

it("falls back to Chrome when Edge LaunchServices lookup has no app path", async () => {
it("falls back to Chrome when Edge LaunchServices lookup has no app path", () => {
vi.mocked(execFileSync).mockImplementation((cmd, args) => {
const argsStr = Array.isArray(args) ? args.join(" ") : "";
if (cmd === "/usr/bin/plutil" && argsStr.includes("LSHandlers")) {
@@ -150,7 +150,7 @@ describe("browser default executable detection", () => {
expect(exe?.kind).toBe("chrome");
});

it("falls back when default browser is non-Chromium on macOS", async () => {
it("falls back when default browser is non-Chromium on macOS", () => {
mockMacDefaultBrowser("com.apple.Safari");
mockChromeExecutableExists();


@@ -340,6 +340,7 @@ describe("chrome.ts internal", () => {
extraArgs: [],
} as unknown as ResolvedBrowserConfig;
const running = await launchOpenClawChrome(resolved, profile);
expect(running.pid).toBe(4242);
running.proc.kill?.("SIGTERM");
},
});
@@ -925,6 +926,7 @@ describe("chrome.ts internal", () => {
extraArgs: [],
} as unknown as ResolvedBrowserConfig;
const running = await launchOpenClawChrome(resolved, profile);
expect(running.pid).toBe(4242);
running.proc.kill?.("SIGTERM");
},
});
@@ -969,6 +971,7 @@ describe("chrome.ts internal", () => {
extraArgs: [],
} as unknown as ResolvedBrowserConfig;
const running = await launchOpenClawChrome(resolved, profile);
expect(running.pid).toBe(4242);
running.proc.kill?.("SIGTERM");
},
});
@@ -1106,6 +1109,8 @@ describe("chrome.ts internal", () => {
extraArgs: [],
} as unknown as ResolvedBrowserConfig;
const running = await launchOpenClawChrome(resolved, profile);
expect(spawnCount).toBe(2);
expect(running.proc).toBe(runtimeProc);
running.proc.kill?.("SIGTERM");
},
});
@@ -1160,6 +1165,8 @@ describe("chrome.ts internal", () => {
extraArgs: [],
} as unknown as ResolvedBrowserConfig;
const running = await launchOpenClawChrome(resolved, profile);
expect(callCount).toBe(2);
expect(running.proc).toBe(runtimeProc);
running.proc.kill?.("SIGTERM");
},
});
@@ -1213,6 +1220,7 @@ describe("chrome.ts internal", () => {
extraArgs: [],
} as unknown as ResolvedBrowserConfig;
const running = await launchOpenClawChrome(resolved, profile);
expect(running.pid).toBe(4242);
running.proc.kill?.("SIGTERM");
},
});

@@ -17,6 +17,14 @@ import {
} from "./client.js";

describe("browser client", () => {
function requireSnapshotCall(calls: string[]): string {
const call = calls.find((url) => url.includes("/snapshot?"));
if (!call) {
throw new Error("expected browser snapshot request");
}
return call;
}

function stubSnapshotFetch(calls: string[]) {
vi.stubGlobal(
"fetch",
@@ -85,9 +93,7 @@ describe("browser client", () => {
}),
).resolves.toMatchObject({ ok: true, format: "ai" });

const snapshotCall = calls.find((url) => url.includes("/snapshot?"));
expect(snapshotCall).toBeTruthy();
const parsed = new URL(snapshotCall as string);
const parsed = new URL(requireSnapshotCall(calls));
expect(parsed.searchParams.get("labels")).toBe("1");
expect(parsed.searchParams.get("mode")).toBe("efficient");
});
@@ -101,9 +107,7 @@ describe("browser client", () => {
refs: "aria",
});

const snapshotCall = calls.find((url) => url.includes("/snapshot?"));
expect(snapshotCall).toBeTruthy();
const parsed = new URL(snapshotCall as string);
const parsed = new URL(requireSnapshotCall(calls));
expect(parsed.searchParams.get("refs")).toBe("aria");
});

@@ -115,9 +119,7 @@ describe("browser client", () => {
profile: "chrome",
});

const snapshotCall = calls.find((url) => url.includes("/snapshot?"));
expect(snapshotCall).toBeTruthy();
const parsed = new URL(snapshotCall as string);
const parsed = new URL(requireSnapshotCall(calls));
expect(parsed.searchParams.get("format")).toBeNull();
expect(parsed.searchParams.get("profile")).toBe("chrome");
});

@@ -13,9 +13,10 @@ describe("ensureBrowserControlAuth", () => {
expect(result.auth.password).toBeUndefined();
}

describe("trusted-proxy mode", () => {
it("should skip auto-generation in test mode", async () => {
const cfg: OpenClawConfig = {
it.each([
{
name: "trusted-proxy",
cfg: {
gateway: {
auth: {
mode: "trusted-proxy",
@@ -25,35 +26,40 @@ describe("ensureBrowserControlAuth", () => {
},
trustedProxies: ["192.168.1.1"],
},
};
await expectNoAutoGeneratedAuth(cfg);
});
});

describe("password mode", () => {
it("should skip auto-generation in test mode", async () => {
const cfg: OpenClawConfig = {
} satisfies OpenClawConfig,
},
{
name: "password",
cfg: {
gateway: {
auth: {
mode: "password",
},
},
};
await expectNoAutoGeneratedAuth(cfg);
});
});

describe("none mode", () => {
it("should skip auto-generation in test mode", async () => {
const cfg: OpenClawConfig = {
} satisfies OpenClawConfig,
},
{
name: "none",
cfg: {
gateway: {
auth: {
mode: "none",
},
},
};
await expectNoAutoGeneratedAuth(cfg);
});
} satisfies OpenClawConfig,
},
{
name: "token",
cfg: {
gateway: {
auth: {
mode: "token",
},
},
} satisfies OpenClawConfig,
},
])("skips auto-generation in test mode for $name mode", async ({ cfg }) => {
await expectNoAutoGeneratedAuth(cfg);
});

describe("token mode", () => {
@@ -75,23 +81,5 @@ describe("ensureBrowserControlAuth", () => {
expect(result.generatedToken).toBeUndefined();
expect(result.auth.token).toBe("existing-token-123");
});

it("should skip auto-generation in test environment", async () => {
const cfg: OpenClawConfig = {
gateway: {
auth: {
mode: "token",
},
},
};

const result = await ensureBrowserControlAuth({
cfg,
env: { NODE_ENV: "test" },
});

expect(result.generatedToken).toBeUndefined();
expect(result.auth.token).toBeUndefined();
});
});
});

@@ -18,6 +18,7 @@ describeLive("browser (live): remote CDP tab persistence", () => {
await pw.closePlaywrightBrowserConnection().catch(() => {});

const created = await pw.createPageViaPlaywright({ cdpUrl: CDP_URL, url: "about:blank" });
expect(created.targetId).toEqual(expect.any(String));
try {
await waitFor(
async () => {

@@ -331,8 +331,10 @@ describe("pw-tools-core", () => {
});

await Promise.resolve();
expect(responseHandler).toBeDefined();
responseHandler?.(resp);
if (!responseHandler) {
throw new Error("expected Playwright response handler");
}
responseHandler(resp);

const res = await p;
expect(res.url).toBe("https://example.com/api/data");

@@ -67,6 +67,13 @@ const { resolveBrowserConfig, resolveProfile } = await import("./config.js");
|
||||
const { refreshResolvedBrowserConfigFromDisk, resolveBrowserProfileWithHotReload } =
|
||||
await import("./resolved-config-refresh.js");
|
||||
|
||||
function requireValue<T>(value: T | null | undefined, message: string): T {
|
||||
if (value == null) {
|
||||
throw new Error(message);
|
||||
}
|
||||
return value;
|
||||
}
|
||||
|
||||
describe("server-context hot-reload profiles", () => {
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
@@ -76,7 +83,7 @@ describe("server-context hot-reload profiles", () => {
|
||||
mockState.cachedConfig = null; // Clear simulated cache
|
||||
});
|
||||
|
||||
it("forProfile hot-reloads newly added profiles from config", async () => {
|
||||
it("forProfile hot-reloads newly added profiles from config", () => {
|
||||
// Start with only openclaw profile
|
||||
// 1. Prime the cache by calling getRuntimeConfig() first
|
||||
const cfg = getRuntimeConfig();
|
||||
@@ -117,7 +124,7 @@ describe("server-context hot-reload profiles", () => {
|
||||
expect(profile?.cdpUrl).toBe("http://127.0.0.1:9222");
|
||||
|
||||
// 5. Verify the new profile was merged into the cached state
|
||||
expect(state.resolved.profiles.desktop).toBeDefined();
|
||||
expect(state.resolved.profiles).toHaveProperty("desktop");
|
||||
|
||||
// 6. Verify GLOBAL cache was NOT cleared - subsequent simple getRuntimeConfig() still sees STALE value
|
||||
// This confirms the fix: we read fresh config for the specific profile lookup without flushing the global cache
|
||||
@@ -125,7 +132,7 @@ describe("server-context hot-reload profiles", () => {
|
||||
expect(stillStaleCfg.browser?.profiles?.desktop).toBeUndefined();
|
||||
});
|
||||
|
||||
it("forProfile still throws for profiles that don't exist in fresh config", async () => {
|
||||
it("forProfile still throws for profiles that don't exist in fresh config", () => {
|
||||
const cfg = getRuntimeConfig();
|
||||
const resolved = resolveBrowserConfig(cfg.browser, cfg);
|
||||
const state = {
|
||||
@@ -145,7 +152,7 @@ describe("server-context hot-reload profiles", () => {
|
||||
).toBeNull();
|
||||
});
|
||||
|
||||
it("forProfile refreshes existing profile config after getRuntimeConfig cache updates", async () => {
|
||||
it("forProfile refreshes existing profile config after getRuntimeConfig cache updates", () => {
const cfg = getRuntimeConfig();
const resolved = resolveBrowserConfig(cfg.browser, cfg);
const state = {
@@ -167,7 +174,7 @@ describe("server-context hot-reload profiles", () => {
expect(state.resolved.profiles.openclaw?.cdpPort).toBe(19999);
});

it("listProfiles refreshes config before enumerating profiles", async () => {
it("listProfiles refreshes config before enumerating profiles", () => {
const cfg = getRuntimeConfig();
const resolved = resolveBrowserConfig(cfg.browser, cfg);
const state = {
@@ -188,11 +195,13 @@ describe("server-context hot-reload profiles", () => {
expect(Object.keys(state.resolved.profiles)).toContain("desktop");
});

it("marks existing runtime state for reconcile when profile invariants change", async () => {
it("marks existing runtime state for reconcile when profile invariants change", () => {
const cfg = getRuntimeConfig();
const resolved = resolveBrowserConfig(cfg.browser, cfg);
const openclawProfile = resolveProfile(resolved, "openclaw");
expect(openclawProfile).toBeTruthy();
const openclawProfile = requireValue(
resolveProfile(resolved, "openclaw"),
"openclaw profile missing",
);
const state: BrowserServerState = {
server: null,
port: 18791,
@@ -201,7 +210,7 @@ describe("server-context hot-reload profiles", () => {
[
"openclaw",
{
profile: openclawProfile!,
profile: openclawProfile,
running: { pid: 123 } as never,
lastTargetId: "tab-1",
reconcile: null,
@@ -219,19 +228,20 @@ describe("server-context hot-reload profiles", () => {
mode: "cached",
});

const runtime = state.profiles.get("openclaw");
expect(runtime).toBeTruthy();
expect(runtime?.profile.cdpPort).toBe(19999);
expect(runtime?.lastTargetId).toBeNull();
expect(runtime?.reconcile?.reason).toContain("cdpPort");
const runtime = requireValue(state.profiles.get("openclaw"), "openclaw runtime missing");
expect(runtime.profile.cdpPort).toBe(19999);
expect(runtime.lastTargetId).toBeNull();
expect(runtime.reconcile?.reason).toContain("cdpPort");
});

it("marks local managed runtime state for reconcile when profile headless changes", async () => {
it("marks local managed runtime state for reconcile when profile headless changes", () => {
const cfg = getRuntimeConfig();
const resolved = resolveBrowserConfig(cfg.browser, cfg);
const openclawProfile = resolveProfile(resolved, "openclaw");
expect(openclawProfile).toBeTruthy();
expect(openclawProfile?.headless).toBe(true);
const openclawProfile = requireValue(
resolveProfile(resolved, "openclaw"),
"openclaw profile missing",
);
expect(openclawProfile.headless).toBe(true);
const state: BrowserServerState = {
server: null,
port: 18791,
@@ -240,7 +250,7 @@ describe("server-context hot-reload profiles", () => {
[
"openclaw",
{
profile: openclawProfile!,
profile: openclawProfile,
running: { pid: 123 } as never,
lastTargetId: "tab-1",
reconcile: null,
@@ -262,14 +272,13 @@ describe("server-context hot-reload profiles", () => {
mode: "cached",
});

const runtime = state.profiles.get("openclaw");
expect(runtime).toBeTruthy();
expect(runtime?.profile.headless).toBe(false);
expect(runtime?.lastTargetId).toBeNull();
expect(runtime?.reconcile?.reason).toContain("headless");
const runtime = requireValue(state.profiles.get("openclaw"), "openclaw runtime missing");
expect(runtime.profile.headless).toBe(false);
expect(runtime.lastTargetId).toBeNull();
expect(runtime.reconcile?.reason).toContain("headless");
});

it("marks local managed runtime state for reconcile when profile executablePath changes", async () => {
it("marks local managed runtime state for reconcile when profile executablePath changes", () => {
mockState.cfgProfiles.openclaw = {
cdpPort: 18800,
color: "#FF4500",
@@ -278,9 +287,11 @@ describe("server-context hot-reload profiles", () => {
mockState.cachedConfig = null;
const cfg = getRuntimeConfig();
const resolved = resolveBrowserConfig(cfg.browser, cfg);
const openclawProfile = resolveProfile(resolved, "openclaw");
expect(openclawProfile).toBeTruthy();
expect(openclawProfile?.executablePath).toBe("/usr/bin/chrome-old");
const openclawProfile = requireValue(
resolveProfile(resolved, "openclaw"),
"openclaw profile missing",
);
expect(openclawProfile.executablePath).toBe("/usr/bin/chrome-old");
const state: BrowserServerState = {
server: null,
port: 18791,
@@ -289,7 +300,7 @@ describe("server-context hot-reload profiles", () => {
[
"openclaw",
{
profile: openclawProfile!,
profile: openclawProfile,
running: { pid: 123 } as never,
lastTargetId: "tab-1",
reconcile: null,
@@ -311,14 +322,13 @@ describe("server-context hot-reload profiles", () => {
mode: "cached",
});

const runtime = state.profiles.get("openclaw");
expect(runtime).toBeTruthy();
expect(runtime?.profile.executablePath).toBe("/usr/bin/chrome-new");
expect(runtime?.lastTargetId).toBeNull();
expect(runtime?.reconcile?.reason).toContain("executablePath");
const runtime = requireValue(state.profiles.get("openclaw"), "openclaw runtime missing");
expect(runtime.profile.executablePath).toBe("/usr/bin/chrome-new");
expect(runtime.lastTargetId).toBeNull();
expect(runtime.reconcile?.reason).toContain("executablePath");
});

it("does not reconcile existing-session runtime when only headless changes", async () => {
it("does not reconcile existing-session runtime when only headless changes", () => {
mockState.cfgProfiles.remote = {
cdpUrl: "http://127.0.0.1:9222",
color: "#0066CC",
@@ -328,11 +338,13 @@ describe("server-context hot-reload profiles", () => {

const cfg = getRuntimeConfig();
const resolved = resolveBrowserConfig(cfg.browser, cfg);
const remoteProfile = resolveProfile(resolved, "remote");
expect(remoteProfile).toBeTruthy();
expect(remoteProfile?.driver).toBe("existing-session");
expect(remoteProfile?.attachOnly).toBe(true);
expect(remoteProfile?.headless).toBe(true);
const remoteProfile = requireValue(
resolveProfile(resolved, "remote"),
"remote profile missing",
);
expect(remoteProfile.driver).toBe("existing-session");
expect(remoteProfile.attachOnly).toBe(true);
expect(remoteProfile.headless).toBe(true);

const state: BrowserServerState = {
server: null,
@@ -342,7 +354,7 @@ describe("server-context hot-reload profiles", () => {
[
"remote",
{
profile: remoteProfile!,
profile: remoteProfile,
running: { pid: 456 } as never,
lastTargetId: "tab-remote",
reconcile: null,
@@ -365,15 +377,14 @@ describe("server-context hot-reload profiles", () => {
mode: "cached",
});

const runtime = state.profiles.get("remote");
expect(runtime).toBeTruthy();
expect(runtime?.profile.driver).toBe("existing-session");
expect(runtime?.profile.headless).toBe(false);
expect(runtime?.lastTargetId).toBe("tab-remote");
expect(runtime?.reconcile).toBeNull();
const runtime = requireValue(state.profiles.get("remote"), "remote runtime missing");
expect(runtime.profile.driver).toBe("existing-session");
expect(runtime.profile.headless).toBe(false);
expect(runtime.lastTargetId).toBe("tab-remote");
expect(runtime.reconcile).toBeNull();
});

it("does not reconcile remote cdp runtime when only headless changes", async () => {
it("does not reconcile remote cdp runtime when only headless changes", () => {
mockState.cfgProfiles.remote = {
cdpUrl: "http://10.0.0.42:9222",
color: "#0066CC",
@@ -382,12 +393,14 @@ describe("server-context hot-reload profiles", () => {

const cfg = getRuntimeConfig();
const resolved = resolveBrowserConfig(cfg.browser, cfg);
const remoteProfile = resolveProfile(resolved, "remote");
expect(remoteProfile).toBeTruthy();
expect(remoteProfile?.driver).toBe("openclaw");
expect(remoteProfile?.attachOnly).toBe(false);
expect(remoteProfile?.cdpIsLoopback).toBe(false);
expect(remoteProfile?.headless).toBe(true);
const remoteProfile = requireValue(
resolveProfile(resolved, "remote"),
"remote profile missing",
);
expect(remoteProfile.driver).toBe("openclaw");
expect(remoteProfile.attachOnly).toBe(false);
expect(remoteProfile.cdpIsLoopback).toBe(false);
expect(remoteProfile.headless).toBe(true);

const state: BrowserServerState = {
server: null,
@@ -397,7 +410,7 @@ describe("server-context hot-reload profiles", () => {
[
"remote",
{
profile: remoteProfile!,
profile: remoteProfile,
running: { pid: 789 } as never,
lastTargetId: "tab-remote-cdp",
reconcile: null,
@@ -419,12 +432,11 @@ describe("server-context hot-reload profiles", () => {
mode: "cached",
});

const runtime = state.profiles.get("remote");
expect(runtime).toBeTruthy();
expect(runtime?.profile.driver).toBe("openclaw");
expect(runtime?.profile.cdpIsLoopback).toBe(false);
expect(runtime?.profile.headless).toBe(false);
expect(runtime?.lastTargetId).toBe("tab-remote-cdp");
expect(runtime?.reconcile).toBeNull();
const runtime = requireValue(state.profiles.get("remote"), "remote runtime missing");
expect(runtime.profile.driver).toBe("openclaw");
expect(runtime.profile.cdpIsLoopback).toBe(false);
expect(runtime.profile.headless).toBe(false);
expect(runtime.lastTargetId).toBe("tab-remote-cdp");
expect(runtime.reconcile).toBeNull();
});
});

@@ -17,9 +17,11 @@ describe("browser manage start timeout option", () => {
await program.parseAsync(["browser", "--timeout", "60000", "start"], { from: "user" });

const startCall = findBrowserManageCall("/start");
expect(startCall).toBeDefined();
expect(startCall?.[0]).toMatchObject({ timeout: "60000" });
expect(startCall?.[2]).toBeUndefined();
if (!startCall) {
throw new Error("expected browser /start call");
}
expect(startCall[0]).toMatchObject({ timeout: "60000" });
expect(startCall[2]).toBeUndefined();
});

it("passes headless=true for browser start --headless", async () => {

@@ -46,7 +46,6 @@ describe("browser state option collisions", () => {

const getLastRequest = () => {
const call = mocks.callBrowserRequest.mock.calls.at(-1);
expect(call).toBeDefined();
if (!call) {
throw new Error("expected browser request call");
}
@@ -101,9 +100,7 @@ describe("browser state option collisions", () => {
],
{ from: "user" },
);
const call = mocks.callBrowserRequest.mock.calls.at(-1);
expect(call).toBeDefined();
const request = call![1] as { body?: { cookie?: { url?: string } } };
const request = getLastRequest() as { body?: { cookie?: { url?: string } } };
expect(request.body?.cookie?.url).toBe("https://example.com");
});

@@ -113,9 +110,7 @@ describe("browser state option collisions", () => {
["browser", "--url", "https://inherited.example.com", "cookies", "set", "session", "abc"],
{ from: "user" },
);
const call = mocks.callBrowserRequest.mock.calls.at(-1);
expect(call).toBeDefined();
const request = call![1] as { body?: { cookie?: { url?: string } } };
const request = getLastRequest() as { body?: { cookie?: { url?: string } } };
expect(request.body?.cookie?.url).toBe("https://inherited.example.com");
});


@@ -72,8 +72,10 @@ describe("registerBrowserCli lazy browser subcommands", () => {
expect(browser?.commands.map((command) => command.name())).toContain("status");
expect(browser?.commands.map((command) => command.name())).toContain("snapshot");
const doctor = browser?.commands.find((command) => command.name() === "doctor");
expect(doctor).toBeDefined();
expect(doctor?.options.map((option) => option.long)).toContain("--deep");
if (!doctor) {
throw new Error("expected browser doctor command placeholder");
}
expect(doctor.options.map((option) => option.long)).toContain("--deep");
expect(manageMocks.registerBrowserManageCommands).not.toHaveBeenCalled();
expect(inspectMocks.registerBrowserInspectCommands).not.toHaveBeenCalled();
expect(actionInputMocks.registerBrowserActionInputCommands).not.toHaveBeenCalled();

@@ -70,8 +70,10 @@ describe("canvas a2ui copy", () => {

await copyA2uiAssets({ srcDir, outDir });

await expect(fs.stat(path.join(outDir, "index.html"))).resolves.toBeTruthy();
await expect(fs.stat(path.join(outDir, "a2ui.bundle.js"))).resolves.toBeTruthy();
await expect(fs.readFile(path.join(outDir, "index.html"), "utf8")).resolves.toBe(
"<html></html>",
);
await expect(fs.readFile(path.join(outDir, "a2ui.bundle.js"), "utf8")).resolves.toBe(
"console.log(1);",
);
});
});
});

@@ -333,7 +333,9 @@ describe("canvas host", () => {

try {
const watcher = watcherState.watchers[watcherStart];
expect(watcher).toBeTruthy();
if (!watcher) {
throw new Error("expected Canvas host watcher");
}
const upgraded = handler.handleUpgrade(
{ url: CANVAS_WS_PATH } as IncomingMessage,
{} as Duplex,
@@ -342,12 +344,14 @@ describe("canvas host", () => {
expect(upgraded).toBe(true);
expect(TrackingWebSocketServerClass.latestInstance?.connectionCount).toBe(1);
const ws = TrackingWebSocketServerClass.latestSocket;
expect(ws).toBeTruthy();
if (!ws) {
throw new Error("expected Canvas host websocket");
}

await fs.writeFile(index, "<html><body>v2</body></html>", "utf8");
watcher.__emit("all", "change", index);
await reloadSent;
expect(ws?.sent[0]).toBe("reload");
expect(ws.sent[0]).toBe("reload");
} finally {
await handler.close();
}

@@ -362,7 +362,7 @@ describe("CodexAppServerClient", () => {
);
});

it("does not write to stdin after the child process exits", async () => {
it("does not write to stdin after the child process exits", () => {
const harness = createClientHarness();
clients.push(harness.client);


@@ -12,9 +12,15 @@ import {
resolveCodexPluginsPolicy,
} from "./config.js";

type RuntimeOptionsParams = NonNullable<Parameters<typeof resolveCodexAppServerRuntimeOptions>[0]>;

function resolveRuntimeForTest(params: RuntimeOptionsParams = {}) {
return resolveCodexAppServerRuntimeOptions({ env: {}, requirementsToml: null, ...params });
}

describe("Codex app-server config", () => {
it("parses typed plugin config before falling back to environment knobs", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {
appServer: {
mode: "guardian",
@@ -51,7 +57,7 @@ describe("Codex app-server config", () => {
});

it("ignores app-server environment clearing for websocket transports", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {
appServer: {
transport: "websocket",
@@ -66,7 +72,7 @@ describe("Codex app-server config", () => {
});

it("normalizes app-server environment variables to clear", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {
appServer: {
clearEnv: [" OPENAI_API_KEY ", "", " "],
@@ -83,7 +89,7 @@ describe("Codex app-server config", () => {
});

it("normalizes legacy service tiers without discarding the rest of the config", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {
appServer: {
mode: "guardian",
@@ -130,7 +136,7 @@ describe("Codex app-server config", () => {

it("requires a websocket url when websocket transport is configured", () => {
expect(() =>
resolveCodexAppServerRuntimeOptions({
resolveRuntimeForTest({
pluginConfig: { appServer: { transport: "websocket" } },
env: {},
}),
@@ -138,9 +144,8 @@ describe("Codex app-server config", () => {
});

it("defaults native Codex approvals to unchained local execution", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {},
env: {},
});

expect(runtime).toEqual(
@@ -156,6 +161,298 @@ describe("Codex app-server config", () => {
);
});

it("defaults native Codex approvals to guardian when requirements disallow full access", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: 'allowed_sandbox_modes = ["read-only", "workspace-write"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("uses read-only sandbox for guardian defaults when requirements only allow read-only", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: 'allowed_sandbox_modes = ["read-only"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "read-only",
approvalsReviewer: "auto_review",
}),
);
});

it("defaults native Codex approvals to guardian when requirements disallow never approval", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: 'allowed_approval_policies = ["on-request"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("selects an allowed guardian approval policy when on-request is unavailable", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: 'allowed_approval_policies = ["on-failure"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-failure",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("keeps native Codex approvals unchained when requirements allow never approval", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: 'allowed_approval_policies = ["never"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "never",
sandbox: "danger-full-access",
approvalsReviewer: "user",
}),
);
});

it("defaults native Codex approvals to guardian when requirements disallow user reviewer", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: 'allowed_approvals_reviewers = ["auto_review"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("selects an allowed reviewer when sandbox requirements force guardian defaults", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml:
'allowed_sandbox_modes = ["read-only", "workspace-write"]\nallowed_approvals_reviewers = ["user"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "user",
}),
);
});

it("ignores quoted sandbox modes inside requirements comments", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: `allowed_sandbox_modes = [
"read-only",
# "danger-full-access",
"workspace-write",
]
`,
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("applies the first matching remote sandbox requirements before resolving local stdio defaults", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
hostName: "BUILD-01.EXAMPLE.COM.",
requirementsToml: `[[remote_sandbox_config]]
hostname_patterns = ["build-*.example.com"]
allowed_sandbox_modes = ["read-only", "workspace-write"]

[[remote_sandbox_config]]
hostname_patterns = ["build-01.example.com"]
allowed_sandbox_modes = ["read-only", "danger-full-access"]
`,
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("ignores non-matching remote-only sandbox requirements when resolving local stdio defaults", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
hostName: "laptop.example.com",
requirementsToml: `[[remote_sandbox_config]]
hostname_patterns = ["build-*.example.com"]
allowed_sandbox_modes = ["read-only", "workspace-write"]
`,
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "never",
sandbox: "danger-full-access",
approvalsReviewer: "user",
}),
);
});

it("reads local requirements policy from the configured requirements path", () => {
const readPaths: string[] = [];
const runtime = resolveCodexAppServerRuntimeOptions({
pluginConfig: {},
env: {},
requirementsPath: "/custom/codex/requirements.toml",
readRequirementsFile: (path) => {
readPaths.push(path);
return 'allowed_sandbox_modes = ["read-only", "workspace-write"]\n';
},
});

expect(readPaths).toEqual(["/custom/codex/requirements.toml"]);
expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("reads local requirements policy from the Codex Windows requirements path", () => {
const readPaths: string[] = [];
const runtime = resolveCodexAppServerRuntimeOptions({
pluginConfig: {},
env: { ProgramData: "D:\\ManagedData" },
platform: "win32",
readRequirementsFile: (path) => {
readPaths.push(path);
return 'allowed_sandbox_modes = ["read-only", "workspace-write"]\n';
},
});

expect(readPaths).toEqual(["D:\\ManagedData\\OpenAI\\Codex\\requirements.toml"]);
expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "on-request",
sandbox: "workspace-write",
approvalsReviewer: "auto_review",
}),
);
});

it("keeps native Codex approvals unchained when requirements allow full access", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml:
'allowed_sandbox_modes = ["ReadOnly", "WorkspaceWrite", "DangerFullAccess"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "never",
sandbox: "danger-full-access",
approvalsReviewer: "user",
}),
);
});

it("keeps native Codex approvals unchained when requirements are malformed", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {},
requirementsToml: "allowed_sandbox_modes = [read-only]\n",
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "never",
sandbox: "danger-full-access",
approvalsReviewer: "user",
}),
);
});

it("does not apply local requirements policy to websocket app-server transports", () => {
const runtime = resolveRuntimeForTest({
pluginConfig: {
appServer: {
transport: "websocket",
url: "ws://127.0.0.1:39175",
},
},
requirementsToml: 'allowed_sandbox_modes = ["read-only", "workspace-write"]\n',
});

expect(runtime).toEqual(
expect.objectContaining({
approvalPolicy: "never",
sandbox: "danger-full-access",
approvalsReviewer: "user",
}),
);
});

it("keeps explicit yolo mode when requirements disallow full access", () => {
const requirementsToml = 'allowed_sandbox_modes = ["read-only", "workspace-write"]\n';
expect(
resolveRuntimeForTest({
pluginConfig: { appServer: { mode: "yolo" } },
requirementsToml,
}),
).toEqual(
expect.objectContaining({
approvalPolicy: "never",
sandbox: "danger-full-access",
approvalsReviewer: "user",
}),
);
expect(
resolveRuntimeForTest({
pluginConfig: {},
env: { OPENCLAW_CODEX_APP_SERVER_MODE: "yolo" },
requirementsToml,
}),
).toEqual(
expect.objectContaining({
approvalPolicy: "never",
sandbox: "danger-full-access",
approvalsReviewer: "user",
}),
);
});

it("parses dynamic tool profile controls", () => {
expect(
readCodexPluginConfig({
@@ -237,7 +534,7 @@ describe("Codex app-server config", () => {

it("treats configured and environment commands as explicit overrides", () => {
expect(
resolveCodexAppServerRuntimeOptions({
resolveRuntimeForTest({
pluginConfig: { appServer: { command: "/opt/codex/bin/codex" } },
env: { OPENCLAW_CODEX_APP_SERVER_BIN: "/usr/local/bin/codex" },
}).start,
@@ -249,7 +546,7 @@ describe("Codex app-server config", () => {
);

expect(
resolveCodexAppServerRuntimeOptions({
resolveRuntimeForTest({
pluginConfig: {},
env: { OPENCLAW_CODEX_APP_SERVER_BIN: "/usr/local/bin/codex" },
}).start,
@@ -304,7 +601,7 @@ describe("Codex app-server config", () => {
});

it("allows plugin config to opt in to guardian-reviewed local execution", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {
appServer: {
mode: "guardian",
@@ -323,7 +620,7 @@ describe("Codex app-server config", () => {
});

it("allows environment mode fallback to opt in to guardian-reviewed local execution", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {},
env: { OPENCLAW_CODEX_APP_SERVER_MODE: "guardian" },
});
@@ -339,13 +636,13 @@ describe("Codex app-server config", () => {

it("accepts the latest auto_review reviewer and legacy guardian_subagent alias", () => {
expect(
resolveCodexAppServerRuntimeOptions({
resolveRuntimeForTest({
pluginConfig: { appServer: { approvalsReviewer: "auto_review" } },
env: {},
}).approvalsReviewer,
).toBe("auto_review");
expect(
resolveCodexAppServerRuntimeOptions({
resolveRuntimeForTest({
pluginConfig: { appServer: { approvalsReviewer: "guardian_subagent" } },
env: {},
}).approvalsReviewer,
@@ -353,7 +650,7 @@ describe("Codex app-server config", () => {
});

it("ignores removed OPENCLAW_CODEX_APP_SERVER_GUARDIAN fallback", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {},
env: { OPENCLAW_CODEX_APP_SERVER_GUARDIAN: "1" },
});
@@ -368,7 +665,7 @@ describe("Codex app-server config", () => {
});

it("lets explicit policy fields override guardian mode", () => {
const runtime = resolveCodexAppServerRuntimeOptions({
const runtime = resolveRuntimeForTest({
pluginConfig: {
appServer: {
mode: "guardian",
@@ -487,21 +784,27 @@ describe("Codex app-server config", () => {

expect(manifestKeys).toEqual([...CODEX_APP_SERVER_CONFIG_KEYS].toSorted());
for (const key of CODEX_APP_SERVER_CONFIG_KEYS) {
expect(manifest.uiHints[`appServer.${key}`]).toBeTruthy();
expect(manifest.uiHints[`appServer.${key}`]).toMatchObject({
label: expect.any(String),
});
}
const computerUseManifestKeys = Object.keys(
manifest.configSchema.properties.computerUse.properties,
).toSorted();
expect(computerUseManifestKeys).toEqual([...CODEX_COMPUTER_USE_CONFIG_KEYS].toSorted());
for (const key of CODEX_COMPUTER_USE_CONFIG_KEYS) {
expect(manifest.uiHints[`computerUse.${key}`]).toBeTruthy();
expect(manifest.uiHints[`computerUse.${key}`]).toMatchObject({
label: expect.any(String),
});
}
const codexPluginsProperties = manifest.configSchema.properties.codexPlugins;
const codexPluginsManifestKeys = Object.keys(codexPluginsProperties.properties).toSorted();
expect(codexPluginsManifestKeys).toEqual([...CODEX_PLUGINS_CONFIG_KEYS].toSorted());
expect(codexPluginsProperties.additionalProperties).toBe(false);
for (const key of CODEX_PLUGINS_CONFIG_KEYS) {
expect(manifest.uiHints[`codexPlugins.${key}`]).toBeTruthy();
expect(manifest.uiHints[`codexPlugins.${key}`]).toMatchObject({
label: expect.any(String),
});
}
const pluginEntryProperties = (
codexPluginsProperties.properties.plugins as {

@@ -1,11 +1,21 @@
import { createHmac, randomBytes } from "node:crypto";
import { readFileSync } from "node:fs";
import { hostname as readHostName } from "node:os";
import { z } from "openclaw/plugin-sdk/zod";
import type { CodexSandboxPolicy, CodexServiceTier } from "./protocol.js";

const START_OPTIONS_KEY_SECRET = randomBytes(32);
|
||||
const UNIX_CODEX_REQUIREMENTS_PATH = "/etc/codex/requirements.toml";
|
||||
const WINDOWS_CODEX_REQUIREMENTS_SUFFIX = "\\OpenAI\\Codex\\requirements.toml";
|
||||
|
||||
type CodexAppServerTransportMode = "stdio" | "websocket";
|
||||
type CodexAppServerPolicyMode = "yolo" | "guardian";
|
||||
type CodexAppServerDefaultPolicy = {
|
||||
mode: CodexAppServerPolicyMode;
|
||||
approvalPolicy?: CodexAppServerApprovalPolicy;
|
||||
approvalsReviewer?: CodexAppServerApprovalsReviewer;
|
||||
sandbox?: CodexAppServerSandboxMode;
|
||||
};
|
||||
export type CodexAppServerApprovalPolicy = "never" | "on-request" | "on-failure" | "untrusted";
|
||||
export type CodexAppServerEffectiveApprovalPolicy =
|
||||
| CodexAppServerApprovalPolicy
|
||||
@@ -305,6 +315,11 @@ export function resolveCodexAppServerRuntimeOptions(
|
||||
params: {
|
||||
pluginConfig?: unknown;
|
||||
env?: NodeJS.ProcessEnv;
|
||||
requirementsToml?: string | null;
|
||||
requirementsPath?: string;
|
||||
readRequirementsFile?: (path: string) => string | undefined;
|
||||
platform?: NodeJS.Platform;
|
||||
hostName?: string;
|
||||
} = {},
|
||||
): CodexAppServerRuntimeOptions {
|
||||
const env = params.env ?? process.env;
|
||||
@@ -323,10 +338,20 @@ export function resolveCodexAppServerRuntimeOptions(
  const clearEnv = normalizeStringList(config.clearEnv);
  const authToken = readNonEmptyString(config.authToken);
  const url = readNonEmptyString(config.url);
  const policyMode =
    resolvePolicyMode(config.mode) ??
    resolvePolicyMode(env.OPENCLAW_CODEX_APP_SERVER_MODE) ??
    "yolo";
  const explicitPolicyMode =
    resolvePolicyMode(config.mode) ?? resolvePolicyMode(env.OPENCLAW_CODEX_APP_SERVER_MODE);
  const defaultPolicy = explicitPolicyMode
    ? undefined
    : resolveDefaultCodexAppServerPolicy({
        transport,
        env,
        requirementsToml: params.requirementsToml,
        requirementsPath: params.requirementsPath,
        readRequirementsFile: params.readRequirementsFile,
        platform: params.platform,
        hostName: params.hostName,
      });
  const policyMode = explicitPolicyMode ?? defaultPolicy?.mode ?? "yolo";
  const serviceTier = normalizeCodexServiceTier(config.serviceTier);
  if (transport === "websocket" && !url) {
    throw new Error(
@@ -353,13 +378,16 @@ export function resolveCodexAppServerRuntimeOptions(
    approvalPolicy:
      resolveApprovalPolicy(config.approvalPolicy) ??
      resolveApprovalPolicy(env.OPENCLAW_CODEX_APP_SERVER_APPROVAL_POLICY) ??
      defaultPolicy?.approvalPolicy ??
      (policyMode === "guardian" ? "on-request" : "never"),
    sandbox:
      resolveSandbox(config.sandbox) ??
      resolveSandbox(env.OPENCLAW_CODEX_APP_SERVER_SANDBOX) ??
      defaultPolicy?.sandbox ??
      (policyMode === "guardian" ? "workspace-write" : "danger-full-access"),
    approvalsReviewer:
      resolveApprovalsReviewer(config.approvalsReviewer) ??
      defaultPolicy?.approvalsReviewer ??
      (policyMode === "guardian" ? "auto_review" : "user"),
    ...(serviceTier ? { serviceTier } : {}),
  };
@@ -502,6 +530,333 @@ function resolvePolicyMode(value: unknown): CodexAppServerPolicyMode | undefined
  return value === "guardian" || value === "yolo" ? value : undefined;
}

function resolveDefaultCodexAppServerPolicy(params: {
  transport: CodexAppServerTransportMode;
  env?: NodeJS.ProcessEnv;
  requirementsToml?: string | null;
  requirementsPath?: string;
  readRequirementsFile?: (path: string) => string | undefined;
  platform?: NodeJS.Platform;
  hostName?: string;
}): CodexAppServerDefaultPolicy {
  if (params.transport !== "stdio") {
    return { mode: "yolo" };
  }
  const content = readCodexRequirementsToml(params);
  if (content === undefined) {
    return { mode: "yolo" };
  }
  const allowedSandboxModes = parseAllowedSandboxModesFromCodexRequirements(
    content,
    readNonEmptyString(params.hostName) ?? readHostName(),
  );
  const allowedApprovalPolicies = parseAllowedApprovalPoliciesFromCodexRequirements(content);
  const allowedApprovalsReviewers = parseAllowedApprovalsReviewersFromCodexRequirements(content);
  const yoloSandboxAllowed =
    allowedSandboxModes === undefined || allowedSandboxModes.has("danger-full-access");
  const yoloApprovalAllowed =
    allowedApprovalPolicies === undefined || allowedApprovalPolicies.has("never");
  const yoloReviewerAllowed =
    allowedApprovalsReviewers === undefined || allowedApprovalsReviewers.has("user");
  if (yoloSandboxAllowed && yoloApprovalAllowed && yoloReviewerAllowed) {
    return { mode: "yolo" };
  }
  return {
    mode: "guardian",
    approvalPolicy: selectGuardianApprovalPolicy(allowedApprovalPolicies),
    approvalsReviewer: selectGuardianApprovalsReviewer(allowedApprovalsReviewers),
    sandbox: selectGuardianSandbox(allowedSandboxModes),
  };
}

function readCodexRequirementsToml(params: {
  env?: NodeJS.ProcessEnv;
  requirementsToml?: string | null;
  requirementsPath?: string;
  readRequirementsFile?: (path: string) => string | undefined;
  platform?: NodeJS.Platform;
}): string | undefined {
  if (params.requirementsToml !== undefined) {
    return params.requirementsToml ?? undefined;
  }
  const path =
    readNonEmptyString(params.requirementsPath) ??
    resolveCodexRequirementsPath(params.env ?? process.env, params.platform ?? process.platform);
  try {
    if (params.readRequirementsFile) {
      return params.readRequirementsFile(path);
    }
    return readFileSync(path, "utf8");
  } catch {
    return undefined;
  }
}

function resolveCodexRequirementsPath(env: NodeJS.ProcessEnv, platform: NodeJS.Platform): string {
  if (platform === "win32") {
    const programData = readNonEmptyString(env.ProgramData) ?? "C:\\ProgramData";
    return `${programData.replace(/[\\/]+$/, "")}${WINDOWS_CODEX_REQUIREMENTS_SUFFIX}`;
  }
  return UNIX_CODEX_REQUIREMENTS_PATH;
}

function parseAllowedSandboxModesFromCodexRequirements(
  content: string,
  hostName: string,
): Set<CodexAppServerSandboxMode> | undefined {
  const remoteSandboxModes = parseMatchingRemoteSandboxModesFromCodexRequirements(
    content,
    hostName,
  );
  if (remoteSandboxModes !== undefined) {
    return remoteSandboxModes;
  }
  const values = parseTopLevelRequirementsStringArray(content, "allowed_sandbox_modes");
  return parseRequirementsSandboxModes(values);
}

function parseAllowedApprovalPoliciesFromCodexRequirements(
  content: string,
): Set<CodexAppServerApprovalPolicy> | undefined {
  const values = parseTopLevelRequirementsStringArray(content, "allowed_approval_policies");
  if (values === undefined) {
    return undefined;
  }
  const normalizedPolicies = values
    .map((entry) => normalizeRequirementsApprovalPolicy(entry))
    .filter((entry): entry is CodexAppServerApprovalPolicy => entry !== undefined);
  return normalizedPolicies.length > 0 ? new Set(normalizedPolicies) : undefined;
}

function parseAllowedApprovalsReviewersFromCodexRequirements(
  content: string,
): Set<CodexAppServerApprovalsReviewer> | undefined {
  const values = parseTopLevelRequirementsStringArray(content, "allowed_approvals_reviewers");
  if (values === undefined) {
    return undefined;
  }
  const normalizedReviewers = values
    .map((entry) => normalizeRequirementsApprovalsReviewer(entry))
    .filter((entry): entry is CodexAppServerApprovalsReviewer => entry !== undefined);
  return normalizedReviewers.length > 0 ? new Set(normalizedReviewers) : undefined;
}

function parseMatchingRemoteSandboxModesFromCodexRequirements(
  content: string,
  hostName: string,
): Set<CodexAppServerSandboxMode> | undefined {
  const normalizedHostName = normalizeRequirementsHostName(hostName);
  if (normalizedHostName === undefined) {
    return undefined;
  }
  for (const section of parseTomlArrayTableSections(content, "remote_sandbox_config")) {
    const patterns = parseRequirementsStringArray(section, "hostname_patterns");
    if (!patterns || !requirementsHostNameMatchesAnyPattern(normalizedHostName, patterns)) {
      continue;
    }
    return parseRequirementsSandboxModes(
      parseRequirementsStringArray(section, "allowed_sandbox_modes"),
    );
  }
  return undefined;
}

function parseRequirementsSandboxModes(
  values: string[] | undefined,
): Set<CodexAppServerSandboxMode> | undefined {
  if (values === undefined) {
    return undefined;
  }
  const normalizedModes = values
    .map((entry) => normalizeRequirementsSandboxMode(entry))
    .filter((entry): entry is CodexAppServerSandboxMode => entry !== undefined);
  return normalizedModes.length > 0 ? new Set(normalizedModes) : undefined;
}

function parseTopLevelRequirementsStringArray(content: string, key: string): string[] | undefined {
  const topLevelContent = stripTomlLineComments(content).slice(0, firstTomlTableOffset(content));
  return parseRequirementsStringArray(topLevelContent, key);
}

function parseRequirementsStringArray(content: string, key: string): string[] | undefined {
  const match = content.match(new RegExp(`(?:^|\\n)\\s*${key}\\s*=\\s*\\[([\\s\\S]*?)\\]`));
  if (!match) {
    return undefined;
  }
  const arrayBody = match[1] ?? "";
  const stringMatches = [...arrayBody.matchAll(/"([^"\\]*(?:\\.[^"\\]*)*)"|'([^']*)'/g)];
  if (stringMatches.length === 0 && arrayBody.trim().length > 0) {
    return undefined;
  }
  return stringMatches.map((entry) => entry[1] ?? entry[2] ?? "");
}

function parseTomlArrayTableSections(content: string, table: string): string[] {
  const strippedContent = stripTomlLineComments(content);
  const escapedTable = table.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const headerPattern = new RegExp(`^\\s*\\[\\[\\s*${escapedTable}\\s*\\]\\]\\s*$`, "gm");
  const sections: string[] = [];
  for (
    let match = headerPattern.exec(strippedContent);
    match;
    match = headerPattern.exec(strippedContent)
  ) {
    const sectionStart = headerPattern.lastIndex;
    const rest = strippedContent.slice(sectionStart);
    const nextTableOffset = rest.search(/^\s*\[/m);
    sections.push(nextTableOffset === -1 ? rest : rest.slice(0, nextTableOffset));
  }
  return sections;
}

function firstTomlTableOffset(content: string): number {
  const match = content.match(/^\s*\[[^\]\n]/m);
  return match?.index ?? content.length;
}

function stripTomlLineComments(value: string): string {
  let output = "";
  let quote: '"' | "'" | undefined;
  let escaped = false;
  for (let index = 0; index < value.length; index += 1) {
    const char = value[index] ?? "";
    if (quote) {
      output += char;
      if (quote === '"' && escaped) {
        escaped = false;
        continue;
      }
      if (quote === '"' && char === "\\") {
        escaped = true;
        continue;
      }
      if (char === quote) {
        quote = undefined;
      }
      continue;
    }
    if (char === '"' || char === "'") {
      quote = char;
      output += char;
      continue;
    }
    if (char === "#") {
      while (index < value.length && value[index] !== "\n") {
        index += 1;
      }
      if (value[index] === "\n") {
        output += "\n";
      }
      continue;
    }
    output += char;
  }
  return output;
}

function normalizeRequirementsSandboxMode(value: string): CodexAppServerSandboxMode | undefined {
  const compact = value.replace(/[\s_-]/g, "").toLowerCase();
  if (compact === "readonly") {
    return "read-only";
  }
  if (compact === "workspacewrite") {
    return "workspace-write";
  }
  if (compact === "dangerfullaccess") {
    return "danger-full-access";
  }
  return undefined;
}

function normalizeRequirementsHostName(value: string): string | undefined {
  const normalized = value.trim().replace(/\.+$/g, "").toLowerCase();
  return normalized.length > 0 ? normalized : undefined;
}

function requirementsHostNameMatchesAnyPattern(hostName: string, patterns: string[]): boolean {
  return patterns.some((pattern) => {
    const normalizedPattern = normalizeRequirementsHostName(pattern);
    return normalizedPattern !== undefined && globPatternMatches(hostName, normalizedPattern);
  });
}

function globPatternMatches(value: string, pattern: string): boolean {
  let regex = "^";
  for (const char of pattern) {
    if (char === "*") {
      regex += ".*";
    } else if (char === "?") {
      regex += ".";
    } else {
      regex += char.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    }
  }
  regex += "$";
  return new RegExp(regex).test(value);
}

function normalizeRequirementsApprovalPolicy(
  value: string,
): CodexAppServerApprovalPolicy | undefined {
  const normalized = value.trim().toLowerCase();
  return resolveApprovalPolicy(normalized);
}

function normalizeRequirementsApprovalsReviewer(
  value: string,
): CodexAppServerApprovalsReviewer | undefined {
  const normalized = value.trim().toLowerCase();
  return resolveApprovalsReviewer(normalized);
}

function selectGuardianApprovalPolicy(
  allowedApprovalPolicies: Set<CodexAppServerApprovalPolicy> | undefined,
): CodexAppServerApprovalPolicy {
  if (allowedApprovalPolicies === undefined || allowedApprovalPolicies.has("on-request")) {
    return "on-request";
  }
  if (allowedApprovalPolicies.has("on-failure")) {
    return "on-failure";
  }
  if (allowedApprovalPolicies.has("untrusted")) {
    return "untrusted";
  }
  if (allowedApprovalPolicies.has("never")) {
    return "never";
  }
  return "on-request";
}

function selectGuardianApprovalsReviewer(
  allowedApprovalsReviewers: Set<CodexAppServerApprovalsReviewer> | undefined,
): CodexAppServerApprovalsReviewer {
  if (allowedApprovalsReviewers === undefined || allowedApprovalsReviewers.has("auto_review")) {
    return "auto_review";
  }
  if (allowedApprovalsReviewers.has("guardian_subagent")) {
    return "guardian_subagent";
  }
  if (allowedApprovalsReviewers.has("user")) {
    return "user";
  }
  return "auto_review";
}

function selectGuardianSandbox(
  allowedSandboxModes: Set<CodexAppServerSandboxMode> | undefined,
): CodexAppServerSandboxMode {
  if (allowedSandboxModes === undefined || allowedSandboxModes.has("workspace-write")) {
    return "workspace-write";
  }
  if (allowedSandboxModes.has("read-only")) {
    return "read-only";
  }
  if (allowedSandboxModes.has("danger-full-access")) {
    return "danger-full-access";
  }
  return "workspace-write";
}

function resolveApprovalPolicy(value: unknown): CodexAppServerApprovalPolicy | undefined {
  return value === "on-request" ||
    value === "on-failure" ||
@@ -911,10 +911,17 @@ describe("createCodexDynamicToolBridge", () => {
      },
      { signal: callController.signal },
    );
    await vi.waitFor(() => expect(capturedSignal).toBeDefined());
    await vi.waitFor(() => {
      if (!capturedSignal) {
        throw new Error("expected dynamic tool call signal");
      }
    });
    if (!capturedSignal) {
      throw new Error("expected dynamic tool call signal");
    }

    callController.abort(new Error("deadline"));
    expect(capturedSignal?.aborted).toBe(true);
    expect(capturedSignal.aborted).toBe(true);
    resolveTool?.(textToolResult("done"));

    await expect(result).resolves.toEqual(expectInputText("done"));
@@ -674,8 +674,10 @@ describe("runCodexAppServerAttempt", () => {
    params.sourceReplyDeliveryMode = "message_tool_only";
    params.toolsAllow = ["message", "web_search", "heartbeat_respond"];

    const run = runCodexAppServerAttempt(params);
    await harness.waitForMethod("turn/start", 60_000);
    const run = runCodexAppServerAttempt(params, {
      pluginConfig: { appServer: { mode: "yolo" } },
    });
    await harness.waitForMethod("turn/start", 120_000);
    await harness.completeTurn({ threadId: "thread-1", turnId: "turn-1" });
    await run;
@@ -1953,6 +1955,7 @@ describe("runCodexAppServerAttempt", () => {
    const { waitForMethod } = createStartedThreadHarness();
    const run = runCodexAppServerAttempt(
      createParams(path.join(tempDir, "session.jsonl"), path.join(tempDir, "workspace")),
      { pluginConfig: { appServer: { mode: "yolo" } } },
    );

    await waitForMethod("turn/start");
@@ -1974,6 +1977,7 @@ describe("runCodexAppServerAttempt", () => {

    const run = runCodexAppServerAttempt(
      createParams(path.join(tempDir, "session.jsonl"), path.join(tempDir, "workspace")),
      { pluginConfig: { appServer: { mode: "yolo" } } },
    );
    await waitForMethod("turn/start");

@@ -3107,7 +3111,9 @@ describe("runCodexAppServerAttempt", () => {
    await writeExistingBinding(sessionFile, workspaceDir, { dynamicToolsFingerprint: "[]" });
    const { requests, waitForMethod, completeTurn } = createResumeHarness();

    const run = runCodexAppServerAttempt(createParams(sessionFile, workspaceDir));
    const run = runCodexAppServerAttempt(createParams(sessionFile, workspaceDir), {
      pluginConfig: { appServer: { mode: "yolo" } },
    });
    await waitForMethod("turn/start");
    await completeTurn({ threadId: "thread-existing", turnId: "turn-1" });
    await run;
@@ -57,7 +57,8 @@ describe("codex app-server session binding", () => {
      modelProvider: "openai",
      dynamicToolsFingerprint: "tools-v1",
    });
    await expect(fs.stat(resolveCodexAppServerBindingPath(sessionFile))).resolves.toBeTruthy();
    const bindingStat = await fs.stat(resolveCodexAppServerBindingPath(sessionFile));
    expect(bindingStat.isFile()).toBe(true);
  });

  it("round-trips plugin app policy context with app ids as record keys", async () => {
@@ -73,8 +73,10 @@ function readDiagnosticsConfirmationToken(
): string {
  const text = result.text ?? "";
  const token = new RegExp(`${escapeRegExp(commandPrefix)} confirm ([a-f0-9]{12})`).exec(text)?.[1];
  expect(token).toBeTruthy();
  return token as string;
  if (!token) {
    throw new Error(`expected ${commandPrefix} confirmation token in command output`);
  }
  return token;
}

function escapeRegExp(value: string): string {
@@ -12,7 +12,7 @@ describe("codex package manifest", () => {
      fs.readFileSync(new URL("../package.json", import.meta.url), "utf8"),
    ) as CodexPackageManifest;

    expect(packageJson.dependencies?.["@mariozechner/pi-coding-agent"]).toBeDefined();
    expect(packageJson.dependencies).toHaveProperty("@mariozechner/pi-coding-agent");
    expect(packageJson.dependencies?.["@openai/codex"]).toBe(
      MANAGED_CODEX_APP_SERVER_PACKAGE_VERSION,
    );
@@ -30,6 +30,7 @@ export function resolveCodexPromptSnapshotAppServerOptions(
  return resolveCodexAppServerRuntimeOptions({
    pluginConfig,
    env: {},
    requirementsToml: null,
  });
}

@@ -31,6 +31,14 @@ function withPluginsEnabled<T>(cfg: T): T {
  } as T;
}

function requireProvider<T extends { id: string }>(providers: T[], id: string): T {
  const provider = providers.find((entry) => entry.id === id);
  if (!provider) {
    throw new Error(`expected ${id} provider to be registered`);
  }
  return provider;
}

describeLive("comfy live", () => {
  let cfg = {} as OpenClawConfig;
  let agentDir = "";
@@ -62,9 +70,8 @@ describeLive("comfy live", () => {
  it.skipIf(!isComfyCapabilityConfigured({ cfg: cfg as never, agentDir, capability: "image" }))(
    "runs an image workflow",
    async () => {
      const provider = imageProviders.find((entry) => entry.id === "comfy");
      expect(provider).toBeDefined();
      const result = await provider!.generateImage({
      const provider = requireProvider(imageProviders, "comfy");
      const result = await provider.generateImage({
        provider: "comfy",
        model: "workflow",
        prompt: "A tiny orange lobster icon on a clean background.",
@@ -81,9 +88,8 @@ describeLive("comfy live", () => {
  it.skipIf(!isComfyCapabilityConfigured({ cfg: cfg as never, agentDir, capability: "video" }))(
    "runs a video workflow",
    async () => {
      const provider = videoProviders.find((entry) => entry.id === "comfy");
      expect(provider).toBeDefined();
      const result = await provider!.generateVideo({
      const provider = requireProvider(videoProviders, "comfy");
      const result = await provider.generateVideo({
        provider: "comfy",
        model: "workflow",
        prompt: "A tiny paper lobster gently waving, cinematic motion.",
@@ -100,9 +106,8 @@ describeLive("comfy live", () => {
  it.skipIf(!isComfyCapabilityConfigured({ cfg: cfg as never, agentDir, capability: "music" }))(
    "runs a music workflow",
    async () => {
      const provider = musicProviders.find((entry) => entry.id === "comfy");
      expect(provider).toBeDefined();
      const result = await provider!.generateMusic({
      const provider = requireProvider(musicProviders, "comfy");
      const result = await provider.generateMusic({
        provider: "comfy",
        model: "workflow",
        prompt: "A gentle ambient synth loop with warm analog pads.",
@@ -60,6 +60,7 @@ describeLive("deepgram live", () => {
      outputFormat: "ulaw_8000",
      timeoutMs: 30_000,
    });
    expect(speech.byteLength).toBeGreaterThan(0);

    await runRealtimeSttLiveTest({
      provider,
@@ -44,8 +44,7 @@ describe("DeepInfra provider config", () => {
  it("sets DeepInfra alias on the provided model ref", () => {
    const result = applyDeepInfraProviderConfig(emptyCfg, DEEPINFRA_DEFAULT_MODEL_REF);
    const agentModel = result.agents?.defaults?.models?.[DEEPINFRA_DEFAULT_MODEL_REF];
    expect(agentModel).toBeDefined();
    expect(agentModel?.alias).toBe("DeepInfra");
    expect(agentModel).toMatchObject({ alias: "DeepInfra" });
  });

  it("attaches the alias to a non-default model ref when provided", () => {
@@ -142,18 +142,16 @@ describeLive("deepseek plugin live", () => {
    };
    let capturedPayload: Record<string, unknown> | undefined;
    const streamFn = createDeepSeekV4ThinkingWrapper(streamSimple, "high");
    expect(streamFn).toBeDefined();

    const stream = streamFn?.(resolveDeepSeekV4LiveModel(), context, {
    const stream = streamFn(resolveDeepSeekV4LiveModel(), context, {
      apiKey: DEEPSEEK_KEY,
      maxTokens: 64,
      onPayload: (payload) => {
        capturedPayload = payload as Record<string, unknown>;
      },
    });
    expect(stream).toBeDefined();

    const result = await (await stream!).result();
    const result = await (await stream).result();
    if (result.stopReason === "error") {
      throw new Error(result.errorMessage || "DeepSeek V4 replay returned error with no message");
    }
@@ -204,18 +202,16 @@ describeLive("deepseek plugin live", () => {
    };
    let capturedPayload: Record<string, unknown> | undefined;
    const streamFn = createDeepSeekV4ThinkingWrapper(streamSimple, "high");
    expect(streamFn).toBeDefined();

    const stream = streamFn?.(resolveDeepSeekV4LiveModel(), context, {
    const stream = streamFn(resolveDeepSeekV4LiveModel(), context, {
      apiKey: DEEPSEEK_KEY,
      maxTokens: 64,
      onPayload: (payload) => {
        capturedPayload = payload as Record<string, unknown>;
      },
    });
    expect(stream).toBeDefined();

    const result = await (await stream!).result();
    const result = await (await stream).result();
    if (result.stopReason === "error") {
      throw new Error(
        result.errorMessage || "DeepSeek V4 plain replay returned error with no message",
@@ -119,6 +119,16 @@ function createPayloadCapturingStream(capture: PayloadCapture) {
  };
}

function requireThinkingWrapper(
  wrapper: ReturnType<typeof createDeepSeekV4ThinkingWrapper>,
  label: string,
): NonNullable<ReturnType<typeof createDeepSeekV4ThinkingWrapper>> {
  if (!wrapper) {
    throw new Error(`expected DeepSeek thinking wrapper for ${label}`);
  }
  return wrapper;
}

describe("deepseek provider plugin", () => {
  it("registers DeepSeek with api-key auth wizard metadata", async () => {
    const provider = await registerSingleProviderPlugin(deepseekPlugin);
@@ -225,9 +235,11 @@ describe("deepseek provider plugin", () => {
      return stream;
    };

    const wrapThinkingOff = createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "off");
    expect(wrapThinkingOff).toBeDefined();
    await wrapThinkingOff?.(
    const wrapThinkingOff = requireThinkingWrapper(
      createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "off"),
      "off",
    );
    await wrapThinkingOff(
      {
        provider: "deepseek",
        id: "deepseek-v4-pro",
@@ -240,9 +252,11 @@ describe("deepseek provider plugin", () => {
    expect(capturedPayload).toMatchObject({ thinking: { type: "disabled" } });
    expect(capturedPayload).not.toHaveProperty("reasoning_effort");

    const wrapThinkingXhigh = createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "xhigh");
    expect(wrapThinkingXhigh).toBeDefined();
    await wrapThinkingXhigh?.(
    const wrapThinkingXhigh = requireThinkingWrapper(
      createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "xhigh"),
      "xhigh",
    );
    await wrapThinkingXhigh(
      {
        provider: "deepseek",
        id: "deepseek-v4-pro",
@@ -264,9 +278,11 @@ describe("deepseek provider plugin", () => {
    const context = deepSeekReasoningToolReplayContext();
    const baseStreamFn = createPayloadCapturingStream(capture);

    const wrapThinkingHigh = createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "high");
    expect(wrapThinkingHigh).toBeDefined();
    await wrapThinkingHigh?.(model, context, {});
    const wrapThinkingHigh = requireThinkingWrapper(
      createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "high"),
      "high",
    );
    await wrapThinkingHigh(model, context, {});

    expect(capture.payload).toMatchObject({
      thinking: { type: "enabled" },
@@ -301,9 +317,11 @@ describe("deepseek provider plugin", () => {
    );
    const baseStreamFn = createPayloadCapturingStream(capture);

    const wrapThinkingHigh = createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "high");
    expect(wrapThinkingHigh).toBeDefined();
    await wrapThinkingHigh?.(model, context, {});
    const wrapThinkingHigh = requireThinkingWrapper(
      createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "high"),
      "high",
    );
    await wrapThinkingHigh(model, context, {});

    expect((capture.payload?.messages as Array<Record<string, unknown>>)[1]).toMatchObject({
      role: "assistant",
@@ -338,9 +356,11 @@ describe("deepseek provider plugin", () => {
    } as Context;
    const baseStreamFn = createPayloadCapturingStream(capture);

    const wrapThinkingHigh = createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "high");
    expect(wrapThinkingHigh).toBeDefined();
    await wrapThinkingHigh?.(model, context, {});
    const wrapThinkingHigh = requireThinkingWrapper(
      createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "high"),
      "high",
    );
    await wrapThinkingHigh(model, context, {});

    expect((capture.payload?.messages as Array<Record<string, unknown>>)[1]).toMatchObject({
      role: "assistant",
@@ -355,12 +375,11 @@ describe("deepseek provider plugin", () => {
    const context = deepSeekReasoningToolReplayContext();
    const baseStreamFn = createPayloadCapturingStream(capture);

    const wrapThinkingNone = createDeepSeekV4ThinkingWrapper(
      baseStreamFn as never,
      "none" as never,
    const wrapThinkingNone = requireThinkingWrapper(
      createDeepSeekV4ThinkingWrapper(baseStreamFn as never, "none" as never),
      "none",
    );
    expect(wrapThinkingNone).toBeDefined();
    await wrapThinkingNone?.(model, context, {});
    await wrapThinkingNone(model, context, {});

    expect(capture.payload).toMatchObject({ thinking: { type: "disabled" } });
    expect(capture.payload).not.toHaveProperty("reasoning_effort");
@@ -131,7 +131,9 @@ describe("PlaywrightDiffScreenshotter", () => {
    expect(pages).toHaveLength(1);
    expect(pages[0]?.pdf).toHaveBeenCalledTimes(1);
    const pdfCall = pages[0]?.pdf.mock.calls[0]?.[0] as Record<string, unknown> | undefined;
    expect(pdfCall).toBeDefined();
    if (!pdfCall) {
      throw new Error("expected PDF render call");
    }
    expect(pdfCall).not.toHaveProperty("pageRanges");
    expect(pages[0]?.screenshot).toHaveBeenCalledTimes(0);
    await expect(fs.readFile(pdfPath, "utf8")).resolves.toContain("%PDF-1.7");
@@ -11,6 +11,6 @@ describe("diffs package manifest", () => {
      fs.readFileSync(new URL("../package.json", import.meta.url), "utf8"),
    ) as DiffsPackageManifest;

    expect(packageJson.dependencies?.["@pierre/diffs"]).toBeDefined();
    expect(packageJson.dependencies).toHaveProperty("@pierre/diffs");
  });
});
@@ -38,7 +38,9 @@ describe("diffs tool", () => {
|
||||
|
||||
const text = readTextContent(result, 0);
|
||||
expect(text).toContain("http://127.0.0.1:18789/plugins/diffs/view/");
|
||||
expect((result?.details as Record<string, unknown>).viewerUrl).toBeDefined();
|
||||
expect(readDetails(result).viewerUrl).toEqual(
|
||||
expect.stringContaining("http://127.0.0.1:18789/plugins/diffs/view/"),
);
});

it("uses configured viewerBaseUrl when tool input omits baseUrl", async () => {
@@ -92,16 +94,15 @@ describe("diffs tool", () => {
);
});

it("does not expose reserved format in the tool schema", async () => {
it("does not expose reserved format in the tool schema", () => {
const tool = createDiffsTool({
api: createApi(),
store,
defaults: DEFAULT_DIFFS_TOOL_DEFAULTS,
});

const parameters = tool.parameters as { properties?: Record<string, unknown> };
expect(parameters.properties).toBeDefined();
expect(parameters.properties).not.toHaveProperty("format");
const properties = readParametersProperties(tool.parameters);
expect(properties).not.toHaveProperty("format");
});

it("returns an image artifact in image mode", async () => {
@@ -132,16 +133,17 @@ describe("diffs tool", () => {
expect(readTextContent(result, 0)).toContain("Diff PNG generated at:");
expect(readTextContent(result, 0)).toContain("Use the `message` tool");
expect(result?.content).toHaveLength(1);
expect((result?.details as Record<string, unknown>).filePath).toBeDefined();
expect((result?.details as Record<string, unknown>).imagePath).toBeDefined();
expect((result?.details as Record<string, unknown>).format).toBe("png");
expect((result?.details as Record<string, unknown>).fileQuality).toBe("standard");
expect((result?.details as Record<string, unknown>).imageQuality).toBe("standard");
expect((result?.details as Record<string, unknown>).fileScale).toBe(2);
expect((result?.details as Record<string, unknown>).imageScale).toBe(2);
expect((result?.details as Record<string, unknown>).fileMaxWidth).toBe(960);
expect((result?.details as Record<string, unknown>).imageMaxWidth).toBe(960);
expect((result?.details as Record<string, unknown>).viewerUrl).toBeUndefined();
const details = readDetails(result);
expect(requireString(details.filePath, "filePath")).toMatch(/preview\.png$/);
expect(requireString(details.imagePath, "imagePath")).toMatch(/preview\.png$/);
expect(details.format).toBe("png");
expect(details.fileQuality).toBe("standard");
expect(details.imageQuality).toBe("standard");
expect(details.fileScale).toBe(2);
expect(details.imageScale).toBe(2);
expect(details.fileMaxWidth).toBe(960);
expect(details.imageMaxWidth).toBe(960);
expect(details.viewerUrl).toBeUndefined();
expect(cleanupSpy).toHaveBeenCalledTimes(1);
});

@@ -206,8 +208,8 @@ describe("diffs tool", () => {
mode: "file",
ttlSeconds: 1,
});
const filePath = (result?.details as Record<string, unknown>).filePath as string;
await expect(fs.stat(filePath)).resolves.toBeDefined();
const filePath = requireString(readDetails(result).filePath, "filePath");
await fs.access(filePath);

vi.setSystemTime(new Date(now.getTime() + 2_000));
await store.cleanupExpired();
@@ -564,6 +566,32 @@ function createPdfScreenshotter(
return { screenshotHtml };
}

function isRecord(value: unknown): value is Record<string, unknown> {
return typeof value === "object" && value !== null;
}

function readDetails(result: unknown): Record<string, unknown> {
const details = (result as { details?: unknown } | null | undefined)?.details;
if (!isRecord(details)) {
throw new Error("expected diffs tool result details");
}
return details;
}

function readParametersProperties(parameters: unknown): Record<string, unknown> {
if (isRecord(parameters) && isRecord(parameters.properties)) {
return parameters.properties;
}
throw new Error("expected diffs tool parameter properties");
}

function requireString(value: unknown, label: string): string {
if (typeof value !== "string" || value.length === 0) {
throw new Error(`expected ${label}`);
}
return value;
}

function readTextContent(result: unknown, index: number): string {
const content = (result as { content?: Array<{ type?: string; text?: string }> } | undefined)
?.content;

@@ -18,26 +18,63 @@ afterEach(() => {
vi.unstubAllEnvs();
});

describe("resolveDiscordAccount allowFrom precedence", () => {
it("uses configured defaultAccount when accountId is omitted", () => {
const resolved = resolveDiscordAccount({
cfg: {
channels: {
discord: {
defaultAccount: "work",
accounts: {
work: { token: "token-work", name: "Work" },
const defaultAccountOmissionCases = [
{
name: "resolveDiscordAccount",
assert: () => {
const resolved = resolveDiscordAccount({
cfg: {
channels: {
discord: {
defaultAccount: "work",
accounts: {
work: { token: "token-work", name: "Work" },
},
},
},
},
},
});
});

expect(resolved.accountId).toBe("work");
expect(resolved.name).toBe("Work");
expect(resolved.token).toBe("token-work");
});
expect(resolved.accountId).toBe("work");
expect(resolved.name).toBe("Work");
expect(resolved.token).toBe("token-work");
},
},
{
name: "createDiscordActionGate",
assert: () => {
const gate = createDiscordActionGate({
cfg: {
channels: {
discord: {
actions: { reactions: false },
defaultAccount: "work",
accounts: {
work: {
token: "token-work",
actions: { reactions: true },
},
},
},
},
},
});

expect(gate("reactions")).toBe(true);
},
},
];

describe("Discord defaultAccount omission contract", () => {
it.each(defaultAccountOmissionCases)(
"$name uses configured defaultAccount when accountId is omitted",
({ assert }) => {
assert();
},
);
});

describe("resolveDiscordAccount allowFrom precedence", () => {
it("prefers accounts.default.allowFrom over top-level for default account", () => {
const resolved = resolveDiscordAccount({
cfg: {
@@ -93,29 +130,6 @@ describe("resolveDiscordAccount allowFrom precedence", () => {
});
});

describe("createDiscordActionGate", () => {
it("uses configured defaultAccount when accountId is omitted", () => {
const gate = createDiscordActionGate({
cfg: {
channels: {
discord: {
actions: { reactions: false },
defaultAccount: "work",
accounts: {
work: {
token: "token-work",
actions: { reactions: true },
},
},
},
},
},
});

expect(gate("reactions")).toBe(true);
});
});

describe("resolveDiscordMaxLinesPerMessage", () => {
it("falls back to merged root discord maxLinesPerMessage when runtime config omits it", () => {
const resolved = resolveDiscordMaxLinesPerMessage({

@@ -11,8 +11,10 @@ const fetchChannelPermissionsDiscordMock = vi.fn();

function readDiscordGuilds(cfg: OpenClawConfig) {
const guilds = cfg.channels?.discord?.guilds;
expect(guilds).toBeDefined();
return guilds ?? {};
if (!guilds) {
throw new Error("expected discord guilds config");
}
return guilds;
}

describe("discord audit", () => {

@@ -26,7 +26,9 @@ describe("discord channel message adapter", () => {

it("backs declared durable-final capabilities with outbound send proofs", async () => {
const adapter = discordPlugin.message;
expect(adapter).toBeDefined();
if (!adapter) {
throw new Error("Expected discord plugin to expose a channel message adapter");
}

const proveText = async () => {
resetDiscordOutboundMocks(hoisted);

@@ -135,10 +135,15 @@ describe("discord component registry", () => {
});
const confirm = result.entries.find((entry) => entry.label === "Confirm");
const cancel = result.entries.find((entry) => entry.label === "Cancel");
expect(confirm?.consumptionGroupId).toBeTruthy();
expect(cancel?.consumptionGroupId).toBe(confirm?.consumptionGroupId);
if (!confirm?.consumptionGroupId) {
throw new Error("expected confirm entry to carry a consumption group id");
}
if (!cancel) {
throw new Error("expected cancel entry");
}
expect(cancel.consumptionGroupId).toBe(confirm.consumptionGroupId);
expect(confirm?.consumptionGroupEntryIds).toEqual(
expect.arrayContaining([confirm?.id, cancel?.id]),
expect.arrayContaining([confirm.id, cancel.id]),
);

registerDiscordComponentEntries({

@@ -8,13 +8,19 @@ import {
scanDiscordNumericIdEntries,
} from "./doctor.js";

function getDiscordCompatibilityNormalizer(): NonNullable<
typeof discordDoctor.normalizeCompatibilityConfig
> {
const normalize = discordDoctor.normalizeCompatibilityConfig;
if (!normalize) {
throw new Error("Expected discord doctor to expose normalizeCompatibilityConfig");
}
return normalize;
}

describe("discord doctor", () => {
it("normalizes legacy discord streaming aliases for runtime config", () => {
const normalize = discordDoctor.normalizeCompatibilityConfig;
expect(normalize).toBeDefined();
if (!normalize) {
return;
}
const normalize = getDiscordCompatibilityNormalizer();

const result = normalize({
cfg: {
@@ -76,11 +82,7 @@ describe("discord doctor", () => {
});

it("moves account voice.tts.edge into providers.microsoft", () => {
const normalize = discordDoctor.normalizeCompatibilityConfig;
expect(normalize).toBeDefined();
if (!normalize) {
return;
}
const normalize = getDiscordCompatibilityNormalizer();

const result = normalize({
cfg: {
@@ -117,11 +119,7 @@ describe("discord doctor", () => {
});

it("moves legacy guild channel allow toggles into enabled", () => {
const normalize = discordDoctor.normalizeCompatibilityConfig;
expect(normalize).toBeDefined();
if (!normalize) {
return;
}
const normalize = getDiscordCompatibilityNormalizer();

const result = normalize({
cfg: {
@@ -169,11 +167,7 @@ describe("discord doctor", () => {
});

it("moves legacy guild channel agentId into a top-level route binding", () => {
const normalize = discordDoctor.normalizeCompatibilityConfig;
expect(normalize).toBeDefined();
if (!normalize) {
return;
}
const normalize = getDiscordCompatibilityNormalizer();

const result = normalize({
cfg: {
@@ -213,11 +207,7 @@ describe("discord doctor", () => {
});

it("moves account-scoped guild channel agentId into an account-scoped route binding", () => {
const normalize = discordDoctor.normalizeCompatibilityConfig;
expect(normalize).toBeDefined();
if (!normalize) {
return;
}
const normalize = getDiscordCompatibilityNormalizer();

const result = normalize({
cfg: {
@@ -263,11 +253,7 @@ describe("discord doctor", () => {
});

it("removes legacy guild channel agentId when a matching route binding already exists", () => {
const normalize = discordDoctor.normalizeCompatibilityConfig;
expect(normalize).toBeDefined();
if (!normalize) {
return;
}
const normalize = getDiscordCompatibilityNormalizer();

const existingBinding = {
agentId: "video",

@@ -544,7 +544,7 @@ describe("RequestClient", () => {
);
});

it("serializes message multipart uploads with payload_json", async () => {
it("serializes message multipart uploads with payload_json", () => {
const headers = new Headers();
const body = serializeRequestBody(
{

@@ -253,6 +253,20 @@ describe("discord native /think autocomplete", () => {
} as OpenClawConfig;
}

function requireThinkLevelCommand() {
const command = findCommandByNativeName("think", "discord", {
includeBundledChannelFallback: false,
});
if (!command) {
throw new Error("expected Discord /think command");
}
const levelArg = command.args?.find((entry) => entry.name === "level");
if (!levelArg) {
throw new Error("expected Discord /think level arg");
}
return { command, levelArg };
}

it("uses the session override context for /think choices", async () => {
const cfg = createConfig();
const interaction = {
@@ -269,15 +283,7 @@ describe("discord native /think autocomplete", () => {
respond: (choices: Array<{ name: string; value: string }>) => Promise<void>;
};

const command = findCommandByNativeName("think", "discord", {
includeBundledChannelFallback: false,
});
expect(command).toBeTruthy();
const levelArg = command?.args?.find((entry) => entry.name === "level");
expect(levelArg).toBeTruthy();
if (!command || !levelArg) {
return;
}
const { command, levelArg } = requireThinkLevelCommand();

const context = await resolveDiscordNativeChoiceContext({
interaction,
@@ -346,15 +352,7 @@ describe("discord native /think autocomplete", () => {
accountId: "default",
threadBindings: createNoopThreadBindingManager("default"),
});
const command = findCommandByNativeName("think", "discord", {
includeBundledChannelFallback: false,
});
const levelArg = command?.args?.find((entry) => entry.name === "level");
expect(command).toBeTruthy();
expect(levelArg).toBeTruthy();
if (!command || !levelArg) {
return;
}
const { command, levelArg } = requireThinkLevelCommand();

const choices = resolveCommandArgChoices({
command,
@@ -401,15 +399,7 @@ describe("discord native /think autocomplete", () => {
expect(context).toBeNull();
expect(ensureConfiguredBindingRouteReadyMock).toHaveBeenCalledTimes(1);

const command = findCommandByNativeName("think", "discord", {
includeBundledChannelFallback: false,
});
const levelArg = command?.args?.find((entry) => entry.name === "level");
expect(command).toBeTruthy();
expect(levelArg).toBeTruthy();
if (!command || !levelArg) {
return;
}
const { command, levelArg } = requireThinkLevelCommand();
const choices = resolveCommandArgChoices({
command,
arg: levelArg,

@@ -315,8 +315,10 @@ describe("runDiscordGatewayLifecycle", () => {
),
).toBe(true);

expect(resolveWait).toBeDefined();
resolveWait?.();
if (!resolveWait) {
throw new Error("expected lifecycle wait resolver");
}
resolveWait();
await expect(lifecyclePromise).resolves.toBeUndefined();
});


@@ -18,8 +18,10 @@ function createGatewayInfoBody(overrides?: {
}

function resolveGatewayInfoFetch(resolve: ((value: Response) => void) | undefined): void {
expect(resolve).toBeDefined();
resolve!({
if (!resolve) {
throw new Error("expected pending gateway info fetch resolver");
}
resolve({
ok: true,
status: 200,
text: async () => createGatewayInfoBody(),
@@ -449,7 +451,7 @@ describe("createDiscordGatewayPlugin", () => {
);
});

it("uses proxy agent for gateway WebSocket when configured", async () => {
it("uses proxy agent for gateway WebSocket when configured", () => {
const runtime = createRuntime();

const plugin = createDiscordGatewayPlugin({
@@ -473,7 +475,7 @@ describe("createDiscordGatewayPlugin", () => {
expect(runtime.error).not.toHaveBeenCalled();
});

it("falls back to the default gateway plugin when proxy is invalid", async () => {
it("falls back to the default gateway plugin when proxy is invalid", () => {
const runtime = createRuntime();

const plugin = createDiscordGatewayPlugin({
@@ -535,7 +537,7 @@ describe("createDiscordGatewayPlugin", () => {
expect(runtime.error).not.toHaveBeenCalled();
});

it("falls back to the default gateway plugin when proxy is remote", async () => {
it("falls back to the default gateway plugin when proxy is remote", () => {
const runtime = createRuntime();

const plugin = createDiscordGatewayPlugin({

@@ -79,7 +79,7 @@ describe("resolveDiscordRestFetch", () => {
expect(runtime.error).not.toHaveBeenCalled();
});

it("falls back to global fetch when proxy URL is invalid", async () => {
it("falls back to global fetch when proxy URL is invalid", () => {
const runtime = {
log: vi.fn(),
error: vi.fn(),

@@ -6,6 +6,8 @@ import {
resolveDiscordThreadStarter,
} from "./threading.js";

type ResolvedThreadStarter = NonNullable<Awaited<ReturnType<typeof resolveDiscordThreadStarter>>>;

type ThreadStarterRestMessage = {
content?: string | null;
embeds?: Array<{ title?: string | null; description?: string | null }>;
@@ -65,6 +67,15 @@ function createStarterMessage(overrides: ThreadStarterRestMessage = {}): ThreadS
};
}

function requireThreadStarter(
result: Awaited<ReturnType<typeof resolveDiscordThreadStarter>>,
): ResolvedThreadStarter {
if (!result) {
throw new Error("expected resolved Discord thread starter");
}
return result;
}

async function resolveStarter(params: {
message: ThreadStarterRestMessage;
parentId?: string;
@@ -152,10 +163,10 @@ describe("resolveDiscordThreadStarter", () => {
resolveTimestampMs: () => 456,
});

expect(result).toBeTruthy();
expect(result!.text).toContain("forwarded task content");
expect(result!.author).toBe("Bob");
expect(result!.timestamp).toBe(456);
const starter = requireThreadStarter(result);
expect(starter.text).toContain("forwarded task content");
expect(starter.author).toBe("Bob");
expect(starter.timestamp).toBe(456);
});

it("prefers content over forwarded message snapshots", async () => {
@@ -167,8 +178,7 @@ describe("resolveDiscordThreadStarter", () => {
}),
});

expect(result).toBeTruthy();
expect(result!.text).toBe("direct content");
expect(requireThreadStarter(result).text).toBe("direct content");
});

it("joins multiple forwarded message snapshots", async () => {
@@ -182,9 +192,9 @@ describe("resolveDiscordThreadStarter", () => {
}),
});

expect(result).toBeTruthy();
expect(result!.text).toContain("first forwarded message");
expect(result!.text).toContain("second forwarded message");
const starter = requireThreadStarter(result);
expect(starter.text).toContain("first forwarded message");
expect(starter.text).toContain("second forwarded message");
});

it("preserves forwarded attachment placeholders in thread starter context", async () => {
@@ -206,9 +216,9 @@ describe("resolveDiscordThreadStarter", () => {
}),
});

expect(result).toBeTruthy();
expect(result!.text).toContain("[Forwarded message]");
expect(result!.text).toContain("<media:image> (1 image)");
const starter = requireThreadStarter(result);
expect(starter.text).toContain("[Forwarded message]");
expect(starter.text).toContain("<media:image> (1 image)");
});

it("preserves forwarded sticker placeholders in thread starter context", async () => {
@@ -229,9 +239,9 @@ describe("resolveDiscordThreadStarter", () => {
}),
});

expect(result).toBeTruthy();
expect(result!.text).toContain("[Forwarded message]");
expect(result!.text).toContain("<media:sticker> (1 sticker)");
const starter = requireThreadStarter(result);
expect(starter.text).toContain("[Forwarded message]");
expect(starter.text).toContain("<media:sticker> (1 sticker)");
});

it("uses the thread id as the message channel id for forum parents", async () => {
@@ -241,7 +251,7 @@ describe("resolveDiscordThreadStarter", () => {
parentType: ChannelType.GuildForum,
});

expect(result?.text).toBe("starter content");
expect(requireThreadStarter(result).text).toBe("starter content");
expect(get).toHaveBeenCalledWith(
expect.stringContaining("/channels/thread-1/messages/thread-1"),
);

@@ -8,8 +8,10 @@ import { createDiscordRequestClient, DISCORD_REST_TIMEOUT_MS } from "./proxy-req
describe("createDiscordRequestClient", () => {
it("preserves the REST client's abort signal for proxied fetch calls", async () => {
const fetchSpy = vi.fn(async (_input: string | URL | Request, init?: RequestInit) => {
expect(init?.signal).toBeDefined();
expect(init!.signal!.aborted).toBe(false);
if (!(init?.signal instanceof AbortSignal)) {
throw new Error("Expected proxied fetch init to include an AbortSignal");
}
expect(init.signal.aborted).toBe(false);
return createJsonResponse([]);
});

@@ -67,8 +69,10 @@ describe("createDiscordRequestClient", () => {

await client.get("/channels/123/messages");

expect(receivedSignal).toBeDefined();
expect(receivedSignal!.aborted).toBe(false);
if (!receivedSignal) {
throw new Error("Expected proxied fetch to receive the REST timeout signal");
}
expect(receivedSignal.aborted).toBe(false);
});

it("exports a reasonable timeout constant", () => {

@@ -234,13 +234,15 @@ describe("Discord security audit findings", () => {
if (testCase.expectNoNameBasedFinding) {
expect(nameBasedFinding).toBeUndefined();
} else {
expect(nameBasedFinding).toBeDefined();
expect(nameBasedFinding?.severity).toBe(testCase.expectNameBasedSeverity);
if (!nameBasedFinding) {
throw new Error(`expected name-based finding for ${testCase.name}`);
}
expect(nameBasedFinding.severity).toBe(testCase.expectNameBasedSeverity);
for (const snippet of testCase.detailIncludes ?? []) {
expect(nameBasedFinding?.detail).toContain(snippet);
expect(nameBasedFinding.detail).toContain(snippet);
}
for (const snippet of testCase.detailExcludes ?? []) {
expect(nameBasedFinding?.detail).not.toContain(snippet);
expect(nameBasedFinding.detail).not.toContain(snippet);
}
}
});

@@ -10,6 +10,28 @@ vi.mock("./send.shared.js", () => ({

const { readMessagesDiscord, searchMessagesDiscord } = await import("./send.messages.js");

const restErrorCases: Array<{
name: string;
invoke: () => Promise<unknown>;
}> = [
{
name: "readMessagesDiscord",
invoke: () => readMessagesDiscord("C1", {}, { cfg: {} as never }),
},
{
name: "searchMessagesDiscord",
invoke: () => searchMessagesDiscord({ guildId: "G1", content: "test" }, { cfg: {} as never }),
},
];

describe("Discord message REST error handling", () => {
it.each(restErrorCases)("$name propagates REST errors", async ({ invoke }) => {
restMock.get.mockRejectedValueOnce(new Error("Discord API error"));

await expect(invoke()).rejects.toThrow("Discord API error");
});
});

describe("readMessagesDiscord", () => {
it("returns messages from the REST client", async () => {
const messages = [{ id: "1", content: "hello" }];
@@ -20,14 +42,6 @@ describe("readMessagesDiscord", () => {
expect(result).toEqual(messages);
expect(restMock.get).toHaveBeenCalledWith(expect.stringContaining("C1"), { limit: 5 });
});

it("propagates REST errors", async () => {
restMock.get.mockRejectedValueOnce(new Error("Discord API error"));

await expect(readMessagesDiscord("C1", {}, { cfg: {} as never })).rejects.toThrow(
"Discord API error",
);
});
});

describe("searchMessagesDiscord", () => {
@@ -42,12 +56,4 @@ describe("searchMessagesDiscord", () => {

expect(result).toEqual(results);
});

it("propagates REST errors", async () => {
restMock.get.mockRejectedValueOnce(new Error("Discord API error"));

await expect(
searchMessagesDiscord({ guildId: "G1", content: "test" }, { cfg: {} as never }),
).rejects.toThrow("Discord API error");
});
});

@@ -263,9 +263,29 @@ describe("DiscordVoiceManager", () => {
]);
};

const getSessionEntry = (
manager: InstanceType<typeof managerModule.DiscordVoiceManager>,
guildId = "g1",
) => {
const entry = (manager as unknown as { sessions: Map<string, unknown> }).sessions.get(guildId);
if (!entry) {
throw new Error(`expected Discord voice session for guild ${guildId}`);
}
return entry;
};

const getLastAudioPlayer = () => {
const player = createAudioPlayerMock.mock.results.at(-1)?.value as
| { state: { status: string } }
| undefined;
if (!player) {
throw new Error("expected Discord voice audio player to be created");
}
return player;
};

const emitDecryptFailure = (manager: InstanceType<typeof managerModule.DiscordVoiceManager>) => {
const entry = (manager as unknown as { sessions: Map<string, unknown> }).sessions.get("g1");
expect(entry).toBeDefined();
const entry = getSessionEntry(manager);
(
manager as unknown as { handleReceiveError: (e: unknown, err: unknown) => void }
).handleReceiveError(
@@ -369,6 +389,29 @@ describe("DiscordVoiceManager", () => {
expectConnectedStatus(manager, "1001");
});

it("autoJoin uses the last configured channel for duplicate guild entries", async () => {
const manager = createManager({
voice: {
enabled: true,
autoJoin: [
{ guildId: "g1", channelId: "1001" },
{ guildId: "g1", channelId: "1002" },
],
},
});

await manager.autoJoin();

expect(joinVoiceChannelMock).toHaveBeenCalledTimes(1);
expect(joinVoiceChannelMock).toHaveBeenCalledWith(
expect.objectContaining({
guildId: "g1",
channelId: "1002",
}),
);
expectConnectedStatus(manager, "1002");
});

it("does not throw when stale tracked voice connections are already destroyed", async () => {
const staleConnection = createConnectionMock();
staleConnection.state.status = "destroyed";
@@ -424,10 +467,8 @@ describe("DiscordVoiceManager", () => {

await manager.join({ guildId: "g1", channelId: "1001" });

const player = createAudioPlayerMock.mock.results.at(-1)?.value;
const entry = (manager as unknown as { sessions: Map<string, unknown> }).sessions.get("g1");
expect(entry).toBeDefined();
expect(player).toBeDefined();
const player = getLastAudioPlayer();
const entry = getSessionEntry(manager);
player.state.status = "playing";

await (
@@ -567,29 +608,24 @@ describe("DiscordVoiceManager", () => {

await manager.join({ guildId: "g1", channelId: "1001" });

const entry = (manager as unknown as { sessions: Map<string, unknown> }).sessions.get(
"g1",
) as
| {
guildId: string;
channelId: string;
capture: {
activeSpeakers: Set<string>;
activeCaptureStreams: Map<
string,
{ generation: number; stream: { destroy: () => void } }
>;
captureFinalizeTimers: Map<string, unknown>;
captureGenerations: Map<string, number>;
};
}
| undefined;
expect(entry).toBeDefined();
const entry = getSessionEntry(manager) as {
guildId: string;
channelId: string;
capture: {
activeSpeakers: Set<string>;
activeCaptureStreams: Map<
string,
{ generation: number; stream: { destroy: () => void } }
>;
captureFinalizeTimers: Map<string, unknown>;
captureGenerations: Map<string, number>;
};
};

const firstStream = { destroy: vi.fn() };
entry?.capture.activeSpeakers.add("u1");
entry?.capture.captureGenerations.set("u1", 1);
entry?.capture.activeCaptureStreams.set("u1", { generation: 1, stream: firstStream });
entry.capture.activeSpeakers.add("u1");
entry.capture.captureGenerations.set("u1", 1);
entry.capture.activeCaptureStreams.set("u1", { generation: 1, stream: firstStream });

(
manager as unknown as {

@@ -134,21 +134,32 @@ export class DiscordVoiceManager {
}
this.autoJoinTask = (async () => {
const entries = this.params.discordConfig.voice?.autoJoin ?? [];
logVoiceVerbose(`autoJoin: ${entries.length} entries`);
const seenGuilds = new Set<string>();
const entriesByGuild = new Map<string, { guildId: string; channelId: string }>();
const duplicateGuilds = new Set<string>();
for (const entry of entries) {
const guildId = entry.guildId.trim();
if (!guildId) {
const channelId = entry.channelId.trim();
if (!guildId || !channelId) {
continue;
}
if (seenGuilds.has(guildId)) {
if (entriesByGuild.has(guildId)) {
duplicateGuilds.add(guildId);
}
entriesByGuild.set(guildId, { guildId, channelId });
}

logVoiceVerbose(`autoJoin: ${entries.length} entries, ${entriesByGuild.size} guilds`);
for (const guildId of duplicateGuilds) {
const selected = entriesByGuild.get(guildId);
if (selected) {
logger.warn(
`discord voice: autoJoin has multiple entries for guild ${guildId}; skipping`,
`discord voice: autoJoin has multiple entries for guild ${guildId}; using channel ${selected.channelId}`,
);
continue;
}
seenGuilds.add(guildId);
logVoiceVerbose(`autoJoin: joining guild ${guildId} channel ${entry.channelId}`);
}

for (const entry of entriesByGuild.values()) {
logVoiceVerbose(`autoJoin: joining guild ${entry.guildId} channel ${entry.channelId}`);
await this.join({
guildId: entry.guildId,
channelId: entry.channelId,

@@ -73,6 +73,7 @@ describeLive("elevenlabs plugin live", () => {
|
||||
outputFormat: "ulaw_8000",
|
||||
timeoutMs: 30_000,
|
||||
});
|
||||
expect(speech.byteLength).toBeGreaterThan(0);
|
||||
|
||||
await runRealtimeSttLiveTest({
|
||||
provider,
|
||||
|
||||
@@ -9,7 +9,7 @@ describe("elevenLabsMediaUnderstandingProvider", () => {
|
||||
expect(elevenLabsMediaUnderstandingProvider.id).toBe("elevenlabs");
|
||||
expect(elevenLabsMediaUnderstandingProvider.capabilities).toEqual(["audio"]);
|
||||
expect(elevenLabsMediaUnderstandingProvider.defaultModels?.audio).toBe("scribe_v2");
|
||||
expect(elevenLabsMediaUnderstandingProvider.transcribeAudio).toBeDefined();
|
||||
expect(elevenLabsMediaUnderstandingProvider.transcribeAudio).toBeTypeOf("function");
|
||||
});
|
||||
|
||||
it("posts multipart audio to ElevenLabs speech-to-text", async () => {
|
||||
|
||||
@@ -12,14 +12,16 @@ import {

function expectFalJsonPost(params: { call: number; url: string; body: Record<string, unknown> }) {
const request = fetchWithSsrFGuardMock.mock.calls[params.call - 1]?.[0];
expect(request).toBeTruthy();
expect(request?.url).toBe(params.url);
expect(request?.auditContext).toBe("fal-image-generate");
expect(request?.init?.method).toBe("POST");
const headers = new Headers(request?.init?.headers);
if (!request) {
throw new Error(`expected fal fetch request #${params.call}`);
}
expect(request.url).toBe(params.url);
expect(request.auditContext).toBe("fal-image-generate");
expect(request.init?.method).toBe("POST");
const headers = new Headers(request.init?.headers);
expect(headers.get("authorization")).toBe("Key fal-test-key");
expect(headers.get("content-type")).toBe("application/json");
expect(JSON.parse(String(request?.init?.body))).toEqual(params.body);
expect(JSON.parse(String(request.init?.body))).toEqual(params.body);
}

describe("fal image-generation provider", () => {

@@ -13,7 +13,11 @@ describe("feishu setup entry", () => {
it("declares the setup entry without importing Feishu runtime dependencies", async () => {
const { default: setupEntry } = await import("./setup-entry.js");

expect(setupEntry.kind).toBe("bundled-channel-setup-entry");
expect(typeof setupEntry.loadSetupPlugin).toBe("function");
expect(setupEntry).toEqual(
expect.objectContaining({
kind: "bundled-channel-setup-entry",
loadSetupPlugin: expect.any(Function),
}),
);
});
});

@@ -111,6 +111,16 @@ type HttpInstanceLike = {
post: (url: string, body?: unknown, options?: Record<string, unknown>) => Promise<unknown>;
};

function requireHttpInstance(value: unknown): HttpInstanceLike {
if (isRecord(value) && typeof value.get === "function" && typeof value.post === "function") {
return {
get: value.get as HttpInstanceLike["get"],
post: value.post as HttpInstanceLike["post"],
};
}
throw new Error("expected Feishu HTTP instance");
}

function readCallOptions(
mock: { mock: { calls: unknown[][] } },
index = -1,
@@ -222,25 +232,12 @@ afterAll(() => {
});

describe("createFeishuClient HTTP timeout", () => {
const getLastClientHttpInstance = (): HttpInstanceLike | undefined => {
const httpInstance = readCallOptions(clientCtorMock).httpInstance;
if (
isRecord(httpInstance) &&
typeof httpInstance.get === "function" &&
typeof httpInstance.post === "function"
) {
return {
get: httpInstance.get as HttpInstanceLike["get"],
post: httpInstance.post as HttpInstanceLike["post"],
};
}
return undefined;
};
const readLastClientHttpInstance = (): HttpInstanceLike =>
requireHttpInstance(readCallOptions(clientCtorMock).httpInstance);

const expectGetCallTimeout = async (timeout: number) => {
const httpInstance = getLastClientHttpInstance();
expect(httpInstance).toBeDefined();
await httpInstance?.get("https://example.com/api");
const httpInstance = readLastClientHttpInstance();
await httpInstance.get("https://example.com/api");
expect(mockBaseHttpInstance.get).toHaveBeenCalledWith(
"https://example.com/api",
expect.objectContaining({ timeout }),
@@ -250,16 +247,18 @@ describe("createFeishuClient HTTP timeout", () => {
it("passes a custom httpInstance with default timeout to Lark.Client", () => {
createFeishuClient({ appId: "app_1", appSecret: "secret_1", accountId: "timeout-test" }); // pragma: allowlist secret

expect(readCallOptions(clientCtorMock).httpInstance).toBeDefined();
expect(readLastClientHttpInstance()).toMatchObject({
get: expect.any(Function),
post: expect.any(Function),
});
});

it("injects default timeout into HTTP request options", async () => {
createFeishuClient({ appId: "app_2", appSecret: "secret_2", accountId: "timeout-inject" }); // pragma: allowlist secret

const httpInstance = getLastClientHttpInstance();
const httpInstance = readLastClientHttpInstance();

expect(httpInstance).toBeDefined();
await httpInstance?.post(
await httpInstance.post(
"https://example.com/api",
{ data: 1 },
{ headers: { "X-Custom": "yes" } },
@@ -275,10 +274,9 @@ describe("createFeishuClient HTTP timeout", () => {
it("allows explicit timeout override per-request", async () => {
createFeishuClient({ appId: "app_3", appSecret: "secret_3", accountId: "timeout-override" }); // pragma: allowlist secret

const httpInstance = getLastClientHttpInstance();
const httpInstance = readLastClientHttpInstance();

expect(httpInstance).toBeDefined();
await httpInstance?.get("https://example.com/api", { timeout: 5_000 });
await httpInstance.get("https://example.com/api", { timeout: 5_000 });

expect(mockBaseHttpInstance.get).toHaveBeenCalledWith(
"https://example.com/api",
@@ -362,9 +360,8 @@ describe("createFeishuClient HTTP timeout", () => {
});

expect(clientCtorMock.mock.calls.length).toBe(2);
const httpInstance = getLastClientHttpInstance();
expect(httpInstance).toBeDefined();
await httpInstance?.get("https://example.com/api");
const httpInstance = readLastClientHttpInstance();
await httpInstance.get("https://example.com/api");

expect(mockBaseHttpInstance.get).toHaveBeenCalledWith(
"https://example.com/api",

@@ -61,7 +61,6 @@ describe("createFeishuCommentReplyDispatcher", () => {

function latestReplyDispatcherOptions() {
const options = createReplyDispatcherWithTypingMock.mock.calls.at(-1)?.[0];
expect(options).toBeDefined();
if (!options) {
throw new Error("expected reply dispatcher options");
}

@@ -163,7 +163,10 @@ describe("feishu_doc image fetch hardening", () => {
});
registerFeishuDocTools(harness.api);
const tool = harness.resolveTool("feishu_doc", context);
expect(tool).toBeDefined();
if (!tool) {
throw new Error("expected Feishu doc tool");
}
expect(tool.execute).toEqual(expect.any(Function));
return tool;
}

@@ -206,8 +209,8 @@ describe("feishu_doc image fetch hardening", () => {
expect(blockDescendantCreateMock).toHaveBeenCalledTimes(1);
const call = blockDescendantCreateMock.mock.calls[0]?.[0];
expect(call?.data.children_id).toEqual(["h1", "t1", "h2"]);
expect(call?.data.descendants).toBeDefined();
expect(call?.data.descendants.length).toBeGreaterThanOrEqual(3);
expect(call?.data.descendants).toEqual(expect.arrayContaining(blocks));
expect(call?.data.descendants).toHaveLength(3);

expect(result.details.blocks_added).toBe(3);
});

@@ -566,8 +566,10 @@ describe("sendMediaFeishu msg_type routing", () => {
);
expectMediaTimeoutClientConfigured();
expect(result.buffer).toEqual(Buffer.from("image-data"));
expect(capturedPath).toBeDefined();
expectPathIsolatedToTmpRoot(capturedPath as string, imageKey);
if (!capturedPath) {
throw new Error("expected Feishu image temp path");
}
expectPathIsolatedToTmpRoot(capturedPath, imageKey);
});

it("uses isolated temp paths for message resource downloads", async () => {
@@ -589,8 +591,10 @@ describe("sendMediaFeishu msg_type routing", () => {
});

expect(result.buffer).toEqual(Buffer.from("resource-data"));
expect(capturedPath).toBeDefined();
expectPathIsolatedToTmpRoot(capturedPath as string, fileKey);
if (!capturedPath) {
throw new Error("expected Feishu resource temp path");
}
expectPathIsolatedToTmpRoot(capturedPath, fileKey);
});

it("rejects invalid image keys before calling feishu api", async () => {

@@ -188,7 +188,6 @@ describe("feishuOutbound.sendText local-image auto-convert", () => {
throw new Error("feishuOutbound.chunker missing");
}

expect(() => chunker("hello world", 5)).not.toThrow();
expect(chunker("hello world", 5)).toEqual(["hello", "world"]);
});

@@ -298,7 +298,7 @@ describe("createFeishuReplyDispatcher streaming behavior", () => {
expect(sendMediaFeishuMock).not.toHaveBeenCalled();
});

it("disables block streaming by default to prevent silent reply drops", async () => {
it("disables block streaming by default to prevent silent reply drops", () => {
const result = createFeishuReplyDispatcher({
cfg: {} as never,
agentId: "agent",
@@ -334,7 +334,7 @@ describe("createFeishuReplyDispatcher streaming behavior", () => {
});
});

it("keeps core block streaming disabled when Feishu blockStreaming is explicitly false", async () => {
it("keeps core block streaming disabled when Feishu blockStreaming is explicitly false", () => {
resolveFeishuAccountMock.mockReturnValue({
accountId: "main",
appId: "app_id",
@@ -910,7 +910,9 @@ describe("createFeishuReplyDispatcher streaming behavior", () => {
expect(reasoningUpdate).not.toMatch(/> _.*_/);

const combinedUpdate = updateCalls.find((c) => c.includes("Thinking") && c.includes("---"));
expect(combinedUpdate).toBeDefined();
if (!combinedUpdate) {
throw new Error("expected combined reasoning and final-answer streaming update");
}

expect(streamingInstances[0].close).toHaveBeenCalledTimes(1);
const closeArg = streamingInstances[0].close.mock.calls[0][0] as string;

@@ -509,21 +509,36 @@ describe("resolveFeishuCardTemplate", () => {
});
});

describe("buildStructuredCard", () => {
it("uses schema-2.0 width config instead of legacy wide screen mode", () => {
const card = buildStructuredCard("hello") as {
config: {
width_mode?: string;
enable_forward?: boolean;
wide_screen_mode?: boolean;
};
function expectSchema2WidthConfig(card: unknown) {
const typedCard = card as {
config: {
width_mode?: string;
enable_forward?: boolean;
wide_screen_mode?: boolean;
};
};

expect(card.config.width_mode).toBe("fill");
expect(card.config.enable_forward).toBeUndefined();
expect(card.config.wide_screen_mode).toBeUndefined();
expect(typedCard.config.width_mode).toBe("fill");
expect(typedCard.config.enable_forward).toBeUndefined();
expect(typedCard.config.wide_screen_mode).toBeUndefined();
}

describe("Feishu card schema config", () => {
it.each([
{
name: "structured card",
build: () => buildStructuredCard("hello"),
},
{
name: "markdown card",
build: () => buildMarkdownCard("hello"),
},
])("$name uses schema-2.0 width config instead of legacy wide screen mode", ({ build }) => {
expectSchema2WidthConfig(build());
});
});

describe("buildStructuredCard", () => {
it("falls back to blue when the header template is unsupported", () => {
const card = buildStructuredCard("hello", {
header: {
@@ -542,19 +557,3 @@ describe("buildStructuredCard", () => {
);
});
});

describe("buildMarkdownCard", () => {
it("uses schema-2.0 width config instead of legacy wide screen mode", () => {
const card = buildMarkdownCard("hello") as {
config: {
width_mode?: string;
enable_forward?: boolean;
wide_screen_mode?: boolean;
};
};

expect(card.config.width_mode).toBe("fill");
expect(card.config.enable_forward).toBeUndefined();
expect(card.config.wide_screen_mode).toBeUndefined();
});
});

@@ -101,21 +101,24 @@ describe("feishu setup wizard", () => {
) as never,
});

await expect(
runSetupWizardConfigure({
configure: feishuConfigure,
cfg: {
channels: {
feishu: {
appId: { source: "env", id: "FEISHU_APP_ID", provider: "default" },
appSecret: { source: "env", id: "FEISHU_APP_SECRET", provider: "default" },
},
const result = await runSetupWizardConfigure({
configure: feishuConfigure,
cfg: {
channels: {
feishu: {
appId: { source: "env", id: "FEISHU_APP_ID", provider: "default" },
appSecret: { source: "env", id: "FEISHU_APP_SECRET", provider: "default" },
},
} as never,
prompter,
runtime: createNonExitingRuntimeEnv(),
}),
).resolves.toBeTruthy();
},
} as never,
prompter,
runtime: createNonExitingRuntimeEnv(),
});

expect(result.cfg.channels?.feishu).toMatchObject({
appId: "cli_from_prompt",
appSecret: "secret_from_prompt",
});
});
});

@@ -67,7 +67,10 @@ describe("handleFileFetch — fs errors", () => {
const r = await handleFileFetch({ path: tmpRoot });
expect(r).toMatchObject({ ok: false, code: "IS_DIRECTORY" });
// canonical path is reported back so the caller can re-check policy
expect(r.ok ? null : r.canonicalPath).toBeTruthy();
if (r.ok) {
throw new Error("expected directory fetch to fail");
}
expect(r.canonicalPath).toBe(tmpRoot);
});
});

@@ -214,7 +214,7 @@ describe("firecrawl tools", () => {
expect(authHeader).toBe("Bearer firecrawl-test-key");
});

it("blocks private and non-http scrape targets before Firecrawl requests", async () => {
it("blocks private and non-http scrape targets before Firecrawl requests", () => {
expect(() =>
firecrawlClientTesting.assertFirecrawlScrapeTargetAllowed("https://example.com/page"),
).not.toThrow();

@@ -318,7 +318,7 @@ describe("github-copilot token", () => {
({ deriveCopilotApiBaseUrlFromToken, resolveCopilotApiToken } = await import("./token.js"));
});

it("derives baseUrl from token", async () => {
it("derives baseUrl from token", () => {
expect(deriveCopilotApiBaseUrlFromToken("token;proxy-ep=proxy.example.com;")).toBe(
"https://api.example.com",
);

@@ -15,7 +15,7 @@ function requireStreamFn(streamFn: ReturnType<typeof wrapCopilotProviderStream>)
}

describe("wrapCopilotAnthropicStream", () => {
it("adds Copilot headers and Anthropic cache markers for Claude payloads", async () => {
it("adds Copilot headers and Anthropic cache markers for Claude payloads", () => {
const payloads: Array<{
messages: Array<Record<string, unknown>>;
}> = [];

@@ -20,6 +20,17 @@ const convertMessagesForTest = convertMessages as unknown as (
context: Context,
) => ReturnType<typeof convertMessages>;

function requireRecordProperty(
record: Record<string, unknown>,
key: string,
): Record<string, unknown> {
const value = record[key];
if (!value || typeof value !== "object" || Array.isArray(value)) {
throw new Error(`expected object property ${key}`);
}
return value as Record<string, unknown>;
}

describe("google-shared convertTools", () => {
it("preserves parameters when type is missing", () => {
const tools = [
@@ -41,7 +52,9 @@ describe("google-shared convertTools", () => {
);

expect(params.type).toBeUndefined();
expect(params.properties).toBeDefined();
expect(params.properties).toEqual({
action: { type: "string" },
});
expect(params.required).toEqual(["action"]);
});

@@ -290,7 +303,9 @@ describe("google-shared convertMessages", () => {
(part) => typeof part === "object" && part !== null && "functionResponse" in part,
);
const toolResponse = asRecord(toolResponsePart);
expect(toolResponse.functionResponse).toBeTruthy();
expect(requireRecordProperty(toolResponse, "functionResponse")).toMatchObject({
name: "myTool",
});
expect(contents[3].role).toBe("user");
});

@@ -320,7 +335,9 @@ describe("google-shared convertMessages", () => {
(part) => typeof part === "object" && part !== null && "functionCall" in part,
);
const toolCall = asRecord(toolCallPart);
expect(toolCall.functionCall).toBeTruthy();
expect(requireRecordProperty(toolCall, "functionCall")).toMatchObject({
name: "myTool",
});
});

it("strips tool call and response ids for google-gemini-cli", () => {

@@ -190,8 +190,10 @@ describe("google provider plugin hooks", () => {
name: "Google Provider",
});
const provider = requireRegisteredProvider(providers, "google");
expect(provider.resolveThinkingProfile).toBeDefined();
const resolveThinkingProfile = provider.resolveThinkingProfile!;
if (!provider.resolveThinkingProfile) {
throw new Error("expected Google provider thinking profile resolver");
}
const resolveThinkingProfile = provider.resolveThinkingProfile;
const gemini3Profile = resolveThinkingProfile({
provider: "google",
modelId: "gemini-3.1-pro-preview",
@@ -246,9 +248,11 @@ describe("google provider plugin hooks", () => {
onClearAudio() {},
});

expect(bridge).toBeDefined();
expect(() => bridge?.sendAudio(Buffer.alloc(160))).not.toThrow();
expect(() => bridge?.setMediaTimestamp(20)).not.toThrow();
expect(() => bridge?.sendUserMessage?.("hello")).not.toThrow();
if (!bridge) {
throw new Error("expected Google realtime bridge");
}
expect(() => bridge.sendAudio(Buffer.alloc(160))).not.toThrow();
expect(() => bridge.setMediaTimestamp(20)).not.toThrow();
expect(() => bridge.sendUserMessage?.("hello")).not.toThrow();
});
});

@@ -25,6 +25,7 @@ describe("google model id helpers", () => {

it("keeps bare Gemini 3.1 Pro as an alias for Google's preview-suffixed API id", () => {
expect(normalizeGoogleModelId("gemini-3-pro")).toBe("gemini-3.1-pro-preview");
expect(normalizeGoogleModelId("gemini-3-pro-preview")).toBe("gemini-3.1-pro-preview");
expect(normalizeGoogleModelId("gemini-3.1-pro")).toBe("gemini-3.1-pro-preview");
expect(normalizeGoogleModelId("gemini-3.1-pro-preview")).toBe("gemini-3.1-pro-preview");
});

@@ -1,7 +1,7 @@
const ANTIGRAVITY_BARE_PRO_IDS = new Set(["gemini-3-pro", "gemini-3.1-pro", "gemini-3-1-pro"]);

export function normalizeGoogleModelId(id: string): string {
if (id === "gemini-3-pro") {
if (id === "gemini-3-pro" || id === "gemini-3-pro-preview") {
return "gemini-3.1-pro-preview";
}
if (id === "gemini-3-flash") {

@@ -450,7 +450,7 @@ describe("extractGeminiCliCredentials", () => {
setOAuthCredentialsFsForTest();
});

it("returns null when gemini binary is not in PATH", async () => {
it("returns null when gemini binary is not in PATH", () => {
process.env.PATH = "/nonexistent";
mockExistsSync.mockReturnValue(false);

@@ -458,7 +458,7 @@ describe("extractGeminiCliCredentials", () => {
expect(extractGeminiCliCredentials()).toBeNull();
});

it("extracts credentials from oauth2.js in known path", async () => {
it("extracts credentials from oauth2.js in known path", () => {
installGeminiLayout({ oauth2Exists: true, oauth2Content: FAKE_OAUTH2_CONTENT });

clearCredentialsCache();
@@ -467,7 +467,7 @@ describe("extractGeminiCliCredentials", () => {
expectFakeCliCredentials(result);
});

it("extracts credentials when PATH entry is an npm global shim", async () => {
it("extracts credentials when PATH entry is an npm global shim", () => {
installNpmShimLayout({ oauth2Exists: true, oauth2Content: FAKE_OAUTH2_CONTENT });

clearCredentialsCache();
@@ -476,7 +476,7 @@ describe("extractGeminiCliCredentials", () => {
expectFakeCliCredentials(result);
});

it("extracts credentials from bundled npm installs", async () => {
it("extracts credentials from bundled npm installs", () => {
installBundledNpmLayout({
bundleContent: `
const OAUTH_CLIENT_ID = "${FAKE_CLIENT_ID}";
@@ -490,7 +490,7 @@ describe("extractGeminiCliCredentials", () => {
expectFakeCliCredentials(result);
});

it("extracts credentials from Homebrew libexec installs", async () => {
it("extracts credentials from Homebrew libexec installs", () => {
installHomebrewLibexecLayout({ oauth2Content: FAKE_OAUTH2_CONTENT });

clearCredentialsCache();
@@ -499,21 +499,21 @@ describe("extractGeminiCliCredentials", () => {
expectFakeCliCredentials(result);
});

it("returns null when oauth2.js cannot be found", async () => {
it("returns null when oauth2.js cannot be found", () => {
installGeminiLayout({ oauth2Exists: false, readdir: [] });

clearCredentialsCache();
expect(extractGeminiCliCredentials()).toBeNull();
});

it("returns null when oauth2.js lacks credentials", async () => {
it("returns null when oauth2.js lacks credentials", () => {
installGeminiLayout({ oauth2Exists: true, oauth2Content: "// no credentials here" });

clearCredentialsCache();
expect(extractGeminiCliCredentials()).toBeNull();
});

it("caches credentials after first extraction", async () => {
it("caches credentials after first extraction", () => {
installGeminiLayout({ oauth2Exists: true, oauth2Content: FAKE_OAUTH2_CONTENT });

clearCredentialsCache();
@@ -529,7 +529,7 @@ describe("extractGeminiCliCredentials", () => {
expect(mockReadFileSync.mock.calls.length).toBe(readCount);
});

it("skips unrelated oauth2.js files when gemini resolves inside a Windows nvm root", async () => {
it("skips unrelated oauth2.js files when gemini resolves inside a Windows nvm root", () => {
const { unrelatedOauth2Path } = installWindowsNvmLayoutWithUnrelatedOauth({
oauth2Content: FAKE_OAUTH2_CONTENT,
unrelatedOauth2Content: "// unrelated oauth file",
@@ -657,6 +657,23 @@ describe("loginGeminiCliOAuth", () => {
return JSON.parse(value);
}

function requireString(value: string | null | undefined, label: string): string {
if (!value) {
throw new Error(`Expected ${label}`);
}
return value;
}

function requireRecordedRequest(
request: RecordedFetchRequest | undefined,
label: string,
): RecordedFetchRequest {
if (!request) {
throw new Error(`Expected ${label} request`);
}
return request;
}

type LoginGeminiCliOAuthFn = (options: {
isRemote: boolean;
openUrl: () => Promise<void>;
@@ -757,8 +774,10 @@ describe("loginGeminiCliOAuth", () => {
`gl-node/${process.versions.node}`,
);

const clientMetadata = getHeaderValue(firstHeaders, "Client-Metadata");
expect(clientMetadata).toBeDefined();
const clientMetadata = requireString(
getHeaderValue(firstHeaders, "Client-Metadata"),
"Client-Metadata",
);
expect(parseJsonString(clientMetadata, "Client-Metadata")).toEqual(
EXPECTED_LOAD_CODE_ASSIST_METADATA,
);
@@ -784,13 +803,16 @@ describe("loginGeminiCliOAuth", () => {
const { loginGeminiCliOAuth } = await import("./oauth.js");
const { authUrl } = await runRemoteLoginWithCapturedAuthUrl(loginGeminiCliOAuth);

const authState = new URL(authUrl).searchParams.get("state");
expect(authState).toBeTruthy();
const authState = requireString(new URL(authUrl).searchParams.get("state"), "OAuth state");

const tokenRequest = requests.find((request) => request.url === TOKEN_URL);
expect(tokenRequest).toBeDefined();
const codeVerifier = getFormField(tokenRequest?.init?.body, "code_verifier");
expect(codeVerifier).toBeTruthy();
const tokenRequest = requireRecordedRequest(
requests.find((request) => request.url === TOKEN_URL),
"token",
);
const codeVerifier = requireString(
getFormField(tokenRequest.init?.body, "code_verifier"),
"PKCE code verifier",
);
expect(codeVerifier).not.toBe(authState);
});

@@ -51,7 +51,7 @@ describe("googlechat message actions", () => {
vi.resetModules();
});

it("describes send and reaction actions only when enabled accounts exist", async () => {
it("describes send and reaction actions only when enabled accounts exist", () => {
listEnabledGoogleChatAccounts.mockReturnValueOnce([]);
expect(googlechatMessageActions.describeMessageTool?.({ cfg: {} as never })).toBeNull();

@@ -455,7 +455,9 @@ describe("googlechatPlugin outbound resolveTarget", () => {
if (result.ok) {
throw new Error("Expected invalid target to fail");
}
expect(result.error).toBeDefined();
expect(result.error.message).toBe(
"Google Chat target is required (<spaces/{space}|users/{user}>)",
);
});

it("errors when no target is provided", () => {
@@ -467,7 +469,9 @@ describe("googlechatPlugin outbound resolveTarget", () => {
if (result.ok) {
throw new Error("Expected missing target to fail");
}
expect(result.error).toBeDefined();
expect(result.error.message).toBe(
"Google Chat target is required (<spaces/{space}|users/{user}>)",
);
});
});

@@ -378,7 +378,7 @@ describe("googlechat google auth runtime", () => {
expect(second.interceptors.response.add).toHaveBeenCalledOnce();
});

it("normalizes Google auth request headers before upstream interceptors run", async () => {
it("normalizes Google auth request headers before upstream interceptors run", () => {
const config = {
headers: { "x-test": "1" },
url: new URL("https://www.googleapis.com/oauth2/v1/certs"),

@@ -55,3 +55,4 @@ export {
parseIMessageAllowTarget,
parseIMessageTarget,
} from "./src/targets.js";
export { IMESSAGE_ACTION_NAMES, IMESSAGE_ACTIONS } from "./src/actions-contract.js";

1 extensions/imessage/message-tool-api.ts Normal file
@@ -0,0 +1 @@
export { describeIMessageMessageTool as describeMessageTool } from "./src/message-tool-api.js";
@@ -28,6 +28,7 @@ export type { MonitorIMessageOpts } from "./src/monitor.js";
export { probeIMessage } from "./src/probe.js";
export type { IMessageProbe } from "./src/probe.js";
export { sendMessageIMessage } from "./src/send.js";
export { imessageMessageActions } from "./src/actions.js";
export { setIMessageRuntime } from "./src/runtime.js";
export { chunkTextForOutbound } from "./src/channel-api.js";
export type IMessageAccountConfig = Omit<

@@ -17,6 +17,10 @@ vi.mock("./probe.js", () => ({
getCachedIMessagePrivateApiStatus: probeMock.getCachedIMessagePrivateApiStatus,
}));

vi.mock("./private-api-status.js", () => ({
getCachedIMessagePrivateApiStatus: probeMock.getCachedIMessagePrivateApiStatus,
}));

vi.mock("./actions.runtime.js", () => ({
imessageActionsRuntime: runtimeMock,
}));
@@ -126,6 +130,28 @@ describe("imessage message actions", () => {
expect(described?.actions).toContain("edit");
});

it("rejects configured-off actions at execution time", async () => {
probeMock.getCachedIMessagePrivateApiStatus.mockReturnValue({
available: true,
v2Ready: true,
selectors: {},
});

await expect(
imessageMessageActions.handleAction?.({
action: "react",
cfg: cfg({ reactions: false }),
params: {
chatGuid: "iMessage;+;chat0000",
messageId: "message-guid",
emoji: "👍",
},
} as never),
).rejects.toThrow(/disabled in config/i);

expect(runtimeMock.sendReaction).not.toHaveBeenCalled();
});

it("maps message tool reactions to imsg tapback kinds", async () => {
probeMock.getCachedIMessagePrivateApiStatus.mockReturnValue({
available: true,
@@ -506,30 +532,38 @@ describe("imessage message actions", () => {
});
});

it("routes upload-file through the private API attachment bridge", async () => {
probeMock.getCachedIMessagePrivateApiStatus.mockReturnValue({
available: true,
v2Ready: true,
selectors: {},
});
runtimeMock.sendAttachment.mockResolvedValue({ messageId: "sent-guid" });
it.each([
["asVoice", { asVoice: true }],
["as_voice", { as_voice: true }],
])(
"routes upload-file through the private API attachment bridge with %s",
async (_label, voiceParam) => {
probeMock.getCachedIMessagePrivateApiStatus.mockReturnValue({
available: true,
v2Ready: true,
selectors: {},
});
runtimeMock.sendAttachment.mockResolvedValue({ messageId: "sent-guid" });

const result = await imessageMessageActions.handleAction?.({
action: "upload-file",
cfg: cfg(),
params: {
chatGuid: "iMessage;+;chat0000",
filename: "photo.jpg",
buffer: Buffer.from("image").toString("base64"),
},
} as never);
const result = await imessageMessageActions.handleAction?.({
action: "upload-file",
cfg: cfg(),
params: {
chatGuid: "iMessage;+;chat0000",
filename: "photo.jpg",
buffer: Buffer.from("image").toString("base64"),
...voiceParam,
},
} as never);

expect(runtimeMock.sendAttachment).toHaveBeenCalledWith(
expect.objectContaining({
chatGuid: "iMessage;+;chat0000",
filename: "photo.jpg",
}),
);
expect(result?.details).toEqual({ ok: true, messageId: "sent-guid" });
});
expect(runtimeMock.sendAttachment).toHaveBeenCalledWith(
expect.objectContaining({
chatGuid: "iMessage;+;chat0000",
filename: "photo.jpg",
asVoice: true,
}),
);
expect(result?.details).toEqual({ ok: true, messageId: "sent-guid" });
},
);
});

@@ -16,13 +16,10 @@ import { extractToolSend } from "openclaw/plugin-sdk/tool-send";
 import { resolveIMessageAccount } from "./accounts.js";
 import { IMESSAGE_ACTION_NAMES, IMESSAGE_ACTIONS } from "./actions-contract.js";
 import { DEFAULT_IMESSAGE_PROBE_TIMEOUT_MS } from "./constants.js";
+import { describeIMessageMessageTool } from "./message-tool-api.js";
 import { findLatestIMessageEntryForChat, type IMessageChatContext } from "./monitor-reply-cache.js";
 import { getCachedIMessagePrivateApiStatus } from "./probe.js";
-import {
-  inferIMessageTargetChatType,
-  parseIMessageTarget,
-  type IMessageTarget,
-} from "./targets.js";
+import { parseIMessageTarget, type IMessageTarget } from "./targets.js";

 const loadIMessageActionsRuntime = createLazyRuntimeNamedExport(
   () => import("./actions.runtime.js"),
@@ -35,20 +32,6 @@ const SUPPORTED_ACTIONS = new Set<ChannelMessageActionName>([
   ...IMESSAGE_ACTION_NAMES,
   "upload-file",
 ]);
-const PRIVATE_API_ACTIONS = new Set<ChannelMessageActionName>([
-  "react",
-  "edit",
-  "unsend",
-  "reply",
-  "sendWithEffect",
-  "renameGroup",
-  "setGroupIcon",
-  "addParticipant",
-  "removeParticipant",
-  "leaveGroup",
-  "sendAttachment",
-]);
-
 function readMessageText(params: Record<string, unknown>): string | undefined {
   return readStringParam(params, "text") ?? readStringParam(params, "message");
 }
@@ -78,16 +61,6 @@ function readMessageIdWithChatFallback(
   return readStringParam(params, "messageId", { required: true });
 }

-function isGroupTarget(raw?: string | null): boolean {
-  // Defer to the canonical target classifier so action gating and the
-  // routing layer can't drift apart on edge cases (URI-encoded targets,
-  // service prefixes, etc.).
-  if (!raw) {
-    return false;
-  }
-  return inferIMessageTargetChatType(raw) === "group";
-}
-
 type IMessageActionsRuntime = Awaited<ReturnType<typeof loadIMessageActionsRuntime>>;

 async function resolveChatGuid(params: {
@@ -329,51 +302,34 @@ function effectIdFromParam(raw?: string): string | undefined {
   );
 }

+function assertActionEnabled(
+  action: ChannelMessageActionName,
+  actionsConfig: Record<string, boolean | undefined> | undefined,
+): void {
+  const canonicalAction = action === "upload-file" ? "sendAttachment" : action;
+  const spec = IMESSAGE_ACTIONS[canonicalAction as keyof typeof IMESSAGE_ACTIONS];
+  if (!spec?.gate || !createActionGate(actionsConfig)(spec.gate)) {
+    throw new Error(`iMessage ${action} is disabled in config.`);
+  }
+}
+
 export const imessageMessageActions: ChannelMessageActionAdapter = {
-  describeMessageTool: ({ cfg, accountId, currentChannelId }) => {
-    const account = resolveIMessageAccount({ cfg, accountId });
-    if (!account.enabled || !account.configured) {
-      return null;
-    }
-    const privateApiStatus = getCachedIMessagePrivateApiStatus(
-      account.config.cliPath?.trim() || "imsg",
-    );
-    const gate = createActionGate(account.config.actions);
-    const actions = new Set<ChannelMessageActionName>();
-    for (const action of IMESSAGE_ACTION_NAMES) {
-      const spec = IMESSAGE_ACTIONS[action];
-      if (!spec?.gate || !gate(spec.gate)) {
-        continue;
-      }
-      if (privateApiStatus?.available === false && PRIVATE_API_ACTIONS.has(action)) {
-        continue;
-      }
-      if (
-        action === "edit" &&
-        privateApiStatus?.selectors &&
-        !privateApiStatus.selectors.editMessage &&
-        !privateApiStatus.selectors.editMessageItem
-      ) {
-        continue;
-      }
-      if (action === "unsend" && privateApiStatus?.selectors?.retractMessagePart !== true) {
-        continue;
-      }
-      actions.add(action);
-    }
-    if (!isGroupTarget(currentChannelId)) {
-      for (const action of IMESSAGE_ACTION_NAMES) {
-        if ("groupOnly" in IMESSAGE_ACTIONS[action] && IMESSAGE_ACTIONS[action].groupOnly) {
-          actions.delete(action);
-        }
-      }
-    }
-    if (actions.delete("sendAttachment")) {
-      actions.add("upload-file");
-    }
-    return { actions: Array.from(actions) };
-  },
+  describeMessageTool: describeIMessageMessageTool,
   supportsAction: ({ action }) => SUPPORTED_ACTIONS.has(action),
+  messageActionTargetAliases: {
+    react: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    edit: { aliases: ["chatGuid", "chatIdentifier", "chatId", "messageId"] },
+    unsend: { aliases: ["chatGuid", "chatIdentifier", "chatId", "messageId"] },
+    reply: { aliases: ["chatGuid", "chatIdentifier", "chatId", "messageId"] },
+    sendWithEffect: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    sendAttachment: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    "upload-file": { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    renameGroup: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    setGroupIcon: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    addParticipant: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    removeParticipant: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+    leaveGroup: { aliases: ["chatGuid", "chatIdentifier", "chatId"] },
+  },
   extractToolSend: ({ args }) => extractToolSend(args, "sendMessage"),
   handleAction: async ({ action, params, cfg, accountId, toolContext }) => {
     const runtime = await loadIMessageActionsRuntime();
@@ -381,6 +337,7 @@ export const imessageMessageActions: ChannelMessageActionAdapter = {
       cfg,
       accountId: accountId ?? undefined,
     });
+    assertActionEnabled(action, account.config.actions);
     const cliPathForProbe = account.config.cliPath?.trim() || "imsg";
     let privateApiStatus = getCachedIMessagePrivateApiStatus(cliPathForProbe);
     const assertPrivateApiEnabled = async () => {
@@ -607,7 +564,7 @@ export const imessageMessageActions: ChannelMessageActionAdapter = {
     if (action === "sendAttachment" || action === "upload-file") {
       await assertPrivateApiEnabled();
       const filename = readStringParam(params, "filename", { required: true });
-      const asVoice = readBooleanParam(params, "asVoice");
+      const asVoice = readBooleanParam(params, "asVoice") ?? readBooleanParam(params, "as_voice");
       const resolvedChatGuid = await chatGuid();
       const result = await runtime.sendAttachment({
         chatGuid: resolvedChatGuid,
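The widened `asVoice` read above accepts both camelCase and snake_case spellings via nullish coalescing. A minimal sketch of the pattern — the `readBooleanParam` stand-in below is hypothetical; the real helper comes from the plugin SDK:

```typescript
// Hypothetical stand-in for the SDK's readBooleanParam helper: returns the
// value under `key` when it is a boolean, otherwise undefined.
function readBooleanParam(
  params: Record<string, unknown>,
  key: string,
): boolean | undefined {
  const value = params[key];
  return typeof value === "boolean" ? value : undefined;
}

// Accept both spellings so camelCase and snake_case callers behave alike.
// `??` only falls through on null/undefined, so an explicit `false` under
// the camelCase key is preserved.
function readAsVoice(params: Record<string, unknown>): boolean | undefined {
  return readBooleanParam(params, "asVoice") ?? readBooleanParam(params, "as_voice");
}

console.log(readAsVoice({ as_voice: true })); // true
console.log(readAsVoice({ asVoice: false, as_voice: true })); // false (camelCase wins)
console.log(readAsVoice({})); // undefined
```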

extensions/imessage/src/message-tool-api.test.ts (new file, 66 additions)
@@ -0,0 +1,66 @@
+import { beforeEach, describe, expect, it } from "vitest";
+import { describeMessageTool } from "../message-tool-api.js";
+import {
+  clearCachedIMessagePrivateApiStatus,
+  setCachedIMessagePrivateApiStatus,
+} from "./private-api-status.js";
+
+describe("iMessage message-tool artifact", () => {
+  beforeEach(() => {
+    clearCachedIMessagePrivateApiStatus();
+  });
+
+  it("exposes lightweight discovery without loading the channel plugin", () => {
+    setCachedIMessagePrivateApiStatus("imsg", {
+      available: true,
+      v2Ready: true,
+      selectors: {
+        editMessage: true,
+        retractMessagePart: true,
+      },
+      rpcMethods: [],
+    });
+
+    const discovery = describeMessageTool({
+      cfg: {
+        channels: {
+          imessage: {
+            cliPath: "imsg",
+            actions: {
+              edit: false,
+            },
+          },
+        },
+      } as never,
+      currentChannelId: "chat_id:1",
+    });
+
+    expect(discovery?.actions).toEqual(
+      expect.arrayContaining(["react", "reply", "sendWithEffect", "upload-file"]),
+    );
+    expect(discovery?.actions).not.toContain("edit");
+    expect(discovery?.actions).not.toContain("sendAttachment");
+  });
+
+  it("hides private actions when cached bridge status is unavailable", () => {
+    setCachedIMessagePrivateApiStatus("imsg", {
+      available: false,
+      v2Ready: false,
+      selectors: {},
+      rpcMethods: [],
+    });
+
+    const discovery = describeMessageTool({
+      cfg: {
+        channels: {
+          imessage: {
+            cliPath: "imsg",
+          },
+        },
+      } as never,
+      currentChannelId: "chat_id:1",
+    });
+
+    expect(discovery?.actions).toEqual([]);
+  });
+});

extensions/imessage/src/message-tool-api.ts (new file, 77 additions)
@@ -0,0 +1,77 @@
+import { createActionGate } from "openclaw/plugin-sdk/channel-actions";
+import type {
+  ChannelMessageActionAdapter,
+  ChannelMessageActionName,
+} from "openclaw/plugin-sdk/channel-contract";
+import { resolveIMessageAccount } from "./accounts.js";
+import { IMESSAGE_ACTION_NAMES, IMESSAGE_ACTIONS } from "./actions-contract.js";
+import { getCachedIMessagePrivateApiStatus } from "./private-api-status.js";
+import { inferIMessageTargetChatType } from "./targets.js";
+
+const PRIVATE_API_ACTIONS = new Set<ChannelMessageActionName>([
+  "react",
+  "edit",
+  "unsend",
+  "reply",
+  "sendWithEffect",
+  "renameGroup",
+  "setGroupIcon",
+  "addParticipant",
+  "removeParticipant",
+  "leaveGroup",
+  "sendAttachment",
+]);
+
+function isGroupTarget(raw?: string | null): boolean {
+  if (!raw) {
+    return false;
+  }
+  return inferIMessageTargetChatType(raw) === "group";
+}
+
+export function describeIMessageMessageTool({
+  cfg,
+  accountId,
+  currentChannelId,
+}: Parameters<NonNullable<ChannelMessageActionAdapter["describeMessageTool"]>>[0]) {
+  const account = resolveIMessageAccount({ cfg, accountId });
+  if (!account.enabled || !account.configured) {
+    return null;
+  }
+  const cliPath = account.config.cliPath?.trim() || "imsg";
+  const privateApiStatus = getCachedIMessagePrivateApiStatus(cliPath);
+  const gate = createActionGate(account.config.actions);
+  const actions = new Set<ChannelMessageActionName>();
+  for (const action of IMESSAGE_ACTION_NAMES) {
+    const spec = IMESSAGE_ACTIONS[action];
+    if (!spec?.gate || !gate(spec.gate)) {
+      continue;
+    }
+    if (privateApiStatus?.available === false && PRIVATE_API_ACTIONS.has(action)) {
+      continue;
+    }
+    if (
+      action === "edit" &&
+      privateApiStatus?.selectors &&
+      !privateApiStatus.selectors.editMessage &&
+      !privateApiStatus.selectors.editMessageItem
+    ) {
+      continue;
+    }
+    if (action === "unsend" && privateApiStatus?.selectors?.retractMessagePart !== true) {
+      continue;
+    }
+    actions.add(action);
+  }
+  if (!isGroupTarget(currentChannelId)) {
+    for (const action of IMESSAGE_ACTION_NAMES) {
+      if ("groupOnly" in IMESSAGE_ACTIONS[action] && IMESSAGE_ACTIONS[action].groupOnly) {
+        actions.delete(action);
+      }
+    }
+  }
+  if (actions.delete("sendAttachment")) {
+    actions.add("upload-file");
+  }
+  return { actions: Array.from(actions) };
+}
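One small idiom in the discovery function above is worth noting: `Set.prototype.delete` returns a boolean indicating whether the entry was present, so the internal `sendAttachment` action is renamed to the public `upload-file` name only when it actually survived the gating passes. A minimal sketch of just that step (the `publishActions` name is illustrative, not from the source):

```typescript
// Rename the canonical "sendAttachment" action to its public "upload-file"
// alias. Set.delete returns true only when the entry existed, so the alias
// is added only if the canonical action made it through earlier filtering.
function publishActions(actions: Set<string>): string[] {
  if (actions.delete("sendAttachment")) {
    actions.add("upload-file");
  }
  return Array.from(actions);
}

console.log(publishActions(new Set(["react", "sendAttachment"]))); // ["react", "upload-file"]
console.log(publishActions(new Set(["react"]))); // ["react"]
```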
@@ -103,6 +103,11 @@ describe("monitorIMessageProvider watch.subscribe startup retry", () => {
     expect(firstClient.stop).toHaveBeenCalledTimes(1);
     expect(secondClient.waitForClose).toHaveBeenCalledTimes(1);
     expect(secondClient.stop).toHaveBeenCalledTimes(1);
+    expect(secondClient.request).toHaveBeenCalledWith(
+      "watch.subscribe",
+      { attachments: false, include_reactions: true },
+      expect.any(Object),
+    );
     expect(runtime.log).toHaveBeenCalledWith(
       expect.stringContaining("watch.subscribe startup failed"),
     );

@@ -412,6 +412,58 @@ describe("describeIMessageEchoDropLog", () => {
   });
 });

+describe("buildIMessageInboundContext", () => {
+  it("keeps numeric row id and provider GUID separately for action tooling", () => {
+    const decision = resolveIMessageInboundDecision({
+      cfg: {} as OpenClawConfig,
+      accountId: "default",
+      message: {
+        id: 12345,
+        guid: "p:0/GUID-current",
+        sender: "+15555550123",
+        text: "Hello",
+        is_from_me: false,
+        is_group: false,
+      },
+      opts: undefined,
+      messageText: "Hello",
+      bodyText: "Hello",
+      allowFrom: ["*"],
+      groupAllowFrom: [],
+      groupPolicy: "open",
+      dmPolicy: "open",
+      storeAllowFrom: [],
+      historyLimit: 0,
+      groupHistories: new Map(),
+      echoCache: undefined,
+      selfChatCache: undefined,
+      logVerbose: undefined,
+    });
+    expect(decision.kind).toBe("dispatch");
+    if (decision.kind !== "dispatch") {
+      return;
+    }
+
+    const { ctxPayload } = buildIMessageInboundContext({
+      cfg: {} as OpenClawConfig,
+      decision,
+      message: {
+        id: 12345,
+        guid: "p:0/GUID-current",
+        sender: "+15555550123",
+        text: "Hello",
+        is_from_me: false,
+        is_group: false,
+      },
+      historyLimit: 0,
+      groupHistories: new Map(),
+    });
+
+    expect(ctxPayload.MessageSid).toBe("1");
+    expect(ctxPayload.MessageSidFull).toBe("p:0/GUID-current");
+  });
+});

 describe("resolveIMessageInboundDecision command auth", () => {
   const cfg = {} as OpenClawConfig;
   const resolveDmCommandDecision = (params: {

@@ -141,6 +141,16 @@ function isRetriableWatchSubscribeStartupError(error: unknown): boolean {
   );
 }

+function formatIMessageReactionText(message: IMessagePayload): string | undefined {
+  if (!message.is_reaction) {
+    return undefined;
+  }
+  const action = message.is_reaction_add === false ? "removed" : "added";
+  const emoji = message.reaction_emoji?.trim() || message.reaction_type?.trim() || "reaction";
+  const target = message.reacted_to_guid?.trim();
+  return target ? `${action} ${emoji} reaction to [id:${target}]` : `${action} ${emoji} reaction`;
+}
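The added reaction formatter is a pure function, so its behavior is easy to pin down in isolation. A self-contained sketch — the `ReactionPayload` type below narrows the real `IMessagePayload` to just the fields the formatter reads, which are assumed to be optional on the full type:

```typescript
// Narrowed view of IMessagePayload: only the fields the formatter reads.
type ReactionPayload = {
  is_reaction?: boolean;
  is_reaction_add?: boolean;
  reaction_emoji?: string;
  reaction_type?: string;
  reacted_to_guid?: string;
};

function formatIMessageReactionText(message: ReactionPayload): string | undefined {
  if (!message.is_reaction) {
    return undefined;
  }
  // Only an explicit `false` counts as a removal; absent/true means "added".
  const action = message.is_reaction_add === false ? "removed" : "added";
  // Prefer the raw emoji, fall back to the named reaction type, then a generic label.
  const emoji = message.reaction_emoji?.trim() || message.reaction_type?.trim() || "reaction";
  const target = message.reacted_to_guid?.trim();
  return target ? `${action} ${emoji} reaction to [id:${target}]` : `${action} ${emoji} reaction`;
}

console.log(formatIMessageReactionText({ is_reaction: true, reaction_emoji: "❤️", reacted_to_guid: "G1" }));
// → "added ❤️ reaction to [id:G1]"
```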

 async function waitForWatchSubscribeRetryDelay(params: {
   ms: number;
   abortSignal?: AbortSignal;
@@ -338,7 +348,8 @@ export async function monitorIMessageProvider(opts: MonitorIMessageOpts = {}): P
   };

   async function handleMessageNow(message: IMessagePayload) {
-    const messageText = (message.text ?? "").trim();
+    const reactionText = formatIMessageReactionText(message);
+    const messageText = (reactionText ?? message.text ?? "").trim();

     const attachments = includeAttachments ? (message.attachments ?? []) : [];
     const effectiveAttachmentRoots = remoteHost ? remoteAttachmentRoots : attachmentRoots;
@@ -804,6 +815,7 @@ export async function monitorIMessageProvider(opts: MonitorIMessageOpts = {}): P
       "watch.subscribe",
       {
         attachments: includeAttachments,
+        include_reactions: true,
       },
       { timeoutMs: probeTimeoutMs },
     );
Some files were not shown because too many files have changed in this diff.