Mirror of https://github.com/openclaw/openclaw.git, synced 2026-05-09 04:50:44 +00:00

Merge branch 'main' into fix/control-ui-sender-metadata-stream
.gitignore (vendored): 4 changes
@@ -95,6 +95,10 @@ docs/internal/
 tmp/
 IDENTITY.md
 USER.md
+# Exception: oc-path real-world test fixtures need to be tracked even
+# though the bare names match the local-untracked rule above.
+!src/oc-path/tests/fixtures/real/IDENTITY.md
+!src/oc-path/tests/fixtures/real/USER.md
 *.tgz
 *.tar.gz
 *.zip

CHANGELOG.md: 33 changes
@@ -6,10 +6,18 @@ Docs: https://docs.openclaw.ai

### Changes

- Docker: run the runtime image under `tini` so long-lived containers reap orphaned child processes and forward signals correctly. (#77885) Thanks @VintageAyu.
- Google/Gemini: normalize retired `google/gemini-3-pro-preview` and `google-gemini-cli/gemini-3-pro-preview` selections to `google/gemini-3.1-pro-preview` before they are written to model config.
- Amazon Bedrock: support `serviceTier` parameter for Bedrock models, configurable via `agents.defaults.params.serviceTier` or per-model in `agents.defaults.models`. Valid values: `default`, `flex`, `priority`, `reserved`. (#64512) Thanks @mobilinkd.
- Control UI: read the Quick Settings exec policy badge from `tools.exec.security` instead of the non-schema `agents.defaults.exec.security` path, so configured `full`/`deny` values render accurately. Fixes #78311. Thanks @FriedBack.
- Control UI/usage: add transcript-backed historical lineage rollups for rotated logical sessions, with current-instance vs historical-lineage scope controls and long-range presets so usage history stays visible after restarts and updates. Fixes #50701. Thanks @dev-gideon-llc and @BunsDev.
- Agents/failover: harden state-aware lane suspension by persisting quota resume transitions, restoring configured lane concurrency, preserving non-quota failure reasons, and exporting model failover events through diagnostics OTLP. Thanks @BunsDev.
- Channels/streaming: make progress draft labels scroll away with other progress lines, render structured tool rows as compact emoji/title/details, show web-search queries from provider-native argument shapes, and skip empty Discord apply-patch starts until a patch summary exists. (#79146)
- Workspace/oc-path: add the `oc://` addressing substrate (`src/oc-path/`) — a universal, kind-dispatched path scheme for addressing leaves and nodes inside markdown, jsonc, jsonl, and yaml workspace files, with `parseOcPath`/`formatOcPath`, per-kind `parseXxx`/`emitXxx`, universal `resolveOcPath`/`setOcPath`/`findOcPaths` verbs, the `__OPENCLAW_REDACTED__` sentinel emit guard, and the new `openclaw path resolve|find|set|validate|emit` CLI for shell-level inspection and surgical edits. Implements #78051. (#78678) Thanks @giodl73-repo.
- Runtime/performance: avoid full-array sorting while auto-selecting providers, resolving supported thinking levels, picking node last-seen timestamps, and extracting Codex usage-limit messages. Thanks @shakkernerd.
- Plugins/doctor: avoid full-array sorting while selecting ClawHub search/archive results and bounded dreaming doctor entries. Thanks @shakkernerd.
- Agents/compaction: keep contributor diagnostics to a bounded top-three selection without sorting the full history. Thanks @shakkernerd.
- Sessions/UI: avoid full-array sorting while selecting ACPX leases, Google Meet calendar events, and latest chat sessions. Thanks @shakkernerd.
- Telegram: preserve the channel-specific 10-option poll cap in the unified outbound adapter so over-limit polls are rejected before send. (#78762) Thanks @obviyus.
- Slack: route handled top-level channel turns in implicit-conversation channels to thread-scoped sessions when Slack reply threading is enabled, keeping the root turn and later thread replies on one OpenClaw session. (#78522) Thanks @zeroth-blip.
- Telegram: re-probe the primary fetch transport after repeated sticky fallback success so transient IPv4 or pinned-IP fallback promotion can recover without a gateway restart. Fixes #77088. (#77157) Thanks @MkDev11.
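Several entries above replace full-array sorts with bounded selection (for example, keeping compaction contributor diagnostics to a top-three pick). A minimal sketch of that idea in Python, with a hypothetical `tokens` field, since the actual OpenClaw code is not shown in this diff:

```python
import heapq

def top_contributors(history, k=3):
    # Bounded top-k selection: heapq.nlargest runs in O(n log k)
    # instead of sorting the full history in O(n log n).
    return heapq.nlargest(k, history, key=lambda entry: entry["tokens"])
```

The same pattern (a single pass keeping only the current best candidates) covers the provider auto-selection and last-seen-timestamp cases, which only need the single maximum.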
@@ -31,10 +39,12 @@ Docs: https://docs.openclaw.ai
- Discord/streaming: default Discord replies to progress draft previews so tool/work activity appears in one edited Discord message unless `channels.discord.streaming.mode` is set to `off`.
- OpenAI: support `openai/chat-latest` as an explicit direct API-key model override for trying the moving ChatGPT Instant API alias without changing the stable default model.
- OpenAI/realtime: default realtime voice to `gpt-realtime-2`, use the GA Realtime WebSocket session shape for backend OpenAI bridges, and cover backend, WebRTC, Google Live, and Gateway relay paths in the live Talk smoke. (#79130)
- Update/Windows: spawn the post-core-update child process with `stdio:"pipe"` on Windows so PowerShell/CMD console handles are not inherited, preventing the terminal from hanging after `openclaw update` completes. Fixes #78445. (#78483) Thanks @Beandon13.
- Plugins/install: add `npm-pack:<path.tgz>` installs so local npm pack artifacts run through the same managed npm-root install, lockfile verification, dependency scan, and install-record path as registry npm plugins.
- Channels/plugins: show configured official external channels as missing-plugin status rows and send errors with exact install/doctor repair commands after raw package-manager upgrades leave Feishu or WhatsApp uninstalled. Fixes #78702 and #78593. Thanks @MarkMa84 and @mkupiainen.
- Codex app-server: disarm the short post-tool completion watchdog after current-turn activity, expose `appServer.turnCompletionIdleTimeoutMs`, and include raw assistant item context in idle-timeout diagnostics so status-only post-tool stalls stop failing as idle. Fixes #77984. Thanks @roseware-dev and @rubencu.
- Plugin skills/Windows: publish plugin-provided skill directories as junctions on Windows so standard users without Developer Mode can register plugin skills without symlink EPERM failures. Fixes #77958. (#77971) Thanks @hclsys and @jarro.
- Shell env/Windows: hide the login-shell environment probe child window so gateway startup and shell-env refreshes do not flash a console on Windows. Fixes #78159. (#78266) Thanks @BradGroux.
- MS Teams: surface blocked Bot Framework egress by logging JWKS fetch network failures and adding a Bot Connector send hint for transport-level reply failures. Fixes #77674. (#78081) Thanks @Beandon13.
- Gateway/sessions: fast-path already-qualified model refs while building session-list rows so `openclaw sessions` and Control UI session lists avoid heavyweight model resolution on large stores. (#77902) Thanks @ragesaq.
- Contributor PRs: remind external contributors to redact private information like IP addresses, API keys, phone numbers, and non-public endpoints from real behavior proof. Thanks @pashpashpash.
@@ -166,6 +176,7 @@ Docs: https://docs.openclaw.ai
- Config/Nix: keep startup-derived plugin enablement, gateway auth tokens, control UI origins, and owner-display secrets runtime-only instead of rewriting `openclaw.json`; in Nix mode, config writers, mutating `openclaw update`, plugin lifecycle mutators, and doctor repair/token-generation now refuse with agent-first nix-openclaw guidance. (#78047) Thanks @joshp123.
- Agents/context engine: invalidate cached assembled context views when source history shrinks or assembly fails, preventing stale pre-reset history from being reused. Fixes #77968. (#78163) Thanks @brokemac79 and @ChrisBot2026.
- Plugin SDK: add a generic `api.runtime.llm.complete` host completion helper with runtime-derived caller attribution, config-gated model/agent overrides, session-bound context-engine access, request-scoped config, audit metadata, and normalized usage attribution. (#64294) Thanks @DaevMithran.
- Control UI/exec approvals: highlight parsed shell command fragments that may deserve extra review in approval prompts. (#77153) Thanks @jesse-merhi.

### Breaking

@@ -173,11 +184,25 @@ Docs: https://docs.openclaw.ai

### Fixes

- Agents/compaction: keep the recent tail after manual `/compact` when Pi returns an empty or no-op compaction summary, preventing blank checkpoints from replacing the live context.
- fix(discord): gate user allowlist name resolution [AI]. (#79002) Thanks @pgondhi987.
- fix(msteams): gate startup user allowlist resolution [AI]. (#79003) Thanks @pgondhi987.
- Harden macOS shell wrapper allowlist parsing [AI]. (#78518) Thanks @pgondhi987.
- Doctor/OpenAI: stop pinning migrated `openai-codex/*` routes to the Codex runtime so mixed-provider agents keep automatic PI routing for MiniMax, Anthropic, and other non-OpenAI model switches.
- Gateway/macOS: `openclaw gateway stop` now uses `launchctl bootout` by default instead of unconditionally calling `launchctl disable`, so KeepAlive auto-recovery still works after unexpected crashes; use the new `--disable` flag to opt into the persistent-disable behavior when a manual stop should survive reboots. Fixes #77934. Thanks @bmoran1022.
- Gateway/macOS: `repairLaunchAgentBootstrap` no longer kickstarts an already-running LaunchAgent, preventing unnecessary service restarts and session disconnects when repair runs against a healthy gateway. Fixes #77428. Thanks @ramitrkar-hash.
- Gateway/macOS: `openclaw gateway stop --disable` now persists the LaunchAgent disable bit even after a previous bootout left the service not loaded, keeping the explicit stay-down path reliable. (#78412) Thanks @wdeveloper16.
- CLI/status: keep lean `openclaw status --json` off manifest-backed channel discovery so configured-channel checks do not repeatedly rescan plugin metadata. Fixes #79129.
- Control UI/chat: hide retired and non-public Google Gemini model IDs from chat model catalogs and route the bare `gemini-3-pro` alias to Gemini 3.1 Pro Preview instead of the shut-down Gemini 3 Pro Preview. Thanks @BunsDev.
- CLI/install: refuse state-mutating OpenClaw CLI runs as root by default, keep an explicit `OPENCLAW_ALLOW_ROOT=1` escape hatch for intentional root/container use, and update DigitalOcean setup guidance to run OpenClaw as a non-root user. Fixes #67478. Thanks @Jerry-Xin and @natechicago.
- Auto-reply/media: resolve `scp` from `PATH` when staging sandbox media so nonstandard OpenSSH installs can copy remote attachments.
- Agents/PI: route PI-native OpenAI-compatible default streams through OpenClaw boundary-aware transports so local-compatible model runs keep API-key injection and transport policy.
- Gateway/media: require authenticated owner or admin context for managed outgoing image bytes instead of trusting requester-session headers.
- Doctor/gateway: avoid duplicate Node runtime warnings when the daemon install plan already selected a supported Node runtime.
- Gateway/nodes: ignore malformed non-string capability entries from live nodes instead of throwing while listing the node catalog.
- Gateway/pairing: preserve deliberately narrowed role-token scopes when approving device scope upgrades instead of regranting the whole approved baseline.
- Telegram/ACP: keep chat-bound ACP replies durable by delivering final-only ACP output as final text instead of transient Telegram preview blocks. Thanks @shakkernerd.
- Telegram: hydrate replied-to messages as a persisted nearest-first reply chain so agents can see observed parent text, media refs, captions, senders, timestamps, and nested replies instead of guessing from a shallow reply id.
- Gateway/watch: leave `OPENCLAW_TRACE_SYNC_IO` disabled by default in `pnpm gateway:watch:raw` so watch mode avoids noisy Node sync-I/O stack traces unless explicitly requested.
- Codex app-server: close stdio stdin before force-killing the managed app-server, matching Codex single-client shutdown behavior and avoiding unsettled CLI exits after successful runs.
- CLI/Codex: dispose registered agent harnesses during short-lived CLI shutdown so successful Codex-backed `agent --local` runs do not leave app-server child processes alive.
@@ -202,6 +227,7 @@ Docs: https://docs.openclaw.ai
- Compute plugin callback authorization dynamically [AI]. (#78866) Thanks @pgondhi987.
- fix(active-memory): require admin scope for global toggles [AI]. (#78863) Thanks @pgondhi987.
- Honor owner enforcement for native commands [AI]. (#78864) Thanks @pgondhi987.
- Gateway/auth: allow `gateway.auth.mode: "none"` loopback backend RPC clients to skip device identity only for local non-browser backend connections, restoring subagent spawns and gateway tools without opening remote or browser-origin bypasses. Fixes #75780. Thanks @yozakura-ava.
- Tavily: resolve dedicated `tavily_search` and `tavily_extract` tool credentials from the active runtime config snapshot, so `exec` SecretRef-backed API keys do not reach the tools unresolved. (#78610) Thanks @VACInc.
- Gateway/sessions: clear cached skills snapshots during `/new` and `sessions.reset` so long-lived channel sessions rebuild the visible skill list after skills change. (#78873) Thanks @Evizero.
- fix(auto-reply): gate inline skill tool dispatch [AI]. (#78517) Thanks @pgondhi987.
@@ -212,6 +238,7 @@ Docs: https://docs.openclaw.ai
- Control UI/chat: wait for an in-flight model dropdown patch before sending the next chat message, so immediate sends use the selected session model instead of racing the previous override. Fixes #54240.
- Native chat: decode gateway-provided thinking metadata for the iOS/macOS picker so provider-specific levels such as `adaptive`, `xhigh`, and `max` appear without leaking unsupported default-model options. Thanks @BunsDev.
- Agents/compaction: cap summarization output reserve tokens to the selected model's `maxTokens` so 1M-context Anthropic compactions do not request more output than the API permits. Fixes #54383.
- Control UI/login: replace raw connection failures with structured, actionable login guidance for auth, pairing, insecure HTTP, origin, protocol, and transport failures. Thanks @BunsDev.
- Agents/tools: fail `exec host=node` before `system.run` when the selected node is known to be disconnected, with an actionable reconnect message instead of a raw node invoke failure. Thanks @BunsDev.
- Agents/models: accept legacy `anthropic-cli/*` model refs as Claude CLI runtime refs instead of failing model resolution with `Unknown model`. Thanks @BunsDev.
- Agents/tools: keep restrictive-profile tool-section warnings scoped to the configured sections whose tools are still missing from `alsoAllow`, so already re-allowed filesystem tools do not make exec-only fixes look broader than they are. Thanks @BunsDev.
@@ -251,6 +278,7 @@ Docs: https://docs.openclaw.ai
- Discord/groups: instruct group-chat agents to stay silent when a message is addressed to someone else, replying only when invited or correcting key facts. (#78615)
- Discord/groups: tell Discord-channel agents to wrap bare URLs as `<https://example.com>` so link previews do not expand into uninvited embeds. (#78614)
- Agents/fallback: fail fast on session write-lock timeouts instead of trying fallback models for local file contention. Fixes #66646. Thanks @sallyom.
- Browser/SSRF: stop closing user-owned Chrome tabs when a read-only operation (snapshot/screenshot/interactions) is rejected by the SSRF guard — only OpenClaw-initiated navigations now close on policy denial. Thanks @scotthuang.
- Telegram/Codex: generate DM topic labels with Codex-compatible simple-completion requests so auto-created private topics can be renamed instead of staying `New Chat`.
- Plugins/runtime fetch: drop third-party symbol metadata from plain request header dictionaries before passing them into native `fetch` or `Headers`, so SDK and guarded/proxy fetch paths do not reject otherwise valid plugin requests. Fixes #77846. Thanks @shakkernerd.
- Web fetch: bound guarded dispatcher cleanup after request timeouts so timed-out fetches return tool errors instead of leaving Gateway tool lanes active. (#78439) Thanks @obviyus.
@@ -309,6 +337,7 @@ Docs: https://docs.openclaw.ai
- Plugins/update: keep installed official npm and ClawHub plugins such as Codex, Discord, WhatsApp, and diagnostics plugins synced during host updates even when disabled or previously exact-pinned, while preserving third-party plugin pins. Thanks @vincentkoc.
- Doctor/status: warn when `OPENCLAW_GATEWAY_TOKEN` would shadow a different active `gateway.auth.token` source for local CLI commands, while avoiding false positives when config points at the same env token. Fixes #74271. Thanks @yelog.
- Gateway/HTTP: avoid loading managed outgoing-image media handlers for unrelated requests, so disabled OpenAI-compatible routes return 404 without waiting on lazy media sidecars. Thanks @vincentkoc.
- Plugins: dispatch cached descriptor-backed tools by the resolved runtime tool name for unnamed factories, fixing multi-tool plugins whose shared manifest contracts exposed sibling tools but failed at execution. Fixes #78671. Thanks @zanni098.
- Gateway/OpenAI-compatible: send the assistant role SSE chunk as soon as streaming chat-completion headers are accepted, so cold agent setup cannot leave `/v1/chat/completions` clients with a bodyless 200 response until their idle timeout fires.
- Agents/media: avoid direct generated-media completion fallback while the announce-agent run is still pending, so async video and music completions do not duplicate raw media messages. (#77754)
- WebChat/Codex media: stage Codex app-server generated local images into managed media before Gateway display, so Codex-home image paths no longer hit `LocalMediaAccessError` while keeping Codex home out of the display allowlist. Thanks @frankekn.
@@ -325,7 +354,7 @@ Docs: https://docs.openclaw.ai
- CLI/status: show the selected agent runtime/harness in `openclaw status` session rows so terminal status matches the `/status` runtime line. Thanks @vincentkoc.

- CLI/sessions: prune old unreferenced transcript, compaction checkpoint, and trajectory artifacts during normal `sessions cleanup`, so gateway restart or crash orphans do not accumulate indefinitely outside `sessions.json`. Fixes #77608. Thanks @slideshow-dingo.
- Doctor/Codex: repair legacy `openai-codex/*` routes in primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel overrides, and stale session pins to canonical `openai/*`, selecting `agentRuntime.id: "codex"` only when the Codex plugin is installed, enabled, contributes the `codex` harness, and has usable OAuth; otherwise select `agentRuntime.id: "pi"`. Thanks @vincentkoc.
- Doctor/Codex: repair legacy `openai-codex/*` routes to canonical `openai/*`, keep OpenAI agent turns on Codex by default, ignore stale whole-agent/session runtime pins, preserve explicit provider/model runtime policy, and migrate legacy runtime model refs to model-scoped runtime entries. Thanks @vincentkoc.
- Video generation: wait up to 20 minutes for slow fal/MiniMax queue-backed jobs, stop forwarding unsupported Google Veo generated-audio options, and normalize MiniMax `720P` requests to its supported `768P` resolution with the usual override warning/details instead of failing fallback.
- Video generation: accept provider-specific aspect-ratio and resolution hints at the tool boundary, normalize `720P` to MiniMax's supported `768P`, and stop sending Google `generateAudio` on Gemini video requests so provider fallback can recover from model-specific parameter differences. Thanks @vincentkoc.
- Channels/durable delivery: preserve channel-specific final reply semantics when using durable sends, including Telegram selected quotes and silent error replies plus WhatsApp message-sending cancellations.
@@ -607,6 +636,8 @@ Docs: https://docs.openclaw.ai
- Channels/iMessage: surface the silent group-allowlist drop at default log level by emitting a one-time `warn` per account at monitor startup when `channels.imessage.groupPolicy: "allowlist"` is set without a `channels.imessage.groups` block, plus a one-time `warn` per `chat_id` when the runtime gate drops a specific group, naming the exact `channels.imessage.groups[...]` key to add to allow it. Fixes #78749. (#79190) Thanks @omarshahine.
- WhatsApp: stop Gateway-originated outbound echoes from advancing inbound activity in `openclaw channels status`, so outbound self-sends no longer look like handled inbound messages. Fixes #79056. (#79057) Thanks @ai-hpc and @bittoby.
- Gateway/nodes: preserve the live node registry session and invoke ownership when an older same-node WebSocket closes after reconnecting. (#78351) Thanks @samzong.
- Browser/downloads: route explicit and managed browser download output directories through `fs-safe` validation before staging final files, so symlinked output roots are rejected before writes. (#78780) Thanks @jesse-merhi.
- Agents/PI: skip the idle wait during aborted embedded-run cleanup, so stopped or timed-out runs clear pending tool state and release the session lock promptly. (#74919) Thanks @medns.

## 2026.5.3-1

@@ -160,7 +160,7 @@ RUN --mount=type=cache,id=openclaw-bookworm-apt-cache,target=/var/cache/apt,shar
     --mount=type=cache,id=openclaw-bookworm-apt-lists,target=/var/lib/apt,sharing=locked \
     apt-get update && \
     DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
-        ca-certificates procps hostname curl git lsof openssl python3 && \
+        ca-certificates procps hostname curl git lsof openssl python3 tini && \
     update-ca-certificates

 RUN chown node:node /app

@@ -287,4 +287,5 @@ USER node
 # For external access from host/ingress, override bind to "lan" and set auth.
 HEALTHCHECK --interval=3m --timeout=10s --start-period=15s --retries=3 \
     CMD node -e "fetch('http://127.0.0.1:18789/healthz').then((r)=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"
+ENTRYPOINT ["tini", "-s", "--"]
 CMD ["node", "openclaw.mjs", "gateway", "--allow-unconfigured"]

@@ -43,7 +43,8 @@ enum ExecApprovalEvaluator {
         let allowAlwaysPatterns = ExecCommandResolution.resolveAllowAlwaysPatterns(
             command: command,
             cwd: cwd,
-            env: env)
+            env: env,
+            rawCommand: allowlistRawCommand)
         let allowlistMatches = security == .allowlist
             ? ExecAllowlistMatcher.matchAll(entries: approvals.allowlist, resolutions: allowlistResolutions)
             : []

@@ -27,7 +27,7 @@ struct ExecCommandResolution {
     {
         // Allowlist resolution must follow actual argv execution for wrappers.
         // `rawCommand` is caller-supplied display text and may be canonicalized.
-        let shell = ExecShellWrapperParser.extract(command: command, rawCommand: nil)
+        let shell = ExecShellWrapperParser.extractForAllowlist(command: command, rawCommand: rawCommand)
         if shell.isWrapper {
             // Fail closed when env modifiers precede a shell wrapper. This mirrors
             // system-run binding behavior where such invocations must stay bound to
@@ -68,7 +68,8 @@ struct ExecCommandResolution {
     static func resolveAllowAlwaysPatterns(
         command: [String],
         cwd: String?,
-        env: [String: String]?) -> [String]
+        env: [String: String]?,
+        rawCommand: String? = nil) -> [String]
     {
         var patterns: [String] = []
         var seen = Set<String>()
@@ -76,6 +77,7 @@ struct ExecCommandResolution {
             command: command,
             cwd: cwd,
             env: env,
+            rawCommand: rawCommand,
             depth: 0,
             patterns: &patterns,
             seen: &seen)
@@ -152,6 +154,7 @@ struct ExecCommandResolution {
         command: [String],
         cwd: String?,
         env: [String: String]?,
+        rawCommand: String?,
         depth: Int,
         patterns: inout [String],
         seen: inout Set<String>)
@@ -162,13 +165,19 @@ struct ExecCommandResolution {

         if let token0 = command.first?.trimmingCharacters(in: .whitespacesAndNewlines),
            ExecCommandToken.basenameLower(token0) == "env",
-           let envUnwrapped = ExecEnvInvocationUnwrapper.unwrap(command),
-           !envUnwrapped.isEmpty
+           let envUnwrapped = ExecEnvInvocationUnwrapper.unwrapWithMetadata(command),
+           !envUnwrapped.command.isEmpty
         {
+            if envUnwrapped.usesModifiers,
+               self.isAllowlistShellWrapper(command: envUnwrapped.command, rawCommand: rawCommand)
+            {
+                return
+            }
             self.collectAllowAlwaysPatterns(
-                command: envUnwrapped,
+                command: envUnwrapped.command,
                 cwd: cwd,
                 env: env,
+                rawCommand: rawCommand,
                 depth: depth + 1,
                 patterns: &patterns,
                 seen: &seen)
@@ -180,13 +189,14 @@ struct ExecCommandResolution {
                 command: shellMultiplexer,
                 cwd: cwd,
                 env: env,
+                rawCommand: rawCommand,
                 depth: depth + 1,
                 patterns: &patterns,
                 seen: &seen)
             return
         }

-        let shell = ExecShellWrapperParser.extract(command: command, rawCommand: nil)
+        let shell = ExecShellWrapperParser.extractForAllowlist(command: command, rawCommand: rawCommand)
         if shell.isWrapper {
             guard let shellCommand = shell.command,
                   let segments = self.splitShellCommandChain(shellCommand)
@@ -202,6 +212,7 @@ struct ExecCommandResolution {
                 command: tokens,
                 cwd: cwd,
                 env: env,
+                rawCommand: nil,
                 depth: depth + 1,
                 patterns: &patterns,
                 seen: &seen)
@@ -218,6 +229,10 @@ struct ExecCommandResolution {
         patterns.append(pattern)
     }

+    private static func isAllowlistShellWrapper(command: [String], rawCommand: String?) -> Bool {
+        ExecShellWrapperParser.extractForAllowlist(command: command, rawCommand: rawCommand).isWrapper
+    }
+
     private static func unwrapShellMultiplexerInvocation(_ argv: [String]) -> [String]? {
         guard let token0 = argv.first?.trimmingCharacters(in: .whitespacesAndNewlines), !token0.isEmpty else {
             return nil

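The env-unwrapping step in the diff above (an unwrap that returns the inner command plus a uses-modifiers flag, so wrapper commands reached through `env VAR=... sh -c ...` fail closed for allowlisting) can be sketched in Python. The function name and return shape here are illustrative, not the Swift API:

```python
def unwrap_env_invocation(argv):
    # Strip a leading `env` wrapper, recording whether VAR=value assignments
    # or option flags (modifiers) appeared before the wrapped command.
    if not argv or argv[0].rsplit("/", 1)[-1] != "env":
        return None
    uses_modifiers = False
    rest = argv[1:]
    while rest and ("=" in rest[0] or rest[0].startswith("-")):
        uses_modifiers = True
        rest = rest[1:]
    return {"command": rest, "uses_modifiers": uses_modifiers}
```

In the patched Swift, when the unwrapped command is itself a shell wrapper and modifiers were present, pattern collection returns early rather than recursing, mirroring the system-run binding behavior described in the code comments.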
apps/macos/Sources/OpenClaw/ExecInlineCommandParser.swift (new file): 278 lines
@@ -0,0 +1,278 @@
import Foundation

enum ExecInlineCommandParser {
    struct Match {
        let tokenIndex: Int
        let inlineCommand: String?
        let valueTokenOffset: Int

        init(tokenIndex: Int, inlineCommand: String?, valueTokenOffset: Int = 1) {
            self.tokenIndex = tokenIndex
            self.inlineCommand = inlineCommand
            self.valueTokenOffset = valueTokenOffset
        }
    }

    private struct CombinedCommandFlag {
        let attachedCommand: String?
        let separateValueCount: Int
    }

    private static let posixShellOptionsWithSeparateValues = Set([
        "--init-file",
        "--rcfile",
        "-O",
        "-o",
        "+O",
        "+o",
    ])

    static func hasPosixInteractiveStartupBeforeInlineCommand(
        _ argv: [String],
        flags: Set<String>) -> Bool
    {
        var idx = 1
        var sawInteractiveMode = false
        while idx < argv.count {
            let token = argv[idx].trimmingCharacters(in: .whitespacesAndNewlines)
            if token.isEmpty {
                idx += 1
                continue
            }
            if token == "--" {
                return false
            }
            if self.isPosixInteractiveModeOption(token) {
                sawInteractiveMode = true
            }
            if flags.contains(token) || self.isCombinedCommandFlag(token) {
                return sawInteractiveMode
            }
            if !token.hasPrefix("-"), !token.hasPrefix("+") {
                return false
            }
            let combinedValueCount = self.combinedSeparateValueOptionCount(token)
            if combinedValueCount > 0 {
                idx += 1 + combinedValueCount
                continue
            }
            if self.consumesSeparateValue(token) {
                idx += 2
                continue
            }
            idx += 1
        }
        return false
    }

    static func hasPosixLoginStartupBeforeInlineCommand(
        _ argv: [String],
        flags: Set<String>) -> Bool
    {
        var idx = 1
        var sawLoginMode = false
        while idx < argv.count {
            let token = argv[idx].trimmingCharacters(in: .whitespacesAndNewlines)
            if token.isEmpty {
                idx += 1
                continue
            }
            if token == "--" {
                return false
            }
            if token == "--login" || self.isPosixShortOption(token, containing: "l") {
                sawLoginMode = true
            }
            if flags.contains(token) || self.isCombinedCommandFlag(token) {
                return sawLoginMode
            }
            if !token.hasPrefix("-"), !token.hasPrefix("+") {
                return false
            }
            let combinedValueCount = self.combinedSeparateValueOptionCount(token)
            if combinedValueCount > 0 {
                idx += 1 + combinedValueCount
                continue
            }
            if self.consumesSeparateValue(token) {
                idx += 2
                continue
            }
            idx += 1
        }
        return false
    }

    static func hasFishInitCommandOption(_ argv: [String]) -> Bool {
        var idx = 1
        while idx < argv.count {
            let token = argv[idx].trimmingCharacters(in: .whitespacesAndNewlines)
            if token.isEmpty {
                idx += 1
                continue
            }
            if token == "--" {
                return false
            }
            if token == "-C" || token == "--init-command" {
                return true
            }
            if token.hasPrefix("-C"), token != "-C" {
                return true
            }
            if token.hasPrefix("--init-command=") {
                return true
            }
            if !token.hasPrefix("-"), !token.hasPrefix("+") {
                return false
            }
            idx += 1
        }
        return false
    }

    static func hasFishAttachedCommandOption(_ argv: [String]) -> Bool {
        var idx = 1
        while idx < argv.count {
            let token = argv[idx].trimmingCharacters(in: .whitespacesAndNewlines)
            if token.isEmpty {
                idx += 1
                continue
            }
            if token == "--" {
                return false
            }
            if token.hasPrefix("-c"), token != "-c" {
                return true
            }
            if !token.hasPrefix("-"), !token.hasPrefix("+") {
                return false
            }
            idx += 1
        }
        return false
    }

    static func findMatch(
        _ argv: [String],
        flags: Set<String>,
        allowCombinedC: Bool) -> Match?
    {
        var idx = 1
        while idx < argv.count {
            let token = argv[idx].trimmingCharacters(in: .whitespacesAndNewlines)
            if token.isEmpty {
                idx += 1
                continue
            }
            if token == "--" {
                break
            }
            let comparableToken = allowCombinedC ? token : token.lowercased()
            if flags.contains(comparableToken) {
                return Match(tokenIndex: idx, inlineCommand: nil)
            }
            if allowCombinedC, let combined = self.parseCombinedCommandFlag(token) {
                if let attachedCommand = combined.attachedCommand {
                    return Match(tokenIndex: idx, inlineCommand: attachedCommand, valueTokenOffset: 0)
                }
                return Match(
                    tokenIndex: idx,
                    inlineCommand: nil,
                    valueTokenOffset: 1 + combined.separateValueCount)
            }
            if allowCombinedC, !token.hasPrefix("-"), !token.hasPrefix("+") {
                break
            }
            let combinedValueCount = allowCombinedC ? self.combinedSeparateValueOptionCount(token) : 0
            if combinedValueCount > 0 {
                idx += 1 + combinedValueCount
                continue
            }
            if allowCombinedC, self.consumesSeparateValue(token) {
                idx += 2
                continue
            }
            idx += 1
        }
        return nil
    }

    static func extractInlineCommand(
        _ argv: [String],
        flags: Set<String>,
        allowCombinedC: Bool) -> String?
    {
        guard let match = self.findMatch(argv, flags: flags, allowCombinedC: allowCombinedC) else {
            return nil
        }
        if let inlineCommand = match.inlineCommand {
            return inlineCommand
        }
        let nextIndex = match.tokenIndex + match.valueTokenOffset
        let payload = nextIndex < argv.count
            ? argv[nextIndex].trimmingCharacters(in: .whitespacesAndNewlines)
            : ""
        return payload.isEmpty ? nil : payload
    }

    private static func isCombinedCommandFlag(_ token: String) -> Bool {
        self.parseCombinedCommandFlag(token) != nil
    }

    private static func parseCombinedCommandFlag(_ token: String) -> CombinedCommandFlag? {
        let chars = Array(token)
        guard chars.count >= 2, chars[0] == "-", chars[1] != "-" else {
            return nil
        }
        let optionChars = Array(chars.dropFirst())
        guard let commandFlagIndex = optionChars.firstIndex(of: "c") else {
            return nil
        }
        if optionChars.contains("-") {
            return nil
        }
        let suffix = String(optionChars.dropFirst(commandFlagIndex + 1))
        if !suffix.isEmpty,
           suffix.range(of: #"[^A-Za-z]"#, options: .regularExpression) != nil
        {
            return CombinedCommandFlag(attachedCommand: suffix, separateValueCount: 0)
        }
        let separateValueCount = optionChars.reduce(0) { count, char in
|
||||
count + ((char == "o" || char == "O") ? 1 : 0)
|
||||
}
|
||||
return CombinedCommandFlag(attachedCommand: nil, separateValueCount: separateValueCount)
|
||||
}
|
||||
|
||||
private static func combinedSeparateValueOptionCount(_ token: String) -> Int {
|
||||
let chars = Array(token)
|
||||
guard chars.count >= 2, chars[0] == "-" || chars[0] == "+", chars[1] != "-" else {
|
||||
return 0
|
||||
}
|
||||
if chars.dropFirst().contains("-") {
|
||||
return 0
|
||||
}
|
||||
return chars.dropFirst().reduce(0) { count, char in
|
||||
count + ((char == "o" || char == "O") ? 1 : 0)
|
||||
}
|
||||
}
|
||||
|
||||
private static func consumesSeparateValue(_ token: String) -> Bool {
|
||||
self.posixShellOptionsWithSeparateValues.contains(token)
|
||||
}
|
||||
|
||||
private static func isPosixInteractiveModeOption(_ token: String) -> Bool {
|
||||
token == "--interactive" || self.isPosixShortOption(token, containing: "i")
|
||||
}
|
||||
|
||||
private static func isPosixShortOption(_ token: String, containing option: Character) -> Bool {
|
||||
let chars = Array(token)
|
||||
guard chars.count >= 2, chars[0] == "-", chars[1] != "-" else {
|
||||
return false
|
||||
}
|
||||
if chars.dropFirst().contains("-") {
|
||||
return false
|
||||
}
|
||||
return chars.dropFirst().contains(option)
|
||||
}
|
||||
}
|
||||
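The combined-flag rule above is subtle: a token like `-euxc` is a bundle of single-letter flags whose payload is the next argv token, while a non-letter suffix such as `-c/tmp/x` is an attached inline command. A minimal standalone sketch of just that rule; `attachedCommand` is a hypothetical helper, not the production `parseCombinedCommandFlag`:

```swift
// Return the payload glued onto a combined short-option token, if any.
func attachedCommand(in token: String) -> String? {
    let chars = Array(token)
    // Must look like "-x…" and not a long option "--…".
    guard chars.count >= 2, chars[0] == "-", chars[1] != "-" else { return nil }
    let options = Array(chars.dropFirst())
    guard let cIndex = options.firstIndex(of: "c"), !options.contains("-") else { return nil }
    let suffix = String(options.dropFirst(cIndex + 1))
    // A letters-only suffix is more bundled flags, not a payload.
    guard !suffix.isEmpty, suffix.contains(where: { !$0.isLetter }) else { return nil }
    return suffix
}

assert(attachedCommand(in: "-c/tmp/payload.fish") == "/tmp/payload.fish")
assert(attachedCommand(in: "-euxc") == nil) // bundled flags; payload is the next argv token
assert(attachedCommand(in: "--init-command") == nil)
```

This mirrors why the tests treat `bash -cx payload` as a flag bundle but `fish -c/tmp/payload.fish` as an attached command.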
@@ -6,9 +6,10 @@ enum ExecShellWrapperParser {
        let command: String?

        static let notWrapper = ParsedShellWrapper(isWrapper: false, command: nil)
        static let blockedWrapper = ParsedShellWrapper(isWrapper: true, command: nil)
    }

    private enum Kind {
    private enum Kind: Equatable {
        case posix
        case cmd
        case powershell
@@ -27,14 +28,34 @@ enum ExecShellWrapperParser {
        WrapperSpec(kind: .cmd, names: ["cmd.exe", "cmd"]),
        WrapperSpec(kind: .powershell, names: ["powershell", "powershell.exe", "pwsh", "pwsh.exe"]),
    ]
    private static let loginStartupShellNames = Set(["ash", "bash", "dash", "fish", "ksh", "sh", "zsh"])

    static func extract(command: [String], rawCommand: String?) -> ParsedShellWrapper {
        let trimmedRaw = rawCommand?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
        let preferredRaw = trimmedRaw.isEmpty ? nil : trimmedRaw
        return self.extract(command: command, preferredRaw: preferredRaw, depth: 0)
        return self.extract(
            command: command,
            preferredRaw: preferredRaw,
            failClosedOnStartupWrappers: false,
            depth: 0)
    }

    private static func extract(command: [String], preferredRaw: String?, depth: Int) -> ParsedShellWrapper {
    static func extractForAllowlist(command: [String], rawCommand: String?) -> ParsedShellWrapper {
        let trimmedRaw = rawCommand?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
        let preferredRaw = trimmedRaw.isEmpty ? nil : trimmedRaw
        return self.extract(
            command: command,
            preferredRaw: preferredRaw,
            failClosedOnStartupWrappers: true,
            depth: 0)
    }

    private static func extract(
        command: [String],
        preferredRaw: String?,
        failClosedOnStartupWrappers: Bool,
        depth: Int) -> ParsedShellWrapper
    {
        guard depth < ExecEnvInvocationUnwrapper.maxWrapperDepth else {
            return .notWrapper
        }
@@ -47,19 +68,96 @@ enum ExecShellWrapperParser {
        guard let unwrapped = ExecEnvInvocationUnwrapper.unwrap(command) else {
            return .notWrapper
        }
        return self.extract(command: unwrapped, preferredRaw: preferredRaw, depth: depth + 1)
        return self.extract(
            command: unwrapped,
            preferredRaw: preferredRaw,
            failClosedOnStartupWrappers: failClosedOnStartupWrappers,
            depth: depth + 1)
    }

        guard let spec = self.wrapperSpecs.first(where: { $0.names.contains(base0) }) else {
            return .notWrapper
        }
        if spec.kind == .posix,
           base0 == "fish",
           ExecInlineCommandParser.hasFishAttachedCommandOption(command)
        {
            return .blockedWrapper
        }
        let includeLegacyLoginInlineForm = failClosedOnStartupWrappers &&
            !self.legacyLoginInlinePayloadMatchesRaw(
                command: command,
                spec: spec,
                base0: base0,
                preferredRaw: preferredRaw)
        if self.startupWrapperRequiresFullArgv(
            command: command,
            spec: spec,
            base0: base0,
            includeLegacyLoginInlineForm: includeLegacyLoginInlineForm)
        {
            return .blockedWrapper
        }
        guard let payload = self.extractPayload(command: command, spec: spec) else {
            return .notWrapper
        }
        let normalized = preferredRaw ?? payload
        let normalized = failClosedOnStartupWrappers ? payload : preferredRaw ?? payload
        return ParsedShellWrapper(isWrapper: true, command: normalized)
    }

    private static func startupWrapperRequiresFullArgv(
        command: [String],
        spec: WrapperSpec,
        base0: String,
        includeLegacyLoginInlineForm: Bool) -> Bool
    {
        guard spec.kind == .posix else {
            return false
        }
        if base0 == "fish",
           ExecInlineCommandParser.hasFishInitCommandOption(command)
        {
            return true
        }
        if self.loginStartupShellNames.contains(base0),
           ExecInlineCommandParser.hasPosixLoginStartupBeforeInlineCommand(
               command,
               flags: self.posixInlineFlags)
        {
            return includeLegacyLoginInlineForm || !self.isLegacyShLoginInlineForm(command, base0: base0)
        }
        return ExecInlineCommandParser.hasPosixInteractiveStartupBeforeInlineCommand(
            command,
            flags: self.posixInlineFlags)
    }

    private static func isLegacyLoginInlineForm(_ command: [String]) -> Bool {
        guard command.count > 1 else {
            return false
        }
        return command[1].trimmingCharacters(in: .whitespacesAndNewlines) == "-lc"
    }

    private static func isLegacyShLoginInlineForm(_ command: [String], base0: String) -> Bool {
        base0 == "sh" && self.isLegacyLoginInlineForm(command)
    }

    private static func legacyLoginInlinePayloadMatchesRaw(
        command: [String],
        spec: WrapperSpec,
        base0: String,
        preferredRaw: String?) -> Bool
    {
        guard let preferredRaw,
              base0 == "sh",
              self.isLegacyLoginInlineForm(command),
              let payload = self.extractPayload(command: command, spec: spec)
        else {
            return false
        }
        return payload == preferredRaw.trimmingCharacters(in: .whitespacesAndNewlines)
    }

    private static func extractPayload(command: [String], spec: WrapperSpec) -> String? {
        switch spec.kind {
        case .posix:
@@ -72,12 +170,10 @@ enum ExecShellWrapperParser {
    }

    private static func extractPosixInlineCommand(_ command: [String]) -> String? {
        let flag = command.count > 1 ? command[1].trimmingCharacters(in: .whitespacesAndNewlines) : ""
        guard self.posixInlineFlags.contains(flag.lowercased()) else {
            return nil
        }
        let payload = command.count > 2 ? command[2].trimmingCharacters(in: .whitespacesAndNewlines) : ""
        return payload.isEmpty ? nil : payload
        ExecInlineCommandParser.extractInlineCommand(
            command,
            flags: self.posixInlineFlags,
            allowCombinedC: true)
    }

    private static func extractCmdInlineCommand(_ command: [String]) -> String? {
@@ -97,10 +193,10 @@ enum ExecShellWrapperParser {
            if token.isEmpty { continue }
            if token == "--" { break }
            if self.powershellInlineFlags.contains(token) {
                let payload = idx + 1 < command.count
                    ? command[idx + 1].trimmingCharacters(in: .whitespacesAndNewlines)
                    : ""
                return payload.isEmpty ? nil : payload
        return ExecInlineCommandParser.extractInlineCommand(
            command,
            flags: self.powershellInlineFlags,
            allowCombinedC: false)
            }
        }
        return nil

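The fail-closed treatment of login and startup wrappers above exists because a shell's startup files run before any `-c` payload, so the allowlisted payload string never tells the whole story. A quick demonstration, assuming a POSIX system with bash on PATH (the file names are illustrative):

```shell
# BASH_ENV makes a non-interactive bash source an arbitrary file before
# the -c payload runs; the same applies to profile files for login shells.
workdir=$(mktemp -d)
printf 'echo injected-before-payload\n' > "$workdir/startup.sh"
BASH_ENV="$workdir/startup.sh" bash -c 'echo allowlisted-payload'
# prints "injected-before-payload" then "allowlisted-payload"
```

An allowlist that only inspected `echo allowlisted-payload` would approve a command that also ran the injected startup code, which is why `extractForAllowlist` blocks these wrapper forms outright.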
@@ -326,40 +326,12 @@ enum ExecSystemRunCommandValidator {
        return current
    }

    private struct InlineCommandTokenMatch {
        var tokenIndex: Int
        var inlineCommand: String?
    }

    private static func findInlineCommandTokenMatch(
        _ argv: [String],
        flags: Set<String>,
        allowCombinedC: Bool) -> InlineCommandTokenMatch?
        allowCombinedC: Bool) -> ExecInlineCommandParser.Match?
    {
        var idx = 1
        while idx < argv.count {
            let token = argv[idx].trimmingCharacters(in: .whitespacesAndNewlines)
            if token.isEmpty {
                idx += 1
                continue
            }
            let lower = token.lowercased()
            if lower == "--" {
                break
            }
            if flags.contains(lower) {
                return InlineCommandTokenMatch(tokenIndex: idx, inlineCommand: nil)
            }
            if allowCombinedC, let inlineOffset = self.combinedCommandInlineOffset(token) {
                let inline = String(token.dropFirst(inlineOffset))
                    .trimmingCharacters(in: .whitespacesAndNewlines)
                return InlineCommandTokenMatch(
                    tokenIndex: idx,
                    inlineCommand: inline.isEmpty ? nil : inline)
            }
            idx += 1
        }
        return nil
        ExecInlineCommandParser.findMatch(argv, flags: flags, allowCombinedC: allowCombinedC)
    }

    private static func resolveInlineCommandTokenIndex(
@@ -373,24 +345,10 @@ enum ExecSystemRunCommandValidator {
        if match.inlineCommand != nil {
            return match.tokenIndex
        }
        let nextIndex = match.tokenIndex + 1
        let nextIndex = match.tokenIndex + match.valueTokenOffset
        return nextIndex < argv.count ? nextIndex : nil
    }

    private static func combinedCommandInlineOffset(_ token: String) -> Int? {
        let chars = Array(token.lowercased())
        guard chars.count >= 2, chars[0] == "-", chars[1] != "-" else {
            return nil
        }
        if chars.dropFirst().contains("-") {
            return nil
        }
        guard let commandIndex = chars.firstIndex(of: "c"), commandIndex > 0 else {
            return nil
        }
        return commandIndex + 1
    }

    private static func extractShellInlinePayload(
        _ argv: [String],
        normalizedWrapper: String) -> String?
@@ -421,7 +379,7 @@ enum ExecSystemRunCommandValidator {
        if let inlineCommand = match.inlineCommand {
            return inlineCommand
        }
        let nextIndex = match.tokenIndex + 1
        let nextIndex = match.tokenIndex + match.valueTokenOffset
        return self.trimmedNonEmpty(nextIndex < argv.count ? argv[nextIndex] : nil)
    }

@@ -111,7 +111,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist splits shell chains`() {
        let command = ["/bin/sh", "-lc", "echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let command = ["/bin/sh", "-c", "echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test",
@@ -122,9 +122,109 @@ struct ExecAllowlistTests {
        #expect(resolutions[1].executableName == "touch")
    }

    @Test func `resolve for allowlist splits posix combined c flag payloads`() {
        for command in [
            ["/bin/bash", "-xc", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-ec", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-euxc", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-cx", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-O", "extglob", "-xc", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-co", "vi", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-oc", "vi", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-cO", "extglob", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-xo", "vi", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-xO", "extglob", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "+xo", "vi", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "--rcfile", "/tmp/rc", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "--init-file=/tmp/rc", "-c", "/usr/bin/printf safe_marker"],
        ] {
            let resolutions = ExecCommandResolution.resolveForAllowlist(
                command: command,
                rawCommand: nil,
                cwd: nil,
                env: ["PATH": "/usr/bin:/bin"])
            #expect(resolutions.count == 1)
            #expect(resolutions[0].resolvedPath == "/usr/bin/printf")
            #expect(resolutions[0].executableName == "printf")
        }
    }

    @Test func `resolve for allowlist treats c after posix shell operand as direct exec`() {
        for command in [
            ["/bin/bash", "./script.sh", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-x", "-C", "echo ok", "-c", "/usr/bin/printf safe_marker"],
        ] {
            let resolutions = ExecCommandResolution.resolveForAllowlist(
                command: command,
                rawCommand: nil,
                cwd: "/tmp",
                env: ["PATH": "/usr/bin:/bin"])
            #expect(resolutions.count == 1)
            #expect(resolutions[0].resolvedPath == "/bin/bash")
            #expect(resolutions[0].executableName == "bash")
        }
    }

    @Test func `resolve for allowlist fails closed for interactive posix shell wrappers`() {
        for command in [
            ["/bin/bash", "-i", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-ic", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "--rcfile", "/tmp/payload.sh", "-i", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "--interactive", "-c", "/usr/bin/printf safe_marker"],
        ] {
            let resolutions = ExecCommandResolution.resolveForAllowlist(
                command: command,
                rawCommand: nil,
                cwd: nil,
                env: ["PATH": "/usr/bin:/bin"])
            #expect(resolutions.isEmpty)
        }
    }

    @Test func `resolve for allowlist fails closed for login shell wrappers`() {
        for command in [
            ["/bin/bash", "-l", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "--login", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "-xlc", "/usr/bin/printf safe_marker"],
            ["/bin/dash", "-lc", "/usr/bin/printf safe_marker"],
            ["ash", "-lc", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "-l", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "--login", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/sh", "-lc", "/usr/bin/printf safe_marker"],
            ["/bin/sh", "-x", "-lc", "/usr/bin/printf safe_marker"],
            ["/usr/bin/env", "/bin/sh", "-lc", "/usr/bin/printf safe_marker"],
        ] {
            let resolutions = ExecCommandResolution.resolveForAllowlist(
                command: command,
                rawCommand: nil,
                cwd: nil,
                env: ["PATH": "/usr/bin:/bin"])
            #expect(resolutions.isEmpty)
        }
    }

    @Test func `resolve for allowlist fails closed for fish init command wrappers`() {
        for command in [
            ["/usr/bin/fish", "--init-command=/tmp/payload.fish", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "--init-command", "/tmp/payload.fish", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "-C", "/tmp/payload.fish", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "-C/tmp/payload.fish", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "--init-command", "-c; /tmp/payload.fish", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "-C", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "-c/tmp/payload.fish", "/usr/bin/printf safe_marker"],
        ] {
            let resolutions = ExecCommandResolution.resolveForAllowlist(
                command: command,
                rawCommand: nil,
                cwd: nil,
                env: ["PATH": "/usr/bin:/bin"])
            #expect(resolutions.isEmpty)
        }
    }

    @Test func `resolve for allowlist uses wrapper argv payload even with canonical raw command`() {
        let command = ["/bin/sh", "-lc", "echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let canonicalRaw = "/bin/sh -lc \"echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test\""
        let command = ["/bin/sh", "-c", "echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let canonicalRaw = "/bin/sh -c \"echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test\""
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: canonicalRaw,
@@ -135,6 +235,25 @@ struct ExecAllowlistTests {
        #expect(resolutions[1].executableName == "touch")
    }

    @Test func `resolve for allowlist preserves generated sh lc raw payload binding`() {
        let command = ["/bin/sh", "-lc", "/usr/bin/printf safe_marker"]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "/usr/bin/printf safe_marker",
            cwd: nil,
            env: ["PATH": "/usr/bin:/bin"])
        #expect(resolutions.count == 1)
        #expect(resolutions[0].resolvedPath == "/usr/bin/printf")
        #expect(resolutions[0].executableName == "printf")

        let rawlessResolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: nil,
            cwd: nil,
            env: ["PATH": "/usr/bin:/bin"])
        #expect(rawlessResolutions.isEmpty)
    }

    @Test func `resolve for allowlist fails closed for env modified shell wrappers`() {
        let command = ["/usr/bin/env", "BASH_ENV=/tmp/payload.sh", "bash", "-lc", "echo allowlisted"]
        let canonicalRaw = "/usr/bin/env BASH_ENV=/tmp/payload.sh bash -lc \"echo allowlisted\""
@@ -158,7 +277,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist keeps quoted operators in single segment`() {
        let command = ["/bin/sh", "-lc", "echo \"a && b\""]
        let command = ["/bin/sh", "-c", "echo \"a && b\""]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "echo \"a && b\"",
@@ -169,7 +288,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist fails closed on command substitution`() {
        let command = ["/bin/sh", "-lc", "echo $(/usr/bin/touch /tmp/openclaw-allowlist-test-subst)"]
        let command = ["/bin/sh", "-c", "echo $(/usr/bin/touch /tmp/openclaw-allowlist-test-subst)"]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "echo $(/usr/bin/touch /tmp/openclaw-allowlist-test-subst)",
@@ -179,7 +298,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist fails closed on quoted command substitution`() {
        let command = ["/bin/sh", "-lc", "echo \"ok $(/usr/bin/touch /tmp/openclaw-allowlist-test-quoted-subst)\""]
        let command = ["/bin/sh", "-c", "echo \"ok $(/usr/bin/touch /tmp/openclaw-allowlist-test-quoted-subst)\""]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "echo \"ok $(/usr/bin/touch /tmp/openclaw-allowlist-test-quoted-subst)\"",
@@ -189,7 +308,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist fails closed on line-continued command substitution`() {
        let command = ["/bin/sh", "-lc", "echo $\\\n(/usr/bin/touch /tmp/openclaw-allowlist-test-line-cont-subst)"]
        let command = ["/bin/sh", "-c", "echo $\\\n(/usr/bin/touch /tmp/openclaw-allowlist-test-line-cont-subst)"]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "echo $\\\n(/usr/bin/touch /tmp/openclaw-allowlist-test-line-cont-subst)",
@@ -201,7 +320,7 @@ struct ExecAllowlistTests {
    @Test func `resolve for allowlist fails closed on chained line-continued command substitution`() {
        let command = [
            "/bin/sh",
            "-lc",
            "-c",
            "echo ok && $\\\n(/usr/bin/touch /tmp/openclaw-allowlist-test-chained-line-cont-subst)",
        ]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
@@ -213,7 +332,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist fails closed on quoted backticks`() {
        let command = ["/bin/sh", "-lc", "echo \"ok `/usr/bin/id`\""]
        let command = ["/bin/sh", "-c", "echo \"ok `/usr/bin/id`\""]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "echo \"ok `/usr/bin/id`\"",
@@ -226,7 +345,7 @@ struct ExecAllowlistTests {
        let fixtures = try Self.loadShellParserParityCases()
        for fixture in fixtures {
            let resolutions = ExecCommandResolution.resolveForAllowlist(
                command: ["/bin/sh", "-lc", fixture.command],
                command: ["/bin/sh", "-c", fixture.command],
                rawCommand: fixture.command,
                cwd: nil,
                env: ["PATH": "/usr/bin:/bin"])
@@ -276,7 +395,7 @@ struct ExecAllowlistTests {
        let command = [
            "/usr/bin/env",
            "/bin/sh",
            "-lc",
            "-c",
            "echo allowlisted && /usr/bin/touch /tmp/openclaw-allowlist-test",
        ]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
@@ -290,7 +409,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist unwraps env dispatch wrappers inside shell segments`() {
        let command = ["/bin/sh", "-lc", "env /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let command = ["/bin/sh", "-c", "env /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "env /usr/bin/touch /tmp/openclaw-allowlist-test",
@@ -302,7 +421,7 @@ struct ExecAllowlistTests {
    }

    @Test func `resolve for allowlist preserves env assignments inside shell segments`() {
        let command = ["/bin/sh", "-lc", "env FOO=bar /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let command = ["/bin/sh", "-c", "env FOO=bar /usr/bin/touch /tmp/openclaw-allowlist-test"]
        let resolutions = ExecCommandResolution.resolveForAllowlist(
            command: command,
            rawCommand: "env FOO=bar /usr/bin/touch /tmp/openclaw-allowlist-test",
@@ -326,8 +445,8 @@ struct ExecAllowlistTests {
    }

    @Test func `approval evaluator resolves shell payload from canonical wrapper text`() async {
        let command = ["/bin/sh", "-lc", "/usr/bin/printf ok"]
        let rawCommand = "/bin/sh -lc \"/usr/bin/printf ok\""
        let command = ["/bin/sh", "-c", "/usr/bin/printf ok"]
        let rawCommand = "/bin/sh -c \"/usr/bin/printf ok\""
        let evaluation = await ExecApprovalEvaluator.evaluate(
            command: command,
            rawCommand: rawCommand,
@@ -350,6 +469,32 @@ struct ExecAllowlistTests {
        #expect(patterns == ["/usr/bin/printf"])
    }

    @Test func `allow always patterns fail closed for env modified shell wrappers`() {
        let patterns = ExecCommandResolution.resolveAllowAlwaysPatterns(
            command: [
                "/usr/bin/env",
                "BASH_ENV=/tmp/payload.sh",
                "/bin/sh",
                "-lc",
                "/usr/bin/printf ok",
            ],
            cwd: nil,
            env: ["PATH": "/usr/bin:/bin"],
            rawCommand: "/usr/bin/printf ok")

        #expect(patterns.isEmpty)
    }

    @Test func `allow always patterns preserve generated sh lc raw payload binding`() {
        let patterns = ExecCommandResolution.resolveAllowAlwaysPatterns(
            command: ["/bin/sh", "-lc", "/usr/bin/printf safe_marker"],
            cwd: nil,
            env: ["PATH": "/usr/bin:/bin"],
            rawCommand: "/usr/bin/printf safe_marker")

        #expect(patterns == ["/usr/bin/printf"])
    }

    @Test func `match all requires every segment to match`() {
        let first = ExecCommandResolution(
            rawExecutable: "echo",

@@ -85,6 +85,48 @@ struct ExecSystemRunCommandValidatorTests {
        }
    }

    @Test func `fish attached c command requires canonical raw command binding`() {
        let command = ["/usr/bin/fish", "-c/tmp/payload.fish", "/usr/bin/printf safe_marker"]
        let result = ExecSystemRunCommandValidator.resolve(
            command: command,
            rawCommand: "/usr/bin/printf safe_marker")

        switch result {
        case .ok:
            Issue.record("expected rawCommand mismatch for attached fish command payload")
        case let .invalid(message):
            #expect(message.contains("rawCommand does not match command"))
        }
    }

    @Test func `startup shell wrappers require canonical raw command binding`() {
        for command in [
            ["/bin/bash", "-lc", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "--rcfile", "/tmp/payload.sh", "-i", "-c", "/usr/bin/printf safe_marker"],
            ["/bin/bash", "--login", "-c", "/usr/bin/printf safe_marker"],
            ["/usr/bin/fish", "--init-command=/tmp/payload.fish", "-c", "/usr/bin/printf safe_marker"],
        ] {
            let legacy = ExecSystemRunCommandValidator.resolve(
                command: command,
                rawCommand: "/usr/bin/printf safe_marker")
            switch legacy {
            case .ok:
                Issue.record("expected rawCommand mismatch for startup shell wrapper")
            case let .invalid(message):
                #expect(message.contains("rawCommand does not match command"))
            }

            let canonicalRaw = ExecCommandFormatter.displayString(for: command)
            let canonical = ExecSystemRunCommandValidator.resolve(command: command, rawCommand: canonicalRaw)
            switch canonical {
            case let .ok(resolved):
                #expect(resolved.displayCommand == canonicalRaw)
            case let .invalid(message):
                Issue.record("unexpected invalid result for canonical raw command: \(message)")
            }
        }
    }

    private static func loadContractCases() throws -> [SystemRunCommandContractCase] {
        let fixtureURL = try self.findContractFixtureURL()
        let data = try Data(contentsOf: fixtureURL)

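The canonical raw-command binding these tests exercise can be pictured as a round-trip check: the human-readable command string must be exactly the canonical rendering of argv, so a wrapper cannot smuggle a different payload in `rawCommand`. The helpers below are hypothetical simplifications of `ExecCommandFormatter.displayString(for:)` and the validator's comparison, assuming naive space-quoting only:

```swift
// Render argv the way a canonical display string would: quote tokens with spaces.
func displayString(for argv: [String]) -> String {
    argv.map { token in
        token.contains(" ") ? "\"\(token)\"" : token
    }.joined(separator: " ")
}

// The binding check: rawCommand must round-trip from argv exactly.
func rawCommandMatches(_ raw: String, argv: [String]) -> Bool {
    raw == displayString(for: argv)
}

let argv = ["/bin/bash", "-lc", "/usr/bin/printf safe_marker"]
assert(rawCommandMatches("/bin/bash -lc \"/usr/bin/printf safe_marker\"", argv: argv))
// A bare payload string does not prove the caller knows about the wrapper.
assert(!rawCommandMatches("/usr/bin/printf safe_marker", argv: argv))
```

This is only a sketch of the shape of the check; the production formatter handles quoting and escaping far more carefully.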
@@ -5144,6 +5144,7 @@ public struct ExecApprovalRequestParams: Codable, Sendable {
    public let security: AnyCodable?
    public let ask: AnyCodable?
    public let warningtext: AnyCodable?
    public let commandspans: [[String: AnyCodable]]?
    public let agentid: AnyCodable?
    public let resolvedpath: AnyCodable?
    public let sessionkey: AnyCodable?
@@ -5166,6 +5167,7 @@ public struct ExecApprovalRequestParams: Codable, Sendable {
        security: AnyCodable?,
        ask: AnyCodable?,
        warningtext: AnyCodable?,
        commandspans: [[String: AnyCodable]]?,
        agentid: AnyCodable?,
        resolvedpath: AnyCodable?,
        sessionkey: AnyCodable?,
@@ -5187,6 +5189,7 @@ public struct ExecApprovalRequestParams: Codable, Sendable {
        self.security = security
        self.ask = ask
        self.warningtext = warningtext
        self.commandspans = commandspans
        self.agentid = agentid
        self.resolvedpath = resolvedpath
        self.sessionkey = sessionkey
@@ -5210,6 +5213,7 @@ public struct ExecApprovalRequestParams: Codable, Sendable {
        case security
        case ask
        case warningtext = "warningText"
        case commandspans = "commandSpans"
        case agentid = "agentId"
        case resolvedpath = "resolvedPath"
        case sessionkey = "sessionKey"

@@ -1,4 +1,4 @@
885a734aa93cf04f6c14f8d83c1e96a66a5b96705327ea2de7b2aa7314238976 config-baseline.json
074eb9a1480ff40836d98090ccb9be3465345ac4b46e0d273b7995504bbb8008 config-baseline.core.json
98f80c92fc4fcb37d41470216ae6cd19b094d7f67b0ddc4983eba04aba314fe0 config-baseline.json
d9c4b2035178d3ffe637b751036f12082d4f26761681bb8496b86550565307e8 config-baseline.core.json
ed15b24c1ccf0234e6b3435149a6f1c1e709579d1259f1d09402688799b149bd config-baseline.channel.json
c4e8d8898eebc4d40f35b167c987870e426e6c82121696dc055ff929f6a24046 config-baseline.plugin.json
7a9ed89a6ff7e578bfcab7828ab660af59e62402a85bfbfc05d5ae3d975e9728 config-baseline.plugin.json

@@ -1,2 +1,2 @@
fecac0023b0a8de6334740483ef03500c72f3235e5b636e089bf581b00e8734a plugin-sdk-api-baseline.json
b427b2c8bddefb6c0ab4f411065adeec230d1e126a792ed30e6d0a45053dd4e3 plugin-sdk-api-baseline.jsonl
9f7ea91407a66fee6bcdebaf64bd15d0a6ae8b48cf100753c96f2ad29ad86390 plugin-sdk-api-baseline.json
4d516f6ac681cf55e916f712601abb8f5a7ddcd92a7710e8947f89c38e4054e7 plugin-sdk-api-baseline.jsonl

@@ -16,6 +16,14 @@ This directory owns docs authoring, Mintlify link rules, and docs i18n policy.
- For docs, UI copy, and picker lists, order services/providers alphabetically unless the section is explicitly describing runtime order or auto-detection order.
- Keep bundled plugin naming consistent with the repo-wide plugin terminology rules in the root `AGENTS.md`.

## Internal Docs

- Long-lived private operator docs belong in `~/Projects/manager/docs/`.
- Repo-local internal scratch/mirror docs may live under ignored `docs/internal/`.
- Never add `docs/internal/**` pages to `docs/docs.json` navigation or link them from public docs.
- `scripts/docs-sync-publish.mjs` excludes and prunes `docs/internal/**` from the public `openclaw/docs` publish repo if a page is force-added later.
- Internal docs may mention repo paths, private app names, 1Password item names, and runbooks, but never include secret values.

## Docs i18n

- Foreign-language docs are not maintained in this repo. The generated publish output lives in the separate `openclaw/docs` repo (often cloned locally as `../openclaw-docs`).

@@ -1,15 +1,15 @@
---
summary: "Switch from the BlueBubbles plugin to the bundled iMessage plugin without losing pairing, allowlists, or group bindings."
summary: "Migrate old BlueBubbles configs to the bundled iMessage plugin without losing pairing, allowlists, or group bindings."
read_when:
- Planning a move from BlueBubbles to the bundled iMessage plugin
- Translating BlueBubbles config keys to iMessage equivalents
- Rolling back a partial iMessage cutover
- Verifying imsg before enabling the iMessage plugin
title: "Coming from BlueBubbles"
---

The bundled `imessage` plugin now reaches the same private API surface as BlueBubbles (`react`, `edit`, `unsend`, `reply`, `sendWithEffect`, group management, attachments) by driving [`steipete/imsg`](https://github.com/steipete/imsg) over JSON-RPC. If you already run a Mac with `imsg` installed, you can drop the BlueBubbles server and let the plugin talk to Messages.app directly.

This guide is opt-in. BlueBubbles still works and remains the right choice if you cannot run `imsg` on the host where the Mac signs into iMessage (for example, if the Mac is unreachable from the gateway).
BlueBubbles support was removed. OpenClaw supports iMessage through `imsg` only. This guide is for migrating old `channels.bluebubbles` configs to `channels.imessage`; there is no other supported migration path.

## When this migration makes sense

@@ -17,11 +17,15 @@ This guide is opt-in. BlueBubbles still works and remains the right choice if yo
- You want one fewer moving part — no separate BlueBubbles server, no REST endpoint to authenticate, no webhook plumbing. Single CLI binary instead of a server + client app + helper.
- You are on a [supported macOS / `imsg` build](/channels/imessage#requirements-and-permissions-macos) where the private API probe reports `available: true`.

## When to stay on BlueBubbles
## What imsg does

- The Mac with Messages.app is on a network the gateway cannot reach via SSH.
- You depend on BlueBubbles features the bundled plugin does not yet cover (rich text formatting attributes beyond bold/italic/underline/strikethrough, BlueBubbles-specific webhook integrations).
- Your current setup hard-codes BlueBubbles webhook URLs into other systems that you cannot rewire.
`imsg` is a local macOS CLI for Messages. OpenClaw starts `imsg rpc` as a child process and talks JSON-RPC over stdin/stdout. There is no HTTP server, webhook URL, background daemon, launch agent, or port to expose.

- Reads come from `~/Library/Messages/chat.db` using a read-only SQLite handle.
- Live inbound messages come from `imsg watch` / `watch.subscribe`, which follows `chat.db` filesystem events with a polling fallback.
- Sends use Messages.app automation for normal text and file sends.
- Advanced actions use `imsg launch` to inject the `imsg` helper into Messages.app. That is what unlocks read receipts, typing indicators, rich sends, edit, unsend, threaded reply, tapbacks, and group management.
- Linux builds can inspect a copied `chat.db`, but cannot send, watch the live Mac database, or drive Messages.app. For OpenClaw iMessage, run `imsg` on the signed-in Mac or through an SSH wrapper to that Mac.
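
The JSON-RPC-over-stdio shape can be sketched like this. Assumptions are flagged: `watch.subscribe` is the method name this page mentions, but the newline-delimited framing and the parameter names here are illustrative, not the actual `imsg` RPC schema:

```python
import json

def frame_request(method: str, params: dict, req_id: int) -> bytes:
    """Serialize one JSON-RPC 2.0 request as a newline-delimited frame
    suitable for writing to a child process's stdin."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(req) + "\n").encode("utf-8")

def parse_frame(line: bytes) -> dict:
    """Parse one newline-delimited frame read from the child's stdout."""
    return json.loads(line.decode("utf-8"))

# Hypothetical subscription call; params are illustrative only.
frame = frame_request("watch.subscribe", {"chat_id": 42, "reactions": True}, 1)
echoed = parse_frame(frame)
```

In a real client the frame would be written to the spawned `imsg rpc` process's stdin and responses read line by line from its stdout; no port or daemon is involved.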

## Before you start

@@ -29,11 +33,34 @@ This guide is opt-in. BlueBubbles still works and remains the right choice if yo

```bash
brew install steipete/tap/imsg
imsg launch
imsg --version
imsg chats --limit 3
```

If `imsg chats` fails with `unable to open database file`, empty output, or `authorization denied`, grant Full Disk Access to the terminal, editor, Node process, Gateway service, or SSH parent process that launches `imsg`, then reopen that parent process.

2. Verify the read, watch, send, and RPC surfaces before changing OpenClaw config:

```bash
imsg chats --limit 10 --json | jq -s
imsg history --chat-id 42 --limit 10 --attachments --json | jq -s
imsg watch --chat-id 42 --reactions --json
imsg send --chat-id 42 --text "OpenClaw imsg test"
imsg rpc --help
```

2. Verify the private API bridge:
Replace `42` with a real chat id from `imsg chats`. Sending requires Automation permission for Messages.app. If OpenClaw will run through SSH, run these commands through the same SSH wrapper or user context that OpenClaw will use.

3. Enable the private API bridge when you need advanced actions:

```bash
imsg launch
imsg status --json
```

`imsg launch` requires SIP to be disabled. Basic send, history, and watch work without `imsg launch`; advanced actions do not.

4. Verify the bridge through OpenClaw:

```bash
openclaw channels status --probe
@@ -41,7 +68,7 @@ This guide is opt-in. BlueBubbles still works and remains the right choice if yo

You want `imessage.privateApi.available: true`. If it reports `false`, fix that first — see [Capability detection](/channels/imessage#private-api-actions).

3. Snapshot your config so you can roll back:
5. Snapshot your config:

```bash
cp ~/.openclaw/openclaw.json5 ~/.openclaw/openclaw.json5.bak
@@ -116,7 +143,7 @@ If the gateway logs `imessage: dropping group message from chat_id=<id>` or the

## Step-by-step

1. Add an iMessage block alongside the existing BlueBubbles block. Do not delete BlueBubbles yet:
1. Add an iMessage block alongside the existing BlueBubbles block. Keep the old block only as a copy source until the new path is verified:

```json5
{
@@ -146,7 +173,7 @@ If the gateway logs `imessage: dropping group message from chat_id=<id>` or the
}
```

2. **Dry-run probe** — start the gateway and confirm both channels report healthy:
2. **Dry-run probe** — start the gateway and confirm iMessage reports healthy:

```bash
openclaw gateway
@@ -156,12 +183,11 @@ If the gateway logs `imessage: dropping group message from chat_id=<id>` or the

Because `imessage.enabled` is still `false`, no inbound iMessage traffic is routed yet — but `--probe` exercises the bridge so you catch permission/install issues before the cutover.

3. **Cut over.** Disable BlueBubbles and enable iMessage in one config edit:
3. **Cut over.** Remove the BlueBubbles config and enable iMessage in one config edit:

```json5
{
  channels: {
    bluebubbles: { enabled: false }, // keep the rest of the block for rollback
    imessage: { enabled: true /* ... */ },
  },
}
@@ -175,11 +201,11 @@ If the gateway logs `imessage: dropping group message from chat_id=<id>` or the

6. **Verify the action surface** — from a paired DM, ask the agent to react, edit, unsend, reply, send a photo, and (in a group) rename the group / add or remove a participant. Each action should land natively in Messages.app. If any throws "iMessage `<action>` requires the imsg private API bridge", run `imsg launch` again and refresh `channels status --probe`.

7. **Stop the BlueBubbles server** once you have run on iMessage for at least a few hours of normal traffic. Remove the BlueBubbles block from config and restart the gateway.
7. **Remove the BlueBubbles server and config** once iMessage DMs, groups, and actions are verified. OpenClaw will not use `channels.bluebubbles`.

## Action parity at a glance

| Action | BlueBubbles | bundled iMessage |
| Action | legacy BlueBubbles | bundled iMessage |
| ---------------------------------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------ |
| Send text / SMS fallback | ✅ | ✅ |
| Send media (photo, video, file, voice) | ✅ | ✅ |
@@ -194,7 +220,7 @@ If the gateway logs `imessage: dropping group message from chat_id=<id>` or the
| Same-sender DM coalescing | ✅ | ✅ (DM-only; opt-in via `channels.imessage.coalesceSameSenderDms`) |
| Catchup of inbound messages received while gateway is down | ✅ (webhook replay + history fetch) | _(not yet — tracked at [#78649](https://github.com/openclaw/openclaw/issues/78649))_ |

The catchup gap is the most operationally significant one for production deployments: planned restarts, Mac sleep, or an unexpected gateway crash that takes more than a few seconds will silently drop any inbound iMessage traffic that arrives during the gap when running on bundled iMessage. BlueBubbles' webhook + history-fetch flow recovers those messages on reconnect. If your deployment is sensitive to that, stay on BlueBubbles until [#78649](https://github.com/openclaw/openclaw/issues/78649) lands.
The catchup gap is the most operationally significant one for production deployments: planned restarts, Mac sleep, or an unexpected gateway crash that takes more than a few seconds will silently drop any inbound iMessage traffic that arrives during the gap when running on bundled iMessage. BlueBubbles' webhook + history-fetch flow recovered those messages on reconnect, but BlueBubbles is no longer supported. There is no supported migration path that preserves catchup today; wait for [#78649](https://github.com/openclaw/openclaw/issues/78649).

## Pairing, sessions, and ACP bindings

@@ -202,25 +228,15 @@ The catchup gap is the most operationally significant one for production deploym
- **Sessions** stay scoped per agent + chat. DMs collapse into the agent main session under default `session.dmScope=main`; group sessions stay isolated per `chat_id`. The session keys differ (`agent:<id>:imessage:group:<chat_id>` vs the BlueBubbles equivalent) — old conversation history under BlueBubbles session keys does not carry into iMessage sessions.
- **ACP bindings** referencing `match.channel: "bluebubbles"` need to be updated to `"imessage"`. The `match.peer.id` shapes (`chat_id:`, `chat_guid:`, `chat_identifier:`, bare handle) are identical.

## Running both at once
## No rollback channel

You can keep both `bluebubbles` and `imessage` enabled during cutover testing. BlueBubbles' manifest still declares `preferOver: ["imessage"]`, so the auto-enable resolver continues to prefer BlueBubbles when both channels are configured — the bundled iMessage plugin will not pick up traffic until BlueBubbles is disabled (`channels.bluebubbles.enabled: false`) or removed from config.

If you want both channels to run simultaneously instead of in cutover mode, that is not currently supported through plugin auto-enable; use one channel at a time.

## Rollback

Because you kept the BlueBubbles config block:

1. Set `channels.bluebubbles.enabled: true` and `channels.imessage.enabled: false`.
2. Restart the gateway.
3. Inbound traffic returns to BlueBubbles. Reply caches and ACP bindings on the iMessage side stay on disk under `~/.openclaw/state/imessage/` and resume cleanly if you re-enable later.
There is no supported BlueBubbles runtime to switch back to. If iMessage verification fails, set `channels.imessage.enabled: false`, restart the Gateway, fix the `imsg` blocker, and retry the cutover.

The reply cache lives at `~/.openclaw/state/imessage/reply-cache.jsonl` (mode `0600`, parent dir `0700`). It is safe to delete if you want a clean slate.

## Related

- [iMessage](/channels/imessage) — full iMessage channel reference, including `imsg launch` setup and capability detection.
- [BlueBubbles](/channels/bluebubbles) — full BlueBubbles channel reference for the legacy path.
- `/channels/bluebubbles` — legacy URL that redirects to this migration guide.
- [Pairing](/channels/pairing) — DM authentication and pairing flow.
- [Channel Routing](/channels/channel-routing) — how the gateway picks a channel for outbound replies.

@@ -13,7 +13,7 @@ For OpenClaw iMessage deployments, use `imsg` on a signed-in macOS Messages host
</Note>

<Warning>
BlueBubbles is deprecated and no longer ships as a bundled OpenClaw channel. Migrate `channels.bluebubbles` configs to `channels.imessage`; OpenClaw now supports iMessage through `imsg` only. If you still need a BlueBubbles-backed bridge, publish or install it as a third-party plugin outside core.
BlueBubbles support was removed. Migrate `channels.bluebubbles` configs to `channels.imessage`; OpenClaw supports iMessage through `imsg` only.
</Warning>

Status: native external CLI integration. Gateway spawns `imsg rpc` and communicates over JSON-RPC on stdio (no separate daemon/port). Advanced actions require `imsg launch` and a successful private API probe.
@@ -150,12 +150,12 @@ To reach the advanced action surface that this channel page documents, you need

> Advanced features such as `read`, `typing`, `launch`, bridge-backed rich send, message mutation, and chat management are opt-in. They require SIP to be disabled and a helper dylib to be injected into `Messages.app`. `imsg launch` refuses to inject when SIP is enabled.

The helper-injection technique is a manual port of the BlueBubbles private-API surface (Apache-2.0 inspired) into `imsg`'s own dylib — no third-party binary, but the same SIP-disabled requirement that BlueBubbles' Private API mode has. There is no SIP-asymmetry between the two channels.
The helper-injection technique uses `imsg`'s own dylib to reach Messages private APIs. There is no third-party server or BlueBubbles runtime in the OpenClaw iMessage path.

<Warning>
**Disabling SIP is a real security tradeoff.** SIP is one of macOS's core protections against running modified system code; turning it off system-wide opens up additional attack surface and side effects. Notably, **disabling SIP on Apple Silicon Macs also disables the ability to install and run iOS apps on your Mac**.

Treat this as a deliberate operational choice, not a default. If your threat model can't tolerate SIP being off, both bundled iMessage and BlueBubbles will be limited to their basic modes — text and media send/receive only, no reactions / edit / unsend / effects / group ops on either channel.
Treat this as a deliberate operational choice, not a default. If your threat model can't tolerate SIP being off, bundled iMessage is limited to basic mode — text and media send/receive only, no reactions / edit / unsend / effects / group ops.
</Warning>

### Setup
@@ -170,13 +170,13 @@ Treat this as a deliberate operational choice, not a default. If your threat mod

The `imsg status --json` output reports `bridge_version`, `rpc_methods`, and per-method `selectors` so you can see what the current build supports before you start.

2. **Disable System Integrity Protection.** This is macOS-version-specific, identical to the BlueBubbles flow because the underlying Apple requirement is the same:
2. **Disable System Integrity Protection.** This is macOS-version-specific because the underlying Apple requirement depends on the OS and hardware:
- **macOS 10.13–10.15 (Sierra–Catalina):** disable Library Validation via Terminal, reboot to Recovery Mode, run `csrutil disable`, restart.
- **macOS 11+ (Big Sur and later), Intel:** Recovery Mode (or Internet Recovery), `csrutil disable`, restart.
- **macOS 11+, Apple Silicon:** power-button startup sequence to enter Recovery; on recent macOS versions hold the **Left Shift** key when you click Continue, then `csrutil disable`. Virtual-machine setups follow a separate flow — take a VM snapshot first.
- **macOS 26 / Tahoe:** library-validation policies and `imagent` private-entitlement checks have tightened further; `imsg` may need an updated build to keep up. If `imsg launch` injection or specific `selectors` start returning false after a macOS major upgrade, check `imsg`'s release notes before assuming the SIP step succeeded.

The [BlueBubbles Private API installation guide](https://docs.bluebubbles.app/private-api/installation) is the canonical step-by-step for the SIP-disable flow itself; the macOS-side steps are not specific to BB, only the helper that gets injected differs.
Follow Apple's Recovery-mode flow for your Mac to disable SIP before running `imsg launch`.

3. **Inject the helper.** With SIP disabled and Messages.app signed in:

@@ -200,7 +200,7 @@ If `openclaw channels status --probe` reports the channel as `works` but specifi

If SIP-disabled isn't acceptable for your threat model:

- Both `imsg` and BlueBubbles fall back to basic mode — text + media + receive only.
- `imsg` falls back to basic mode — text + media + receive only.
- The OpenClaw plugin still advertises text/media send and inbound monitoring; it just hides `react`, `edit`, `unsend`, `reply`, `sendWithEffect`, and group ops from the action surface (per the per-method capability gate).
- You can run a separate non-Apple-Silicon Mac (or a dedicated bot Mac) with SIP off for the iMessage workload, while keeping SIP enabled on your primary devices. See [Dedicated bot macOS user (separate iMessage identity)](#deployment-patterns) below.

@@ -533,7 +533,7 @@ When a user types a command and a URL together — e.g. `Dump https://example.co
1. A text message (`"Dump"`).
2. A URL-preview balloon (`"https://..."`) with OG-preview images as attachments.

The two rows arrive at OpenClaw ~0.8-2.0 s apart on most setups. Without coalescing, the agent receives the command alone on turn 1, replies (often "send me the URL"), and only sees the URL on turn 2 — at which point the command context is already lost. This is Apple's send pipeline, not anything OpenClaw or `imsg` introduces, so the same fix applies as it does on the BlueBubbles channel.
The two rows arrive at OpenClaw ~0.8-2.0 s apart on most setups. Without coalescing, the agent receives the command alone on turn 1, replies (often "send me the URL"), and only sees the URL on turn 2 — at which point the command context is already lost. This is Apple's send pipeline, not anything OpenClaw or `imsg` introduces.

`channels.imessage.coalesceSameSenderDms` opts a DM into merging consecutive same-sender rows into a single agent turn. Group chats continue to dispatch per-message so multi-user turn structure is preserved.
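
A minimal opt-in looks like this — the flag name comes from this page; the surrounding keys are assumed from the cutover example earlier:

```json5
{
  channels: {
    imessage: {
      enabled: true,
      coalesceSameSenderDms: true, // DM-only merge; groups keep per-message dispatch
    },
  },
}
```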
@@ -586,7 +586,7 @@ The two rows arrive at OpenClaw ~0.8-2.0 s apart on most setups. Without coalesc
- **Added latency for DM messages.** With the flag on, every DM (including standalone control commands and single-text follow-ups) waits up to the debounce window before dispatching, in case a payload row is coming. Group-chat messages keep instant dispatch.
- **Merged output is bounded.** Merged text caps at 4000 chars with an explicit `…[truncated]` marker; attachments cap at 20; source entries cap at 10 (first-plus-latest retained beyond that). Every source GUID is tracked in `coalescedMessageGuids` for downstream telemetry.
- **DM-only.** Group chats fall through to per-message dispatch so the bot stays responsive when multiple people are typing.
- **Opt-in, per-channel.** Other channels (Telegram, WhatsApp, Slack, …) are unaffected. The BlueBubbles channel has the same opt-in under `channels.bluebubbles.coalesceSameSenderDms`.
- **Opt-in, per-channel.** Other channels (Telegram, WhatsApp, Slack, …) are unaffected. Legacy BlueBubbles configs that set `channels.bluebubbles.coalesceSameSenderDms` should migrate that value to `channels.imessage.coalesceSameSenderDms`.

</Tab>
</Tabs>

@@ -258,7 +258,7 @@ curl "https://api.telegram.org/bot<bot_token>/getUpdates"

- Telegram is owned by the gateway process.
- Routing is deterministic: Telegram inbound replies back to Telegram (the model does not pick channels).
- Inbound messages normalize into the shared channel envelope with reply metadata and media placeholders.
- Inbound messages normalize into the shared channel envelope with reply metadata, media placeholders, and persisted reply-chain context for Telegram replies the gateway has observed.
- Group sessions are isolated by group ID. Forum topics append `:topic:<threadId>` to keep topics isolated.
- DM messages can carry `message_thread_id`; OpenClaw preserves the thread ID for replies but keeps DMs on the flat session by default. Configure `channels.telegram.dm.threadReplies: "inbound"`, `channels.telegram.direct.<chatId>.threadReplies: "inbound"`, `requireTopic: true`, or a matching topic config when you intentionally want DM topic session isolation.
- Long polling uses grammY runner with per-chat/per-thread sequencing. Overall runner sink concurrency uses `agents.defaults.maxConcurrent`.
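
For example, a DM topic-reply opt-in could be sketched as follows — the key path comes from the bullet above; the surrounding structure is assumed:

```json5
{
  channels: {
    telegram: {
      dm: { threadReplies: "inbound" }, // reply into the same DM thread the message arrived on
    },
  },
}
```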
@@ -773,7 +773,7 @@ curl "https://api.telegram.org/bot<bot_token>/getUpdates"
- `channels.telegram.timeoutSeconds` overrides Telegram API client timeout (if unset, grammY default applies). Bot clients clamp configured values below the 60-second outbound text/typing request guard so grammY does not abort visible reply delivery before OpenClaw's transport guard and fallback can run. Long polling still uses a 45-second `getUpdates` request guard so idle polls are not abandoned indefinitely.
- `channels.telegram.pollingStallThresholdMs` defaults to `120000`; tune between `30000` and `600000` only for false-positive polling-stall restarts.
- group context history uses `channels.telegram.historyLimit` or `messages.groupChat.historyLimit` (default 50); `0` disables.
- reply/quote/forward supplemental context is currently passed as received.
- reply/quote/forward supplemental context is normalized into a nearest-first reply chain when the gateway has observed the parent messages; the observed-message cache is persisted beside the session store. Telegram only includes one shallow `reply_to_message` in updates, so chains older than the cache are limited to Telegram's current update payload.
- Telegram allowlists primarily gate who can trigger the agent, not a full supplemental-context redaction boundary.
- DM history controls:
  - `channels.telegram.dmHistoryLimit`

@@ -170,7 +170,7 @@ configured OpenClaw model. If no configured model is usable yet, it can fall
back to local runtimes already present on the machine:

- Claude Code CLI: `claude-cli/claude-opus-4-7`
- Codex app-server harness: `openai/gpt-5.5` with `agentRuntime.id: "codex"`
- Codex app-server harness: `openai/gpt-5.5`
- Codex CLI: `codex-cli/gpt-5.5`

The model-assisted planner cannot mutate config directly. It must translate the

@@ -56,7 +56,7 @@ Notes:
- Doctor also scans `~/.openclaw/cron/jobs.json` (or `cron.store`) for legacy cron job shapes and can rewrite them in place before the scheduler has to auto-normalize them at runtime.
- On Linux, doctor warns when the user's crontab still runs legacy `~/.openclaw/bin/ensure-whatsapp.sh`; that script is no longer maintained and can log false WhatsApp gateway outages when cron lacks the systemd user-bus environment.
- When WhatsApp is enabled, doctor checks for a degraded Gateway event loop with local `openclaw-tui` clients still running. `doctor --fix` stops only verified local TUI clients so WhatsApp replies are not queued behind stale TUI refresh loops.
- Doctor rewrites legacy `openai-codex/*` model refs to canonical `openai/*` refs across primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and stale session route pins. `--fix` selects `agentRuntime.id: "codex"` only when the Codex plugin is installed, enabled, contributes the `codex` harness, and has usable OAuth; otherwise it selects `agentRuntime.id: "pi"` so the route stays on the default OpenClaw runner.
- Doctor rewrites legacy `openai-codex/*` model refs to canonical `openai/*` refs across primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and stale session route pins. `--fix` preserves explicit provider/model `agentRuntime` policy, removes stale whole-agent/session runtime pins, and leaves canonical OpenAI agent refs on the default Codex harness when the official OpenAI provider is in use.
- Doctor cleans legacy plugin dependency staging state created by older OpenClaw versions. It also repairs missing downloadable plugins that are referenced by config, such as `plugins.entries`, configured channels, configured provider/search settings, or configured agent runtimes. During package updates, doctor skips package-manager plugin repair until the package swap is complete; rerun `openclaw doctor --fix` afterward if a configured plugin still needs recovery. If the download fails, doctor reports the install error and preserves the configured plugin entry for the next repair attempt.
- Doctor repairs stale plugin config by removing missing plugin ids from `plugins.allow`/`plugins.entries`, plus matching dangling channel config, heartbeat targets, and channel model overrides when plugin discovery is healthy.
- Doctor quarantines invalid plugin config by disabling the affected `plugins.entries.<id>` entry and removing its invalid `config` payload. Gateway startup already skips only that bad plugin so other plugins and channels can keep running.

@@ -24,7 +24,7 @@ apply across the CLI.
| Network and nodes | [`directory`](/cli/directory) · [`nodes`](/cli/nodes) · [`devices`](/cli/devices) · [`node`](/cli/node) |
| Runtime and sandbox | [`approvals`](/cli/approvals) · `exec-policy` (see [`approvals`](/cli/approvals)) · [`sandbox`](/cli/sandbox) · [`tui`](/cli/tui) · `chat`/`terminal` (aliases for [`tui --local`](/cli/tui)) · [`browser`](/cli/browser) |
| Automation | [`cron`](/cli/cron) · [`tasks`](/cli/tasks) · [`hooks`](/cli/hooks) · [`webhooks`](/cli/webhooks) |
| Discovery and docs | [`dns`](/cli/dns) · [`docs`](/cli/docs) |
| Discovery and docs | [`dns`](/cli/dns) · [`docs`](/cli/docs) · [`path`](/cli/path) |
| Pairing and channels | [`pairing`](/cli/pairing) · [`qr`](/cli/qr) · [`channels`](/cli/channels) |
| Security and plugins | [`security`](/cli/security) · [`secrets`](/cli/secrets) · [`skills`](/cli/skills) · [`plugins`](/cli/plugins) · [`proxy`](/cli/proxy) |
| Legacy aliases | [`daemon`](/cli/daemon) (gateway service) · [`clawbot`](/cli/clawbot) (namespace) |

121
docs/cli/path.md
Normal file
@@ -0,0 +1,121 @@
---
summary: "CLI reference for `openclaw path` (inspect and edit workspace files via the `oc://` addressing scheme)"
read_when:
- You want to read or write a leaf inside a workspace file from the terminal
- You're scripting against workspace state and want a stable, kind-agnostic addressing scheme
- You're debugging an `oc://` path (validate the syntax, see what it resolves to)
title: "Path"
---

# `openclaw path`

Shell-level access to the `oc://` addressing substrate — one universal, kind-dispatched path scheme for inspecting and surgically editing workspace files (markdown, jsonc, jsonl, yaml). Self-hosters and editor extensions use it to read or write a single leaf inside a workspace file without scripting against the SDK directly.

## Subcommands

| Subcommand | Purpose |
| ----------------------- | ---------------------------------------------------------------------------- |
| `resolve <oc-path>` | Print the match at the path (or "not found"). |
| `find <pattern>` | Enumerate matches for a wildcard / predicate path. |
| `set <oc-path> <value>` | Write a leaf at the path. Supports `--dry-run`. |
| `validate <oc-path>` | Parse-only — print structural breakdown (file / section / item / field). |
| `emit <file>` | Round-trip a file through `parseXxx` + `emitXxx` (byte-fidelity diagnostic). |

## Global flags

| Flag | Purpose |
| --------------- | ------------------------------------------------------------------------ |
| `--cwd <dir>` | Resolve the file slot against this directory (default: `process.cwd()`). |
| `--file <path>` | Override the file slot's resolved path (absolute access). |
| `--json` | Force JSON output (default when stdout is not a TTY). |
| `--human` | Force human output (default when stdout is a TTY). |
| `--dry-run` | (only on `set`) print the bytes that would be written without writing. |

## `oc://` syntax

```
oc://FILE/SECTION/ITEM/FIELD?session=SCOPE
```

Slot rules — `field` requires `item`, `item` requires `section`. Across all four slots:

- **Quoted segments** — `"a/b.c"` survives `/` and `.` separators. `"\\"` and `"\""` are the only escapes inside quotes. The file slot is also quote-aware: `oc://"skills/email-drafter"/Tools/-1` treats `skills/email-drafter` as a single file path.
- **Predicates** — `[k=v]`, `[k!=v]`, `[k*=v]`, `[k^=v]`, `[k$=v]`, `[k<v]`, `[k<=v]`, `[k>v]`, `[k>=v]`.
- **Unions** — `{a,b,c}` matches any of the alternatives.
- **Wildcards** — `*` (single sub-segment) and `**` (zero-or-more, recursive). `find` accepts these; `resolve` and `set` reject them as ambiguous.
- **Positional** — `$first`, `$last`, `-N` (Nth from end).
- **Ordinal** — `#N` for Nth match.
- **Insertion markers** — `+`, `+key`, `+nnn` for keyed / indexed insertion (use with `set`).
- **Session scope** — `?session=cron:daily` etc. Orthogonal to slot nesting.

Reserved characters (`?`, `&`, `%`) outside quoted, predicate, or union segments are rejected. Control characters (U+0000–U+001F, U+007F) are rejected anywhere.

## Examples

```bash
# Validate a path (no filesystem access)
openclaw path validate 'oc://AGENTS.md/Tools/-1/risk'

# Read a leaf
openclaw path resolve 'oc://gateway.jsonc/version'

# Wildcard search
openclaw path find 'oc://session.jsonl/*/event' --file ./logs/session.jsonl

# Dry-run a write
openclaw path set 'oc://gateway.jsonc/version' '2.0' --dry-run

# Apply the write
openclaw path set 'oc://gateway.jsonc/version' '2.0'

# Byte-fidelity round-trip (diagnostic)
openclaw path emit ./AGENTS.md
```

## Exit codes

| Code | Meaning |
| ---- | -------------------------------------------------------------------------- |
| `0` | Success. (`resolve` / `find`: at least one match. `set`: write succeeded.) |
| `1` | No match, or `set` rejected by the substrate (no system-level error). |
| `2` | Argument or parse error. |

## Output mode

`openclaw path` is TTY-aware: human-readable output on a terminal, JSON when stdout is piped or redirected. `--json` and `--human` override the auto-detection.

## Notes
|
||||
|
||||
- `set` writes raw bytes through the substrate's emit path, which applies the
|
||||
redaction-sentinel guard automatically. A leaf carrying
|
||||
`__OPENCLAW_REDACTED__` (verbatim or as a substring) is refused at write
|
||||
time.
|
||||
- `set` on a JSONC file currently re-renders the file (drops comments and
|
||||
trailing-comma formatting) when it mutates a leaf. Read-path round-trip is
|
||||
byte-identical. A byte-splice editor that preserves comments through
|
||||
writes is planned as a follow-up.
|
||||
- `path` does not know about LKG. If the file is LKG-tracked, the next
|
||||
observe call decides whether to promote / recover. `set --batch` for
|
||||
atomic multi-set through the LKG promote/recover lifecycle is planned
|
||||
alongside the LKG-recovery substrate.
|
||||
|
||||
## Related
|
||||
|
||||
- [CLI reference](/cli)
|
||||
@@ -23,8 +23,11 @@ configuration. They are different layers:
|
||||
|
||||
You will also see the word **harness** in code. A harness is the implementation
|
||||
that provides an agent runtime. For example, the bundled Codex harness
|
||||
implements the `codex` runtime. Public config uses `agentRuntime.id`; `openclaw
|
||||
doctor --fix` rewrites older runtime-policy keys to that shape.
|
||||
implements the `codex` runtime. Public config uses `agentRuntime.id` on
|
||||
provider or model entries; whole-agent runtime keys are legacy and ignored.
|
||||
`openclaw doctor --fix` removes old whole-agent runtime pins and rewrites
|
||||
legacy runtime model refs to canonical provider/model refs plus model-scoped
|
||||
runtime policy where needed.
|
||||
|
||||
There are two runtime families:
|
||||
|
||||
@@ -33,9 +36,9 @@ There are two runtime families:
|
||||
`codex`.
|
||||
- **CLI backends** run a local CLI process while keeping the model ref
|
||||
canonical. For example, `anthropic/claude-opus-4-7` with
|
||||
`agentRuntime.id: "claude-cli"` means "select the Anthropic model, execute
|
||||
through Claude CLI." `claude-cli` is not an embedded harness id and must not
|
||||
be passed to AgentHarness selection.
|
||||
a model-scoped `agentRuntime.id: "claude-cli"` means "select the Anthropic
|
||||
model, execute through Claude CLI." `claude-cli` is not an embedded harness id
|
||||
and must not be passed to AgentHarness selection.
|
||||
|
||||
## Codex surfaces
|
||||
|
||||
@@ -87,9 +90,9 @@ This is the agent-facing decision tree:
|
||||
2. If the user asks for **Codex as the embedded runtime** or wants the normal
|
||||
subscription-backed Codex agent experience, use `openai/<model>`.
|
||||
3. If the user explicitly chooses **PI for an OpenAI model**, keep the model ref
|
||||
as `openai/<model>` and set `agentRuntime.id: "pi"`. A selected
|
||||
`openai-codex` auth profile is routed internally through PI's legacy
|
||||
Codex-auth transport.
|
||||
as `openai/<model>` and set provider/model runtime policy to
|
||||
`agentRuntime.id: "pi"`. A selected `openai-codex` auth profile is routed
|
||||
internally through PI's legacy Codex-auth transport.
|
||||
4. If legacy config still contains **`openai-codex/*` model refs**, repair it to
|
||||
`openai/<model>` with `openclaw doctor --fix`.
|
||||
5. If the user explicitly says **ACP**, **acpx**, or **Codex ACP adapter**, use
|
||||
@@ -132,21 +135,26 @@ This ownership split is the main design rule:
|
||||
|
||||
OpenClaw chooses an embedded runtime after provider and model resolution:
|
||||
|
||||
1. A session's recorded runtime wins. Config changes do not hot-switch an
|
||||
existing transcript to a different native thread system.
|
||||
2. `OPENCLAW_AGENT_RUNTIME=<id>` forces that runtime for new or reset sessions.
|
||||
3. `agents.defaults.agentRuntime.id` or `agents.list[].agentRuntime.id` can set
|
||||
`auto`, `pi`, a registered embedded harness id such as `codex`, or a
|
||||
supported CLI backend alias such as `claude-cli`.
|
||||
4. In `auto` mode, registered plugin runtimes can claim supported provider/model
|
||||
1. Model-scoped runtime policy wins. This can live in a configured provider
|
||||
model entry or in `agents.defaults.models["provider/model"].agentRuntime` /
|
||||
`agents.list[].models["provider/model"].agentRuntime`.
|
||||
2. Provider-scoped runtime policy comes next at
|
||||
`models.providers.<provider>.agentRuntime`.
|
||||
3. In `auto` mode, registered plugin runtimes can claim supported provider/model
|
||||
pairs.
|
||||
5. If no runtime claims a turn in `auto` mode, OpenClaw uses PI as the
|
||||
4. If no runtime claims a turn in `auto` mode, OpenClaw uses PI as the
|
||||
compatibility runtime. Use an explicit runtime id when the run must be
|
||||
strict.
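
That precedence can be sketched in config. This is a hedged sketch of the documented shape, not a verbatim sample; the model ids are illustrative:

```json5
{
  models: {
    providers: {
      openai: {
        // provider-scoped policy: applies to every openai/* agent turn…
        agentRuntime: { id: "codex" },
      },
    },
  },
  agents: {
    defaults: {
      models: {
        // …except here, where a model-scoped entry wins for this one model
        "openai/gpt-5.4-mini": { agentRuntime: { id: "pi" } },
      },
    },
  },
}
```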

Explicit plugin runtimes fail closed. For example, `agentRuntime.id: "codex"`
means Codex or a clear selection/runtime error; it is never silently routed back
to PI.
Whole-session and whole-agent runtime pins are ignored. That includes
`OPENCLAW_AGENT_RUNTIME`, session `agentHarnessId`/`agentRuntimeOverride` state,
`agents.defaults.agentRuntime`, and `agents.list[].agentRuntime`. Run
`openclaw doctor --fix` to remove stale whole-agent runtime config and convert
legacy runtime model refs where OpenClaw can preserve the intent.

Explicit provider/model plugin runtimes fail closed. For example,
`agentRuntime.id: "codex"` on a provider or model means Codex or a clear
selection/runtime error; it is never silently routed back to PI.

CLI backend aliases are different from embedded harness ids. The preferred
Claude CLI form is:
@@ -156,7 +164,11 @@ Claude CLI form is:
  agents: {
    defaults: {
      model: "anthropic/claude-opus-4-7",
      agentRuntime: { id: "claude-cli" },
      models: {
        "anthropic/claude-opus-4-7": {
          agentRuntime: { id: "claude-cli" },
        },
      },
    },
  },
}
@@ -164,15 +176,15 @@ Claude CLI form is:

Legacy refs such as `claude-cli/claude-opus-4-7` remain supported for
compatibility, but new config should keep the provider/model canonical and put
the execution backend in `agentRuntime.id`.
the execution backend in provider/model runtime policy.

`auto` mode is intentionally conservative for most providers. OpenAI agent
models are the exception: unset runtime and `auto` both resolve to the Codex
harness. Explicit PI runtime config remains an opt-in compatibility route for
`openai/*` agent turns; when paired with a selected `openai-codex` auth profile,
OpenClaw routes PI internally through the legacy Codex-auth transport while
keeping the public model ref as `openai/*`. Stale OpenAI PI session pins without
explicit config are repaired back to Codex.
keeping the public model ref as `openai/*`. Stale OpenAI PI session pins are
ignored by runtime selection and can be cleaned with `openclaw doctor --fix`.

If `openclaw doctor` warns that the `codex` plugin is enabled while
`openai-codex/*` remains in config, treat that as legacy route state. Run
@@ -206,10 +218,8 @@ diagnostics, not as provider names.
- A runtime id such as `codex` tells you which loop is executing the turn.
- A channel label such as Telegram or Discord tells you where the conversation is happening.

If a session still shows PI after changing runtime config, start a new session
with `/new` or clear the current one with `/reset`. Existing sessions keep their
recorded runtime so a transcript is not replayed through two incompatible native
session systems.
If a run still shows an unexpected runtime, inspect the selected provider/model
runtime policy first. Legacy session runtime pins no longer decide routing.

## Related


@@ -29,19 +29,19 @@ Reference for **LLM/model providers** (not chat channels like WhatsApp/Telegram)
<Accordion title="OpenAI provider/runtime split">
OpenAI-family routes are prefix-specific:

- `openai/<model>` plus `agents.defaults.agentRuntime.id: "codex"` uses the native Codex app-server harness. This is the usual ChatGPT/Codex subscription setup.
- `openai-codex/<model>` uses Codex OAuth in PI.
- `openai/<model>` without a Codex runtime override uses the direct OpenAI API-key provider in PI.
- `openai/<model>` uses the native Codex app-server harness for agent turns by default. This is the usual ChatGPT/Codex subscription setup.
- `openai-codex/<model>` is legacy config that doctor rewrites to `openai/<model>`.
- `openai/<model>` plus provider/model `agentRuntime.id: "pi"` uses PI for explicit API-key or compatibility routes.

See [OpenAI](/providers/openai) and [Codex harness](/plugins/codex-harness). If the provider/runtime split is confusing, read [Agent runtimes](/concepts/agent-runtimes) first.

Plugin auto-enable follows the same boundary: `openai-codex/<model>` belongs to the OpenAI plugin, while the Codex plugin is enabled by `agentRuntime.id: "codex"` or legacy `codex/<model>` refs.
Plugin auto-enable follows the same boundary: `openai/*` agent refs enable the Codex plugin for the default route, and explicit provider/model `agentRuntime.id: "codex"` or legacy `codex/<model>` refs also require it.

GPT-5.5 is available through the native Codex app-server harness when `agentRuntime.id: "codex"` is set, through `openai-codex/gpt-5.5` in PI for Codex OAuth, and through `openai/gpt-5.5` in PI for direct API-key traffic when your account exposes it.
GPT-5.5 is available through the native Codex app-server harness by default on `openai/gpt-5.5`, and through PI only when provider/model runtime policy explicitly selects `pi`.

</Accordion>
<Accordion title="CLI runtimes">
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set `agents.defaults.agentRuntime.id` to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set provider/model runtime policy to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.

Legacy `claude-cli/*`, `google-gemini-cli/*`, and `codex-cli/*` refs migrate back to canonical provider refs with the runtime recorded separately.
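
As a hedged sketch of provider-scoped runtime policy for one of those CLI backends (the provider and model ids are illustrative):

```json5
{
  models: {
    providers: {
      google: {
        // every google/* agent turn executes through the local Gemini CLI backend
        agentRuntime: { id: "google-gemini-cli" },
      },
    },
  },
  agents: {
    defaults: {
      model: "google/gemini-3.1-pro-preview", // canonical provider/model ref
    },
  },
}
```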

@@ -118,7 +118,7 @@ OpenClaw ships with the pi-ai catalog. These providers require **no** `models.pr
- Direct public Anthropic requests support the shared `/fast` toggle and `params.fastMode`, including API-key and OAuth-authenticated traffic sent to `api.anthropic.com`; OpenClaw maps that to Anthropic `service_tier` (`auto` vs `standard_only`)
- Preferred Claude CLI config keeps the model ref canonical and selects the CLI
  backend separately: `anthropic/claude-opus-4-7` with
  `agents.defaults.agentRuntime.id: "claude-cli"`. Legacy
  model-scoped `agentRuntime.id: "claude-cli"`. Legacy
  `claude-cli/claude-opus-4-7` refs still work for compatibility.

<Note>
@@ -135,8 +135,8 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope

- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- PI model ref: `openai-codex/gpt-5.5`
- Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.agentRuntime.id: "codex"`
- Legacy PI model ref: `openai-codex/gpt-5.5`
- Native Codex app-server harness ref: `openai/gpt-5.5`
- Native Codex app-server harness docs: [Codex harness](/plugins/codex-harness)
- Legacy model refs: `codex/gpt-*`
- Plugin boundary: `openai-codex/*` loads the OpenAI plugin; the native Codex app-server plugin is selected only by the Codex harness runtime or legacy `codex/*` refs.
@@ -148,8 +148,8 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
- `openai-codex/gpt-5.5` uses the Codex catalog native `contextWindow = 400000` and default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
- For the common subscription plus native Codex runtime route, sign in with `openai-codex` auth but configure `openai/gpt-5.5` plus `agents.defaults.agentRuntime.id: "codex"`.
- Use `openai-codex/gpt-5.5` only when you want the Codex OAuth/subscription route through PI; use `openai/gpt-5.5` without the Codex runtime override when your API-key setup and local catalog expose the public API route.
- For the common subscription plus native Codex runtime route, sign in with `openai-codex` auth but configure `openai/gpt-5.5`; OpenAI agent turns select Codex by default.
- Use provider/model `agentRuntime.id: "pi"` only when you want a compatibility route through PI; otherwise keep `openai/gpt-5.5` on the default Codex harness.
- Older `openai-codex/gpt-5.1*`, `openai-codex/gpt-5.2*`, and `openai-codex/gpt-5.3*` refs are suppressed because ChatGPT/Codex OAuth accounts reject them; use `openai-codex/gpt-5.5` or the native Codex runtime route instead.

```json5
@@ -158,7 +158,6 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.5" },
      agentRuntime: { id: "codex" },
    },
  },
}

@@ -23,7 +23,7 @@ sidebarTitle: "Models CLI"
</Card>
</CardGroup>

Model refs choose a provider and model. They do not usually choose the low-level agent runtime. For example, `openai/gpt-5.5` can run through the normal OpenAI provider path or through the Codex app-server runtime, depending on `agents.defaults.agentRuntime.id`. In Codex runtime mode, the `openai/gpt-*` ref does not imply API-key billing; auth can come from a Codex account or `openai-codex` auth profile. See [Agent runtimes](/concepts/agent-runtimes).
Model refs choose a provider and model. They do not usually choose the low-level agent runtime. OpenAI agent refs are the main exception: `openai/gpt-5.5` runs through the Codex app-server runtime by default on the official OpenAI provider. Explicit runtime overrides belong on provider/model policy, not on the whole agent or session. In Codex runtime mode, the `openai/gpt-*` ref does not imply API-key billing; auth can come from a Codex account or `openai-codex` auth profile. See [Agent runtimes](/concepts/agent-runtimes).

## How model selection works


@@ -559,15 +559,17 @@ A green run completes in well under 30 seconds and `slack-qa-report.md` shows bo

### Convex credential pool

Telegram, Discord, and Slack lanes can lease credentials from a shared Convex pool instead of reading the env vars above. Pass `--credential-source convex` (or set `OPENCLAW_QA_CREDENTIAL_SOURCE=convex`); QA Lab acquires an exclusive lease, heartbeats it for the duration of the run, and releases it on shutdown. Pool kinds are `"telegram"`, `"discord"`, and `"slack"`.
Telegram, Discord, Slack, and WhatsApp lanes can lease credentials from a shared Convex pool instead of reading the env vars above. Pass `--credential-source convex` (or set `OPENCLAW_QA_CREDENTIAL_SOURCE=convex`); QA Lab acquires an exclusive lease, heartbeats it for the duration of the run, and releases it on shutdown. Pool kinds are `"telegram"`, `"discord"`, `"slack"`, and `"whatsapp"`.

Payload shapes the broker validates on `admin/add`:

- Telegram (`kind: "telegram"`): `{ groupId: string, driverToken: string, sutToken: string }` - `groupId` must be a numeric chat-id string.
- Discord (`kind: "discord"`): `{ guildId: string, channelId: string, driverBotToken: string, sutBotToken: string, sutApplicationId: string }`.
- Slack (`kind: "slack"`): `{ channelId: string, driverBotToken: string, sutBotToken: string, sutAppToken: string }` - `channelId` must match `^[A-Z][A-Z0-9]+$` (a Slack id like `Cxxxxxxxxxx`). See [Setting up the Slack workspace](#setting-up-the-slack-workspace) for app and scope provisioning.
- WhatsApp (`kind: "whatsapp"`): `{ driverPhoneE164: string, sutPhoneE164: string, driverAuthArchiveBase64: string, sutAuthArchiveBase64: string, groupJid?: string }` - phone numbers must be distinct E.164 strings.
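
For example, a Telegram pool entry matching the shape above (every value here is fabricated) would be submitted to `admin/add` as:

```json5
{
  kind: "telegram",
  groupId: "-1001234567890", // must be a numeric chat-id string
  driverToken: "<driver-bot-token>",
  sutToken: "<sut-bot-token>",
}
```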

Operational env vars and the Convex broker endpoint contract live in [Testing → Shared Telegram credentials via Convex](/help/testing#shared-telegram-credentials-via-convex-v1) (the section name predates Discord support; the broker semantics are identical for both kinds).
Slack lanes can also use the pool. Slack payload shape checks currently live in the Slack QA runner rather than the broker; use `{ channelId: string, driverBotToken: string, sutBotToken: string, sutAppToken: string }`, with a Slack channel id like `Cxxxxxxxxxx`. See [Setting up the Slack workspace](#setting-up-the-slack-workspace) for app and scope provisioning.

Operational env vars and the Convex broker endpoint contract live in [Testing → Shared Telegram credentials via Convex](/help/testing#shared-telegram-credentials-via-convex-v1) (the section name predates the multi-channel pool; the lease semantics are shared across kinds).

## Repo-backed seeds


@@ -54,7 +54,7 @@
  "redirects": [
    {
      "source": "/channels/bluebubbles",
      "destination": "/channels/imessage"
      "destination": "/channels/imessage-from-bluebubbles"
    },
    {
      "source": "/install/migrating-matrix",

@@ -336,9 +336,6 @@ Time format in system prompt. Default: `auto` (OS preference).
      fallbacks: ["openai/gpt-5.4-mini"],
    },
    params: { cacheRetention: "long" }, // global default provider params
    agentRuntime: {
      id: "pi", // pi | auto | registered harness id, e.g. codex
    },
    pdfMaxBytesMb: 10,
    pdfMaxPages: 20,
    thinkingDefault: "low",
@@ -398,25 +395,28 @@ Time format in system prompt. Default: `auto` (OS preference).
- `params.chat_template_kwargs`: vLLM/OpenAI-compatible chat-template arguments merged into top-level `api: "openai-completions"` request bodies. For `vllm/nemotron-3-*` with thinking off, the bundled vLLM plugin automatically sends `enable_thinking: false` and `force_nonempty_content: true`; explicit `chat_template_kwargs` override generated defaults, and `extra_body.chat_template_kwargs` still has final precedence. For vLLM Qwen thinking controls, set `params.qwenThinkingFormat` to `"chat-template"` or `"top-level"` on that model entry.
- `compat.supportedReasoningEfforts`: per-model OpenAI-compatible reasoning effort list. Include `"xhigh"` for custom endpoints that truly accept it; OpenClaw then exposes `/think xhigh` in command menus, Gateway session rows, session patch validation, agent CLI validation, and `llm-task` validation for that configured provider/model. Use `compat.reasoningEffortMap` when the backend wants a provider-specific value for a canonical level.
- `params.preserveThinking`: Z.AI-only opt-in for preserved thinking. When enabled and thinking is on, OpenClaw sends `thinking.clear_thinking: false` and replays prior `reasoning_content`; see [Z.AI thinking and preserved thinking](/providers/zai#thinking-and-preserved-thinking).
- `agentRuntime`: default low-level agent runtime policy. Omitted id defaults to OpenClaw Pi. Use `id: "pi"` to force the built-in PI harness, `id: "auto"` to let registered plugin harnesses claim supported models and use PI when none match, a registered harness id such as `id: "codex"` to require that harness, or a supported CLI backend alias such as `id: "claude-cli"`. Explicit plugin runtimes fail closed when the harness is unavailable or fails. Keep model refs canonical as `provider/model`; select Codex, Claude CLI, Gemini CLI, and other execution backends through runtime config instead of legacy runtime provider prefixes. See [Agent runtimes](/concepts/agent-runtimes) for how this differs from provider/model selection.
- Runtime policy belongs on providers or models, not on `agents.defaults`. Use `models.providers.<provider>.agentRuntime` for provider-wide rules or `agents.defaults.models["provider/model"].agentRuntime` / `agents.list[].models["provider/model"].agentRuntime` for model-specific rules. OpenAI agent models on the official OpenAI provider select Codex by default.
- Config writers that mutate these fields (for example `/models set`, `/models set-image`, and fallback add/remove commands) save canonical object form and preserve existing fallback lists when possible.
- `maxConcurrent`: max parallel agent runs across sessions (each session still serialized). Default: 4.

### `agents.defaults.agentRuntime`

`agentRuntime` controls which low-level executor runs agent turns. Most
deployments should keep the default OpenClaw Pi runtime. Use it when a trusted
plugin provides a native harness, such as the bundled Codex app-server harness,
or when you want a supported CLI backend such as Claude CLI. For the mental
model, see [Agent runtimes](/concepts/agent-runtimes).
### Runtime policy

```json5
{
  models: {
    providers: {
      openai: {
        agentRuntime: { id: "codex" },
      },
    },
  },
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
      models: {
        "anthropic/claude-opus-4-7": {
          agentRuntime: { id: "claude-cli" },
        },
      },
    },
@@ -425,11 +425,9 @@ model, see [Agent runtimes](/concepts/agent-runtimes).

- `id`: `"auto"`, `"pi"`, a registered plugin harness id, or a supported CLI backend alias. The bundled Codex plugin registers `codex`; the bundled Anthropic plugin provides the `claude-cli` CLI backend.
- `id: "auto"` lets registered plugin harnesses claim supported turns and uses PI when no harness matches. An explicit plugin runtime such as `id: "codex"` requires that harness and fails closed if it is unavailable or fails.
- Environment override: `OPENCLAW_AGENT_RUNTIME=<id|auto|pi>` overrides `id` for that process.
- OpenAI agent models use the Codex harness by default; `agentRuntime.id: "codex"` remains valid when you want to make that explicit.
- For Claude CLI deployments, prefer `model: "anthropic/claude-opus-4-7"` plus `agentRuntime.id: "claude-cli"`. Legacy `claude-cli/claude-opus-4-7` model refs still work for compatibility, but new config should keep provider/model selection canonical and put the execution backend in `agentRuntime.id`.
- Older runtime-policy keys are rewritten to `agentRuntime` by `openclaw doctor --fix`.
- Harness choice is pinned per session id after the first embedded run. Config/env changes affect new or reset sessions, not an existing transcript. Legacy OpenAI sessions with transcript history but no recorded pin use Codex; stale OpenAI PI pins can be repaired with `openclaw doctor --fix`. `/status` reports the effective runtime, for example `Runtime: OpenClaw Pi Default` or `Runtime: OpenAI Codex`.
- Whole-agent runtime keys are legacy. `agents.defaults.agentRuntime`, `agents.list[].agentRuntime`, session runtime pins, and `OPENCLAW_AGENT_RUNTIME` are ignored by runtime selection. Run `openclaw doctor --fix` to remove stale values.
- OpenAI agent models use the Codex harness by default; provider/model `agentRuntime.id: "codex"` remains valid when you want to make that explicit.
- For Claude CLI deployments, prefer `model: "anthropic/claude-opus-4-7"` plus model-scoped `agentRuntime.id: "claude-cli"`. Legacy `claude-cli/claude-opus-4-7` model refs still work for compatibility, but new config should keep provider/model selection canonical and put the execution backend in provider/model runtime policy.
- This only controls text agent-turn execution. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.

**Built-in alias shorthands** (only apply when the model is in `agents.defaults.models`):
@@ -959,7 +957,6 @@ for provider examples and precedence.
    thinkingDefault: "high", // per-agent thinking level override
    reasoningDefault: "on", // per-agent reasoning visibility override
    fastModeDefault: false, // per-agent fast mode override
    agentRuntime: { id: "auto" },
    params: { cacheRetention: "none" }, // overrides matching defaults.models params by key
    tts: {
      providers: {
@@ -1006,7 +1003,7 @@ for provider examples and precedence.
- `thinkingDefault`: optional per-agent default thinking level (`off | minimal | low | medium | high | xhigh | adaptive | max`). Overrides `agents.defaults.thinkingDefault` for this agent when no per-message or session override is set. The selected provider/model profile controls which values are valid; for Google Gemini, `adaptive` keeps provider-owned dynamic thinking (`thinkingLevel` omitted on Gemini 3/3.1, `thinkingBudget: -1` on Gemini 2.5).
- `reasoningDefault`: optional per-agent default reasoning visibility (`on | off | stream`). Overrides `agents.defaults.reasoningDefault` for this agent when no per-message or session reasoning override is set.
- `fastModeDefault`: optional per-agent default for fast mode (`true | false`). Applies when no per-message or session fast-mode override is set.
- `agentRuntime`: optional per-agent low-level runtime policy override. Use `{ id: "codex" }` to make one agent Codex-only while other agents keep the default PI fallback in `auto` mode.
- `models`: optional per-agent model catalog/runtime overrides keyed by full `provider/model` ids. Use `models["provider/model"].agentRuntime` for per-agent runtime exceptions.
- `runtime`: optional per-agent runtime descriptor. Use `type: "acp"` with `runtime.acp` defaults (`agent`, `backend`, `mode`, `cwd`) when the agent should default to ACP harness sessions.
- `identity.avatar`: workspace-relative path, `http(s)` URL, or `data:` URI.
- `identity` derives defaults: `ackReaction` from `emoji`, `mentionPatterns` from `name`/`emoji`.
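
A hedged sketch of the per-agent `models` override (the agent id and surrounding list-entry shape are illustrative): one listed agent routes a single model through Claude CLI while other agents keep the defaults.

```json5
{
  agents: {
    list: [
      {
        id: "coder", // illustrative agent id
        models: {
          // per-agent runtime exception for this one model
          "anthropic/claude-opus-4-7": {
            agentRuntime: { id: "claude-cli" },
          },
        },
      },
    ],
  },
}
```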
|
||||
|
||||
@@ -585,7 +585,7 @@ When Mattermost native commands are enabled:
|
||||
|
||||
OpenClaw spawns `imsg rpc` (JSON-RPC over stdio). No daemon or port required. This is the preferred path for new OpenClaw iMessage setups when the host can grant Messages database and Automation permissions.
|
||||
|
||||
BlueBubbles is deprecated and no longer ships as a bundled OpenClaw channel. Migrate `channels.bluebubbles` configs to `channels.imessage`; third-party BlueBubbles bridges belong outside core.
|
||||
BlueBubbles support was removed. Migrate `channels.bluebubbles` configs to `channels.imessage`; OpenClaw supports iMessage through `imsg` only.
|
||||
|
||||
If the Gateway is not running on the signed-in Messages Mac, keep `channels.imessage.enabled=true` and set `channels.imessage.cliPath` to an SSH wrapper that runs `imsg "$@"` on that Mac. The default local `imsg` path is macOS-only.
|
||||
|
||||
|
||||
@@ -87,7 +87,7 @@ cat ~/.openclaw/openclaw.json
|
||||
- Legacy on-disk state migration (sessions/agent dir/WhatsApp auth).
|
||||
- Legacy plugin manifest contract key migration (`speechProviders`, `realtimeTranscriptionProviders`, `realtimeVoiceProviders`, `mediaUnderstandingProviders`, `imageGenerationProviders`, `videoGenerationProviders`, `webFetchProviders`, `webSearchProviders` → `contracts`).
|
||||
- Legacy cron store migration (`jobId`, `schedule.cron`, top-level delivery/payload fields, payload `provider`, simple `notify: true` webhook fallback jobs).
|
||||
- Legacy agent runtime-policy migration to `agents.defaults.agentRuntime` and `agents.list[].agentRuntime`.
|
||||
- Legacy whole-agent runtime-policy cleanup; provider/model runtime policy is the active route selector.
|
||||
- Stale plugin config cleanup when plugins are enabled; when `plugins.enabled=false`, stale plugin references are treated as inert containment config and are preserved.
|
||||
|
||||
</Accordion>
|
||||
@@ -109,7 +109,7 @@ cat ~/.openclaw/openclaw.json
|
||||
- Channel status warnings (probed from the running gateway).
|
||||
- Channel-specific permission checks live under `openclaw channels capabilities`; for example, Discord voice channel permissions are audited with `openclaw channels capabilities --channel discord --target channel:<channel-id>`.
|
||||
- WhatsApp responsiveness checks for degraded Gateway event-loop health with local TUI clients still running; `--fix` stops only verified local TUI clients.
|
||||
- Codex route repair for legacy `openai-codex/*` model refs in primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and session route pins; `--fix` rewrites them to `openai/*` and selects `agentRuntime.id: "codex"` only when the Codex plugin is installed, enabled, contributes the `codex` harness, and has usable OAuth. Otherwise it selects `agentRuntime.id: "pi"`.
|
||||
- Codex route repair for legacy `openai-codex/*` model refs in primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and session route pins; `--fix` rewrites them to `openai/*`, removes stale session/whole-agent runtime pins, and leaves canonical OpenAI agent refs on the default Codex harness.
|
||||
- Supervisor config audit (launchd/systemd/schtasks) with optional repair.
|
||||
- Embedded proxy environment cleanup for gateway services that captured shell `HTTP_PROXY` / `HTTPS_PROXY` / `NO_PROXY` values during install or update.
|
||||
- Gateway runtime best-practice checks (Node vs Bun, version-manager paths).
|
||||
@@ -269,8 +269,8 @@ That stages grounded durable candidates into the short-term dreaming store while
 In `--fix` / `--repair` mode, doctor rewrites affected default-agent and per-agent refs, including primary models, fallbacks, heartbeat/subagent/compaction overrides, hooks, channel model overrides, and stale persisted session route state:

 - `openai-codex/gpt-*` becomes `openai/gpt-*`.
-- The matching agent runtime becomes `agentRuntime.id: "codex"` only when Codex is installed, enabled, contributes the `codex` harness, and has usable OAuth.
-- Otherwise the matching agent runtime becomes `agentRuntime.id: "pi"`.
+- Stale whole-agent runtime config and persisted session runtime pins are removed because runtime selection is provider/model-scoped.
+- Explicit provider/model runtime policy is preserved.
 - Existing model fallback lists are preserved with their legacy entries rewritten; copied per-model settings move from the legacy key to the canonical `openai/*` key.
 - Persisted session `modelProvider`/`providerOverride`, `model`/`modelOverride`, fallback notices, auth-profile pins, and Codex harness pins are repaired across all discovered agent session stores.
 - `/codex ...` means "control or bind a native Codex conversation from chat."

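As an illustrative sketch of that repair in config terms (the comments mark the pre-fix state; the exact rewrite depends on what doctor finds in your config):

```json5
{
  agents: {
    defaults: {
      // before --fix: model: { primary: "openai-codex/gpt-5.5" }
      model: { primary: "openai/gpt-5.5" },
      // a stale whole-agent pin such as agentRuntime: { id: "codex" }
      // is removed; the canonical openai/* ref selects Codex by default
    },
  },
}
```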
@@ -594,12 +594,11 @@ and troubleshooting see the main [FAQ](/help/faq).

 <Accordion title="How does Codex auth work?">
 OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Use
-`openai/gpt-5.5` with `agentRuntime.id: "codex"` for the common setup:
-ChatGPT/Codex subscription auth plus native Codex app-server execution. Use
-`openai-codex/gpt-5.5` only when you want Codex OAuth through the default
-Codex runtime. Direct OpenAI API-key access remains available for non-agent
-OpenAI API surfaces and for agent models through an ordered
-`openai-codex` API-key profile.
+`openai/gpt-5.5` for the common setup: ChatGPT/Codex subscription auth plus
+native Codex app-server execution. `openai-codex/gpt-*` model refs are
+legacy config repaired by `openclaw doctor --fix`. Direct OpenAI API-key
+access remains available for non-agent OpenAI API surfaces and for agent
+models through an ordered `openai-codex` API-key profile.
 See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
 </Accordion>

@@ -150,7 +150,7 @@ troubleshooting, see the main [FAQ](/help/faq).
 - **Native Codex coding agent:** set `agents.defaults.model.primary` to `openai/gpt-5.5`. Sign in with `openclaw models auth login --provider openai-codex` when you want ChatGPT/Codex subscription auth.
 - **Direct OpenAI API tasks outside the agent loop:** configure `OPENAI_API_KEY` for images, embeddings, speech, realtime, and other non-agent OpenAI API surfaces.
 - **OpenAI agent API-key auth:** use `/model openai/gpt-5.5` with an ordered `openai-codex` API-key profile.
-- **Sub-agents:** route coding tasks to a Codex-only agent with its own model and `agentRuntime` default.
+- **Sub-agents:** route coding tasks to a Codex-focused agent with its own `openai/gpt-5.5` model.

 See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).

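The native-coding-agent bullet above can be sketched as a minimal config fragment, assuming everything else is left at defaults:

```json5
{
  agents: {
    defaults: {
      // canonical OpenAI ref; OpenAI agent turns route through the
      // Codex app-server harness by default
      model: { primary: "openai/gpt-5.5" },
    },
  },
}
```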
@@ -285,8 +285,8 @@ Docker notes:
 - Goal: validate the plugin-owned Codex harness through the normal gateway
   `agent` method:
   - load the bundled `codex` plugin
-  - select `OPENCLAW_AGENT_RUNTIME=codex`
-  - send a first gateway agent turn to `openai/gpt-5.5` with the Codex harness forced
+  - select `openai/gpt-5.5`, which routes OpenAI agent turns through Codex by default
+  - send a first gateway agent turn to `openai/gpt-5.5` with the Codex harness selected
   - send a second turn to the same OpenClaw session and verify the app-server
     thread can resume
   - run `/codex status` and `/codex models` through the same gateway command
@@ -300,8 +300,8 @@ Docker notes:
 - Optional image probe: `OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1`
 - Optional MCP/tool probe: `OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1`
 - Optional Guardian probe: `OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1`
-- The smoke uses `agentRuntime.id: "codex"` so a broken Codex harness cannot
-  pass by silently falling back to PI.
+- The smoke forces provider/model `agentRuntime.id: "codex"` so a broken Codex
+  harness cannot pass by silently falling back to PI.
 - Auth: Codex app-server auth from the local Codex subscription login. Docker
   smokes can also provide `OPENAI_API_KEY` for non-Codex probes when applicable,
   plus optional copied `~/.codex/auth.json` and `~/.codex/config.toml`.

@@ -322,8 +322,9 @@ Live transport lanes share one standard contract so new transports do not drift;
 ### Shared Telegram credentials via Convex (v1)

 When `--credential-source convex` (or `OPENCLAW_QA_CREDENTIAL_SOURCE=convex`) is enabled for
-`openclaw qa telegram`, QA lab acquires an exclusive lease from a Convex-backed pool, heartbeats
-that lease while the lane is running, and releases the lease on shutdown.
+live transport QA, QA lab acquires an exclusive lease from a Convex-backed pool, heartbeats that
+lease while the lane is running, and releases the lease on shutdown. The section name predates
+Discord, Slack, and WhatsApp support; the lease contract is shared across kinds.

 Reference Convex project scaffold:

@@ -397,6 +398,16 @@ Payload shape for Telegram kind:
 - `groupId` must be a numeric Telegram chat id string.
 - `admin/add` validates this shape for `kind: "telegram"` and rejects malformed payloads.

+Broker-validated multi-channel payloads:
+
+- Discord: `{ guildId: string, channelId: string, driverBotToken: string, sutBotToken: string, sutApplicationId: string, voiceChannelId?: string }`
+- WhatsApp: `{ driverPhoneE164: string, sutPhoneE164: string, driverAuthArchiveBase64: string, sutAuthArchiveBase64: string, groupJid?: string }`
+
+Slack lanes can also lease from the pool, but Slack payload validation currently
+lives in the Slack QA runner rather than the broker. Use
+`{ channelId: string, driverBotToken: string, sutBotToken: string, sutAppToken: string }`
+for Slack rows.
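A hedged example of a Slack pool row in that shape — every value below is a placeholder, not a working credential:

```json5
{
  channelId: "C0123456789",                  // Slack channel the lane drives
  driverBotToken: "xoxb-driver-placeholder", // driver bot credential
  sutBotToken: "xoxb-sut-placeholder",       // system-under-test bot credential
  sutAppToken: "xapp-sut-placeholder",       // app-level token
}
```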

 ### Adding a channel to QA

 The architecture and scenario-helper names for new channel adapters live in [QA overview → Adding a channel](/concepts/qa-e2e-automation#adding-a-channel). The minimum bar: implement the transport runner on the shared `qa-lab` host seam, declare `qaRunners` in the plugin manifest, mount as `openclaw qa <runner>`, and author scenarios under `qa/scenarios/`.

@@ -427,8 +427,7 @@ See [ClawDock](/install/clawdock) for the full helper guide.
 </Accordion>

 <Accordion title="Base image metadata">
-The main Docker runtime image uses `node:24-bookworm-slim` and publishes OCI
-base-image annotations including `org.opencontainers.image.base.name`,
+The main Docker runtime image uses `node:24-bookworm-slim` and includes `tini` as the entrypoint init process (PID 1) to ensure zombie processes are reaped and signals are handled correctly in long-running containers. It publishes OCI base-image annotations including `org.opencontainers.image.base.name`,
 `org.opencontainers.image.source`, and others. The Node base digest is
 refreshed through Dependabot Docker base-image PRs; release builds do not run
 a distro upgrade layer. See

@@ -78,6 +78,8 @@ read_when:
     destination = "/data"
 ```

+The OpenClaw Docker image uses `tini` as its entrypoint. Fly process commands replace Docker `CMD` without replacing `ENTRYPOINT`, so the process still runs under `tini`.
+
 **Key settings:**

 | Setting | Why |

@@ -96,9 +96,6 @@ Computer Use available before a thread starts:
   agents: {
     defaults: {
       model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
     },
   },
 }
@@ -114,9 +111,8 @@ register the bundled Codex marketplace from
 fails. If setup still cannot make the MCP server available, the turn fails
 before the thread starts.

-Existing sessions keep their runtime and Codex thread binding. After changing
-`agentRuntime` or Computer Use config, use `/new` or `/reset` in the affected
-chat before testing.
+After changing Computer Use config, use `/new` or `/reset` in the affected chat
+before testing if an existing Codex thread has already started.

 ## Commands

@@ -50,7 +50,8 @@ First sign in with Codex OAuth if you have not already:
 openclaw models auth login --provider openai-codex
 ```

-Then enable the bundled `codex` plugin and force the Codex runtime:
+Then enable the bundled `codex` plugin and use the canonical OpenAI model ref.
+OpenAI agent turns select the Codex runtime by default:

 ```json5
 {
@@ -64,9 +65,6 @@ Then enable the bundled `codex` plugin and force the Codex runtime:
   agents: {
     defaults: {
       model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
     },
   },
 }
@@ -98,7 +96,7 @@ The bundled `codex` plugin contributes several separate capabilities:

 | Capability | How you use it | What it does |
 | --------------------------------- | --------------------------------------------------- | ----------------------------------------------------------------------------- |
-| Native embedded runtime | `agentRuntime.id: "codex"` | Runs OpenClaw embedded agent turns through Codex app-server. |
+| Native embedded runtime | `openai/gpt-*` agent model refs | Runs OpenClaw embedded agent turns through Codex app-server. |
 | Native chat-control commands | `/codex bind`, `/codex resume`, `/codex steer`, ... | Binds and controls Codex app-server threads from a messaging conversation. |
 | Codex app-server provider/catalog | `codex` internals, surfaced through the harness | Lets the runtime discover and validate app-server models. |
 | Codex media-understanding path | `codex/*` image-model compatibility paths | Runs bounded Codex app-server turns for supported image understanding models. |
@@ -110,7 +108,7 @@ Enabling the plugin makes those capabilities available. It does **not**:
   realtime
 - convert `openai-codex/*` model refs without `openclaw doctor --fix`
 - make ACP/acpx the default Codex path
-- hot-switch existing sessions that already recorded a PI runtime
+- use stale whole-agent or session runtime pins for routing
 - replace OpenClaw channel delivery, session files, auth-profile storage, or
   message routing

@@ -141,35 +139,37 @@ For the plugin hook semantics themselves, see [Plugin hooks](/plugins/hooks)
 and [Plugin guard behavior](/tools/plugin).

 OpenAI agent model refs use the harness by default. New configs should keep
-OpenAI model refs canonical as `openai/gpt-*`; `agentRuntime.id: "codex"` is
-still valid but no longer required for OpenAI agent turns. Legacy `codex/*`
-model refs still auto-select the harness for compatibility, but
+OpenAI model refs canonical as `openai/gpt-*`; provider/model
+`agentRuntime.id: "codex"` is still valid but no longer required for OpenAI
+agent turns. Legacy `codex/*` model refs still auto-select the harness for
+compatibility, but
 runtime-backed legacy provider prefixes are not shown as normal model/provider
 choices.

 If any configured model route is still `openai-codex/*`, `openclaw doctor --fix`
-rewrites it to `openai/*`. For matching agent routes, it sets the agent runtime
-to `codex` and preserves existing `openai-codex` auth profile overrides.
+rewrites it to `openai/*` and preserves existing `openai-codex` auth profile
+overrides. It does not pin the whole agent to `agentRuntime.id: "codex"` because
+canonical OpenAI refs already select the Codex harness automatically.

 ## Route map

 Use this table before changing config:

-| Desired behavior | Model ref | Runtime config | Auth/profile route | Expected status label |
-| ---------------------------------------------------- | -------------------------- | -------------------------------------- | ------------------------------ | ---------------------------- |
-| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-*` | omitted or `agentRuntime.id: "codex"` | Codex OAuth or Codex account | `Runtime: OpenAI Codex` |
-| OpenAI API-key auth for agent models | `openai/gpt-*` | omitted or `agentRuntime.id: "codex"` | `openai-codex` API-key profile | `Runtime: OpenAI Codex` |
-| Legacy config that needs doctor repair | `openai-codex/gpt-*` | repaired to `codex` | Existing configured auth | Recheck after `doctor --fix` |
-| Mixed providers with conservative auto mode | provider-specific refs | `agentRuntime.id: "auto"` | Per selected provider | Depends on selected runtime |
-| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | ACP backend auth | ACP task/session status |
+| Desired behavior | Model ref | Runtime config | Auth/profile route | Expected status label |
+| ---------------------------------------------------- | -------------------------- | -------------------------------------------------------- | ------------------------------ | ---------------------------- |
+| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-*` | omitted or provider/model `agentRuntime.id: "codex"` | Codex OAuth or Codex account | `Runtime: OpenAI Codex` |
+| OpenAI API-key auth for agent models | `openai/gpt-*` | omitted or provider/model `agentRuntime.id: "codex"` | `openai-codex` API-key profile | `Runtime: OpenAI Codex` |
+| Legacy config that needs doctor repair | `openai-codex/gpt-*` | preserved or automatic | Existing configured auth | Recheck after `doctor --fix` |
+| Mixed providers with conservative auto mode | provider-specific refs | omitted unless a provider/model needs a runtime override | Per selected provider | Depends on selected runtime |
+| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | ACP backend auth | ACP task/session status |

 The important split is provider versus runtime:

 - `openai-codex/*` is a legacy route that doctor rewrites.
-- `agentRuntime.id: "codex"` requires the Codex harness and fails closed if it
-  is unavailable.
-- `agentRuntime.id: "auto"` lets registered harnesses claim matching provider
-  routes; OpenAI agent refs resolve to Codex instead of PI.
+- Provider/model `agentRuntime.id: "codex"` requires the Codex harness and fails
+  closed if it is unavailable.
+- Provider/model `agentRuntime.id: "auto"` lets registered harnesses claim
+  matching provider routes; OpenAI agent refs resolve to Codex instead of PI.
 - `/codex ...` answers "which native Codex conversation should this chat bind
   or control?"
 - ACP answers "which external harness process should acpx launch?"
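As a sketch of that split in config (shapes follow the examples elsewhere in these docs; treat it as illustrative rather than a drop-in file):

```json5
{
  models: {
    providers: {
      // provider-scoped policy: OpenAI agent turns must use the Codex harness
      openai: { agentRuntime: { id: "codex" } },
    },
  },
  agents: {
    defaults: {
      models: {
        // model-scoped policy: one canonical ref routed through a CLI backend
        "anthropic/claude-opus-4-7": { agentRuntime: { id: "claude-cli" } },
      },
    },
  },
}
```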
@@ -188,13 +188,14 @@ Treat `openai-codex/*` as legacy config that doctor should rewrite:

 GPT-5.5 can appear on both direct OpenAI API-key and Codex subscription routes
 when your account exposes them. Use `openai/gpt-5.5` with the Codex app-server
-harness for native Codex runtime, or `openai/gpt-5.5` without a Codex runtime
-override for direct API-key traffic.
+harness for native Codex runtime. For direct API-key traffic through PI, opt in
+with provider/model `agentRuntime.id: "pi"` and a normal `openai` auth profile.

 Legacy `codex/gpt-*` refs remain accepted as compatibility aliases. Doctor
 compatibility migration rewrites legacy runtime refs to canonical model refs
 and records the runtime policy separately. New native app-server harness configs
-should use `openai/gpt-*` plus `agentRuntime.id: "codex"`.
+should use `openai/gpt-*`; explicit provider/model `agentRuntime.id: "codex"`
+is only needed when you want the policy written down.

 `agents.defaults.imageModel` follows the same prefix split. Use
 `openai/gpt-*` for the normal OpenAI route and `codex/gpt-*` when image
@@ -213,27 +214,13 @@ in `auto` mode, each plugin candidate's support result.

 `openclaw doctor` warns when configured model refs or persisted session route
 state still use `openai-codex/*`. `openclaw doctor --fix` rewrites those routes
-to:
+to `openai/<model>`. Canonical OpenAI agent refs already select the native Codex
+harness, so doctor does not pin the whole agent to Codex.

-- `openai/<model>`
-- `agentRuntime.id: "codex"`
-
-The `codex` route forces the native Codex harness. PI runtime config is not
-allowed for OpenAI agent model turns.
+Doctor also repairs stale persisted session pins across discovered agent session
+stores so old conversations do not stay wedged on the removed route.

-Harness selection is not a live session control. When an embedded turn runs,
-OpenClaw records the selected harness id on that session and keeps using it for
-later turns in the same session id. Change `agentRuntime` config or
-`OPENCLAW_AGENT_RUNTIME` when you want future sessions to use another harness;
-use `/new` or `/reset` to start a fresh session before switching an existing
-conversation between PI and Codex. This avoids replaying one transcript through
-two incompatible native session systems.
-
-Legacy sessions created before harness pins are treated as PI-pinned once they
-have transcript history. Use `/new` or `/reset` to opt that conversation into
-Codex after changing config.
+Whole-session and whole-agent runtime pins are legacy state. Runtime selection
+now comes from provider/model policy; `openclaw doctor --fix` removes stale
+session pins and old whole-agent runtime config so they do not mask the selected
+provider/model route.

 `/status` shows the effective model runtime. The default PI harness appears as
 `Runtime: OpenClaw Pi Default`, and the Codex app-server harness appears as
@@ -274,22 +261,21 @@ Codex behavior-shaping lane without duplicating `AGENTS.md`.

 ## Add Codex alongside other models

-Do not set `agentRuntime.id: "codex"` globally if the same agent should freely switch
-between Codex and non-Codex provider models. A forced runtime applies to every
-embedded turn for that agent or session. If you select an Anthropic model while
-that runtime is forced, OpenClaw still tries the Codex harness and fails closed
-instead of silently routing that turn through PI.
+Do not set a whole-agent runtime. Whole-agent runtime pins are legacy and
+ignored, and they were the source of mixed-provider traps after upgrades. Keep
+runtime policy on the provider or model that needs it.

 Use one of these shapes instead:

-- Put Codex on a dedicated agent with `agentRuntime.id: "codex"`.
-- Keep the default agent on `agentRuntime.id: "auto"` and PI fallback for normal mixed
-  provider usage.
+- Use `openai/gpt-*` for OpenAI agent turns; Codex is selected by default.
 - Put runtime overrides on `models.providers.<provider>.agentRuntime` or on a
   model entry such as `agents.defaults.models["anthropic/claude-opus-4-7"].agentRuntime`.
 - Use legacy `codex/*` refs only for compatibility. New configs should prefer
-  `openai/*` plus an explicit Codex runtime policy.
+  `openai/*`; add an explicit Codex runtime policy only when you need to make
+  the provider/model rule strict.

-For example, this keeps the default agent on normal automatic selection and
-adds a separate Codex agent:
+For example, this keeps mixed-provider routing ergonomic while using OpenAI
+through Codex by default and Claude through PI:

 ```json5
 {
@@ -302,9 +288,7 @@ adds a separate Codex agent:
   },
   agents: {
     defaults: {
-      agentRuntime: {
-        id: "auto",
-      },
       model: "anthropic/claude-opus-4-6",
     },
     list: [
       {
@@ -316,9 +300,6 @@ adds a separate Codex agent:
         id: "codex",
         name: "Codex",
         model: "openai/gpt-5.5",
-        agentRuntime: {
-          id: "codex",
-        },
       },
     ],
   },
@@ -355,45 +336,36 @@ routing.

 ## Codex-only deployments

-Force the Codex harness when you need to prove that every embedded agent turn
-uses Codex. Explicit plugin runtimes fail closed and are never silently retried
-through PI:
+For OpenAI agent turns, `openai/gpt-*` already resolves to Codex. If you need a
+strict written policy, put it on the OpenAI provider or model. Explicit plugin
+runtimes fail closed and are never silently retried through PI:

 ```json5
 {
-  agents: {
-    defaults: {
-      model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
-    },
-  },
+  models: {
+    providers: {
+      openai: {
+        agentRuntime: {
+          id: "codex",
+        },
+      },
+    },
+  },
+  agents: { defaults: { model: "openai/gpt-5.5" } },
 }
 ```

-Environment override:
-
-```bash
-OPENCLAW_AGENT_RUNTIME=codex openclaw gateway run
-```
-
 With Codex forced, OpenClaw fails early if the Codex plugin is disabled, the
 app-server is too old, or the app-server cannot start.

 ## Per-agent Codex

-You can make one agent Codex-only while the default agent keeps normal
-auto-selection:
+You can make one agent Codex-strict while the default agent keeps normal
+selection by using a per-agent model runtime override:

 ```json5
 {
   agents: {
     defaults: {
-      agentRuntime: {
-        id: "auto",
-      },
     },
     list: [
       {
         id: "main",
@@ -404,8 +376,12 @@ auto-selection:
         id: "codex",
         name: "Codex",
         model: "openai/gpt-5.5",
-        agentRuntime: {
-          id: "codex",
-        },
+        models: {
+          "openai/gpt-5.5": {
+            agentRuntime: {
+              id: "codex",
+            },
+          },
+        },
       },
     ],
@@ -827,9 +803,6 @@ Minimal config:
   agents: {
     defaults: {
       model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
     },
   },
 }
@@ -876,12 +849,18 @@ Codex-only harness validation:

 ```json5
 {
+  models: {
+    providers: {
+      openai: {
+        agentRuntime: {
+          id: "codex",
+        },
+      },
+    },
+  },
   agents: {
     defaults: {
       model: "openai/gpt-5.5",
-      agentRuntime: {
-        id: "codex",
-      },
     },
   },
   plugins: {
@@ -1185,16 +1164,16 @@ understanding continue to use the matching provider/model settings such as
 ## Troubleshooting

 **Codex does not appear as a normal `/model` provider:** that is expected for
-new configs. Select an `openai/gpt-*` model with
-`agentRuntime.id: "codex"` (or a legacy `codex/*` ref), enable
+new configs. Select an `openai/gpt-*` model, enable
 `plugins.entries.codex.enabled`, and check whether `plugins.allow` excludes
-`codex`.
+`codex`. Legacy `codex/*` refs remain compatibility aliases, not normal model
+provider choices.

-**OpenClaw uses PI instead of Codex:** `agentRuntime.id: "auto"` can still use PI as the
-compatibility backend when no Codex harness claims the run. Set
-`agentRuntime.id: "codex"` to force Codex selection while testing. A
-forced Codex runtime fails instead of falling back to PI. Once Codex app-server
-is selected, its failures surface directly.
+**OpenClaw uses PI instead of Codex:** make sure the model ref is `openai/gpt-*`
+on the official OpenAI provider and that the Codex plugin is installed/enabled.
+If you need a strict policy while testing, set provider/model
+`agentRuntime.id: "codex"`. A forced Codex runtime fails instead of falling back
+to PI. Once Codex app-server is selected, its failures surface directly.

 **The app-server is rejected:** upgrade Codex so the app-server handshake
 reports version `0.125.0` or newer. Same-version prereleases or build-suffixed
@@ -1207,11 +1186,11 @@ or disable discovery.
 **WebSocket transport fails immediately:** check `appServer.url`, `authToken`,
 and that the remote app-server speaks the same Codex app-server protocol version.

-**A non-Codex model uses PI:** that is expected unless you forced
-`agentRuntime.id: "codex"` for that agent or selected a legacy
-`codex/*` ref. Plain `openai/gpt-*` and other provider refs stay on their normal
-provider path in `auto` mode. If you force `agentRuntime.id: "codex"`, every embedded
-turn for that agent must be a Codex-supported OpenAI model.
+**A non-Codex model uses PI:** that is expected unless provider/model runtime
+policy routes it to another harness. Plain non-OpenAI provider refs stay on
+their normal provider path in `auto` mode. If you force
+`agentRuntime.id: "codex"` on a provider or model, matching embedded turns must
+be Codex-supported OpenAI models.

 **Computer Use is installed but tools do not run:** check
 `/codex computer-use status` from a fresh session. If a tool reports

@@ -103,14 +103,11 @@ export default definePluginEntry({

 OpenClaw chooses a harness after provider/model resolution:

-1. An existing session's recorded harness id wins, so config/env changes do not
-   hot-switch that transcript to another runtime.
-2. `OPENCLAW_AGENT_RUNTIME=<id>` forces a registered harness with that id for
-   sessions that are not already pinned.
-3. `OPENCLAW_AGENT_RUNTIME=pi` forces the built-in PI harness.
-4. `OPENCLAW_AGENT_RUNTIME=auto` asks registered harnesses if they support the
-   resolved provider/model.
-5. If no registered harness matches, OpenClaw uses PI unless PI fallback is
+1. Model-scoped runtime policy wins.
+2. Provider-scoped runtime policy comes next.
+3. `auto` asks registered harnesses if they support the resolved
+   provider/model.
+4. If no registered harness matches, OpenClaw uses PI unless PI fallback is
    disabled.

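A sketch of that precedence, assuming both scopes are set (illustrative, not a recommended config): the model-scoped entry wins for its own ref, the provider-scoped policy covers the provider's other refs, and unmatched routes fall through `auto` to PI.

```json5
{
  models: {
    providers: {
      // provider scope: OpenAI refs route to the Codex harness...
      openai: { agentRuntime: { id: "codex" } },
    },
  },
  agents: {
    defaults: {
      models: {
        // ...but model scope wins for this one ref, which stays on PI
        "openai/gpt-5.5": { agentRuntime: { id: "pi" } },
      },
    },
  },
}
```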
 Plugin harness failures surface as run failures. In `auto` mode, PI fallback is
@@ -119,11 +116,10 @@ provider/model. Once a plugin harness has claimed a run, OpenClaw does not
 replay that same turn through PI because that can change auth/runtime semantics
 or duplicate side effects.

-The selected harness id is persisted with the session id after an embedded run.
-Legacy sessions created before harness pins are treated as PI-pinned once they
-have transcript history. Use a new/reset session when changing between PI and a
-native plugin harness. `/status` shows non-default harness ids such as `codex`
-next to `Fast`; PI stays hidden because it is the default compatibility path.
+Whole-session and whole-agent runtime pins are ignored by selection. That
+includes stale session `agentHarnessId` values, `agents.defaults.agentRuntime`,
+`agents.list[].agentRuntime`, and `OPENCLAW_AGENT_RUNTIME`. `/status` shows the
+effective runtime selected from the provider/model route.
 If the selected harness is surprising, enable `agents/harness` debug logging and
 inspect the gateway's structured `agent harness selected` record. It includes
 the selected harness id, selection reason, runtime/fallback policy, and, in
@@ -141,8 +137,7 @@ OpenClaw. The harness then claims that provider in `supports(...)`.

 The bundled Codex plugin follows this pattern:

-- preferred user model refs: `openai/gpt-5.5` plus
-  `agentRuntime.id: "codex"`
+- preferred user model refs: `openai/gpt-5.5`
 - compatibility refs: legacy `codex/gpt-*` refs remain accepted, but new
   configs should not use them as normal provider/model refs
 - harness id: `codex`
@@ -151,10 +146,9 @@ The bundled Codex plugin follows this pattern:
 - app-server request: OpenClaw sends the bare model id to Codex and lets the
   harness talk to the native app-server protocol

-The Codex plugin is additive. Plain `openai/gpt-*` refs continue to use the
-normal OpenClaw provider path unless you force the Codex harness with
-`agentRuntime.id: "codex"`. Older `codex/gpt-*` refs still select the
-Codex provider and harness for compatibility.
+The Codex plugin is additive. Plain `openai/gpt-*` agent refs on the official
+OpenAI provider select the Codex harness by default. Older `codex/gpt-*` refs
+still select the Codex provider and harness for compatibility.

 For operator setup, model prefix examples, and Codex-only configs, see
 [Codex Harness](/plugins/codex-harness).
@@ -202,74 +196,94 @@ aliases for the native harness.
 When this mode runs, Codex owns the native thread id, resume behavior,
 compaction, and app-server execution. OpenClaw still owns the chat channel,
 visible transcript mirror, tool policy, approvals, media delivery, and session
-selection. Use `agentRuntime.id: "codex"` when you need to prove that only the
-Codex app-server path can claim the run. Explicit plugin runtimes fail closed;
-Codex app-server selection failures and runtime failures are not retried through
-PI.
+selection. Use provider/model `agentRuntime.id: "codex"` when you need to prove
+that only the Codex app-server path can claim the run. Explicit plugin runtimes
+fail closed; Codex app-server selection failures and runtime failures are not
+retried through PI.

 ## Runtime strictness

-By default, OpenClaw runs embedded agents with OpenClaw Pi. In `auto` mode,
-registered plugin harnesses can claim a provider/model pair, and PI handles the
-turn when none match. Use an explicit plugin runtime such as
+By default, OpenClaw uses `auto` provider/model runtime policy: registered
+plugin harnesses can claim a provider/model pair, and PI handles the turn when
+none match. OpenAI agent refs on the official OpenAI provider default to Codex.
+Use an explicit provider/model plugin runtime such as
 `agentRuntime.id: "codex"` when missing harness selection should fail instead
 of routing through PI. Selected plugin harness failures always fail hard. This
-does not block an explicit `agentRuntime.id: "pi"` or
-`OPENCLAW_AGENT_RUNTIME=pi`.
+does not block an explicit provider/model `agentRuntime.id: "pi"`.

 For Codex-only embedded runs:

 ```json
 {
+  "models": {
+    "providers": {
+      "openai": {
+        "agentRuntime": {
+          "id": "codex"
+        }
+      }
+    }
+  },
   "agents": {
     "defaults": {
-      "model": "openai/gpt-5.5",
-      "agentRuntime": {
-        "id": "codex"
-      }
+      "model": "openai/gpt-5.5"
     }
   }
 }
 ```

+If you want a CLI backend for one canonical model, put the runtime on that
+model entry:
+
+```json
+{
+  "agents": {
+    "defaults": {
+      "model": "anthropic/claude-opus-4-7",
+      "models": {
+        "anthropic/claude-opus-4-7": {
+          "agentRuntime": {
+            "id": "claude-cli"
+          }
+        }
+      }
+    }
+  }
+}
+```

-If you want any registered plugin harness to claim matching models and otherwise
-use PI, set `id: "auto"`:
-
-```json
-{
-  "agents": {
-    "defaults": {
-      "agentRuntime": {
-        "id": "auto"
-      }
-    }
-  }
-}
-```
-
-Per-agent overrides use the same shape:
+Per-agent overrides use the same model-scoped shape:

```json
|
||||
{
|
||||
"agents": {
|
||||
"defaults": {
|
||||
"agentRuntime": { "id": "auto" }
|
||||
},
|
||||
"list": [
|
||||
{
|
||||
"id": "codex-only",
|
||||
"model": "openai/gpt-5.5",
|
||||
"agentRuntime": { "id": "codex" }
|
||||
"models": {
|
||||
"openai/gpt-5.5": {
|
||||
"agentRuntime": { "id": "codex" }
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Legacy whole-agent runtime examples like this are ignored:

```json
{
  "agents": {
    "defaults": {
      "agentRuntime": {
        "id": "codex"
      }
    }
  }
}
```

With an explicit plugin runtime, a session fails early when the requested

@@ -106,7 +106,11 @@ Anthropic's current public docs:
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-7" },
      models: {
        "anthropic/claude-opus-4-7": {
          agentRuntime: { id: "claude-cli" },
        },
      },
    },
  },
}
@@ -114,7 +118,7 @@ Anthropic's current public docs:

Legacy `claude-cli/claude-opus-4-7` model refs still work for
compatibility, but new config should keep provider/model selection as
`anthropic/*` and put the execution backend in provider/model runtime policy.

<Tip>
If you want the clearest billing path, use an Anthropic API key instead. OpenClaw also supports subscription-style options from [OpenAI Codex](/providers/openai), [Qwen Cloud](/providers/qwen), [MiniMax](/providers/minimax), and [Z.AI / GLM](/providers/glm).

@@ -256,6 +256,49 @@ openclaw models list

</Accordion>

<Accordion title="Service tier">
Some Bedrock models support a `service_tier` parameter to optimize for cost
or latency. The following tiers are available:

| Tier | Description |
|------|-------------|
| `default` | Standard Bedrock tier |
| `flex` | Discounted processing for workloads that can tolerate longer latency |
| `priority` | Prioritized processing for latency-sensitive workloads |
| `reserved` | Reserved capacity for steady-state workloads |

Set `serviceTier` (or `service_tier`) via `agents.defaults.params` for
Bedrock model requests, or per-model in
`agents.defaults.models["<model-key>"].params`:

```json5
{
  agents: {
    defaults: {
      params: {
        serviceTier: "flex", // applies to all models
      },
      models: {
        "amazon-bedrock/mistral.mistral-large-3-675b-instruct": {
          params: {
            serviceTier: "priority", // per-model override
          },
        },
      },
    },
  },
}
```

Valid values are `default`, `flex`, `priority`, and `reserved`. Not all
models support all tiers; if an unsupported tier is requested, Bedrock
returns a validation error. The error message can be misleading: it may say
"The provided model identifier is invalid" rather than naming the
unsupported service tier. If you see this error, check whether the model
supports the requested tier.

</Accordion>

<Accordion title="Claude Opus 4.7 temperature">
Bedrock rejects the `temperature` parameter for Claude Opus 4.7. OpenClaw
omits `temperature` automatically for any Opus 4.7 Bedrock ref, including

@@ -13,7 +13,7 @@ Gemini Grounding.
- Provider: `google`
- Auth: `GEMINI_API_KEY` or `GOOGLE_API_KEY`
- API: Google Gemini API
- Runtime option: provider/model `agentRuntime.id: "google-gemini-cli"`
  reuses Gemini CLI OAuth while keeping model refs canonical as `google/*`.

## Getting started

@@ -36,9 +36,9 @@ changing config.
| ---------------------------------------------------- | ------------------------------------------------------- | --------------------------------------------------------------------- |
| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-5.5` | Default OpenAI agent setup. Sign in with `openai-codex` auth. |
| Direct API-key billing for agent models | `openai/gpt-5.5` plus an `openai-codex` API-key profile | Use `auth.order.openai-codex` to prefer that profile. |
| Direct API-key billing through explicit PI | `openai/gpt-5.5` plus provider/model runtime `pi` | Select a normal `openai` API-key profile. |
| Latest ChatGPT Instant API alias | `openai/chat-latest` | Direct API-key only. Moving alias for experiments, not the default. |
| ChatGPT/Codex subscription auth through explicit PI | `openai/gpt-5.5` plus provider/model runtime `pi` | Select an `openai-codex` auth profile for the compatibility route. |
| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
| Transparent-background images | `openai/gpt-image-1.5` | Use `outputFormat=png` or `webp` and `openai.background=transparent`. |

@@ -46,14 +46,14 @@ changing config.

The names are similar but not interchangeable:

| Name you see | Layer | Meaning |
| --------------------------------------- | ------------------- | ------------------------------------------------------------------------------------------------- |
| `openai` | Provider prefix | Canonical OpenAI model route; agent turns use the Codex runtime. |
| `openai-codex` | Auth/profile prefix | OpenAI Codex OAuth/subscription auth profile provider. |
| `codex` plugin | Plugin | Bundled OpenClaw plugin that provides native Codex app-server runtime and `/codex` chat controls. |
| provider/model `agentRuntime.id: codex` | Agent runtime | Force the native Codex app-server harness for matching embedded turns. |
| `/codex ...` | Chat command set | Bind/control Codex app-server threads from a conversation. |
| `runtime: "acp", agentId: "codex"` | ACP session route | Explicit fallback path that runs Codex through ACP/acpx. |

This means a config can intentionally contain both `openai/*` model refs and
`openai-codex` auth profiles. `openclaw doctor --fix` rewrites legacy
@@ -79,20 +79,20 @@ explicit runtime config.

## OpenClaw feature coverage

| OpenAI capability | OpenClaw surface | Status |
| ------------------------- | -------------------------------------------------------------------------------- | ------------------------------------------------------ |
| Chat / Responses | `openai/<model>` model provider | Yes |
| Codex subscription models | `openai/<model>` with `openai-codex` OAuth | Yes |
| Legacy Codex model refs | `openai-codex/<model>` | Repaired by doctor to `openai/<model>` |
| Codex app-server harness | `openai/<model>` with omitted runtime or provider/model `agentRuntime.id: codex` | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | `image_generate` | Yes |
| Videos | `video_generate` | Yes |
| Text-to-speech | `messages.tts.provider: "openai"` / `tts` | Yes |
| Batch speech-to-text | `tools.media.audio` / media understanding | Yes |
| Streaming speech-to-text | Voice Call `streaming.provider: "openai"` | Yes |
| Realtime voice | Voice Call `realtime.provider: "openai"` / Control UI Talk | Yes |

## Memory embeddings

@@ -152,9 +152,9 @@ Choose your preferred auth method and follow the setup steps.

| Model ref | Runtime config | Route | Auth |
| ---------------------- | -------------------------- | --------------------------- | ---------------- |
| `openai/gpt-5.5` | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
| `openai/gpt-5.4-mini` | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
| `openai/gpt-5.5` | provider/model `agentRuntime.id: "pi"` | PI embedded runtime | `openai` profile or selected `openai-codex` profile |

<Note>
`openai/*` agent models use the Codex app-server harness. To use API-key
@@ -239,8 +239,8 @@ Choose your preferred auth method and follow the setup steps.

| Model ref | Runtime config | Route | Auth |
|-----------|----------------|-------|------|
| `openai/gpt-5.5` | omitted / provider/model `agentRuntime.id: "codex"` | Native Codex app-server harness | Codex sign-in or selected `openai-codex` profile |
| `openai/gpt-5.5` | provider/model `agentRuntime.id: "pi"` | PI embedded runtime with internal Codex-auth transport | Selected `openai-codex` profile |
| `openai-codex/gpt-5.5` | repaired by doctor | Legacy route rewritten to `openai/gpt-5.5` | Existing `openai-codex` profile |

<Warning>
@@ -265,7 +265,6 @@ Choose your preferred auth method and follow the setup steps.
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.5" },
    },
  },
}
@@ -284,7 +283,7 @@ Choose your preferred auth method and follow the setup steps.
openclaw models status
openclaw models auth list --provider openai-codex
openclaw config get agents.defaults.model --json
openclaw config get models.providers.openai.agentRuntime --json
```

For a specific agent, add `--agent <id>`:

@@ -367,7 +366,7 @@ Choose your preferred auth method and follow the setup steps.
## Native Codex app-server auth

The native Codex app-server harness uses `openai/*` model refs plus omitted
runtime config or provider/model `agentRuntime.id: "codex"`, but its auth is
still account-based. OpenClaw selects auth in this order:

@@ -504,7 +503,7 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov

OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai/gpt-5.5`, legacy pre-repair refs such as `openai-codex/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.

The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions routed through Codex keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.

The GPT-5 contribution adds a tagged behavior contract for persona persistence, execution safety, tool discipline, output shape, completion checks, and verification. Channel-specific reply and silent-message behavior stays in the shared OpenClaw system prompt and outbound delivery policy. The GPT-5 guidance is always enabled for matching models. The friendly interaction-style layer is separate and configurable.

@@ -912,7 +911,7 @@ the Server-side compaction accordion below.
- Injects `context_management: [{ type: "compaction", compact_threshold: ... }]`
- Default `compact_threshold`: 70% of `contextWindow` (or `80000` when unavailable)

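The default-threshold rule in the bullets above can be sketched as a one-liner (the function name is illustrative, not an OpenClaw export):

```typescript
// Default compaction threshold: 70% of the model's context window,
// falling back to 80000 tokens when the window size is unknown.
function defaultCompactThreshold(contextWindow?: number): number {
  return contextWindow ? Math.floor(contextWindow * 0.7) : 80_000;
}

console.log(defaultCompactThreshold(200_000)); // 140000
console.log(defaultCompactThreshold()); // 80000
```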
This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured by OpenAI's default agent route or provider/model runtime policy.

<Tabs>
<Tab title="Enable explicitly">

@@ -20,7 +20,7 @@ Codex has two OpenClaw routes:

| Route | Config/command | Setup page |
| -------------------------- | ------------------------------------------------------ | --------------------------------------- |
| Native Codex app-server | `/codex ...`, `openai/gpt-*` agent refs | [Codex harness](/plugins/codex-harness) |
| Explicit Codex ACP adapter | `/acp spawn codex`, `runtime: "acp", agentId: "codex"` | This page |

Prefer the native route unless you explicitly need ACP/acpx behavior.

@@ -19,8 +19,8 @@ Each ACP session spawn is tracked as a [background task](/automation/tasks).

<Note>
**ACP is the external-harness path, not the default Codex path.** The
native Codex app-server plugin owns `/codex ...` controls and the default
`openai/gpt-*` embedded runtime for agent turns; ACP owns
`/acp ...` controls and `sessions_spawn({ runtime: "acp" })` sessions.

If you want Codex or Claude Code to connect as an external MCP client

@@ -391,8 +391,8 @@ even when source overlay mounts are present.
  re-enable plugins before running doctor cleanup if you want stale ids removed
- OpenAI-family Codex routes keep separate plugin boundaries:
  `openai-codex/*` belongs to the OpenAI plugin, while the bundled Codex
  app-server plugin is selected by canonical `openai/*` agent refs, explicit
  provider/model `agentRuntime.id: "codex"`, or legacy `codex/*` model refs

## Troubleshooting runtime hooks

@@ -133,7 +133,13 @@ function selectCurrentSessionLease(params: {
  if (params.rootPid) {
    return candidates.find((lease) => lease.rootPid === params.rootPid);
  }
  let selected: AcpxProcessLease | undefined;
  for (const lease of candidates) {
    if (!selected || lease.startedAt > selected.startedAt) {
      selected = lease;
    }
  }
  return selected;
}

function createResetAwareSessionStore(

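The lease selection above replaces a sort-and-take-first with a single max-scan, avoiding an array copy and the Node 20+ `Array.prototype.toSorted` requirement. A standalone sketch of the same selection logic, with a hypothetical lease shape standing in for `AcpxProcessLease`:

```typescript
// Hypothetical stand-in for the real lease type.
type Lease = { rootPid?: number; startedAt: number };

function selectLease(candidates: Lease[], rootPid?: number): Lease | undefined {
  if (rootPid) {
    return candidates.find((lease) => lease.rootPid === rootPid);
  }
  // Single O(n) pass: pick the lease with the newest startedAt.
  let selected: Lease | undefined;
  for (const lease of candidates) {
    if (!selected || lease.startedAt > selected.startedAt) {
      selected = lease;
    }
  }
  return selected;
}

const leases = [
  { rootPid: 10, startedAt: 100 },
  { rootPid: 11, startedAt: 300 },
  { rootPid: 12, startedAt: 200 },
];
console.log(selectLease(leases)?.rootPid); // 11 (newest lease wins)
console.log(selectLease(leases, 12)?.startedAt); // 200 (rootPid match wins)
```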
@@ -160,34 +160,28 @@ function makeAppInferenceProfileDescriptor(modelId: string): never {
  } as never;
}

/**
 * Call wrapStreamFn and then invoke the returned stream function, capturing
 * the payload via the onPayload hook that streamWithPayloadPatch installs.
 */
async function callWrappedStream(
  provider: RegisteredProviderPlugin,
  modelId: string,
  modelDescriptor: never,
  config?: OpenClawConfig,
  extraParams?: Record<string, unknown>,
  payload: Record<string, unknown> = {},
): Promise<Record<string, unknown>> {
  const wrapped = provider.wrapStreamFn?.({
    provider: "amazon-bedrock",
    modelId,
    config,
    streamFn: spyStreamFn,
    ...(extraParams ? { extraParams } : {}),
  } as never);

  // The wrapped stream returns the options object (from spyStreamFn).
  // For guardrail-wrapped streams, streamWithPayloadPatch intercepts onPayload,
  // so we need to invoke onPayload on the returned options to trigger the patch.
  const result = wrapped?.(modelDescriptor, { messages: [] } as never, {}) as unknown as Record<
    string,
    unknown
  >;

  // If onPayload was installed by streamWithPayloadPatch, call it to apply the patch.
  if (typeof result?.onPayload === "function") {
    await (result.onPayload as (p: Record<string, unknown>, model: unknown) => Promise<unknown>)(
      payload,
      modelDescriptor,
@@ -719,6 +713,89 @@ describe("amazon-bedrock provider plugin", () => {
    });
  });

  describe("service tier", () => {
    const CONVERSE_MODEL_DESCRIPTOR = {
      api: "bedrock-converse-stream",
      provider: "amazon-bedrock",
      id: NON_ANTHROPIC_MODEL,
    } as never;

    it("injects serviceTier for valid camelCase value ('flex')", async () => {
      const provider = await registerWithConfig(undefined);
      const result = await callWrappedStream(
        provider,
        NON_ANTHROPIC_MODEL,
        CONVERSE_MODEL_DESCRIPTOR,
        runtimePluginConfig(undefined),
        { serviceTier: "flex" },
      );
      expect(result._capturedPayload).toMatchObject({ serviceTier: { type: "flex" } });
    });

    it("injects serviceTier for valid snake_case value ('priority')", async () => {
      const provider = await registerWithConfig(undefined);
      const result = await callWrappedStream(
        provider,
        NON_ANTHROPIC_MODEL,
        CONVERSE_MODEL_DESCRIPTOR,
        runtimePluginConfig(undefined),
        { service_tier: "priority" },
      );
      expect(result._capturedPayload).toMatchObject({ serviceTier: { type: "priority" } });
    });

    it("injects serviceTier for all valid tier names", async () => {
      const provider = await registerWithConfig(undefined);
      for (const tier of ["flex", "priority", "default", "reserved"] as const) {
        const result = await callWrappedStream(
          provider,
          NON_ANTHROPIC_MODEL,
          CONVERSE_MODEL_DESCRIPTOR,
          runtimePluginConfig(undefined),
          { serviceTier: tier },
        );
        expect(result._capturedPayload).toMatchObject({ serviceTier: { type: tier } });
      }
    });

    it("does not inject serviceTier when value is invalid", async () => {
      const provider = await registerWithConfig(undefined);
      const result = await callWrappedStream(
        provider,
        NON_ANTHROPIC_MODEL,
        CONVERSE_MODEL_DESCRIPTOR,
        runtimePluginConfig(undefined),
        { serviceTier: "not-a-tier" },
      );
      expect(result).not.toHaveProperty("_capturedPayload");
    });

    it("does not overwrite caller-provided serviceTier in payload", async () => {
      const provider = await registerWithConfig(undefined);
      const result = await callWrappedStream(
        provider,
        NON_ANTHROPIC_MODEL,
        CONVERSE_MODEL_DESCRIPTOR,
        runtimePluginConfig(undefined),
        { serviceTier: "flex" },
        { serviceTier: { type: "priority" } },
      );
      expect(result._capturedPayload).toMatchObject({ serviceTier: { type: "priority" } });
    });

    it("skips injection for non-converse API models", async () => {
      const provider = await registerWithConfig(undefined);
      const result = await callWrappedStream(
        provider,
        NON_ANTHROPIC_MODEL,
        { api: "openai-completions", provider: "amazon-bedrock", id: NON_ANTHROPIC_MODEL } as never,
        runtimePluginConfig(undefined),
        { serviceTier: "flex" },
      );
      expect(result).not.toHaveProperty("_capturedPayload");
    });
  });

  describe("application inference profile cache point injection", () => {
    /**
     * Invoke wrapStreamFn with a payload containing system/messages, then

@@ -34,6 +34,43 @@ type AmazonBedrockPluginConfig = {
  guardrail?: GuardrailConfig;
};

const BEDROCK_SERVICE_TIER_VALUES = ["flex", "priority", "default", "reserved"] as const;
type BedrockServiceTier = (typeof BEDROCK_SERVICE_TIER_VALUES)[number];

function isBedrockServiceTier(value: string): value is BedrockServiceTier {
  return BEDROCK_SERVICE_TIER_VALUES.some((tier) => tier === value);
}

function resolveBedrockServiceTier(
  extraParams: Record<string, unknown> | undefined,
  warn: (message: string) => void,
): BedrockServiceTier | undefined {
  const raw = extraParams?.serviceTier ?? extraParams?.service_tier;
  if (typeof raw !== "string") {
    return undefined;
  }
  const normalized = raw.trim().toLowerCase();
  if (isBedrockServiceTier(normalized)) {
    return normalized;
  }
  warn(`ignoring invalid Bedrock service_tier param: ${raw}`);
  return undefined;
}

function createBedrockServiceTierWrapper(
  underlying: StreamFn,
  serviceTier: BedrockServiceTier,
): StreamFn {
  return (model, context, options) => {
    if (model.api !== "bedrock-converse-stream") {
      return underlying(model, context, options);
    }
    return streamWithPayloadPatch(underlying, model, context, options, (payloadObj) => {
      payloadObj.serviceTier ??= { type: serviceTier };
    });
  };
}

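As a standalone illustration of the normalization the resolver above performs (trim, lowercase, allow-list, snake_case/camelCase aliasing; these names are local to this sketch, not the plugin's exports):

```typescript
const TIERS = ["flex", "priority", "default", "reserved"] as const;
type Tier = (typeof TIERS)[number];

// Accept either camelCase or snake_case, normalize, and reject unknown tiers.
function resolveTier(params?: Record<string, unknown>): Tier | undefined {
  const raw = params?.serviceTier ?? params?.service_tier;
  if (typeof raw !== "string") return undefined;
  const normalized = raw.trim().toLowerCase();
  return (TIERS as readonly string[]).includes(normalized)
    ? (normalized as Tier)
    : undefined;
}

console.log(resolveTier({ service_tier: " Flex " })); // "flex"
console.log(resolveTier({ serviceTier: "not-a-tier" })); // undefined
```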
function createGuardrailWrapStreamFn(
  innerWrapStreamFn: (ctx: { modelId: string; streamFn?: StreamFn }) => StreamFn | null | undefined,
  guardrailConfig: GuardrailConfig,
@@ -484,13 +521,20 @@ export function registerAmazonBedrockPlugin(api: OpenClawPluginApi): void {
    },
    resolveConfigApiKey: ({ env }) => resolveBedrockConfigApiKey(env),
    ...anthropicByModelReplayHooks,
    wrapStreamFn: ({ modelId, config, model, streamFn, thinkingLevel, extraParams }) => {
      const currentGuardrail = resolveCurrentPluginConfig(config)?.guardrail;
      // Apply cache + guardrail wrapping.
      let wrapped =
        (currentGuardrail?.guardrailIdentifier && currentGuardrail?.guardrailVersion
          ? createGuardrailWrapStreamFn(baseWrapStreamFn, currentGuardrail)({ modelId, streamFn })
          : baseWrapStreamFn({ modelId, streamFn })) ?? undefined;

      const serviceTier = resolveBedrockServiceTier(extraParams, (message) =>
        api.logger.warn(message),
      );
      if (serviceTier && wrapped) {
        wrapped = createBedrockServiceTierWrapper(wrapped, serviceTier);
      }

      const region = resolveBedrockRegion(config) ?? extractRegionFromBaseUrl(model?.baseUrl);
      const mayNeedCacheInjection =
        isBedrockAppInferenceProfile(modelId) && !piAiWouldInjectCachePoints(modelId);

@@ -117,7 +117,9 @@ describe("anthropic provider policy public artifact", () => {
    if (!profile) {
      throw new Error("Expected Anthropic policy profile");
    }
    expect(
      profile.levels.map((level) => level.id).filter((id) => id === "xhigh" || id === "max"),
    ).toEqual([]);
  });

  it("does not expose Anthropic thinking profiles for unrelated providers", () => {

@@ -290,8 +290,11 @@ describe("gateway bonjour advertiser", () => {

      await started.stop();
      childProcessModule.exec('arp -a | findstr /C:"---"', () => {});
      const afterStopCallback = execMock.mock.calls.at(-1)?.[1];
      if (typeof afterStopCallback !== "function") {
        throw new Error("expected restored exec callback overload");
      }
      afterStopCallback(null, "", "");
    } finally {
      childProcessModule.exec = originalExec;
    }

@@ -96,6 +96,14 @@ function effectiveSpawnCommand(call: unknown[] | undefined): unknown {
  return command;
}

function mockExpiredLaunchPollingClock(): void {
  let now = 1_000_000;
  vi.spyOn(Date, "now").mockImplementation(() => {
    now += 1_000;
    return now;
  });
}

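The helper above makes every `Date.now()` call advance one fake second, so elapsed-wallclock timeout checks expire deterministically. The same idea without vitest, as a hypothetical standalone sketch:

```typescript
// Each call to now() advances the fake clock by stepMs, so polling loops
// that compare Date.now() against a deadline terminate without real waiting.
function makeAdvancingClock(start: number, stepMs: number): () => number {
  let now = start;
  return () => {
    now += stepMs;
    return now;
  };
}

const now = makeAdvancingClock(1_000_000, 1_000);
console.log(now()); // 1001000
console.log(now()); // 1002000
```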
async function withMockChromeCdpServer(params: {
  wsPath: string;
  onConnection?: (wss: WebSocketServer) => void;
@@ -507,15 +515,16 @@ describe("chrome.ts internal", () => {
    let spawnCalls = 0;
    const firstProc = makeFakeProc();
    const secondProc = makeFakeProc();
    mockExpiredLaunchPollingClock();
    spawnMock.mockImplementation(() => {
      spawnCalls += 1;
      if (spawnCalls === 1) {
        void Promise.resolve().then(() => {
          firstProc.stderr.emit(
            "data",
            Buffer.from("The profile appears to be in use by another Chromium process"),
          );
        });
        return firstProc;
      }
      cdpReachable = true;
@@ -566,7 +575,10 @@ describe("chrome.ts internal", () => {
      const fakeProc = makeFakeProc();
      spawnMock.mockReturnValue(fakeProc);
      // Leak some stderr into the buffer so the hint renders.
      void Promise.resolve().then(() =>
        fakeProc.stderr.emit("data", Buffer.from("crash dump\n")),
      );
      mockExpiredLaunchPollingClock();

      // fetch always fails → isChromeReachable returns false every poll.
      vi.stubGlobal("fetch", vi.fn().mockRejectedValue(new Error("ECONNREFUSED")));

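The switch from `setTimeout(..., 0)` to `void Promise.resolve().then(...)` in these tests moves the fake stderr emission from the macrotask queue to the microtask queue, so it fires before any timer-driven polling runs. A minimal sketch of that ordering guarantee:

```typescript
// Microtasks (Promise callbacks) run before macrotasks (setTimeout),
// so a Promise-scheduled emit is already visible to the first timer tick.
async function observeOrdering(): Promise<string[]> {
  const order: string[] = [];
  setTimeout(() => order.push("macrotask"), 0);
  void Promise.resolve().then(() => order.push("microtask"));
  await new Promise((resolve) => setTimeout(resolve, 5));
  return order;
}

observeOrdering().then((order) => console.log(order.join(","))); // "microtask,macrotask"
```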
@@ -587,38 +599,32 @@ describe("chrome.ts internal", () => {
    });

    it("uses the configured local launch timeout while waiting for CDP discovery", async () => {
      const executablePath = path.join(tmpDir, "chrome");
      await fsp.writeFile(executablePath, "");
      const existsSync = fs.existsSync.bind(fs);
      vi.spyOn(fs, "existsSync").mockImplementation((p) => {
        const s = String(p);
        if (s.endsWith("Local State") || s.endsWith("Preferences")) {
          return true;
        }
        return existsSync(p);
      });
      const fakeProc = makeFakeProc();
      spawnMock.mockReturnValue(fakeProc);
      mockExpiredLaunchPollingClock();
      vi.stubGlobal("fetch", vi.fn().mockRejectedValue(new Error("ECONNREFUSED")));

      const resolved = {
        ...makeResolved(),
        executablePath,
        localLaunchTimeoutMs: 1,
      };
      const profile = makeProfile(55556);

      await expect(launchOpenClawChrome(resolved, profile)).rejects.toThrow(
        /Failed to start Chrome CDP/,
      );
      expect(fakeProc.kill).toHaveBeenCalledWith("SIGKILL");
    });
  });

@@ -997,9 +1003,12 @@ describe("chrome.ts internal", () => {
      const fakeProc = makeFakeProc();
      spawnMock.mockImplementation(() => {
        // Synthesize stderr data shortly after spawn.
        void Promise.resolve().then(() =>
          fakeProc.stderr.emit("data", Buffer.from("chrome crash log\n")),
        );
        return fakeProc;
      });
      mockExpiredLaunchPollingClock();
      vi.stubGlobal("fetch", vi.fn().mockRejectedValue(new Error("ECONNREFUSED")));
      const profile = {
        name: "openclaw-stderr",
@@ -1036,6 +1045,7 @@ describe("chrome.ts internal", () => {
      return false;
    });
    spawnMock.mockImplementation(() => makeFakeProc());
+    mockExpiredLaunchPollingClock();
    vi.stubGlobal("fetch", vi.fn().mockRejectedValue(new Error("ECONNREFUSED")));
    const profile = {
      name: "openclaw-mac",
@@ -1064,13 +1074,9 @@ describe("chrome.ts internal", () => {

  it("breaks out of the bootstrap prefs-wait loop as soon as both files exist", async () => {
    // Covers the `if (exists(localStatePath) && exists(preferencesPath)) break;` branch.
-    // Use a wallclock flag that the mock checks each call so the loop
-    // iterates (awaiting its 100ms setTimeout) once with prefs-absent,
-    // then the flag flips and the next iteration hits the break.
-    let prefsVisible = false;
-    setTimeout(() => {
-      prefsVisible = true;
-    }, 50);
+    // The first prefs probe makes bootstrap necessary; subsequent probes
+    // make both prefs files visible so the polling loop breaks immediately.
+    let prefsProbeCount = 0;
    vi.spyOn(fs, "existsSync").mockImplementation((p) => {
      const s = String(p);
      if (
@@ -1081,7 +1087,8 @@ describe("chrome.ts internal", () => {
        return true;
      }
      if (s.endsWith("Local State") || s.endsWith("Preferences")) {
-        return prefsVisible;
+        prefsProbeCount += 1;
+        return prefsProbeCount > 1;
      }
      return false;
    });
@@ -1136,17 +1143,15 @@ describe("chrome.ts internal", () => {
    });
    const bootstrapProc = makeFakeProc();
    const runtimeProc = makeFakeProc();
    bootstrapProc.kill = vi.fn((_sig?: string) => {
      bootstrapProc.killed = true;
      bootstrapProc.exitCode = 0;
      return true;
    });
    let callCount = 0;
    spawnMock.mockImplementation(() => {
      callCount += 1;
-      if (callCount === 1) {
-        // Set exitCode shortly after spawn so the exit-wait loop breaks.
-        setTimeout(() => {
-          bootstrapProc.exitCode = 0;
-        }, 25);
-        return bootstrapProc;
-      }
-      return runtimeProc;
+      return callCount === 1 ? bootstrapProc : runtimeProc;
    });
    await withMockChromeCdpServer({
      wsPath: "/devtools/browser/EXIT_BREAK",

@@ -59,6 +59,7 @@ import {
  DEFAULT_OPENCLAW_BROWSER_PROFILE_NAME,
} from "./constants.js";
import { BrowserProfileUnavailableError } from "./errors.js";
+import { ensureOutputDirectory } from "./output-directories.js";
import { DEFAULT_DOWNLOAD_DIR } from "./paths.js";

const log = createSubsystemLogger("browser").child("chrome");
@@ -423,7 +424,7 @@ export async function launchOpenClawChrome(

  const userDataDir = resolveOpenClawUserDataDir(profile.name);
  fs.mkdirSync(userDataDir, { recursive: true });
-  fs.mkdirSync(DEFAULT_DOWNLOAD_DIR, { recursive: true });
+  await ensureOutputDirectory(DEFAULT_DOWNLOAD_DIR);

  const needsDecorate = !isProfileDecorated(
    userDataDir,

@@ -316,9 +316,13 @@ describe("browser client", () => {
      browserScreenshotAction("http://127.0.0.1:18791", { targetId: "t-default" }),
    ).resolves.toMatchObject({ ok: true, path: "/tmp/a.png" });

-    expect(calls.some((c) => c.url.endsWith("/tabs"))).toBe(true);
-    expect(calls.some((c) => c.url.endsWith("/doctor"))).toBe(true);
-    expect(calls.some((c) => c.url.endsWith("/doctor?profile=openclaw&deep=true"))).toBe(true);
+    expect(calls.map((call) => call.url)).toEqual(
+      expect.arrayContaining([
+        expect.stringMatching(/\/tabs$/),
+        expect.stringMatching(/\/doctor$/),
+        expect.stringMatching(/\/doctor\?profile=openclaw&deep=true$/),
+      ]),
+    );
    const open = calls.find((c) => c.url.endsWith("/tabs/open"));
    expect(open?.init?.method).toBe("POST");

@@ -101,7 +101,9 @@ describe("buildBrowserDoctorReport", () => {
    });

    expect(report.ok).toBe(true);
-    expect(report.checks.some((check) => check.status === "warn")).toBe(true);
+    expect(
+      report.checks.filter((check) => check.status === "warn").map((check) => check.id),
+    ).toEqual(["managed-executable", "display", "linux-sandbox"]);
    expect(report.checks.find((check) => check.id === "display")).toMatchObject({
      summary: "No DISPLAY or WAYLAND_DISPLAY is set while headed mode is selected (config)",
    });

@@ -1,12 +1,12 @@
-import fs from "node:fs/promises";
import { writeExternalFileWithinRoot } from "../sdk-security-runtime.js";
+import { ensureOutputDirectory } from "./output-directories.js";

export async function writeViaSiblingTempPath(params: {
  rootDir: string;
  targetPath: string;
  writeTemp: (tempPath: string) => Promise<void>;
}): Promise<void> {
-  await fs.mkdir(params.rootDir, { recursive: true });
+  await ensureOutputDirectory(params.rootDir);
  await writeExternalFileWithinRoot({
    rootDir: params.rootDir,
    path: params.targetPath,

44
extensions/browser/src/browser/output-directories.test.ts
Normal file
@@ -0,0 +1,44 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { ensureOutputDirectory } from "./output-directories.js";

async function withTempDir<T>(run: (tempDir: string) => Promise<T>): Promise<T> {
  const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-output-dir-test-"));
  try {
    return await run(tempDir);
  } finally {
    await fs.rm(tempDir, { recursive: true, force: true });
  }
}

describe("ensureOutputDirectory", () => {
  it("creates nested missing output directories", async () => {
    await withTempDir(async (tempDir) => {
      const outputDir = path.join(tempDir, "reports", "downloads");

      await ensureOutputDirectory(outputDir);

      const stat = await fs.stat(outputDir);
      expect(stat.isDirectory()).toBe(true);
    });
  });

  it.runIf(process.platform !== "win32")(
    "rejects symlinked output directory ancestors",
    async () => {
      await withTempDir(async (tempDir) => {
        const outsideDir = path.join(tempDir, "outside");
        await fs.mkdir(outsideDir);
        const symlinkDir = path.join(tempDir, "downloads");
        await fs.symlink(outsideDir, symlinkDir);

        await expect(ensureOutputDirectory(path.join(symlinkDir, "nested"))).rejects.toThrow(
          /symlink|output directory/i,
        );
        await expect(fs.access(path.join(outsideDir, "nested"))).rejects.toThrow();
      });
    },
  );
});
35
extensions/browser/src/browser/output-directories.ts
Normal file
@@ -0,0 +1,35 @@
import fs from "node:fs/promises";
import path from "node:path";
import { ensureAbsoluteDirectory } from "../sdk-security-runtime.js";

async function resolveSystemDirectoryAlias(dirPath: string): Promise<string> {
  // macOS exposes /tmp and /var as fixed system symlinks into /private.
  // Canonicalize only those roots before rejecting symlinks below them.
  for (const aliasRoot of ["/tmp", "/var"]) {
    if (dirPath !== aliasRoot && !dirPath.startsWith(`${aliasRoot}${path.sep}`)) {
      continue;
    }
    try {
      const stat = await fs.lstat(aliasRoot);
      if (!stat.isSymbolicLink()) {
        return dirPath;
      }
      return path.join(await fs.realpath(aliasRoot), path.relative(aliasRoot, dirPath));
    } catch {
      return dirPath;
    }
  }
  return dirPath;
}

export async function ensureOutputDirectory(dirPath: string): Promise<void> {
  const result = await ensureAbsoluteDirectory(
    await resolveSystemDirectoryAlias(path.resolve(dirPath)),
    {
      scopeLabel: "output directory",
    },
  );
  if (!result.ok) {
    throw result.error;
  }
}
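Aside on the alias handling in `resolveSystemDirectoryAlias` above: on macOS, `/tmp` is a symlink to `/private/tmp`, so a blanket "no symlinked ancestors" rule would reject every temp-dir output path. The rewrite step can be sketched as pure string logic; this is a hedged standalone sketch, not the module's code, and `realRoot` is a stand-in for what `fs.realpath` would return for the alias root:

```typescript
import path from "node:path";

// Rewrite dirPath so a known system alias root (e.g. "/tmp") is replaced by
// its canonical target (e.g. "/private/tmp"), leaving other paths untouched.
// Hypothetical helper mirroring the alias step above, with the realpath
// result passed in instead of read from the filesystem.
function rewriteAliasRoot(dirPath: string, aliasRoot: string, realRoot: string): string {
  if (dirPath !== aliasRoot && !dirPath.startsWith(`${aliasRoot}${path.sep}`)) {
    return dirPath; // not under the alias root: leave as-is
  }
  return path.join(realRoot, path.relative(aliasRoot, dirPath));
}

console.log(rewriteAliasRoot("/tmp/openclaw/downloads", "/tmp", "/private/tmp"));
console.log(rewriteAliasRoot("/home/user/downloads", "/tmp", "/private/tmp"));
```

Only the alias root itself is canonicalized; symlinks below it are still rejected by the directory check that runs afterwards.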
26
extensions/browser/src/browser/output-files.ts
Normal file
@@ -0,0 +1,26 @@
import path from "node:path";
import { writeExternalFileWithinRoot } from "../sdk-security-runtime.js";
import { ensureOutputDirectory } from "./output-directories.js";

export async function writeExternalFileWithinOutputRoot(params: {
  rootDir?: string;
  path: string;
  write: (filePath: string) => Promise<void>;
}): Promise<string> {
  const outputPath = params.path.trim();
  if (!outputPath) {
    throw new Error("output path is required");
  }

  const rootDir = params.rootDir
    ? path.resolve(params.rootDir)
    : path.dirname(path.resolve(outputPath));
  await ensureOutputDirectory(rootDir);

  const result = await writeExternalFileWithinRoot({
    rootDir,
    path: outputPath,
    write: params.write,
  });
  return result.path;
}
@@ -0,0 +1,124 @@
import type { Page } from "playwright-core";
import { afterEach, describe, expect, it, vi } from "vitest";
import { SsrFBlockedError } from "../infra/net/ssrf.js";
import {
  assertBrowserNavigationRedirectChainAllowed,
  assertBrowserNavigationResultAllowed,
} from "./navigation-guard.js";
import { assertPageNavigationCompletedSafely } from "./pw-session.js";

vi.mock("./navigation-guard.js", async (importOriginal) => {
  const actual = await importOriginal<Record<string, unknown>>();
  return {
    ...actual,
    assertBrowserNavigationRedirectChainAllowed: vi.fn(async () => {}),
    assertBrowserNavigationResultAllowed: vi.fn(async () => {}),
  };
});

const mockedRedirectChain = vi.mocked(assertBrowserNavigationRedirectChainAllowed);
const mockedResultAllowed = vi.mocked(assertBrowserNavigationResultAllowed);

afterEach(() => {
  mockedRedirectChain.mockReset();
  mockedRedirectChain.mockImplementation(async () => {});
  mockedResultAllowed.mockReset();
  mockedResultAllowed.mockImplementation(async () => {});
});

function fakePage(url = "https://blocked.example/admin"): {
  page: Page;
  close: ReturnType<typeof vi.fn>;
} {
  const close = vi.fn(async () => {});
  const page = {
    url: vi.fn(() => url),
    close,
  } as unknown as Page;
  return { page, close };
}

describe("assertPageNavigationCompletedSafely", () => {
  it("does not close the tab when a read-only caller hits an SSRF-blocked URL (response: null)", async () => {
    // A read-only caller (snapshot/screenshot/interactions) passes response: null
    // and must never lose the user's tab when the policy guard rejects.
    mockedResultAllowed.mockRejectedValueOnce(new SsrFBlockedError("blocked by policy"));

    const { page, close } = fakePage();

    await expect(
      assertPageNavigationCompletedSafely({
        cdpUrl: "http://127.0.0.1:18792",
        page,
        response: null,
        ssrfPolicy: { allowPrivateNetwork: false },
        targetId: "tab-1",
      }),
    ).rejects.toBeInstanceOf(SsrFBlockedError);

    expect(close).not.toHaveBeenCalled();
  });

  it("does not close the tab when a navigate caller hits an SSRF-blocked URL (response: non-null)", async () => {
    // Even when the helper is invoked with a real Response (i.e. on the
    // navigate path), the close decision now belongs to the caller. The
    // helper must only quarantine + rethrow; the caller's try/catch is
    // responsible for closing if it owns the navigation lifecycle.
    mockedResultAllowed.mockRejectedValueOnce(new SsrFBlockedError("blocked by policy"));

    const { page, close } = fakePage();
    const response = { request: () => undefined } as unknown as Parameters<
      typeof assertPageNavigationCompletedSafely
    >[0]["response"];

    await expect(
      assertPageNavigationCompletedSafely({
        cdpUrl: "http://127.0.0.1:18792",
        page,
        response,
        ssrfPolicy: { allowPrivateNetwork: false },
        targetId: "tab-1",
      }),
    ).rejects.toBeInstanceOf(SsrFBlockedError);

    expect(close).not.toHaveBeenCalled();
  });

  it("rethrows non-policy errors without touching the tab", async () => {
    const boom = new Error("transient playwright error");
    mockedResultAllowed.mockRejectedValueOnce(boom);

    const { page, close } = fakePage();

    await expect(
      assertPageNavigationCompletedSafely({
        cdpUrl: "http://127.0.0.1:18792",
        page,
        response: null,
        ssrfPolicy: { allowPrivateNetwork: false },
        targetId: "tab-1",
      }),
    ).rejects.toBe(boom);

    expect(close).not.toHaveBeenCalled();
  });

  it("returns silently when both guards pass", async () => {
    const { page, close } = fakePage("https://allowed.example/");

    await expect(
      assertPageNavigationCompletedSafely({
        cdpUrl: "http://127.0.0.1:18792",
        page,
        response: null,
        ssrfPolicy: { allowPrivateNetwork: false },
        targetId: "tab-1",
      }),
    ).resolves.toBeUndefined();

    expect(close).not.toHaveBeenCalled();
    expect(mockedResultAllowed).toHaveBeenCalledWith(
      expect.objectContaining({ url: "https://allowed.example/" }),
    );
  });
});
@@ -138,11 +138,6 @@ describe("pw-session role refs cache", () => {
describe("pw-session ensurePageState", () => {
  it("stores unmanaged downloads under unique managed paths", async () => {
    const { page, handlers } = fakePage();
-    const mkdirActual = fs.mkdir.bind(fs);
-    const mkdirSpy = vi.spyOn(fs, "mkdir").mockImplementation(async (target, options) => {
-      await mkdirActual(target, options);
-      return undefined;
-    });
    ensurePageState(page);

    const saveAsA = vi.fn(async (outPath: string) => {
@@ -175,7 +170,6 @@ describe("pw-session ensurePageState", () => {
    expect(saveAsB.mock.calls[0]?.[0]).not.toBe(managedPathB);
    await expect(fs.readFile(managedPathA ?? "", "utf8")).resolves.toBe("download-a");
    await expect(fs.readFile(managedPathB ?? "", "utf8")).resolves.toBe("download-b");
-    expect(mkdirSpy).toHaveBeenCalledWith(DEFAULT_DOWNLOAD_DIR, { recursive: true });
  });

  it("suppresses unmanaged download save rejections until path is awaited", async () => {

@@ -854,16 +854,20 @@ function isSubframeDocumentNavigationRequest(page: Page, request: Request): bool
  }
}

-function isPolicyDenyNavigationError(err: unknown): boolean {
+export function isPolicyDenyNavigationError(err: unknown): boolean {
  return err instanceof SsrFBlockedError || err instanceof InvalidBrowserNavigationUrlError;
}

-async function closeBlockedNavigationTarget(opts: {
+// Mark a page (and its CDP target id when resolvable) as blocked so subsequent
+// OpenClaw operations short-circuit instead of re-running the SSRF check on a
+// page we have already proven is non-compliant. This is a pure bookkeeping
+// step; it does NOT close the tab. Read-only paths can call this safely on a
+// user-owned tab without losing the user's content.
+async function quarantineBlockedTarget(opts: {
  cdpUrl: string;
  page: Page;
  targetId?: string;
}): Promise<void> {
+  // Quarantine the concrete page first; then persist by target id when available.
  markPageRefBlocked(opts.cdpUrl, opts.page);
  const resolvedTargetId = await pageTargetId(opts.page).catch(() => null);
  const fallbackTargetId = normalizeOptionalString(opts.targetId) ?? "";
@@ -871,9 +875,24 @@ async function closeBlockedNavigationTarget(opts: {
  if (targetIdToBlock) {
    markTargetBlocked(opts.cdpUrl, targetIdToBlock);
  }
}

+// Quarantine and close a tab that OpenClaw itself navigated to a blocked URL.
+// Only callers that own the navigation lifecycle (gotoPageWithNavigationGuard
+// and the navigate-style entry points that wrap it) may invoke this — closing
+// a tab is a destructive action that must not happen on user-owned tabs from
+// read-only operations like snapshot/screenshot/interactions.
+export async function closeBlockedNavigationTarget(opts: {
+  cdpUrl: string;
+  page: Page;
+  targetId?: string;
+}): Promise<void> {
+  await quarantineBlockedTarget(opts);
+  await opts.page.close().catch(() => {});
+}
+
+// On policy denial: quarantines and rethrows (never closes).
+// Navigate-style callers catch the rethrow and close via closeBlockedNavigationTarget.
export async function assertPageNavigationCompletedSafely(
  opts: {
    cdpUrl: string;
@@ -896,7 +915,7 @@ export async function assertPageNavigationCompletedSafely(
    });
  } catch (err) {
    if (isPolicyDenyNavigationError(err)) {
-      await closeBlockedNavigationTarget({
+      await quarantineBlockedTarget({
        cdpUrl: opts.cdpUrl,
        page: opts.page,
        targetId: opts.targetId,
@@ -1340,14 +1359,27 @@ export async function createPageViaPlaywright(
      throw err;
    }
  }
-  await assertPageNavigationCompletedSafely({
-    cdpUrl: opts.cdpUrl,
-    page,
-    response,
-    ssrfPolicy: opts.ssrfPolicy,
-    browserProxyMode: opts.browserProxyMode,
-    targetId: createdTargetId ?? undefined,
-  });
+  // OpenClaw owns this newly-created tab: if the post-navigation safety
+  // check trips, close the tab we just spawned.
+  try {
+    await assertPageNavigationCompletedSafely({
+      cdpUrl: opts.cdpUrl,
+      page,
+      response,
+      ssrfPolicy: opts.ssrfPolicy,
+      browserProxyMode: opts.browserProxyMode,
+      targetId: createdTargetId ?? undefined,
+    });
+  } catch (err) {
+    if (isPolicyDenyNavigationError(err)) {
+      await closeBlockedNavigationTarget({
+        cdpUrl: opts.cdpUrl,
+        page,
+        targetId: createdTargetId ?? undefined,
+      });
+    }
+    throw err;
+  }

  // Get the targetId for this page

@@ -7,6 +7,7 @@ const pageState = vi.hoisted(() => ({

const sessionMocks = vi.hoisted(() => ({
  assertPageNavigationCompletedSafely: vi.fn(async () => {}),
+  closeBlockedNavigationTarget: vi.fn(async () => {}),
  ensurePageState: vi.fn(() => ({})),
  forceDisconnectPlaywrightForTarget: vi.fn(async () => {}),
  getPageForTargetId: vi.fn(async () => {
@@ -16,6 +17,7 @@ const sessionMocks = vi.hoisted(() => ({
    return pageState.page;
  }),
  gotoPageWithNavigationGuard: vi.fn(async () => null),
+  isPolicyDenyNavigationError: vi.fn(() => false),
  refLocator: vi.fn(() => {
    if (!pageState.locator) {
      throw new Error("missing locator");

@@ -2,7 +2,7 @@ import crypto from "node:crypto";
import path from "node:path";
import type { Page } from "playwright-core";
import { resolvePreferredOpenClawTmpDir } from "../infra/tmp-openclaw-dir.js";
-import { writeViaSiblingTempPath } from "./output-atomic.js";
+import { writeExternalFileWithinOutputRoot } from "./output-files.js";
import { DEFAULT_UPLOAD_DIR, resolveStrictExistingPathsWithinRoot } from "./paths.js";
import {
  ensurePageState,
@@ -88,33 +88,22 @@ type DownloadPayload = {
  saveAs?: (outPath: string) => Promise<void>;
};

-async function saveDownloadPayload(download: DownloadPayload, outPath: string) {
+async function saveDownloadPayload(download: DownloadPayload, outPath: string, rootDir?: string) {
  const suggested = download.suggestedFilename?.() || "download.bin";
  const requestedPath = outPath?.trim();
  const resolvedOutPath = path.resolve(requestedPath || buildTempDownloadPath(suggested));

-  if (!requestedPath) {
-    await writeViaSiblingTempPath({
-      rootDir: path.dirname(resolvedOutPath),
-      targetPath: resolvedOutPath,
-      writeTemp: async (tempPath) => {
-        await download.saveAs?.(tempPath);
-      },
-    });
-  } else {
-    await writeViaSiblingTempPath({
-      rootDir: path.dirname(resolvedOutPath),
-      targetPath: resolvedOutPath,
-      writeTemp: async (tempPath) => {
-        await download.saveAs?.(tempPath);
-      },
-    });
-  }
+  const finalPath = await writeExternalFileWithinOutputRoot({
+    rootDir,
+    path: resolvedOutPath,
+    write: async (tempPath) => {
+      await download.saveAs?.(tempPath);
+    },
+  });

  return {
    url: download.url?.() || "",
    suggestedFilename: suggested,
-    path: resolvedOutPath,
+    path: finalPath,
  };
}

@@ -123,13 +112,14 @@ async function awaitDownloadPayload(params: {
  state: ReturnType<typeof ensurePageState>;
  armId: number;
  outPath?: string;
+  rootDir?: string;
}) {
  try {
    const download = (await params.waiter.promise) as DownloadPayload;
    if (params.state.armIdDownload !== params.armId) {
      throw new Error("Download was superseded by another waiter");
    }
-    return await saveDownloadPayload(download, params.outPath ?? "");
+    return await saveDownloadPayload(download, params.outPath ?? "", params.rootDir);
  } catch (err) {
    params.waiter.cancel();
    throw err;
@@ -233,6 +223,7 @@ export async function waitForDownloadViaPlaywright(opts: {
  cdpUrl: string;
  targetId?: string;
  path?: string;
+  rootDir?: string;
  timeoutMs?: number;
}): Promise<{
  url: string;
@@ -247,7 +238,13 @@ export async function waitForDownloadViaPlaywright(opts: {
  const armId = state.armIdDownload;

  const waiter = createPageDownloadWaiter(page, timeout);
-  return await awaitDownloadPayload({ waiter, state, armId, outPath: opts.path });
+  return await awaitDownloadPayload({
+    waiter,
+    state,
+    armId,
+    outPath: opts.path,
+    rootDir: opts.rootDir,
+  });
}

export async function downloadViaPlaywright(opts: {
@@ -255,6 +252,7 @@ export async function downloadViaPlaywright(opts: {
  targetId?: string;
  ref: string;
  path: string;
+  rootDir?: string;
  timeoutMs?: number;
}): Promise<{
  url: string;
@@ -283,7 +281,13 @@ export async function downloadViaPlaywright(opts: {
    } catch (err) {
      throw toAIFriendlyError(err, ref);
    }
-    return await awaitDownloadPayload({ waiter, state, armId, outPath });
+    return await awaitDownloadPayload({
+      waiter,
+      state,
+      armId,
+      outPath,
+      rootDir: opts.rootDir,
+    });
  } catch (err) {
    waiter.cancel();
    throw err;

@@ -144,5 +144,38 @@ describe("pw-tools-core.snapshot navigate guard", () => {
    expect(getPwToolsCoreSessionMocks().assertPageNavigationCompletedSafely).toHaveBeenCalledTimes(
      1,
    );
+    // Navigate-style entry points OWN the navigation lifecycle, so when the
+    // post-navigation safety check rejects with an SSRF policy error the
+    // caller is responsible for closing the tab it just navigated. This is
+    // the counterpart to the read-only paths (snapshot/screenshot/
+    // interactions), which must NOT close the tab on the same error.
+    expect(getPwToolsCoreSessionMocks().closeBlockedNavigationTarget).toHaveBeenCalledTimes(1);
+    expect(getPwToolsCoreSessionMocks().closeBlockedNavigationTarget).toHaveBeenCalledWith({
+      cdpUrl: "http://127.0.0.1:18792",
+      page: expect.anything(),
+      targetId: undefined,
+    });
  });

+  it("does not close the tab when post-navigation rejection is not a policy deny", async () => {
+    // Non-policy errors (e.g. transient playwright failures) must not be
+    // treated as "we navigated to a blocked URL" — the tab stays open.
+    const goto = vi.fn(async () => ({ request: () => undefined }));
+    setPwToolsCoreCurrentPage({
+      goto,
+      url: vi.fn(() => "https://example.com/final"),
+    });
+    getPwToolsCoreSessionMocks().assertPageNavigationCompletedSafely.mockRejectedValueOnce(
+      new Error("transient playwright error"),
+    );
+
+    await expect(
+      mod.navigateViaPlaywright({
+        cdpUrl: "http://127.0.0.1:18792",
+        url: "https://example.com/final",
+      }),
+    ).rejects.toThrow("transient playwright error");
+
+    expect(getPwToolsCoreSessionMocks().closeBlockedNavigationTarget).not.toHaveBeenCalled();
+  });
});

@@ -9,10 +9,12 @@ const formatAriaSnapshot = vi.fn();

vi.mock("./pw-session.js", () => ({
  assertPageNavigationCompletedSafely: vi.fn(),
+  closeBlockedNavigationTarget: vi.fn(),
  ensurePageState,
  forceDisconnectPlaywrightForTarget: vi.fn(),
  getPageForTargetId,
  gotoPageWithNavigationGuard: vi.fn(),
+  isPolicyDenyNavigationError: vi.fn(() => false),
  storeRoleRefsForTarget,
}));

@@ -19,10 +19,12 @@ import {
} from "./pw-role-snapshot.js";
import {
  assertPageNavigationCompletedSafely,
+  closeBlockedNavigationTarget,
  ensurePageState,
  forceDisconnectPlaywrightForTarget,
  getPageForTargetId,
  gotoPageWithNavigationGuard,
+  isPolicyDenyNavigationError,
  storeRoleRefsForTarget,
} from "./pw-session.js";
import { markBackendDomRefsOnPage, withPageScopedCdpClient } from "./pw-session.page-cdp.js";
@@ -378,14 +380,25 @@ export async function navigateViaPlaywright(opts: {
    ensurePageState(page);
    response = await navigate();
  }
-  await assertPageNavigationCompletedSafely({
-    cdpUrl: opts.cdpUrl,
-    page,
-    response,
-    ssrfPolicy: opts.ssrfPolicy,
-    browserProxyMode: opts.browserProxyMode,
-    targetId: opts.targetId,
-  });
+  try {
+    await assertPageNavigationCompletedSafely({
+      cdpUrl: opts.cdpUrl,
+      page,
+      response,
+      ssrfPolicy: opts.ssrfPolicy,
+      browserProxyMode: opts.browserProxyMode,
+      targetId: opts.targetId,
+    });
+  } catch (err) {
+    if (isPolicyDenyNavigationError(err)) {
+      await closeBlockedNavigationTarget({
+        cdpUrl: opts.cdpUrl,
+        page,
+        targetId: opts.targetId,
+      });
+    }
+    throw err;
+  }
  const finalUrl = page.url();
  return { url: finalUrl };
}

@@ -18,6 +18,7 @@ let pageState: {

const sessionMocks = vi.hoisted(() => ({
  assertPageNavigationCompletedSafely: vi.fn(async () => {}),
+  closeBlockedNavigationTarget: vi.fn(async () => {}),
  getPageForTargetId: vi.fn(async () => {
    if (!currentPage) {
      throw new Error("missing page");
@@ -33,6 +34,13 @@ const sessionMocks = vi.hoisted(() => ({
      page: { goto: (url: string, init: { timeout: number }) => Promise<unknown> };
    }) => (await opts.page.goto(opts.url, { timeout: opts.timeoutMs })) ?? null,
  ),
+  // Match by name so mocked errors are recognized without importing real classes.
+  isPolicyDenyNavigationError: vi.fn((err: unknown) => {
+    if (!(err instanceof Error)) {
+      return false;
+    }
+    return err.name === "SsrFBlockedError" || err.name === "InvalidBrowserNavigationUrlError";
+  }),
  restoreRoleRefsForTarget: vi.fn(() => {}),
  storeRoleRefsForTarget: vi.fn(() => {}),
  refLocator: vi.fn(() => {

@@ -137,7 +137,9 @@ describe("pw-tools-core", () => {
    const savedPath = params.saveAs.mock.calls[0]?.[0];
    expect(typeof savedPath).toBe("string");
    expect(savedPath).not.toBe(params.targetPath);
-    expect(path.basename(String(savedPath))).toBe(path.basename(params.targetPath));
+    expect(path.basename(path.dirname(String(savedPath)))).toContain("fs-safe-output");
+    expect(path.basename(String(savedPath))).toContain(path.basename(params.targetPath));
+    expect(path.basename(String(savedPath))).toMatch(/\.part$/);
    expect(await fs.readFile(params.targetPath, "utf8")).toBe(params.content);
    await expect(fs.access(String(savedPath))).rejects.toThrow();
  }
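The `.part` expectations above check the write-then-rename discipline: the payload lands in a sibling temp file that keeps the final basename plus a `.part` suffix, and only a successful rename makes the target path appear. A hedged standalone sketch of that pattern (the helper name `writeViaPartFile` is hypothetical, not the repo's API):

```typescript
import crypto from "node:crypto";
import { promises as fs } from "node:fs";
import os from "node:os";
import path from "node:path";

// Write into a ".part" sibling in the destination directory, then rename.
// A crash leaves an obviously partial sibling rather than a truncated
// target, and the final rename is atomic on POSIX because the temp file
// and the target share a filesystem.
async function writeViaPartFile(targetPath: string, data: string): Promise<void> {
  const dir = path.dirname(targetPath);
  const tempPath = path.join(
    dir,
    `${path.basename(targetPath)}.${crypto.randomBytes(4).toString("hex")}.part`,
  );
  await fs.writeFile(tempPath, data, "utf8");
  await fs.rename(tempPath, targetPath);
}
```

After a successful call the target file holds the full payload and no `.part` sibling remains, which is exactly what `expectAtomicDownloadSave` asserts for the real download path.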
@@ -173,6 +175,39 @@ describe("pw-tools-core", () => {
    });
  });

+  it("creates missing explicit download output parents through the safe output directory path", async () => {
+    await withTempDir(async (tempDir) => {
+      const harness = createDownloadEventHarness();
+      const targetPath = path.join(tempDir, "nested", "deeper", "file.bin");
+
+      const saveAs = vi.fn(async (outPath: string) => {
+        await fs.writeFile(outPath, "nested-content", "utf8");
+      });
+
+      const p = mod.waitForDownloadViaPlaywright({
+        cdpUrl: "http://127.0.0.1:18792",
+        targetId: "T1",
+        path: targetPath,
+        timeoutMs: 1000,
+      });
+
+      await Promise.resolve();
+      harness.expectArmed();
+      harness.trigger({
+        url: () => "https://example.com/file.bin",
+        suggestedFilename: () => "file.bin",
+        saveAs,
+      });
+
+      await p;
+      await expectAtomicDownloadSave({
+        saveAs,
+        targetPath,
+        content: "nested-content",
+      });
+    });
+  });
+
  it("marks explicit download waiters as owning the next download until cleanup", async () => {
    const harness = createDownloadEventHarness();
    const state = sessionMocks.ensurePageState();
@@ -282,8 +317,9 @@ describe("pw-tools-core", () => {
      path.join(path.sep, "tmp", "openclaw-preferred", "downloads"),
    );
-    expect(path.dirname(res.path)).toBe(expectedRootedDownloadsDir);
-    expect(path.basename(outPath)).toBe(path.basename(res.path));
+    const expectedDownloadsTail = `${path.join("tmp", "openclaw-preferred", "downloads")}${path.sep}`;
+    expect(path.dirname(outPath)).not.toBe(expectedRootedDownloadsDir);
+    expect(path.basename(outPath)).toContain(path.basename(res.path));
+    expect(path.basename(outPath)).toMatch(/\.part$/);
    await expect(fs.readFile(res.path, "utf8")).resolves.toBe("download-content");
+    expect(path.normalize(res.path)).toContain(path.normalize(expectedDownloadsTail));
    expect(tmpDirMocks.resolvePreferredOpenClawTmpDir).toHaveBeenCalled();
@@ -296,15 +332,51 @@ describe("pw-tools-core", () => {
|
||||
suggestedFilename: "../../../../etc/passwd",
|
||||
});
|
||||
expect(typeof outPath).toBe("string");
|
||||
expect(path.dirname(res.path)).toBe(
|
||||
expect(path.dirname(outPath)).not.toBe(
|
||||
path.resolve(path.join(path.sep, "tmp", "openclaw-preferred", "downloads")),
|
||||
);
|
||||
expect(path.basename(outPath)).toBe(path.basename(res.path));
|
||||
expect(path.basename(outPath)).toContain(path.basename(res.path));
|
||||
expect(path.basename(outPath)).toMatch(/\.part$/);
|
||||
await expect(fs.readFile(res.path, "utf8")).resolves.toBe("download-content");
|
||||
expect(path.normalize(res.path)).toContain(
|
||||
path.normalize(`${path.join("tmp", "openclaw-preferred", "downloads")}${path.sep}`),
|
||||
);
|
||||
});
|
||||
|
||||
it.runIf(process.platform !== "win32")(
|
||||
"rejects implicit downloads when the output directory is a symlink",
|
||||
async () => {
|
||||
await withTempDir(async (tempDir) => {
|
||||
const outsideDir = path.join(tempDir, "outside");
|
||||
await fs.mkdir(outsideDir, { recursive: true });
|
||||
await fs.symlink(outsideDir, path.join(tempDir, "downloads"));
|
||||
tmpDirMocks.resolvePreferredOpenClawTmpDir.mockReturnValue(tempDir);
|
||||
|
||||
const harness = createDownloadEventHarness();
|
||||
const saveAs = vi.fn(async (outPath: string) => {
|
||||
await fs.writeFile(outPath, "should-not-write", "utf8");
|
||||
});
|
||||
|
||||
const p = mod.waitForDownloadViaPlaywright({
|
||||
cdpUrl: "http://127.0.0.1:18792",
|
||||
targetId: "T1",
|
||||
timeoutMs: 1000,
|
||||
});
|
||||
|
||||
await Promise.resolve();
|
||||
harness.expectArmed();
|
||||
harness.trigger({
|
||||
url: () => "https://example.com/file.bin",
|
||||
suggestedFilename: () => "file.bin",
|
||||
saveAs,
|
||||
});
|
||||
|
||||
await expect(p).rejects.toThrow(/output directory/i);
|
||||
expect(saveAs).not.toHaveBeenCalled();
|
||||
await expect(fs.readdir(outsideDir)).resolves.toEqual([]);
|
||||
});
|
||||
},
|
||||
);
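The `.part` assertions in these download tests encode an atomic-save contract: the file is first written to a `<final>.part` sibling, then renamed into place, so a reader never observes a half-written download. A minimal sketch of that pattern, assuming Node's promise-based `fs` API; `saveAtomically` is an illustrative name, not the plugin's actual helper:

```typescript
import fs from "node:fs/promises";
import path from "node:path";

// Hypothetical helper: write to a ".part" sibling, then rename into place.
// rename() is atomic when source and destination share a filesystem.
async function saveAtomically(
  finalPath: string,
  write: (tmpPath: string) => Promise<void>,
): Promise<void> {
  // Create missing parent directories, as the test above expects.
  await fs.mkdir(path.dirname(finalPath), { recursive: true });
  const tmpPath = `${finalPath}.part`; // matches the /\.part$/ assertions
  try {
    await write(tmpPath);
    await fs.rename(tmpPath, finalPath);
  } catch (err) {
    // Never leave a stray partial file behind on failure.
    await fs.rm(tmpPath, { force: true });
    throw err;
  }
}
```

On failure the `.part` file is removed, which is why the symlink-rejection test can assert the outside directory stays empty.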
it("waits for a matching response and returns its body", async () => {
let responseHandler: ((resp: unknown) => void) | undefined;
const on = vi.fn((event: string, handler: (resp: unknown) => void) => {

@@ -63,6 +63,7 @@ export function registerBrowserAgentActDownloadRoutes(
const result = await pw.waitForDownloadViaPlaywright({
...requestBase,
path: downloadPath,
rootDir: DEFAULT_DOWNLOAD_DIR,
});
res.json({ ok: true, targetId: tab.targetId, download: result });
},
@@ -113,6 +114,7 @@ export function registerBrowserAgentActDownloadRoutes(
...requestBase,
ref,
path: downloadPath,
rootDir: DEFAULT_DOWNLOAD_DIR,
});
res.json({ ok: true, targetId: tab.targetId, download: result });
},

@@ -1,9 +1,9 @@
import fs from "node:fs/promises";
import { ensureOutputDirectory } from "../output-directories.js";
import { pathScope } from "./path-output.js";
import type { BrowserResponse } from "./types.js";

export async function ensureOutputRootDir(rootDir: string): Promise<void> {
await fs.mkdir(rootDir, { recursive: true });
await ensureOutputDirectory(rootDir);
}

export async function resolveWritableOutputPathOrRespond(params: {

@@ -3,6 +3,7 @@ export {
ensurePortAvailable,
extractErrorCode,
formatErrorMessage,
ensureAbsoluteDirectory,
hasProxyEnvConfigured,
isNotFoundPathError,
isPathInside,

@@ -59,9 +59,14 @@ async function withFetchPathTest(
describe("discoverDeepInfraModels", () => {
it("returns static catalog in test environment", async () => {
const models = await discoverDeepInfraModels();
const modelIds = models.map((m) => m.id);
const streamingUsageIncompatibleModelIds = models
.filter((m) => !m.compat?.supportsUsageInStreaming)
.map((m) => m.id);

expect(DEEPINFRA_DEFAULT_MODEL_REF).toBe("deepinfra/deepseek-ai/DeepSeek-V3.2");
expect(models.some((m) => m.id === "deepseek-ai/DeepSeek-V3.2")).toBe(true);
expect(models.every((m) => m.compat?.supportsUsageInStreaming)).toBe(true);
expect(modelIds).toContain("deepseek-ai/DeepSeek-V3.2");
expect(streamingUsageIncompatibleModelIds).toEqual([]);
});

it("fetches DeepInfra's curated LLM catalog and parses model metadata", async () => {
@@ -144,7 +149,7 @@ describe("discoverDeepInfraModels", () => {

await withFetchPathTest(mockFetch, async () => {
const models = await discoverDeepInfraModels();
expect(models.some((m) => m.id === "deepseek-ai/DeepSeek-V3.2")).toBe(true);
expect(models.map((m) => m.id)).toContain("deepseek-ai/DeepSeek-V3.2");
});
});


@@ -142,6 +142,9 @@ describeLive("deepseek plugin live", () => {
};
let capturedPayload: Record<string, unknown> | undefined;
const streamFn = createDeepSeekV4ThinkingWrapper(streamSimple, "high");
if (!streamFn) {
throw new Error("expected DeepSeek V4 thinking stream wrapper");
}

const stream = streamFn(resolveDeepSeekV4LiveModel(), context, {
apiKey: DEEPSEEK_KEY,
@@ -202,6 +205,9 @@ describeLive("deepseek plugin live", () => {
};
let capturedPayload: Record<string, unknown> | undefined;
const streamFn = createDeepSeekV4ThinkingWrapper(streamSimple, "high");
if (!streamFn) {
throw new Error("expected DeepSeek V4 thinking stream wrapper");
}

const stream = streamFn(resolveDeepSeekV4LiveModel(), context, {
apiKey: DEEPSEEK_KEY,

@@ -32,7 +32,7 @@ describe("discord channel message adapter", () => {

const proveText = async () => {
resetDiscordOutboundMocks(hoisted);
const result = await adapter!.send!.text!({
const result = await adapter.send!.text!({
cfg: {},
to: "channel:123456",
text: "hello",
@@ -49,7 +49,7 @@ describe("discord channel message adapter", () => {

const proveMedia = async () => {
resetDiscordOutboundMocks(hoisted);
const result = await adapter!.send!.media!({
const result = await adapter.send!.media!({
cfg: {},
to: "channel:123456",
text: "caption",
@@ -69,7 +69,7 @@ describe("discord channel message adapter", () => {

const provePayload = async () => {
resetDiscordOutboundMocks(hoisted);
const result = await adapter!.send!.payload!({
const result = await adapter.send!.payload!({
cfg: {},
to: "channel:123456",
text: "payload",
@@ -86,7 +86,7 @@ describe("discord channel message adapter", () => {

const proveReplyThreadSilent = async () => {
resetDiscordOutboundMocks(hoisted);
const result = await adapter!.send!.text!({
const result = await adapter.send!.text!({
cfg: {},
to: "channel:parent-1",
text: "threaded",
@@ -110,7 +110,7 @@ describe("discord channel message adapter", () => {

await verifyChannelMessageAdapterCapabilityProofs({
adapterName: "discordMessageAdapter",
adapter: adapter!,
adapter: adapter,
proofs: {
text: proveText,
media: proveMedia,
@@ -119,7 +119,7 @@ describe("discord channel message adapter", () => {
replyTo: proveReplyThreadSilent,
thread: proveReplyThreadSilent,
messageSendingHooks: () => {
expect(adapter!.send!.text).toBeTypeOf("function");
expect(adapter.send!.text).toBeTypeOf("function");
},
},
});

@@ -501,7 +501,7 @@ describe("discordPlugin outbound", () => {
includeApplication: true,
}),
);
expect(statusPatches.some((patch) => "bot" in patch || "application" in patch)).toBe(false);
expect(statusPatches.filter((patch) => "bot" in patch || "application" in patch)).toEqual([]);

resolveProbe({
ok: true,

@@ -44,7 +44,8 @@ describe("createDiscordRestClient proxy support", () => {
options?: { fetch?: typeof fetch };
};

expect(requestClient.options?.fetch).toEqual(expect.any(Function));
expect(makeProxyFetchMock).toHaveBeenCalledWith("http://127.0.0.1:8080");
expect(requestClient.options?.fetch).toBe(makeProxyFetchMock.mock.results[0]?.value);
expect(requestClient.customFetch).toBe(requestClient.options?.fetch);
});

@@ -119,7 +120,7 @@ describe("createDiscordRestClient proxy support", () => {
};

expect(makeProxyFetchMock).toHaveBeenCalledWith("http://[::1]:8080");
expect(requestClient.options?.fetch).toEqual(expect.any(Function));
expect(requestClient.options?.fetch).toBe(makeProxyFetchMock.mock.results[0]?.value);
});

it("serializes multipart media with undici-compatible FormData for proxy fetches", async () => {

@@ -50,7 +50,7 @@ describe("discord wildcard component registration ids", () => {
const components = createWildcardComponents();
const customIds = components.map((component) => component.customId);

expect(customIds.every((id) => id !== "*")).toBe(true);
expect(customIds.filter((id) => id === "*")).toEqual([]);
expect(new Set(customIds).size).toBe(customIds.length);
});


@@ -1,3 +1,4 @@
import type { DiscordAccountConfig } from "openclaw/plugin-sdk/config-types";
import { createNonExitingRuntimeEnv } from "openclaw/plugin-sdk/plugin-test-runtime";
import { beforeEach, describe, expect, it, vi } from "vitest";
import * as resolveChannelsModule from "../resolve-channels.js";
@@ -44,6 +45,7 @@ describe("resolveDiscordAllowlistConfig", () => {
},
fetcher: vi.fn() as unknown as typeof fetch,
runtime,
discordConfig: { dangerouslyAllowNameMatching: true } as DiscordAccountConfig,
});

expect(result.allowFrom).toEqual(["111", "*"]);
@@ -76,6 +78,7 @@ describe("resolveDiscordAllowlistConfig", () => {
},
fetcher: vi.fn() as unknown as typeof fetch,
runtime,
discordConfig: { dangerouslyAllowNameMatching: true } as DiscordAccountConfig,
});

const logs = (runtime.log as ReturnType<typeof vi.fn>).mock.calls
@@ -135,6 +138,7 @@ describe("resolveDiscordAllowlistConfig", () => {
},
fetcher: vi.fn() as unknown as typeof fetch,
runtime,
discordConfig: {} as DiscordAccountConfig,
});

const logs = (runtime.log as ReturnType<typeof vi.fn>).mock.calls
@@ -146,4 +150,68 @@ describe("resolveDiscordAllowlistConfig", () => {
"1456350064065904867/1456744319972282449 (guild:Friends of the Crustacean 🦞🤝; channel:maintainers)",
);
});

it("keeps user allowlist names unresolved unless name matching is enabled", async () => {
const runtime = createNonExitingRuntimeEnv();
const result = await resolveDiscordAllowlistConfig({
token: "token",
allowFrom: ["Alice", "111", "*"],
guildEntries: {
"*": {
users: ["Bob", "999"],
channels: {
"*": {
users: ["Carol", "888"],
},
},
},
},
fetcher: vi.fn() as unknown as typeof fetch,
runtime,
discordConfig: {} as DiscordAccountConfig,
});

expect(result.allowFrom).toEqual(["Alice", "111", "*"]);
expect(result.guildEntries?.["*"]?.users).toEqual(["Bob", "999"]);
expect(result.guildEntries?.["*"]?.channels?.["*"]?.users).toEqual(["Carol", "888"]);
expect(resolveUsersModule.resolveDiscordUserAllowlist).not.toHaveBeenCalled();
});

it("still resolves guild and channel ids when name matching is disabled", async () => {
vi.spyOn(resolveChannelsModule, "resolveDiscordChannelAllowlist").mockResolvedValueOnce([
{
input: "ops/general",
resolved: true,
guildId: "145",
guildName: "Ops",
channelId: "246",
channelName: "general",
},
]);
const runtime = createNonExitingRuntimeEnv();

const result = await resolveDiscordAllowlistConfig({
token: "token",
allowFrom: ["Alice"],
guildEntries: {
ops: {
users: ["Bob"],
channels: {
general: {
users: ["Carol"],
},
},
},
},
fetcher: vi.fn() as unknown as typeof fetch,
runtime,
discordConfig: {} as DiscordAccountConfig,
});

expect(result.allowFrom).toEqual(["Alice"]);
expect(result.guildEntries?.["145"]?.channels?.["246"]?.users).toEqual(["Carol"]);
expect(result.guildEntries?.ops?.users).toEqual(["Bob"]);
expect(resolveChannelsModule.resolveDiscordChannelAllowlist).toHaveBeenCalledTimes(1);
expect(resolveUsersModule.resolveDiscordUserAllowlist).not.toHaveBeenCalled();
});
});

@@ -5,7 +5,8 @@ import {
patchAllowlistUsersInConfigEntries,
summarizeMapping,
} from "openclaw/plugin-sdk/allow-from";
import type { DiscordGuildEntry } from "openclaw/plugin-sdk/config-types";
import type { DiscordAccountConfig, DiscordGuildEntry } from "openclaw/plugin-sdk/config-types";
import { isDangerousNameMatchingEnabled } from "openclaw/plugin-sdk/dangerous-name-runtime";
import type { RuntimeEnv } from "openclaw/plugin-sdk/runtime-env";
import { formatErrorMessage } from "openclaw/plugin-sdk/ssrf-runtime";
import { normalizeStringEntries } from "openclaw/plugin-sdk/text-runtime";
@@ -356,6 +357,7 @@ export async function resolveDiscordAllowlistConfig(params: {
token: string;
guildEntries: unknown;
allowFrom: unknown;
discordConfig: DiscordAccountConfig;
fetcher: typeof fetch;
runtime: RuntimeEnv;
}): Promise<{ guildEntries: GuildEntries | undefined; allowFrom: string[] | undefined }> {
@@ -371,20 +373,22 @@ export async function resolveDiscordAllowlistConfig(params: {
});
}

allowFrom = await resolveAllowFromByUserAllowlist({
token: params.token,
allowFrom,
fetcher: params.fetcher,
runtime: params.runtime,
});

if (hasGuildEntries(guildEntries)) {
guildEntries = await resolveGuildEntriesByUserAllowlist({
if (isDangerousNameMatchingEnabled(params.discordConfig)) {
allowFrom = await resolveAllowFromByUserAllowlist({
token: params.token,
guildEntries,
allowFrom,
fetcher: params.fetcher,
runtime: params.runtime,
});

if (hasGuildEntries(guildEntries)) {
guildEntries = await resolveGuildEntriesByUserAllowlist({
token: params.token,
guildEntries,
fetcher: params.fetcher,
runtime: params.runtime,
});
}
}

return {

@@ -279,6 +279,7 @@ export async function monitorDiscordProvider(opts: MonitorDiscordOpts = {}) {
token,
guildEntries,
allowFrom,
discordConfig: discordCfg,
fetcher: discordRestFetch,
runtime,
});

@@ -217,6 +217,6 @@ describe("resolveDiscordUserAllowlist", () => {
});

expect(results).toHaveLength(2);
expect(results.every((r) => !r.resolved)).toBe(true);
expect(results.map((result) => result.resolved)).toEqual([false, false]);
});
});

@@ -276,7 +276,7 @@ describe("DiscordVoiceManager", () => {

const getLastAudioPlayer = () => {
const player = createAudioPlayerMock.mock.results.at(-1)?.value as
| { state: { status: string } }
| { state: { status: string }; stop: ReturnType<typeof vi.fn> }
| undefined;
if (!player) {
throw new Error("expected Discord voice audio player to be created");

@@ -7,7 +7,7 @@ function expectSchemaIssue(
) {
expect(result.success).toBe(false);
if (!result.success) {
expect(result.error.issues.some((issue) => issue.path.join(".") === issuePath)).toBe(true);
expect(result.error.issues.map((issue) => issue.path.join("."))).toContain(issuePath);
}
}

@@ -315,9 +315,7 @@ describe("FeishuConfigSchema defaultAccount", () => {

expect(result.success).toBe(false);
if (!result.success) {
expect(result.error.issues.some((issue) => issue.path.join(".") === "defaultAccount")).toBe(
true,
);
expect(result.error.issues.map((issue) => issue.path.join("."))).toContain("defaultAccount");
}
});
});

@@ -166,7 +166,6 @@ describe("feishu_doc image fetch hardening", () => {
if (!tool) {
throw new Error("expected Feishu doc tool");
}
expect(tool.execute).toEqual(expect.any(Function));
return tool;
}


@@ -237,6 +237,12 @@ function createMention(params: { openId: string; name: string; key?: string }):
};
}

function mentionOpenIds(event: FeishuMessageEvent): string[] {
return (event.message.mentions ?? []).flatMap((mention) =>
mention.id.open_id ? [mention.id.open_id] : [],
);
}

function createFeishuMonitorRuntime(params?: {
createInboundDebouncer?: PluginRuntime["channel"]["debounce"]["createInboundDebouncer"];
resolveInboundDebounceMs?: PluginRuntime["channel"]["debounce"]["resolveInboundDebounceMs"];
@@ -541,9 +547,9 @@ describe("Feishu inbound debounce regressions", () => {
await vi.advanceTimersByTimeAsync(25);

const dispatched = expectSingleDispatchedEvent();
const mergedMentions = dispatched.message.mentions ?? [];
expect(mergedMentions.some((mention) => mention.id.open_id === "ou_bot")).toBe(true);
expect(mergedMentions.some((mention) => mention.id.open_id === "ou_user_a")).toBe(false);
const mergedOpenIds = mentionOpenIds(dispatched);
expect(mergedOpenIds).toContain("ou_bot");
expect(mergedOpenIds).not.toContain("ou_user_a");
});

it("passes prefetched botName through to handleFeishuMessage", async () => {
@@ -601,8 +607,7 @@ describe("Feishu inbound debounce regressions", () => {
const { dispatched, parsed } = expectParsedFirstDispatchedEvent();
expect(parsed.mentionedBot).toBe(true);
expect(parsed.mentionTargets).toBeUndefined();
const mergedMentions = dispatched.message.mentions ?? [];
expect(mergedMentions.every((mention) => mention.id.open_id === "ou_bot")).toBe(true);
expect(mentionOpenIds(dispatched)).toEqual(["ou_bot"]);
});

it("preserves bot mention signal when the latest merged message has no mentions", async () => {

@@ -1138,7 +1138,7 @@ describe("createFeishuReplyDispatcher streaming behavior", () => {
const updateTexts = streamingInstances[0].update.mock.calls.map((call: unknown[]) =>
typeof call[0] === "string" ? call[0] : "",
);
expect(updateTexts.some((text) => text.includes("🔎 Web Search"))).toBe(true);
expect(updateTexts).toEqual(expect.arrayContaining([expect.stringContaining("🔎 Web Search")]));
expect(streamingInstances[0].close).toHaveBeenCalledWith("final answer", {
note: "Agent: agent",
});
@@ -1171,9 +1171,11 @@ describe("createFeishuReplyDispatcher streaming behavior", () => {
const updateTexts = streamingInstances[0].update.mock.calls.map((call: unknown[]) =>
typeof call[0] === "string" ? call[0] : "",
);
expect(
updateTexts.some((text) => text.includes("🛠️ Exec: run tests, `pnpm test -- --watch=false`")),
).toBe(true);
expect(updateTexts).toEqual(
expect.arrayContaining([
expect.stringContaining("🛠️ Exec: run tests, `pnpm test -- --watch=false`"),
]),
);
});

it("omits message-like tools from streaming card status", async () => {
@@ -1199,7 +1201,7 @@ describe("createFeishuReplyDispatcher streaming behavior", () => {
const updateTexts = streamingInstances[0].update.mock.calls.map((call: unknown[]) =>
typeof call[0] === "string" ? call[0] : "",
);
expect(updateTexts.some((text) => text.includes("Message"))).toBe(false);
expect(updateTexts).not.toEqual(expect.arrayContaining([expect.stringContaining("Message")]));
});

it("does not suppress a later final after error closeout", async () => {

@@ -47,15 +47,13 @@ describe("Feishu security audit findings", () => {
},
])("$name", ({ cfg, expectedFinding, expectedNoFinding }) => {
const findings = collectFeishuSecurityAuditFindings({ cfg });
const findingKeys = findings.map((finding) => `${finding.checkId}:${finding.severity}`);
const checkIds = findings.map((finding) => finding.checkId);
if (expectedFinding) {
expect(
findings.some(
(finding) => finding.checkId === expectedFinding && finding.severity === "warn",
),
).toBe(true);
expect(findingKeys).toContain(`${expectedFinding}:warn`);
}
if (expectedNoFinding) {
expect(findings.some((finding) => finding.checkId === expectedNoFinding)).toBe(false);
expect(checkIds).not.toContain(expectedNoFinding);
}
});
});

@@ -1101,7 +1101,10 @@ describe("google-meet plugin", () => {
"/drive/v3/files/doc-1/export",
"/drive/v3/files/doc-2/export",
]);
expect(driveCalls.every((url) => url.searchParams.get("mimeType") === "text/plain")).toBe(true);
expect(driveCalls.map((url) => url.searchParams.get("mimeType"))).toEqual([
"text/plain",
"text/plain",
]);
});

it("fetches only the latest Meet conference record for a meeting", async () => {

@@ -138,10 +138,19 @@ function chooseBestMeetCalendarEvent(
now: Date,
): GoogleMeetCalendarLookupResult["event"] | undefined {
const nowMs = now.getTime();
return events
.filter((event) => event.status !== "cancelled")
.filter((event) => extractGoogleMeetUriFromCalendarEvent(event))
.toSorted((left, right) => rankCalendarEvent(left, nowMs) - rankCalendarEvent(right, nowMs))[0];
let selected: GoogleMeetCalendarEvent | undefined;
let selectedRank = Number.POSITIVE_INFINITY;
for (const event of events) {
if (event.status === "cancelled" || !extractGoogleMeetUriFromCalendarEvent(event)) {
continue;
}
const rank = rankCalendarEvent(event, nowMs);
if (!selected || rank < selectedRank) {
selected = event;
selectedRank = rank;
}
}
return selected;
}
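The loop above replaces a `filter(...).toSorted(...)[0]` pipeline with a single pass: rather than sorting every candidate just to take the first, it tracks the lowest-ranked eligible event as it scans, which also avoids `Array.prototype.toSorted` (Node 20+). The same selection shape in isolation; `pickLowestRank` and its parameters are illustrative stand-ins, not functions from this codebase:

```typescript
// Hypothetical generic "argmin over eligible items": one pass, no sort,
// mirroring the chooseBestMeetCalendarEvent loop above.
function pickLowestRank<T>(
  items: readonly T[],
  eligible: (item: T) => boolean,
  rank: (item: T) => number,
): T | undefined {
  let selected: T | undefined;
  let selectedRank = Number.POSITIVE_INFINITY;
  for (const item of items) {
    if (!eligible(item)) continue; // skip cancelled / link-less events
    const r = rank(item);
    if (selected === undefined || r < selectedRank) {
      selected = item;
      selectedRank = r;
    }
  }
  return selected;
}
```

This is O(n) instead of O(n log n), and returns `undefined` when nothing qualifies, matching the function's declared return type.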

async function fetchGoogleCalendarEvents(params: {

@@ -19,9 +19,9 @@ function makeZeroUsageSnapshot() {
}

export const asRecord = (value: unknown): Record<string, unknown> => {
expect(value).toBeTruthy();
expect(typeof value).toBe("object");
expect(Array.isArray(value)).toBe(false);
if (!value || typeof value !== "object" || Array.isArray(value)) {
throw new Error("expected record");
}
return value as Record<string, unknown>;
};


@@ -2,6 +2,14 @@ import { readFileSync } from "node:fs";
import { describe, expect, it } from "vitest";

type GoogleManifest = {
modelIdNormalization?: {
providers?: Record<
string,
{
aliases?: Record<string, string>;
}
>;
};
modelCatalog?: {
suppressions?: Array<{
provider?: string;
@@ -83,4 +91,14 @@ describe("google manifest model catalog", () => {
expect(suppressionRefs).not.toContain("google/gemini-2.5-pro");
expect(suppressionRefs).not.toContain("google/gemini-3.1-pro-preview");
});

it("normalizes retired Gemini 3 Pro aliases for all Google chat providers", () => {
const manifest = loadManifest();

for (const provider of GOOGLE_CHAT_PROVIDERS) {
expect(manifest.modelIdNormalization?.providers?.[provider]?.aliases).toMatchObject({
"gemini-3-pro": "gemini-3.1-pro-preview",
});
}
});
});

@@ -18,6 +18,16 @@
"gemini-3.1-flash-preview": "gemini-3-flash-preview"
}
},
"google-gemini-cli": {
"aliases": {
"gemini-3-pro": "gemini-3.1-pro-preview",
"gemini-3-flash": "gemini-3-flash-preview",
"gemini-3.1-pro": "gemini-3.1-pro-preview",
"gemini-3.1-flash-lite": "gemini-3.1-flash-lite-preview",
"gemini-3.1-flash": "gemini-3-flash-preview",
"gemini-3.1-flash-preview": "gemini-3-flash-preview"
}
},
"google-vertex": {
"aliases": {
"gemini-3-pro": "gemini-3.1-pro-preview",

@@ -235,7 +235,9 @@ describe("google video generation provider", () => {
});

const [{ downloadPath }] = downloadMock.mock.calls[0] ?? [{}];
expect(path.basename(String(downloadPath))).toBe("video-1.mp4");
const downloadBaseName = path.basename(String(downloadPath));
expect(downloadBaseName).toContain("video-1.mp4");
expect(downloadBaseName).toMatch(/\.part$/);
expect(result.videos[0]?.buffer).toEqual(Buffer.from("sdk-video"));
expect(result.videos[0]?.fileName).toBe("video-1.mp4");
});

@@ -39,6 +39,10 @@ describe("irc protocol", () => {
it("splits long text on boundaries", () => {
const chunks = splitIrcText("a ".repeat(300), 120);
expect(chunks.length).toBeGreaterThan(2);
expect(chunks.every((chunk) => chunk.length <= 120)).toBe(true);
expect(
chunks
.map((chunk, index) => ({ index, length: chunk.length }))
.filter((chunk) => chunk.length > 120),
).toEqual([]);
});
});
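The IRC test above asserts two invariants of boundary splitting: every chunk fits the limit, and overlong text yields multiple chunks (mapping to `{index, length}` makes a failure report which chunk overflowed). A sketch of a splitter satisfying those invariants, assuming a greedy break-at-spaces strategy; `splitOnBoundaries` is illustrative and not the plugin's actual `splitIrcText`:

```typescript
// Illustrative word-boundary splitter: greedily fills each chunk up to
// maxLen, preferring to break at the last space, with a hard cut as fallback.
function splitOnBoundaries(text: string, maxLen: number): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > maxLen) {
    // Look one char past the limit so a space at position maxLen counts.
    const slice = rest.slice(0, maxLen + 1);
    const cut = slice.lastIndexOf(" ");
    const at = cut > 0 ? cut : maxLen; // no space found: hard cut
    chunks.push(rest.slice(0, at));
    rest = rest.slice(at).trimStart();
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```

Every emitted chunk is at most `maxLen` characters, which is exactly what the `length <= 120` assertion checks.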

@@ -121,7 +121,7 @@ describe("discoverKilocodeModels", () => {
it("returns static catalog in test environment", async () => {
const models = await discoverKilocodeModels();
expect(models.length).toBeGreaterThan(0);
expect(models.some((m) => m.id === "kilo/auto")).toBe(true);
expect(requireModelById(models, "kilo/auto").id).toBe("kilo/auto");
});

it("static catalog has correct defaults for kilo/auto", async () => {
@@ -185,7 +185,7 @@ describe("discoverKilocodeModels (fetch path)", () => {
await withFetchPathTest(mockFetch, async () => {
const models = await discoverKilocodeModels();
expect(models.length).toBeGreaterThan(0);
expect(models.some((m) => m.id === "kilo/auto")).toBe(true);
expect(requireModelById(models, "kilo/auto").id).toBe("kilo/auto");
});
});

@@ -197,7 +197,7 @@ describe("discoverKilocodeModels (fetch path)", () => {
await withFetchPathTest(mockFetch, async () => {
const models = await discoverKilocodeModels();
expect(models.length).toBeGreaterThan(0);
expect(models.some((m) => m.id === "kilo/auto")).toBe(true);
expect(requireModelById(models, "kilo/auto").id).toBe("kilo/auto");
});
});

@@ -211,8 +211,10 @@ describe("discoverKilocodeModels (fetch path)", () => {
});
await withFetchPathTest(mockFetch, async () => {
const models = await discoverKilocodeModels();
expect(models.some((m) => m.id === "kilo/auto")).toBe(true);
expect(models.some((m) => m.id === "anthropic/claude-sonnet-4")).toBe(true);
expect(requireModelById(models, "kilo/auto").id).toBe("kilo/auto");
expect(requireModelById(models, "anthropic/claude-sonnet-4").id).toBe(
"anthropic/claude-sonnet-4",
);
});
});

@@ -256,7 +258,9 @@ describe("discoverKilocodeModels (fetch path)", () => {
const auto = requireModelById(models, "kilo/auto");
expect(auto.name).toBe("Kilo: Auto");
expect(auto.cost.input).toBeCloseTo(5.0);
expect(models.some((m) => m.id === "anthropic/claude-sonnet-4")).toBe(true);
expect(requireModelById(models, "anthropic/claude-sonnet-4").id).toBe(
"anthropic/claude-sonnet-4",
);
});
});
});

@@ -1,5 +1,6 @@
import crypto from "node:crypto";
import type { IncomingMessage, ServerResponse } from "node:http";
import type { RuntimeEnv } from "openclaw/plugin-sdk/runtime-env";
import { createMockIncomingRequest } from "openclaw/plugin-sdk/test-env";
import { describe, expect, it, vi } from "vitest";
import { createLineNodeWebhookHandler, readLineWebhookRequestBody } from "./webhook-node.js";
@@ -29,6 +30,20 @@ function createRes() {

const SECRET = "secret";

type RuntimeEnvMock = RuntimeEnv & {
error: ReturnType<typeof vi.fn<(...args: unknown[]) => void>>;
exit: ReturnType<typeof vi.fn<(code: number) => void>>;
log: ReturnType<typeof vi.fn<(...args: unknown[]) => void>>;
};

function createRuntimeMock(): RuntimeEnvMock {
return {
error: vi.fn<(...args: unknown[]) => void>(),
exit: vi.fn<(code: number) => void>(),
log: vi.fn<(...args: unknown[]) => void>(),
};
}

function createMiddlewareRes() {
const res = {
status: vi.fn(),
@@ -42,7 +57,7 @@ function createMiddlewareRes() {

function createPostWebhookTestHarness(rawBody: string, secret = "secret") {
const bot = { handleWebhook: vi.fn(async () => {}) };
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
const runtime = createRuntimeMock();
const handler = createLineNodeWebhookHandler({
channelSecret: secret,
bot,
@@ -71,11 +86,7 @@ async function invokeWebhook(params: {
headers?: Record<string, string>;
onEvents?: ReturnType<typeof vi.fn>;
autoSign?: boolean;
runtime?: {
log: ReturnType<typeof vi.fn>;
error: ReturnType<typeof vi.fn>;
exit: ReturnType<typeof vi.fn>;
};
runtime?: RuntimeEnv;
}) {
const onEventsMock = params.onEvents ?? vi.fn(async () => {});
const middleware = createLineWebhookMiddleware({
@@ -138,7 +149,7 @@ async function invokeNodePostContract(params: {
throw params.failWith;
}
});
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
const runtime = createRuntimeMock();
const handler = createLineNodeWebhookHandler({
channelSecret: SECRET,
bot: { handleWebhook: dispatched },
@@ -167,7 +178,7 @@ async function invokeMiddlewarePostContract(params: {
rawBody: string;
signed: boolean;
}) {
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
const runtime = createRuntimeMock();
const onEvents = vi.fn(async () => {
if (params.failWith) {
throw params.failWith;
@@ -182,6 +193,7 @@ async function invokeMiddlewarePostContract(params: {
});
return {
body: res.json.mock.calls.at(-1)?.[0],
contentType: undefined,
dispatched,
runtimeError: runtime.error,
status: res.status.mock.calls.at(-1)?.[0],
@@ -288,7 +300,7 @@ describe("LINE webhook shared POST contract", () => {
describe("createLineNodeWebhookHandler", () => {
it("returns 200 for GET", async () => {
const bot = { handleWebhook: vi.fn(async () => {}) };
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
const runtime = createRuntimeMock();
const handler = createLineNodeWebhookHandler({
channelSecret: "secret",
bot,
@@ -305,7 +317,7 @@ describe("createLineNodeWebhookHandler", () => {

it("returns 204 for HEAD", async () => {
const bot = { handleWebhook: vi.fn(async () => {}) };
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
const runtime = createRuntimeMock();
const handler = createLineNodeWebhookHandler({
channelSecret: "secret",
bot,
@@ -333,7 +345,7 @@ describe("createLineNodeWebhookHandler", () => {

it("rejects unsigned POST requests before reading the body", async () => {
const bot = { handleWebhook: vi.fn(async () => {}) };
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
const runtime = createRuntimeMock();
const readBody = vi.fn(async () => JSON.stringify({ events: [{ type: "message" }] }));
const handler = createLineNodeWebhookHandler({
channelSecret: "secret",
@@ -353,7 +365,7 @@ describe("createLineNodeWebhookHandler", () => {
it("uses strict pre-auth limits for signed POST requests", async () => {
const rawBody = JSON.stringify({ events: [{ type: "message" }] });
const bot = { handleWebhook: vi.fn(async () => {}) };
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
const runtime = createRuntimeMock();
const readBody = vi.fn(async (_req: IncomingMessage, maxBytes: number, timeoutMs?: number) => {
expect(maxBytes).toBe(64 * 1024);
|
||||
expect(timeoutMs).toBe(5_000);
|
||||
@@ -414,7 +426,7 @@ describe("createLineNodeWebhookHandler", () => {
|
||||
),
|
||||
};
|
||||
const onRequestAuthenticated = vi.fn();
|
||||
const runtime = { log: vi.fn(), error: vi.fn(), exit: vi.fn() };
|
||||
const runtime = createRuntimeMock();
|
||||
const handler = createLineNodeWebhookHandler({
|
||||
channelSecret: SECRET,
|
||||
bot,
|
||||
|
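The recurring change in this file replaces ad-hoc `{ log, error, exit }` literals with a shared `createRuntimeMock()` factory. A dependency-free sketch of that deduplication pattern follows; the `stub` helper is a hand-rolled stand-in for vitest's `vi.fn` so the sketch runs without a test framework, and the runtime shape is taken from the inline literal the diff removes:

```typescript
// Call-recording stub: a minimal substitute for vi.fn().
type Stub<A extends unknown[] = unknown[], R = void> = {
  (...args: A): R;
  calls: A[];
};

function stub<A extends unknown[] = unknown[], R = void>(
  impl?: (...args: A) => R,
): Stub<A, R> {
  const fn = ((...args: A): R => {
    fn.calls.push(args); // record every invocation's arguments
    return impl ? impl(...args) : (undefined as R);
  }) as Stub<A, R>;
  fn.calls = [];
  return fn;
}

// One factory instead of repeating `{ log: ..., error: ..., exit: ... }`
// inline in every test.
function createRuntimeMock() {
  return {
    log: stub<unknown[]>(),
    error: stub<unknown[]>(),
    exit: stub<[code?: number]>(),
  };
}

const runtime = createRuntimeMock();
runtime.log("ready");
runtime.error("boom", 42);
console.log(runtime.log.calls.length, runtime.error.calls[0]);
```

Centralizing the mock means a later change to the runtime surface (as with the `runtime?: RuntimeEnv` type in this diff) touches one helper rather than every test body.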
@@ -378,7 +378,7 @@ describe("lmstudio-models", () => {
         context_length: 32768,
       }),
     });
-    const loadInit = loadCall![1] as RequestInit;
+    const loadInit = loadCall[1] as RequestInit;
     const loadBody = parseJsonRequestBody(loadInit) as { context_length: number };
     expect(loadBody.context_length).not.toBe(LMSTUDIO_DEFAULT_LOAD_CONTEXT_LENGTH);
   });

@@ -505,7 +505,6 @@ describe("lmstudio stream wrapper", () => {
       "toolcall_delta",
       "done",
     ]);
-    expect(events.some((event) => event.type === "text_delta")).toBe(false);
     const done = events.find((event) => event.type === "done") as {
       message?: { content?: Array<Record<string, unknown>>; stopReason?: string };
       reason?: string;

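The `loadCall![1]` to `loadCall[1]` change drops a non-null assertion that a preceding guard makes unnecessary. A minimal sketch of that narrowing technique; `lastLoadCall`, the URL shapes, and the call data are illustrative assumptions, not the test's real helpers:

```typescript
type Call = [url: string, init: { method?: string }];

function lastLoadCall(calls: Call[]): Call {
  // `find` returns `Call | undefined`; an explicit guard narrows it to
  // `Call`, so callers never need the non-null assertion `call![1]`.
  const call = calls.find(([url]) => url.endsWith("/load"));
  if (call === undefined) {
    throw new Error("expected a /load request");
  }
  return call;
}

const calls: Call[] = [
  ["http://localhost:1234/models", {}],
  ["http://localhost:1234/load", { method: "POST" }],
];
const init = lastLoadCall(calls)[1]; // no `!` needed here
console.log(init.method);
```

Throwing on the missing case also gives a clearer test failure than the runtime `TypeError` a bare `!` would produce.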
@@ -9,6 +9,9 @@ const cliMocks = vi.hoisted(() => ({

 const runtimeMocks = vi.hoisted(() => ({
   ensureMatrixCryptoRuntime: vi.fn(async () => {}),
+  handleMatrixSubagentDeliveryTarget: vi.fn(() => "delivery-target"),
+  handleMatrixSubagentEnded: vi.fn(async () => {}),
+  handleMatrixSubagentSpawning: vi.fn(async () => "spawned"),
   handleVerificationBootstrap: vi.fn(async () => {}),
   handleVerificationStatus: vi.fn(async () => {}),
   handleVerifyRecoveryKey: vi.fn(async () => {}),
@@ -23,6 +26,7 @@ vi.mock("./src/cli.js", () => {

 vi.mock("./plugin-entry.handlers.runtime.js", () => runtimeMocks);
 vi.mock("./runtime-setter-api.js", () => ({ setMatrixRuntime: runtimeMocks.setMatrixRuntime }));
+vi.mock("./src/matrix/subagent-hooks.js", () => runtimeMocks);

 describe("matrix plugin", () => {
   it("registers matrix CLI through a descriptor-backed lazy registrar", async () => {
@@ -68,7 +72,10 @@ describe("matrix plugin", () => {
     expect(entry.kind).toBe("bundled-channel-entry");
     expect(entry.id).toBe("matrix");
     expect(entry.name).toBe("Matrix");
-    expect(entry.setChannelRuntime).toEqual(expect.any(Function));
+    if (!entry.setChannelRuntime) {
+      throw new Error("expected Matrix runtime setter");
+    }
+    expect(() => entry.setChannelRuntime?.({ marker: "runtime" } as never)).not.toThrow();
   });

   it("wires CLI metadata through the bundled entry", () => {
@@ -99,7 +106,7 @@ describe("matrix plugin", () => {
     expect(registerGatewayMethod).not.toHaveBeenCalled();
   });

-  it("registers subagent lifecycle hooks during full runtime registration", () => {
+  it("registers subagent lifecycle hooks during full runtime registration", async () => {
     const on = vi.fn();
     const registerGatewayMethod = vi.fn();
     const api = createTestPluginApi({
@@ -121,8 +128,14 @@ describe("matrix plugin", () => {
       "subagent_ended",
       "subagent_delivery_target",
     ]);
-    for (const [, handler] of on.mock.calls) {
-      expect(handler).toEqual(expect.any(Function));
-    }
+    const handlers = Object.fromEntries(on.mock.calls);
+    await expect(handlers.subagent_spawning({ id: "spawn" })).resolves.toBe("spawned");
+    await expect(handlers.subagent_ended({ id: "ended" })).resolves.toBeUndefined();
+    await expect(handlers.subagent_delivery_target({ id: "target" })).resolves.toBe(
+      "delivery-target",
+    );
+    expect(runtimeMocks.handleMatrixSubagentSpawning).toHaveBeenCalledWith(api, { id: "spawn" });
+    expect(runtimeMocks.handleMatrixSubagentEnded).toHaveBeenCalledWith({ id: "ended" });
+    expect(runtimeMocks.handleMatrixSubagentDeliveryTarget).toHaveBeenCalledWith({ id: "target" });
   });
 });

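The rewritten matrix-plugin test stops asserting only that each registered handler is a function and instead invokes the handlers recorded by the `on` mock. A self-contained sketch of that `Object.fromEntries(on.mock.calls)` dispatch pattern, with a hand-rolled recorder in place of `vi.fn` and hypothetical handler results:

```typescript
type Handler = (payload: { id: string }) => Promise<unknown>;

// Recorder mirroring vi.fn's `mock.calls`: every registration is kept
// as an [eventName, handler] pair.
const calls: Array<[string, Handler]> = [];
const on = (event: string, handler: Handler) => {
  calls.push([event, handler]);
};

// Hypothetical plugin registration wiring three lifecycle hooks.
on("subagent_spawning", async ({ id }) => `spawned:${id}`);
on("subagent_ended", async () => undefined);
on("subagent_delivery_target", async ({ id }) => `target:${id}`);

async function main() {
  // Object.fromEntries turns the recorded pairs into a name -> handler
  // lookup, so each hook can be called and its result asserted, not
  // just its shape checked.
  const handlers = Object.fromEntries(calls);
  console.log(await handlers.subagent_spawning({ id: "a" }));
  console.log(await handlers.subagent_delivery_target({ id: "b" }));
}
main();
```

Exercising the handlers this way catches miswired hooks (e.g. two events registered with the same callback) that a `expect.any(Function)` shape check would let through.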
@@ -47,10 +47,12 @@ describe("matrix channel message adapter", () => {
     if (adapter?.send?.text === undefined || adapter.send.media === undefined) {
       throw new Error("expected matrix text and media message adapter");
     }
+    const sendText = adapter.send.text;
+    const sendMedia = adapter.send.media;

     const proveText = async () => {
       mocks.sendMessageMatrix.mockClear();
-      const result = await adapter.send.text({
+      const result = await sendText({
         cfg,
         to: "room:!room:example",
         text: "hello",
@@ -67,7 +69,7 @@ describe("matrix channel message adapter", () => {

     const proveMedia = async () => {
       mocks.sendMessageMatrix.mockClear();
-      const result = await adapter.send.media({
+      const result = await sendMedia({
         cfg,
         to: "room:!room:example",
         text: "caption",
@@ -91,7 +93,7 @@ describe("matrix channel message adapter", () => {

     const proveReplyThread = async () => {
       mocks.sendMessageMatrix.mockClear();
-      const result = await adapter!.send!.text!({
+      const result = await adapter.send!.text!({
         cfg,
         to: "room:!room:example",
         text: "threaded",
@@ -114,14 +116,14 @@ describe("matrix channel message adapter", () => {

     await verifyChannelMessageAdapterCapabilityProofs({
       adapterName: "matrixMessageAdapter",
-      adapter: adapter!,
+      adapter: adapter,
       proofs: {
         text: proveText,
         media: proveMedia,
         replyTo: proveReplyThread,
         thread: proveReplyThread,
         messageSendingHooks: () => {
-          expect(adapter!.send!.text).toBeTypeOf("function");
+          expect(adapter.send!.text).toBeTypeOf("function");
         },
       },
     });
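Capturing `adapter.send.text` and `adapter.send.media` into locals right after the guard is what lets the later closures drop their `!` assertions: TypeScript's narrowing of optional properties does not survive into callbacks created afterwards, but a `const` binding does. A minimal sketch under a hypothetical `Adapter` shape:

```typescript
interface Adapter {
  send?: {
    text?: (msg: string) => Promise<string>;
    media?: (msg: string) => Promise<string>;
  };
}

const adapter: Adapter = {
  send: {
    text: async (msg) => `text:${msg}`,
    media: async (msg) => `media:${msg}`,
  },
};

async function main() {
  if (adapter.send?.text === undefined || adapter.send.media === undefined) {
    throw new Error("expected text and media senders");
  }
  // Inside a closure, `adapter.send.text` is again `(...) | undefined`,
  // so capture the checked functions instead of sprinkling `!`.
  const sendText = adapter.send.text;
  const sendMedia = adapter.send.media;

  const proveText = async () => sendText("hello"); // no `adapter.send!.text!`
  const proveMedia = async () => sendMedia("pic");

  console.log(await proveText(), await proveMedia());
}
main();
```

The same idea explains the diff's remaining `adapter.send!.text!` in `proveReplyThread`: narrowing could not be carried into that closure, so it keeps the assertions on the property path instead.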
Some files were not shown because too many files have changed in this diff.