mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 05:10:44 +00:00
fix(file-transfer): require canonical node policy authorization (#74742)
* feat(file-transfer): add bundled plugin for binary file ops on nodes
New extensions/file-transfer/ plugin exposing four agent tools
(file_fetch, dir_list, dir_fetch, file_write) and four matching
node-host commands (file.fetch, dir.list, dir.fetch, file.write).
Lets agents read and write files on paired nodes by absolute path,
bypassing the bash output cap (200KB) and the live tool-result
text cap that would otherwise truncate base64 payloads.
Public surface
--------------
- file_fetch({ node, path, maxBytes? })
Image MIMEs return image content blocks; small text (<=8 KB) inlines
as text content; everything else returns a saved-media-path text
block. sha256-verified end-to-end.
- dir_list({ node, path, pageToken?, maxEntries? })
Structured directory listing — name, path, size, mimeType, isDir,
mtime. Paginated. No content transfer.
- dir_fetch({ node, path, maxBytes?, includeDotfiles? })
Server-side tar -czf streamed back, unpacked into the gateway media
store, returns a manifest of saved paths. Single round-trip.
60s wall-clock timeouts on tar create/unpack. tar -xzf without -P
rejects absolute paths in archive entries.
- file_write({ node, path, contentBase64, mimeType?, overwrite?,
createParents? })
Atomic write (temp + rename). Refuses to overwrite by default.
Refuses to write through symlinks (lstat check). Buffer-side
sha256 (no read-back race). Pair with file_fetch to round-trip
files between nodes — DO NOT use exec/cp for file copies.
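A hedged sketch of the file_write shape described above (lstat symlink refusal, refuse-overwrite default, buffer-side sha256, temp + rename). Names and error strings here are illustrative, not the plugin's actual API:

```typescript
import crypto from "node:crypto";
import { promises as fs } from "node:fs";
import os from "node:os";
import path from "node:path";

// Illustrative sketch: refuse symlinks via lstat, refuse overwrite unless
// asked, hash the in-memory buffer (no read-back race), then temp + rename.
async function writeFileAtomic(
  target: string,
  data: Buffer,
  overwrite = false,
): Promise<{ sha256: string; bytes: number }> {
  // lstat (not stat) so an existing symlink is seen as a symlink, not its target.
  const existing = await fs.lstat(target).catch(() => null);
  if (existing?.isSymbolicLink()) throw new Error("SYMLINK_REFUSED");
  if (existing && !overwrite) throw new Error("ALREADY_EXISTS");
  const sha256 = crypto.createHash("sha256").update(data).digest("hex");
  const tmp = path.join(
    path.dirname(target),
    `.${path.basename(target)}.tmp-${process.pid}`,
  );
  await fs.writeFile(tmp, data);
  await fs.rename(tmp, target); // atomic within a single filesystem
  return { sha256, bytes: data.length };
}
```

The rename step is what makes partial writes invisible to readers: they see either the old file or the complete new one, never a half-written buffer.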
All four commands gated by:
- dangerous-by-default node command policy
(gateway.nodes.allowCommands opt-in)
- per-node path policy (gateway.nodes.fileTransfer)
- optional operator approval prompt (ask: off | on-miss | always)
16 MB raw-byte ceiling per single-frame round-trip (25 MB WS frame
minus ~33% base64 overhead and the JSON envelope). The default is 8 MB.
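As a back-of-envelope check on that sizing: the 25 MB frame limit and the 4/3 base64 expansion come from the text above, while the JSON-envelope reserve below is an assumed round number, not the plugin's actual constant.

```typescript
// Hypothetical budget check: how many raw bytes fit in one WS frame once
// base64 expansion and a JSON wrapper are accounted for.
const WS_FRAME_LIMIT = 25 * 1024 * 1024;
const JSON_ENVELOPE_RESERVE = 64 * 1024; // assumption, not the plugin's value

function maxRawBytesForFrame(frameLimit: number = WS_FRAME_LIMIT): number {
  // base64 turns every 3 raw bytes into 4 ASCII chars, so raw <= encoded * 3/4.
  const encodedBudget = frameLimit - JSON_ENVELOPE_RESERVE;
  return Math.floor((encodedBudget * 3) / 4);
}
```

With these numbers the raw budget comes out around 18.7 MiB, which is why the 16 MB ceiling fits comfortably in a single frame.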
Path policy and approvals
-------------------------
Default behavior is DENY. The operator must explicitly opt in:
{
  "gateway": {
    "nodes": {
      "fileTransfer": {
        "<nodeId-or-displayName>": {
          "ask": "off" | "on-miss" | "always",
          "allowReadPaths": ["~/Screenshots/**", "/tmp/**"],
          "allowWritePaths": ["~/Downloads/**"],
          "denyPaths": ["**/.ssh/**", "**/.aws/**"],
          "maxBytes": 16777216
        },
        "*": { "ask": "on-miss" }
      }
    }
  }
}
ask modes:
off — silent: allow if matched, deny if not (default)
on-miss — silent allow if matched; prompt on miss
always — prompt every call (denyPaths still hard-deny)
denyPaths always wins. allow-always from the prompt persists the
exact path back into allowReadPaths/allowWritePaths via
mutateConfigFile so subsequent matching calls go silent.
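A minimal sketch of the decision order above: denyPaths always wins, then an allow match, then the ask mode decides prompt-vs-silent. The real plugin matches globs with minimatch; this sketch substitutes a tiny "<prefix>/**"-only matcher to stay dependency-free, and every name is illustrative. The allow-always persistence via mutateConfigFile is out of scope here.

```typescript
type AskMode = "off" | "on-miss" | "always";
interface PolicyConfig { ask?: AskMode; allow: string[]; deny: string[] }
interface Decision { allow: boolean; prompt: boolean; reason: string }

// Stand-in for minimatch: supports only "<prefix>/**" globs and exact paths.
const matches = (p: string, glob: string): boolean =>
  glob.endsWith("/**") ? p.startsWith(glob.slice(0, -2)) : p === glob;

function evaluate(requested: string, cfg: PolicyConfig): Decision {
  // denyPaths is checked first and hard-denies regardless of ask mode.
  if (cfg.deny.some((g) => matches(requested, g))) {
    return { allow: false, prompt: false, reason: "denyPaths hard-deny" };
  }
  const hit = cfg.allow.some((g) => matches(requested, g));
  const ask = cfg.ask ?? "off";
  // "always": every call prompts; the operator decision overrides `hit`.
  if (ask === "always") return { allow: hit, prompt: true, reason: "ask=always" };
  if (hit) return { allow: true, prompt: false, reason: "allow match" };
  return ask === "on-miss"
    ? { allow: false, prompt: true, reason: "miss, prompt operator" }
    : { allow: false, prompt: false, reason: "miss, silent deny" };
}
```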
Reuses existing primitives — no new gateway methods:
plugin.approval.request / plugin.approval.waitDecision
decision: allow-once | allow-always | deny
Pre-flight against requested path AND post-flight against the
canonicalPath returned by the node — closes symlink-escape attacks
where the requested path matched policy but realpath resolves
somewhere else.
Audit log
---------
JSONL at ~/.openclaw/audit/file-transfer.jsonl. Records every
decision (allow/allowed-once/allowed-always/denied/error) with
timestamp, op, nodeId, displayName, requestedPath, canonicalPath,
decision, error code, sizeBytes, sha256, durationMs. Best-effort
writes; never propagates failure.
Plugin layout
-------------
extensions/file-transfer/
index.ts definePluginEntry, nodeHostCommands
openclaw.plugin.json contracts.tools registration
package.json
src/node-host/{file-fetch,dir-list,dir-fetch,file-write}.ts
src/tools/{file-fetch,dir-list,dir-fetch,file-write}-tool.ts
src/shared/
mime.ts single-source extension->MIME map + image/text sets
errors.ts shared error code enum and helpers
params.ts shared param-validation helpers + GatewayCallOptions
policy.ts evaluateFilePolicy, persistAllowAlways
approval.ts plugin.approval.request wrapper
gatekeep.ts one-stop policy + approval + audit orchestrator
audit.ts JSONL audit sink
Core touch points
-----------------
- src/infra/node-commands.ts: NODE_FILE_FETCH_COMMAND,
NODE_DIR_LIST_COMMAND, NODE_DIR_FETCH_COMMAND,
NODE_FILE_WRITE_COMMAND, NODE_FILE_COMMANDS array
- src/gateway/node-command-policy.ts: all four added to
DEFAULT_DANGEROUS_NODE_COMMANDS
- src/security/audit-extra.sync.ts: audit detail mentions file ops
- src/agents/tools/nodes-tool-media.ts: MEDIA_INVOKE_ACTIONS entry
for file.fetch redirects raw nodes(action=invoke) callers to the
dedicated file_fetch tool to prevent base64 context bloat
- src/agents/tools/nodes-tool.ts: nodes tool description points to
the dedicated file_fetch tool
Known limitations / follow-ups
------------------------------
- No tests in this PR. For a security-sensitive surface this is a
gap; will follow up with a test pass.
- Direct CLI invocation (openclaw nodes invoke --command file.fetch)
bypasses the plugin policy entirely. Plugin-side gating is the
realistic threat model (agent on iMessage requesting paths it
shouldn't), but for true defense-in-depth, policy belongs in the
gateway-side node.invoke dispatch. Move-policy-to-core is a
separate PR.
- file_watch (long-lived filesystem event subscription) is not
included; it needs a new node-protocol primitive for streaming
event channels and was descoped from this PR.
- dir_fetch includeDotfiles: true is the only supported mode;
BSD tar exclude patterns reliably collapse dotfile filtering
to an empty archive. Reliable filtering needs a
`find ! -name ".*" | tar -T -` pipeline; deferred.
- dir_fetch du -sk preflight is a heuristic (du * 4 vs maxBytes);
the mid-stream byte cap is the actual safety net.
* test(file-transfer): add unit tests for handlers, policy, and shared utilities
Adds 77 tests covering:
- handleFileFetch: validation, fs errors, sha256, size cap, symlink canonicalization
- handleFileWrite: validation, atomic write, overwrite policy, parent dir handling, symlink refusal, integrity check, size cap
- handleDirList: validation, fs errors, sorted listing, dotfile inclusion, pagination
- handleDirFetch: validation, fs errors, gzipped tar with sha256, mid-stream byte cap
- evaluateFilePolicy: default-deny, denyPaths-wins, allow matching, ask modes (off/on-miss/always), node-id/displayName/'*' resolution
- persistAllowAlways: append, dedupe, create-on-missing
- shared/mime: extension lookup, image/text inline sets
- shared/errors: err helper, classifyFsError, throwFromNodePayload
Also fixes accumulated lint regressions in the prod source flagged once these
files moved into the changed-gate scope (parseInt -> Number.parseInt, redundant
type casts removed, single-statement if bodies wrapped in braces).
* fix(file-transfer): address PR review feedback (security + availability)
Reviewer findings addressed (greptile + aisle):
- policy: persistAllowAlways no longer escalates per-node approvals to the
'*' wildcard entry; allow-always now writes under the specific node's
own entry, never the wildcard (greptile P1 SECURITY).
- policy: add literal '..' segment short-circuit in evaluateFilePolicy,
raised before glob match. Stops "/allowed/../etc/passwd" from passing
preflight against "/allowed/**" globs (aisle MEDIUM CWE-22).
- file-write: replace no-op base64 try/catch with actual round-trip
validation. Buffer.from(s, "base64") never throws — invalid input is
silently decoded to garbage bytes. Now re-encodes and compares
modulo padding/url-variant chars (greptile P1 SECURITY).
- file-write: document the parent-symlink residual risk and rely on the
existing gateway-side post-flight policy check; full rollback requires
a node-side file.unlink which is deferred to a follow-up. Initial
segment-walk attempt was reverted because it false-positives on system
symlinks like macOS /var → /private/var (aisle HIGH CWE-59).
- dir-fetch tool: add preValidateTarball pass that runs `tar -tzvf` and
rejects symlinks, hardlinks, absolute paths, '..' traversal,
uncompressed sizes >64MB, and entry counts >5000 — before any
extraction. Drops --no-overwrite-dir (GNU-only flag rejected by BSD
tar on macOS) (aisle HIGH x2 CWE-22 + CWE-409, greptile P2).
- dir-fetch tool: stream-hash files via fs.open + read loop instead of
fs.readFile to avoid full-buffer reads on large extracted entries.
- dir-fetch handler: replace spawnSync in countTarEntries with async
spawn + bounded buffer so tar -tzf can't park the node-host event
loop for up to 10s on a slow filesystem (greptile P1 AVAIL).
- audit: clear auditDirPromise on rejection so a transient mkdir
failure doesn't permanently silence the audit log (greptile P2).
New tests: wildcard escalation rejection, base64 malformed/url-variant,
'..' traversal short-circuit (3 cases). 84/84 passing.
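The base64 round-trip validation from the file-write fix above can be sketched like this (helper name illustrative; the normalization implements the "modulo padding/url-variant chars" comparison):

```typescript
// Buffer.from(s, "base64") never throws and silently skips invalid
// characters, so validity has to be proven by re-encoding and comparing.
function isValidBase64(s: string): boolean {
  // Fold the url-safe variant into standard base64 and drop padding.
  const normalized = s.replace(/-/g, "+").replace(/_/g, "/").replace(/=+$/, "");
  const reencoded = Buffer.from(normalized, "base64")
    .toString("base64")
    .replace(/=+$/, "");
  return reencoded === normalized;
}
```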
* fix(file-transfer): CI failures + second-round PR review feedback
CI failures on previous push:
- Declare runtime deps (minimatch, typebox) in package.json — failed the
extension-runtime-dependencies contract test that scans imports.
- Switch policy.ts and policy.test.ts off the broad
openclaw/plugin-sdk/config-runtime barrel and onto the narrow
openclaw/plugin-sdk/config-mutation + runtime-config-snapshot subpaths.
This satisfies the deprecated-internal-config-api architecture guard.
Second-round Aisle findings:
- policy: traversal-segment check now treats backslash and forward slash
as equivalent, so a Windows node can't be hit with mixed-separator
"C:\\allowed\\..\\Windows\\system.ini" (Aisle HIGH CWE-22).
- dir-fetch tool: replace the single fragile `tar -tvzf` parser pass
(which broke for filenames containing whitespace) with two robust
passes: `tar -tzf` for paths only (one per line, no parsing of
fixed columns) and `tar -tzvf` for type chars only (FIRST CHAR of each
line, never the path column). Also reject backslash-containing entry
names. Drops the in-process uncompressed-size cap because reliably
parsing sizes from tar output is fragile and Aisle flagged it as a
bypass primitive — entry-count cap stays (Aisle HIGH CWE-22, MED).
Tests still 84/84 passing.
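The separator-agnostic '..' short-circuit can be sketched as follows (illustrative helper, not the plugin's exact code):

```typescript
// Split on both separators so mixed-separator Windows paths cannot smuggle
// a traversal segment past glob matching.
function hasTraversalSegment(p: string): boolean {
  return p.split(/[\\/]+/).some((segment) => segment === "..");
}
```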
* fix(file-transfer): third-round PR review feedback
Aisle's re-analysis on b63daa6a05 surfaced 3 actionable findings:
- nodes.invoke bypass (HIGH CWE-285): generic nodes.action="invoke" let
agents call dir.list/dir.fetch/file.write directly, skipping the
file-transfer plugin's gatekeep + policy + approval flow. Only file.fetch
was redirected to its dedicated tool. Add the other three to
MEDIA_INVOKE_ACTIONS so the redirect-or-deny logic in
nodes-tool-commands fires for all four. The dedicated tools enforce
policy; the generic invoke surface no longer has a way to skip them
without an explicit allowMediaInvokeCommands opt-in.
- prototype pollution in persistAllowAlways (MED CWE-1321): a paired
node with displayName "__proto__" / "prototype" / "constructor" would
mutate the fileTransfer object's prototype when persisting allow-always.
Reject those keys explicitly. Switch the existing-key lookup to
Object.prototype.hasOwnProperty.call so a key like "constructor"
doesn't accidentally match Object.prototype.constructor.
- decompression-bomb cap in dir_fetch (MED CWE-409): compressed tar is
bounded upstream, but a highly compressible bomb can still expand to
gigabytes. Enforce DIR_FETCH_MAX_UNCOMPRESSED_BYTES (64MB) summed
across extracted files and DIR_FETCH_MAX_SINGLE_FILE_BYTES (16MB) per
entry, both checked during the post-extract walk. On bust, rm -rf the
rootDir and audit-log + throw UNCOMPRESSED_TOO_LARGE.
Tests: 85/85 passing (added prototype-pollution rejection test).
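A sketch of the post-extract size walk: the two caps mirror DIR_FETCH_MAX_UNCOMPRESSED_BYTES and DIR_FETCH_MAX_SINGLE_FILE_BYTES from the text above, but the walk itself is simplified (the real handler also rm -rf's the extraction root and audit-logs before throwing).

```typescript
import { promises as fs } from "node:fs";
import os from "node:os";
import path from "node:path";

const MAX_TOTAL_BYTES = 64 * 1024 * 1024;       // summed across extracted files
const MAX_SINGLE_FILE_BYTES = 16 * 1024 * 1024; // per entry

// Recursively sum regular-file sizes under dir, enforcing both caps.
async function walkTotal(dir: string): Promise<number> {
  let total = 0;
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      total += await walkTotal(full); // subtree totals roll up to the caller
    } else if (entry.isFile()) {
      const { size } = await fs.stat(full);
      if (size > MAX_SINGLE_FILE_BYTES) throw new Error("UNCOMPRESSED_TOO_LARGE");
      total += size;
    }
  }
  if (total > MAX_TOTAL_BYTES) throw new Error("UNCOMPRESSED_TOO_LARGE");
  return total;
}
```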
Aisle's HIGH parent-symlink finding remains documented as deferred — full
rollback requires a node-side file.unlink command which is out of scope
for this PR. The gateway-side post-flight policy check still detects and
loudly errors on canonical-path mismatches.
* fix(file-transfer): refuse symlink traversal by default with followSymlinks opt-in
Closes the deferred Aisle HIGH parent-symlink finding. Instead of
detecting the escape in a post-flight gateway check after the file is
already written, the node-side handler now refuses pre-flight if any
component of the requested path resolves through a symlink.
Behavior:
- Reads (file.fetch / dir.list / dir.fetch): node realpath()s the
requested path. If canonical != requested AND followSymlinks=false,
return SYMLINK_REDIRECT { canonicalPath } — no I/O happens.
- Writes (file.write): node realpath()s the parent dir. Same refusal
rule. The lstat-on-final check is kept to catch the case where the
target file itself is an existing symlink.
- Opt-in: set gateway.nodes.fileTransfer.<node>.followSymlinks=true to
bring back the previous "follow + post-flight check" behavior.
Operator UX: the SYMLINK_REDIRECT response includes the canonical path
so the operator can either update their allow list to the canonical form
or set followSymlinks=true on that node. On macOS, /var → /private/var
and /tmp → /private/tmp are system aliases that trip the new check, so
operators using those paths need followSymlinks=true OR canonical-path
allowlists.
Wiring:
- Add followSymlinks?: boolean to NodeFilePolicyConfig.
- evaluateFilePolicy returns followSymlinks (default false) on its
ok=true branches.
- gatekeep propagates it via GatekeepOutcome.
- Each tool passes it as a node.invoke param.
- Each handler honors it pre-flight before any read/write.
Tests updated: 89/89 passing.
- realpath(mkdtemp()) so existing happy-path tests don't trip the new
default on macOS where mkdtemp lands under symlinked /var/folders.
- New tests: SYMLINK_REDIRECT refusal for file.fetch and file.write
parent traversal; opt-in passthrough when followSymlinks=true.
- New policy test: followSymlinks propagation default false / true.
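The pre-flight refusal can be sketched as follows. For reads the requested path itself is canonicalized; file.write applies the same rule to the parent directory. The response shape (SYMLINK_REDIRECT plus canonicalPath) follows the text; the function name is illustrative.

```typescript
import { promises as fs } from "node:fs";
import os from "node:os";
import path from "node:path";

// Refuse when the canonical path diverges from the requested path unless
// the operator opted in with followSymlinks=true.
async function preflightSymlinkCheck(requested: string, followSymlinks: boolean) {
  const canonical = await fs.realpath(requested);
  if (!followSymlinks && canonical !== path.resolve(requested)) {
    // In the real handler no I/O on the target happens past this point.
    return { ok: false as const, code: "SYMLINK_REDIRECT", canonicalPath: canonical };
  }
  return { ok: true as const, canonicalPath: canonical };
}
```

Returning the canonical path is what powers the operator UX described above: the operator can copy it straight into the allow list instead of flipping followSymlinks.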
* fix(file-transfer): close two more aisle findings on 069bd66
Aisle re-analysis on 069bd66 surfaced two issues my earlier round-three
fix missed:
- HIGH (CWE-284): file.fetch / dir.fetch / dir.list / file.write were
still bypassable via the generic nodes.action="invoke" surface when
the operator had set allowMediaInvokeCommands=true. That flag was
meant to opt in to base64-bloat for camera/screen, not to disable
path policy on file-transfer. Split the redirect map: introduce
POLICY_REDIRECT_INVOKE_COMMANDS (file-transfer only) which ALWAYS
reroutes to its dedicated tool regardless of the bloat flag. Camera
and screen continue to use the bloat-only redirect (suppressed by
allowMediaInvokeCommands=true). Confirmed by clawsweeper P1.
- MED (CWE-276): tar -xzf in dir_fetch unpack preserved archive
ownership and permissions, so a malicious node could plant
setuid/setgid or world-writable files on a gateway running with
elevated privileges. Add --no-same-owner --no-same-permissions
(both flags are portable across BSD tar / GNU tar).
Tests: 89/89 passing.
* chore(file-transfer): drop file_watch from plugin description
Phase 5 (file_watch) was deferred earlier in this PR. Strip the watch
mention from the plugin description in package.json,
openclaw.plugin.json, and index.ts so the metadata reflects what's
actually shipped (file_fetch, dir_list, dir_fetch, file_write).
Closes clawsweeper P3.
* fix(file-transfer): hash before rename and allow zero-byte round-trip
Two of Peter's review findings on PR #74134:
- P2 (file-write integrity): hash the decoded buffer + compare against
expectedSha256 BEFORE temp+rename. Previously the rename happened
first, then the sha check unlinked the target on mismatch — with
overwrite=true a bad caller hash could replace + delete the original.
Now a hash mismatch returns INTEGRITY_FAILURE without touching disk.
Added a regression test that asserts the original file survives.
- P2/P3 (zero-byte round-trip): the tool layer's truthy checks on
contentBase64 and base64 rejected the empty string, blocking zero-byte
files from round-tripping through file_fetch -> file_write. Switched
to type-checks (typeof === "string") and added zero-byte tests at the
handler layer for both fetch and write (sha matches the known empty
digest).
Tests: 92/92 passing.
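The zero-byte fix can be sketched as (function name and error string illustrative):

```typescript
// A truthy check on contentBase64 rejects "", blocking zero-byte files;
// the fix is a type check, which accepts the empty string.
function validateContentBase64(contentBase64: unknown): string {
  if (typeof contentBase64 !== "string") {
    throw new Error("INVALID_PARAMS: contentBase64 must be a string");
  }
  return contentBase64; // "" is a valid zero-byte payload
}
```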
* fix(file-transfer): declare gateway.nodes.fileTransfer in core config schema
Peter's P1/P2 finding: the plugin reads/writes gateway.nodes.fileTransfer
via casts through unknown because the strict zod schema and OpenClawConfig
type didn't declare it. That meant `openclaw config validate` would
reject the very examples in the plugin's own documentation.
- Add fileTransfer block to gateway.nodes in src/config/zod-schema.ts
with the full per-node entry shape (ask, allowReadPaths,
allowWritePaths, denyPaths, maxBytes, followSymlinks).
- Add GatewayNodeFileTransferEntry + the fileTransfer field on
GatewayNodesConfig in src/config/types.gateway.ts.
- Drop the `as unknown` casts in the extension's policy.ts now that
gateway.nodes.fileTransfer is properly typed end-to-end.
- Regenerate docs/.generated/config-baseline.sha256.
Tests: 92/92 passing. pnpm config:docs:check OK.
* fix(file-transfer): enforce path policy at gateway dispatch
Closes Peter's P1 review finding on PR #74134.
The agent-tool-only redirect added in earlier commits left CLI
(`openclaw nodes invoke`), plugin-runtime, and raw `node.invoke` callers
able to skip the file-transfer path policy entirely. The fix moves the
security boundary down to the gateway: every code path that reaches
`node.invoke` for file.fetch / dir.list / dir.fetch / file.write now
runs the same allow/deny check.
- New: src/gateway/file-transfer-dispatch.ts with
`evaluateFileTransferDispatchPolicy` and `isFileTransferCommand`. Same
semantics as the extension-side `evaluateFilePolicy` minus the
operator-prompt flow (prompts stay at the agent-tool layer; the
gateway is silent enforcement).
- src/gateway/server-methods/nodes.ts: after the existing command
allowlist check, run the new gate before forwarding. Denies emit
INVALID_REQUEST with a structured `{ command, code, reason }`.
- Decision matrix mirrors the extension: NO_POLICY (no entry for
this node) → deny, denyPaths wins, '..' traversal short-circuit
(with backslash separator handling), allowPaths match → allow,
no allow match → deny.
- 19 new unit tests covering each branch including identity
resolution (nodeId/displayName/'*'), prototype-pollution-safe lookup,
and read-vs-write allow-list separation.
Note on allow-once approvals: the agent tool's interactive
`allow-once` decision now has to flow through the dedicated tool's
pre-flight (which forwards an approved request); raw `nodes.invoke`
callers cannot benefit from one-time approvals because the gateway is
silent. allow-always (which persists to allowReadPaths/allowWritePaths)
continues to work transparently because by the time the next request
hits the gateway the path is in the persisted allow list.
Tests: 92 extension + 19 gateway = 111 total, all passing.
* fix(file-transfer): enforce node policy in gateway
* fix(file-transfer): use plugin node policy only
* fix(file-transfer): harden node policy edge cases
* fix(file-transfer): close review hardening gaps
* fix(file-transfer): harden node invoke policy
* fix(file-transfer): align runtime dependency versions
* fix(file-transfer): keep minimatch extension-owned
* refactor(file-transfer): remove unused approval gate
* fix(file-transfer): require canonical node policy authorization
Co-authored-by: Omar Shahine <10343873+omarshahine@users.noreply.github.com>
* fix(clawsweeper): address review for automerge-openclaw-openclaw-74134 (1)
Co-authored-by: Omar Shahine <10343873+omarshahine@users.noreply.github.com>
* fix(file-transfer): recheck dir fetch archive policy after fetch
* fix(file-transfer): name file-transfer tool in invoke redirect
---------
Co-authored-by: Omar Shahine <10343873+omarshahine@users.noreply.github.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Co-authored-by: clawsweeper-repair <clawsweeper-repair@users.noreply.github.com>
.github/labeler.yml (vendored): 6 lines changed

@@ -9,6 +9,12 @@
   - "extensions/azure-speech/**"
   - "docs/providers/azure-speech.md"
   - "docs/tools/tts.md"
+"plugin: file-transfer":
+  - changed-files:
+      - any-glob-to-any-file:
+          - "extensions/file-transfer/**"
+          - "docs/nodes/index.md"
+          - "docs/plugins/sdk-runtime.md"
 "channel: discord":
   - changed-files:
       - any-glob-to-any-file:
@@ -45,6 +45,7 @@ Docs: https://docs.openclaw.ai
 - Plugins/runtime-deps: verify staged package entry files before reusing mirrored runtime roots, so browser-control repairs incomplete `ajv`/MCP SDK installs after update instead of failing after restart on a missing `ajv/dist/ajv.js`. Refs #74630. Thanks @spickeringlr.
 - Heartbeat: resolve `responsePrefix` template variables with the selected provider, model, and thinking context before delivering alerts or suppressing prefixed `HEARTBEAT_OK` replies. Fixes #43064; repairs #43065; supersedes #46858. Thanks @yweiii and @JunJD.
 - Memory/LanceDB: show full memory UUIDs in the `memory_forget` candidate list so agents can pass the displayed ID back to targeted deletion without hitting the full-UUID validator. (#66913) Thanks @amittell.
+- File-transfer plugin: require canonical read-path preflight authorization for `file.fetch`, fail closed when `dir.fetch` preflight entries are missing, absolute, or traversing, and recheck returned archive entries before handing archive bytes to callers. Carries forward #74134. Thanks @omarshahine.
 - Channels/Feishu: retry file-typed iOS video resource downloads as `media` after a Feishu/Lark HTTP 502 and preserve the original 502 when the fallback also fails. Fixes #49855; carries forward #50164 and #73986. Thanks @alex-xuweilong.
 - Providers/Amazon Bedrock: expose the full Claude Opus 4.7 thinking profile (`xhigh`, `adaptive`, and `max`) for Bedrock model refs, while keeping Opus/Sonnet 4.6 on adaptive-by-default, so `/think` menus and validation match the Anthropic transport behavior. Fixes #74701. Thanks @prasad-yashdeep, @sparkleHazard, @Sanjays2402, and @hclsys.
 - Plugins/tokenjuice: compile the bundled plugin against tokenjuice 0.7.0's published OpenClaw host types instead of a local compatibility shim, so package contract drift fails in OpenClaw validation before release. Thanks @vincentkoc.
docs/.generated/config-baseline.sha256

@@ -1,4 +1,4 @@
-7bf720f6d9040c53323553b1bd351f688137c6b352c4cf2acfd7f7d252644b38 config-baseline.json
+cb32b51492306179b4537514b0650ab24e2f5f8f6c2eda92154cb1420a11e560 config-baseline.json
 ab9a004ec78ed51e646be29eb10aa6700de1d47fee77331a85ca5e2cd15b6e93 config-baseline.core.json
 92712871defa92eeda8161b516db85574681f2b70678b940508a808b987aeae2 config-baseline.channel.json
-c4231c2194206547af8ad94342dc00aadb734f43cb49cc79d4c46bdbb80c3f95 config-baseline.plugin.json
+ede8b3d9bd7848a09abfcd9fa4f007d289d05742f66b0e38ef459da6dbf40897 config-baseline.plugin.json
@@ -1,2 +1,2 @@
-9262e43a0171f4c38a0590fe36d80b13382beee4a1be1ae2ffb109ace5f37b31 plugin-sdk-api-baseline.json
-c7385f6584052938fe9cce00ae5f9d90cb3b08f943cc82122235f18a1213439b plugin-sdk-api-baseline.jsonl
+dd840b7c222ca003aa5336aabff8a126e3e254474941ddab93165e0e44944ffa plugin-sdk-api-baseline.json
+443878722940029e4ae5220f3c23ffc321559b73848f6a7a3f4cab98c076924e plugin-sdk-api-baseline.jsonl
@@ -202,6 +202,12 @@ Dangerous or privacy-heavy commands such as `camera.snap`, `camera.clip`, and
 `gateway.nodes.allowCommands`. `gateway.nodes.denyCommands` always wins over
 defaults and extra allowlist entries.
 
+Plugin-owned node commands can add a Gateway node-invoke policy. That policy
+runs after the allowlist check and before forwarding to the node, so raw
+`node.invoke`, CLI helpers, and dedicated agent tools share the same plugin
+permission boundary. Dangerous plugin node commands still require explicit
+`gateway.nodes.allowCommands` opt-in.
+
 After a node changes its declared command list, reject the old device pairing
 and approve the new request so the gateway stores the updated command snapshot.
@@ -178,7 +178,9 @@ Provider and channel execution paths must use the active runtime config snapshot
   });
   ```
 
-Inside the Gateway this runtime is in-process. In plugin CLI commands it calls the configured Gateway over RPC, so commands such as `openclaw googlemeet recover-tab` can inspect paired nodes from the terminal. Node commands still go through normal Gateway node pairing, command allowlists, and node-local command handling.
+Inside the Gateway this runtime is in-process. In plugin CLI commands it calls the configured Gateway over RPC, so commands such as `openclaw googlemeet recover-tab` can inspect paired nodes from the terminal. Node commands still go through normal Gateway node pairing, command allowlists, plugin node-invoke policies, and node-local command handling.
+
+Plugins that expose dangerous node-host commands should register a node-invoke policy with `api.registerNodeInvokePolicy(...)`. The policy runs in the Gateway after command allowlist checks and before the command is forwarded to the node, so direct `node.invoke` calls and higher-level plugin tools share the same enforcement path.
 
 </Accordion>
 <Accordion title="api.runtime.tasks.managedFlows">
extensions/file-transfer/index.ts (new file, 70 lines)

@@ -0,0 +1,70 @@
import {
  definePluginEntry,
  type OpenClawPluginNodeHostCommand,
} from "openclaw/plugin-sdk/plugin-entry";
import { handleDirFetch } from "./src/node-host/dir-fetch.js";
import { handleDirList } from "./src/node-host/dir-list.js";
import { handleFileFetch } from "./src/node-host/file-fetch.js";
import { handleFileWrite } from "./src/node-host/file-write.js";
import { createFileTransferNodeInvokePolicy } from "./src/shared/node-invoke-policy.js";
import { createDirFetchTool } from "./src/tools/dir-fetch-tool.js";
import { createDirListTool } from "./src/tools/dir-list-tool.js";
import { createFileFetchTool } from "./src/tools/file-fetch-tool.js";
import { createFileWriteTool } from "./src/tools/file-write-tool.js";

const fileTransferNodeHostCommands: OpenClawPluginNodeHostCommand[] = [
  {
    command: "file.fetch",
    cap: "file",
    dangerous: true,
    handle: async (paramsJSON) => {
      const params = paramsJSON ? JSON.parse(paramsJSON) : {};
      const result = await handleFileFetch(params);
      return JSON.stringify(result);
    },
  },
  {
    command: "dir.list",
    cap: "file",
    dangerous: true,
    handle: async (paramsJSON) => {
      const params = paramsJSON ? JSON.parse(paramsJSON) : {};
      const result = await handleDirList(params);
      return JSON.stringify(result);
    },
  },
  {
    command: "dir.fetch",
    cap: "file",
    dangerous: true,
    handle: async (paramsJSON) => {
      const params = paramsJSON ? JSON.parse(paramsJSON) : {};
      const result = await handleDirFetch(params);
      return JSON.stringify(result);
    },
  },
  {
    command: "file.write",
    cap: "file",
    dangerous: true,
    handle: async (paramsJSON) => {
      const params = paramsJSON ? JSON.parse(paramsJSON) : {};
      const result = await handleFileWrite(params);
      return JSON.stringify(result);
    },
  },
];

export default definePluginEntry({
  id: "file-transfer",
  name: "File Transfer",
  description: "Fetch, list, and write files on paired nodes via dedicated node commands.",
  nodeHostCommands: fileTransferNodeHostCommands,
  register(api) {
    api.registerNodeInvokePolicy(createFileTransferNodeInvokePolicy());
    api.registerTool(createFileFetchTool());
    api.registerTool(createDirListTool());
    api.registerTool(createDirFetchTool());
    api.registerTool(createFileWriteTool());
  },
});
extensions/file-transfer/openclaw.plugin.json (new file, 50 lines)

@@ -0,0 +1,50 @@
{
  "id": "file-transfer",
  "activation": {
    "onStartup": true
  },
  "enabledByDefault": true,
  "name": "File Transfer",
  "description": "Fetch, list, and write files on paired nodes via dedicated node commands. Bypasses bash stdout truncation by using base64 over node.invoke for binaries up to 16 MB.",
  "contracts": {
    "tools": ["file_fetch", "dir_list", "dir_fetch", "file_write"]
  },
  "configSchema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {
      "nodes": {
        "type": "object",
        "additionalProperties": {
          "type": "object",
          "additionalProperties": false,
          "properties": {
            "ask": {
              "type": "string",
              "enum": ["off", "on-miss", "always"]
            },
            "allowReadPaths": {
              "type": "array",
              "items": { "type": "string" }
            },
            "allowWritePaths": {
              "type": "array",
              "items": { "type": "string" }
            },
            "denyPaths": {
              "type": "array",
              "items": { "type": "string" }
            },
            "maxBytes": {
              "type": "number"
            },
            "followSymlinks": {
              "type": "boolean",
              "default": false
            }
          }
        }
      }
    }
  }
}
extensions/file-transfer/package.json (new file, 21 lines)

@@ -0,0 +1,21 @@
{
  "name": "@openclaw/file-transfer",
  "version": "2026.4.27",
  "description": "OpenClaw file transfer plugin (file_fetch, dir_list, dir_fetch, file_write)",
  "type": "module",
  "dependencies": {
    "minimatch": "10.2.4",
    "typebox": "1.1.34"
  },
  "devDependencies": {
    "@openclaw/plugin-sdk": "workspace:*"
  },
  "openclaw": {
    "extensions": [
      "./index.ts"
    ],
    "bundle": {
      "stageRuntimeDependencies": false
    }
  }
}
135
extensions/file-transfer/src/node-host/dir-fetch.test.ts
Normal file
135
extensions/file-transfer/src/node-host/dir-fetch.test.ts
Normal file
@@ -0,0 +1,135 @@
import crypto from "node:crypto";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { handleDirFetch } from "./dir-fetch.js";

let tmpRoot: string;

beforeEach(async () => {
  // realpath: see file-fetch.test.ts for the macOS symlinked-tmpdir reason.
  tmpRoot = await fs.realpath(await fs.mkdtemp(path.join(os.tmpdir(), "dir-fetch-test-")));
});

afterEach(async () => {
  await fs.rm(tmpRoot, { recursive: true, force: true });
});

// dir-fetch shells out to /usr/bin/tar. Skip the body of these tests on
// platforms without it (Windows CI). They still register, just no-op.
const HAS_TAR = process.platform !== "win32";

describe("handleDirFetch — input validation", () => {
  it("rejects empty / non-string path", async () => {
    expect(await handleDirFetch({ path: "" })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });

  it("rejects relative paths", async () => {
    expect(await handleDirFetch({ path: "relative" })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });

  it("rejects paths with NUL bytes", async () => {
    expect(await handleDirFetch({ path: "/tmp/foo\0bar" })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });
});

describe("handleDirFetch — fs errors", () => {
  it.runIf(HAS_TAR)("returns NOT_FOUND for a missing directory", async () => {
    const r = await handleDirFetch({ path: path.join(tmpRoot, "missing") });
    expect(r).toMatchObject({ ok: false, code: "NOT_FOUND" });
  });

  it.runIf(HAS_TAR)("returns IS_FILE when path resolves to a file", async () => {
    const f = path.join(tmpRoot, "f.txt");
    await fs.writeFile(f, "x");
    expect(await handleDirFetch({ path: f })).toMatchObject({
      ok: false,
      code: "IS_FILE",
    });
  });
});

describe("handleDirFetch — happy path", () => {
  it("preflights directory entries without creating a tarball", async () => {
    await fs.writeFile(path.join(tmpRoot, "a.txt"), "alpha\n");
    await fs.mkdir(path.join(tmpRoot, ".ssh"));
    await fs.writeFile(path.join(tmpRoot, ".ssh", "id_rsa"), "secret\n");
    await fs.mkdir(path.join(tmpRoot, "sub"));
    await fs.writeFile(path.join(tmpRoot, "sub", "b.txt"), "beta\n");

    const r = await handleDirFetch({ path: tmpRoot, preflightOnly: true });
    if (!r.ok) {
      throw new Error(`expected ok, got ${r.code}: ${r.message}`);
    }

    expect(r).toMatchObject({
      path: tmpRoot,
      tarBase64: "",
      tarBytes: 0,
      sha256: "",
      preflightOnly: true,
    });
    expect(r.entries).toEqual([".ssh", ".ssh/id_rsa", "a.txt", "sub", "sub/b.txt"]);
    expect(r.fileCount).toBe(r.entries?.length);
  });

  it.runIf(HAS_TAR)("returns a gzipped tar with byte count and sha256", async () => {
    await fs.writeFile(path.join(tmpRoot, "a.txt"), "alpha\n");
    await fs.writeFile(path.join(tmpRoot, "b.txt"), "beta\n");
    await fs.mkdir(path.join(tmpRoot, "sub"));
    await fs.writeFile(path.join(tmpRoot, "sub", "c.txt"), "gamma\n");

    const r = await handleDirFetch({ path: tmpRoot });
    if (!r.ok) {
      throw new Error(`expected ok, got ${r.code}: ${r.message}`);
    }

    expect(r.tarBytes).toBeGreaterThan(0);
    expect(r.tarBase64.length).toBeGreaterThan(0);

    const buf = Buffer.from(r.tarBase64, "base64");
    expect(buf.byteLength).toBe(r.tarBytes);

    const expectedSha = crypto.createHash("sha256").update(buf).digest("hex");
    expect(r.sha256).toBe(expectedSha);

    // gzip magic bytes
    expect(buf[0]).toBe(0x1f);
    expect(buf[1]).toBe(0x8b);

    // file count covers the regular files we created (3); BSD tar may also
    // list directory entries, so be generous.
    expect(r.fileCount).toBeGreaterThanOrEqual(3);
    expect(r.entries).toEqual(expect.arrayContaining(["a.txt", "b.txt", "sub", "sub/c.txt"]));
    expect(r.fileCount).toBe(r.entries?.length);
  });
});

describe("handleDirFetch — size cap", () => {
  it.runIf(HAS_TAR)(
    "returns TREE_TOO_LARGE when content exceeds the cap mid-stream",
    async () => {
      // Write enough random content to exceed a small maxBytes. Random bytes
      // don't compress, so gzip output is roughly the same size as input.
      const big = crypto.randomBytes(512 * 1024);
      await fs.writeFile(path.join(tmpRoot, "big1.bin"), big);
      await fs.writeFile(path.join(tmpRoot, "big2.bin"), big);
      await fs.writeFile(path.join(tmpRoot, "big3.bin"), big);

      // 64KB cap should trip either the du preflight or the streaming SIGTERM.
      const r = await handleDirFetch({ path: tmpRoot, maxBytes: 64 * 1024 });
      expect(r).toMatchObject({ ok: false, code: "TREE_TOO_LARGE" });
    },
    30_000,
  );
});
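The size-cap test above leans on the assumption that gzip cannot shrink random input. A standalone check of that assumption (plain Node, no vitest; not part of the plugin):

```typescript
import crypto from "node:crypto";
import { gzipSync } from "node:zlib";

// Incompressible input: deflate falls back to stored/near-stored blocks, so
// gzip output is slightly larger than the input, never meaningfully smaller.
const raw = crypto.randomBytes(512 * 1024);
const gz = gzipSync(raw);
console.log(gz.byteLength >= raw.byteLength);
```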
381 extensions/file-transfer/src/node-host/dir-fetch.ts Normal file
@@ -0,0 +1,381 @@
import { spawn } from "node:child_process";
import crypto from "node:crypto";
import fs from "node:fs/promises";
import path from "node:path";

export const DIR_FETCH_HARD_MAX_BYTES = 16 * 1024 * 1024;
export const DIR_FETCH_DEFAULT_MAX_BYTES = 8 * 1024 * 1024;

export type DirFetchParams = {
  path?: unknown;
  maxBytes?: unknown;
  includeDotfiles?: unknown;
  followSymlinks?: unknown;
  preflightOnly?: unknown;
};

export type DirFetchOk = {
  ok: true;
  path: string;
  tarBase64: string;
  tarBytes: number;
  sha256: string;
  fileCount: number;
  entries?: string[];
  preflightOnly?: boolean;
};

export type DirFetchErrCode =
  | "INVALID_PATH"
  | "NOT_FOUND"
  | "IS_FILE"
  | "TREE_TOO_LARGE"
  | "SYMLINK_REDIRECT"
  | "READ_ERROR";

export type DirFetchErr = {
  ok: false;
  code: DirFetchErrCode;
  message: string;
  canonicalPath?: string;
};

export type DirFetchResult = DirFetchOk | DirFetchErr;

function clampMaxBytes(input: unknown): number {
  if (typeof input !== "number" || !Number.isFinite(input) || input <= 0) {
    return DIR_FETCH_DEFAULT_MAX_BYTES;
  }
  return Math.min(Math.floor(input), DIR_FETCH_HARD_MAX_BYTES);
}

function classifyFsError(err: unknown): DirFetchErrCode {
  const code = (err as { code?: string } | null)?.code;
  if (code === "ENOENT") {
    return "NOT_FOUND";
  }
  return "READ_ERROR";
}

async function preflightDu(dirPath: string, maxBytes: number): Promise<boolean> {
  // du -sk reports size in 1024-byte blocks (-k forces that on macOS too,
  // which otherwise defaults to 512-byte blocks).
  // We use maxBytes * 4 as the rough heuristic ceiling (generous, gzip compresses).
  const heuristicKb = Math.ceil((maxBytes * 4) / 1024);
  return new Promise((resolve) => {
    const du = spawn("du", ["-sk", dirPath], { stdio: ["ignore", "pipe", "ignore"] });
    let output = "";
    du.stdout.on("data", (chunk: Buffer) => {
      output += chunk.toString();
    });
    du.on("close", (code) => {
      if (code !== 0) {
        // du failed; be permissive and let tar catch the overflow
        resolve(true);
        return;
      }
      const match = /^(\d+)/.exec(output.trim());
      if (!match) {
        resolve(true);
        return;
      }
      const sizeKb = Number.parseInt(match[1], 10);
      resolve(sizeKb <= heuristicKb);
    });
    du.on("error", () => {
      // du not available; skip preflight
      resolve(true);
    });
  });
}

async function listTarEntries(tarBuffer: Buffer): Promise<string[]> {
  // Async spawn so a slow `tar -tzf` doesn't park the node-host event
  // loop for up to 10s. Other in-flight requests continue to be served.
  return new Promise<string[]>((resolve) => {
    const child = spawn("tar", ["-tzf", "-"], { stdio: ["pipe", "pipe", "ignore"] });
    let stdoutBuf = "";
    let aborted = false;
    const watchdog = setTimeout(() => {
      aborted = true;
      try {
        child.kill("SIGKILL");
      } catch {
        /* gone */
      }
      resolve([]);
    }, 10_000);
    child.stdout.on("data", (chunk: Buffer) => {
      stdoutBuf += chunk.toString();
      // Bound buffer growth — pathological archives shouldn't OOM us.
      if (stdoutBuf.length > 32 * 1024 * 1024) {
        aborted = true;
        try {
          child.kill("SIGKILL");
        } catch {
          /* gone */
        }
        clearTimeout(watchdog);
        resolve([]);
      }
    });
    child.on("close", (code) => {
      clearTimeout(watchdog);
      if (aborted) {
        return;
      }
      if (code !== 0) {
        resolve([]);
        return;
      }
      const lines = stdoutBuf
        .split("\n")
        .map((line) => line.replace(/\\/gu, "/").replace(/^\.\//u, "").replace(/\/$/u, ""))
        .filter((line) => line.length > 0);
      resolve(lines);
    });
    child.on("error", () => {
      clearTimeout(watchdog);
      if (!aborted) {
        resolve([]);
      }
    });
    child.stdin.end(tarBuffer);
  });
}

async function listTreeEntries(root: string, maxEntries: number): Promise<string[] | "TOO_MANY"> {
  const results: string[] = [];
  async function visit(dir: string): Promise<boolean> {
    const entries = await fs.readdir(dir, { withFileTypes: true });
    entries.sort((left, right) => left.name.localeCompare(right.name));
    for (const entry of entries) {
      const abs = path.join(dir, entry.name);
      const rel = path.relative(root, abs).replace(/\\/gu, "/");
      results.push(rel);
      if (results.length > maxEntries) {
        return false;
      }
      if (entry.isDirectory()) {
        const ok = await visit(abs);
        if (!ok) {
          return false;
        }
      }
    }
    return true;
  }
  return (await visit(root)) ? results : "TOO_MANY";
}

export async function handleDirFetch(params: DirFetchParams): Promise<DirFetchResult> {
  const requestedPath = params.path;
  if (typeof requestedPath !== "string" || requestedPath.length === 0) {
    return { ok: false, code: "INVALID_PATH", message: "path required" };
  }
  if (requestedPath.includes("\0")) {
    return { ok: false, code: "INVALID_PATH", message: "path contains NUL byte" };
  }
  if (!path.isAbsolute(requestedPath)) {
    return { ok: false, code: "INVALID_PATH", message: "path must be absolute" };
  }

  const maxBytes = clampMaxBytes(params.maxBytes);
  const includeDotfiles = params.includeDotfiles === true;
  const followSymlinks = params.followSymlinks === true;
  const preflightOnly = params.preflightOnly === true;

  let canonical: string;
  try {
    canonical = await fs.realpath(requestedPath);
  } catch (err) {
    const code = classifyFsError(err);
    return {
      ok: false,
      code,
      message: code === "NOT_FOUND" ? "directory not found" : `realpath failed: ${String(err)}`,
    };
  }

  if (!followSymlinks && canonical !== requestedPath) {
    return {
      ok: false,
      code: "SYMLINK_REDIRECT",
      message: `path traverses a symlink; refusing because followSymlinks=false (set plugins.entries.file-transfer.config.nodes.<node>.followSymlinks=true to allow, or update allowReadPaths to the canonical path)`,
      canonicalPath: canonical,
    };
  }

  let stats: Awaited<ReturnType<typeof fs.stat>>;
  try {
    stats = await fs.stat(canonical);
  } catch (err) {
    const code = classifyFsError(err);
    return { ok: false, code, message: `stat failed: ${String(err)}`, canonicalPath: canonical };
  }

  if (!stats.isDirectory()) {
    return {
      ok: false,
      code: "IS_FILE",
      message: "path is not a directory",
      canonicalPath: canonical,
    };
  }

  if (preflightOnly) {
    try {
      const entries = await listTreeEntries(canonical, 5000);
      if (entries === "TOO_MANY") {
        return {
          ok: false,
          code: "TREE_TOO_LARGE",
          message: "directory tree exceeds 5000 entries during preflight",
          canonicalPath: canonical,
        };
      }
      return {
        ok: true,
        path: canonical,
        tarBase64: "",
        tarBytes: 0,
        sha256: "",
        fileCount: entries.length,
        entries,
        preflightOnly: true,
      };
    } catch (err) {
      const code = classifyFsError(err);
      return {
        ok: false,
        code,
        message: `preflight readdir failed: ${String(err)}`,
        canonicalPath: canonical,
      };
    }
  }

  // Preflight size check using du
  const withinBudget = await preflightDu(canonical, maxBytes);
  if (!withinBudget) {
    return {
      ok: false,
      code: "TREE_TOO_LARGE",
      message: `directory tree exceeds estimated size limit (${maxBytes} bytes raw)`,
      canonicalPath: canonical,
    };
  }

  // Build tar args. Shell out to /usr/bin/tar for portability.
  // -cz: create + gzip
  // -C <dir>: change to directory so paths in archive are relative
  // .: include everything from that directory
  // v1: includeDotfiles is accepted in the API but not enforced. BSD tar's
  // --exclude pattern matching is unreliable for dotfiles (every plausible
  // pattern except "*/.*" collapses the archive on macOS). Reliable filtering
  // requires a `find ! -name '.*' | tar -T -` pipeline; deferred to v2.
  // For now we always archive everything in the directory.
  void includeDotfiles;
  const tarArgs: string[] = ["-czf", "-", "-C", canonical, "."];

  // Capture tar output with a hard byte cap and a wall-clock timeout.
  // SIGTERM if the byte cap is exceeded; SIGKILL if the timeout fires
  // (covers tar hanging on a slow filesystem or symlink loop).
  const TAR_HARD_TIMEOUT_MS = 60_000;
  const tarBuffer = await new Promise<Buffer | "TOO_LARGE" | "TIMEOUT" | "ERROR">((resolve) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(tarBin, tarArgs, {
      stdio: ["ignore", "pipe", "pipe"],
    });

    const chunks: Buffer[] = [];
    let totalBytes = 0;
    let aborted = false;

    const watchdog = setTimeout(() => {
      if (aborted) {
        return;
      }
      aborted = true;
      try {
        child.kill("SIGKILL");
      } catch {
        /* already gone */
      }
      resolve("TIMEOUT");
    }, TAR_HARD_TIMEOUT_MS);

    child.stdout.on("data", (chunk: Buffer) => {
      if (aborted) {
        return;
      }
      totalBytes += chunk.byteLength;
      if (totalBytes > maxBytes) {
        aborted = true;
        clearTimeout(watchdog);
        child.kill("SIGTERM");
        resolve("TOO_LARGE");
        return;
      }
      chunks.push(chunk);
    });

    child.on("close", (code) => {
      clearTimeout(watchdog);
      if (aborted) {
        return;
      }
      if (code !== 0) {
        resolve("ERROR");
        return;
      }
      resolve(Buffer.concat(chunks));
    });

    child.on("error", () => {
      clearTimeout(watchdog);
      if (!aborted) {
        resolve("ERROR");
      }
    });
  });

  if (tarBuffer === "TOO_LARGE") {
    return {
      ok: false,
      code: "TREE_TOO_LARGE",
      message: `tarball exceeded ${maxBytes} byte limit mid-stream`,
      canonicalPath: canonical,
    };
  }
  if (tarBuffer === "TIMEOUT") {
    return {
      ok: false,
      code: "READ_ERROR",
      message: "tar command exceeded 60s wall-clock timeout (slow filesystem or symlink loop?)",
      canonicalPath: canonical,
    };
  }
  if (tarBuffer === "ERROR") {
    return {
      ok: false,
      code: "READ_ERROR",
      message: "tar command failed",
      canonicalPath: canonical,
    };
  }

  const sha256 = crypto.createHash("sha256").update(tarBuffer).digest("hex");
  const tarBase64 = tarBuffer.toString("base64");
  const tarBytes = tarBuffer.byteLength;
  const entries = await listTarEntries(tarBuffer);

  return {
    ok: true,
    path: canonical,
    tarBase64,
    tarBytes,
    sha256,
    fileCount: entries.length,
    entries,
  };
}
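The tarBase64/tarBytes/sha256 triple returned above lets the receiving side verify the payload before unpacking. A minimal gateway-side sketch (`verifyTarPayload` is a hypothetical helper, not part of the plugin; its argument mirrors the `DirFetchOk` shape):

```typescript
import crypto from "node:crypto";

// Hypothetical helper: re-derive byte count and sha256 from the base64
// payload and compare them against what the node-host reported.
function verifyTarPayload(r: { tarBase64: string; tarBytes: number; sha256: string }): Buffer {
  const buf = Buffer.from(r.tarBase64, "base64");
  if (buf.byteLength !== r.tarBytes) {
    throw new Error(`byte count mismatch: got ${buf.byteLength}, expected ${r.tarBytes}`);
  }
  const sha = crypto.createHash("sha256").update(buf).digest("hex");
  if (sha !== r.sha256) {
    throw new Error("sha256 mismatch: payload corrupted in transit");
  }
  return buf;
}
```

Only after both checks pass would the gateway hand the buffer to `tar -xzf` for unpacking.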
143 extensions/file-transfer/src/node-host/dir-list.test.ts Normal file
@@ -0,0 +1,143 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import {
  DIR_LIST_DEFAULT_MAX_ENTRIES,
  DIR_LIST_HARD_MAX_ENTRIES,
  handleDirList,
} from "./dir-list.js";

let tmpRoot: string;

beforeEach(async () => {
  // realpath: see file-fetch.test.ts for the macOS symlinked-tmpdir reason.
  tmpRoot = await fs.realpath(await fs.mkdtemp(path.join(os.tmpdir(), "dir-list-test-")));
});

afterEach(async () => {
  await fs.rm(tmpRoot, { recursive: true, force: true });
});

describe("handleDirList — input validation", () => {
  it("rejects empty / non-string path", async () => {
    expect(await handleDirList({ path: "" })).toMatchObject({ ok: false, code: "INVALID_PATH" });
    expect(await handleDirList({ path: undefined })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });

  it("rejects relative paths", async () => {
    expect(await handleDirList({ path: "relative" })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });

  it("rejects paths with NUL bytes", async () => {
    expect(await handleDirList({ path: "/tmp/foo\0bar" })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });
});

describe("handleDirList — fs errors", () => {
  it("returns NOT_FOUND for a missing directory", async () => {
    expect(await handleDirList({ path: path.join(tmpRoot, "does-not-exist") })).toMatchObject({
      ok: false,
      code: "NOT_FOUND",
    });
  });

  it("returns IS_FILE when path resolves to a regular file", async () => {
    const f = path.join(tmpRoot, "f.txt");
    await fs.writeFile(f, "x");
    expect(await handleDirList({ path: f })).toMatchObject({ ok: false, code: "IS_FILE" });
  });
});

describe("handleDirList — happy path", () => {
  it("lists files and subdirs with metadata, sorted by name", async () => {
    await fs.writeFile(path.join(tmpRoot, "z.txt"), "Z");
    await fs.writeFile(path.join(tmpRoot, "a.png"), "PNG-bytes");
    await fs.mkdir(path.join(tmpRoot, "subdir"));

    const r = await handleDirList({ path: tmpRoot });
    if (!r.ok) {
      throw new Error("expected ok");
    }
    expect(r.entries.map((e) => e.name)).toEqual(["a.png", "subdir", "z.txt"]);

    const a = r.entries.find((e) => e.name === "a.png")!;
    expect(a.isDir).toBe(false);
    expect(a.size).toBeGreaterThan(0);
    expect(a.mimeType).toBe("image/png");

    const sub = r.entries.find((e) => e.name === "subdir")!;
    expect(sub.isDir).toBe(true);
    expect(sub.size).toBe(0);
    expect(sub.mimeType).toBe("inode/directory");

    expect(r.truncated).toBe(false);
    expect(r.nextPageToken).toBeUndefined();
  });

  it("includes dotfiles in the listing", async () => {
    await fs.writeFile(path.join(tmpRoot, ".hidden"), "x");
    await fs.writeFile(path.join(tmpRoot, "visible"), "x");

    const r = await handleDirList({ path: tmpRoot });
    if (!r.ok) {
      throw new Error("expected ok");
    }
    expect(r.entries.map((e) => e.name)).toEqual([".hidden", "visible"]);
  });

  it("paginates via pageToken (offset-based)", async () => {
    for (let i = 0; i < 7; i++) {
      // single-digit suffixes, so the localeCompare sort matches creation order
      await fs.writeFile(path.join(tmpRoot, `f-${i}.txt`), "x");
    }

    const page1 = await handleDirList({ path: tmpRoot, maxEntries: 3 });
    if (!page1.ok) {
      throw new Error("page1");
    }
    expect(page1.entries.map((e) => e.name)).toEqual(["f-0.txt", "f-1.txt", "f-2.txt"]);
    expect(page1.truncated).toBe(true);
    expect(page1.nextPageToken).toBe("3");

    const page2 = await handleDirList({
      path: tmpRoot,
      maxEntries: 3,
      pageToken: page1.nextPageToken,
    });
    if (!page2.ok) {
      throw new Error("page2");
    }
    expect(page2.entries.map((e) => e.name)).toEqual(["f-3.txt", "f-4.txt", "f-5.txt"]);
    expect(page2.truncated).toBe(true);

    const page3 = await handleDirList({
      path: tmpRoot,
      maxEntries: 3,
      pageToken: page2.nextPageToken,
    });
    if (!page3.ok) {
      throw new Error("page3");
    }
    expect(page3.entries.map((e) => e.name)).toEqual(["f-6.txt"]);
    expect(page3.truncated).toBe(false);
    expect(page3.nextPageToken).toBeUndefined();
  });
});

describe("handleDirList — limits", () => {
  it("clamps maxEntries to the hard ceiling and uses the default for invalid values", () => {
    expect(DIR_LIST_DEFAULT_MAX_ENTRIES).toBe(200);
    expect(DIR_LIST_HARD_MAX_ENTRIES).toBe(5000);
    expect(DIR_LIST_DEFAULT_MAX_ENTRIES).toBeLessThan(DIR_LIST_HARD_MAX_ENTRIES);
  });
});
179 extensions/file-transfer/src/node-host/dir-list.ts Normal file
@@ -0,0 +1,179 @@
import fs from "node:fs/promises";
import path from "node:path";
import { mimeFromExtension } from "../shared/mime.js";

export const DIR_LIST_DEFAULT_MAX_ENTRIES = 200;
export const DIR_LIST_HARD_MAX_ENTRIES = 5000;

export type DirListParams = {
  path?: unknown;
  pageToken?: unknown;
  maxEntries?: unknown;
  followSymlinks?: unknown;
};

export type DirListEntry = {
  name: string;
  path: string;
  size: number;
  mimeType: string;
  isDir: boolean;
  mtime: number;
};

export type DirListOk = {
  ok: true;
  path: string;
  entries: DirListEntry[];
  nextPageToken?: string;
  truncated: boolean;
};

export type DirListErrCode =
  | "INVALID_PATH"
  | "NOT_FOUND"
  | "PERMISSION_DENIED"
  | "IS_FILE"
  | "SYMLINK_REDIRECT"
  | "READ_ERROR";

export type DirListErr = {
  ok: false;
  code: DirListErrCode;
  message: string;
  canonicalPath?: string;
};

export type DirListResult = DirListOk | DirListErr;

function clampMaxEntries(input: unknown): number {
  if (typeof input !== "number" || !Number.isFinite(input) || input <= 0) {
    return DIR_LIST_DEFAULT_MAX_ENTRIES;
  }
  return Math.min(Math.floor(input), DIR_LIST_HARD_MAX_ENTRIES);
}

function classifyFsError(err: unknown): DirListErrCode {
  const code = (err as { code?: string } | null)?.code;
  if (code === "ENOENT") {
    return "NOT_FOUND";
  }
  if (code === "EACCES" || code === "EPERM") {
    return "PERMISSION_DENIED";
  }
  return "READ_ERROR";
}

export async function handleDirList(params: DirListParams): Promise<DirListResult> {
  const requestedPath = params.path;
  if (typeof requestedPath !== "string" || requestedPath.length === 0) {
    return { ok: false, code: "INVALID_PATH", message: "path required" };
  }
  if (requestedPath.includes("\0")) {
    return { ok: false, code: "INVALID_PATH", message: "path contains NUL byte" };
  }
  if (!path.isAbsolute(requestedPath)) {
    return { ok: false, code: "INVALID_PATH", message: "path must be absolute" };
  }

  const maxEntries = clampMaxEntries(params.maxEntries);
  const offset =
    typeof params.pageToken === "string" && params.pageToken.length > 0
      ? Math.max(0, Number.parseInt(params.pageToken, 10) || 0)
      : 0;

  const followSymlinks = params.followSymlinks === true;

  let canonical: string;
  try {
    canonical = await fs.realpath(requestedPath);
  } catch (err) {
    const code = classifyFsError(err);
    return {
      ok: false,
      code,
      message: code === "NOT_FOUND" ? "path not found" : `realpath failed: ${String(err)}`,
    };
  }

  if (!followSymlinks && canonical !== requestedPath) {
    return {
      ok: false,
      code: "SYMLINK_REDIRECT",
      message: `path traverses a symlink; refusing because followSymlinks=false (set plugins.entries.file-transfer.config.nodes.<node>.followSymlinks=true to allow, or update allowReadPaths to the canonical path)`,
      canonicalPath: canonical,
    };
  }

  let stats: Awaited<ReturnType<typeof fs.stat>>;
  try {
    stats = await fs.stat(canonical);
  } catch (err) {
    const code = classifyFsError(err);
    return { ok: false, code, message: `stat failed: ${String(err)}`, canonicalPath: canonical };
  }

  if (!stats.isDirectory()) {
    return {
      ok: false,
      code: "IS_FILE",
      message: "path is not a directory",
      canonicalPath: canonical,
    };
  }

  let names: string[];
  try {
    names = await fs.readdir(canonical, { encoding: "utf8" });
  } catch (err) {
    const code = classifyFsError(err);
    return {
      ok: false,
      code,
      message: `readdir failed: ${String(err)}`,
      canonicalPath: canonical,
    };
  }

  // Sort by name for stable pagination
  names.sort((a, b) => a.localeCompare(b));

  const total = names.length;
  const page = names.slice(offset, offset + maxEntries);
  const truncated = offset + maxEntries < total;
  const nextPageToken = truncated ? String(offset + maxEntries) : undefined;

  const entries: DirListEntry[] = [];
  for (const name of page) {
    const entryPath = path.join(canonical, name);

    let isDir = false;
    let size = 0;
    let mtime = 0;
    try {
      const s = await fs.stat(entryPath);
      isDir = s.isDirectory();
      size = isDir ? 0 : s.size;
      mtime = s.mtimeMs;
    } catch {
      // stat may fail for broken symlinks; keep zeros and treat as file
    }

    entries.push({
      name,
      path: entryPath,
      size,
      mimeType: isDir ? "inode/directory" : mimeFromExtension(name),
      isDir,
      mtime,
    });
  }

  return {
    ok: true,
    path: canonical,
    entries,
    nextPageToken,
    truncated,
  };
}
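The offset-token scheme used by dir-list reduces to a few lines of slice arithmetic: the page token is simply the decimal offset of the next page into the name-sorted listing. A standalone sketch of the same scheme (generic over item type; `paginate` is an illustrative helper, not part of the plugin):

```typescript
// Offset-based page tokens: slice [offset, offset + maxEntries), report
// truncation, and hand back the next offset as an opaque string token.
function paginate<T>(
  sorted: T[],
  maxEntries: number,
  pageToken?: string,
): { page: T[]; truncated: boolean; nextPageToken?: string } {
  const offset = pageToken ? Math.max(0, Number.parseInt(pageToken, 10) || 0) : 0;
  const page = sorted.slice(offset, offset + maxEntries);
  const truncated = offset + maxEntries < sorted.length;
  return {
    page,
    truncated,
    nextPageToken: truncated ? String(offset + maxEntries) : undefined,
  };
}
```

One caveat of offset tokens, visible in the implementation above too: entries created or deleted between pages can shift the listing under the token, so a page boundary may skip or repeat a name.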
203 extensions/file-transfer/src/node-host/file-fetch.test.ts Normal file
@@ -0,0 +1,203 @@
import crypto from "node:crypto";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import {
  FILE_FETCH_DEFAULT_MAX_BYTES,
  FILE_FETCH_HARD_MAX_BYTES,
  handleFileFetch,
} from "./file-fetch.js";

let tmpRoot: string;

beforeEach(async () => {
  // realpath the mkdtemp result — on macOS /tmp/foo and /var/folders/... are
  // symlinks to /private/{tmp,var/folders}, and the new SYMLINK_REDIRECT
  // default would otherwise refuse every test path. Tests want to exercise
  // the happy path with canonical paths; symlink-specific assertions create
  // explicit symlinks inside tmpRoot.
  tmpRoot = await fs.realpath(await fs.mkdtemp(path.join(os.tmpdir(), "file-fetch-test-")));
});

afterEach(async () => {
  vi.restoreAllMocks();
  await fs.rm(tmpRoot, { recursive: true, force: true });
});

describe("handleFileFetch — input validation", () => {
  it("returns INVALID_PATH for empty / non-string path", async () => {
    expect(await handleFileFetch({ path: "" })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
    expect(await handleFileFetch({ path: undefined })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
    expect(await handleFileFetch({ path: 42 as unknown })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });

  it("rejects relative paths", async () => {
    const r = await handleFileFetch({ path: "relative/file.txt" });
    expect(r).toMatchObject({ ok: false, code: "INVALID_PATH" });
    expect(r.ok ? "" : r.message).toMatch(/absolute/);
  });

  it("rejects paths with NUL bytes", async () => {
    const r = await handleFileFetch({ path: "/tmp/foo\0bar" });
    expect(r).toMatchObject({ ok: false, code: "INVALID_PATH" });
    expect(r.ok ? "" : r.message).toMatch(/NUL/);
  });
});

describe("handleFileFetch — fs errors", () => {
  it("returns NOT_FOUND for a missing file", async () => {
    const target = path.join(tmpRoot, "missing.txt");
    expect(await handleFileFetch({ path: target })).toMatchObject({
      ok: false,
      code: "NOT_FOUND",
    });
  });

  it("returns IS_DIRECTORY when the path resolves to a directory", async () => {
    const r = await handleFileFetch({ path: tmpRoot });
    expect(r).toMatchObject({ ok: false, code: "IS_DIRECTORY" });
    // canonical path is reported back so the caller can re-check policy
    expect(r.ok ? null : r.canonicalPath).toBeTruthy();
  });
});

describe("handleFileFetch — zero-byte round-trip", () => {
  it("fetches an empty file with size=0 and base64=''", async () => {
    const target = path.join(tmpRoot, "empty.bin");
    await fs.writeFile(target, "");

    const r = await handleFileFetch({ path: target });
    if (!r.ok) {
      throw new Error(`expected ok, got ${r.code}: ${r.message}`);
    }
    expect(r.size).toBe(0);
    expect(r.base64).toBe("");
    // SHA-256 of empty input.
    expect(r.sha256).toBe("e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855");
  });
});
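The zero-byte test above pins the well-known SHA-256 digest of empty input; it can be re-derived in two lines of plain Node (no vitest):

```typescript
import crypto from "node:crypto";

// SHA-256 over zero bytes is a fixed, well-known constant.
const emptySha = crypto.createHash("sha256").update(Buffer.alloc(0)).digest("hex");
console.log(emptySha);
// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```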

describe("handleFileFetch — happy path", () => {
  it("reads a small file and returns size + sha256 + base64", async () => {
    const target = path.join(tmpRoot, "hello.txt");
    const contents = "hello world\n";
    await fs.writeFile(target, contents);

    const r = await handleFileFetch({ path: target });
    if (!r.ok) {
      throw new Error(`expected ok, got ${r.code}: ${r.message}`);
    }

    expect(r.size).toBe(contents.length);
    expect(Buffer.from(r.base64, "base64").toString("utf-8")).toBe(contents);
    const expectedSha = crypto.createHash("sha256").update(contents).digest("hex");
    expect(r.sha256).toBe(expectedSha);
    // canonicalized path may differ from input on macOS (/tmp -> /private/tmp)
    expect(path.basename(r.path)).toBe("hello.txt");
  });

  it("preflights canonical path and size without reading bytes", async () => {
    const target = path.join(tmpRoot, "hello.txt");
    await fs.writeFile(target, "hello world\n");
    const readFileSpy = vi.spyOn(fs, "readFile");

    const r = await handleFileFetch({ path: target, preflightOnly: true });

    expect(r).toMatchObject({
      ok: true,
      path: target,
      size: 12,
      base64: "",
      sha256: "",
      preflightOnly: true,
    });
    expect(readFileSpy).not.toHaveBeenCalled();
  });

  it("returns a sensible mime type for known extensions", async () => {
    const target = path.join(tmpRoot, "readme.md");
    await fs.writeFile(target, "# heading\n");

    const r = await handleFileFetch({ path: target });
    if (!r.ok) {
      throw new Error("expected ok");
    }
    // libmagic ("file" cli) typically reports text/plain or text/markdown for
    // a one-line markdown file; the extension fallback yields text/markdown.
    // Accept either.
    expect(r.mimeType).toMatch(/^text\/(plain|markdown)$/);
  });
});

describe("handleFileFetch — size enforcement", () => {
  it("returns FILE_TOO_LARGE when stat size exceeds the cap", async () => {
    const target = path.join(tmpRoot, "big.bin");
    const data = Buffer.alloc(2048, 0xab);
    await fs.writeFile(target, data);

    const r = await handleFileFetch({ path: target, maxBytes: 1024 });
    expect(r).toMatchObject({ ok: false, code: "FILE_TOO_LARGE" });
  });

  it("clamps maxBytes to the hard ceiling", async () => {
    expect(FILE_FETCH_HARD_MAX_BYTES).toBe(16 * 1024 * 1024);
    expect(FILE_FETCH_DEFAULT_MAX_BYTES).toBeLessThanOrEqual(FILE_FETCH_HARD_MAX_BYTES);

    // A request asking for a maxBytes well above the hard ceiling should
    // still be honored for a small file (no error).
    const target = path.join(tmpRoot, "tiny.bin");
    await fs.writeFile(target, Buffer.from([0x01, 0x02, 0x03]));
    const r = await handleFileFetch({ path: target, maxBytes: Number.MAX_SAFE_INTEGER });
    expect(r.ok).toBe(true);
  });

  it("uses default cap when maxBytes is not finite or non-positive", async () => {
    const target = path.join(tmpRoot, "small.bin");
    await fs.writeFile(target, Buffer.from([0xff]));
    expect(await handleFileFetch({ path: target, maxBytes: -1 })).toMatchObject({ ok: true });
    expect(await handleFileFetch({ path: target, maxBytes: Number.NaN })).toMatchObject({
      ok: true,
    });
    expect(await handleFileFetch({ path: target, maxBytes: "8" as unknown })).toMatchObject({
|
||||
ok: true,
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe("handleFileFetch — symlink handling", () => {
|
||||
it("refuses to follow a symlink by default (SYMLINK_REDIRECT)", async () => {
|
||||
const real = path.join(tmpRoot, "real.txt");
|
||||
const link = path.join(tmpRoot, "link.txt");
|
||||
await fs.writeFile(real, "data");
|
||||
await fs.symlink(real, link);
|
||||
|
||||
const r = await handleFileFetch({ path: link });
|
||||
expect(r).toMatchObject({ ok: false, code: "SYMLINK_REDIRECT" });
|
||||
// Caller learns the canonical target so the operator can update the
|
||||
// allowlist or set followSymlinks=true.
|
||||
expect(r.ok ? null : r.canonicalPath).toBe(real);
|
||||
});
|
||||
|
||||
it("follows symlinks and returns the canonical path when followSymlinks=true", async () => {
|
||||
const real = path.join(tmpRoot, "real.txt");
|
||||
const link = path.join(tmpRoot, "link.txt");
|
||||
await fs.writeFile(real, "data");
|
||||
await fs.symlink(real, link);
|
||||
|
||||
const r = await handleFileFetch({ path: link, followSymlinks: true });
|
||||
if (!r.ok) {
|
||||
throw new Error(`expected ok, got ${r.code}`);
|
||||
}
|
||||
expect(path.basename(r.path)).toBe("real.txt");
|
||||
});
|
||||
});
|
||||
extensions/file-transfer/src/node-host/file-fetch.ts (new file, 203 lines)
@@ -0,0 +1,203 @@
import { spawnSync } from "node:child_process";
import crypto from "node:crypto";
import fs from "node:fs/promises";
import path from "node:path";
import { EXTENSION_MIME } from "../shared/mime.js";

export const FILE_FETCH_HARD_MAX_BYTES = 16 * 1024 * 1024;
export const FILE_FETCH_DEFAULT_MAX_BYTES = 8 * 1024 * 1024;

export type FileFetchParams = {
  path?: unknown;
  maxBytes?: unknown;
  followSymlinks?: unknown;
  preflightOnly?: unknown;
};

export type FileFetchOk = {
  ok: true;
  path: string;
  size: number;
  mimeType: string;
  base64: string;
  sha256: string;
  preflightOnly?: boolean;
};

export type FileFetchErrCode =
  | "INVALID_PATH"
  | "NOT_FOUND"
  | "PERMISSION_DENIED"
  | "IS_DIRECTORY"
  | "FILE_TOO_LARGE"
  | "PATH_TRAVERSAL"
  | "SYMLINK_REDIRECT"
  | "READ_ERROR";

export type FileFetchErr = {
  ok: false;
  code: FileFetchErrCode;
  message: string;
  canonicalPath?: string;
};

export type FileFetchResult = FileFetchOk | FileFetchErr;

function detectMimeType(filePath: string): string {
  if (process.platform !== "win32") {
    try {
      const result = spawnSync("file", ["-b", "--mime-type", filePath], {
        encoding: "utf-8",
        timeout: 2000,
      });
      const stdout = result.stdout?.trim();
      if (result.status === 0 && stdout) {
        return stdout;
      }
    } catch {
      // fall through to extension fallback
    }
  }
  const ext = path.extname(filePath).toLowerCase();
  return EXTENSION_MIME[ext] ?? "application/octet-stream";
}

function clampMaxBytes(input: unknown): number {
  if (typeof input !== "number" || !Number.isFinite(input) || input <= 0) {
    return FILE_FETCH_DEFAULT_MAX_BYTES;
  }
  return Math.min(Math.floor(input), FILE_FETCH_HARD_MAX_BYTES);
}

function classifyFsError(err: unknown): FileFetchErrCode {
  const code = (err as { code?: string } | null)?.code;
  if (code === "ENOENT") {
    return "NOT_FOUND";
  }
  if (code === "EACCES" || code === "EPERM") {
    return "PERMISSION_DENIED";
  }
  if (code === "EISDIR") {
    return "IS_DIRECTORY";
  }
  return "READ_ERROR";
}

export async function handleFileFetch(params: FileFetchParams): Promise<FileFetchResult> {
  const requestedPath = params.path;
  if (typeof requestedPath !== "string" || requestedPath.length === 0) {
    return { ok: false, code: "INVALID_PATH", message: "path required" };
  }
  if (requestedPath.includes("\0")) {
    return { ok: false, code: "INVALID_PATH", message: "path contains NUL byte" };
  }
  if (!path.isAbsolute(requestedPath)) {
    return { ok: false, code: "INVALID_PATH", message: "path must be absolute" };
  }

  const maxBytes = clampMaxBytes(params.maxBytes);
  const followSymlinks = params.followSymlinks === true;
  const preflightOnly = params.preflightOnly === true;

  let canonical: string;
  try {
    canonical = await fs.realpath(requestedPath);
  } catch (err) {
    const code = classifyFsError(err);
    return {
      ok: false,
      code,
      message: code === "NOT_FOUND" ? "file not found" : `realpath failed: ${String(err)}`,
    };
  }

  // Refuse to follow symlinks anywhere in the path unless the operator
  // has explicitly opted in. A symlink in user-controlled territory
  // (e.g. ~/Downloads/evil → /etc) could redirect an allowed-looking
  // request to a disallowed canonical target. The error includes the
  // canonical path so the operator can either update their allowlist
  // to the canonical form or set followSymlinks=true on this node.
  if (!followSymlinks && canonical !== requestedPath) {
    return {
      ok: false,
      code: "SYMLINK_REDIRECT",
      message: `path traverses a symlink; refusing because followSymlinks=false (set plugins.entries.file-transfer.config.nodes.<node>.followSymlinks=true to allow, or update allowReadPaths to the canonical path)`,
      canonicalPath: canonical,
    };
  }

  let stats: Awaited<ReturnType<typeof fs.stat>>;
  try {
    stats = await fs.stat(canonical);
  } catch (err) {
    const code = classifyFsError(err);
    return { ok: false, code, message: `stat failed: ${String(err)}`, canonicalPath: canonical };
  }

  if (stats.isDirectory()) {
    return {
      ok: false,
      code: "IS_DIRECTORY",
      message: "path is a directory",
      canonicalPath: canonical,
    };
  }
  if (!stats.isFile()) {
    return {
      ok: false,
      code: "READ_ERROR",
      message: "path is not a regular file",
      canonicalPath: canonical,
    };
  }
  if (stats.size > maxBytes) {
    return {
      ok: false,
      code: "FILE_TOO_LARGE",
      message: `file size ${stats.size} exceeds limit ${maxBytes}`,
      canonicalPath: canonical,
    };
  }

  if (preflightOnly) {
    return {
      ok: true,
      path: canonical,
      size: stats.size,
      mimeType: "",
      base64: "",
      sha256: "",
      preflightOnly: true,
    };
  }

  let buffer: Buffer;
  try {
    buffer = await fs.readFile(canonical);
  } catch (err) {
    const code = classifyFsError(err);
    return { ok: false, code, message: `read failed: ${String(err)}`, canonicalPath: canonical };
  }

  if (buffer.byteLength > maxBytes) {
    return {
      ok: false,
      code: "FILE_TOO_LARGE",
      message: `read ${buffer.byteLength} bytes exceeds limit ${maxBytes}`,
      canonicalPath: canonical,
    };
  }

  const sha256 = crypto.createHash("sha256").update(buffer).digest("hex");
  const base64 = buffer.toString("base64");
  const mimeType = detectMimeType(canonical);

  return {
    ok: true,
    path: canonical,
    size: buffer.byteLength,
    mimeType,
    base64,
    sha256,
  };
}
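The clamping rules exercised by the size-enforcement tests above can be sketched standalone. This is a self-contained mirror of the constants and `clampMaxBytes` logic from `file-fetch.ts`, included only to show the fallback-vs-ceiling behavior in isolation:

```typescript
const FILE_FETCH_HARD_MAX_BYTES = 16 * 1024 * 1024;
const FILE_FETCH_DEFAULT_MAX_BYTES = 8 * 1024 * 1024;

// Non-numbers, NaN, Infinity, and non-positive values fall back to the
// default; valid numbers are floored and capped at the hard ceiling.
function clampMaxBytes(input: unknown): number {
  if (typeof input !== "number" || !Number.isFinite(input) || input <= 0) {
    return FILE_FETCH_DEFAULT_MAX_BYTES;
  }
  return Math.min(Math.floor(input), FILE_FETCH_HARD_MAX_BYTES);
}

console.log(clampMaxBytes(Number.MAX_SAFE_INTEGER)); // 16777216 (hard ceiling)
console.log(clampMaxBytes(-1)); // 8388608 (default)
console.log(clampMaxBytes("8")); // 8388608 (strings are not coerced)
```

Note that a string like `"8"` is deliberately not coerced — anything that is not already a finite positive number gets the default cap, which is what the `maxBytes: "8" as unknown` test relies on.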
extensions/file-transfer/src/node-host/file-write.test.ts (new file, 357 lines)
@@ -0,0 +1,357 @@
import crypto from "node:crypto";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { handleFileWrite } from "./file-write.js";

let tmpRoot: string;

beforeEach(async () => {
  // realpath: see file-fetch.test.ts for the macOS symlinked-tmpdir reason.
  tmpRoot = await fs.realpath(await fs.mkdtemp(path.join(os.tmpdir(), "file-write-test-")));
});

afterEach(async () => {
  await fs.rm(tmpRoot, { recursive: true, force: true });
});

function b64(s: string): string {
  return Buffer.from(s, "utf-8").toString("base64");
}

describe("handleFileWrite — input validation", () => {
  it("rejects empty / non-string path", async () => {
    expect(await handleFileWrite({ path: "", contentBase64: b64("x") })).toMatchObject({
      ok: false,
      code: "INVALID_PATH",
    });
  });

  it("rejects relative paths", async () => {
    const r = await handleFileWrite({ path: "relative.txt", contentBase64: b64("x") });
    expect(r).toMatchObject({ ok: false, code: "INVALID_PATH" });
  });

  it("rejects paths with NUL bytes", async () => {
    const r = await handleFileWrite({ path: "/tmp/foo\0bar", contentBase64: b64("x") });
    expect(r).toMatchObject({ ok: false, code: "INVALID_PATH" });
  });

  it("requires contentBase64 but allows an empty encoded payload", async () => {
    const missing = await handleFileWrite({ path: path.join(tmpRoot, "missing.bin") });
    expect(missing).toMatchObject({ ok: false, code: "INVALID_BASE64" });

    const target = path.join(tmpRoot, "empty.bin");
    const empty = await handleFileWrite({ path: target, contentBase64: "" });
    expect(empty).toMatchObject({
      ok: true,
      size: 0,
      sha256: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    });
    expect(await fs.readFile(target)).toHaveLength(0);
  });
});

describe("handleFileWrite — happy path", () => {
  it("writes a new file and returns size + sha256 + overwritten=false", async () => {
    const target = path.join(tmpRoot, "out.txt");
    const contents = "hello write\n";
    const r = await handleFileWrite({ path: target, contentBase64: b64(contents) });
    if (!r.ok) {
      throw new Error(`expected ok, got ${r.code}: ${r.message}`);
    }
    expect(r.size).toBe(contents.length);
    expect(r.overwritten).toBe(false);
    const expectedSha = crypto.createHash("sha256").update(contents).digest("hex");
    expect(r.sha256).toBe(expectedSha);

    const onDisk = await fs.readFile(target, "utf-8");
    expect(onDisk).toBe(contents);
  });

  it("does not leave .tmp files behind on success", async () => {
    const target = path.join(tmpRoot, "atomic.txt");
    const r = await handleFileWrite({ path: target, contentBase64: b64("body") });
    expect(r.ok).toBe(true);

    const entries = await fs.readdir(tmpRoot);
    const tmpFiles = entries.filter((n) => n.includes(".tmp"));
    expect(tmpFiles).toEqual([]);
  });
});

describe("handleFileWrite — overwrite policy", () => {
  it("refuses to overwrite an existing file when overwrite=false", async () => {
    const target = path.join(tmpRoot, "exists.txt");
    await fs.writeFile(target, "before");

    const r = await handleFileWrite({
      path: target,
      contentBase64: b64("after"),
      overwrite: false,
    });
    expect(r).toMatchObject({ ok: false, code: "EXISTS_NO_OVERWRITE" });
    expect(await fs.readFile(target, "utf-8")).toBe("before");
  });

  it("overwrites and reports overwritten=true when overwrite=true", async () => {
    const target = path.join(tmpRoot, "exists.txt");
    await fs.writeFile(target, "before");

    const r = await handleFileWrite({
      path: target,
      contentBase64: b64("after"),
      overwrite: true,
    });
    if (!r.ok) {
      throw new Error("expected ok");
    }
    expect(r.overwritten).toBe(true);
    expect(await fs.readFile(target, "utf-8")).toBe("after");
  });
});

describe("handleFileWrite — parent directory handling", () => {
  it("returns PARENT_NOT_FOUND when parent is missing and createParents=false", async () => {
    const target = path.join(tmpRoot, "nested", "child.txt");
    const r = await handleFileWrite({
      path: target,
      contentBase64: b64("x"),
      createParents: false,
    });
    expect(r).toMatchObject({ ok: false, code: "PARENT_NOT_FOUND" });
  });

  it("creates missing parents when createParents=true", async () => {
    const target = path.join(tmpRoot, "deep", "nested", "child.txt");
    const r = await handleFileWrite({
      path: target,
      contentBase64: b64("x"),
      createParents: true,
    });
    expect(r.ok).toBe(true);
    expect(await fs.readFile(target, "utf-8")).toBe("x");
  });
});

describe("handleFileWrite — symlink protection", () => {
  it("refuses to write through an existing symlink (lstat)", async () => {
    const real = path.join(tmpRoot, "real.txt");
    const link = path.join(tmpRoot, "link.txt");
    await fs.writeFile(real, "untouched");
    await fs.symlink(real, link);

    const r = await handleFileWrite({
      path: link,
      contentBase64: b64("evil"),
      overwrite: true,
    });
    expect(r).toMatchObject({ ok: false, code: "SYMLINK_TARGET_DENIED" });
    // The original file must be unchanged.
    expect(await fs.readFile(real, "utf-8")).toBe("untouched");
  });

  it("refuses to write through a symlink in a parent directory by default", async () => {
    // realDir is the actual victim; sentinel is a pre-existing file in it.
    const realDir = path.join(tmpRoot, "real-dir");
    await fs.mkdir(realDir);
    const sentinel = path.join(realDir, "sentinel.txt");
    await fs.writeFile(sentinel, "DO_NOT_TOUCH");

    // /tmpRoot/allowed -> /tmpRoot/real-dir (symlink in a parent segment).
    const allowed = path.join(tmpRoot, "allowed");
    await fs.symlink(realDir, allowed);

    // Asking to write to .../allowed/new-file.txt — the lexical parent
    // (.../allowed) resolves through a symlink to .../real-dir. Refuse.
    const r = await handleFileWrite({
      path: path.join(allowed, "new-file.txt"),
      contentBase64: b64("payload"),
    });
    expect(r).toMatchObject({ ok: false, code: "SYMLINK_REDIRECT" });
    // The error includes the canonical target so the operator can
    // either update allowWritePaths or set followSymlinks=true.
    expect(r.ok ? null : r.canonicalPath).toBe(path.join(realDir, "new-file.txt"));
    // No file was created at the canonical target.
    await expect(fs.access(path.join(realDir, "new-file.txt"))).rejects.toMatchObject({
      code: "ENOENT",
    });
    // Sentinel must be untouched.
    expect(await fs.readFile(sentinel, "utf-8")).toBe("DO_NOT_TOUCH");
  });

  it("checks symlinked parents before recursive mkdir", async () => {
    const realDir = path.join(tmpRoot, "real-dir");
    await fs.mkdir(realDir);
    const allowed = path.join(tmpRoot, "allowed");
    await fs.symlink(realDir, allowed);

    const r = await handleFileWrite({
      path: path.join(allowed, "new", "child.txt"),
      contentBase64: b64("payload"),
      createParents: true,
    });

    expect(r).toMatchObject({ ok: false, code: "SYMLINK_REDIRECT" });
    expect(r.ok ? null : r.canonicalPath).toBe(path.join(realDir, "new", "child.txt"));
    await expect(fs.access(path.join(realDir, "new"))).rejects.toMatchObject({
      code: "ENOENT",
    });
  });

  it("follows the parent symlink when followSymlinks=true", async () => {
    const realDir = path.join(tmpRoot, "real-dir");
    await fs.mkdir(realDir);
    const allowed = path.join(tmpRoot, "allowed");
    await fs.symlink(realDir, allowed);

    const r = await handleFileWrite({
      path: path.join(allowed, "new-file.txt"),
      contentBase64: b64("payload"),
      followSymlinks: true,
    });
    expect(r.ok).toBe(true);
    // The file landed in the canonical (real) directory.
    expect(await fs.readFile(path.join(realDir, "new-file.txt"), "utf-8")).toBe("payload");
  });

  it("preflights canonical write targets without creating files or parents", async () => {
    const realDir = path.join(tmpRoot, "real-dir");
    await fs.mkdir(realDir);
    const allowed = path.join(tmpRoot, "allowed");
    await fs.symlink(realDir, allowed);

    const r = await handleFileWrite({
      path: path.join(allowed, "new", "child.txt"),
      contentBase64: b64("payload"),
      createParents: true,
      followSymlinks: true,
      preflightOnly: true,
    });

    expect(r).toMatchObject({
      ok: true,
      path: path.join(realDir, "new", "child.txt"),
      size: "payload".length,
    });
    await expect(fs.access(path.join(realDir, "new"))).rejects.toMatchObject({
      code: "ENOENT",
    });
  });

  it("refuses to overwrite a directory", async () => {
    const target = path.join(tmpRoot, "is-a-dir");
    await fs.mkdir(target);

    const r = await handleFileWrite({
      path: target,
      contentBase64: b64("x"),
      overwrite: true,
    });
    expect(r).toMatchObject({ ok: false, code: "IS_DIRECTORY" });
  });
});

describe("handleFileWrite — integrity check", () => {
  it("returns INTEGRITY_FAILURE before writing when expectedSha256 mismatches", async () => {
    const target = path.join(tmpRoot, "checked.txt");
    const r = await handleFileWrite({
      path: target,
      contentBase64: b64("real-content"),
      expectedSha256: "0".repeat(64),
    });
    expect(r).toMatchObject({ ok: false, code: "INTEGRITY_FAILURE" });
    // The file must never be created on a mismatch.
    await expect(fs.access(target)).rejects.toMatchObject({ code: "ENOENT" });
  });

  it("does NOT replace or delete an existing file when overwrite=true and expectedSha256 mismatches", async () => {
    const target = path.join(tmpRoot, "victim.txt");
    await fs.writeFile(target, "ORIGINAL_CONTENT_DO_NOT_TOUCH");

    const r = await handleFileWrite({
      path: target,
      contentBase64: b64("attacker-content"),
      overwrite: true,
      expectedSha256: "0".repeat(64),
    });
    expect(r).toMatchObject({ ok: false, code: "INTEGRITY_FAILURE" });
    // Critical: the original must survive. A bad caller hash must not
    // be a primitive for replacing-then-deleting an existing file.
    expect(await fs.readFile(target, "utf-8")).toBe("ORIGINAL_CONTENT_DO_NOT_TOUCH");
  });

  it("accepts a matching expectedSha256 and keeps the file", async () => {
    const target = path.join(tmpRoot, "checked.txt");
    const contents = "real-content";
    const sha = crypto.createHash("sha256").update(contents).digest("hex");

    const r = await handleFileWrite({
      path: target,
      contentBase64: b64(contents),
      expectedSha256: sha,
    });
    expect(r.ok).toBe(true);
    expect(await fs.readFile(target, "utf-8")).toBe(contents);
  });

  it("treats expectedSha256 as case-insensitive", async () => {
    const target = path.join(tmpRoot, "checked.txt");
    const contents = "abc";
    const sha = crypto.createHash("sha256").update(contents).digest("hex").toUpperCase();

    const r = await handleFileWrite({
      path: target,
      contentBase64: b64(contents),
      expectedSha256: sha,
    });
    expect(r.ok).toBe(true);
  });
});

describe("handleFileWrite — base64 round-trip validation", () => {
  it("rejects malformed base64 that silently drops characters", async () => {
    const target = path.join(tmpRoot, "bad.bin");
    // "@" is not in the base64 alphabet — Buffer.from would silently drop
    // it and decode "AAA" instead of failing.
    const r = await handleFileWrite({
      path: target,
      contentBase64: "AAA@@@",
    });
    expect(r).toMatchObject({ ok: false, code: "INVALID_BASE64" });
    await expect(fs.access(target)).rejects.toMatchObject({ code: "ENOENT" });
  });

  it("accepts standard base64 with and without padding", async () => {
    const target = path.join(tmpRoot, "padded.bin");
    // Buffer.from("hi") -> "aGk=" with padding, "aGk" without.
    const r1 = await handleFileWrite({ path: target, contentBase64: "aGk=" });
    expect(r1.ok).toBe(true);

    const target2 = path.join(tmpRoot, "unpadded.bin");
    const r2 = await handleFileWrite({ path: target2, contentBase64: "aGk" });
    expect(r2.ok).toBe(true);
  });

  it("accepts base64url variant (-_ instead of +/)", async () => {
    const target = path.join(tmpRoot, "url.bin");
    // Buffer.from([0xfb, 0xff]) -> "+/8=" standard, "-_8=" url
    const r = await handleFileWrite({ path: target, contentBase64: "-_8=" });
    expect(r.ok).toBe(true);
  });
});

describe("handleFileWrite — size cap", () => {
  it("rejects content larger than the 16MB cap", async () => {
    const target = path.join(tmpRoot, "big.bin");
    // 17MB of zero-bytes — base64 inflates by ~4/3 but we're checking the
    // decoded buffer length so this is fine.
    const big = Buffer.alloc(17 * 1024 * 1024, 0);
    const r = await handleFileWrite({
      path: target,
      contentBase64: big.toString("base64"),
    });
    expect(r).toMatchObject({ ok: false, code: "FILE_TOO_LARGE" });
  });
});
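The round-trip defense exercised by the base64 tests above can be sketched standalone. Node's `Buffer.from(s, "base64")` never throws on malformed input, so the implementation re-encodes the decoded buffer and compares it (modulo padding and base64url) against the caller's input. This is a self-contained illustration of that check; `decodeBase64Strict` is an illustrative name, not part of the plugin's API:

```typescript
import { Buffer } from "node:buffer";

// Normalize both sides so padded/unpadded and base64url inputs compare equal:
// strip trailing "=" and map "-_" (base64url) to "+/" (standard).
const normalize = (s: string): string =>
  s.replace(/=+$/u, "").replace(/-/gu, "+").replace(/_/gu, "/");

// Returns the decoded bytes, or null when Buffer.from silently dropped
// characters (i.e. the input was not actually valid base64).
function decodeBase64Strict(input: string): Buffer | null {
  const buf = Buffer.from(input, "base64");
  return normalize(buf.toString("base64")) === normalize(input) ? buf : null;
}

console.log(decodeBase64Strict("aGk=")?.toString("utf-8")); // "hi"
console.log(decodeBase64Strict("aGk")?.toString("utf-8")); // "hi" (unpadded accepted)
console.log(decodeBase64Strict("AAA@@@")); // null — "@" was silently dropped
```

Without the round-trip comparison, `"AAA@@@"` would quietly decode to the two bytes of `"AAA"` and land garbage on disk, which is exactly the failure mode the `INVALID_BASE64` tests pin down.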
extensions/file-transfer/src/node-host/file-write.ts (new file, 314 lines)
@@ -0,0 +1,314 @@
import crypto from "node:crypto";
import fs from "node:fs/promises";
import path from "node:path";

const MAX_CONTENT_BYTES = 16 * 1024 * 1024; // 16 MB

type FileWriteParams = {
  path: string;
  contentBase64: string;
  overwrite: boolean;
  createParents: boolean;
  expectedSha256?: string;
  followSymlinks?: boolean;
  preflightOnly?: boolean;
};

type FileWriteSuccess = {
  ok: true;
  path: string;
  size: number;
  sha256: string;
  overwritten: boolean;
};

type FileWriteError = {
  ok: false;
  code: string;
  message: string;
  canonicalPath?: string;
};

type FileWriteResult = FileWriteSuccess | FileWriteError;

function sha256Hex(buf: Buffer): string {
  return crypto.createHash("sha256").update(buf).digest("hex");
}

function err(code: string, message: string, canonicalPath?: string): FileWriteError {
  return { ok: false, code, message, ...(canonicalPath ? { canonicalPath } : {}) };
}

async function pathExists(p: string): Promise<boolean> {
  try {
    await fs.access(p);
    return true;
  } catch {
    return false;
  }
}

async function findExistingAncestor(p: string): Promise<string | null> {
  let current = p;
  while (true) {
    try {
      await fs.lstat(current);
      return current;
    } catch (error) {
      if ((error as NodeJS.ErrnoException).code !== "ENOENT") {
        throw error;
      }
    }
    const parent = path.dirname(current);
    if (parent === current) {
      return null;
    }
    current = parent;
  }
}

async function canonicalTargetFromExistingAncestor(targetPath: string): Promise<string> {
  const ancestor = await findExistingAncestor(targetPath);
  if (!ancestor) {
    return targetPath;
  }
  let canonicalAncestor: string;
  try {
    canonicalAncestor = await fs.realpath(ancestor);
  } catch {
    canonicalAncestor = ancestor;
  }
  const relative = path.relative(ancestor, targetPath);
  return relative ? path.join(canonicalAncestor, relative) : canonicalAncestor;
}

async function rejectParentSymlinkRedirect(
  targetPath: string,
  parentDir: string,
): Promise<FileWriteError | null> {
  const ancestor = await findExistingAncestor(parentDir);
  if (!ancestor) {
    return null;
  }
  let canonicalAncestor: string;
  try {
    canonicalAncestor = await fs.realpath(ancestor);
  } catch {
    return null;
  }
  if (canonicalAncestor === ancestor) {
    return null;
  }
  const canonicalTarget = path.join(canonicalAncestor, path.relative(ancestor, targetPath));
  return err(
    "SYMLINK_REDIRECT",
    `parent ${ancestor} resolves through a symlink to ${canonicalAncestor}; refusing because followSymlinks=false (set plugins.entries.file-transfer.config.nodes.<node>.followSymlinks=true to allow, or update allowWritePaths to the canonical path)`,
    canonicalTarget,
  );
}

export async function handleFileWrite(
  params: Partial<FileWriteParams> & Record<string, unknown>,
): Promise<FileWriteResult> {
  const rawPath = typeof params?.path === "string" ? params.path : "";
  const hasContentBase64 = typeof params?.contentBase64 === "string";
  const contentBase64 = hasContentBase64 ? (params.contentBase64 as string) : "";
  const overwrite = params?.overwrite === true;
  const createParents = params?.createParents === true;
  const expectedSha256 =
    typeof params?.expectedSha256 === "string" ? params.expectedSha256 : undefined;
  const followSymlinks = params?.followSymlinks === true;
  const preflightOnly = params?.preflightOnly === true;

  // 1. Validate path: must be absolute, non-empty, no NUL byte.
  if (!rawPath) {
    return err("INVALID_PATH", "path is required");
  }
  if (rawPath.includes("\0")) {
    return err("INVALID_PATH", "path must not contain NUL bytes");
  }
  if (!path.isAbsolute(rawPath)) {
    return err("INVALID_PATH", "path must be absolute");
  }
  if (!hasContentBase64) {
    return err("INVALID_BASE64", "contentBase64 is required");
  }

  // 2. Decode base64 → Buffer.
  // Buffer.from(s, "base64") in Node never throws — it silently drops
  // non-base64 characters and returns whatever it could decode. That
  // means a typo or truncated input would land garbage on disk if we
  // accepted whatever decoded. Defense: round-trip the decoded buffer
  // back to base64 and compare against the input modulo padding/url
  // variants. A mismatch means characters were silently dropped.
  const buf = Buffer.from(contentBase64, "base64");
  const reEncoded = buf.toString("base64");
  // Normalize: drop padding and convert base64url chars to standard so the
  // comparison tolerates both "=" / no-"=" inputs and "-_" base64url.
  const normalize = (s: string): string =>
    s.replace(/=+$/u, "").replace(/-/gu, "+").replace(/_/gu, "/");
  if (normalize(reEncoded) !== normalize(contentBase64)) {
    return err("INVALID_BASE64", "contentBase64 is not valid base64");
  }

  if (buf.length > MAX_CONTENT_BYTES) {
    return err(
      "FILE_TOO_LARGE",
      `decoded content is ${buf.length} bytes; maximum is ${MAX_CONTENT_BYTES} bytes (16 MB)`,
    );
  }

  // 3. Resolve the parent directory.
  const targetPath = path.normalize(rawPath);
  const parentDir = path.dirname(targetPath);

  const parentExists = await pathExists(parentDir);

  // Refuse symlink traversal in the existing parent chain before creating
  // missing directories. Recursive mkdir follows symlinked ancestors, so this
  // has to run before mkdir can mutate the canonical target.
  if (!followSymlinks) {
    const redirect = await rejectParentSymlinkRedirect(targetPath, parentDir);
    if (redirect) {
      return redirect;
    }
  }

  if (!parentExists) {
    if (!createParents) {
      return err("PARENT_NOT_FOUND", `parent directory does not exist: ${parentDir}`);
    }
    if (preflightOnly) {
      const computedSha256 = sha256Hex(buf);
      if (expectedSha256 && expectedSha256.toLowerCase() !== computedSha256) {
        return err(
          "INTEGRITY_FAILURE",
          `sha256 mismatch: expected ${expectedSha256.toLowerCase()}, got ${computedSha256}`,
          targetPath,
        );
      }
      return {
        ok: true,
        path: await canonicalTargetFromExistingAncestor(targetPath),
        size: buf.length,
        sha256: computedSha256,
        overwritten: false,
      };
    }
    try {
      await fs.mkdir(parentDir, { recursive: true });
    } catch (mkdirErr) {
      const message = mkdirErr instanceof Error ? mkdirErr.message : String(mkdirErr);
      return err("WRITE_ERROR", `failed to create parent directories: ${message}`);
    }
  }

  // Re-check after mkdir as a race-defense: if the parent chain changed
  // between the first check and directory creation, fail before writing bytes.
  if (!followSymlinks) {
    const redirect = await rejectParentSymlinkRedirect(targetPath, parentDir);
    if (redirect) {
      return redirect;
    }
  }

  let overwritten = false;
  try {
    const existingLStat = await fs.lstat(targetPath);
    if (existingLStat.isSymbolicLink()) {
      return err(
        "SYMLINK_TARGET_DENIED",
        `path is a symlink; refusing to write through it: ${targetPath}`,
      );
    }
    if (existingLStat.isDirectory()) {
      return err("IS_DIRECTORY", `path resolves to a directory: ${targetPath}`);
    }
    if (!overwrite) {
      return err(
        "EXISTS_NO_OVERWRITE",
        `file already exists and overwrite is false: ${targetPath}`,
      );
    }
    overwritten = true;
  } catch (statErr: unknown) {
    // ENOENT is fine — file does not exist yet.
    if ((statErr as NodeJS.ErrnoException).code !== "ENOENT") {
      const message = statErr instanceof Error ? statErr.message : String(statErr);
      if (message.toLowerCase().includes("permission")) {
        return err("PERMISSION_DENIED", `permission denied: ${targetPath}`);
      }
      return err("WRITE_ERROR", `unexpected stat error: ${message}`);
    }
  }

  // 5. Hash the decoded buffer BEFORE touching disk. If the caller
  // supplied expectedSha256 and it doesn't match, refuse outright so
  // a bad caller hash with overwrite=true can't replace + delete the
  // original. Computing from the buffer (not a re-read) is the right
  // source of truth — the caller asked us to write THESE bytes.
  const computedSha256 = sha256Hex(buf);
  if (expectedSha256 && expectedSha256.toLowerCase() !== computedSha256) {
    return err(
      "INTEGRITY_FAILURE",
|
||||
`sha256 mismatch: expected ${expectedSha256.toLowerCase()}, got ${computedSha256}`,
|
||||
targetPath,
|
||||
);
|
||||
}
|
||||
|
||||
if (preflightOnly) {
|
||||
return {
|
||||
ok: true,
|
||||
path: await canonicalTargetFromExistingAncestor(targetPath),
|
||||
size: buf.length,
|
||||
sha256: computedSha256,
|
||||
overwritten,
|
||||
};
|
||||
}
|
||||
|
||||
// 6. Atomic write: write to tmp, then rename
|
||||
const tmpSuffix = crypto.randomBytes(8).toString("hex");
|
||||
const tmpPath = `${targetPath}.${tmpSuffix}.tmp`;
|
||||
|
||||
try {
|
||||
await fs.writeFile(tmpPath, buf);
|
||||
} catch (writeErr) {
|
||||
const message = writeErr instanceof Error ? writeErr.message : String(writeErr);
|
||||
// Clean up tmp if possible
|
||||
await fs.unlink(tmpPath).catch(() => {});
|
||||
if (message.toLowerCase().includes("permission") || message.toLowerCase().includes("access")) {
|
||||
return err("PERMISSION_DENIED", `permission denied writing to: ${parentDir}`);
|
||||
}
|
||||
return err("WRITE_ERROR", `failed to write file: ${message}`);
|
||||
}
|
||||
|
||||
try {
|
||||
await fs.rename(tmpPath, targetPath);
|
||||
} catch (renameErr) {
|
||||
const message = renameErr instanceof Error ? renameErr.message : String(renameErr);
|
||||
await fs.unlink(tmpPath).catch(() => {});
|
||||
if (message.toLowerCase().includes("permission") || message.toLowerCase().includes("access")) {
|
||||
return err("PERMISSION_DENIED", `permission denied renaming to: ${targetPath}`);
|
||||
}
|
||||
return err("WRITE_ERROR", `failed to rename tmp to target: ${message}`);
|
||||
}
|
||||
|
||||
const writtenBuf = buf;
|
||||
|
||||
// 8. Re-realpath to resolve any symlinks in the final path
|
||||
let canonicalPath = targetPath;
|
||||
try {
|
||||
canonicalPath = await fs.realpath(targetPath);
|
||||
} catch {
|
||||
// Best effort; use normalized path as fallback
|
||||
canonicalPath = targetPath;
|
||||
}
|
||||
|
||||
return {
|
||||
ok: true,
|
||||
path: canonicalPath,
|
||||
size: writtenBuf.length,
|
||||
sha256: computedSha256,
|
||||
overwritten,
|
||||
};
|
||||
}
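The write path above is the classic temp-file + rename pattern: write all bytes to a uniquely named sibling file, then atomically rename it over the target so readers never observe a partial file. A minimal standalone sketch, assuming synchronous `node:fs` calls in place of the handler's `fs/promises` (the name `atomicWriteSync` is hypothetical, not part of the plugin):

```typescript
import { randomBytes, createHash } from "node:crypto";
import { writeFileSync, renameSync, unlinkSync, readFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Write buf to targetPath atomically and return the payload sha256.
export function atomicWriteSync(targetPath: string, buf: Buffer): string {
  // Unique sibling temp name: same directory, so rename stays on one filesystem.
  const tmpPath = `${targetPath}.${randomBytes(8).toString("hex")}.tmp`;
  try {
    writeFileSync(tmpPath, buf); // write all bytes to the temp file first
    renameSync(tmpPath, targetPath); // rename over the target is atomic on POSIX
  } catch (e) {
    // Best-effort temp cleanup on failure, mirroring the handler above.
    try {
      unlinkSync(tmpPath);
    } catch {}
    throw e;
  }
  // Hash the buffer we were asked to write, not a re-read of the file.
  return createHash("sha256").update(buf).digest("hex");
}
```

Because the temp file lives next to the target, the rename cannot cross a filesystem boundary, which is what keeps it a single atomic operation.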
93  extensions/file-transfer/src/shared/audit.ts  Normal file

@@ -0,0 +1,93 @@
// Append-only audit log for file-transfer operations.
//
// Records every decision (allow/deny/error) at the gateway-side tool
// layer. Lands at ~/.openclaw/audit/file-transfer.jsonl. Rotation is
// caller's responsibility — the file grows unbounded.
//
// Log records do NOT include file contents or hashes of secrets. They do
// include canonical paths and sha256 of the payload, so treat the audit
// file as sensitive.

import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";

export type FileTransferAuditOp = "file.fetch" | "dir.list" | "dir.fetch" | "file.write";

export type FileTransferAuditDecision =
  | "allowed"
  | "allowed:once"
  | "allowed:always"
  | "denied:no_policy"
  | "denied:policy"
  | "denied:approval"
  | "denied:command_not_allowed"
  | "denied:symlink_escape"
  | "error";

export type FileTransferAuditRecord = {
  timestamp: string;
  op: FileTransferAuditOp;
  nodeId: string;
  nodeDisplayName?: string;
  requestedPath: string;
  canonicalPath?: string;
  decision: FileTransferAuditDecision;
  errorCode?: string;
  errorMessage?: string;
  sizeBytes?: number;
  sha256?: string;
  durationMs?: number;
  // Tying back to the agent that initiated the op
  requesterAgentId?: string;
  sessionKey?: string;
  // Reason text for denials
  reason?: string;
};

let auditDirPromise: Promise<string> | null = null;

async function ensureAuditDir(): Promise<string> {
  if (auditDirPromise) {
    return auditDirPromise;
  }
  const promise = (async () => {
    const dir = path.join(os.homedir(), ".openclaw", "audit");
    await fs.mkdir(dir, { recursive: true, mode: 0o700 });
    return dir;
  })();
  // If the mkdir rejects (transient permission error etc.), clear the
  // cached singleton so the NEXT call retries instead of permanently
  // silencing the audit log.
  promise.catch(() => {
    if (auditDirPromise === promise) {
      auditDirPromise = null;
    }
  });
  auditDirPromise = promise;
  return promise;
}

function auditFilePath(dir: string): string {
  return path.join(dir, "file-transfer.jsonl");
}

/**
 * Append an audit record. Best-effort — failures are logged to stderr and
 * never propagated to the caller (the caller's operation is the source of
 * truth, not the audit write).
 */
export async function appendFileTransferAudit(
  record: Omit<FileTransferAuditRecord, "timestamp">,
): Promise<void> {
  try {
    const dir = await ensureAuditDir();
    const line = `${JSON.stringify({
      timestamp: new Date().toISOString(),
      ...record,
    })}\n`;
    await fs.appendFile(auditFilePath(dir), line, { mode: 0o600 });
  } catch (e) {
    process.stderr.write(`[file-transfer:audit] append failed: ${String(e)}\n`);
  }
}
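Each appended line is one self-delimiting JSON object, which is what makes the log greppable and stream-parseable. A pared-down sketch of the line format `appendFileTransferAudit` emits (the trimmed `AuditLineSketch` type and `formatAuditLine` name are illustrative, not part of the plugin; the full field set is `FileTransferAuditRecord` above):

```typescript
// Minimal subset of the audit record fields, for illustration only.
type AuditLineSketch = {
  timestamp: string;
  op: string;
  nodeId: string;
  requestedPath: string;
  decision: string;
};

// Produce one JSONL line: timestamp stamped at append time, record spread
// after it, trailing newline as the record delimiter — same shape as the
// appendFileTransferAudit implementation above.
export function formatAuditLine(record: Omit<AuditLineSketch, "timestamp">): string {
  return `${JSON.stringify({ timestamp: new Date().toISOString(), ...record })}\n`;
}
```

Consumers can then recover records with a plain line split and `JSON.parse` per line, with no framing state between lines.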
62  extensions/file-transfer/src/shared/errors.test.ts  Normal file

@@ -0,0 +1,62 @@
import { describe, expect, it } from "vitest";
import { classifyFsError, err, throwFromNodePayload } from "./errors.js";

describe("err", () => {
  it("returns an error envelope without canonicalPath when omitted", () => {
    const e = err("INVALID_PATH", "path required");
    expect(e).toEqual({ ok: false, code: "INVALID_PATH", message: "path required" });
    expect("canonicalPath" in e).toBe(false);
  });

  it("includes canonicalPath only when provided non-empty", () => {
    const withPath = err("NOT_FOUND", "missing", "/tmp/x");
    expect(withPath.canonicalPath).toBe("/tmp/x");

    const blankPath = err("NOT_FOUND", "missing", "");
    expect("canonicalPath" in blankPath).toBe(false);
  });
});

describe("classifyFsError", () => {
  it("maps ENOENT to NOT_FOUND", () => {
    expect(classifyFsError({ code: "ENOENT" })).toBe("NOT_FOUND");
  });

  it("maps EACCES and EPERM to PERMISSION_DENIED", () => {
    expect(classifyFsError({ code: "EACCES" })).toBe("PERMISSION_DENIED");
    expect(classifyFsError({ code: "EPERM" })).toBe("PERMISSION_DENIED");
  });

  it("maps EISDIR to IS_DIRECTORY", () => {
    expect(classifyFsError({ code: "EISDIR" })).toBe("IS_DIRECTORY");
  });

  it("falls back to READ_ERROR for unknown / null / non-object input", () => {
    expect(classifyFsError({ code: "EUNKNOWN" })).toBe("READ_ERROR");
    expect(classifyFsError(null)).toBe("READ_ERROR");
    expect(classifyFsError(undefined)).toBe("READ_ERROR");
    expect(classifyFsError("nope")).toBe("READ_ERROR");
  });
});

describe("throwFromNodePayload", () => {
  it("preserves code and message in the thrown Error", () => {
    expect(() =>
      throwFromNodePayload("file.fetch", { code: "NOT_FOUND", message: "file not found" }),
    ).toThrow(/file\.fetch NOT_FOUND: file not found/);
  });

  it("appends canonicalPath when present", () => {
    expect(() =>
      throwFromNodePayload("file.fetch", {
        code: "POLICY_DENIED",
        message: "blocked",
        canonicalPath: "/tmp/x",
      }),
    ).toThrow(/canonical=\/tmp\/x/);
  });

  it("falls back to ERROR / generic message when fields are missing", () => {
    expect(() => throwFromNodePayload("dir.list", {})).toThrow(/dir\.list ERROR: dir\.list failed/);
  });
});
68  extensions/file-transfer/src/shared/errors.ts  Normal file

@@ -0,0 +1,68 @@
// Shared error code surface across the four file-transfer tools/handlers.
// Every tool returns the same { ok: false, code, message, canonicalPath? }
// shape so the model can reason about errors uniformly.

export type FileTransferErrCode =
  // Path-shape errors (caller's fault)
  | "INVALID_PATH"
  | "INVALID_BASE64"
  | "INVALID_PARAMS"
  // Filesystem errors (file/dir layer)
  | "NOT_FOUND"
  | "PERMISSION_DENIED"
  | "IS_DIRECTORY"
  | "IS_FILE"
  | "PARENT_NOT_FOUND"
  | "EXISTS_NO_OVERWRITE"
  | "READ_ERROR"
  | "WRITE_ERROR"
  // Size/limit errors
  | "FILE_TOO_LARGE"
  | "TREE_TOO_LARGE"
  // Safety errors
  | "PATH_TRAVERSAL"
  | "SYMLINK_TARGET_DENIED"
  | "INTEGRITY_FAILURE"
  // Policy errors (gateway-side)
  | "POLICY_DENIED"
  | "NO_POLICY";

export type FileTransferErr = {
  ok: false;
  code: FileTransferErrCode;
  message: string;
  canonicalPath?: string;
};

export function err(
  code: FileTransferErrCode,
  message: string,
  canonicalPath?: string,
): FileTransferErr {
  return { ok: false, code, message, ...(canonicalPath ? { canonicalPath } : {}) };
}

// Translate a node-side fs error to a public error code.
export function classifyFsError(e: unknown): FileTransferErrCode {
  const code = (e as { code?: string } | null)?.code;
  if (code === "ENOENT") {
    return "NOT_FOUND";
  }
  if (code === "EACCES" || code === "EPERM") {
    return "PERMISSION_DENIED";
  }
  if (code === "EISDIR") {
    return "IS_DIRECTORY";
  }
  return "READ_ERROR";
}

// Convert a node-host error payload to a thrown Error for agent-tool consumption.
// The agent-tool surfaces these as failed tool results uniformly.
export function throwFromNodePayload(operation: string, payload: Record<string, unknown>): never {
  const code = typeof payload.code === "string" ? payload.code : "ERROR";
  const message = typeof payload.message === "string" ? payload.message : `${operation} failed`;
  const canonical =
    typeof payload.canonicalPath === "string" ? ` (canonical=${payload.canonicalPath})` : "";
  throw new Error(`${operation} ${code}: ${message}${canonical}`);
}
58  extensions/file-transfer/src/shared/mime.test.ts  Normal file

@@ -0,0 +1,58 @@
import { describe, expect, it } from "vitest";
import {
  EXTENSION_MIME,
  IMAGE_MIME_INLINE_SET,
  TEXT_INLINE_MAX_BYTES,
  TEXT_INLINE_MIME_SET,
  mimeFromExtension,
} from "./mime.js";

describe("mimeFromExtension", () => {
  it("returns the mapped mime for known extensions", () => {
    expect(mimeFromExtension("foo.png")).toBe("image/png");
    expect(mimeFromExtension("/abs/path/bar.JPG")).toBe("image/jpeg");
    expect(mimeFromExtension("doc.pdf")).toBe("application/pdf");
    expect(mimeFromExtension("notes.md")).toBe("text/markdown");
  });

  it("falls back to application/octet-stream for unknown extensions", () => {
    expect(mimeFromExtension("blob.xyz")).toBe("application/octet-stream");
    expect(mimeFromExtension("Makefile")).toBe("application/octet-stream");
  });

  it("is case-insensitive on the extension", () => {
    expect(mimeFromExtension("foo.PNG")).toBe("image/png");
    expect(mimeFromExtension("foo.WeBp")).toBe("image/webp");
  });
});

describe("MIME constants", () => {
  it("EXTENSION_MIME includes the v1 image set", () => {
    expect(EXTENSION_MIME[".png"]).toBe("image/png");
    expect(EXTENSION_MIME[".jpg"]).toBe("image/jpeg");
    expect(EXTENSION_MIME[".jpeg"]).toBe("image/jpeg");
    expect(EXTENSION_MIME[".webp"]).toBe("image/webp");
    expect(EXTENSION_MIME[".gif"]).toBe("image/gif");
  });

  it("IMAGE_MIME_INLINE_SET is the inline-renderable image set", () => {
    expect(IMAGE_MIME_INLINE_SET.has("image/png")).toBe(true);
    expect(IMAGE_MIME_INLINE_SET.has("image/jpeg")).toBe(true);
    expect(IMAGE_MIME_INLINE_SET.has("image/webp")).toBe(true);
    expect(IMAGE_MIME_INLINE_SET.has("image/gif")).toBe(true);
    // heic/heif intentionally excluded
    expect(IMAGE_MIME_INLINE_SET.has("image/heic")).toBe(false);
    expect(IMAGE_MIME_INLINE_SET.has("image/heif")).toBe(false);
  });

  it("TEXT_INLINE_MIME_SET covers small-text inlining types", () => {
    expect(TEXT_INLINE_MIME_SET.has("text/plain")).toBe(true);
    expect(TEXT_INLINE_MIME_SET.has("text/markdown")).toBe(true);
    expect(TEXT_INLINE_MIME_SET.has("application/json")).toBe(true);
    expect(TEXT_INLINE_MIME_SET.has("text/csv")).toBe(true);
  });

  it("TEXT_INLINE_MAX_BYTES is the documented 8KB cap", () => {
    expect(TEXT_INLINE_MAX_BYTES).toBe(8 * 1024);
  });
});
53  extensions/file-transfer/src/shared/mime.ts  Normal file

@@ -0,0 +1,53 @@
import path from "node:path";

// Single source of truth for extension→MIME mapping. Used by all four
// handlers/tools so adding a new extension lands everywhere at once.
export const EXTENSION_MIME: Record<string, string> = {
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".jpeg": "image/jpeg",
  ".webp": "image/webp",
  ".gif": "image/gif",
  ".bmp": "image/bmp",
  ".heic": "image/heic",
  ".heif": "image/heif",
  ".pdf": "application/pdf",
  ".txt": "text/plain",
  ".log": "text/plain",
  ".md": "text/markdown",
  ".json": "application/json",
  ".csv": "text/csv",
  ".html": "text/html",
  ".xml": "application/xml",
  ".zip": "application/zip",
  ".tar": "application/x-tar",
  ".gz": "application/gzip",
};

// MIME types we treat as inline-displayable images for vision-capable models.
// Note: heic/heif are detectable but not all providers can render them, so we
// leave them out of the inline-image set and let them flow as text+saved-path.
export const IMAGE_MIME_INLINE_SET = new Set([
  "image/png",
  "image/jpeg",
  "image/webp",
  "image/gif",
]);

// Plain-text MIME types where inlining the content into a text block is more
// useful than a "saved at <path>" stub for small files (under TEXT_INLINE_MAX_BYTES).
export const TEXT_INLINE_MIME_SET = new Set([
  "text/plain",
  "text/markdown",
  "text/csv",
  "text/html",
  "application/json",
  "application/xml",
]);

export const TEXT_INLINE_MAX_BYTES = 8 * 1024;

export function mimeFromExtension(filePath: string): string {
  const ext = path.extname(filePath).toLowerCase();
  return EXTENSION_MIME[ext] ?? "application/octet-stream";
}
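The lookup above leans on `path.extname` returning the last dot-suffix (or `""` for extensionless names), lowercased before the table hit so `.PNG` and `.png` resolve identically. A self-contained sketch with a trimmed sample table (`SAMPLE_MIME` and `sampleMimeFromExtension` are illustrative names; the real mapping is `EXTENSION_MIME`):

```typescript
import path from "node:path";

// Trimmed stand-in for the EXTENSION_MIME table, for illustration only.
const SAMPLE_MIME: Record<string, string> = {
  ".png": "image/png",
  ".md": "text/markdown",
};

export function sampleMimeFromExtension(filePath: string): string {
  // path.extname("Makefile") === "", so extensionless names miss the table
  // and fall through to the octet-stream default, same as mimeFromExtension.
  return SAMPLE_MIME[path.extname(filePath).toLowerCase()] ?? "application/octet-stream";
}
```

Keeping the fallback as `application/octet-stream` means unknown binaries are never mis-inlined as text or images downstream.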
584  extensions/file-transfer/src/shared/node-invoke-policy.test.ts  Normal file

@@ -0,0 +1,584 @@
import { spawn } from "node:child_process";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import type { OpenClawPluginNodeInvokePolicyContext } from "openclaw/plugin-sdk/plugin-entry";
import { afterEach, describe, expect, it, vi } from "vitest";
import { createFileTransferNodeInvokePolicy } from "./node-invoke-policy.js";

vi.mock("./audit.js", () => ({
  appendFileTransferAudit: vi.fn(async () => undefined),
}));

vi.mock("./policy.js", async (importOriginal) => {
  const actual = await importOriginal<typeof import("./policy.js")>();
  return {
    ...actual,
    persistAllowAlways: vi.fn(async () => undefined),
  };
});

const tmpRoots: string[] = [];
const testUnlessWindows = process.platform === "win32" ? it.skip : it;

afterEach(async () => {
  await Promise.all(tmpRoots.map((tmpRoot) => fs.rm(tmpRoot, { recursive: true, force: true })));
  tmpRoots.length = 0;
});

async function tarEntries(entries: Record<string, string>): Promise<string> {
  const tmpRoot = await fs.realpath(await fs.mkdtemp(path.join(os.tmpdir(), "node-policy-tar-")));
  tmpRoots.push(tmpRoot);
  for (const [relPath, contents] of Object.entries(entries)) {
    const absPath = path.join(tmpRoot, relPath);
    await fs.mkdir(path.dirname(absPath), { recursive: true });
    await fs.writeFile(absPath, contents);
  }
  return await new Promise<string>((resolve, reject) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(tarBin, ["-czf", "-", "-C", tmpRoot, "."], {
      stdio: ["ignore", "pipe", "pipe"],
    });
    const chunks: Buffer[] = [];
    let stderr = "";
    child.stdout.on("data", (chunk: Buffer) => chunks.push(chunk));
    child.stderr.on("data", (chunk: Buffer) => {
      stderr += chunk.toString();
    });
    child.on("close", (code) => {
      if (code !== 0) {
        reject(new Error(`tar exited ${code}: ${stderr}`));
        return;
      }
      resolve(Buffer.concat(chunks).toString("base64"));
    });
    child.on("error", reject);
  });
}

function createCtx(overrides: {
  command?: string;
  params?: Record<string, unknown>;
  pluginConfig?: Record<string, unknown>;
  approvals?: OpenClawPluginNodeInvokePolicyContext["approvals"];
}) {
  const invokeNode = vi.fn<OpenClawPluginNodeInvokePolicyContext["invokeNode"]>(
    async ({
      params,
    }: Parameters<OpenClawPluginNodeInvokePolicyContext["invokeNode"]>[0] = {}) => ({
      ok: true,
      payload: {
        ok: true,
        path:
          typeof (params as { path?: unknown } | undefined)?.path === "string"
            ? (params as { path: string }).path
            : "/tmp/file.txt",
        size: 1,
        sha256: "a".repeat(64),
      },
    }),
  );
  return {
    ctx: {
      nodeId: "node-1",
      command: overrides.command ?? "file.fetch",
      params: overrides.params ?? { path: "/tmp/file.txt", maxBytes: 1024 },
      config: {},
      pluginConfig: overrides.pluginConfig ?? {
        nodes: {
          "node-1": {
            allowReadPaths: ["/tmp/**"],
            allowWritePaths: ["/tmp/**"],
            maxBytes: 512,
          },
        },
      },
      node: { nodeId: "node-1", displayName: "Node One" },
      ...(overrides.approvals ? { approvals: overrides.approvals } : {}),
      invokeNode,
    },
    invokeNode,
  };
}

describe("file-transfer node invoke policy", () => {
  it("injects policy-owned limits before invoking the node", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      command: "file.fetch",
      params: { path: "/tmp/file.txt", maxBytes: 4096, followSymlinks: true },
    });

    const result = await policy.handle(ctx);

    expect(result.ok).toBe(true);
    expect(invokeNode).toHaveBeenNthCalledWith(1, {
      params: {
        path: "/tmp/file.txt",
        maxBytes: 512,
        followSymlinks: false,
        preflightOnly: true,
      },
    });
    expect(invokeNode).toHaveBeenNthCalledWith(2, {
      params: {
        path: "/tmp/file.txt",
        maxBytes: 512,
        followSymlinks: false,
      },
    });
  });

  it("denies raw node.invoke before the node when plugin policy is missing", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({ pluginConfig: {} });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({ ok: false, code: "NO_POLICY" });
    expect(invokeNode).not.toHaveBeenCalled();
  });

  it("uses plugin approvals for ask-on-miss before invoking the node", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const approvals = {
      request: vi.fn(async () => ({ id: "approval-1", decision: "allow-once" as const })),
    };
    const { ctx, invokeNode } = createCtx({
      params: { path: "/tmp/new.txt" },
      pluginConfig: {
        nodes: {
          "node-1": {
            ask: "on-miss",
            allowReadPaths: ["/allowed/**"],
            maxBytes: 256,
          },
        },
      },
      approvals,
    });

    const result = await policy.handle(ctx);

    expect(result.ok).toBe(true);
    expect(approvals.request).toHaveBeenCalledWith(
      expect.objectContaining({
        title: "Read file: /tmp/new.txt",
        severity: "info",
        toolName: "file.fetch",
      }),
    );
    expect(invokeNode).toHaveBeenNthCalledWith(1, {
      params: {
        path: "/tmp/new.txt",
        followSymlinks: false,
        maxBytes: 256,
        preflightOnly: true,
      },
    });
    expect(invokeNode).toHaveBeenNthCalledWith(2, {
      params: {
        path: "/tmp/new.txt",
        followSymlinks: false,
        maxBytes: 256,
      },
    });
  });

  it("marks node transport failures as unavailable", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      params: { path: "/tmp/file.txt" },
    });
    invokeNode.mockResolvedValueOnce({
      ok: false,
      code: "TIMEOUT",
      message: "node timed out",
      details: { nodeError: { code: "TIMEOUT" } },
    });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({
      ok: false,
      code: "TIMEOUT",
      unavailable: true,
      details: { nodeError: { code: "TIMEOUT" } },
    });
  });

  it("checks file.fetch canonical policy before requesting bytes", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      params: { path: "/tmp/link.txt" },
    });
    invokeNode.mockResolvedValueOnce({
      ok: true,
      payload: {
        ok: true,
        path: "/etc/passwd",
        size: 1,
        sha256: "a".repeat(64),
      },
    });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({ ok: false, code: "SYMLINK_TARGET_DENIED" });
    expect(invokeNode).toHaveBeenCalledTimes(1);
    expect(invokeNode).toHaveBeenCalledWith({
      params: expect.objectContaining({
        path: "/tmp/link.txt",
        followSymlinks: false,
        preflightOnly: true,
      }),
    });
  });

  it("continues file.fetch after preflight without forwarding caller preflightOnly", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      params: { path: "/tmp/file.txt", preflightOnly: true },
    });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({ ok: true });
    expect(invokeNode).toHaveBeenCalledTimes(2);
    expect(invokeNode).toHaveBeenNthCalledWith(1, {
      params: expect.objectContaining({ path: "/tmp/file.txt", preflightOnly: true }),
    });
    expect(invokeNode).toHaveBeenNthCalledWith(2, {
      params: expect.not.objectContaining({ preflightOnly: true }),
    });
  });

  it("checks file.write canonical policy before the mutating node call", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      command: "file.write",
      params: {
        path: "/tmp/link/out.txt",
        contentBase64: Buffer.from("payload").toString("base64"),
        createParents: true,
      },
      pluginConfig: {
        nodes: {
          "node-1": {
            allowWritePaths: ["/tmp/**"],
            followSymlinks: true,
          },
        },
      },
    });
    invokeNode.mockResolvedValueOnce({
      ok: true,
      payload: {
        ok: true,
        path: "/etc/out.txt",
        size: 7,
        sha256: "b".repeat(64),
        overwritten: false,
      },
    });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({ ok: false, code: "SYMLINK_TARGET_DENIED" });
    expect(invokeNode).toHaveBeenCalledTimes(1);
    expect(invokeNode).toHaveBeenCalledWith({
      params: expect.objectContaining({
        path: "/tmp/link/out.txt",
        followSymlinks: true,
        preflightOnly: true,
      }),
    });
  });

  it("continues file.write after preflight without forwarding caller preflightOnly", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      command: "file.write",
      params: {
        path: "/tmp/link/out.txt",
        contentBase64: Buffer.from("payload").toString("base64"),
        createParents: true,
        preflightOnly: true,
      },
      pluginConfig: {
        nodes: {
          "node-1": {
            allowWritePaths: ["/tmp/**", "/private/tmp/**"],
            followSymlinks: true,
          },
        },
      },
    });
    invokeNode
      .mockResolvedValueOnce({
        ok: true,
        payload: {
          ok: true,
          path: "/private/tmp/out.txt",
          size: 7,
          sha256: "b".repeat(64),
          overwritten: false,
        },
      })
      .mockResolvedValueOnce({
        ok: true,
        payload: {
          ok: true,
          path: "/private/tmp/out.txt",
          size: 7,
          sha256: "b".repeat(64),
          overwritten: false,
        },
      });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({ ok: true });
    expect(invokeNode).toHaveBeenCalledTimes(2);
    expect(invokeNode).toHaveBeenNthCalledWith(1, {
      params: expect.objectContaining({ preflightOnly: true }),
    });
    expect(invokeNode).toHaveBeenNthCalledWith(2, {
      params: expect.not.objectContaining({ preflightOnly: true }),
    });
  });

  it("checks every dir.fetch preflight entry before requesting the archive", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      command: "dir.fetch",
      params: { path: "/home/me" },
      pluginConfig: {
        nodes: {
          "node-1": {
            allowReadPaths: ["/home/me", "/home/me/**"],
            denyPaths: ["**/.ssh/**"],
          },
        },
      },
    });
    invokeNode.mockResolvedValueOnce({
      ok: true,
      payload: {
        ok: true,
        path: "/home/me",
        entries: ["ok.txt", ".ssh/id_rsa"],
        fileCount: 2,
        preflightOnly: true,
      },
    });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({
      ok: false,
      code: "PATH_POLICY_DENIED",
      details: { path: "/home/me/.ssh/id_rsa" },
    });
    expect(invokeNode).toHaveBeenCalledTimes(1);
    expect(invokeNode).toHaveBeenCalledWith({
      params: expect.objectContaining({ path: "/home/me", preflightOnly: true }),
    });
  });

  it("rejects dir.fetch preflight responses without an entry list", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      command: "dir.fetch",
      params: { path: "/home/me" },
      pluginConfig: {
        nodes: {
          "node-1": {
            allowReadPaths: ["/home/me", "/home/me/**"],
          },
        },
      },
    });
    invokeNode.mockResolvedValueOnce({
      ok: true,
      payload: {
        ok: true,
        path: "/home/me",
        fileCount: 2,
        preflightOnly: true,
      },
    });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({ ok: false, code: "PREFLIGHT_ENTRIES_MISSING" });
    expect(invokeNode).toHaveBeenCalledTimes(1);
  });

  it("rejects invalid dir.fetch preflight entries before requesting the archive", async () => {
    const policy = createFileTransferNodeInvokePolicy();
    const { ctx, invokeNode } = createCtx({
      command: "dir.fetch",
      params: { path: "/home/me" },
      pluginConfig: {
        nodes: {
          "node-1": {
            allowReadPaths: ["/home/me", "/home/me/**"],
          },
        },
      },
    });
    invokeNode.mockResolvedValueOnce({
      ok: true,
      payload: {
        ok: true,
        path: "/home/me",
        entries: ["ok.txt", "/etc/passwd"],
        fileCount: 2,
        preflightOnly: true,
      },
    });

    const result = await policy.handle(ctx);

    expect(result).toMatchObject({ ok: false, code: "PREFLIGHT_ENTRY_INVALID" });
    expect(invokeNode).toHaveBeenCalledTimes(1);
  });

  testUnlessWindows(
    "continues dir.fetch after preflight without forwarding caller preflightOnly",
    async () => {
      const policy = createFileTransferNodeInvokePolicy();
      const tarBase64 = await tarEntries({
        "a.txt": "a",
        "sub/b.txt": "b",
      });
      const { ctx, invokeNode } = createCtx({
        command: "dir.fetch",
        params: { path: "/tmp/project", preflightOnly: true },
      });
      invokeNode
        .mockResolvedValueOnce({
          ok: true,
          payload: {
            ok: true,
            path: "/tmp/project",
            entries: ["a.txt", "sub/b.txt"],
            fileCount: 2,
            preflightOnly: true,
          },
        })
        .mockResolvedValueOnce({
          ok: true,
          payload: {
            ok: true,
            path: "/tmp/project",
            tarBase64,
            tarBytes: 7,
            sha256: "c".repeat(64),
            fileCount: 2,
            entries: ["a.txt", "sub/b.txt"],
          },
        });

      const result = await policy.handle(ctx);

      expect(result).toMatchObject({ ok: true });
      expect(invokeNode).toHaveBeenCalledTimes(2);
      expect(invokeNode).toHaveBeenNthCalledWith(1, {
        params: expect.objectContaining({ path: "/tmp/project", preflightOnly: true }),
      });
      expect(invokeNode).toHaveBeenNthCalledWith(2, {
        params: expect.not.objectContaining({ preflightOnly: true }),
      });
    },
  );

  testUnlessWindows(
    "checks final dir.fetch archive entries before returning the archive",
    async () => {
      const policy = createFileTransferNodeInvokePolicy();
      const tarBase64 = await tarEntries({
        "ok.txt": "ok",
        ".ssh/id_rsa": "secret",
      });
      const { ctx, invokeNode } = createCtx({
        command: "dir.fetch",
        params: { path: "/home/me" },
        pluginConfig: {
          nodes: {
            "node-1": {
              allowReadPaths: ["/home/me", "/home/me/**"],
              denyPaths: ["**/.ssh/**"],
            },
          },
        },
      });
      invokeNode
        .mockResolvedValueOnce({
          ok: true,
          payload: {
            ok: true,
            path: "/home/me",
            entries: ["ok.txt"],
            fileCount: 1,
            preflightOnly: true,
          },
        })
        .mockResolvedValueOnce({
|
||||
ok: true,
|
||||
payload: {
|
||||
ok: true,
|
||||
path: "/home/me",
|
||||
tarBase64,
|
||||
tarBytes: 7,
|
||||
sha256: "c".repeat(64),
|
||||
fileCount: 2,
|
||||
},
|
||||
});
|
||||
|
||||
const result = await policy.handle(ctx);
|
||||
|
||||
expect(result).toMatchObject({
|
||||
ok: false,
|
||||
code: "PATH_POLICY_DENIED",
|
||||
details: { path: "/home/me/.ssh/id_rsa" },
|
||||
});
|
||||
expect(invokeNode).toHaveBeenCalledTimes(2);
|
||||
},
|
||||
);
|
||||
|
||||
it("rejects final dir.fetch archive responses without readable archive entries", async () => {
|
||||
const policy = createFileTransferNodeInvokePolicy();
|
||||
const { ctx, invokeNode } = createCtx({
|
||||
command: "dir.fetch",
|
||||
params: { path: "/tmp/project" },
|
||||
});
|
||||
invokeNode
|
||||
.mockResolvedValueOnce({
|
||||
ok: true,
|
||||
payload: {
|
||||
ok: true,
|
||||
path: "/tmp/project",
|
||||
entries: ["a.txt"],
|
||||
fileCount: 1,
|
||||
preflightOnly: true,
|
||||
},
|
||||
})
|
||||
.mockResolvedValueOnce({
|
||||
ok: true,
|
||||
payload: {
|
||||
ok: true,
|
||||
path: "/tmp/project",
|
||||
tarBytes: 7,
|
||||
sha256: "c".repeat(64),
|
||||
fileCount: 1,
|
||||
},
|
||||
});
|
||||
|
||||
const result = await policy.handle(ctx);
|
||||
|
||||
expect(result).toMatchObject({ ok: false, code: "ARCHIVE_ENTRIES_MISSING" });
|
||||
expect(invokeNode).toHaveBeenCalledTimes(2);
|
||||
});
|
||||
});
|
||||
938
extensions/file-transfer/src/shared/node-invoke-policy.ts
Normal file
@@ -0,0 +1,938 @@
import { spawn } from "node:child_process";
import type {
  OpenClawPluginNodeInvokePolicy,
  OpenClawPluginNodeInvokePolicyContext,
  OpenClawPluginNodeInvokePolicyResult,
} from "openclaw/plugin-sdk/plugin-entry";
import { appendFileTransferAudit, type FileTransferAuditOp } from "./audit.js";
import { evaluateFilePolicy, persistAllowAlways, type FilePolicyKind } from "./policy.js";

const FILE_FETCH_DEFAULT_MAX_BYTES = 8 * 1024 * 1024;
const FILE_FETCH_HARD_MAX_BYTES = 16 * 1024 * 1024;
const DIR_FETCH_DEFAULT_MAX_BYTES = 8 * 1024 * 1024;
const DIR_FETCH_HARD_MAX_BYTES = 16 * 1024 * 1024;
const DIR_FETCH_ARCHIVE_LIST_TIMEOUT_MS = 30_000;
const DIR_FETCH_ARCHIVE_LIST_MAX_OUTPUT_BYTES = 32 * 1024 * 1024;

type FileTransferCommand = "file.fetch" | "dir.list" | "dir.fetch" | "file.write";

const COMMANDS: FileTransferCommand[] = ["file.fetch", "dir.list", "dir.fetch", "file.write"];

function asRecord(value: unknown): Record<string, unknown> {
  return value && typeof value === "object" && !Array.isArray(value)
    ? (value as Record<string, unknown>)
    : {};
}

function readPath(params: Record<string, unknown>): string {
  return typeof params.path === "string" ? params.path.trim() : "";
}

function readMaxBytes(input: {
  value: unknown;
  defaultValue: number;
  hardMax: number;
  policyMax?: number;
}): number {
  const requested =
    typeof input.value === "number" && Number.isFinite(input.value)
      ? Math.floor(input.value)
      : input.defaultValue;
  const clamped = Math.max(1, Math.min(requested, input.hardMax));
  return input.policyMax ? Math.min(clamped, input.policyMax) : clamped;
}

function commandKind(command: FileTransferCommand): FilePolicyKind {
  return command === "file.write" ? "write" : "read";
}

function promptVerb(command: FileTransferCommand): string {
  switch (command) {
    case "dir.fetch":
      return "Fetch directory";
    case "dir.list":
      return "List directory";
    case "file.write":
      return "Write file";
    case "file.fetch":
      return "Read file";
  }
  return command;
}

async function requestApproval(input: {
  ctx: OpenClawPluginNodeInvokePolicyContext;
  op: FileTransferAuditOp;
  kind: FilePolicyKind;
  path: string;
  startedAt: number;
}): Promise<
  | { ok: true; followSymlinks: boolean; maxBytes?: number }
  | { ok: false; message: string; code: string }
> {
  const nodeDisplayName = input.ctx.node?.displayName;
  const decision = evaluateFilePolicy({
    nodeId: input.ctx.nodeId,
    nodeDisplayName,
    kind: input.kind,
    path: input.path,
    pluginConfig: input.ctx.pluginConfig,
  });

  if (decision.ok && decision.reason === "matched-allow") {
    return {
      ok: true,
      followSymlinks: decision.followSymlinks,
      maxBytes: decision.maxBytes,
    };
  }

  const shouldAsk =
    (decision.ok && decision.reason === "ask-always") || (!decision.ok && decision.askable);
  if (!shouldAsk) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.path,
      decision:
        !decision.ok && decision.code === "NO_POLICY" ? "denied:no_policy" : "denied:policy",
      errorCode: decision.ok ? undefined : decision.code,
      reason: decision.reason,
      durationMs: Date.now() - input.startedAt,
    });
    return {
      ok: false,
      code: decision.ok ? "POLICY_DENIED" : decision.code,
      message: `${input.op} ${decision.ok ? "POLICY_DENIED" : decision.code}: ${decision.reason}`,
    };
  }

  const approvals = input.ctx.approvals;
  if (!approvals) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.path,
      decision: "denied:approval",
      reason: "plugin approvals unavailable",
      durationMs: Date.now() - input.startedAt,
    });
    return {
      ok: false,
      code: "APPROVAL_UNAVAILABLE",
      message: `${input.op} APPROVAL_UNAVAILABLE: plugin approvals unavailable`,
    };
  }

  const verb = promptVerb(input.op);
  const subject = nodeDisplayName ?? input.ctx.nodeId;
  const approval = await approvals.request({
    title: `${verb}: ${input.path}`,
    description: `Allow ${verb.toLowerCase()} on ${subject}\nPath: ${input.path}\nKind: ${input.kind}\n\n"allow-always" appends this exact path to allow${input.kind === "read" ? "Read" : "Write"}Paths.`,
    severity: input.kind === "write" ? "warning" : "info",
    toolName: input.op,
  });

  if (approval.decision === "deny" || !approval.decision) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.path,
      decision: "denied:approval",
      reason: approval.decision === "deny" ? "operator denied" : "no operator available",
      durationMs: Date.now() - input.startedAt,
    });
    return {
      ok: false,
      code: approval.decision === "deny" ? "APPROVAL_DENIED" : "APPROVAL_UNAVAILABLE",
      message:
        approval.decision === "deny"
          ? `${input.op} APPROVAL_DENIED: operator denied the prompt`
          : `${input.op} APPROVAL_UNAVAILABLE: no operator client connected to approve the request`,
    };
  }

  if (approval.decision === "allow-always") {
    try {
      await persistAllowAlways({
        nodeId: input.ctx.nodeId,
        nodeDisplayName,
        kind: input.kind,
        path: input.path,
      });
      const refreshed = evaluateFilePolicy({
        nodeId: input.ctx.nodeId,
        nodeDisplayName,
        kind: input.kind,
        path: input.path,
        pluginConfig: input.ctx.pluginConfig,
      });
      if (refreshed.ok) {
        await appendFileTransferAudit({
          op: input.op,
          nodeId: input.ctx.nodeId,
          nodeDisplayName,
          requestedPath: input.path,
          decision: "allowed:always",
          durationMs: Date.now() - input.startedAt,
        });
        return {
          ok: true,
          followSymlinks: refreshed.followSymlinks,
          maxBytes: refreshed.maxBytes,
        };
      }
    } catch (error) {
      await appendFileTransferAudit({
        op: input.op,
        nodeId: input.ctx.nodeId,
        nodeDisplayName,
        requestedPath: input.path,
        decision: "allowed:always",
        reason: `persist failed: ${String(error)}`,
        durationMs: Date.now() - input.startedAt,
      });
      return {
        ok: true,
        followSymlinks: decision.ok ? decision.followSymlinks : false,
        maxBytes: decision.maxBytes,
      };
    }
  }

  await appendFileTransferAudit({
    op: input.op,
    nodeId: input.ctx.nodeId,
    nodeDisplayName,
    requestedPath: input.path,
    decision: approval.decision === "allow-always" ? "allowed:always" : "allowed:once",
    durationMs: Date.now() - input.startedAt,
  });
  return {
    ok: true,
    followSymlinks: decision.ok ? decision.followSymlinks : false,
    maxBytes: decision.maxBytes,
  };
}

function prepareParams(input: {
  command: FileTransferCommand;
  params: Record<string, unknown>;
  followSymlinks: boolean;
  maxBytes?: number;
}): Record<string, unknown> {
  const next: Record<string, unknown> = {
    ...input.params,
    followSymlinks: input.followSymlinks,
  };
  delete next.preflightOnly;
  if (input.command === "file.fetch") {
    next.maxBytes = readMaxBytes({
      value: input.params.maxBytes,
      defaultValue: FILE_FETCH_DEFAULT_MAX_BYTES,
      hardMax: FILE_FETCH_HARD_MAX_BYTES,
      policyMax: input.maxBytes,
    });
  } else if (input.command === "dir.fetch") {
    next.maxBytes = readMaxBytes({
      value: input.params.maxBytes,
      defaultValue: DIR_FETCH_DEFAULT_MAX_BYTES,
      hardMax: DIR_FETCH_HARD_MAX_BYTES,
      policyMax: input.maxBytes,
    });
  }
  return next;
}

function readResultPayload(result: { payload?: unknown }): Record<string, unknown> | null {
  return result.payload && typeof result.payload === "object" && !Array.isArray(result.payload)
    ? (result.payload as Record<string, unknown>)
    : null;
}

function joinRemotePolicyPath(root: string, relPath: string): string {
  const rel = relPath.replace(/\\/gu, "/").replace(/^\.\//u, "");
  if (!rel || rel === ".") {
    return root;
  }
  const sep = root.includes("\\") && !root.includes("/") ? "\\" : "/";
  const cleanRoot = root.replace(/[\\/]$/u, "");
  const prefix = cleanRoot || sep;
  return `${prefix}${prefix.endsWith(sep) ? "" : sep}${rel.split("/").join(sep)}`;
}

function validateDirFetchPreflightEntry(
  entry: string,
): { ok: true } | { ok: false; reason: string } {
  if (entry.includes("\0")) {
    return { ok: false, reason: "entry contains NUL byte" };
  }
  const normalized = entry.replace(/\\/gu, "/").replace(/^\.\//u, "");
  if (!normalized || normalized === ".") {
    return { ok: false, reason: "entry is empty" };
  }
  if (normalized.startsWith("/") || /^[A-Za-z]:\//u.test(normalized)) {
    return { ok: false, reason: "entry is absolute" };
  }
  if (normalized === ".." || normalized.startsWith("../") || normalized.includes("/../")) {
    return { ok: false, reason: "entry contains '..' traversal" };
  }
  return { ok: true };
}

function normalizeTarEntryPath(entry: string): string | null {
  const normalized = entry.replace(/\\/gu, "/").replace(/^\.\//u, "").replace(/\/$/u, "");
  return normalized.length > 0 ? normalized : null;
}

async function listDirFetchArchiveEntries(
  payload: Record<string, unknown> | null,
): Promise<{ ok: true; entries: string[] } | { ok: false; code: string; reason: string }> {
  const tarBase64 = typeof payload?.tarBase64 === "string" ? payload.tarBase64 : "";
  if (!tarBase64) {
    return {
      ok: false,
      code: "ARCHIVE_ENTRIES_MISSING",
      reason: "dir.fetch archive did not return tarBase64",
    };
  }
  const tarBuffer = Buffer.from(tarBase64, "base64");
  return await new Promise<
    { ok: true; entries: string[] } | { ok: false; code: string; reason: string }
  >((resolve) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(tarBin, ["-tzf", "-"], { stdio: ["pipe", "pipe", "pipe"] });
    let stdout = "";
    let stderr = "";
    let aborted = false;
    const watchdog = setTimeout(() => {
      aborted = true;
      try {
        child.kill("SIGKILL");
      } catch {
        /* gone */
      }
      resolve({
        ok: false,
        code: "ARCHIVE_ENTRIES_UNREADABLE",
        reason: "tar -tzf timed out",
      });
    }, DIR_FETCH_ARCHIVE_LIST_TIMEOUT_MS);
    child.stdout.on("data", (chunk: Buffer) => {
      stdout += chunk.toString();
      if (stdout.length > DIR_FETCH_ARCHIVE_LIST_MAX_OUTPUT_BYTES) {
        aborted = true;
        clearTimeout(watchdog);
        try {
          child.kill("SIGKILL");
        } catch {
          /* gone */
        }
        resolve({
          ok: false,
          code: "ARCHIVE_ENTRIES_UNREADABLE",
          reason: "tar -tzf output too large",
        });
      }
    });
    child.stderr.on("data", (chunk: Buffer) => {
      stderr += chunk.toString();
    });
    child.on("close", (code) => {
      clearTimeout(watchdog);
      if (aborted) {
        return;
      }
      if (code !== 0) {
        resolve({
          ok: false,
          code: "ARCHIVE_ENTRIES_UNREADABLE",
          reason: `tar -tzf exited ${code}: ${stderr.slice(0, 200)}`,
        });
        return;
      }
      resolve({
        ok: true,
        entries: stdout
          .split("\n")
          .map(normalizeTarEntryPath)
          .filter((entry): entry is string => entry !== null),
      });
    });
    child.on("error", (error) => {
      clearTimeout(watchdog);
      if (!aborted) {
        resolve({
          ok: false,
          code: "ARCHIVE_ENTRIES_UNREADABLE",
          reason: `tar -tzf error: ${String(error)}`,
        });
      }
    });
    child.stdin.end(tarBuffer);
  });
}

async function validateDirFetchEntries(input: {
  ctx: OpenClawPluginNodeInvokePolicyContext;
  op: FileTransferAuditOp;
  requestedPath: string;
  canonicalPath: string;
  entries: unknown;
  startedAt: number;
  phase: "preflight" | "archive";
}): Promise<OpenClawPluginNodeInvokePolicyResult | null> {
  const nodeDisplayName = input.ctx.node?.displayName;
  const missingCode =
    input.phase === "preflight" ? "PREFLIGHT_ENTRIES_MISSING" : "ARCHIVE_ENTRIES_MISSING";
  const invalidCode =
    input.phase === "preflight" ? "PREFLIGHT_ENTRY_INVALID" : "ARCHIVE_ENTRY_INVALID";
  if (!Array.isArray(input.entries)) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      canonicalPath: input.canonicalPath,
      decision: "error",
      errorCode: missingCode,
      reason: `dir.fetch ${input.phase} did not return entries`,
      durationMs: Date.now() - input.startedAt,
    });
    return policyDeniedResult({
      op: input.op,
      code: missingCode,
      message: `dir.fetch ${input.phase} did not return entries; refusing archive transfer`,
      details: { path: input.canonicalPath },
    });
  }

  const entries: string[] = [];
  for (const entry of input.entries) {
    if (typeof entry !== "string" || entry.length === 0) {
      await appendFileTransferAudit({
        op: input.op,
        nodeId: input.ctx.nodeId,
        nodeDisplayName,
        requestedPath: input.requestedPath,
        canonicalPath: input.canonicalPath,
        decision: "denied:policy",
        errorCode: invalidCode,
        reason: "entry is not a non-empty string",
        durationMs: Date.now() - input.startedAt,
      });
      return policyDeniedResult({
        op: input.op,
        code: invalidCode,
        message: `directory ${input.phase} entry is invalid: entry is not a non-empty string`,
        details: { path: input.canonicalPath, reason: "entry is not a non-empty string" },
      });
    }
    const entryValidation = validateDirFetchPreflightEntry(entry);
    if (!entryValidation.ok) {
      const candidate = joinRemotePolicyPath(input.canonicalPath, entry);
      await appendFileTransferAudit({
        op: input.op,
        nodeId: input.ctx.nodeId,
        nodeDisplayName,
        requestedPath: input.requestedPath,
        canonicalPath: candidate,
        decision: "denied:policy",
        errorCode: invalidCode,
        reason: entryValidation.reason,
        durationMs: Date.now() - input.startedAt,
      });
      return policyDeniedResult({
        op: input.op,
        code: invalidCode,
        message: `directory ${input.phase} entry ${entry} is invalid: ${entryValidation.reason}`,
        details: { path: candidate, reason: entryValidation.reason },
      });
    }
    entries.push(entry);
  }

  const candidates = [
    input.canonicalPath,
    ...entries.map((entry) => joinRemotePolicyPath(input.canonicalPath, entry)),
  ];
  for (const candidate of candidates) {
    const policy = evaluateFilePolicy({
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      kind: "read",
      path: candidate,
      pluginConfig: input.ctx.pluginConfig,
    });
    if (policy.ok) {
      continue;
    }
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      canonicalPath: candidate,
      decision: "denied:policy",
      errorCode: policy.code,
      reason: policy.reason,
      durationMs: Date.now() - input.startedAt,
    });
    return policyDeniedResult({
      op: input.op,
      code: "PATH_POLICY_DENIED",
      message: `directory ${input.phase} entry ${candidate} is not allowed by policy: ${policy.reason}`,
      details: { path: candidate, reason: policy.reason },
    });
  }

  return null;
}

function policyDeniedResult(input: {
  op: FileTransferAuditOp;
  code: string;
  message: string;
  details?: Record<string, unknown>;
}): OpenClawPluginNodeInvokePolicyResult {
  return {
    ok: false,
    code: input.code,
    message: `${input.op} ${input.code}: ${input.message}`,
    ...(input.details ? { details: input.details } : {}),
  };
}

async function runWritePreflight(input: {
  ctx: OpenClawPluginNodeInvokePolicyContext;
  op: FileTransferAuditOp;
  params: Record<string, unknown>;
  requestedPath: string;
  startedAt: number;
}): Promise<OpenClawPluginNodeInvokePolicyResult | null> {
  const nodeDisplayName = input.ctx.node?.displayName;
  const preflight = await input.ctx.invokeNode({
    params: {
      ...input.params,
      preflightOnly: true,
    },
  });
  if (!preflight.ok) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      decision: "error",
      errorCode: preflight.code,
      errorMessage: preflight.message,
      durationMs: Date.now() - input.startedAt,
    });
    return {
      ok: false,
      code: preflight.code,
      message: `${input.op} failed: ${preflight.message}`,
      details: preflight.details,
      unavailable: true,
    };
  }

  const payload = readResultPayload(preflight);
  if (payload?.ok === false) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      canonicalPath: typeof payload.canonicalPath === "string" ? payload.canonicalPath : undefined,
      decision: "error",
      errorCode: typeof payload.code === "string" ? payload.code : undefined,
      errorMessage: typeof payload.message === "string" ? payload.message : undefined,
      durationMs: Date.now() - input.startedAt,
    });
    return preflight;
  }

  const canonicalPath =
    payload && typeof payload.path === "string" && payload.path
      ? payload.path
      : input.requestedPath;
  if (canonicalPath === input.requestedPath) {
    return null;
  }

  const policy = evaluateFilePolicy({
    nodeId: input.ctx.nodeId,
    nodeDisplayName,
    kind: "write",
    path: canonicalPath,
    pluginConfig: input.ctx.pluginConfig,
  });
  if (policy.ok) {
    return null;
  }

  await appendFileTransferAudit({
    op: input.op,
    nodeId: input.ctx.nodeId,
    nodeDisplayName,
    requestedPath: input.requestedPath,
    canonicalPath,
    decision: "denied:symlink_escape",
    errorCode: policy.code,
    reason: policy.reason,
    durationMs: Date.now() - input.startedAt,
  });
  return {
    ok: false,
    code: "SYMLINK_TARGET_DENIED",
    message: `${input.op} SYMLINK_TARGET_DENIED: requested path resolved to ${canonicalPath} which is not allowed by policy`,
  };
}

async function runFileFetchPreflight(input: {
  ctx: OpenClawPluginNodeInvokePolicyContext;
  op: FileTransferAuditOp;
  params: Record<string, unknown>;
  requestedPath: string;
  startedAt: number;
}): Promise<OpenClawPluginNodeInvokePolicyResult | null> {
  const nodeDisplayName = input.ctx.node?.displayName;
  const preflight = await input.ctx.invokeNode({
    params: {
      ...input.params,
      preflightOnly: true,
    },
  });
  if (!preflight.ok) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      decision: "error",
      errorCode: preflight.code,
      errorMessage: preflight.message,
      durationMs: Date.now() - input.startedAt,
    });
    return {
      ok: false,
      code: preflight.code,
      message: `${input.op} failed: ${preflight.message}`,
      details: preflight.details,
      unavailable: true,
    };
  }

  const payload = readResultPayload(preflight);
  if (payload?.ok === false) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      canonicalPath: typeof payload.canonicalPath === "string" ? payload.canonicalPath : undefined,
      decision: "error",
      errorCode: typeof payload.code === "string" ? payload.code : undefined,
      errorMessage: typeof payload.message === "string" ? payload.message : undefined,
      durationMs: Date.now() - input.startedAt,
    });
    return preflight;
  }

  const canonicalPath =
    payload && typeof payload.path === "string" && payload.path
      ? payload.path
      : input.requestedPath;
  if (canonicalPath === input.requestedPath) {
    return null;
  }

  const policy = evaluateFilePolicy({
    nodeId: input.ctx.nodeId,
    nodeDisplayName,
    kind: "read",
    path: canonicalPath,
    pluginConfig: input.ctx.pluginConfig,
  });
  if (policy.ok) {
    return null;
  }

  await appendFileTransferAudit({
    op: input.op,
    nodeId: input.ctx.nodeId,
    nodeDisplayName,
    requestedPath: input.requestedPath,
    canonicalPath,
    decision: "denied:symlink_escape",
    errorCode: policy.code,
    reason: policy.reason,
    durationMs: Date.now() - input.startedAt,
  });
  return {
    ok: false,
    code: "SYMLINK_TARGET_DENIED",
    message: `${input.op} SYMLINK_TARGET_DENIED: requested path resolved to ${canonicalPath} which is not allowed by policy`,
  };
}

async function runDirFetchPreflight(input: {
  ctx: OpenClawPluginNodeInvokePolicyContext;
  op: FileTransferAuditOp;
  params: Record<string, unknown>;
  requestedPath: string;
  startedAt: number;
}): Promise<OpenClawPluginNodeInvokePolicyResult | null> {
  const nodeDisplayName = input.ctx.node?.displayName;
  const preflight = await input.ctx.invokeNode({
    params: {
      ...input.params,
      preflightOnly: true,
    },
  });
  if (!preflight.ok) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      decision: "error",
      errorCode: preflight.code,
      errorMessage: preflight.message,
      durationMs: Date.now() - input.startedAt,
    });
    return {
      ok: false,
      code: preflight.code,
      message: `${input.op} failed: ${preflight.message}`,
      details: preflight.details,
      unavailable: true,
    };
  }

  const payload = readResultPayload(preflight);
  if (payload?.ok === false) {
    await appendFileTransferAudit({
      op: input.op,
      nodeId: input.ctx.nodeId,
      nodeDisplayName,
      requestedPath: input.requestedPath,
      canonicalPath: typeof payload.canonicalPath === "string" ? payload.canonicalPath : undefined,
      decision: "error",
      errorCode: typeof payload.code === "string" ? payload.code : undefined,
      errorMessage: typeof payload.message === "string" ? payload.message : undefined,
      durationMs: Date.now() - input.startedAt,
    });
    return preflight;
  }

  const canonicalPath =
    payload && typeof payload.path === "string" && payload.path
      ? payload.path
      : input.requestedPath;
  return await validateDirFetchEntries({
    ctx: input.ctx,
    op: input.op,
    requestedPath: input.requestedPath,
    canonicalPath,
    entries: payload?.entries,
    startedAt: input.startedAt,
    phase: "preflight",
  });
}

async function handleFileTransferInvoke(
  ctx: OpenClawPluginNodeInvokePolicyContext,
): Promise<OpenClawPluginNodeInvokePolicyResult> {
  if (!COMMANDS.includes(ctx.command as FileTransferCommand)) {
    return { ok: false, code: "UNSUPPORTED_COMMAND", message: "unsupported file-transfer command" };
  }
  const command = ctx.command as FileTransferCommand;
  const op: FileTransferAuditOp = command;
  const params = asRecord(ctx.params);
  const requestedPath = readPath(params);
  const nodeDisplayName = ctx.node?.displayName;
  const startedAt = Date.now();

  if (!requestedPath) {
    return { ok: false, code: "INVALID_PARAMS", message: `${op} path required` };
  }

  const gate = await requestApproval({
    ctx,
    op,
    kind: commandKind(command),
    path: requestedPath,
    startedAt,
  });
  if (!gate.ok) {
    return { ok: false, code: gate.code, message: gate.message };
  }

  const forwardedParams = prepareParams({
    command,
    params,
    followSymlinks: gate.followSymlinks,
    maxBytes: gate.maxBytes,
  });
  if (command === "file.fetch") {
    const preflightDeny = await runFileFetchPreflight({
      ctx,
      op,
      params: forwardedParams,
      requestedPath,
      startedAt,
    });
    if (preflightDeny) {
      return preflightDeny;
    }
  } else if (command === "file.write") {
    const preflightDeny = await runWritePreflight({
      ctx,
      op,
      params: forwardedParams,
      requestedPath,
      startedAt,
    });
    if (preflightDeny) {
      return preflightDeny;
    }
  } else if (command === "dir.fetch") {
    const preflightDeny = await runDirFetchPreflight({
      ctx,
      op,
      params: forwardedParams,
      requestedPath,
      startedAt,
    });
    if (preflightDeny) {
      return preflightDeny;
    }
  }

  const result = await ctx.invokeNode({ params: forwardedParams });
  if (!result.ok) {
    await appendFileTransferAudit({
      op,
      nodeId: ctx.nodeId,
      nodeDisplayName,
      requestedPath,
      decision: "error",
      errorCode: result.code,
      errorMessage: result.message,
      durationMs: Date.now() - startedAt,
    });
    return {
      ok: false,
      code: result.code,
      message: `${op} failed: ${result.message}`,
      details: result.details,
      unavailable: true,
    };
  }

  const payload = readResultPayload(result);
  if (payload?.ok === false) {
    await appendFileTransferAudit({
      op,
      nodeId: ctx.nodeId,
      nodeDisplayName,
      requestedPath,
      canonicalPath: typeof payload.canonicalPath === "string" ? payload.canonicalPath : undefined,
      decision: "error",
      errorCode: typeof payload.code === "string" ? payload.code : undefined,
      errorMessage: typeof payload.message === "string" ? payload.message : undefined,
      durationMs: Date.now() - startedAt,
    });
    return result;
  }

  const canonicalPath =
    payload && typeof payload.path === "string" && payload.path ? payload.path : requestedPath;
  if (canonicalPath !== requestedPath) {
    const postflight = evaluateFilePolicy({
      nodeId: ctx.nodeId,
      nodeDisplayName,
      kind: commandKind(command),
      path: canonicalPath,
      pluginConfig: ctx.pluginConfig,
    });
    if (!postflight.ok) {
      await appendFileTransferAudit({
        op,
        nodeId: ctx.nodeId,
        nodeDisplayName,
        requestedPath,
        canonicalPath,
        decision: "denied:symlink_escape",
        errorCode: postflight.code,
        reason: postflight.reason,
        durationMs: Date.now() - startedAt,
      });
      return {
        ok: false,
        code: "SYMLINK_TARGET_DENIED",
        message: `${op} SYMLINK_TARGET_DENIED: requested path resolved to ${canonicalPath} which is not allowed by policy`,
      };
    }
  }
  if (command === "dir.fetch") {
    const archiveEntries = await listDirFetchArchiveEntries(payload);
    if (!archiveEntries.ok) {
      await appendFileTransferAudit({
        op,
        nodeId: ctx.nodeId,
        nodeDisplayName,
        requestedPath,
        canonicalPath,
        decision: "error",
        errorCode: archiveEntries.code,
        reason: archiveEntries.reason,
        durationMs: Date.now() - startedAt,
      });
      return policyDeniedResult({
        op,
        code: archiveEntries.code,
        message: `${archiveEntries.reason}; refusing archive transfer`,
        details: { path: canonicalPath, reason: archiveEntries.reason },
      });
    }
    const archiveDeny = await validateDirFetchEntries({
      ctx,
      op,
      requestedPath,
      canonicalPath,
      entries: archiveEntries.entries,
      startedAt,
      phase: "archive",
    });
    if (archiveDeny) {
      return archiveDeny;
    }
  }

  await appendFileTransferAudit({
|
||||
op,
|
||||
nodeId: ctx.nodeId,
|
||||
nodeDisplayName,
|
||||
requestedPath,
|
||||
canonicalPath,
|
||||
decision: "allowed",
|
||||
sizeBytes: typeof payload?.size === "number" ? payload.size : undefined,
|
||||
sha256: typeof payload?.sha256 === "string" ? payload.sha256 : undefined,
|
||||
durationMs: Date.now() - startedAt,
|
||||
});
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
export function createFileTransferNodeInvokePolicy(): OpenClawPluginNodeInvokePolicy {
|
||||
return {
|
||||
commands: COMMANDS,
|
||||
handle: handleFileTransferInvoke,
|
||||
};
|
||||
}
|
||||
62
extensions/file-transfer/src/shared/params.ts
Normal file
@@ -0,0 +1,62 @@
// Shared param-validation helpers used by all four agent tools.
// Goal: identical validation behavior + identical error shapes everywhere.

export type GatewayCallOptions = {
  gatewayUrl?: string;
  gatewayToken?: string;
  timeoutMs?: number;
};

export function readGatewayCallOptions(params: Record<string, unknown>): GatewayCallOptions {
  const opts: GatewayCallOptions = {};
  if (typeof params.gatewayUrl === "string" && params.gatewayUrl.trim()) {
    opts.gatewayUrl = params.gatewayUrl.trim();
  }
  if (typeof params.gatewayToken === "string" && params.gatewayToken.trim()) {
    opts.gatewayToken = params.gatewayToken.trim();
  }
  if (typeof params.timeoutMs === "number" && Number.isFinite(params.timeoutMs)) {
    opts.timeoutMs = params.timeoutMs;
  }
  return opts;
}

export function readTrimmedString(params: Record<string, unknown>, key: string): string {
  const value = params[key];
  return typeof value === "string" ? value.trim() : "";
}

export function readBoolean(
  params: Record<string, unknown>,
  key: string,
  defaultValue = false,
): boolean {
  const value = params[key];
  if (typeof value === "boolean") {
    return value;
  }
  return defaultValue;
}

export function readClampedInt(params: {
  input: Record<string, unknown>;
  key: string;
  defaultValue: number;
  hardMin: number;
  hardMax: number;
}): number {
  const value = params.input[params.key];
  const requested =
    typeof value === "number" && Number.isFinite(value) ? Math.floor(value) : params.defaultValue;
  return Math.max(params.hardMin, Math.min(requested, params.hardMax));
}

export function humanSize(bytes: number): string {
  if (bytes < 1024) {
    return `${bytes} B`;
  }
  if (bytes < 1024 * 1024) {
    return `${(bytes / 1024).toFixed(1)} KB`;
  }
  return `${(bytes / (1024 * 1024)).toFixed(2)} MB`;
}
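The clamping and formatting behavior of these helpers can be exercised in isolation. The snippet below copies `readClampedInt` and `humanSize` verbatim from the file above so it runs standalone; the byte limits are just example values:

```typescript
function readClampedInt(params: {
  input: Record<string, unknown>;
  key: string;
  defaultValue: number;
  hardMin: number;
  hardMax: number;
}): number {
  const value = params.input[params.key];
  const requested =
    typeof value === "number" && Number.isFinite(value) ? Math.floor(value) : params.defaultValue;
  return Math.max(params.hardMin, Math.min(requested, params.hardMax));
}

function humanSize(bytes: number): string {
  if (bytes < 1024) {
    return `${bytes} B`;
  }
  if (bytes < 1024 * 1024) {
    return `${(bytes / 1024).toFixed(1)} KB`;
  }
  return `${(bytes / (1024 * 1024)).toFixed(2)} MB`;
}

// A caller-supplied maxBytes beyond the hard ceiling is silently clamped,
// never rejected — oversized requests degrade rather than error.
const capped = readClampedInt({
  input: { maxBytes: 999_999_999 },
  key: "maxBytes",
  defaultValue: 8 * 1024 * 1024, // 8 MB default
  hardMin: 1,
  hardMax: 16 * 1024 * 1024, // 16 MB ceiling
});
// capped === 16 * 1024 * 1024; humanSize(capped) === "16.00 MB"
```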
506
extensions/file-transfer/src/shared/policy.test.ts
Normal file
@@ -0,0 +1,506 @@
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";

// Mock the plugin-sdk runtime-config surface so we can drive the policy
// reader from the test without booting a gateway. mutateConfigFile is also
// mocked so persistAllowAlways tests can assert what would have been written
// without touching ~/.openclaw/openclaw.json.
const getRuntimeConfigMock = vi.fn();
const mutateConfigFileMock = vi.fn();

vi.mock("openclaw/plugin-sdk/runtime-config-snapshot", () => ({
  getRuntimeConfig: () => getRuntimeConfigMock(),
}));
vi.mock("openclaw/plugin-sdk/config-mutation", () => ({
  mutateConfigFile: (input: unknown) => mutateConfigFileMock(input),
}));

// Imported AFTER vi.mock so the mocked module is what policy.ts binds to.
const { evaluateFilePolicy, persistAllowAlways } = await import("./policy.js");

beforeEach(() => {
  getRuntimeConfigMock.mockReset();
  mutateConfigFileMock.mockReset();
});

afterEach(() => {
  vi.restoreAllMocks();
});

function withConfig(fileTransfer: Record<string, unknown> | undefined) {
  if (fileTransfer === undefined) {
    getRuntimeConfigMock.mockReturnValue({});
  } else {
    getRuntimeConfigMock.mockReturnValue({
      plugins: {
        entries: {
          "file-transfer": {
            config: { nodes: fileTransfer },
          },
        },
      },
    });
  }
}

describe("evaluateFilePolicy — default deny", () => {
  it("returns NO_POLICY when no plugin config block is present", () => {
    getRuntimeConfigMock.mockReturnValue({});
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: false, code: "NO_POLICY", askable: false });
  });

  it("returns NO_POLICY when plugin policy block is missing", () => {
    getRuntimeConfigMock.mockReturnValue({ plugins: { entries: { "file-transfer": {} } } });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: false, code: "NO_POLICY" });
  });

  it("returns NO_POLICY when no entry exists for the node and no '*' fallback", () => {
    withConfig({ "other-node": { allowReadPaths: ["/tmp/**"] } });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: false, code: "NO_POLICY" });
  });

  it("prefers the current runtime config over a stale passed plugin config", () => {
    getRuntimeConfigMock.mockReturnValue({
      plugins: {
        entries: {
          "file-transfer": {
            config: {
              nodes: {
                n1: { allowReadPaths: ["/tmp/**"] },
              },
            },
          },
        },
      },
    });
    const r = evaluateFilePolicy({
      nodeId: "n1",
      kind: "read",
      path: "/tmp/x",
      pluginConfig: {
        nodes: {
          n1: { allowReadPaths: ["/stale/**"] },
        },
      },
    });
    expect(r).toMatchObject({ ok: true, reason: "matched-allow" });
  });
});

describe("evaluateFilePolicy — '..' traversal short-circuit", () => {
  it("rejects /allowed/../etc/passwd even when /allowed/** is allowed", () => {
    withConfig({
      n1: { allowReadPaths: ["/allowed/**"] },
    });
    const r = evaluateFilePolicy({
      nodeId: "n1",
      kind: "read",
      path: "/allowed/../etc/passwd",
    });
    expect(r).toMatchObject({ ok: false, code: "POLICY_DENIED", askable: false });
    expect(r.ok ? "" : r.reason).toMatch(/\.\./);
  });

  it("rejects a path that ENDS in /..", () => {
    withConfig({
      n1: { allowReadPaths: ["/tmp/**"] },
    });
    const r = evaluateFilePolicy({
      nodeId: "n1",
      kind: "read",
      path: "/tmp/foo/..",
    });
    expect(r).toMatchObject({ ok: false, code: "POLICY_DENIED" });
  });

  it("rejects bare '..'", () => {
    withConfig({
      n1: { allowReadPaths: ["/**"] },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: ".." });
    expect(r).toMatchObject({ ok: false, code: "POLICY_DENIED" });
  });
});

describe("evaluateFilePolicy — denyPaths always wins", () => {
  it("denies even when allowReadPaths matches", () => {
    withConfig({
      n1: {
        allowReadPaths: ["/tmp/**"],
        denyPaths: ["**/.ssh/**"],
      },
    });
    const r = evaluateFilePolicy({
      nodeId: "n1",
      kind: "read",
      path: "/tmp/.ssh/id_rsa",
    });
    expect(r).toMatchObject({ ok: false, code: "POLICY_DENIED", askable: false });
    expect(r.ok ? "" : r.reason).toMatch(/deny/);
  });

  it("denies even with ask=always (denyPaths is hard)", () => {
    withConfig({
      n1: {
        ask: "always",
        denyPaths: ["**/secrets/**"],
      },
    });
    const r = evaluateFilePolicy({
      nodeId: "n1",
      kind: "read",
      path: "/var/secrets/api.key",
    });
    expect(r).toMatchObject({ ok: false, code: "POLICY_DENIED", askable: false });
  });
});

describe("evaluateFilePolicy — allow matching", () => {
  it("allows on matched-allow with ask=off (default)", () => {
    withConfig({
      n1: { allowReadPaths: ["/tmp/**"] },
    });
    expect(evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/foo/bar.png" })).toEqual({
      ok: true,
      reason: "matched-allow",
      maxBytes: undefined,
      followSymlinks: false,
    });
  });

  it("propagates per-node maxBytes on matched-allow", () => {
    withConfig({
      n1: { allowReadPaths: ["/tmp/**"], maxBytes: 1024 },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: true, maxBytes: 1024 });
  });

  it("uses kind=write to consult allowWritePaths, not allowReadPaths", () => {
    withConfig({
      n1: { allowReadPaths: ["/tmp/**"], allowWritePaths: ["/srv/**"] },
    });
    expect(evaluateFilePolicy({ nodeId: "n1", kind: "write", path: "/srv/out.txt" })).toMatchObject(
      { ok: true },
    );
    expect(evaluateFilePolicy({ nodeId: "n1", kind: "write", path: "/tmp/out.txt" })).toMatchObject(
      { ok: false, code: "POLICY_DENIED" },
    );
  });

  it("propagates followSymlinks=false by default and =true when configured", () => {
    withConfig({
      n1: { allowReadPaths: ["/tmp/**"] },
    });
    expect(evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" })).toMatchObject({
      ok: true,
      followSymlinks: false,
    });

    withConfig({
      n2: { allowReadPaths: ["/tmp/**"], followSymlinks: true },
    });
    expect(evaluateFilePolicy({ nodeId: "n2", kind: "read", path: "/tmp/x" })).toMatchObject({
      ok: true,
      followSymlinks: true,
    });
  });

  it("expands tilde in patterns relative to homedir", () => {
    const home = os.homedir();
    withConfig({
      n1: { allowReadPaths: ["~/Screenshots/**"] },
    });
    expect(
      evaluateFilePolicy({
        nodeId: "n1",
        kind: "read",
        path: path.join(home, "Screenshots", "shot.png"),
      }),
    ).toMatchObject({ ok: true });
  });

  it("matches Windows node paths without gateway-local path semantics", () => {
    withConfig({
      n1: { allowReadPaths: ["C:/Users/me/**"] },
    });
    expect(
      evaluateFilePolicy({
        nodeId: "n1",
        kind: "read",
        path: "C:\\Users\\me\\file.txt",
      }),
    ).toMatchObject({ ok: true });
  });
});

describe("evaluateFilePolicy — ask modes", () => {
  it("ask=on-miss returns askable POLICY_DENIED on miss", () => {
    withConfig({
      n1: { ask: "on-miss", allowReadPaths: ["/var/log/**"] },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({
      ok: false,
      code: "POLICY_DENIED",
      askable: true,
      askMode: "on-miss",
    });
  });

  it("ask=on-miss miss preserves transfer caps for one-time approvals", () => {
    withConfig({
      n1: {
        ask: "on-miss",
        allowReadPaths: ["/var/log/**"],
        maxBytes: 4096,
        followSymlinks: true,
      },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({
      ok: false,
      code: "POLICY_DENIED",
      askable: true,
      askMode: "on-miss",
      maxBytes: 4096,
      followSymlinks: true,
    });
  });

  it("ask=on-miss still silent-allows on a match", () => {
    withConfig({
      n1: { ask: "on-miss", allowReadPaths: ["/tmp/**"] },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: true, reason: "matched-allow" });
  });

  it("ask=always always returns ask-always (prompt on every call)", () => {
    withConfig({
      n1: { ask: "always", allowReadPaths: ["/tmp/**"] },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: true, reason: "ask-always", askMode: "always" });
  });

  it("ask=off returns non-askable POLICY_DENIED on miss", () => {
    withConfig({
      n1: { ask: "off", allowReadPaths: ["/var/log/**"] },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: false, code: "POLICY_DENIED", askable: false });
  });

  it("invalid ask values normalize to off", () => {
    withConfig({
      n1: { ask: "sometimes", allowReadPaths: ["/var/log/**"] },
    });
    const r = evaluateFilePolicy({ nodeId: "n1", kind: "read", path: "/tmp/x" });
    expect(r).toMatchObject({ ok: false, askable: false });
  });
});

describe("evaluateFilePolicy — node-id resolution", () => {
  it("resolves by displayName when nodeId has no entry", () => {
    withConfig({
      "Lobster MacBook": { allowReadPaths: ["/tmp/**"] },
    });
    expect(
      evaluateFilePolicy({
        nodeId: "node-abc-123",
        nodeDisplayName: "Lobster MacBook",
        kind: "read",
        path: "/tmp/x",
      }),
    ).toMatchObject({ ok: true });
  });

  it("falls back to '*' wildcard when neither id nor displayName matches", () => {
    withConfig({
      "*": { allowReadPaths: ["/tmp/**"] },
    });
    expect(
      evaluateFilePolicy({
        nodeId: "n1",
        nodeDisplayName: "anything",
        kind: "read",
        path: "/tmp/x",
      }),
    ).toMatchObject({ ok: true });
  });
});

describe("persistAllowAlways", () => {
  it("appends path to allowReadPaths under the existing matching key", async () => {
    let captured: Record<string, unknown> | null = null;
    mutateConfigFileMock.mockImplementation(
      async ({ mutate }: { mutate: (draft: Record<string, unknown>) => void }) => {
        const draft: Record<string, unknown> = {
          plugins: {
            entries: {
              "file-transfer": {
                config: { nodes: { n1: { allowReadPaths: ["/tmp/**"] } } },
              },
            },
          },
        };
        mutate(draft);
        captured = draft;
      },
    );
    await persistAllowAlways({ nodeId: "n1", kind: "read", path: "/srv/added.png" });

    expect(mutateConfigFileMock).toHaveBeenCalledOnce();
    // Drill back into the captured draft to assert the added path.
    const root = captured as unknown as {
      plugins: {
        entries: {
          "file-transfer": {
            config: { nodes: Record<string, { allowReadPaths: string[] }> };
          };
        };
      };
    };
    expect(root.plugins.entries["file-transfer"].config.nodes.n1.allowReadPaths).toContain(
      "/srv/added.png",
    );
  });

  it("creates a new node entry keyed by displayName when no entry exists", async () => {
    let captured: Record<string, unknown> | null = null;
    mutateConfigFileMock.mockImplementation(
      async ({ mutate }: { mutate: (draft: Record<string, unknown>) => void }) => {
        const draft: Record<string, unknown> = {};
        mutate(draft);
        captured = draft;
      },
    );

    await persistAllowAlways({
      nodeId: "n1",
      nodeDisplayName: "Lobster",
      kind: "write",
      path: "/srv/out.txt",
    });

    const root = captured as unknown as {
      plugins: {
        entries: {
          "file-transfer": {
            config: { nodes: Record<string, { allowWritePaths: string[] }> };
          };
        };
      };
    };
    expect(root.plugins.entries["file-transfer"].config.nodes["Lobster"].allowWritePaths).toContain(
      "/srv/out.txt",
    );
  });

  it("never persists under the '*' wildcard even when '*' is the matching key", async () => {
    let captured: Record<string, unknown> | null = null;
    mutateConfigFileMock.mockImplementation(
      async ({ mutate }: { mutate: (draft: Record<string, unknown>) => void }) => {
        const draft: Record<string, unknown> = {
          plugins: {
            entries: {
              "file-transfer": {
                config: { nodes: { "*": { allowReadPaths: ["/var/log/**"] } } },
              },
            },
          },
        };
        mutate(draft);
        captured = draft;
      },
    );

    await persistAllowAlways({
      nodeId: "n1",
      nodeDisplayName: "Lobster",
      kind: "read",
      path: "/srv/added.png",
    });

    const root = captured as unknown as {
      plugins: {
        entries: {
          "file-transfer": {
            config: { nodes: Record<string, { allowReadPaths?: string[] }> };
          };
        };
      };
    };
    // The "*" entry must not have been mutated.
    expect(root.plugins.entries["file-transfer"].config.nodes["*"].allowReadPaths).toEqual([
      "/var/log/**",
    ]);
    // A new entry keyed by displayName (not "*") must hold the new path.
    expect(root.plugins.entries["file-transfer"].config.nodes["Lobster"].allowReadPaths).toEqual([
      "/srv/added.png",
    ]);
  });

  it("rejects unsafe keys (__proto__, prototype, constructor) that would mutate prototype chain", async () => {
    mutateConfigFileMock.mockImplementation(
      async ({ mutate }: { mutate: (draft: Record<string, unknown>) => void }) => {
        const draft: Record<string, unknown> = {};
        mutate(draft);
      },
    );

    await expect(
      persistAllowAlways({
        nodeId: "n1",
        nodeDisplayName: "__proto__",
        kind: "read",
        path: "/etc/passwd",
      }),
    ).rejects.toThrow(/unsafe key.*__proto__/);

    await expect(
      persistAllowAlways({
        nodeId: "constructor",
        kind: "read",
        path: "/etc/passwd",
      }),
    ).rejects.toThrow(/unsafe key.*constructor/);
  });

  it("dedupes when path already present", async () => {
    let captured: Record<string, unknown> | null = null;
    mutateConfigFileMock.mockImplementation(
      async ({ mutate }: { mutate: (draft: Record<string, unknown>) => void }) => {
        const draft: Record<string, unknown> = {
          plugins: {
            entries: {
              "file-transfer": {
                config: { nodes: { n1: { allowReadPaths: ["/tmp/x"] } } },
              },
            },
          },
        };
        mutate(draft);
        captured = draft;
      },
    );
    await persistAllowAlways({ nodeId: "n1", kind: "read", path: "/tmp/x" });

    const root = captured as unknown as {
      plugins: {
        entries: {
          "file-transfer": {
            config: { nodes: Record<string, { allowReadPaths: string[] }> };
          };
        };
      };
    };
    const list = root.plugins.entries["file-transfer"].config.nodes.n1.allowReadPaths;
    expect(list.filter((p) => p === "/tmp/x").length).toBe(1);
  });
});
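One property the traversal tests above pin down is worth seeing in isolation: the check runs on the raw string with both separators unified, so normalization can never hide a `..` segment. The helper below mirrors `containsParentRefSegment` from policy.ts (which appears later in this diff):

```typescript
// Unify separators, then look for a literal ".." path segment in the
// raw (un-normalized) string. path.normalize() would collapse
// "/allowed/../etc/passwd" to "/etc/passwd" before any glob could flag
// the traversal, so the raw string is what gets inspected.
function containsParentRefSegment(p: string): boolean {
  const unified = p.replace(/\\/gu, "/");
  return unified.split("/").includes("..");
}
```

Note that only an exact `..` segment trips it: a file literally named `..hidden` still passes, which is why the tests assert on segments rather than substrings.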
383
extensions/file-transfer/src/shared/policy.ts
Normal file
@@ -0,0 +1,383 @@
|
||||
// Path policy for file-transfer node.invoke calls.
|
||||
//
|
||||
// Default behavior is DENY. The operator must explicitly opt in by adding
|
||||
// a config block to ~/.openclaw/openclaw.json under
|
||||
// `plugins.entries.file-transfer.config.nodes`. Without a matching block,
|
||||
// every file operation is rejected before reaching the node.
|
||||
//
|
||||
// Schema (informal):
|
||||
//
|
||||
// "plugins": {
|
||||
// "entries": {
|
||||
// "file-transfer": {
|
||||
// "config": {
|
||||
// "nodes": {
|
||||
// "<nodeId-or-displayName>": {
|
||||
// "ask": "off" | "on-miss" | "always",
|
||||
// "allowReadPaths": ["~/Screenshots/**", "/tmp/**"],
|
||||
// "allowWritePaths": ["~/Downloads/**"],
|
||||
// "denyPaths": ["**/.ssh/**", "**/.aws/**"],
|
||||
// "maxBytes": 16777216,
|
||||
// "followSymlinks": false
|
||||
// },
|
||||
// "*": { "ask": "on-miss" }
|
||||
// }
|
||||
// }
|
||||
// }
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// `ask` modes:
|
||||
// off — silent: allow if matched, deny if not (today's default)
|
||||
// on-miss — silent allow if matched; prompt operator if not matched
|
||||
// always — prompt operator on every call (denyPaths still hard-deny)
|
||||
//
|
||||
// `denyPaths` always wins, even in `ask: always`.
|
||||
// `allow-always` from the prompt appends the path back into allowReadPaths /
|
||||
// allowWritePaths via mutateConfigFile.
|
||||
//
|
||||
// `followSymlinks` (default false): if false, the node-side handler
|
||||
// realpaths the requested path (or its parent for new-file writes) BEFORE
|
||||
// any I/O, and refuses with SYMLINK_REDIRECT if it differs from the
|
||||
// requested path. This stops a symlink in user-controlled territory
|
||||
// (e.g. ~/Downloads/evil → /etc) from redirecting an allowed-looking path
|
||||
// to a disallowed canonical location. Set to true to opt back into the
|
||||
// looser "follow + post-flight check" behavior, e.g. on macOS where
|
||||
// /var → /private/var trips the check for /var/folders paths.
|
||||
|
||||
import os from "node:os";
|
||||
import path from "node:path";
|
||||
import { minimatch } from "minimatch";
|
||||
import { mutateConfigFile } from "openclaw/plugin-sdk/config-mutation";
|
||||
import { getRuntimeConfig } from "openclaw/plugin-sdk/runtime-config-snapshot";
|
||||
|
||||
export type FilePolicyKind = "read" | "write";
|
||||
export type FilePolicyAskMode = "off" | "on-miss" | "always";
|
||||
|
||||
export type FilePolicyDecision =
|
||||
| { ok: true; reason: "matched-allow"; maxBytes?: number; followSymlinks: boolean }
|
||||
| {
|
||||
ok: true;
|
||||
reason: "ask-always";
|
||||
askMode: FilePolicyAskMode;
|
||||
maxBytes?: number;
|
||||
followSymlinks: boolean;
|
||||
}
|
||||
| {
|
||||
ok: false;
|
||||
code: "NO_POLICY" | "POLICY_DENIED";
|
||||
reason: string;
|
||||
askable: boolean;
|
||||
askMode?: FilePolicyAskMode;
|
||||
maxBytes?: number;
|
||||
followSymlinks?: boolean;
|
||||
};
|
||||
|
||||
type NodeFilePolicyConfig = {
|
||||
ask?: FilePolicyAskMode;
|
||||
allowReadPaths?: string[];
|
||||
allowWritePaths?: string[];
|
||||
denyPaths?: string[];
|
||||
maxBytes?: number;
|
||||
followSymlinks?: boolean;
|
||||
};
|
||||
|
||||
type FilePolicyConfig = Record<string, NodeFilePolicyConfig>;
|
||||
|
||||
function asFilePolicyConfig(value: unknown): FilePolicyConfig | null {
|
||||
if (!value || typeof value !== "object" || Array.isArray(value)) {
|
||||
return null;
|
||||
}
|
||||
return value as FilePolicyConfig;
|
||||
}
|
||||
|
||||
function readFilePolicyConfigFromPluginConfig(pluginConfig: unknown): FilePolicyConfig | null {
|
||||
if (!pluginConfig || typeof pluginConfig !== "object" || Array.isArray(pluginConfig)) {
|
||||
return null;
|
||||
}
|
||||
const nodes = (pluginConfig as { nodes?: unknown }).nodes;
|
||||
return asFilePolicyConfig(nodes);
|
||||
}
|
||||
|
||||
function readPluginConfigFromRuntimeConfig(): Record<string, unknown> | null {
|
||||
const cfg = getRuntimeConfig();
|
||||
const plugins = (cfg as { plugins?: unknown }).plugins;
|
||||
if (!plugins || typeof plugins !== "object") {
|
||||
return null;
|
||||
}
|
||||
const entries = (plugins as { entries?: unknown }).entries;
|
||||
if (!entries || typeof entries !== "object") {
|
||||
return null;
|
||||
}
|
||||
const entry = (entries as Record<string, unknown>)["file-transfer"];
|
||||
if (!entry || typeof entry !== "object") {
|
||||
return null;
|
||||
}
|
||||
const pluginConfig = (entry as { config?: unknown }).config;
|
||||
return pluginConfig && typeof pluginConfig === "object" && !Array.isArray(pluginConfig)
|
||||
? (pluginConfig as Record<string, unknown>)
|
||||
: null;
|
||||
}
|
||||
|
||||
function readFilePolicyConfig(pluginConfig?: Record<string, unknown>): FilePolicyConfig | null {
|
||||
return (
|
||||
readFilePolicyConfigFromPluginConfig(readPluginConfigFromRuntimeConfig()) ??
|
||||
readFilePolicyConfigFromPluginConfig(pluginConfig)
|
||||
);
|
||||
}
|
||||
|
||||
function expandTilde(p: string): string {
|
||||
if (p.startsWith("~/") || p === "~") {
|
||||
return path.join(os.homedir(), p.slice(p === "~" ? 1 : 2));
|
||||
}
|
||||
return p;
|
||||
}
|
||||
|
||||
function normalizeGlobs(patterns: string[] | undefined): string[] {
|
||||
if (!Array.isArray(patterns)) {
|
||||
return [];
|
||||
}
|
||||
return patterns
|
||||
.filter((p): p is string => typeof p === "string" && p.trim().length > 0)
|
||||
.map((p) => expandTilde(p.trim()));
|
||||
}
|
||||
|
||||
function matchesAny(target: string, patterns: string[]): boolean {
|
||||
const normalizedTarget = target.replace(/\\/gu, "/");
|
||||
for (const pattern of patterns) {
|
||||
const normalizedPattern = pattern.replace(/\\/gu, "/");
|
||||
if (
|
||||
minimatch(target, pattern, { dot: true }) ||
|
||||
minimatch(normalizedTarget, normalizedPattern, { dot: true })
|
||||
) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
function resolveNodePolicy(
|
||||
config: FilePolicyConfig,
|
||||
nodeId: string,
|
||||
nodeDisplayName?: string,
|
||||
): { key: string; entry: NodeFilePolicyConfig } | null {
|
||||
const candidates = [nodeId, nodeDisplayName].filter(
|
||||
(k): k is string => typeof k === "string" && k.length > 0,
|
||||
);
|
||||
for (const key of candidates) {
|
||||
if (config[key]) {
|
||||
return { key, entry: config[key] };
|
||||
}
|
||||
}
|
||||
if (config["*"]) {
|
||||
return { key: "*", entry: config["*"] };
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
function normalizeAskMode(value: unknown): FilePolicyAskMode {
|
||||
if (value === "on-miss" || value === "always" || value === "off") {
|
||||
return value;
|
||||
}
|
||||
return "off";
|
||||
}
|
||||
|
||||
/**
|
||||
* Evaluate whether (nodeId, kind, path) is permitted.
|
||||
*
|
||||
* Resolution order:
|
||||
* 1. No file-transfer config or no entry for this node → NO_POLICY (deny,
|
||||
* not askable — operator hasn't opted in at all).
|
||||
* 2. denyPaths matches → POLICY_DENIED, not askable (hard deny).
|
||||
* 3. ask=always → ask-always (prompt every time).
|
||||
* 4. allowPaths matches → matched-allow (silent allow).
|
||||
* 5. ask=on-miss → POLICY_DENIED with askable=true.
|
||||
* 6. ask=off (or unset) → POLICY_DENIED, not askable.
|
||||
*/
|
||||
/**
|
||||
* Reject any path whose RAW string contains a ".." segment. Checking the
|
||||
* raw string (not the normalized form) is the point — `posix.normalize`
|
||||
* collapses "/allowed/../etc/passwd" to "/etc/passwd", which would defeat
|
||||
* the check. We want to flag the literal traversal sequence the agent
|
||||
* passed in, before any glob match runs.
|
||||
*
|
||||
* Without this, "/allowed/../etc/passwd" matches the glob "/allowed/**"
|
||||
* pre-realpath, so the node fetches the bytes before the post-flight
|
||||
* canonical-path check denies — too late, the bytes already crossed the
|
||||
* node→gateway boundary.
|
||||
*
|
||||
* Treats backslash and forward slash as equivalent separators so a Windows
|
||||
* node can't be hit with "C:\\allowed\\..\\Windows\\system.ini".
|
||||
*/
|
||||
function containsParentRefSegment(p: string): boolean {
|
||||
const unified = p.replace(/\\/gu, "/");
|
||||
return unified.split("/").includes("..");
|
||||
}
|
||||
export function evaluateFilePolicy(input: {
  nodeId: string;
  nodeDisplayName?: string;
  kind: FilePolicyKind;
  path: string;
  pluginConfig?: Record<string, unknown>;
}): FilePolicyDecision {
  // Reject literal traversal sequences before consulting any allow/deny
  // glob list. minimatch on the raw string can wrongly accept
  // "/allowed/../etc/passwd" against "/allowed/**".
  if (containsParentRefSegment(input.path)) {
    return {
      ok: false,
      code: "POLICY_DENIED",
      reason: "path contains '..' segments; reject before glob match",
      askable: false,
    };
  }
  const config = readFilePolicyConfig(input.pluginConfig);
  if (!config) {
    return {
      ok: false,
      code: "NO_POLICY",
      reason:
        "no plugins.entries.file-transfer.config.nodes config; file-transfer is deny-by-default until configured",
      askable: false,
    };
  }
  const resolved = resolveNodePolicy(config, input.nodeId, input.nodeDisplayName);
  if (!resolved) {
    return {
      ok: false,
      code: "NO_POLICY",
      reason: `no file-transfer policy entry for "${input.nodeDisplayName ?? input.nodeId}"; configure plugins.entries.file-transfer.config.nodes or "*"`,
      askable: false,
    };
  }
  const nodeConfig = resolved.entry;
  const askMode = normalizeAskMode(nodeConfig.ask);

  const maxBytes =
    typeof nodeConfig.maxBytes === "number" && Number.isFinite(nodeConfig.maxBytes)
      ? Math.max(1, Math.floor(nodeConfig.maxBytes))
      : undefined;
  const followSymlinks = nodeConfig.followSymlinks === true;

  // 1. Deny patterns always win.
  const denyPatterns = normalizeGlobs(nodeConfig.denyPaths);
  if (matchesAny(input.path, denyPatterns)) {
    return {
      ok: false,
      code: "POLICY_DENIED",
      reason: "path matches a denyPaths pattern",
      askable: false,
      askMode,
      maxBytes,
      followSymlinks,
    };
  }

  // 2. ask=always: prompt every time even if matched.
  if (askMode === "always") {
    return { ok: true, reason: "ask-always", askMode, maxBytes, followSymlinks };
  }

  // 3. Match against the allow list for this kind.
  const allowPatterns =
    input.kind === "read"
      ? normalizeGlobs(nodeConfig.allowReadPaths)
      : normalizeGlobs(nodeConfig.allowWritePaths);

  if (allowPatterns.length > 0 && matchesAny(input.path, allowPatterns)) {
    return { ok: true, reason: "matched-allow", maxBytes, followSymlinks };
  }

  // 4. No allow match. Either askable on miss or hard-deny.
  if (askMode === "on-miss") {
    return {
      ok: false,
      code: "POLICY_DENIED",
      reason: `path does not match any allow${input.kind === "read" ? "Read" : "Write"}Paths pattern`,
      askable: true,
      askMode,
      maxBytes,
      followSymlinks,
    };
  }

  return {
    ok: false,
    code: "POLICY_DENIED",
    reason:
      allowPatterns.length === 0
        ? `no allow${input.kind === "read" ? "Read" : "Write"}Paths configured`
        : `path does not match any allow${input.kind === "read" ? "Read" : "Write"}Paths pattern`,
    askable: false,
    askMode,
    maxBytes,
    followSymlinks,
  };
}
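The decision ladder above (literal `..` rejection, then deny globs, then ask=always, then the kind-specific allow list, then on-miss/hard-deny) can be sketched standalone. This is a hedged illustration, not the plugin's code: it substitutes simple prefix matching for minimatch globs, and the names (`decideRead`, `MiniPolicy`) are hypothetical.

```typescript
// Hypothetical, simplified model of the precedence in evaluateFilePolicy.
// Prefix matching stands in for minimatch globs; the real code also
// carries maxBytes/followSymlinks through each branch.
type AskMode = "off" | "on-miss" | "always";

interface MiniPolicy {
  ask: AskMode;
  denyPaths: string[];
  allowReadPaths: string[];
}

function decideRead(policy: MiniPolicy, p: string): "deny" | "ask" | "allow" {
  // Literal traversal segments are rejected before any pattern matching.
  if (p.replace(/\\/g, "/").split("/").includes("..")) return "deny";
  // 1. Deny patterns always win.
  if (policy.denyPaths.some((d) => p.startsWith(d))) return "deny";
  // 2. ask=always prompts even for paths the allow list would match.
  if (policy.ask === "always") return "ask";
  // 3. Allow-list match succeeds silently.
  if (policy.allowReadPaths.some((a) => p.startsWith(a))) return "allow";
  // 4. Miss: askable only when ask=on-miss, otherwise a hard deny.
  return policy.ask === "on-miss" ? "ask" : "deny";
}

const policy: MiniPolicy = {
  ask: "on-miss",
  denyPaths: ["/home/user/.ssh"],
  allowReadPaths: ["/home/user/projects"],
};

const verdicts = [
  decideRead(policy, "/home/user/projects/app/main.ts"), // allow
  decideRead(policy, "/home/user/.ssh/id_ed25519"),      // deny (deny wins)
  decideRead(policy, "/var/log/syslog"),                 // ask (on-miss)
  decideRead(policy, "/home/user/projects/../.ssh/key"), // deny ('..' segment)
];
```

Note that the traversal check fires before the allow list, so a path that textually starts with an allowed prefix but contains `..` never reaches pattern matching.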

/**
 * Reject special object keys that would mutate the prototype chain when
 * used as a property name (e.g. `__proto__` setter on a plain object).
 * The nodeDisplayName comes from paired-node metadata which we don't
 * fully control; refuse to persist policy under a key that could corrupt
 * the plugin policy container's prototype.
 */
function assertSafeConfigKey(key: string): string {
  if (key === "__proto__" || key === "prototype" || key === "constructor") {
    throw new Error(`refusing to persist file-transfer policy under unsafe key: ${key}`);
  }
  return key;
}

/**
 * Persist an "allow-always" approval by appending the path to the
 * relevant allowReadPaths / allowWritePaths list for the node. Uses
 * mutateConfigFile so the change survives gateway restarts.
 *
 * Inserts under the node's own entry (never the "*" wildcard — see the
 * SECURITY note below). If no entry exists yet, creates one keyed by
 * nodeDisplayName ?? nodeId.
 */
export async function persistAllowAlways(input: {
  nodeId: string;
  nodeDisplayName?: string;
  kind: FilePolicyKind;
  path: string;
}): Promise<void> {
  const field = input.kind === "read" ? "allowReadPaths" : "allowWritePaths";
  await mutateConfigFile({
    afterWrite: { mode: "none", reason: "file-transfer allow-always policy update" },
    mutate: (draft) => {
      // Plugin config is intentionally plugin-owned; the root OpenClawConfig
      // type only guarantees `Record<string, unknown>` here.
      const root = draft as unknown as Record<string, unknown>;
      const plugins = (root.plugins ??= {}) as Record<string, unknown>;
      const entries = (plugins.entries ??= {}) as Record<string, unknown>;
      const pluginEntry = (entries["file-transfer"] ??= {}) as Record<string, unknown>;
      const pluginConfig = (pluginEntry.config ??= {}) as Record<string, unknown>;
      const fileTransfer = (pluginConfig.nodes ??= {}) as Record<string, NodeFilePolicyConfig>;

      // SECURITY: never persist allow-always under the "*" wildcard. An
      // operator approving a path on node A must not silently grant the
      // same path on every other node sharing the wildcard entry. Always
      // write under the specific node's own entry, creating it if needed.
      const candidates = [input.nodeId, input.nodeDisplayName].filter(
        (k): k is string => typeof k === "string" && k.length > 0,
      );
      // Use hasOwnProperty so a node with displayName "constructor" doesn't
      // accidentally hit Object.prototype.constructor and pretend to match.
      let key = candidates.find((c) => Object.prototype.hasOwnProperty.call(fileTransfer, c));
      if (!key) {
        key = assertSafeConfigKey(input.nodeDisplayName ?? input.nodeId);
        fileTransfer[key] = {};
      }
      const entry = fileTransfer[key];
      const list = Array.isArray(entry[field]) ? entry[field] : [];
      if (!list.includes(input.path)) {
        list.push(input.path);
      }
      entry[field] = list;
    },
  });
}
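The `hasOwnProperty` lookup and the unsafe-key guard above defend against two distinct JavaScript footguns when a key is attacker-influenced: inherited properties masquerading as config entries, and `__proto__` assignment polluting the prototype. A minimal standalone demonstration (guard copied in simplified form; not the plugin's exported function):

```typescript
// Why plain `key in obj` is unsafe for untrusted keys: "constructor"
// exists on Object.prototype, so an inherited lookup would "find" a
// policy entry that was never configured.
const policies: Record<string, unknown> = {};

const inheritedHit = "constructor" in policies; // true (from the prototype)
const ownHit = Object.prototype.hasOwnProperty.call(policies, "constructor"); // false

// Simplified copy of the guard that refuses prototype-mutating keys.
function assertSafeConfigKey(key: string): string {
  if (key === "__proto__" || key === "prototype" || key === "constructor") {
    throw new Error(`refusing to persist policy under unsafe key: ${key}`);
  }
  return key;
}

let threw = false;
try {
  assertSafeConfigKey("__proto__");
} catch {
  threw = true;
}
const normalKeyOk = assertSafeConfigKey("node-a") === "node-a";
```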
58  extensions/file-transfer/src/tools/dir-fetch-tool.test.ts  Normal file
@@ -0,0 +1,58 @@
import { spawn } from "node:child_process";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { validateTarUncompressedBudget } from "./dir-fetch-tool.js";

let tmpRoot: string;

beforeEach(async () => {
  tmpRoot = await fs.realpath(await fs.mkdtemp(path.join(os.tmpdir(), "dir-fetch-tool-test-")));
});

afterEach(async () => {
  await fs.rm(tmpRoot, { recursive: true, force: true });
});

async function tarDirectory(dir: string): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(tarBin, ["-czf", "-", "-C", dir, "."], {
      stdio: ["ignore", "pipe", "pipe"],
    });
    const chunks: Buffer[] = [];
    let stderr = "";
    child.stdout.on("data", (chunk: Buffer) => chunks.push(chunk));
    child.stderr.on("data", (chunk: Buffer) => {
      stderr += chunk.toString();
    });
    child.on("close", (code) => {
      if (code !== 0) {
        reject(new Error(`tar exited ${code}: ${stderr}`));
        return;
      }
      resolve(Buffer.concat(chunks));
    });
    child.on("error", reject);
  });
}

const testUnlessWindows = process.platform === "win32" ? it.skip : it;

describe("validateTarUncompressedBudget", () => {
  testUnlessWindows(
    "rejects an archive before extraction when expanded bytes exceed budget",
    async () => {
      await fs.writeFile(path.join(tmpRoot, "zeros.txt"), "0".repeat(128));
      const tarBuffer = await tarDirectory(tmpRoot);

      await expect(validateTarUncompressedBudget(tarBuffer, 64)).resolves.toMatchObject({
        ok: false,
      });
      await expect(validateTarUncompressedBudget(tarBuffer, 256)).resolves.toMatchObject({
        ok: true,
      });
    },
  );
});
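The test above uses a tiny file of repeated zeros because that is exactly the shape a decompression bomb exploits: gzip collapses repetitive input so far that gigabytes of expansion can hide under a 16 MB compressed cap. A quick standalone zlib sketch of the ratio (illustrative only, not part of the suite):

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// 8 MB of one repeated byte: the pathological input the uncompressed
// budget defends against, in miniature.
const uncompressed = Buffer.alloc(8 * 1024 * 1024, "0");
const compressed = gzipSync(uncompressed);

// Compressed size is a tiny fraction of what it expands to, which is
// why a compressed-size cap alone is not a safe extraction bound.
const ratio = uncompressed.byteLength / compressed.byteLength;
const roundTrips = gunzipSync(compressed).equals(uncompressed);
```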
705  extensions/file-transfer/src/tools/dir-fetch-tool.ts  Normal file
@@ -0,0 +1,705 @@
import { spawn } from "node:child_process";
import crypto from "node:crypto";
import fs from "node:fs/promises";
import path from "node:path";
import {
  callGatewayTool,
  listNodes,
  resolveNodeIdFromList,
  type AnyAgentTool,
  type NodeListNode,
} from "openclaw/plugin-sdk/agent-harness-runtime";
import { saveMediaBuffer } from "openclaw/plugin-sdk/media-store";
import { Type } from "typebox";
import { appendFileTransferAudit } from "../shared/audit.js";
import { throwFromNodePayload } from "../shared/errors.js";
import { IMAGE_MIME_INLINE_SET, mimeFromExtension } from "../shared/mime.js";
import {
  humanSize,
  readBoolean,
  readClampedInt,
  readGatewayCallOptions,
  readTrimmedString,
} from "../shared/params.js";

const DIR_FETCH_DEFAULT_MAX_BYTES = 8 * 1024 * 1024;
const DIR_FETCH_HARD_MAX_BYTES = 16 * 1024 * 1024;
const FILE_TRANSFER_SUBDIR = "file-transfer";

// Cap how many local file paths we surface in details.media.mediaUrls.
// Larger trees still land on disk but we don't spam the channel adapter
// with hundreds of attachments.
const MEDIA_URL_CAP = 25;

// Hard timeout for gateway-side tar processes.
const TAR_UNPACK_TIMEOUT_MS = 60_000;

// Cap on number of entries pre-validated. The compressed tar is already
// capped at DIR_FETCH_HARD_MAX_BYTES upstream, and we walk the unpacked
// tree to compute hashes — TAR_UNPACK_MAX_ENTRIES bounds how much work
// that walk can do.
const TAR_UNPACK_MAX_ENTRIES = 5000;

// Hard caps on uncompressed extraction. Defends against decompression-bomb
// archives that compress to <16MB but expand to gigabytes. Both caps are
// enforced during the post-extract walk: total bytes summed across entries
// and per-file size to bound any single fs.stat / hash operation.
const DIR_FETCH_MAX_UNCOMPRESSED_BYTES = 64 * 1024 * 1024;
const DIR_FETCH_MAX_SINGLE_FILE_BYTES = 16 * 1024 * 1024;

const DirFetchToolSchema = Type.Object({
  node: Type.String({
    description: "Node id, name, or IP. Resolves the same way as the nodes tool.",
  }),
  path: Type.String({
    description: "Absolute path to the directory on the node to fetch. Canonicalized server-side.",
  }),
  maxBytes: Type.Optional(
    Type.Number({
      description:
        "Max gzipped tarball bytes to fetch. Default 8 MB, hard ceiling 16 MB (single round-trip).",
    }),
  ),
  includeDotfiles: Type.Optional(
    Type.Boolean({
      description: "Reserved for v2; currently always includes dotfiles (v1 quirk in BSD tar).",
    }),
  ),
  gatewayUrl: Type.Optional(Type.String()),
  gatewayToken: Type.Optional(Type.String()),
  timeoutMs: Type.Optional(Type.Number()),
});

async function computeFileSha256(filePath: string): Promise<string> {
  // Stream the hash so we never pull a whole large file into memory.
  // file_fetch caps single files at 16MB, but unpacked dir_fetch entries
  // share the 64MB uncompressed budget — better to stream regardless.
  const hash = crypto.createHash("sha256");
  const handle = await fs.open(filePath, "r");
  try {
    const chunkSize = 64 * 1024;
    const buf = Buffer.allocUnsafe(chunkSize);
    while (true) {
      const { bytesRead } = await handle.read(buf, 0, chunkSize, null);
      if (bytesRead === 0) {
        break;
      }
      hash.update(buf.subarray(0, bytesRead));
    }
  } finally {
    await handle.close();
  }
  return hash.digest("hex");
}
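Streaming the hash in 64 KB chunks is safe because incremental `hash.update` calls over consecutive slices produce exactly the same digest as hashing the whole buffer at once — the property `computeFileSha256` relies on. A standalone check of that equivalence:

```typescript
import crypto from "node:crypto";

// Enough data to span several 64 KB chunks.
const data = crypto.randomBytes(300 * 1024);

// One-shot digest over the full buffer.
const oneShot = crypto.createHash("sha256").update(data).digest("hex");

// Incremental digest, 64 KB at a time, as the streaming file reader does.
const chunked = crypto.createHash("sha256");
for (let off = 0; off < data.byteLength; off += 64 * 1024) {
  chunked.update(data.subarray(off, off + 64 * 1024));
}
const streamed = chunked.digest("hex");
```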

/**
 * Run two passes against the buffer to enumerate entries BEFORE we extract:
 *
 * 1. `tar -tf -` produces names ONLY, one per line. This is whitespace-safe
 *    because each line is exactly one path; no parsing of fixed columns.
 *    Used to validate paths (reject absolute, '..' traversal).
 * 2. `tar -tvf -` adds type info via the `ls -l`-style perm prefix.
 *    Used ONLY to detect symlinks / hardlinks / non-regular entries via
 *    the FIRST CHARACTER of each line, never the path column.
 *
 * Size limits are enforced at the *extraction* step instead — the tar
 * unpack process is bounded by the maxBytes we already pass through, and
 * the post-extract walkDir is hard-capped by TAR_UNPACK_MAX_ENTRIES.
 * Trying to parse uncompressed sizes from `tar -tvf` output is fragile
 * (filenames with whitespace shift the columns) and Aisle flagged that
 * shape as a bypass primitive — drop it.
 */
async function listTarPaths(
  tarBuffer: Buffer,
): Promise<{ ok: true; paths: string[] } | { ok: false; reason: string }> {
  return new Promise((resolve) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(tarBin, ["-tzf", "-"], { stdio: ["pipe", "pipe", "pipe"] });
    let stdout = "";
    let stderr = "";
    let aborted = false;
    const watchdog = setTimeout(() => {
      aborted = true;
      try {
        child.kill("SIGKILL");
      } catch {
        /* gone */
      }
      resolve({ ok: false, reason: "tar -tzf timed out" });
    }, 30_000);
    child.stdout.on("data", (c: Buffer) => {
      stdout += c.toString();
      if (stdout.length > 32 * 1024 * 1024) {
        aborted = true;
        try {
          child.kill("SIGKILL");
        } catch {
          /* gone */
        }
        clearTimeout(watchdog);
        resolve({ ok: false, reason: "tar -tzf output too large" });
      }
    });
    child.stderr.on("data", (c: Buffer) => {
      stderr += c.toString();
    });
    child.on("close", (code) => {
      clearTimeout(watchdog);
      if (aborted) {
        return;
      }
      if (code !== 0) {
        resolve({ ok: false, reason: `tar -tzf exited ${code}: ${stderr.slice(0, 200)}` });
        return;
      }
      // tar -tf emits one path per line with literal newlines as record
      // separators. Filenames containing newlines are exotic enough that
      // refusing them is safer than trying to parse around them.
      const paths = stdout.split("\n").filter((l) => l.length > 0);
      resolve({ ok: true, paths });
    });
    child.on("error", (e) => {
      clearTimeout(watchdog);
      if (!aborted) {
        resolve({ ok: false, reason: `tar -tzf error: ${String(e)}` });
      }
    });
    child.stdin.end(tarBuffer);
  });
}

async function listTarTypeChars(
  tarBuffer: Buffer,
): Promise<{ ok: true; typeChars: string[] } | { ok: false; reason: string }> {
  return new Promise((resolve) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(tarBin, ["-tzvf", "-"], { stdio: ["pipe", "pipe", "pipe"] });
    let stdout = "";
    let stderr = "";
    let aborted = false;
    const watchdog = setTimeout(() => {
      aborted = true;
      try {
        child.kill("SIGKILL");
      } catch {
        /* gone */
      }
      resolve({ ok: false, reason: "tar -tzvf timed out" });
    }, 30_000);
    child.stdout.on("data", (c: Buffer) => {
      stdout += c.toString();
      if (stdout.length > 32 * 1024 * 1024) {
        aborted = true;
        try {
          child.kill("SIGKILL");
        } catch {
          /* gone */
        }
        clearTimeout(watchdog);
        resolve({ ok: false, reason: "tar -tzvf output too large" });
      }
    });
    child.stderr.on("data", (c: Buffer) => {
      stderr += c.toString();
    });
    child.on("close", (code) => {
      clearTimeout(watchdog);
      if (aborted) {
        return;
      }
      if (code !== 0) {
        resolve({ ok: false, reason: `tar -tzvf exited ${code}: ${stderr.slice(0, 200)}` });
        return;
      }
      // Take only the first character of each line — the entry type.
      // We don't touch the rest of the line (path/size/etc) so filenames
      // with whitespace can't shift our parser.
      const typeChars = stdout
        .split("\n")
        .filter((l) => l.length > 0)
        .map((l) => l.charAt(0));
      resolve({ ok: true, typeChars });
    });
    child.on("error", (e) => {
      clearTimeout(watchdog);
      if (!aborted) {
        resolve({ ok: false, reason: `tar -tzvf error: ${String(e)}` });
      }
    });
    child.stdin.end(tarBuffer);
  });
}

async function preValidateTarball(
  tarBuffer: Buffer,
): Promise<{ ok: true } | { ok: false; reason: string }> {
  const namesResult = await listTarPaths(tarBuffer);
  if (!namesResult.ok) {
    return namesResult;
  }
  const paths = namesResult.paths;
  if (paths.length > TAR_UNPACK_MAX_ENTRIES) {
    return {
      ok: false,
      reason: `archive contains ${paths.length} entries; limit ${TAR_UNPACK_MAX_ENTRIES}`,
    };
  }

  const typesResult = await listTarTypeChars(tarBuffer);
  if (!typesResult.ok) {
    return typesResult;
  }
  const typeChars = typesResult.typeChars;
  // The two passes should report the same number of entries; if they
  // don't, something exotic is going on (filenames with newlines, etc.)
  // and we refuse defensively.
  if (typeChars.length !== paths.length) {
    return {
      ok: false,
      reason: `tar -tzf and tar -tzvf disagree on entry count (${paths.length} vs ${typeChars.length}); refusing`,
    };
  }

  for (let i = 0; i < paths.length; i++) {
    const entryPath = paths[i];
    const t = typeChars[i];
    if (t === "l" || t === "h") {
      return { ok: false, reason: `archive contains link entry: ${entryPath}` };
    }
    if (t !== "-" && t !== "d") {
      return { ok: false, reason: `archive contains non-regular entry type '${t}': ${entryPath}` };
    }
    if (path.isAbsolute(entryPath)) {
      return { ok: false, reason: `archive contains absolute path: ${entryPath}` };
    }
    const norm = path.posix.normalize(entryPath);
    if (norm === ".." || norm.startsWith("../") || norm.includes("/../")) {
      return { ok: false, reason: `archive contains '..' traversal: ${entryPath}` };
    }
    // Reject backslash-containing names too — refuses Windows-style
    // traversal in archives produced by an attacker on a Windows node.
    if (entryPath.includes("\\")) {
      return { ok: false, reason: `archive contains backslash in path: ${entryPath}` };
    }
  }
  return { ok: true };
}
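The per-entry path rules above (no absolute paths, no `..` after POSIX normalization, no backslashes) can be exercised in isolation. A hedged sketch with a hypothetical helper name (`entryPathProblem`); it uses `path.posix` throughout for deterministic behavior, where the original uses the platform-sensitive `path.isAbsolute`, and it skips the type-character (symlink/hardlink) checks:

```typescript
import path from "node:path";

// Mirrors the per-entry path checks in preValidateTarball. Returns a
// reason string for a rejected path, or null when the path is safe.
function entryPathProblem(entryPath: string): string | null {
  if (path.posix.isAbsolute(entryPath)) return "absolute path";
  const norm = path.posix.normalize(entryPath);
  if (norm === ".." || norm.startsWith("../") || norm.includes("/../")) {
    return "'..' traversal";
  }
  if (entryPath.includes("\\")) return "backslash in path";
  return null;
}

const results = [
  entryPathProblem("./src/index.ts"),     // null: relative, no traversal
  entryPathProblem("/etc/passwd"),        // absolute path
  entryPathProblem("a/../../etc/passwd"), // normalizes to ../etc/passwd
  entryPathProblem("..\\..\\boot.ini"),   // backslash (Windows-style)
];
```

Normalizing first matters: `a/../../etc/passwd` contains no leading `..` segment textually but normalizes to `../etc/passwd`, which escapes the extraction root.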

export async function validateTarUncompressedBudget(
  tarBuffer: Buffer,
  maxBytes = DIR_FETCH_MAX_UNCOMPRESSED_BYTES,
): Promise<{ ok: true } | { ok: false; reason: string }> {
  return new Promise((resolve) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(tarBin, ["-xOzf", "-"], { stdio: ["pipe", "pipe", "pipe"] });
    let totalBytes = 0;
    let stderr = "";
    let settled = false;
    let watchdog: ReturnType<typeof setTimeout>;
    const finish = (result: { ok: true } | { ok: false; reason: string }): void => {
      if (settled) {
        return;
      }
      settled = true;
      clearTimeout(watchdog);
      resolve(result);
    };
    watchdog = setTimeout(() => {
      try {
        child.kill("SIGKILL");
      } catch {
        /* gone */
      }
      finish({ ok: false, reason: "tar uncompressed budget validation timed out" });
    }, TAR_UNPACK_TIMEOUT_MS);

    child.stdout.on("data", (chunk: Buffer) => {
      totalBytes += chunk.byteLength;
      if (totalBytes > maxBytes) {
        try {
          child.kill("SIGKILL");
        } catch {
          /* gone */
        }
        finish({
          ok: false,
          reason: `archive expands past uncompressed budget ${maxBytes} bytes`,
        });
      }
    });
    child.stderr.on("data", (chunk: Buffer) => {
      stderr += chunk.toString();
      if (stderr.length > 4096) {
        stderr = stderr.slice(-4096);
      }
    });
    child.on("close", (code) => {
      if (settled) {
        return;
      }
      if (code !== 0) {
        finish({
          ok: false,
          reason: `tar uncompressed budget validation exited ${code}: ${stderr.slice(0, 200)}`,
        });
        return;
      }
      finish({ ok: true });
    });
    child.on("error", (error) => {
      finish({
        ok: false,
        reason: `tar uncompressed budget validation error: ${String(error)}`,
      });
    });
    child.stdin.on("error", (error: NodeJS.ErrnoException) => {
      if (settled && error.code === "EPIPE") {
        return;
      }
      finish({
        ok: false,
        reason: `tar uncompressed budget validation input error: ${String(error)}`,
      });
    });
    child.stdin.end(tarBuffer);
  });
}

type UnpackedFileEntry = {
  relPath: string;
  size: number;
  mimeType: string;
  sha256: string;
  localPath: string;
};

/**
 * Unpack a gzipped tarball into a target directory via `tar -xzf -`.
 * Caller MUST have run `preValidateTarball` first — this function trusts
 * that the archive contains only regular files / dirs with relative,
 * non-traversing paths. Without that pre-validation, raw `tar -xzf` is
 * unsafe (tarbomb, symlink-then-write tricks, decompression bomb).
 *
 * The `-P` flag is intentionally omitted so absolute paths in the
 * archive are stripped to relative ones (defense-in-depth on top of the
 * pre-validation rejection). A hard wall-clock timeout caps the unpack
 * at TAR_UNPACK_TIMEOUT_MS to avoid hangs.
 *
 * BSD tar (macOS) and GNU tar disagree on flags: `--no-overwrite-dir` is
 * GNU-only and BSD tar rejects it. We use only flags both implementations
 * accept. Defense-in-depth comes from the pre-validation step instead.
 *
 * `--no-same-owner` and `--no-same-permissions` are accepted by both BSD
 * and GNU tar. They prevent the archive from setting file ownership
 * (uid/gid) and dangerous mode bits (setuid/setgid/world-writable) on
 * the gateway filesystem. If the gateway is ever run as root or with
 * elevated privileges, a malicious node could otherwise plant
 * privileged executables here.
 */
async function unpackTar(tarBuffer: Buffer, destDir: string): Promise<void> {
  await fs.mkdir(destDir, { recursive: true, mode: 0o700 });
  return new Promise((resolve, reject) => {
    const tarBin = process.platform !== "win32" ? "/usr/bin/tar" : "tar";
    const child = spawn(
      tarBin,
      ["-xzf", "-", "-C", destDir, "--no-same-owner", "--no-same-permissions"],
      {
        stdio: ["pipe", "ignore", "pipe"],
      },
    );
    let stderrOut = "";
    const watchdog = setTimeout(() => {
      try {
        child.kill("SIGKILL");
      } catch {
        /* already gone */
      }
      reject(new Error(`tar unpack timed out after ${TAR_UNPACK_TIMEOUT_MS}ms`));
    }, TAR_UNPACK_TIMEOUT_MS);
    child.stderr.on("data", (chunk: Buffer) => {
      stderrOut += chunk.toString();
    });
    child.on("close", (code) => {
      clearTimeout(watchdog);
      if (code !== 0) {
        reject(new Error(`tar unpack exited ${code}: ${stderrOut.slice(0, 300)}`));
        return;
      }
      resolve();
    });
    child.on("error", (e) => {
      clearTimeout(watchdog);
      reject(e);
    });
    child.stdin.end(tarBuffer);
  });
}

/**
 * Walk a directory recursively, collecting file entries (skips directories).
 * Skips symlinks — we don't want to follow links the archive might have
 * carried in. Files only.
 */
async function walkDir(
  dir: string,
  rootDir: string,
): Promise<{ relPath: string; absPath: string }[]> {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  const results: { relPath: string; absPath: string }[] = [];
  for (const entry of entries) {
    const absPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      const nested = await walkDir(absPath, rootDir);
      results.push(...nested);
    } else if (entry.isFile()) {
      const relPath = path.relative(rootDir, absPath);
      results.push({ relPath, absPath });
    }
    // Symlinks are intentionally ignored: don't follow them out of destDir.
  }
  return results;
}

export function createDirFetchTool(): AnyAgentTool {
  return {
    label: "Directory Fetch",
    name: "dir_fetch",
    description:
      "Retrieve a directory tree from a paired node as a gzipped tarball, unpack it on the gateway, and return a manifest of saved paths. Use to pull source trees, asset folders, or log directories in a single round-trip. The unpacked files live on the GATEWAY (not your local machine); pass localPath into other tools or use file_fetch on individual entries to ship them elsewhere. Rejects trees larger than 16 MB compressed. Requires operator opt-in: gateway.nodes.allowCommands must include 'dir.fetch' AND plugins.entries.file-transfer.config.nodes.<node>.allowReadPaths must match the directory path.",
    parameters: DirFetchToolSchema,
    execute: async (_toolCallId, args) => {
      const params = args as Record<string, unknown>;
      const node = readTrimmedString(params, "node");
      const dirPath = readTrimmedString(params, "path");
      if (!node) {
        throw new Error("node required");
      }
      if (!dirPath) {
        throw new Error("path required");
      }

      const maxBytes = readClampedInt({
        input: params,
        key: "maxBytes",
        defaultValue: DIR_FETCH_DEFAULT_MAX_BYTES,
        hardMin: 1,
        hardMax: DIR_FETCH_HARD_MAX_BYTES,
      });
      const includeDotfiles = readBoolean(params, "includeDotfiles", false);

      const gatewayOpts = readGatewayCallOptions(params);
      const nodes: NodeListNode[] = await listNodes(gatewayOpts);
      const nodeId = resolveNodeIdFromList(nodes, node, false);
      const nodeMeta = nodes.find((n) => n.nodeId === nodeId);
      const nodeDisplayName = nodeMeta?.displayName ?? node;
      const startedAt = Date.now();

      const raw = await callGatewayTool<{ payload: unknown }>("node.invoke", gatewayOpts, {
        nodeId,
        command: "dir.fetch",
        params: {
          path: dirPath,
          maxBytes,
          includeDotfiles,
        },
        idempotencyKey: crypto.randomUUID(),
      });

      const payload =
        raw?.payload && typeof raw.payload === "object" && !Array.isArray(raw.payload)
          ? (raw.payload as Record<string, unknown>)
          : null;
      if (!payload) {
        await appendFileTransferAudit({
          op: "dir.fetch",
          nodeId,
          nodeDisplayName,
          requestedPath: dirPath,
          decision: "error",
          errorMessage: "invalid payload",
          durationMs: Date.now() - startedAt,
        });
        throw new Error("invalid dir.fetch payload");
      }
      if (payload.ok === false) {
        await appendFileTransferAudit({
          op: "dir.fetch",
          nodeId,
          nodeDisplayName,
          requestedPath: dirPath,
          canonicalPath:
            typeof payload.canonicalPath === "string" ? payload.canonicalPath : undefined,
          decision: "error",
          errorCode: typeof payload.code === "string" ? payload.code : undefined,
          errorMessage: typeof payload.message === "string" ? payload.message : undefined,
          durationMs: Date.now() - startedAt,
        });
        throwFromNodePayload("dir.fetch", payload);
      }

      const canonicalPath = typeof payload.path === "string" ? payload.path : "";
      const tarBase64 = typeof payload.tarBase64 === "string" ? payload.tarBase64 : "";
      const tarBytes = typeof payload.tarBytes === "number" ? payload.tarBytes : -1;
      const sha256 = typeof payload.sha256 === "string" ? payload.sha256 : "";
      const fileCount = typeof payload.fileCount === "number" ? payload.fileCount : 0;

      if (!canonicalPath || !tarBase64 || tarBytes < 0 || !sha256) {
        throw new Error("invalid dir.fetch payload (missing fields)");
      }

      const tarBuffer = Buffer.from(tarBase64, "base64");
      if (tarBuffer.byteLength !== tarBytes) {
        throw new Error(
          `dir.fetch size mismatch: payload says ${tarBytes} bytes, decoded ${tarBuffer.byteLength}`,
        );
      }
      const localSha256 = crypto.createHash("sha256").update(tarBuffer).digest("hex");
      if (localSha256 !== sha256) {
        throw new Error("dir.fetch sha256 mismatch (integrity failure)");
      }

      // Pre-validate before extraction. The node is in the trust boundary
      // for v1, but a malicious or compromised node should not be able to
      // pivot into arbitrary file write on the gateway via tar tricks.
      // Rejects: symlinks, hardlinks, absolute paths, ".." traversal,
      // entry counts and uncompressed sizes above the caps.
      const validation = await preValidateTarball(tarBuffer);
      if (!validation.ok) {
        await appendFileTransferAudit({
          op: "dir.fetch",
          nodeId,
          nodeDisplayName,
          requestedPath: dirPath,
          canonicalPath,
          decision: "error",
          errorCode: "UNSAFE_ARCHIVE",
          errorMessage: validation.reason,
          sizeBytes: tarBytes,
          sha256,
          durationMs: Date.now() - startedAt,
        });
        throw new Error(`dir.fetch UNSAFE_ARCHIVE: ${validation.reason}`);
      }

      const budget = await validateTarUncompressedBudget(tarBuffer);
      if (!budget.ok) {
        await appendFileTransferAudit({
          op: "dir.fetch",
          nodeId,
          nodeDisplayName,
          requestedPath: dirPath,
          canonicalPath,
          decision: "error",
          errorCode: "TREE_TOO_LARGE",
          errorMessage: budget.reason,
          sizeBytes: tarBytes,
          sha256,
          durationMs: Date.now() - startedAt,
        });
        throw new Error(`dir.fetch UNCOMPRESSED_TOO_LARGE: ${budget.reason}`);
      }

      // Save tarball under the file-transfer subdir (no 2-min TTL).
      const savedTar = await saveMediaBuffer(
        tarBuffer,
        "application/gzip",
        FILE_TRANSFER_SUBDIR,
        DIR_FETCH_HARD_MAX_BYTES,
      );

      const tarDir = path.dirname(savedTar.path);
      const tarBaseName = path.basename(savedTar.path, path.extname(savedTar.path));
      const unpackId = `dir-fetch-${tarBaseName}`;
      const rootDir = path.join(tarDir, unpackId);

      await unpackTar(tarBuffer, rootDir);

      const walked = await walkDir(rootDir, rootDir);
      const files: UnpackedFileEntry[] = [];
      // Defense-in-depth budget on the *uncompressed* extraction. Compressed
      // tar is bounded upstream; an attacker can still send a highly
      // compressible bomb (gigabytes of zeros) that fits under that cap.
      // Stop walking + clean up if the unpacked tree busts the budget.
      let totalUncompressed = 0;
      const abortAndCleanup = async (reason: string): Promise<never> => {
        await fs.rm(rootDir, { recursive: true, force: true }).catch(() => {});
        await appendFileTransferAudit({
          op: "dir.fetch",
          nodeId,
          nodeDisplayName,
          requestedPath: dirPath,
          canonicalPath,
          decision: "error",
          errorCode: "TREE_TOO_LARGE",
          errorMessage: reason,
          sizeBytes: tarBytes,
          sha256,
          durationMs: Date.now() - startedAt,
        });
        throw new Error(`dir.fetch UNCOMPRESSED_TOO_LARGE: ${reason}`);
      };
      for (const { relPath, absPath } of walked) {
        let size = 0;
        try {
          const st = await fs.stat(absPath);
          size = st.size;
        } catch {
          continue;
        }
        if (size > DIR_FETCH_MAX_SINGLE_FILE_BYTES) {
          await abortAndCleanup(
            `extracted file ${relPath} is ${size} bytes (limit ${DIR_FETCH_MAX_SINGLE_FILE_BYTES})`,
          );
        }
        totalUncompressed += size;
        if (totalUncompressed > DIR_FETCH_MAX_UNCOMPRESSED_BYTES) {
          await abortAndCleanup(
            `extracted tree exceeds uncompressed budget ${DIR_FETCH_MAX_UNCOMPRESSED_BYTES} bytes (decompression bomb?)`,
          );
        }
        const mimeType = mimeFromExtension(relPath);
        const fileSha256 = await computeFileSha256(absPath);
        files.push({ relPath, size, mimeType, sha256: fileSha256, localPath: absPath });
      }

      const imageFiles = files.filter((f) => IMAGE_MIME_INLINE_SET.has(f.mimeType));
      const nonImageFiles = files.filter((f) => !IMAGE_MIME_INLINE_SET.has(f.mimeType));
      const allOrdered = [...imageFiles, ...nonImageFiles];
      const droppedFromMedia = Math.max(0, allOrdered.length - MEDIA_URL_CAP);
      const mediaUrls = allOrdered.slice(0, MEDIA_URL_CAP).map((f) => f.localPath);

      const shortHash = sha256.slice(0, 12);
      const mediaNote = droppedFromMedia
        ? ` (channel attaches first ${MEDIA_URL_CAP}; ${droppedFromMedia} more in details.files)`
        : "";
      const summaryText = `Fetched ${fileCount} files from ${canonicalPath} (${humanSize(tarBytes)} compressed, sha256:${shortHash}) — saved on the gateway under ${rootDir}/${mediaNote}`;

      await appendFileTransferAudit({
        op: "dir.fetch",
        nodeId,
        nodeDisplayName,
        requestedPath: dirPath,
        canonicalPath,
        decision: "allowed",
        sizeBytes: tarBytes,
        sha256,
        durationMs: Date.now() - startedAt,
      });

      return {
        content: [{ type: "text" as const, text: summaryText }],
        details: {
          path: canonicalPath,
          rootDir,
          fileCount,
          tarBytes,
          sha256,
          files,
          media: {
            mediaUrls,
          },
        },
      };
    },
  };
}
||||
156
extensions/file-transfer/src/tools/dir-list-tool.ts
Normal file
156
extensions/file-transfer/src/tools/dir-list-tool.ts
Normal file
@@ -0,0 +1,156 @@
import crypto from "node:crypto";
import {
  callGatewayTool,
  listNodes,
  resolveNodeIdFromList,
  type AnyAgentTool,
  type NodeListNode,
} from "openclaw/plugin-sdk/agent-harness-runtime";
import { Type } from "typebox";
import { appendFileTransferAudit } from "../shared/audit.js";
import { throwFromNodePayload } from "../shared/errors.js";
import { readClampedInt, readGatewayCallOptions, readTrimmedString } from "../shared/params.js";

const DIR_LIST_DEFAULT_MAX_ENTRIES = 200;
const DIR_LIST_HARD_MAX_ENTRIES = 5000;

const DirListToolSchema = Type.Object({
  node: Type.String({
    description: "Node id, name, or IP. Resolves the same way as the nodes tool.",
  }),
  path: Type.String({
    description: "Absolute path to the directory on the node. Canonicalized server-side.",
  }),
  pageToken: Type.Optional(
    Type.String({
      description:
        "Pagination token from a previous dir_list call. Omit to start from the beginning.",
    }),
  ),
  maxEntries: Type.Optional(
    Type.Number({
      description: `Max entries per page. Default ${DIR_LIST_DEFAULT_MAX_ENTRIES}, hard ceiling ${DIR_LIST_HARD_MAX_ENTRIES}.`,
    }),
  ),
  gatewayUrl: Type.Optional(Type.String()),
  gatewayToken: Type.Optional(Type.String()),
  timeoutMs: Type.Optional(Type.Number()),
});

export function createDirListTool(): AnyAgentTool {
  return {
    label: "Directory List",
    name: "dir_list",
    description:
      "Retrieve a structured directory listing from a paired node. Returns file and subdirectory metadata (name, path, size, mimeType, isDir, mtime) without transferring file content. Use this to discover what files exist before fetching them with file_fetch. Pagination is offset-based; pass nextPageToken from the previous result. Requires operator opt-in: gateway.nodes.allowCommands must include 'dir.list' AND plugins.entries.file-transfer.config.nodes.<node>.allowReadPaths must match the directory path. Without policy configured, every call is denied.",
    parameters: DirListToolSchema,
    execute: async (_toolCallId, args) => {
      const params = args as Record<string, unknown>;
      const node = readTrimmedString(params, "node");
      const dirPath = readTrimmedString(params, "path");
      if (!node) {
        throw new Error("node required");
      }
      if (!dirPath) {
        throw new Error("path required");
      }

      const maxEntries = readClampedInt({
        input: params,
        key: "maxEntries",
        defaultValue: DIR_LIST_DEFAULT_MAX_ENTRIES,
        hardMin: 1,
        hardMax: DIR_LIST_HARD_MAX_ENTRIES,
      });

      const pageToken =
        typeof params.pageToken === "string" && params.pageToken.trim()
          ? params.pageToken.trim()
          : undefined;

      const gatewayOpts = readGatewayCallOptions(params);
      const nodes: NodeListNode[] = await listNodes(gatewayOpts);
      const nodeId = resolveNodeIdFromList(nodes, node, false);
      const nodeMeta = nodes.find((n) => n.nodeId === nodeId);
      const nodeDisplayName = nodeMeta?.displayName ?? node;
      const startedAt = Date.now();

      const raw = await callGatewayTool<{ payload: unknown }>("node.invoke", gatewayOpts, {
        nodeId,
        command: "dir.list",
        params: {
          path: dirPath,
          pageToken,
          maxEntries,
        },
        idempotencyKey: crypto.randomUUID(),
      });

      const payload =
        raw?.payload && typeof raw.payload === "object" && !Array.isArray(raw.payload)
          ? (raw.payload as Record<string, unknown>)
          : null;
      if (!payload) {
        await appendFileTransferAudit({
          op: "dir.list",
          nodeId,
          nodeDisplayName,
          requestedPath: dirPath,
          decision: "error",
          errorMessage: "invalid payload",
          durationMs: Date.now() - startedAt,
        });
        throw new Error("invalid dir.list payload");
      }
      if (payload.ok === false) {
        await appendFileTransferAudit({
          op: "dir.list",
          nodeId,
          nodeDisplayName,
          requestedPath: dirPath,
          canonicalPath:
            typeof payload.canonicalPath === "string" ? payload.canonicalPath : undefined,
          decision: "error",
          errorCode: typeof payload.code === "string" ? payload.code : undefined,
          errorMessage: typeof payload.message === "string" ? payload.message : undefined,
          durationMs: Date.now() - startedAt,
        });
        throwFromNodePayload("dir.list", payload);
      }

      const canonicalPath = typeof payload.path === "string" ? payload.path : dirPath;

      const entries = Array.isArray(payload.entries)
        ? (payload.entries as Array<Record<string, unknown>>)
        : [];
      const truncated = payload.truncated === true;
      const nextPageToken =
        typeof payload.nextPageToken === "string" ? payload.nextPageToken : undefined;

      const fileCount = entries.filter((e) => !e.isDir).length;
      const dirCount = entries.filter((e) => e.isDir).length;
      const truncatedNote = truncated ? " (more entries available — pass nextPageToken)" : "";
      const summary = `Listed ${canonicalPath}: ${fileCount} file${fileCount !== 1 ? "s" : ""}, ${dirCount} subdir${dirCount !== 1 ? "s" : ""}${truncatedNote}`;

      await appendFileTransferAudit({
        op: "dir.list",
        nodeId,
        nodeDisplayName,
        requestedPath: dirPath,
        canonicalPath,
        decision: "allowed",
        durationMs: Date.now() - startedAt,
      });

      return {
        content: [{ type: "text" as const, text: summary }],
        details: {
          path: canonicalPath,
          entries,
          nextPageToken,
          truncated,
        },
      };
    },
  };
}
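As context for the "offset-based" pagination the dir_list description mentions, here is a minimal standalone sketch of how a nextPageToken scheme like this typically works. The helper names (encodePageToken, decodePageToken, paginate) are illustrative assumptions, not taken from the plugin source:

```typescript
// Hypothetical sketch: an offset-based pageToken is just an encoded cursor.
// The server slices from the decoded offset, and only hands back a
// nextPageToken when more entries remain (truncated === true).
type Entry = { name: string };

function encodePageToken(offset: number): string {
  return Buffer.from(String(offset), "utf-8").toString("base64");
}

function decodePageToken(token?: string): number {
  if (!token) return 0;
  const n = Number.parseInt(Buffer.from(token, "base64").toString("utf-8"), 10);
  return Number.isFinite(n) && n >= 0 ? n : 0;
}

function paginate(entries: Entry[], maxEntries: number, pageToken?: string) {
  const offset = decodePageToken(pageToken);
  const page = entries.slice(offset, offset + maxEntries);
  const truncated = offset + page.length < entries.length;
  return {
    entries: page,
    truncated,
    nextPageToken: truncated ? encodePageToken(offset + page.length) : undefined,
  };
}
```

A caller loops until `truncated` is false, feeding each result's `nextPageToken` back in, which matches the "pass nextPageToken from the previous result" guidance in the tool description.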
198
extensions/file-transfer/src/tools/file-fetch-tool.ts
Normal file
@@ -0,0 +1,198 @@
import crypto from "node:crypto";
import {
  callGatewayTool,
  listNodes,
  resolveNodeIdFromList,
  type AnyAgentTool,
  type NodeListNode,
} from "openclaw/plugin-sdk/agent-harness-runtime";
import { saveMediaBuffer } from "openclaw/plugin-sdk/media-store";
import { Type } from "typebox";
import { appendFileTransferAudit } from "../shared/audit.js";
import { throwFromNodePayload } from "../shared/errors.js";
import {
  IMAGE_MIME_INLINE_SET,
  TEXT_INLINE_MAX_BYTES,
  TEXT_INLINE_MIME_SET,
} from "../shared/mime.js";
import { humanSize, readGatewayCallOptions, readTrimmedString } from "../shared/params.js";

const FILE_FETCH_DEFAULT_MAX_BYTES = 8 * 1024 * 1024;
const FILE_FETCH_HARD_MAX_BYTES = 16 * 1024 * 1024;
// Stash fetched files in a non-TTL subdir so a follow-up tool call within
// the same agent turn can still reference them. The default "inbound"
// subdir gets cleaned every 2 minutes which has bitten us in iMessage flows.
const FILE_TRANSFER_SUBDIR = "file-transfer";

const FileFetchToolSchema = Type.Object({
  node: Type.String({
    description: "Node id, name, or IP. Resolves the same way as the nodes tool.",
  }),
  path: Type.String({
    description: "Absolute path to the file on the node. Canonicalized server-side.",
  }),
  maxBytes: Type.Optional(
    Type.Number({
      description: "Max bytes to fetch. Default 8 MB, hard ceiling 16 MB (single round-trip).",
    }),
  ),
  gatewayUrl: Type.Optional(Type.String()),
  gatewayToken: Type.Optional(Type.String()),
  timeoutMs: Type.Optional(Type.Number()),
});

export function createFileFetchTool(): AnyAgentTool {
  return {
    label: "File Fetch",
    name: "file_fetch",
    description:
      "Retrieve a file from a paired node by absolute path. Returns image content blocks for image MIME types, inlines small text files (≤8 KB) as text content, and saves everything else under the gateway media store with a path you can pass to file_write or other tools. Use this for screenshots, photos, receipts, logs, source files. Pair with file_write to copy a file from one node to another (no exec/cp shell-out needed). Requires operator opt-in: gateway.nodes.allowCommands must include 'file.fetch' AND plugins.entries.file-transfer.config.nodes.<node>.allowReadPaths must match the path. Without policy configured, every call is denied.",
    parameters: FileFetchToolSchema,
    execute: async (_toolCallId, args) => {
      const params = args as Record<string, unknown>;
      const node = readTrimmedString(params, "node");
      const filePath = readTrimmedString(params, "path");
      if (!node) {
        throw new Error("node required");
      }
      if (!filePath) {
        throw new Error("path required");
      }
      const requestedMax =
        typeof params.maxBytes === "number" && Number.isFinite(params.maxBytes)
          ? Math.floor(params.maxBytes)
          : FILE_FETCH_DEFAULT_MAX_BYTES;
      const maxBytes = Math.max(1, Math.min(requestedMax, FILE_FETCH_HARD_MAX_BYTES));

      const gatewayOpts = readGatewayCallOptions(params);
      const nodes: NodeListNode[] = await listNodes(gatewayOpts);
      const nodeId = resolveNodeIdFromList(nodes, node, false);
      const nodeMeta = nodes.find((n) => n.nodeId === nodeId);
      const nodeDisplayName = nodeMeta?.displayName ?? node;
      const startedAt = Date.now();

      const raw = await callGatewayTool<{ payload: unknown }>("node.invoke", gatewayOpts, {
        nodeId,
        command: "file.fetch",
        params: {
          path: filePath,
          maxBytes,
        },
        idempotencyKey: crypto.randomUUID(),
      });

      const payload =
        raw?.payload && typeof raw.payload === "object" && !Array.isArray(raw.payload)
          ? (raw.payload as Record<string, unknown>)
          : null;
      if (!payload) {
        await appendFileTransferAudit({
          op: "file.fetch",
          nodeId,
          nodeDisplayName,
          requestedPath: filePath,
          decision: "error",
          errorMessage: "invalid payload",
          durationMs: Date.now() - startedAt,
        });
        throw new Error("invalid file.fetch payload");
      }
      if (payload.ok === false) {
        await appendFileTransferAudit({
          op: "file.fetch",
          nodeId,
          nodeDisplayName,
          requestedPath: filePath,
          canonicalPath:
            typeof payload.canonicalPath === "string" ? payload.canonicalPath : undefined,
          decision: "error",
          errorCode: typeof payload.code === "string" ? payload.code : undefined,
          errorMessage: typeof payload.message === "string" ? payload.message : undefined,
          durationMs: Date.now() - startedAt,
        });
        throwFromNodePayload("file.fetch", payload);
      }

      // Type-checks, NOT truthy-checks: an empty file legitimately has
      // size=0 and base64="". Rejecting falsy values would block zero-byte
      // round-trips through file_fetch → file_write.
      const canonicalPath = typeof payload.path === "string" ? payload.path : "";
      const size = typeof payload.size === "number" ? payload.size : -1;
      const mimeType = typeof payload.mimeType === "string" ? payload.mimeType : "";
      const hasBase64 = typeof payload.base64 === "string";
      const base64 = hasBase64 ? (payload.base64 as string) : "";
      const sha256 = typeof payload.sha256 === "string" ? payload.sha256 : "";
      if (!canonicalPath || size < 0 || !mimeType || !hasBase64 || !sha256) {
        throw new Error("invalid file.fetch payload (missing fields)");
      }

      const buffer = Buffer.from(base64, "base64");
      if (buffer.byteLength !== size) {
        throw new Error(
          `file.fetch size mismatch: payload says ${size} bytes, decoded ${buffer.byteLength}`,
        );
      }
      const localSha256 = crypto.createHash("sha256").update(buffer).digest("hex");
      if (localSha256 !== sha256) {
        throw new Error("file.fetch sha256 mismatch (integrity failure)");
      }

      const saved = await saveMediaBuffer(
        buffer,
        mimeType,
        FILE_TRANSFER_SUBDIR,
        FILE_FETCH_HARD_MAX_BYTES,
      );
      const localPath = saved.path;

      const isInlineImage = IMAGE_MIME_INLINE_SET.has(mimeType);
      const isInlineText = TEXT_INLINE_MIME_SET.has(mimeType) && size <= TEXT_INLINE_MAX_BYTES;

      const content: Array<
        { type: "text"; text: string } | { type: "image"; data: string; mimeType: string }
      > = [];
      if (isInlineImage) {
        content.push({ type: "image", data: base64, mimeType });
      } else if (isInlineText) {
        const text = buffer.toString("utf-8");
        content.push({
          type: "text",
          text: `Fetched ${canonicalPath} (${humanSize(size)}, ${mimeType}, sha256:${sha256.slice(0, 12)}) saved at ${localPath}\n\n--- contents ---\n${text}`,
        });
      } else {
        const shortHash = sha256.slice(0, 12);
        content.push({
          type: "text",
          text: `Fetched ${canonicalPath} (${humanSize(size)}, ${mimeType}, sha256:${shortHash}) saved at ${localPath}`,
        });
      }

      await appendFileTransferAudit({
        op: "file.fetch",
        nodeId,
        nodeDisplayName,
        requestedPath: filePath,
        canonicalPath,
        decision: "allowed",
        sizeBytes: size,
        sha256,
        durationMs: Date.now() - startedAt,
      });

      return {
        content,
        details: {
          path: canonicalPath,
          size,
          mimeType,
          sha256,
          localPath,
          mediaId: saved.id,
          media: {
            mediaUrls: [localPath],
          },
        },
      };
    },
  };
}
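The decode-and-verify step in file_fetch (byte-length check plus sha256 comparison against the node-reported hash) can be exercised in isolation. This is a minimal sketch using only node:crypto; verifyFetchedPayload is a hypothetical helper name for illustration, not the plugin's API:

```typescript
import crypto from "node:crypto";

// Decode the base64 payload and fail loudly on either integrity signal:
// a length that disagrees with the reported size, or a sha256 mismatch.
function verifyFetchedPayload(payload: {
  size: number;
  base64: string;
  sha256: string;
}): Buffer {
  const buffer = Buffer.from(payload.base64, "base64");
  if (buffer.byteLength !== payload.size) {
    throw new Error(
      `size mismatch: payload says ${payload.size} bytes, decoded ${buffer.byteLength}`,
    );
  }
  const localSha256 = crypto.createHash("sha256").update(buffer).digest("hex");
  if (localSha256 !== payload.sha256) {
    throw new Error("sha256 mismatch (integrity failure)");
  }
  return buffer;
}
```

Note the check runs on the decoded buffer, not on the base64 text, so a corrupted or truncated base64 frame is caught before anything touches the media store.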
209
extensions/file-transfer/src/tools/file-write-tool.ts
Normal file
@@ -0,0 +1,209 @@
import crypto from "node:crypto";
import fs from "node:fs/promises";
import {
  callGatewayTool,
  listNodes,
  resolveNodeIdFromList,
  type AnyAgentTool,
  type NodeListNode,
} from "openclaw/plugin-sdk/agent-harness-runtime";
import { resolveMediaBufferPath } from "openclaw/plugin-sdk/media-store";
import { Type } from "typebox";
import { appendFileTransferAudit } from "../shared/audit.js";
import { throwFromNodePayload } from "../shared/errors.js";
import {
  humanSize,
  readBoolean,
  readGatewayCallOptions,
  readTrimmedString,
} from "../shared/params.js";

const FILE_WRITE_HARD_MAX_BYTES = 16 * 1024 * 1024;

const FILE_WRITE_SCHEMA = Type.Object({
  node: Type.String({ description: "Node id or display name to write the file on." }),
  path: Type.String({
    description: "Absolute path on the node to write. Canonicalized server-side.",
  }),
  contentBase64: Type.Optional(
    Type.String({
      description: "Base64-encoded bytes to write. Maximum 16 MB after decode.",
    }),
  ),
  sourceMediaId: Type.Optional(
    Type.String({
      description:
        "Media id returned by file_fetch. Preferred for binary copies because bytes stay in the gateway media store.",
    }),
  ),
  mimeType: Type.Optional(
    Type.String({
      description: "Content type hint. Not validated against the content.",
    }),
  ),
  overwrite: Type.Optional(
    Type.Boolean({
      description: "Allow overwriting an existing file. Default false.",
      default: false,
    }),
  ),
  createParents: Type.Optional(
    Type.Boolean({
      description: "Create missing parent directories (mkdir -p). Default false.",
      default: false,
    }),
  ),
});

async function readSourceBytes(input: {
  contentBase64?: string;
  sourceMediaId?: string;
}): Promise<{ buffer: Buffer; contentBase64: string; source: "inline" | "media" }> {
  const sourceMediaId = input.sourceMediaId?.trim();
  if (sourceMediaId) {
    const mediaPath = await resolveMediaBufferPath(sourceMediaId, "file-transfer");
    const stat = await fs.stat(mediaPath);
    if (stat.size > FILE_WRITE_HARD_MAX_BYTES) {
      throw new Error(
        `sourceMediaId too large: ${stat.size} bytes; maximum is ${FILE_WRITE_HARD_MAX_BYTES} bytes`,
      );
    }
    const buffer = await fs.readFile(mediaPath);
    return { buffer, contentBase64: buffer.toString("base64"), source: "media" };
  }
  if (input.contentBase64 === undefined) {
    throw new Error("contentBase64 or sourceMediaId required");
  }
  const buffer = Buffer.from(input.contentBase64, "base64");
  return { buffer, contentBase64: input.contentBase64, source: "inline" };
}

type FileWriteSuccess = {
  ok: true;
  path: string;
  size: number;
  sha256: string;
  overwritten: boolean;
};

type FileWriteError = {
  ok: false;
  code: string;
  message: string;
  canonicalPath?: string;
};

type FileWritePayload = FileWriteSuccess | FileWriteError;

export function createFileWriteTool(): AnyAgentTool {
  return {
    label: "File Write",
    name: "file_write",
    description:
      "Write file bytes to a paired node by absolute path. Atomic write (temp + rename). Refuses to overwrite by default — pass overwrite=true to replace. Refuses to write through symlink targets unless policy explicitly allows following symlinks. Pair with file_fetch by passing its mediaId as sourceMediaId for binary copy. Requires operator opt-in: gateway.nodes.allowCommands must include 'file.write' AND plugins.entries.file-transfer.config.nodes.<node>.allowWritePaths must match the destination path. Without policy configured, every call is denied.",
    parameters: FILE_WRITE_SCHEMA,
    async execute(_toolCallId, params) {
      const raw: Record<string, unknown> =
        params && typeof params === "object" && !Array.isArray(params)
          ? (params as Record<string, unknown>)
          : {};

      const nodeQuery = readTrimmedString(raw, "node");
      const filePath = readTrimmedString(raw, "path");
      const contentBase64 = typeof raw.contentBase64 === "string" ? raw.contentBase64 : undefined;
      const sourceMediaId = typeof raw.sourceMediaId === "string" ? raw.sourceMediaId : undefined;
      const overwrite = readBoolean(raw, "overwrite", false);
      const createParents = readBoolean(raw, "createParents", false);

      if (!nodeQuery) {
        throw new Error("node required");
      }
      if (!filePath) {
        throw new Error("path required");
      }
      // Compute the sha256 of the bytes we're sending so the node can do
      // an end-to-end integrity check after writing. This is always
      // sender-side computed; ignore any caller-supplied expectedSha256
      // to avoid the model passing a wrong hash and triggering an
      // unintended unlink.
      const sourceBytes = await readSourceBytes({ contentBase64, sourceMediaId });
      const buffer = sourceBytes.buffer;
      const expectedSha256 = crypto.createHash("sha256").update(buffer).digest("hex");

      const gatewayOpts = readGatewayCallOptions(raw);
      const nodes: NodeListNode[] = await listNodes(gatewayOpts);
      const nodeId = resolveNodeIdFromList(nodes, nodeQuery, false);
      const nodeMeta = nodes.find((n) => n.nodeId === nodeId);
      const nodeDisplayName = nodeMeta?.displayName ?? nodeQuery;
      const startedAt = Date.now();

      const result = await callGatewayTool<{ payload: unknown }>("node.invoke", gatewayOpts, {
        nodeId,
        command: "file.write",
        params: {
          path: filePath,
          contentBase64: sourceBytes.contentBase64,
          overwrite,
          createParents,
          expectedSha256,
        },
        idempotencyKey: crypto.randomUUID(),
      });

      const payload = (result as { payload?: unknown })?.payload;
      if (!payload || typeof payload !== "object" || Array.isArray(payload)) {
        await appendFileTransferAudit({
          op: "file.write",
          nodeId,
          nodeDisplayName,
          requestedPath: filePath,
          decision: "error",
          errorMessage: "unexpected response from node",
          sizeBytes: buffer.byteLength,
          durationMs: Date.now() - startedAt,
        });
        throw new Error("unexpected file.write response from node");
      }

      const typed = payload as FileWritePayload;
      if (!typed.ok) {
        await appendFileTransferAudit({
          op: "file.write",
          nodeId,
          nodeDisplayName,
          requestedPath: filePath,
          canonicalPath: typed.canonicalPath,
          decision: "error",
          errorCode: typed.code,
          errorMessage: typed.message,
          sizeBytes: buffer.byteLength,
          durationMs: Date.now() - startedAt,
        });
        throwFromNodePayload("file.write", typed as unknown as Record<string, unknown>);
      }

      await appendFileTransferAudit({
        op: "file.write",
        nodeId,
        nodeDisplayName,
        requestedPath: filePath,
        canonicalPath: typed.path,
        decision: "allowed",
        sizeBytes: typed.size,
        sha256: typed.sha256,
        durationMs: Date.now() - startedAt,
      });

      const overwriteNote = typed.overwritten ? " (overwrote existing file)" : "";
      return {
        content: [
          {
            type: "text" as const,
            text: `Wrote ${typed.path} (${humanSize(typed.size)}, sha256:${typed.sha256.slice(0, 12)})${overwriteNote}`,
          },
        ],
        details: { ...typed, source: sourceBytes.source },
      };
    },
  };
}
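Taken together, the tool descriptions above imply an operator config of roughly the following shape. This is a hedged sketch assembled only from the key paths the descriptions name (gateway.nodes.allowCommands, plugins.entries.file-transfer.config.nodes.<node>.allowReadPaths / allowWritePaths) and the ask modes listed in the commit message; the exact schema, glob syntax, and the placement of the `ask` knob are assumptions, not verified against the plugin's config loader.

```json5
{
  gateway: {
    nodes: {
      // Dangerous-by-default: each file-transfer command needs explicit opt-in.
      allowCommands: ["file.fetch", "dir.list", "dir.fetch", "file.write"],
    },
  },
  plugins: {
    entries: {
      "file-transfer": {
        config: {
          nodes: {
            // Per-node path policy; without a matching entry every call is denied.
            macbook: {
              allowReadPaths: ["/Users/me/Documents/**"],
              allowWritePaths: ["/Users/me/Drop/**"],
              ask: "on-miss", // off | on-miss | always (operator approval prompt)
            },
          },
        },
      },
    },
  },
}
```

Both gates must pass: a command missing from allowCommands is rejected before path policy is even consulted, and a path outside the allowlists is rejected even for an allowed command.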
13
pnpm-lock.yaml
generated
@@ -589,6 +589,19 @@ importers:
      specifier: workspace:*
      version: link:../..

  extensions/file-transfer:
    dependencies:
      minimatch:
        specifier: 10.2.4
        version: 10.2.4
      typebox:
        specifier: 1.1.34
        version: 1.1.34
    devDependencies:
      '@openclaw/plugin-sdk':
        specifier: workspace:*
        version: link:../../packages/plugin-sdk

  extensions/firecrawl:
    dependencies:
      typebox:
@@ -5,6 +5,7 @@ import { normalizeLowercaseStringOrEmpty } from "../../shared/string-coerce.js";
import { jsonResult, readStringParam } from "./common.js";
import type { GatewayCallOptions } from "./gateway.js";
import { callGatewayTool } from "./gateway.js";
import { POLICY_REDIRECT_INVOKE_COMMANDS } from "./nodes-tool-media.js";
import { resolveNodeId } from "./nodes-utils.js";

export const BLOCKED_INVOKE_COMMANDS = new Set(["system.run", "system.run.prepare"]);
@@ -123,6 +124,17 @@ export async function executeNodeCommandAction(params: {
      );
    }
    const dedicatedAction = params.mediaInvokeActions[invokeCommandNormalized];
    // Policy-redirect commands (file-transfer) ALWAYS reroute to their
    // dedicated tool. The dedicated tool runs gatekeep() + path policy
    // + operator approval; the generic invoke path doesn't. Operators
    // who set allowMediaInvokeCommands=true to allow camera/screen
    // bytes via raw invoke must not also get a path-policy bypass for
    // file-transfer.
    if (dedicatedAction && POLICY_REDIRECT_INVOKE_COMMANDS.has(invokeCommandNormalized)) {
      throw new Error(
        `invokeCommand "${invokeCommand}" enforces a path-allowlist policy and cannot be invoked via the generic nodes.invoke surface; use the dedicated file-transfer tool "${dedicatedAction}"`,
      );
    }
    if (dedicatedAction && !params.allowMediaInvokeCommands) {
      throw new Error(
        `invokeCommand "${invokeCommand}" returns media payloads and is blocked to prevent base64 context bloat; use action="${dedicatedAction}"`,
@@ -27,8 +27,24 @@ export const MEDIA_INVOKE_ACTIONS = {
  "camera.clip": "camera_clip",
  "photos.latest": "photos_latest",
  "screen.record": "screen_record",
  // file-transfer commands: redirect to dedicated tools for better result
  // formatting and media-store handling. The gateway still enforces the
  // underlying node-invoke path policy for raw callers.
  "file.fetch": "file_fetch",
  "dir.list": "dir_list",
  "dir.fetch": "dir_fetch",
  "file.write": "file_write",
} as const;

// Subset of MEDIA_INVOKE_ACTIONS where the dedicated tool is the preferred
// agent UX. Gateway node-invoke policy still protects raw node.invoke callers.
export const POLICY_REDIRECT_INVOKE_COMMANDS: ReadonlySet<string> = new Set([
  "file.fetch",
  "dir.list",
  "dir.fetch",
  "file.write",
]);

export type NodeMediaAction = "camera_snap" | "photos_latest" | "camera_clip" | "screen_record";

type ExecuteNodeMediaActionParams = {
@@ -321,6 +321,20 @@ describe("createNodesTool screen_record duration guardrails", () => {
    ).rejects.toThrow('invokeCommand "system.run" is reserved for shell execution');
  });

  it("redirects file-transfer invoke commands to the dedicated file-transfer tool", async () => {
    const tool = createNodesTool({ allowMediaInvokeCommands: true });

    await expect(
      tool.execute("call-1", {
        action: "invoke",
        node: "macbook",
        invokeCommand: "file.fetch",
      }),
    ).rejects.toThrow(
      'invokeCommand "file.fetch" enforces a path-allowlist policy and cannot be invoked via the generic nodes.invoke surface; use the dedicated file-transfer tool "file_fetch"',
    );
  });

  it("keeps invoke pairing guidance for scope upgrade rejections", async () => {
    gatewayMocks.callGatewayTool.mockRejectedValueOnce(
      new Error("scope upgrade pending approval (requestId: req-123)"),
@@ -138,7 +138,7 @@ export function createNodesTool(options?: {
      name: "nodes",
      ownerOnly: isOpenClawOwnerOnlyCoreToolName("nodes"),
      description:
        "Discover and control paired nodes (status/describe/pairing/notify/camera/photos/screen/location/notifications/invoke).",
        "Discover and control paired nodes (status/describe/pairing/notify/camera/photos/screen/location/notifications/invoke). For file retrieval, use the dedicated file_fetch tool.",
      parameters: NodesToolSchema,
      execute: async (_toolCallId, args) => {
        const params = args as Record<string, unknown>;
@@ -4,6 +4,7 @@ import {
  NODE_SYSTEM_NOTIFY_COMMAND,
  NODE_SYSTEM_RUN_COMMANDS,
} from "../infra/node-commands.js";
import { getActiveRuntimePluginRegistry } from "../plugins/active-runtime-registry.js";
import { normalizeDeviceMetadataForPolicy } from "./device-metadata-normalization.js";
import type { NodeSession } from "./node-registry.js";
@@ -182,6 +183,20 @@ function normalizePlatformId(platform?: string, deviceFamily?: string): Platform
  return byFamily ?? "unknown";
}

export function listDangerousPluginNodeCommands(): string[] {
  const registry = getActiveRuntimePluginRegistry();
  if (!registry) {
    return [];
  }
  const commands = [
    ...(registry.nodeHostCommands ?? [])
      .filter((entry) => entry.command.dangerous === true)
      .map((entry) => entry.command.command),
    ...(registry.nodeInvokePolicies ?? []).flatMap((entry) => entry.policy.commands),
  ];
  return [...new Set(commands.map((command) => command.trim()).filter(Boolean))];
}

export function resolveNodeCommandAllowlist(
  cfg: OpenClawConfig,
  node?: Pick<NodeSession, "platform" | "deviceFamily">,
@@ -190,7 +205,18 @@
  const base = PLATFORM_DEFAULTS[platformId] ?? PLATFORM_DEFAULTS.unknown;
  const extra = cfg.gateway?.nodes?.allowCommands ?? [];
  const deny = new Set(cfg.gateway?.nodes?.denyCommands ?? []);
  const allow = new Set([...base, ...extra].map((cmd) => cmd.trim()).filter(Boolean));
  const dangerousPluginCommands = new Set(listDangerousPluginNodeCommands());
  const allow = new Set(
    [...base, ...extra]
      .map((cmd) => cmd.trim())
      .filter((cmd) => cmd && !dangerousPluginCommands.has(cmd)),
  );
  for (const cmd of extra) {
    const trimmed = cmd.trim();
    if (trimmed) {
      allow.add(trimmed);
    }
  }
  for (const blocked of deny) {
    const trimmed = blocked.trim();
    if (trimmed) {
143
src/gateway/node-invoke-plugin-policy.test.ts
Normal file
@@ -0,0 +1,143 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import type { PluginRegistry } from "../plugins/registry-types.js";
import type { OpenClawPluginNodeInvokePolicyContext } from "../plugins/types.js";
import { applyPluginNodeInvokePolicy } from "./node-invoke-plugin-policy.js";
import type { NodeSession } from "./node-registry.js";
import type { GatewayRequestContext } from "./server-methods/types.js";

const registryState = vi.hoisted(() => ({
  current: null as PluginRegistry | null,
}));

vi.mock("../plugins/active-runtime-registry.js", () => ({
  getActiveRuntimePluginRegistry: () => registryState.current,
}));

function createNodeSession(): NodeSession {
  return {
    nodeId: "node-1",
    connId: "conn-1",
    client: {} as NodeSession["client"],
    caps: [],
    commands: ["demo.read"],
    connectedAtMs: 0,
  };
}

function createContext() {
  const invoke = vi.fn(async () => ({
    ok: true,
    payload: { ok: true, value: 1 },
    payloadJSON: null,
    error: null,
  }));
  return {
    context: {
      getRuntimeConfig: () => ({}),
      nodeRegistry: { invoke },
      broadcast: vi.fn(),
    } as unknown as GatewayRequestContext,
    invoke,
  };
}

describe("applyPluginNodeInvokePolicy", () => {
  beforeEach(() => {
    registryState.current = null;
  });

  it("fails closed for dangerous plugin node commands without a policy", async () => {
    registryState.current = {
      nodeHostCommands: [
        {
          pluginId: "demo",
          command: {
            command: "demo.read",
            dangerous: true,
            handle: async () => "{}",
          },
          source: "test",
        },
      ],
      nodeInvokePolicies: [],
    } as unknown as PluginRegistry;
    const { context, invoke } = createContext();

    const result = await applyPluginNodeInvokePolicy({
      context,
      client: null,
      nodeSession: createNodeSession(),
      command: "demo.read",
      params: { path: "/tmp/x" },
    });

    expect(result).toMatchObject({
      ok: false,
      code: "PLUGIN_POLICY_MISSING",
    });
    expect(invoke).not.toHaveBeenCalled();
  });

  it("uses a matching plugin policy when one is registered", async () => {
    registryState.current = {
      nodeHostCommands: [
        {
          pluginId: "demo",
          command: {
            command: "demo.read",
            dangerous: true,
            handle: async () => "{}",
          },
          source: "test",
        },
      ],
      nodeInvokePolicies: [
        {
          pluginId: "demo",
          policy: {
            commands: ["demo.read"],
            handle: (ctx: OpenClawPluginNodeInvokePolicyContext) => ctx.invokeNode(),
          },
          pluginConfig: { enabled: true },
          source: "test",
        },
      ],
    } as unknown as PluginRegistry;
    const { context, invoke } = createContext();

    const result = await applyPluginNodeInvokePolicy({
      context,
      client: null,
      nodeSession: createNodeSession(),
      command: "demo.read",
      params: { path: "/tmp/x" },
    });

    expect(result).toMatchObject({ ok: true, payload: { ok: true, value: 1 } });
    expect(invoke).toHaveBeenCalledWith({
      nodeId: "node-1",
      command: "demo.read",
      params: { path: "/tmp/x" },
      timeoutMs: undefined,
      idempotencyKey: undefined,
    });
  });

  it("leaves commands without a dangerous plugin registration to normal allowlist handling", async () => {
    registryState.current = {
      nodeHostCommands: [],
      nodeInvokePolicies: [],
    } as unknown as PluginRegistry;
    const { context } = createContext();

    const result = await applyPluginNodeInvokePolicy({
      context,
      client: null,
      nodeSession: createNodeSession(),
      command: "safe.echo",
      params: { value: "hello" },
    });

    expect(result).toBeNull();
  });
});
171
src/gateway/node-invoke-plugin-policy.ts
Normal file
@@ -0,0 +1,171 @@
import { randomUUID } from "node:crypto";
import type { PluginApprovalRequestPayload } from "../infra/plugin-approvals.js";
import { DEFAULT_PLUGIN_APPROVAL_TIMEOUT_MS } from "../infra/plugin-approvals.js";
import { getActiveRuntimePluginRegistry } from "../plugins/active-runtime-registry.js";
import type { PluginRegistry } from "../plugins/registry-types.js";
import type {
  OpenClawPluginNodeInvokePolicyContext,
  OpenClawPluginNodeInvokePolicyResult,
  OpenClawPluginNodeInvokeTransportResult,
} from "../plugins/types.js";
import { normalizeOptionalString } from "../shared/string-coerce.js";
import type { NodeSession } from "./node-registry.js";
import type { GatewayClient, GatewayRequestContext } from "./server-methods/types.js";

function parseScopes(client: GatewayClient | null): string[] {
  return Array.isArray(client?.connect?.scopes)
    ? client.connect.scopes.filter((scope): scope is string => typeof scope === "string")
    : [];
}

function parsePayload(payloadJSON: string | null | undefined, payload: unknown): unknown {
  if (!payloadJSON) {
    return payload;
  }
  try {
    return JSON.parse(payloadJSON) as unknown;
  } catch {
    return payload;
  }
}

function findDangerousPluginNodeCommand(registry: PluginRegistry | null, command: string) {
  const normalizedCommand = command.trim();
  if (!normalizedCommand) {
    return null;
  }
  return (
    registry?.nodeHostCommands?.find(
      (entry) =>
        entry.command.dangerous === true && entry.command.command.trim() === normalizedCommand,
    ) ?? null
  );
}

function createApprovalRuntime(params: {
  context: GatewayRequestContext;
  client: GatewayClient | null;
  pluginId: string;
}): OpenClawPluginNodeInvokePolicyContext["approvals"] | undefined {
  const manager = params.context.pluginApprovalManager;
  if (!manager) {
    return undefined;
  }
  return {
    async request(input) {
      const timeoutMs =
        typeof input.timeoutMs === "number" && Number.isFinite(input.timeoutMs)
          ? input.timeoutMs
          : DEFAULT_PLUGIN_APPROVAL_TIMEOUT_MS;
      const request: PluginApprovalRequestPayload = {
        pluginId: params.pluginId,
        title: input.title.slice(0, 80),
        description: input.description.slice(0, 256),
        severity: input.severity ?? "warning",
        toolName: normalizeOptionalString(input.toolName) ?? null,
        toolCallId: normalizeOptionalString(input.toolCallId) ?? null,
        agentId: normalizeOptionalString(input.agentId) ?? null,
        sessionKey: normalizeOptionalString(input.sessionKey) ?? null,
      };
      const record = manager.create(request, timeoutMs, `plugin:${randomUUID()}`);
      const decisionPromise = manager.register(record, timeoutMs);
      const requestEvent = {
        id: record.id,
        request: record.request,
        createdAtMs: record.createdAtMs,
        expiresAtMs: record.expiresAtMs,
      };
      params.context.broadcast("plugin.approval.requested", requestEvent, {
        dropIfSlow: true,
      });
      const hasApprovalClients =
        params.context.hasExecApprovalClients?.(params.client?.connId) ?? false;
      if (!hasApprovalClients) {
        manager.expire(record.id, "no-approval-route");
        return { id: record.id, decision: null };
      }
      const decision = await decisionPromise;
      return { id: record.id, decision };
    },
  };
}

export async function applyPluginNodeInvokePolicy(params: {
  context: GatewayRequestContext;
  client: GatewayClient | null;
  nodeSession: NodeSession;
  command: string;
  params: unknown;
  timeoutMs?: number;
  idempotencyKey?: string;
}): Promise<OpenClawPluginNodeInvokePolicyResult | null> {
  const registry = getActiveRuntimePluginRegistry();
  const entry = registry?.nodeInvokePolicies?.find((candidate) =>
    candidate.policy.commands.includes(params.command),
  );
  if (!entry) {
    const dangerousCommand = findDangerousPluginNodeCommand(registry, params.command);
    if (dangerousCommand) {
      return {
        ok: false,
        code: "PLUGIN_POLICY_MISSING",
        message: `node.invoke ${params.command} is registered as dangerous by plugin ${dangerousCommand.pluginId} but has no plugin node.invoke policy`,
      };
    }
    return null;
  }

  const invokeNode: OpenClawPluginNodeInvokePolicyContext["invokeNode"] = async (
    override = {},
  ): Promise<OpenClawPluginNodeInvokeTransportResult> => {
    const res = await params.context.nodeRegistry.invoke({
      nodeId: params.nodeSession.nodeId,
      command: params.command,
      params: override.params ?? params.params,
      timeoutMs: override.timeoutMs ?? params.timeoutMs,
      idempotencyKey: override.idempotencyKey ?? params.idempotencyKey,
    });
    if (!res.ok) {
      return {
        ok: false,
        code: res.error?.code,
        message: res.error?.message ?? "node command failed",
        details: { nodeError: res.error ?? null },
      };
    }
    return {
      ok: true,
      payload: parsePayload(res.payloadJSON, res.payload),
      payloadJSON: res.payloadJSON ?? null,
    };
  };

  return await entry.policy.handle({
    nodeId: params.nodeSession.nodeId,
    command: params.command,
    params: params.params,
    timeoutMs: params.timeoutMs,
    idempotencyKey: params.idempotencyKey,
    config: params.context.getRuntimeConfig(),
    pluginConfig: entry.pluginConfig,
    node: {
      nodeId: params.nodeSession.nodeId,
      displayName: params.nodeSession.displayName,
      platform: params.nodeSession.platform,
      deviceFamily: params.nodeSession.deviceFamily,
      commands: params.nodeSession.commands,
    },
    client: params.client
      ? {
          connId: params.client.connId,
          scopes: parseScopes(params.client),
        }
      : null,
    approvals: createApprovalRuntime({
      context: params.context,
      client: params.client,
      pluginId: entry.pluginId,
    }),
    invokeNode,
  });
}
@@ -32,6 +32,7 @@ import {
} from "../canvas-capability.js";
import { createKnownNodeCatalog, getKnownNode, listKnownNodes } from "../node-catalog.js";
import { isNodeCommandAllowed, resolveNodeCommandAllowlist } from "../node-command-policy.js";
import { applyPluginNodeInvokePolicy } from "../node-invoke-plugin-policy.js";
import { sanitizeNodeInvokeParamsForForwarding } from "../node-invoke-sanitize.js";
import {
  type ConnectParams,
@@ -1034,6 +1035,7 @@ export const nodeHandlers: GatewayRequestHandlers = {
      );
      return;
    }

    const forwardedParams = sanitizeNodeInvokeParamsForForwarding({
      nodeId,
      command,
@@ -1051,6 +1053,45 @@ export const nodeHandlers: GatewayRequestHandlers = {
      );
      return;
    }
    const policyResult = await applyPluginNodeInvokePolicy({
      context,
      client,
      nodeSession,
      command,
      params: forwardedParams.params,
      timeoutMs: p.timeoutMs,
      idempotencyKey: p.idempotencyKey,
    });
    if (policyResult) {
      if (!policyResult.ok) {
        const errorCode = policyResult.unavailable
          ? ErrorCodes.UNAVAILABLE
          : ErrorCodes.INVALID_REQUEST;
        respond(
          false,
          undefined,
          errorShape(errorCode, policyResult.message, {
            details: {
              ...policyResult.details,
              ...(policyResult.code ? { code: policyResult.code } : {}),
            },
          }),
        );
        return;
      }
      respond(
        true,
        {
          ok: true,
          nodeId,
          command,
          payload: policyResult.payload,
          payloadJSON: policyResult.payloadJSON ?? null,
        },
        undefined,
      );
      return;
    }
    const res = await context.nodeRegistry.invoke({
      nodeId,
      command,
@@ -307,6 +307,36 @@ describe("media store", () => {
        });
      },
    },
    {
      name: "rejects traversal media subdirs before saving buffers",
      run: async () => {
        await withTempStore(async (store, home) => {
          const mediaDir = await store.ensureMediaDir();
          const outsideDir = path.join(home, "outside-media");
          const traversalSubdir = path.relative(mediaDir, outsideDir);

          await expect(
            store.saveMediaBuffer(Buffer.from("escape"), "text/plain", traversalSubdir),
          ).rejects.toThrow("unsafe media subdir");
          await expect(fs.stat(outsideDir)).rejects.toThrow();
        });
      },
    },
    {
      name: "rejects traversal media subdirs before resolving IDs",
      run: async () => {
        await withTempStore(async (store, home) => {
          const mediaDir = await store.ensureMediaDir();
          const outsideDir = path.join(home, "outside-media-resolve");
          await fs.mkdir(outsideDir, { recursive: true });
          await fs.writeFile(path.join(outsideDir, "passwd"), "not media");

          await expect(
            store.resolveMediaBufferPath("passwd", path.relative(mediaDir, outsideDir)),
          ).rejects.toThrow("unsafe media subdir");
        });
      },
    },
    {
      name: "retries local-source writes when cleanup prunes the target directory",
      run: async () => {
@@ -34,6 +34,39 @@ function formatMediaLimitMb(maxBytes: number): string {
  return `${(maxBytes / (1024 * 1024)).toFixed(0)}MB`;
}

function resolveMediaSubdir(subdir: string, caller: string): string {
  if (typeof subdir !== "string") {
    throw new Error(`${caller}: unsafe media subdir: ${JSON.stringify(subdir)}`);
  }
  if (!subdir || subdir === ".") {
    return "";
  }
  if (
    subdir.includes("\0") ||
    path.isAbsolute(subdir) ||
    path.posix.isAbsolute(subdir) ||
    path.win32.isAbsolute(subdir)
  ) {
    throw new Error(`${caller}: unsafe media subdir: ${JSON.stringify(subdir)}`);
  }
  const segments = subdir.split(/[\\/]+/u);
  if (segments.some((segment) => !segment || segment === "." || segment === "..")) {
    throw new Error(`${caller}: unsafe media subdir: ${JSON.stringify(subdir)}`);
  }
  return path.join(...segments);
}

function resolveMediaScopedDir(subdir: string, caller: string): string {
  const mediaDir = resolveMediaDir();
  const safeSubdir = resolveMediaSubdir(subdir, caller);
  const dir = safeSubdir ? path.join(mediaDir, safeSubdir) : mediaDir;
  const relative = path.relative(mediaDir, dir);
  if (relative && (relative === ".." || relative.startsWith(`..${path.sep}`))) {
    throw new Error(`${caller}: media subdir escapes media directory: ${JSON.stringify(subdir)}`);
  }
  return dir;
}

let httpRequestImpl: RequestImpl = defaultHttpRequestImpl;
let httpsRequestImpl: RequestImpl = defaultHttpsRequestImpl;
let resolvePinnedHostnameImpl: ResolvePinnedHostnameImpl = defaultResolvePinnedHostnameImpl;
@@ -376,8 +409,7 @@ export async function saveMediaSource(
  subdir = "",
  maxBytes = MAX_BYTES,
): Promise<SavedMedia> {
-  const baseDir = resolveMediaDir();
-  const dir = subdir ? path.join(baseDir, subdir) : baseDir;
+  const dir = resolveMediaScopedDir(subdir, "saveMediaSource");
  await fs.mkdir(dir, { recursive: true, mode: 0o700 });
  await cleanOldMedia(DEFAULT_TTL_MS, { recursive: false });
  const baseId = crypto.randomUUID();
@@ -422,7 +454,7 @@ export async function saveMediaBuffer(
  if (buffer.byteLength > maxBytes) {
    throw new Error(`Media exceeds ${formatMediaLimitMb(maxBytes)} limit`);
  }
-  const dir = path.join(resolveMediaDir(), subdir);
+  const dir = resolveMediaScopedDir(subdir, "saveMediaBuffer");
  await fs.mkdir(dir, { recursive: true, mode: 0o700 });
  const uuid = crypto.randomUUID();
  const headerExt = extensionForMime(normalizeOptionalString(contentType?.split(";")[0]));
@@ -442,8 +474,8 @@ export async function saveMediaBuffer(
 * Gateway's claim-check offload path.
 *
 * Security:
- * - Rejects IDs containing path separators, "..", or null bytes to prevent
- *   directory traversal and path injection outside the resolved subdir.
+ * - Rejects IDs and subdirs containing path traversal, absolute paths, empty
+ *   segments, or null bytes to prevent path injection outside the media root.
 * - Verifies the resolved path is a regular file (not a symlink or directory)
 *   before returning it, matching the write-side MEDIA_FILE_MODE policy.
 *
@@ -455,10 +487,7 @@ export async function saveMediaBuffer(
 * @throws If the ID is unsafe, the file does not exist, or is not a
 *   regular file.
 */
-export async function resolveMediaBufferPath(
-  id: string,
-  subdir: "inbound" = "inbound",
-): Promise<string> {
+export async function resolveMediaBufferPath(id: string, subdir = "inbound"): Promise<string> {
  // Guard against path traversal and null-byte injection.
  //
  // - Separator checks: reject any ID containing "/" or "\" (covers all
@@ -477,7 +506,7 @@ export async function resolveMediaBufferPath(
    throw new Error(`resolveMediaBufferPath: unsafe media ID: ${JSON.stringify(id)}`);
  }

-  const dir = path.join(resolveMediaDir(), subdir);
+  const dir = resolveMediaScopedDir(subdir, "resolveMediaBufferPath");
  const resolved = path.join(dir, id);

  // Double-check that path.join didn't escape the intended directory.
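The subdir validation added above can be sketched standalone. This is a hypothetical helper (not the real `resolveMediaSubdir`, which also threads a `caller` label into errors) showing the same rules: reject null bytes, absolute paths on any platform's conventions, and any empty, `.`, or `..` segment before re-joining under the media root.

```typescript
import path from "node:path";

// Validate a media subdir and return a safe relative path, or throw.
function safeMediaSubdir(subdir: string): string {
  if (!subdir || subdir === ".") return ""; // empty means the media root itself
  if (
    subdir.includes("\0") ||
    path.isAbsolute(subdir) ||
    path.posix.isAbsolute(subdir) || // catch "/x" even when running on Windows
    path.win32.isAbsolute(subdir) // catch "C:\\x" even when running on POSIX
  ) {
    throw new Error(`unsafe media subdir: ${JSON.stringify(subdir)}`);
  }
  const segments = subdir.split(/[\\/]+/u);
  if (segments.some((s) => !s || s === "." || s === "..")) {
    throw new Error(`unsafe media subdir: ${JSON.stringify(subdir)}`);
  }
  return path.join(...segments);
}
```

Splitting on both separators before re-joining is what makes the `..` check robust: a traversal hidden behind a backslash on POSIX (`a\\..\\b`) is still seen as a `..` segment.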
@@ -1,3 +1,3 @@
// Narrow media store helpers for channel runtimes that do not need the full media runtime.

-export { saveMediaBuffer } from "../media/store.js";
+export { resolveMediaBufferPath, saveMediaBuffer } from "../media/store.js";
@@ -18,6 +18,9 @@ import type {
  OpenClawPluginDefinition,
  OpenClawPluginHttpRouteHandler,
  OpenClawPluginNodeHostCommand,
  OpenClawPluginNodeInvokePolicy,
  OpenClawPluginNodeInvokePolicyContext,
  OpenClawPluginNodeInvokePolicyResult,
  OpenClawPluginReloadRegistration,
  OpenClawPluginSecurityAuditCollector,
  OpenClawPluginSecurityAuditContext,
@@ -116,6 +119,9 @@ export type {
  MigrationSummary,
  OpenClawPluginApi,
  OpenClawPluginNodeHostCommand,
  OpenClawPluginNodeInvokePolicy,
  OpenClawPluginNodeInvokePolicyContext,
  OpenClawPluginNodeInvokePolicyResult,
  OpenClawPluginReloadRegistration,
  OpenClawPluginSecurityAuditCollector,
  OpenClawPluginSecurityAuditContext,
@@ -23,6 +23,7 @@ export function createTestPluginApi(api: TestPluginApiInput = {}): OpenClawPlugi
  registerGatewayDiscoveryService() {},
  registerReload() {},
  registerNodeHostCommand() {},
  registerNodeInvokePolicy() {},
  registerSecurityAuditCollector() {},
  registerConfigMigration() {},
  registerMigrationProvider() {},

@@ -26,6 +26,7 @@ export type BuildPluginApiParams = {
  | "registerCli"
  | "registerReload"
  | "registerNodeHostCommand"
  | "registerNodeInvokePolicy"
  | "registerSecurityAuditCollector"
  | "registerService"
  | "registerGatewayDiscoveryService"
@@ -84,6 +85,7 @@ const noopRegisterGatewayMethod: OpenClawPluginApi["registerGatewayMethod"] = ()
const noopRegisterCli: OpenClawPluginApi["registerCli"] = () => {};
const noopRegisterReload: OpenClawPluginApi["registerReload"] = () => {};
const noopRegisterNodeHostCommand: OpenClawPluginApi["registerNodeHostCommand"] = () => {};
const noopRegisterNodeInvokePolicy: OpenClawPluginApi["registerNodeInvokePolicy"] = () => {};
const noopRegisterSecurityAuditCollector: OpenClawPluginApi["registerSecurityAuditCollector"] =
  () => {};
const noopRegisterService: OpenClawPluginApi["registerService"] = () => {};
@@ -171,6 +173,7 @@ export function buildPluginApi(params: BuildPluginApiParams): OpenClawPluginApi
  registerCli: handlers.registerCli ?? noopRegisterCli,
  registerReload: handlers.registerReload ?? noopRegisterReload,
  registerNodeHostCommand: handlers.registerNodeHostCommand ?? noopRegisterNodeHostCommand,
  registerNodeInvokePolicy: handlers.registerNodeInvokePolicy ?? noopRegisterNodeInvokePolicy,
  registerSecurityAuditCollector:
    handlers.registerSecurityAuditCollector ?? noopRegisterSecurityAuditCollector,
  registerService: handlers.registerService ?? noopRegisterService,
@@ -45,6 +45,11 @@ const packageManifestContractTests: PackageManifestContractParams[] = [
    pluginLocalRuntimeDeps: ["@pierre/diffs", "@pierre/theme", "playwright-core"],
    mirroredRootRuntimeDeps: ["typebox"],
  },
  {
    pluginId: "file-transfer",
    pluginLocalRuntimeDeps: ["minimatch"],
    mirroredRootRuntimeDeps: ["typebox"],
  },
  {
    pluginId: "matrix",
    pluginLocalRuntimeDeps: [

@@ -3654,6 +3654,10 @@ module.exports = { id: "throws-after-import", register() {} };`,
    description: "failme",
    run: async () => ({ ok: true }),
  });
  api.registerNodeInvokePolicy({
    commands: ["failme.node"],
    handle: async () => ({ ok: true }),
  });
  api.registerSecurityAuditCollector({
    id: "failme",
    collect: async () => [],
@@ -3696,6 +3700,7 @@ module.exports = { id: "throws-after-import", register() {} };`,
  expect(getPluginCommandSpecs()).toEqual([]);
  expect(registry.reloads).toEqual([]);
  expect(registry.nodeHostCommands).toEqual([]);
  expect(registry.nodeInvokePolicies).toEqual([]);
  expect(registry.securityAuditCollectors).toEqual([]);
  expect(resolvePluginInteractiveNamespaceMatch("slack", "failme:payload")).toBeNull();
  expect(getContextEngineFactory("failme-context")).toBeUndefined();
@@ -339,6 +339,7 @@ type PluginRegistrySnapshot = {
  cliRegistrars: PluginRegistry["cliRegistrars"];
  reloads: NonNullable<PluginRegistry["reloads"]>;
  nodeHostCommands: NonNullable<PluginRegistry["nodeHostCommands"]>;
  nodeInvokePolicies: NonNullable<PluginRegistry["nodeInvokePolicies"]>;
  securityAuditCollectors: NonNullable<PluginRegistry["securityAuditCollectors"]>;
  services: PluginRegistry["services"];
  commands: PluginRegistry["commands"];
@@ -378,6 +379,7 @@ function snapshotPluginRegistry(registry: PluginRegistry): PluginRegistrySnapsho
  cliRegistrars: [...registry.cliRegistrars],
  reloads: [...(registry.reloads ?? [])],
  nodeHostCommands: [...(registry.nodeHostCommands ?? [])],
  nodeInvokePolicies: [...(registry.nodeInvokePolicies ?? [])],
  securityAuditCollectors: [...(registry.securityAuditCollectors ?? [])],
  services: [...registry.services],
  commands: [...registry.commands],
@@ -416,6 +418,7 @@ function restorePluginRegistry(registry: PluginRegistry, snapshot: PluginRegistr
  registry.cliRegistrars = snapshot.arrays.cliRegistrars;
  registry.reloads = snapshot.arrays.reloads;
  registry.nodeHostCommands = snapshot.arrays.nodeHostCommands;
  registry.nodeInvokePolicies = snapshot.arrays.nodeInvokePolicies;
  registry.securityAuditCollectors = snapshot.arrays.securityAuditCollectors;
  registry.services = snapshot.arrays.services;
  registry.commands = snapshot.arrays.commands;

@@ -31,6 +31,7 @@ export function createEmptyPluginRegistry(): PluginRegistry {
  cliRegistrars: [],
  reloads: [],
  nodeHostCommands: [],
  nodeInvokePolicies: [],
  securityAuditCollectors: [],
  services: [],
  gatewayDiscoveryServices: [],
@@ -230,6 +230,15 @@ export type PluginNodeHostCommandRegistration = {
  rootDir?: string;
};

export type PluginNodeInvokePolicyRegistration = {
  pluginId: string;
  pluginName?: string;
  policy: import("./types.js").OpenClawPluginNodeInvokePolicy;
  pluginConfig?: Record<string, unknown>;
  source: string;
  rootDir?: string;
};

export type PluginSecurityAuditCollectorRegistration = {
  pluginId: string;
  pluginName?: string;
@@ -399,6 +408,7 @@ export type PluginRegistry = {
  cliRegistrars: PluginCliRegistration[];
  reloads?: PluginReloadRegistration[];
  nodeHostCommands?: PluginNodeHostCommandRegistration[];
  nodeInvokePolicies?: PluginNodeInvokePolicyRegistration[];
  securityAuditCollectors?: PluginSecurityAuditCollectorRegistration[];
  services: PluginServiceRegistration[];
  gatewayDiscoveryServices: PluginGatewayDiscoveryServiceRegistration[];
@@ -147,6 +147,7 @@ import type {
  OpenClawPluginHttpRouteParams,
  OpenClawPluginHookOptions,
  OpenClawPluginNodeHostCommand,
  OpenClawPluginNodeInvokePolicy,
  OpenClawPluginReloadRegistration,
  OpenClawPluginSecurityAuditCollector,
  MediaUnderstandingProviderPlugin,
@@ -1248,6 +1249,57 @@ export function createPluginRegistry(registryParams: PluginRegistryParams) {
    });
  };

  const registerNodeInvokePolicy = (
    record: PluginRecord,
    policy: OpenClawPluginNodeInvokePolicy,
    pluginConfig?: Record<string, unknown>,
  ) => {
    const commands = Array.isArray(policy.commands)
      ? policy.commands.map((command) => command.trim()).filter(Boolean)
      : [];
    if (commands.length === 0) {
      pushDiagnostic({
        level: "error",
        pluginId: record.id,
        source: record.source,
        message: "node invoke policy registration missing commands",
      });
      return;
    }
    if (typeof policy.handle !== "function") {
      pushDiagnostic({
        level: "error",
        pluginId: record.id,
        source: record.source,
        message: `node invoke policy registration missing handler: ${commands.join(", ")}`,
      });
      return;
    }
    registry.nodeInvokePolicies ??= [];
    for (const command of commands) {
      const existing = registry.nodeInvokePolicies.find((entry) =>
        entry.policy.commands.includes(command),
      );
      if (existing) {
        pushDiagnostic({
          level: "error",
          pluginId: record.id,
          source: record.source,
          message: `node invoke policy already registered for ${command} (${existing.pluginId})`,
        });
        return;
      }
    }
    registry.nodeInvokePolicies.push({
      pluginId: record.id,
      pluginName: record.name,
      policy: { ...policy, commands },
      pluginConfig,
      source: record.source,
      rootDir: record.rootDir,
    });
  };

  const registerSecurityAuditCollector = (
    record: PluginRecord,
    collector: OpenClawPluginSecurityAuditCollector,
@@ -2076,6 +2128,8 @@ export function createPluginRegistry(registryParams: PluginRegistryParams) {
  registerTextTransforms: (transforms) => registerTextTransforms(record, transforms),
  registerReload: (registration) => registerReload(record, registration),
  registerNodeHostCommand: (command) => registerNodeHostCommand(record, command),
  registerNodeInvokePolicy: (policy) =>
    registerNodeInvokePolicy(record, policy, params.pluginConfig),
  registerSecurityAuditCollector: (collector) =>
    registerSecurityAuditCollector(record, collector),
  registerInteractiveHandler: (registration) => {
@@ -2045,9 +2045,89 @@ export type OpenClawPluginReloadRegistration = {
export type OpenClawPluginNodeHostCommand = {
  command: string;
  cap?: string;
  dangerous?: boolean;
  handle: (paramsJSON?: string | null) => Promise<string>;
};

export type OpenClawPluginNodeInvokeTransportResult =
  | {
      ok: true;
      payload?: unknown;
      payloadJSON?: string | null;
    }
  | {
      ok: false;
      code?: string;
      message: string;
      details?: Record<string, unknown>;
    };

export type OpenClawPluginNodeInvokeApprovalDecision = "allow-once" | "allow-always" | "deny";

export type OpenClawPluginNodeInvokePolicyApprovalRuntime = {
  request: (input: {
    title: string;
    description: string;
    severity?: "info" | "warning" | "critical";
    toolName?: string;
    toolCallId?: string;
    agentId?: string;
    sessionKey?: string;
    timeoutMs?: number;
  }) => Promise<{
    id?: string;
    decision?: OpenClawPluginNodeInvokeApprovalDecision | null;
  }>;
};

export type OpenClawPluginNodeInvokePolicyContext = {
  nodeId: string;
  command: string;
  params: unknown;
  timeoutMs?: number;
  idempotencyKey?: string;
  config: OpenClawConfig;
  pluginConfig?: Record<string, unknown>;
  node?: {
    nodeId: string;
    displayName?: string;
    platform?: string;
    deviceFamily?: string;
    commands?: string[];
  };
  client?: {
    connId?: string;
    scopes?: string[];
  } | null;
  approvals?: OpenClawPluginNodeInvokePolicyApprovalRuntime;
  invokeNode: (input?: {
    params?: unknown;
    timeoutMs?: number;
    idempotencyKey?: string;
  }) => Promise<OpenClawPluginNodeInvokeTransportResult>;
};

export type OpenClawPluginNodeInvokePolicyResult =
  | {
      ok: true;
      payload?: unknown;
      payloadJSON?: string | null;
    }
  | {
      ok: false;
      message: string;
      code?: string;
      details?: Record<string, unknown>;
      unavailable?: boolean;
    };

export type OpenClawPluginNodeInvokePolicy = {
  commands: string[];
  handle: (
    ctx: OpenClawPluginNodeInvokePolicyContext,
  ) => Promise<OpenClawPluginNodeInvokePolicyResult> | OpenClawPluginNodeInvokePolicyResult;
};

export type OpenClawPluginSecurityAuditContext = {
  config: OpenClawConfig;
  sourceConfig: OpenClawConfig;
@@ -2318,6 +2398,7 @@ export type OpenClawPluginApi = {
  ) => void;
  registerReload: (registration: OpenClawPluginReloadRegistration) => void;
  registerNodeHostCommand: (command: OpenClawPluginNodeHostCommand) => void;
  registerNodeInvokePolicy: (policy: OpenClawPluginNodeInvokePolicy) => void;
  registerSecurityAuditCollector: (collector: OpenClawPluginSecurityAuditCollector) => void;
  registerService: (service: OpenClawPluginService) => void;
  /** Register a local gateway discovery advertiser such as mDNS/Bonjour. */

@@ -12,6 +12,7 @@ import { resolveGatewayAuth } from "../gateway/auth.js";
import { resolveAllowedAgentIds } from "../gateway/hooks-policy.js";
import {
  DEFAULT_DANGEROUS_NODE_COMMANDS,
  listDangerousPluginNodeCommands,
  resolveNodeCommandAllowlist,
} from "../gateway/node-command-policy.js";
import {
@@ -868,9 +869,10 @@ export function collectNodeDangerousAllowCommandFindings(
  }

  const deny = new Set((cfg.gateway?.nodes?.denyCommands ?? []).map(normalizeNodeCommand));
-  const dangerousAllowed = DEFAULT_DANGEROUS_NODE_COMMANDS.filter(
-    (cmd) => allow.has(cmd) && !deny.has(cmd),
-  );
+  const dangerousAllowed = [
+    ...DEFAULT_DANGEROUS_NODE_COMMANDS,
+    ...listDangerousPluginNodeCommands(),
+  ].filter((cmd) => allow.has(cmd) && !deny.has(cmd));
  if (dangerousAllowed.length === 0) {
    return findings;
  }
@@ -881,7 +883,7 @@ export function collectNodeDangerousAllowCommandFindings(
  title: "Dangerous node commands explicitly enabled",
  detail:
    `gateway.nodes.allowCommands includes: ${dangerousAllowed.join(", ")}. ` +
-    "These commands can trigger high-impact device actions (camera/screen/contacts/calendar/reminders/SMS).",
+    "These commands can trigger high-impact device actions or read node files (camera/screen/contacts/calendar/reminders/SMS/file).",
  remediation:
    "Remove these entries from gateway.nodes.allowCommands (recommended). " +
    "If you keep them, treat gateway auth as full operator access and keep gateway exposure local/tailnet-only.",