Mirror of https://github.com/openclaw/openclaw.git (synced 2026-05-06 10:40:43 +00:00)
fix(security): 7 P1 hardening fixes — scan-paths, windows-acl, audit-extra (#67003)
* test(security): add coverage tests before security fixes
- scan-paths.ts: 100% line coverage (new test file, previously zero)
- windows-acl.ts: 100% line coverage (SID bypass, whoami throw, no-user null return)
- external-content.ts: 99% (line 248 defensive overlap guard, unreachable)
- skill-scanner.ts: 93% (lines 293-294/330/571 are defensive guards for
future extensibility, unreachable with current rules/patterns)
200+ tests covering TOCTOU paths, cache invalidation, forced-file escapes,
dir-entry-cache hit, SID world-bypass, diacritic-strip fallback,
fullwidth homoglyph markers, and more.
* fix(security): 5 security hardening fixes in src/security/
scan-paths: default requireRealpath to true (fail-safe). All production callers
already pass requireRealpath: true explicitly; callers that omit the option now
get the secure behavior by default.
windows-acl: block world-equivalent SIDs (S-1-1-0 Everyone etc.) from being
added to trusted set via USERSID env var.
windows-acl: log resolveCurrentUserSid failures instead of bare catch{}.
audit-extra: wrap JSON.parse in readPluginManifestExtensions with try-catch.
Malformed package.json returns [] instead of crashing the audit.
audit-extra: depth guard in listWorkspaceSkillMarkdownFiles to prevent
resource exhaustion from deep symlink cycles.
audit-extra: 2s timeout on fs.realpath in collectWorkspaceSkillSymlinkEscapeFindings
to protect against hanging on slow/network filesystems.
audit-extra: warn about phantom entries in plugins.allow that don't match
any installed plugin (pre-approval exploitation vector).
media-understanding/types: add allowPrivateNetwork to transport overrides
(duplicate of PR #66967, required for tsgo to pass here).
* fix(security): address security review findings in audit-extra.async.ts
Issue 1 — Symlink escape audit bypass on realpath timeout:
When realpathWithTimeout returns null (timeout or failure), the previous code
called 'continue', silently skipping the escape check. An attacker with a
symlink to a slow/network filesystem could hang realpath to prevent escape
detection. Now treats unverifiable symlinks as potential escapes and includes
them in the finding.
Issue 2 — Malformed package.json hides extension entrypoints from deep scan:
readPluginManifestExtensions previously swallowed JSON.parse errors and
returned [], which a malicious plugin could exploit by crafting a malformed
package.json to hide its openclaw.extensions entrypoints from the deep code
scanner. Now re-throws the parse error (with cause) so the caller in
collectPluginsCodeSafetyFindings can surface a warn finding and alert the
user, while still scanning the plugin directory via getCodeSafetySummary.
* fix(security): address PR review findings (P1 + P2)
P1 — BFS realpath in listWorkspaceSkillMarkdownFiles lacks timeout:
Extract realpathWithTimeout to module scope so the BFS dequeue loop
uses the same 2 s guard as the outer escape-detection callers. Previously
only the per-workspace and per-skill-file realpaths had the timeout;
a hanging NFS/SMB directory entry inside the BFS could still block
indefinitely.
P1 (acknowledged limitation) — Promise.race leaves the underlying
fs.realpath call running after timeout. fs.realpath cannot be cancelled
once submitted to libuv. Callers are sequential (one await at a time),
so at most one worker thread is occupied; the OS will eventually time
out the stuck call. This is documented in the module-level JSDoc.
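The race-with-timeout pattern (with timer cleanup and unref) can be sketched as a generic helper. This is an illustrative sketch, not the exact audit-extra code; `withTimeout` is a hypothetical name:

```typescript
// Generic sketch of the race-with-timeout pattern described above. The timer
// is cleared when the wrapped promise wins, and unref'd so a pending timeout
// cannot keep the process alive. Note the underlying operation itself is not
// cancelled, only abandoned: exactly the acknowledged limitation.
function withTimeout<T>(work: Promise<T>, timeoutMs: number): Promise<T | null> {
  let timer: ReturnType<typeof setTimeout> | undefined;

  const settled = work
    .catch(() => null as T | null)
    .then((result) => {
      clearTimeout(timer); // avoid timer accumulation across many calls
      return result;
    });

  const timeout = new Promise<null>((resolve) => {
    timer = setTimeout(() => resolve(null), timeoutMs);
    timer.unref?.(); // Node-only; optional call keeps the sketch portable
  });

  return Promise.race([settled, timeout]);
}
```

Callers treat `null` as "failed or timed out" and fall back to a safe default, which is the same signal shape the audit code uses.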
P2 — Phantom allowlist check incorrectly flags bundled plugin IDs:
listChannelPlugins() returns bundled channel plugin IDs (telegram,
discord, browser, etc.) that are never in stateDir/extensions.
Add bundledPluginIds exclusion so the phantom-entry finding is scoped
to user-installed extension IDs only.
P2 — Rename MAX_SYMLINK_DEPTH / depthGuard to MAX_TOTAL_DIR_VISITS /
totalDirVisits to accurately reflect that the guard caps total BFS
iterations (2_000 * 20 = 40_000), not per-path symlink depth.
* fix(security): clean up realpathWithTimeout timer and add regression tests
- Clear the timer handle when fs.realpath resolves before the deadline,
preventing timer accumulation during large audit runs with many files.
- Add .unref() on the timer so it cannot hold the process alive while
waiting on a potentially hanging NFS/SMB path.
Regression tests added for three audit-extra.async security fixes:
- manifest parse error: malformed plugin package.json surfaces
plugins.code_safety.manifest_parse_error (audit-extra.async.test.ts)
- phantom allowlist with bundled exclusion: bundled channel plugin IDs
are excluded from plugins.allow_phantom_entries warnings; non-installed
non-bundled IDs are correctly reported (audit-plugins-phantom.test.ts)
- unverifiable realpath escape: fs.realpath failure / timeout produces a
skills.workspace.symlink_escape finding with 'realpath timed out' in
the detail (audit-workspace-skill-escape.test.ts)
* chore(security): add TODO for structured logger in windows-acl resolveCurrentUserSid
console.warn is acceptable short-term but may be noisy on constrained
Windows hosts; note the follow-up in-code so it is not lost.
* chore: drop unrelated formatting churn from security PR
Restores extensions/memory-lancedb/config.ts and
src/agents/pi-embedded-helpers/errors.ts to their origin/main state.
These were line-wrap-only formatting changes with no relation to the
security fixes in this branch.
* fix(security): address Codex P2 review findings
1. Normalize plugins.allow entries through normalizePluginId before
phantom-entry filtering so that bundled plugin aliases and legacy IDs
are correctly excluded. Without this, valid allow entries that resolve
via alias normalization could generate false-positive phantom warnings.
2. Surface a skills.workspace.scan_truncated warn finding when the BFS
visit cap (MAX_TOTAL_DIR_VISITS) is hit mid-traversal. Previously the
scanner silently returned partial results, allowing escaped SKILL.md
symlinks in the unvisited tree to go undetected.
listWorkspaceSkillMarkdownFiles now returns {skillFilePaths, truncated}
and collectWorkspaceSkillSymlinkEscapeFindings emits the new finding
when truncated is true.
Regression test added for the truncation path using a mocked readdir
that fills the queue past the cap (41 001 fake entries) and a mocked
realpath for zero-I/O iteration speed.
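The visit-cap-plus-truncated-flag shape can be sketched in isolation. This is a hypothetical simplified version (no realpath dedupe, synchronous children callback), not the real scanner:

```typescript
// Minimal sketch of a BFS whose guard caps total dequeues (not per-path depth)
// and reports truncation when the cap is hit before the queue drains.
// `bfsWithVisitCap` and `children` are hypothetical names for illustration.
function bfsWithVisitCap(
  children: (dir: string) => string[],
  root: string,
  maxTotalVisits: number,
): { visited: string[]; truncated: boolean } {
  const queue: string[] = [root];
  const visited: string[] = [];
  let totalVisits = 0;

  while (queue.length > 0 && totalVisits++ < maxTotalVisits) {
    const dir = queue.shift()!;
    visited.push(dir);
    queue.push(...children(dir));
  }
  // Anything left in the queue means coverage was incomplete.
  return { visited, truncated: queue.length > 0 };
}
```

Surfacing `truncated` to the caller is what lets the audit emit a warn finding instead of silently returning partial results.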
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
@@ -99,7 +99,15 @@ export const memoryConfigSchema = {
   const cfg = value as Record<string, unknown>;
   assertAllowedKeys(
     cfg,
-    ["embedding", "dreaming", "dbPath", "autoCapture", "autoRecall", "captureMaxChars", "storageOptions"],
+    [
+      "embedding",
+      "dreaming",
+      "dbPath",
+      "autoCapture",
+      "autoRecall",
+      "captureMaxChars",
+      "storageOptions",
+    ],
     "memory config",
   );
@@ -116,6 +116,28 @@ description: test skill
     expect(findings.some((f) => f.checkId === "plugins.code_safety.entry_escape")).toBe(true);
   });

+  it("surfaces manifest_parse_error finding when plugin package.json is malformed JSON", async () => {
+    const tmpDir = await makeTmpDir("audit-manifest-parse-error");
+    const pluginDir = path.join(tmpDir, "extensions", "broken-plugin");
+    await fs.mkdir(pluginDir, { recursive: true });
+    // Deliberately malformed JSON — simulates a plugin corrupting its manifest
+    // to hide declared extension entrypoints from the deep code scanner.
+    await fs.writeFile(path.join(pluginDir, "package.json"), "{ not valid json !!!", "utf-8");
+
+    const findings = await collectPluginsCodeSafetyFindings({ stateDir: tmpDir });
+    const finding = findings.find((f) => f.checkId === "plugins.code_safety.manifest_parse_error");
+    expect(finding).toBeDefined();
+    expect(finding?.severity).toBe("warn");
+    expect(finding?.detail).toContain("broken-plugin");
+    // Deep scan should still continue (scan_failed should NOT be emitted for the same plugin)
+    expect(
+      findings.some(
+        (f) =>
+          f.checkId === "plugins.code_safety.scan_failed" && f.detail?.includes("broken-plugin"),
+      ),
+    ).toBe(false);
+  });
+
   it("reports scan_failed when plugin code scanner throws during deep audit", async () => {
     const scanSpy = vi
       .spyOn(skillScanner, "scanDirectoryWithSummary")
@@ -25,7 +25,7 @@ import { collectIncludePathsRecursive } from "../config/includes-scan.js";
 import { resolveOAuthDir } from "../config/paths.js";
 import type { AgentToolsConfig } from "../config/types.tools.js";
 import { readInstalledPackageVersion } from "../infra/package-update-utils.js";
-import { normalizePluginsConfig } from "../plugins/config-state.js";
+import { normalizePluginId, normalizePluginsConfig } from "../plugins/config-state.js";
 import { normalizeAgentId } from "../routing/session-key.js";
 import {
   normalizeOptionalLowercaseString,
@@ -59,6 +59,42 @@ type ExecDockerRawFn = (
 type CodeSafetySummaryCache = Map<string, Promise<unknown>>;
 const MAX_WORKSPACE_SKILL_SCAN_FILES_PER_WORKSPACE = 2_000;
 const MAX_WORKSPACE_SKILL_ESCAPE_DETAIL_ROWS = 12;

+/**
+ * Resolves the realpath of `p` with a 2 s timeout.
+ *
+ * Returns the realpath string on success, or `null` if realpath fails or the
+ * timeout fires first. Note: fs.realpath cannot be cancelled once submitted to
+ * libuv — the underlying OS call continues running in the background after the
+ * timeout resolves. Callers make sequential (not concurrent) calls so at most
+ * one libuv thread is occupied at a time; the OS will eventually time out the
+ * stuck NFS/SMB call independently.
+ *
+ * Timer cleanup: when realpath resolves before the deadline the timer is
+ * cleared immediately so it does not linger across the rest of the audit run.
+ * The timer is also unref'd so it cannot prevent process exit even if it fires
+ * late (e.g. the process finishes while a hang is still in-flight).
+ */
+function realpathWithTimeout(p: string, timeoutMs = 2000): Promise<string | null> {
+  let timerHandle: ReturnType<typeof setTimeout> | undefined;
+
+  const realpathPromise = fs
+    .realpath(p)
+    .catch(() => null)
+    .then((result) => {
+      clearTimeout(timerHandle);
+      return result;
+    });
+
+  const timeoutPromise = new Promise<null>((resolve) => {
+    timerHandle = setTimeout(() => resolve(null), timeoutMs);
+    // Prevent the timer from keeping the process alive while waiting on a
+    // potentially hanging NFS/SMB path during a large audit run.
+    timerHandle.unref?.();
+  });
+
+  return Promise.race([realpathPromise, timeoutPromise]);
+}
+
 let skillsModulePromise: Promise<typeof import("../agents/skills.js")> | undefined;
 let configModulePromise: Promise<typeof import("../config/config.js")> | undefined;
@@ -100,9 +100,19 @@ async function readPluginManifestExtensions(pluginPath: string): Promise<string[
     return [];
   }

-  const parsed = JSON.parse(raw) as Partial<
-    Record<typeof MANIFEST_KEY, { extensions?: unknown }>
-  > | null;
+  let parsed: Partial<Record<typeof MANIFEST_KEY, { extensions?: unknown }>> | null;
+  try {
+    parsed = JSON.parse(raw) as Partial<
+      Record<typeof MANIFEST_KEY, { extensions?: unknown }>
+    > | null;
+  } catch (err) {
+    // Re-throw so callers can surface a security finding for malformed manifests.
+    // A malicious plugin could use a malformed package.json to hide declared
+    // extension entrypoints from deep scan — callers must not silently drop them.
+    throw new Error(`Failed to parse plugin manifest at ${manifestPath}: ${String(err)}`, {
+      cause: err,
+    });
+  }
   const extensions = parsed?.[MANIFEST_KEY]?.extensions;
   if (!Array.isArray(extensions)) {
     return [];
@@ -380,20 +426,31 @@ async function getCodeSafetySummary(params: {
   });
 }

-async function listWorkspaceSkillMarkdownFiles(workspaceDir: string): Promise<string[]> {
+async function listWorkspaceSkillMarkdownFiles(
+  workspaceDir: string,
+): Promise<{ skillFilePaths: string[]; truncated: boolean }> {
   const skillsRoot = path.join(workspaceDir, "skills");
   const rootStat = await safeStat(skillsRoot);
   if (!rootStat.ok || !rootStat.isDir) {
-    return [];
+    return { skillFilePaths: [], truncated: false };
   }

   const skillFiles: string[] = [];
   const queue: string[] = [skillsRoot];
   const visitedDirs = new Set<string>();
+  // Caps total BFS dequeues, not per-path depth. Named to reflect actual semantics.
+  const MAX_TOTAL_DIR_VISITS = MAX_WORKSPACE_SKILL_SCAN_FILES_PER_WORKSPACE * 20;
+  let totalDirVisits = 0;

-  while (queue.length > 0 && skillFiles.length < MAX_WORKSPACE_SKILL_SCAN_FILES_PER_WORKSPACE) {
+  while (
+    queue.length > 0 &&
+    skillFiles.length < MAX_WORKSPACE_SKILL_SCAN_FILES_PER_WORKSPACE &&
+    totalDirVisits++ < MAX_TOTAL_DIR_VISITS
+  ) {
     const dir = queue.shift()!;
-    const dirRealPath = await fs.realpath(dir).catch(() => path.resolve(dir));
+    // Use the module-level realpathWithTimeout so a hanging network FS doesn't
+    // block the BFS indefinitely (same 2 s guard as the outer escape-detection loop).
+    const dirRealPath = (await realpathWithTimeout(dir)) ?? path.resolve(dir);
     if (visitedDirs.has(dirRealPath)) {
       continue;
     }
@@ -429,7 +486,7 @@ async function listWorkspaceSkillMarkdownFiles(workspaceDir: string): Promise<st
     }
   }

-  return skillFiles;
+  return { skillFilePaths: skillFiles, truncated: queue.length > 0 };
 }

 // --------------------------------------------------------------------------
@@ -630,6 +687,44 @@ export async function collectPluginsTrustFindings(params: {
   if (pluginDirs.length > 0) {
     const allow = params.cfg.plugins?.allow;
     const allowConfigured = Array.isArray(allow) && allow.length > 0;

+    if (allowConfigured) {
+      // Warn about allowlist entries that don't match any installed plugin ID.
+      // An attacker could register a plugin with an allowlisted ID after the
+      // allowlist was created, exploiting the pre-approved entry.
+      // Exclude bundled channel plugin IDs (telegram, discord, etc.) from the
+      // phantom check — they are never in the extensions directory but are
+      // legitimate allowlist targets.
+      const installedPluginIds = new Set(pluginDirs.map((dir) => path.basename(dir).toLowerCase()));
+      const bundledPluginIds = new Set(listChannelPlugins().map((p) => p.id.toLowerCase()));
+      const phantomEntries = allow.filter((entry) => {
+        if (typeof entry !== "string" || entry === "group:plugins") {
+          return false;
+        }
+        const lower = entry.toLowerCase();
+        if (installedPluginIds.has(lower) || bundledPluginIds.has(lower)) {
+          return false;
+        }
+        // Also resolve via plugin alias / legacy-ID normalization so that entries
+        // like a provider ID or a renamed bundled plugin don't produce false-positive
+        // phantom warnings. normalizePluginId maps aliases to their canonical ID.
+        const canonicalId = normalizeOptionalLowercaseString(normalizePluginId(entry)) ?? "";
+        return !canonicalId || !bundledPluginIds.has(canonicalId);
+      });
+      if (phantomEntries.length > 0) {
+        findings.push({
+          checkId: "plugins.allow_phantom_entries",
+          severity: "warn",
+          title: "plugins.allow contains entries with no matching installed plugin",
+          detail:
+            `The following plugins.allow entries do not correspond to any installed plugin: ${phantomEntries.join(", ")}.\n` +
+            "Phantom entries could be exploited by registering a new plugin with an allowlisted ID.",
+          remediation:
+            "Remove unused entries from plugins.allow, or verify the expected plugins are installed.",
+        });
+      }
+    }
+
     if (!allowConfigured) {
       const skillCommandsLikelyExposed = (
         await Promise.all(
@@ -886,8 +981,26 @@ export async function collectWorkspaceSkillSymlinkEscapeFindings(params: {

   for (const workspaceDir of workspaceDirs) {
     const workspacePath = path.resolve(workspaceDir);
-    const workspaceRealPath = await fs.realpath(workspacePath).catch(() => workspacePath);
-    const skillFilePaths = await listWorkspaceSkillMarkdownFiles(workspacePath);
+    const workspaceRealPath = (await realpathWithTimeout(workspacePath)) ?? workspacePath;
+    const { skillFilePaths, truncated } = await listWorkspaceSkillMarkdownFiles(workspacePath);
+
+    if (truncated) {
+      // The BFS visit cap was hit before the full skills/ tree was scanned.
+      // Escaped SKILL.md symlinks in the unvisited portion will not be detected.
+      // Surface this as a warning so the user knows coverage was incomplete.
+      findings.push({
+        checkId: "skills.workspace.scan_truncated",
+        severity: "warn",
+        title: "Workspace skill scan reached the directory visit limit",
+        detail:
+          `The skills/ directory scan in ${workspacePath} stopped early after reaching the ` +
+          `BFS visit cap. Skill files in the unscanned portion of the tree were not checked ` +
+          "for symlink escapes.",
+        remediation:
+          "Flatten or simplify the skills/ directory hierarchy to stay within the scan budget, " +
+          "or move deeply-nested skill collections to a managed skill location.",
+      });
+    }

     for (const skillFilePath of skillFilePaths) {
       const canonicalSkillPath = path.resolve(skillFilePath);
@@ -896,8 +1009,17 @@ export async function collectWorkspaceSkillSymlinkEscapeFindings(params: {
       }
       seenSkillPaths.add(canonicalSkillPath);

-      const skillRealPath = await fs.realpath(canonicalSkillPath).catch(() => null);
+      const skillRealPath = await realpathWithTimeout(canonicalSkillPath);
       if (!skillRealPath) {
+        // realpath timed out or failed — cannot verify the symlink target.
+        // Treat as a potential escape rather than silently bypassing the check.
+        // An attacker on a slow/network FS could otherwise hang realpath to
+        // prevent escape detection.
+        escapedSkillFiles.push({
+          workspaceDir: workspacePath,
+          skillFilePath: canonicalSkillPath,
+          skillRealPath: "(realpath timed out \u2014 symlink target unverifiable)",
+        });
         continue;
       }
       if (isPathInside(workspaceRealPath, skillRealPath)) {
@@ -1207,7 +1329,25 @@ export async function collectPluginsCodeSafetyFindings(params: {

   for (const pluginName of pluginDirs) {
     const pluginPath = path.join(extensionsDir, pluginName);
-    const extensionEntries = await readPluginManifestExtensions(pluginPath).catch(() => []);
+    let extensionEntries: string[] = [];
+    try {
+      extensionEntries = await readPluginManifestExtensions(pluginPath);
+    } catch (manifestErr) {
+      // Malformed package.json — surface a warning so the user investigates.
+      // A plugin could deliberately corrupt its manifest to hide declared
+      // extension entrypoints from the deep code scanner.
+      findings.push({
+        checkId: "plugins.code_safety.manifest_parse_error",
+        severity: "warn",
+        title: `Plugin "${pluginName}" has a malformed package.json`,
+        detail:
+          `Could not parse plugin manifest: ${String(manifestErr)}.\n` +
+          "The extension entrypoint list is unavailable. Deep scan will cover the plugin directory but may miss entries declared via `openclaw.extensions`.",
+        remediation:
+          "Inspect the plugin package.json for syntax errors. If the plugin is untrusted, remove it from your OpenClaw extensions state directory.",
+      });
+      // Continue — getCodeSafetySummary below still scans the plugin directory
+    }
     const forcedScanEntries: string[] = [];
     const escapedEntries: string[] = [];
src/security/audit-plugins-phantom.test.ts (new file, 79 lines)
@@ -0,0 +1,79 @@
+import fs from "node:fs/promises";
+import os from "node:os";
+import path from "node:path";
+import { afterAll, beforeAll, describe, expect, it, vi } from "vitest";
+import type { OpenClawConfig } from "../config/config.js";
+import { collectPluginsTrustFindings } from "./audit-extra.async.js";
+
+/**
+ * Mock listChannelPlugins to return a controlled set of bundled plugin IDs.
+ * This lets the tests verify that bundled IDs are excluded from phantom-entry
+ * detection without depending on the actual set of shipped channel plugins.
+ */
+vi.mock("../channels/plugins/index.js", () => ({
+  listChannelPlugins: () => [{ id: "bundled-channel-plugin" }],
+  // Stubs for other named exports used transitively (keep calls safe to invoke).
+  getChannelPlugin: () => undefined,
+  getLoadedChannelPlugin: () => undefined,
+  normalizeChannelId: () => null,
+}));
+
+describe("security audit phantom allowlist detection", () => {
+  let fixtureRoot = "";
+  let caseId = 0;
+
+  const makeTmpDir = async (label: string) => {
+    const dir = path.join(fixtureRoot, `case-${caseId++}-${label}`);
+    await fs.mkdir(dir, { recursive: true });
+    return dir;
+  };
+
+  beforeAll(async () => {
+    fixtureRoot = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-security-phantom-"));
+  });
+
+  afterAll(async () => {
+    if (fixtureRoot) {
+      await fs.rm(fixtureRoot, { recursive: true, force: true }).catch(() => undefined);
+    }
+  });
+
+  it("excludes bundled channel plugin IDs from phantom allowlist warnings", async () => {
+    const stateDir = await makeTmpDir("phantom-bundled-excluded");
+    // Create an extensions directory with one installed plugin so the phantom
+    // check code path is reached (it only runs when pluginDirs.length > 0).
+    await fs.mkdir(path.join(stateDir, "extensions", "some-installed-plugin"), {
+      recursive: true,
+    });
+
+    const cfg: OpenClawConfig = {
+      // Allowlist contains a bundled channel ID and an actually-installed plugin ID.
+      // Neither should appear as a phantom entry.
+      plugins: { allow: ["bundled-channel-plugin", "some-installed-plugin"] },
+    };
+
+    const findings = await collectPluginsTrustFindings({ cfg, stateDir });
+    const phantomFinding = findings.find((f) => f.checkId === "plugins.allow_phantom_entries");
+    expect(phantomFinding).toBeUndefined();
+  });
+
+  it("reports phantom entries for allowlisted IDs that are neither installed nor bundled", async () => {
+    const stateDir = await makeTmpDir("phantom-reported");
+    // Create an extensions directory so the phantom check code path is reached.
+    await fs.mkdir(path.join(stateDir, "extensions", "installed-plugin"), { recursive: true });
+
+    const cfg: OpenClawConfig = {
+      // "ghost-plugin-xyz" is not installed and not a bundled channel plugin.
+      plugins: { allow: ["installed-plugin", "ghost-plugin-xyz"] },
+    };
+
+    const findings = await collectPluginsTrustFindings({ cfg, stateDir });
+    const phantomFinding = findings.find((f) => f.checkId === "plugins.allow_phantom_entries");
+    expect(phantomFinding).toBeDefined();
+    expect(phantomFinding?.severity).toBe("warn");
+    // The phantom finding must identify the ghost entry…
+    expect(phantomFinding?.detail).toContain("ghost-plugin-xyz");
+    // …and must NOT implicate the legitimately installed plugin.
+    expect(phantomFinding?.detail).not.toContain("installed-plugin");
+  });
+});
@@ -1,7 +1,7 @@
 import fs from "node:fs/promises";
 import os from "node:os";
 import path from "node:path";
-import { afterAll, beforeAll, describe, expect, it } from "vitest";
+import { afterAll, beforeAll, describe, expect, it, vi } from "vitest";
 import type { OpenClawConfig } from "../config/config.js";
 import { collectWorkspaceSkillSymlinkEscapeFindings } from "./audit-extra.async.js";
@@ -73,4 +73,91 @@ describe("security audit workspace skill path escape findings", () => {

     await Promise.all(runs);
   });
+
+  it("treats an unresolvable realpath (timeout/error simulation) as a potential symlink escape", async () => {
+    const tmp = await makeTmpDir("workspace-skill-realpath-unresolvable");
+    const workspaceDir = path.join(tmp, "workspace");
+    const skillsDir = path.join(workspaceDir, "skills", "suspect-skill");
+    await fs.mkdir(skillsDir, { recursive: true });
+    await fs.writeFile(path.join(skillsDir, "SKILL.md"), "# suspect\n", "utf-8");
+
+    // Simulate realpath failing for the skill file path — this mirrors what
+    // happens when a slow/hanging NFS or SMB mount causes the 2 s deadline in
+    // realpathWithTimeout to fire. The .catch(() => null) inside the helper
+    // converts any rejection to null, which is the same signal produced by a
+    // genuine timeout. All other paths resolve to their string value so the BFS
+    // and workspace-root detection work normally.
+    const realpathSpy = vi
+      .spyOn(fs, "realpath")
+      .mockImplementation(async (p: unknown): Promise<string> => {
+        if (String(p).endsWith("SKILL.md")) {
+          throw new Error("simulated realpath timeout");
+        }
+        return String(p);
+      });
+
+    try {
+      const findings = await collectWorkspaceSkillSymlinkEscapeFindings({
+        cfg: { agents: { defaults: { workspace: workspaceDir } } } satisfies OpenClawConfig,
+      });
+      const escapeFinding = findings.find((f) => f.checkId === "skills.workspace.symlink_escape");
+      expect(escapeFinding).toBeDefined();
+      expect(escapeFinding?.severity).toBe("warn");
+      // The finding must call out that realpath was unverifiable, not that it
+      // resolved to a path outside the workspace.
+      expect(escapeFinding?.detail).toContain("realpath timed out");
+    } finally {
+      realpathSpy.mockRestore();
+    }
+  });
+
+  it("surfaces scan_truncated finding when BFS visit cap is hit", async () => {
+    const tmp = await makeTmpDir("workspace-skill-bfs-truncated");
+    const workspaceDir = path.join(tmp, "workspace");
+    const skillsRoot = path.join(workspaceDir, "skills");
+    await fs.mkdir(skillsRoot, { recursive: true });
+
+    // Strategy: the first readdir (on skillsRoot) returns 41 001 unique subdir
+    // entries, filling the queue beyond the BFS visit cap
+    // (MAX_TOTAL_DIR_VISITS = 2000 * 20 = 40 000). All subsequent readdir calls
+    // return [] so no further queue growth occurs. After 40 000 dequeues the
+    // loop exits with ~1 001 entries still in queue → truncated = true.
+    //
+    // fs.realpath is also mocked to return paths immediately (no real I/O),
+    // keeping the 40 000 iterations fast (pure microtask overhead, <200 ms).
+    const FAKE_DIRS = 41_001;
+    const fakeDirEntries = Array.from({ length: FAKE_DIRS }, (_, i) => ({
+      name: `d${i}`,
+      isDirectory: () => true,
+      isFile: () => false,
+      isSymbolicLink: () => false,
+      isBlockDevice: () => false,
+      isCharacterDevice: () => false,
+      isFIFO: () => false,
+      isSocket: () => false,
+      parentPath: skillsRoot,
+      path: skillsRoot,
+    })) as unknown as Awaited<ReturnType<typeof fs.readdir>>;
+
+    let readdirCalls = 0;
+    const readdirSpy = vi.spyOn(fs, "readdir").mockImplementation(async () => {
+      return readdirCalls++ === 0 ? fakeDirEntries : ([] as unknown as typeof fakeDirEntries);
+    });
+    const realpathSpy = vi
+      .spyOn(fs, "realpath")
+      .mockImplementation(async (p: unknown) => String(p));
+
+    try {
+      const findings = await collectWorkspaceSkillSymlinkEscapeFindings({
+        cfg: { agents: { defaults: { workspace: workspaceDir } } } satisfies OpenClawConfig,
+      });
+      const truncFinding = findings.find((f) => f.checkId === "skills.workspace.scan_truncated");
+      expect(truncFinding).toBeDefined();
+      expect(truncFinding?.severity).toBe("warn");
+      expect(truncFinding?.detail).toContain(workspaceDir);
+    } finally {
+      readdirSpy.mockRestore();
+      realpathSpy.mockRestore();
+    }
+  });
 });
@@ -216,6 +216,42 @@ describe("external-content security", () => {

    expect(result).toContain(`keep${zws}me [[MARKER_SANITIZED]] safe`);
  });

+  it("sanitizes fullwidth uppercase homoglyph markers (foldMarkerChar lines 152-153)", () => {
+    // Fullwidth uppercase letters: U+FF21-U+FF3A
+    // Only convert letters (A-Z), leave underscores as-is so the regex still matches
+    const fwLetters = (s: string) =>
+      s
+        .split("")
+        .map((c) => (/[A-Z]/.test(c) ? String.fromCharCode(c.charCodeAt(0) + 0xfee0) : c))
+        .join("");
+    const startMarker = `<<<${fwLetters("EXTERNAL_UNTRUSTED_CONTENT")}>>>`;
+    const result = wrapExternalContent(`Before ${startMarker} after`, { source: "email" });
+    expect(result).toContain("[[MARKER_SANITIZED]]");
+  });
+
+  it("sanitizes fullwidth lowercase homoglyph markers (foldMarkerChar lines 154-155)", () => {
+    // Fullwidth lowercase letters: U+FF41-U+FF5A
+    const fwLetters = (s: string) =>
+      s
+        .split("")
+        .map((c) => (/[a-z]/.test(c) ? String.fromCharCode(c.charCodeAt(0) + 0xfee0) : c))
+        .join("");
+    const startMarker = `<<<${fwLetters("external_untrusted_content")}>>>`;
+    const result = wrapExternalContent(`Before ${startMarker} after`, { source: "email" });
+    expect(result).toContain("[[MARKER_SANITIZED]]");
+  });
+
+  it("returns content unchanged when phrase is present but no marker delimiters found (line 240)", () => {
+    // The early check /external[\s_]+untrusted[\s_]+content/ passes,
+    // but no <<< ... >>> delimiters exist — replacements is empty — returns content unchanged
+    const content = "This is external untrusted content without any angle bracket markers.";
+    const result = wrapExternalContent(content, { source: "email" });
+    // The raw content (after the --- separator) should be unchanged
+    expect(result).toContain(content);
+    // And critically: no [[MARKER_SANITIZED]] since no markers were found
+    expect(result).not.toContain("[[MARKER_SANITIZED]]");
+  });
 });

 describe("wrapWebContent", () => {
src/security/scan-paths.test.ts (new file, 112 lines)
@@ -0,0 +1,112 @@
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import {
  extensionUsesSkippedScannerPath,
  isPathInside,
  isPathInsideWithRealpath,
} from "./scan-paths.js";

// ---------------------------------------------------------------------------
// isPathInside
// ---------------------------------------------------------------------------

describe("isPathInside", () => {
  it("returns true for same directory", () => {
    const base = "/home/user/project";
    expect(isPathInside(base, base)).toBe(true);
  });

  it("returns true for a direct child", () => {
    expect(isPathInside("/home/user/project", "/home/user/project/src/file.ts")).toBe(true);
  });

  it("returns false when candidate escapes base with ..", () => {
    expect(isPathInside("/home/user/project", "/home/user/other")).toBe(false);
  });

  it("returns false for absolute path outside base", () => {
    expect(isPathInside("/home/user/project", "/etc/passwd")).toBe(false);
  });

  it("returns false for a sibling directory", () => {
    expect(isPathInside("/home/user/a", "/home/user/b")).toBe(false);
  });
});

// ---------------------------------------------------------------------------
// isPathInsideWithRealpath
// ---------------------------------------------------------------------------

describe("isPathInsideWithRealpath", () => {
  const tmpDir = os.tmpdir();

  it("returns true when both paths exist and candidate is inside base", () => {
    // os.tmpdir() and itself both exist on disk
    const result = isPathInsideWithRealpath(tmpDir, tmpDir);
    expect(result).toBe(true);
  });

  it("returns false (line 25) when candidate is outside base without realpath check needed", () => {
    // /etc is outside os.tmpdir() — isPathInside returns false immediately
    const result = isPathInsideWithRealpath(tmpDir, "/etc");
    expect(result).toBe(false); // covers line 25: return false
  });

  it("returns false (safe default) when realpath fails for non-existent candidate", () => {
    // Non-existent path causes safeRealpathSync to return null (covers line 15)
    // New safe default (requireRealpath not set): returns false — secure by default
    const nonExistent = path.join(tmpDir, "__does_not_exist_clawin_test__");
    const result = isPathInsideWithRealpath(tmpDir, nonExistent);
    expect(result).toBe(false);
  });

  it("returns false when requireRealpath is true and realpath fails", () => {
    const nonExistent = path.join(tmpDir, "__does_not_exist_clawin_test__");
    const result = isPathInsideWithRealpath(tmpDir, nonExistent, { requireRealpath: true });
    expect(result).toBe(false);
  });

  it("returns true (explicit opt-out) when requireRealpath is false and realpath fails", () => {
    const nonExistent = path.join(tmpDir, "__does_not_exist_clawin_test__");
    const result = isPathInsideWithRealpath(tmpDir, nonExistent, { requireRealpath: false });
    expect(result).toBe(true);
  });

  it("returns false (safe default) when realpath fails for base path", () => {
    const nonExistentBase = path.join(tmpDir, "__nonexistent_base__");
    const child = path.join(nonExistentBase, "child.ts");
    const result = isPathInsideWithRealpath(nonExistentBase, child);
    expect(result).toBe(false);
  });
});

// ---------------------------------------------------------------------------
// extensionUsesSkippedScannerPath
// ---------------------------------------------------------------------------

describe("extensionUsesSkippedScannerPath", () => {
  it("returns true for node_modules segment", () => {
    expect(extensionUsesSkippedScannerPath("src/node_modules/pkg/index.js")).toBe(true);
  });

  it("returns true for hidden directory (.hidden)", () => {
    expect(extensionUsesSkippedScannerPath("src/.hidden/file.ts")).toBe(true);
  });

  it("returns false for normal paths", () => {
    expect(extensionUsesSkippedScannerPath("src/utils/helpers.ts")).toBe(false);
  });

  it("returns false for a single . segment (current dir)", () => {
    expect(extensionUsesSkippedScannerPath("./src/file.ts")).toBe(false);
  });

  it("returns false for a .. segment (parent dir)", () => {
    expect(extensionUsesSkippedScannerPath("../src/file.ts")).toBe(false);
  });

  it("returns true for Windows-style paths with node_modules", () => {
    expect(extensionUsesSkippedScannerPath("src\\node_modules\\pkg\\index.js")).toBe(true);
  });
});
@@ -27,7 +27,10 @@ export function isPathInsideWithRealpath(
   const baseReal = safeRealpathSync(basePath);
   const candidateReal = safeRealpathSync(candidatePath);
   if (!baseReal || !candidateReal) {
-    return opts?.requireRealpath !== true;
+    // Default to false (safe): only bypass the realpath check when the caller
+    // explicitly opts out with requireRealpath: false. All production callers
+    // already pass requireRealpath: true; this change makes the default secure.
+    return opts?.requireRealpath === false;
   }
   return isPathInside(baseReal, candidateReal);
 }
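The semantics of the hardened default can be tried in isolation. This is an illustrative re-implementation under stated assumptions, not the module's code; `safeRealpathSync` here is a hypothetical stand-in that returns `null` on any realpath failure.

```typescript
import fs from "node:fs";
import path from "node:path";

// Hypothetical stand-in for the module's safeRealpathSync: null on failure.
function safeRealpathSync(p: string): string | null {
  try {
    return fs.realpathSync(p);
  } catch {
    return null;
  }
}

function isPathInside(base: string, candidate: string): boolean {
  const rel = path.relative(base, candidate);
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
}

// Sketch of the hardened default: when realpath resolution fails, return
// false unless the caller explicitly opted out with requireRealpath: false.
function isPathInsideWithRealpath(
  base: string,
  candidate: string,
  opts?: { requireRealpath?: boolean },
): boolean {
  const baseReal = safeRealpathSync(base);
  const candidateReal = safeRealpathSync(candidate);
  if (!baseReal || !candidateReal) {
    return opts?.requireRealpath === false;
  }
  return isPathInside(baseReal, candidateReal);
}

// Non-existent candidate: realpath fails → false by default,
// true only on explicit opt-out.
const ghost = path.join("/nonexistent-base-dir", "child.ts");
console.log(isPathInsideWithRealpath("/nonexistent-base-dir", ghost)); // false
console.log(
  isPathInsideWithRealpath("/nonexistent-base-dir", ghost, { requireRealpath: false }),
); // true
```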
@@ -261,6 +261,38 @@ describe("scanDirectory", () => {
      expectedRuleId: "dynamic-code-execution",
      expectedPresent: true,
    },
    {
      name: "skips non-scannable includeFiles entries like .png (line 406)",
      files: {
        "logo.png": "binary-content",
        "clean.js": `export const x = 1;`,
      },
      includeFiles: ["logo.png"],
      expectedRuleId: "dynamic-code-execution",
      expectedPresent: false,
    },
    {
      name: "skips missing files in includeFiles (lines 468-471 — ENOENT in resolveForcedFiles)",
      files: {
        "clean.js": `export const x = 1;`,
      },
      // "nonexistent.js" doesn't exist — stat throws ENOENT → continue at line 418
      includeFiles: ["nonexistent.js"],
      expectedRuleId: "dynamic-code-execution",
      expectedPresent: false,
    },
    {
      name: "deduplicates file present in both includeFiles and walked directory (line 451)",
      files: {
        // regular.js is in the root and will be found by both walkDirWithLimit and includeFiles
        "regular.js": `const x = eval("hack");`,
      },
      // Including the same file ensures it appears in forcedFiles AND walkedFiles
      includeFiles: ["regular.js"],
      expectedRuleId: "dynamic-code-execution",
      expectedPresent: true,
      expectedMinFindings: 1,
    },
  ] as const)(
    "$name",
    async ({ files, includeFiles, expectedRuleId, expectedPresent, expectedMinFindings }) => {
@@ -375,6 +407,29 @@ describe("scanDirectoryWithSummary", () => {
    }
  });

  it("includes warn-severity findings in summary counts (lines 568-569)", async () => {
    const root = makeTmpDir();
    writeFixtureFiles(root, {
      // obfuscated-code rule produces 'warn' severity
      "a.js": `const payload = "\\x72\\x65\\x71\\x75\\x69\\x72\\x65";`,
    });
    const summary = await scanDirectoryWithSummary(root);
    expect(summary.warn).toBeGreaterThanOrEqual(1);
  });

  it("skips non-scanned files in scanDirectory (line 535)", async () => {
    const root = makeTmpDir();
    writeFixtureFiles(root, {
      // File bigger than maxFileBytes — scanned=false → hits the continue at line 535
      "large.js": `eval("${"A".repeat(4096)}");`,
      // Small clean file to confirm the scan ran
      "clean.js": `export const ok = true;`,
    });
    const findings = await scanDirectory(root, { maxFileBytes: 64 });
    // large.js is skipped (too big), clean.js has no findings
    expect(findings).toHaveLength(0);
  });

  it("throws when reading a scannable file fails", async () => {
    const root = makeTmpDir();
    const filePath = path.join(root, "bad.js");
@@ -398,6 +453,273 @@ describe("scanDirectoryWithSummary", () => {
    }
  });

  it("returns scanned=false when readFile throws ENOENT after stat (line 506 — TOCTOU)", async () => {
    const root = makeTmpDir();
    const filePath = path.join(root, "vanishing.js");
    fsSync.writeFileSync(filePath, `const x = eval("1+1");`);

    const realReadFile = fs.readFile;
    const spy = vi.spyOn(fs, "readFile").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === filePath) {
        const err = new Error("ENOENT: no such file or directory") as NodeJS.ErrnoException;
        err.code = "ENOENT";
        throw err;
      }
      return await realReadFile(...args);
    });

    try {
      // File vanishes between stat and readFile — should return empty findings, not throw
      const findings = await scanDirectory(root);
      expect(findings).toHaveLength(0);
    } finally {
      spy.mockRestore();
    }
  });

  it("returns scanned=false when stat returns non-file for a walked path (line 474)", async () => {
    const root = makeTmpDir();
    const filePath = path.join(root, "became-a-dir.js");
    fsSync.writeFileSync(filePath, `const x = eval("1+1");`);

    const realStat = fs.stat;
    const spy = vi.spyOn(fs, "stat").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === filePath) {
        // Return a stat that looks like a directory after the walk collected it
        const realSt = await realStat(pathArg);
        const fake = Object.assign(Object.create(Object.getPrototypeOf(realSt)), realSt);
        fake.isFile = () => false;
        fake.isDirectory = () => true;
        return fake;
      }
      return await realStat(...args);
    });

    try {
      const findings = await scanDirectory(root);
      expect(findings).toHaveLength(0);
    } finally {
      spy.mockRestore();
    }
  });

  it("invalidates file scan cache when maxFileBytes changes between scans (line 94)", async () => {
    // First scan with maxFileBytes=1024: populates cache with entry
    // Second scan with maxFileBytes=64: size/mtime same but maxFileBytes differs →
    // getCachedFileScanResult returns undefined (deletes stale entry)
    const root = makeTmpDir();
    writeFixtureFiles(root, { "a.js": `export const x = 1;` });
    await scanDirectory(root, { maxFileBytes: 1024 });
    // Change maxFileBytes — cache entry has different maxFileBytes → lines 93-94 hit
    const findings = await scanDirectory(root, { maxFileBytes: 64 });
    expect(findings).toHaveLength(0);
  });
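The invalidation rule this test targets can be sketched as a cache entry that is only valid when size, mtime, and the scan's `maxFileBytes` all match. The shapes below are hypothetical; the diff does not show the module's actual cache structure.

```typescript
// Hypothetical cache entry shape: a hit requires every keyed field to match.
interface FileScanCacheEntry {
  size: number;
  mtimeMs: number;
  maxFileBytes: number;
  findings: string[];
}

const fileScanCache = new Map<string, FileScanCacheEntry>();

// Return cached findings only when nothing relevant changed; otherwise evict.
function getCachedFileScanResult(
  filePath: string,
  size: number,
  mtimeMs: number,
  maxFileBytes: number,
): string[] | undefined {
  const entry = fileScanCache.get(filePath);
  if (!entry) return undefined;
  if (entry.size !== size || entry.mtimeMs !== mtimeMs || entry.maxFileBytes !== maxFileBytes) {
    fileScanCache.delete(filePath); // stale: scan options or file changed
    return undefined;
  }
  return entry.findings;
}

fileScanCache.set("a.js", { size: 20, mtimeMs: 1, maxFileBytes: 1024, findings: [] });
console.log(getCachedFileScanResult("a.js", 20, 1, 64)); // undefined — maxFileBytes changed
console.log(fileScanCache.size); // 0 — stale entry evicted
```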

  it("returns empty entries when dir stat throws ENOENT during walk (line 361)", async () => {
    // readDirEntriesWithCache: stat(dirPath) throws ENOENT → return [] (line 361)
    const root = makeTmpDir();
    writeFixtureFiles(root, { "a.js": `const x = eval("hack");` });

    const realStat = fs.stat;
    const spy = vi.spyOn(fs, "stat").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === root) {
        const err = new Error("ENOENT: no such file or directory") as NodeJS.ErrnoException;
        err.code = "ENOENT";
        throw err;
      }
      return await realStat(...args);
    });

    try {
      const findings = await scanDirectory(root);
      expect(findings).toHaveLength(0);
    } finally {
      spy.mockRestore();
    }
  });

  it("hits dir entry cache on second scan of same directory (line 371)", async () => {
    const root = makeTmpDir();
    writeFixtureFiles(root, { "a.js": `export const x = 1;` });
    // First scan populates the cache
    await scanDirectory(root);
    // Second scan within the same test (before afterEach clears cache) hits line 371
    const findings2 = await scanDirectory(root);
    expect(findings2).toHaveLength(0);
  });

  it("breaks collectScannableFiles merge loop when maxFiles reached (line 447)", async () => {
    // Key: forced file is hidden (not walked), two regular files are walked.
    // forcedFiles=[.hidden/h.js](1) + walkDir returns [a.js, b.js](2, maxFiles=2)
    // Merge loop:
    //   iter 1 (a.js): out.length(1) >= 2? false → add → out.length=2
    //   iter 2 (b.js): out.length(2) >= 2? true → BREAK (line 447)
    const root = makeTmpDir();
    writeFixtureFiles(root, {
      ".hidden/h.js": `export const h = 1;`, // forced (hidden, not walked)
      "a.js": `export const x = 1;`, // walked
      "b.js": `export const y = 2;`, // walked, triggers break
    });
    const findings = await scanDirectory(root, {
      includeFiles: [".hidden/h.js"],
      maxFiles: 2,
    });
    expect(findings).toHaveLength(0);
  });

  it("returns empty findings when stat throws ENOENT during file scan (line 469)", async () => {
    // scanFileWithCache: stat throws ENOENT (file deleted between walk and scan)
    const root = makeTmpDir();
    const filePath = path.join(root, "ghost.js");
    fsSync.writeFileSync(filePath, `const x = eval("1+1");`);

    const realStat = fs.stat;
    const spy = vi.spyOn(fs, "stat").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === filePath) {
        const err = new Error("ENOENT: no such file or directory") as NodeJS.ErrnoException;
        err.code = "ENOENT";
        throw err;
      }
      return await realStat(...args);
    });

    try {
      const findings = await scanDirectory(root);
      expect(findings).toHaveLength(0);
    } finally {
      spy.mockRestore();
    }
  });

  it("skips includeFiles entries that escape the root directory (line 404)", async () => {
    const root = makeTmpDir();
    writeFixtureFiles(root, { "clean.js": `export const x = 1;` });
    // "../../etc/passwd" resolves outside root — isPathInside returns false → continue
    const findings = await scanDirectory(root, { includeFiles: ["../../etc/passwd"] });
    expect(findings).toHaveLength(0);
  });

  it("returns empty entries when stat returns non-directory for a walked path (line 366)", async () => {
    // readDirEntriesWithCache: stat succeeds but returns a non-directory
    const root = makeTmpDir();
    writeFixtureFiles(root, { "a.js": `const x = eval("1+1");` });

    const realStat = fs.stat;
    const spy = vi.spyOn(fs, "stat").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === root) {
        // Make the root directory look like a file to readDirEntriesWithCache
        const realSt = await realStat(pathArg);
        const fake = Object.assign(Object.create(Object.getPrototypeOf(realSt)), realSt);
        fake.isDirectory = () => false;
        fake.isFile = () => true;
        return fake;
      }
      return await realStat(...args);
    });

    try {
      const findings = await scanDirectory(root);
      // Scan returns nothing because readDirEntriesWithCache returns [] for non-dir
      expect(findings).toHaveLength(0);
    } finally {
      spy.mockRestore();
    }
  });

  it("re-throws when stat throws non-ENOENT for a directory entry (line 363)", async () => {
    // readDirEntriesWithCache: stat throws EACCES (not ENOENT) for the root dir
    const root = makeTmpDir();
    writeFixtureFiles(root, { "a.js": `export const x = 1;` });

    const realStat = fs.stat;
    const spy = vi.spyOn(fs, "stat").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === root) {
        const err = new Error("EACCES: permission denied") as NodeJS.ErrnoException;
        err.code = "EACCES";
        throw err;
      }
      return await realStat(...args);
    });

    try {
      await expect(scanDirectory(root)).rejects.toMatchObject({ code: "EACCES" });
    } finally {
      spy.mockRestore();
    }
  });

  it("skips duplicate entries in includeFiles (line 410 — resolveForcedFiles dedup)", async () => {
    const root = makeTmpDir();
    writeFixtureFiles(root, { "a.js": `const x = eval("hack");` });
    // Pass the same path twice — second occurrence hits seen.has(includePath) → continue
    const findings = await scanDirectory(root, { includeFiles: ["a.js", "a.js"] });
    expect(findings.length).toBeGreaterThanOrEqual(1);
  });

  it("skips forced file when stat returns a directory (line 423)", async () => {
    const root = makeTmpDir();
    // Create a DIRECTORY named like a .js file via renaming a dir
    const dirPath = path.join(root, "notafile.js");
    fsSync.mkdirSync(dirPath);
    // includeFiles points at a directory path ending in .js — isScannable passes, stat.isFile() fails
    const findings = await scanDirectory(root, { includeFiles: ["notafile.js"] });
    expect(findings).toHaveLength(0);
  });

  it("re-throws when stat throws a non-ENOENT error for a forced file (line 420)", async () => {
    const root = makeTmpDir();
    const filePath = path.join(root, "forbidden.js");
    fsSync.writeFileSync(filePath, `export const x = 1;`);

    const realStat = fs.stat;
    const spy = vi.spyOn(fs, "stat").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === filePath) {
        const err = new Error("EACCES: permission denied") as NodeJS.ErrnoException;
        err.code = "EACCES";
        throw err;
      }
      return await realStat(...args);
    });

    try {
      await expect(scanDirectory(root, { includeFiles: ["forbidden.js"] })).rejects.toMatchObject({
        code: "EACCES",
      });
    } finally {
      spy.mockRestore();
    }
  });

  it("re-throws when stat throws a non-ENOENT error during file scan (line 471)", async () => {
    const root = makeTmpDir();
    const filePath = path.join(root, "noperm.js");
    fsSync.writeFileSync(filePath, `export const x = 1;`);

    const realStat = fs.stat;
    const spy = vi.spyOn(fs, "stat").mockImplementation(async (...args) => {
      const pathArg = args[0];
      if (typeof pathArg === "string" && pathArg === filePath) {
        const err = new Error("EACCES: permission denied") as NodeJS.ErrnoException;
        err.code = "EACCES";
        throw err;
      }
      return await realStat(...args);
    });

    try {
      await expect(scanDirectory(root)).rejects.toMatchObject({ code: "EACCES" });
    } finally {
      spy.mockRestore();
    }
  });

  it("reuses cached findings for unchanged files and invalidates on file updates", async () => {
    const root = makeTmpDir();
    const filePath = path.join(root, "cached.js");
@@ -207,6 +207,21 @@ Successfully processed 1 files`;
    expect(entries).toHaveLength(1);
  });

  it("skips entries with parentheses but no colon separator (line 190)", () => {
    // parseAceEntry: entry has '(' so passes the early guard but has no ':'
    const output = `C:\\test\\file.txt BUILTIN(F)\n BUILTIN\\Administrators:(F)`;
    const entries = parseIcaclsOutput(output, "C:\\test\\file.txt");
    // BUILTIN(F) has no ':' → returns null; only the Administrators entry is kept
    expectSinglePrincipal(entries, "BUILTIN\\Administrators");
  });

  it("skips entries where all tokens are inherit flags (line 207)", () => {
    // Only inherit flags: I, OI, CI — after filtering, rights is empty → returns null
    const output = `C:\\test\\file.txt BUILTIN\\Users:(I)(OI)(CI)\n BUILTIN\\Administrators:(F)`;
    const entries = parseIcaclsOutput(output, "C:\\test\\file.txt");
    expectSinglePrincipal(entries, "BUILTIN\\Administrators");
  });

  it.each([
    { rights: "(F)", canWrite: true, canRead: true },
    { rights: "(M)", canWrite: true, canRead: true },
@@ -496,6 +511,29 @@ Successfully processed 1 files`;
    });
    expectInspectSuccess(result, 2);
  });

  it("returns null SID and continues when whoami throws (line 277)", async () => {
    // icacls returns an untrusted SID entry (triggers needsUserSidResolution)
    // whoami throws → resolveCurrentUserSid catch block returns null
    const unknownSid = "S-1-5-21-9999-8888-7777-1001";
    const mockExec = vi
      .fn()
      .mockResolvedValueOnce({
        stdout: `C:\\test\\file.txt ${unknownSid}:(F)`,
        stderr: "",
      })
      .mockRejectedValueOnce(new Error("whoami: command not found"));

    const result = await inspectWindowsAcl("C:\\test\\file.txt", {
      exec: mockExec,
      // No USERSID → triggers SID resolution attempt
    });
    // Should still succeed — whoami failure is swallowed
    expect(result.ok).toBe(true);
    // Unknown SID stays in untrustedGroup (resolveCurrentUserSid returned null)
    expect(result.untrustedGroup).toHaveLength(1);
    expect(mockExec).toHaveBeenCalledTimes(2);
  });
});

describe("formatWindowsAclSummary", () => {
@@ -629,6 +667,40 @@ Successfully processed 1 files`;
    });
    expect(result?.display).toBe(expected);
  });

  it("world SIDs in USERSID env are not added to trusted set", () => {
    // S-1-1-0 = Everyone. Even if USERSID is set to this, it must NOT be trusted.
    const env = { USERSID: "S-1-1-0" };
    const entries: WindowsAclEntry[] = [
      aclEntry({
        principal: "S-1-1-0",
        rights: ["F"],
        rawRights: "(F)",
        canRead: true,
        canWrite: true,
      }),
    ];
    const summary = summarizeWindowsAcl(entries, env);
    // Everyone must remain in untrustedWorld, not trusted
    expect(summary.untrustedWorld).toHaveLength(1);
    expect(summary.trusted).toHaveLength(0);
  });

  it("returns null when no username can be resolved (line 348)", () => {
    // Temporarily make os.userInfo().username empty so resolveWindowsUserPrincipal returns null
    userInfoMock.mockReturnValueOnce({
      username: "",
      uid: -1,
      gid: -1,
      shell: "",
      homedir: "",
    });
    const result = createIcaclsResetCommand("C:\\test\\file.txt", {
      isDir: false,
      env: { USERNAME: "", USERDOMAIN: "" },
    });
    expect(result).toBeNull();
  });
});

describe("summarizeWindowsAcl — localized SYSTEM account names", () => {
@@ -644,6 +716,13 @@ Successfully processed 1 files`;
    expectTrustedOnly([aclEntry({ principal: "AUTORIDAD NT\\SYSTEM" })]);
  });

  it("classifies principal with diacritic not in TRUSTED_BASE but matching stripped suffix (line 145)", () => {
    // "NT Authority\\Syst\u00e9me" has \u00e9 (e-acute) which is not in TRUSTED_BASE directly.
    // After diacritic stripping: "nt authority\\systeme" which ends with stripped("\\syst\u00e8me") = "\\systeme".
    // This exercises the classifyPrincipal diacritic-strip fallback at line 145.
    expectTrustedOnly([aclEntry({ principal: "NT Authority\\Syst\u00e9me" })]);
  });
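The stripping fallback this test relies on can be reproduced with Unicode NFD normalization (an illustrative sketch; the module's `classifyPrincipal` may implement it differently).

```typescript
// Decompose to NFD and drop combining marks: "Systéme" → "Systeme".
const stripDiacritics = (s: string): string =>
  s.normalize("NFD").replace(/[\u0300-\u036f]/g, "");

const principal = "NT Authority\\Syst\u00e9me"; // e-acute, not in TRUSTED_BASE directly
const folded = stripDiacritics(principal).toLowerCase();
console.log(folded); // "nt authority\systeme"
// Matches the stripped form of the French localized suffix "\Système":
console.log(folded.endsWith(stripDiacritics("\\Syst\u00e8me").toLowerCase())); // true
```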

  it("French Windows full scenario: user + Système only → no untrusted", () => {
    const entries: WindowsAclEntry[] = [
      aclEntry({ principal: "MYPC\\Pierre" }),
@@ -92,7 +92,9 @@ function buildTrustedPrincipals(env?: NodeJS.ProcessEnv): Set<string> {
     }
   }
   const userSid = normalizeSid(env?.USERSID ?? "");
-  if (userSid && SID_RE.test(userSid)) {
+  // Guard: never add world-equivalent SIDs (Everyone, Authenticated Users, BUILTIN\\Users)
+  // to the trusted set, even if USERSID is set to one of them by a malicious process.
+  if (userSid && SID_RE.test(userSid) && !WORLD_SIDS.has(userSid)) {
     trusted.add(userSid);
   }
   return trusted;
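The guard can be exercised standalone. This is a minimal sketch under stated assumptions: the real `WORLD_SIDS` set and `SID_RE` may contain more entries and stricter rules than shown here.

```typescript
// World-equivalent SIDs that must never enter the trusted set (assumed subset).
const WORLD_SIDS = new Set([
  "S-1-1-0", // Everyone
  "S-1-5-11", // Authenticated Users
  "S-1-5-32-545", // BUILTIN\Users
]);

// Simplified SID shape check (stand-in for the module's SID_RE).
const SID_RE = /^S-\d+-\d+(?:-\d+)+$/;

function maybeTrustUserSid(trusted: Set<string>, userSid: string): void {
  // Reject world SIDs even when supplied via the USERSID env var.
  if (SID_RE.test(userSid) && !WORLD_SIDS.has(userSid)) {
    trusted.add(userSid);
  }
}

const trusted = new Set<string>();
maybeTrustUserSid(trusted, "S-1-1-0"); // Everyone: rejected
maybeTrustUserSid(trusted, "S-1-5-21-1-2-3-1001"); // normal user SID: accepted
console.log([...trusted]); // ["S-1-5-21-1-2-3-1001"]
```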

@@ -273,7 +275,13 @@ async function resolveCurrentUserSid(exec: ExecFn): Promise<string | null> {
     const { stdout, stderr } = await exec("whoami", ["/user", "/fo", "csv", "/nh"]);
     const match = `${stdout}\n${stderr}`.match(/\*?S-\d+-\d+(?:-\d+)+/i);
     return match ? normalizeSid(match[0]) : null;
-  } catch {
+  } catch (err) {
+    // Log but do not propagate — SID resolution is best-effort.
+    // Callers fall back to env-based resolution when this returns null.
+    console.warn("[windows-acl] resolveCurrentUserSid failed:", String(err));
+    // TODO: replace with a structured logger call once a lightweight per-module
+    // logger is available; console.warn can be noisy on constrained Windows hosts
+    // (e.g. strict output-capture environments or CI runners with limited stdio).
     return null;
   }
 }
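The SID extraction inside `resolveCurrentUserSid` can be tried against representative `whoami /user /fo csv /nh` output. This sketch reuses the regex from the hunk; `normalizeSid` here is a hypothetical trim-and-uppercase stand-in for the module's helper.

```typescript
// whoami /user /fo csv /nh emits a line like: "DOMAIN\user","S-1-5-21-..."
const SID_MATCH_RE = /\*?S-\d+-\d+(?:-\d+)+/i;

// Hypothetical stand-in for the module's normalizeSid.
const normalizeSid = (sid: string): string => sid.replace(/^\*/, "").toUpperCase();

function extractSid(stdout: string, stderr: string): string | null {
  const match = `${stdout}\n${stderr}`.match(SID_MATCH_RE);
  return match ? normalizeSid(match[0]) : null;
}

console.log(extractSid('"PC\\alice","S-1-5-21-9999-8888-7777-1001"', ""));
// "S-1-5-21-9999-8888-7777-1001"
console.log(extractSid("no sid here", "")); // null
```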