mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 08:50:43 +00:00
test: parallelize docker aggregate
@@ -56,7 +56,7 @@ Jobs are ordered so cheap checks fail before expensive ones run:
 Scope logic lives in `scripts/ci-changed-scope.mjs` and is covered by unit tests in `src/scripts/ci-changed-scope.test.ts`.
 CI workflow edits validate the Node CI graph plus workflow linting, but do not force Windows, Android, or macOS native builds by themselves; those platform lanes stay scoped to platform source changes.
 Windows Node checks are scoped to Windows-specific process/path wrappers, npm/pnpm/UI runner helpers, package manager config, and the CI workflow surfaces that execute that lane; unrelated source, plugin, install-smoke, and test-only changes stay on the Linux Node lanes so they do not reserve a 16-vCPU Windows worker for coverage that is already exercised by the normal test shards.
-The separate `install-smoke` workflow reuses the same scope script through its own `preflight` job. It computes `run_install_smoke` from the narrower changed-smoke signal, so Docker/install smoke runs for install, packaging, container-relevant changes, bundled extension production changes, and the core plugin/channel/gateway/Plugin SDK surfaces that the Docker smoke jobs exercise. Test-only and docs-only edits do not reserve Docker workers. Its QR package smoke forces the Docker `pnpm install` layer to rerun while preserving the BuildKit pnpm store cache, so it still exercises installation without redownloading dependencies on every run. Its gateway-network e2e reuses the runtime image built earlier in the job, so it adds real container-to-container WebSocket coverage without adding another Docker build. Local `test:docker:all` prebuilds one shared `scripts/e2e/Dockerfile` built-app image and reuses it across the E2E container smoke runners; the reusable live/E2E workflow mirrors that pattern by building and pushing one SHA-tagged GHCR Docker E2E image before the Docker matrix, then running the matrix with `OPENCLAW_SKIP_DOCKER_BUILD=1`. QR and installer Docker tests keep their own install-focused Dockerfiles. A separate `docker-e2e-fast` job runs the bounded bundled-plugin Docker profile under a 120-second command timeout: setup-entry dependency repair plus synthetic bundled-loader failure isolation. The full bundled update/channel matrix remains manual/full-suite because it performs repeated real npm update and doctor repair passes.
+The separate `install-smoke` workflow reuses the same scope script through its own `preflight` job. It computes `run_install_smoke` from the narrower changed-smoke signal, so Docker/install smoke runs for install, packaging, container-relevant changes, bundled extension production changes, and the core plugin/channel/gateway/Plugin SDK surfaces that the Docker smoke jobs exercise. Test-only and docs-only edits do not reserve Docker workers. Its QR package smoke forces the Docker `pnpm install` layer to rerun while preserving the BuildKit pnpm store cache, so it still exercises installation without redownloading dependencies on every run. Its gateway-network e2e reuses the runtime image built earlier in the job, so it adds real container-to-container WebSocket coverage without adding another Docker build. Local `test:docker:all` prebuilds one shared live-test image and one shared `scripts/e2e/Dockerfile` built-app image, then runs the live/E2E smoke lanes in parallel with `OPENCLAW_SKIP_DOCKER_BUILD=1`; tune the default concurrency of 4 with `OPENCLAW_DOCKER_ALL_PARALLELISM`. Startup- or provider-sensitive lanes run exclusively after the parallel pool. The reusable live/E2E workflow mirrors the shared-image pattern by building and pushing one SHA-tagged GHCR Docker E2E image before the Docker matrix, then running the matrix with `OPENCLAW_SKIP_DOCKER_BUILD=1`. QR and installer Docker tests keep their own install-focused Dockerfiles. A separate `docker-e2e-fast` job runs the bounded bundled-plugin Docker profile under a 120-second command timeout: setup-entry dependency repair plus synthetic bundled-loader failure isolation. The full bundled update/channel matrix remains manual/full-suite because it performs repeated real npm update and doctor repair passes.
 Local changed-lane logic lives in `scripts/changed-lanes.mjs` and is executed by `scripts/check-changed.mjs`. That local gate is stricter about architecture boundaries than the broad CI platform scope: core production changes run core prod typecheck plus core tests, core test-only changes run only core test typecheck/tests, extension production changes run extension prod typecheck plus extension tests, and extension test-only changes run only extension test typecheck/tests. Public Plugin SDK or plugin-contract changes expand to extension validation because extensions depend on those core contracts. Release metadata-only version bumps run targeted version/config/root-dependency checks. Unknown root/config changes fail safe to all lanes.
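The fail-safe scoping described above can be sketched as a small path-to-lanes mapper. This is an illustrative simplification, not the real `scripts/changed-lanes.mjs`; the path prefixes and lane names here are assumptions for demonstration only:

```javascript
// Illustrative sketch of changed-lane selection: map changed file paths to
// validation lanes, and fail safe to ALL lanes for unrecognized root/config
// changes. Prefixes and lane names are hypothetical, not the project's rules.
const ALL_LANES = ["core", "extensions", "version"];

function lanesForChangedFiles(files) {
  const lanes = new Set();
  for (const file of files) {
    if (file.startsWith("src/plugin-sdk/")) {
      // Public SDK contracts: extensions depend on them, so validate both.
      lanes.add("core");
      lanes.add("extensions");
    } else if (file.startsWith("src/")) {
      lanes.add("core");
    } else if (file.startsWith("extensions/")) {
      lanes.add("extensions");
    } else if (file === "package.json") {
      lanes.add("version");
    } else {
      // Unknown root/config change: fail safe to every lane.
      return new Set(ALL_LANES);
    }
  }
  return lanes;
}

console.log([...lanesForChangedFiles(["src/plugin-sdk/api.ts"])].join(","));
```

The key property is the `else` branch: anything the mapper does not recognize widens the run rather than silently skipping checks.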
@@ -32,6 +32,7 @@ title: "Tests"
 - Gateway integration: opt-in via `OPENCLAW_TEST_INCLUDE_GATEWAY=1 pnpm test` or `pnpm test:gateway`.
 - `pnpm test:e2e`: Runs gateway end-to-end smoke tests (multi-instance WS/HTTP/node pairing). Defaults to `threads` + `isolate: false` with adaptive workers in `vitest.e2e.config.ts`; tune with `OPENCLAW_E2E_WORKERS=<n>` and set `OPENCLAW_E2E_VERBOSE=1` for verbose logs.
 - `pnpm test:live`: Runs provider live tests (minimax/zai). Requires API keys and `LIVE=1` (or provider-specific `*_LIVE_TEST=1`) to unskip.
+- `pnpm test:docker:all`: Builds the shared live-test image and Docker E2E image once, then runs the Docker smoke lanes with `OPENCLAW_SKIP_DOCKER_BUILD=1` at concurrency 4 by default. Tune with `OPENCLAW_DOCKER_ALL_PARALLELISM=<n>`. Startup- or provider-sensitive lanes run exclusively after the parallel pool. Per-lane logs are written under `.artifacts/docker-tests/<run-id>/`.
 - `pnpm test:docker:openwebui`: Starts Dockerized OpenClaw + Open WebUI, signs in through Open WebUI, checks `/api/models`, then runs a real proxied chat through `/api/chat/completions`. Requires a usable live model key (for example OpenAI in `~/.profile`), pulls an external Open WebUI image, and is not expected to be CI-stable like the normal unit/e2e suites.
 - `pnpm test:docker:mcp-channels`: Starts a seeded Gateway container and a second client container that spawns `openclaw mcp serve`, then verifies routed conversation discovery, transcript reads, attachment metadata, live event queue behavior, outbound send routing, and Claude-style channel + permission notifications over the real stdio bridge. The Claude notification assertion reads the raw stdio MCP frames directly so the smoke reflects what the bridge actually emits.
@@ -1419,7 +1419,7 @@
     "test:contracts:plugins": "node scripts/run-vitest.mjs run --config test/vitest/vitest.contracts-plugin.config.ts --maxWorkers=1",
     "test:coverage": "node scripts/run-vitest.mjs run --config test/vitest/vitest.unit.config.ts --coverage",
     "test:coverage:changed": "node scripts/run-vitest.mjs run --config test/vitest/vitest.unit.config.ts --coverage --changed origin/main",
-    "test:docker:all": "bash scripts/test-docker-all.sh",
+    "test:docker:all": "node scripts/test-docker-all.mjs",
     "test:docker:bundled-channel-deps": "bash scripts/e2e/bundled-channel-runtime-deps-docker.sh",
     "test:docker:bundled-channel-deps:fast": "OPENCLAW_BUNDLED_CHANNEL_SCENARIOS=0 OPENCLAW_BUNDLED_CHANNEL_UPDATE_SCENARIO=0 OPENCLAW_BUNDLED_CHANNEL_ROOT_OWNED_SCENARIO=0 OPENCLAW_BUNDLED_CHANNEL_SETUP_ENTRY_SCENARIO=1 OPENCLAW_BUNDLED_CHANNEL_LOAD_FAILURE_SCENARIO=1 bash scripts/e2e/bundled-channel-runtime-deps-docker.sh",
     "test:docker:cleanup": "bash scripts/test-cleanup-docker.sh",
@@ -41,8 +41,9 @@ export type McpClientHandle = {
   rawMessages: unknown[];
 };

-const GATEWAY_WS_TIMEOUT_MS = 30_000;
-const GATEWAY_CONNECT_RETRY_WINDOW_MS = 45_000;
+const GATEWAY_WS_OPEN_TIMEOUT_MS = 5_000;
+const GATEWAY_RPC_TIMEOUT_MS = 30_000;
+const GATEWAY_CONNECT_RETRY_WINDOW_MS = 120_000;

 export function assert(condition: unknown, message: string): asserts condition {
   if (!condition) {
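The renamed constants above split one generic timeout into a fast socket-open budget and a longer RPC budget. A minimal sketch of the pattern, assuming a generic promise-with-timeout helper (the `withTimeout` function is illustrative, not from the source file; only the constant names and values come from the hunk above):

```javascript
// Split-timeout pattern: a short budget for opening the socket fails fast,
// while RPC replies get a longer budget. The withTimeout helper is a sketch.
const GATEWAY_WS_OPEN_TIMEOUT_MS = 5_000;
const GATEWAY_RPC_TIMEOUT_MS = 30_000;

function withTimeout(promise, ms, message) {
  return new Promise((resolve, reject) => {
    const timeout = setTimeout(() => reject(new Error(message)), ms);
    timeout.unref?.(); // as in the source: don't keep the process alive for a timer
    promise.then(
      (value) => {
        clearTimeout(timeout);
        resolve(value);
      },
      (error) => {
        clearTimeout(timeout);
        reject(error);
      },
    );
  });
}

// Socket open gets the short budget, RPCs the long one, e.g.:
// await withTimeout(openSocket(), GATEWAY_WS_OPEN_TIMEOUT_MS, "gateway ws open timeout");
// await withTimeout(rpcCall(), GATEWAY_RPC_TIMEOUT_MS, "gateway connect timeout");
```

The `timeout.unref?.()` call mirrors the source: an unreferenced timer never blocks process exit, and the optional chaining keeps the helper safe in environments where timers lack `unref`.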
@@ -119,7 +120,7 @@ async function connectGatewayOnce(params: {
   await new Promise<void>((resolve, reject) => {
     const timeout = setTimeout(
       () => reject(new Error("gateway ws open timeout")),
-      GATEWAY_WS_TIMEOUT_MS,
+      GATEWAY_WS_OPEN_TIMEOUT_MS,
     );
     timeout.unref?.();
     ws.once("open", () => {
@@ -228,7 +229,7 @@ async function connectGatewayOnce(params: {
     const timeout = setTimeout(() => {
       pending.delete(connectId);
       reject(new Error("gateway connect timeout"));
-    }, GATEWAY_WS_TIMEOUT_MS);
+    }, GATEWAY_RPC_TIMEOUT_MS);
     timeout.unref?.();
     pending.set(connectId, {
       resolve: () => {
@@ -247,7 +248,7 @@ async function connectGatewayOnce(params: {
     const timeout = setTimeout(() => {
       pending.delete(id);
       reject(new Error("gateway sessions.subscribe timeout"));
-    }, GATEWAY_WS_TIMEOUT_MS);
+    }, GATEWAY_RPC_TIMEOUT_MS);
     timeout.unref?.();
     pending.set(id, {
       resolve: () => {
@@ -346,6 +346,7 @@ for _ in $(seq 1 360); do
   if node "$entry" gateway health \
     --url "ws://127.0.0.1:$PORT" \
     --token "$TOKEN" \
+    --timeout 30000 \
     --json >/dev/null 2>&1; then
     break
   fi
@@ -354,6 +355,7 @@ done
 node "$entry" gateway health \
   --url "ws://127.0.0.1:$PORT" \
   --token "$TOKEN" \
+  --timeout 30000 \
   --json >/dev/null

 cat >/tmp/openclaw-openai-web-search-minimal-client.mjs <<'NODE'
@@ -15,11 +15,16 @@ if [[ "${OPENCLAW_QR_SMOKE_FORCE_INSTALL:-0}" == "1" ]]; then
 fi

 echo "Building Docker image..."
-run_logged qr-import-build docker build \
-  "${DOCKER_BUILD_ARGS[@]}" \
-  -t "$IMAGE_NAME" \
-  -f "$ROOT_DIR/scripts/e2e/Dockerfile.qr-import" \
+DOCKER_BUILD_CMD=(docker build)
+if ((${#DOCKER_BUILD_ARGS[@]} > 0)); then
+  DOCKER_BUILD_CMD+=("${DOCKER_BUILD_ARGS[@]}")
+fi
+DOCKER_BUILD_CMD+=(
+  -t "$IMAGE_NAME"
+  -f "$ROOT_DIR/scripts/e2e/Dockerfile.qr-import"
   "$ROOT_DIR"
+)
+run_logged qr-import-build "${DOCKER_BUILD_CMD[@]}"

 echo "Running qrcode-terminal import smoke..."
 run_logged qr-import-run docker run --rm -t "$IMAGE_NAME" node -e "import('qrcode-terminal').then((m)=>m.default.generate('qr-smoke',{small:true}))"
@@ -4,7 +4,7 @@ run_logged() {
   local label="$1"
   shift
   local log_file
-  log_file="$(mktemp "${TMPDIR:-/tmp}/openclaw-${label}.XXXXXX.log")"
+  log_file="$(mktemp "${TMPDIR:-/tmp}/openclaw-${label}.XXXXXX").log"
   if ! "$@" >"$log_file" 2>&1; then
     cat "$log_file"
     rm -f "$log_file"
@@ -163,6 +163,18 @@ openclaw_live_join_csv() {
   done
 }

+openclaw_live_append_array() {
+  local target_array="${1:?target array required}"
+  local source_array="${2:?source array required}"
+  local count
+
+  eval "count=\${#$source_array[@]}"
+  if ((count == 0)); then
+    return 0
+  fi
+  eval "$target_array+=(\"\${$source_array[@]}\")"
+}
+
 openclaw_live_stage_auth_into_home() {
   local dest_home="${1:?destination home directory required}"
   shift
scripts/test-docker-all.mjs — 294 lines (new file)
@@ -0,0 +1,294 @@
import { spawn } from "node:child_process";
import fs from "node:fs";
import { mkdir, readFile } from "node:fs/promises";
import path from "node:path";
import { fileURLToPath } from "node:url";

const ROOT_DIR = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "..");
const DEFAULT_E2E_IMAGE = "openclaw-docker-e2e:local";
const DEFAULT_PARALLELISM = 4;
const DEFAULT_FAILURE_TAIL_LINES = 80;

const lanes = [
  ["live-models", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-models"],
  ["live-gateway", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-gateway"],
  [
    "live-cli-backend-claude",
    "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-cli-backend:claude",
  ],
  [
    "live-cli-backend-gemini",
    "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-cli-backend:gemini",
  ],
  ["openwebui", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:openwebui"],
  ["onboard", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:onboard"],
  [
    "npm-onboard-channel-agent",
    "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:npm-onboard-channel-agent",
  ],
  ["gateway-network", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:gateway-network"],
  ["mcp-channels", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:mcp-channels"],
  ["pi-bundle-mcp-tools", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:pi-bundle-mcp-tools"],
  ["cron-mcp-cleanup", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:cron-mcp-cleanup"],
  ["doctor-switch", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:doctor-switch"],
  ["plugins", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:plugins"],
  ["plugin-update", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:plugin-update"],
  ["config-reload", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:config-reload"],
  ["bundled-channel-deps", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:bundled-channel-deps"],
  ["qr", "pnpm test:docker:qr"],
];

const exclusiveLanes = [
  [
    "openai-web-search-minimal",
    "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:openai-web-search-minimal",
  ],
  ["live-codex-harness", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-codex-harness"],
  [
    "live-cli-backend-codex",
    "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-cli-backend:codex",
  ],
  ["live-acp-bind", "OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-acp-bind"],
];

function parsePositiveInt(raw, fallback, label) {
  if (!raw) {
    return fallback;
  }
  const parsed = Number(raw);
  if (!Number.isInteger(parsed) || parsed < 1) {
    throw new Error(`${label} must be a positive integer. Got: ${JSON.stringify(raw)}`);
  }
  return parsed;
}

function utcStampForPath() {
  return new Date().toISOString().replaceAll("-", "").replaceAll(":", "").replace(/\..*$/, "Z");
}

function utcStamp() {
  return new Date().toISOString().replace(/\..*$/, "Z");
}

function appendExtension(env, extension) {
  const current = env.OPENCLAW_DOCKER_BUILD_EXTENSIONS ?? env.OPENCLAW_EXTENSIONS ?? "";
  const tokens = current.split(/\s+/).filter(Boolean);
  if (!tokens.includes(extension)) {
    tokens.push(extension);
  }
  env.OPENCLAW_DOCKER_BUILD_EXTENSIONS = tokens.join(" ");
}

function commandEnv(extra = {}) {
  return {
    ...process.env,
    ...extra,
  };
}

function runShellCommand({ command, env, label, logFile }) {
  return new Promise((resolve) => {
    const child = spawn("bash", ["-lc", command], {
      cwd: ROOT_DIR,
      env,
      stdio: logFile ? ["ignore", "pipe", "pipe"] : "inherit",
    });
    activeChildren.add(child);

    let stream;
    if (logFile) {
      stream = fs.createWriteStream(logFile, { flags: "a" });
      stream.write(`==> [${label}] command: ${command}\n`);
      stream.write(`==> [${label}] started: ${utcStamp()}\n`);
      child.stdout.pipe(stream, { end: false });
      child.stderr.pipe(stream, { end: false });
    }

    child.on("close", (status, signal) => {
      activeChildren.delete(child);
      const exitCode = typeof status === "number" ? status : signal ? 128 : 1;
      if (stream) {
        stream.write(`\n==> [${label}] finished: ${utcStamp()} status=${exitCode}\n`);
        stream.end();
      }
      resolve({ status: exitCode, signal });
    });
  });
}

async function runForeground(label, command, env) {
  console.log(`==> ${label}`);
  const result = await runShellCommand({ command, env, label });
  if (result.status !== 0) {
    throw new Error(`${label} failed with status ${result.status}`);
  }
}

function laneEnv(name, baseEnv, logDir) {
  const env = {
    ...baseEnv,
  };
  if (!process.env.OPENCLAW_DOCKER_CLI_TOOLS_DIR) {
    env.OPENCLAW_DOCKER_CLI_TOOLS_DIR = path.join(logDir, `${name}-cli-tools`);
  }
  if (!process.env.OPENCLAW_DOCKER_CACHE_HOME_DIR) {
    env.OPENCLAW_DOCKER_CACHE_HOME_DIR = path.join(logDir, `${name}-cache`);
  }
  return env;
}

async function runLane(lane, baseEnv, logDir) {
  const [name, command] = lane;
  const logFile = path.join(logDir, `${name}.log`);
  const env = laneEnv(name, baseEnv, logDir);
  await mkdir(env.OPENCLAW_DOCKER_CLI_TOOLS_DIR, { recursive: true });
  await mkdir(env.OPENCLAW_DOCKER_CACHE_HOME_DIR, { recursive: true });
  await fs.promises.writeFile(
    logFile,
    [
      `==> [${name}] cli tools dir: ${env.OPENCLAW_DOCKER_CLI_TOOLS_DIR}`,
      `==> [${name}] cache dir: ${env.OPENCLAW_DOCKER_CACHE_HOME_DIR}`,
      "",
    ].join("\n"),
  );
  console.log(`==> [${name}] start`);
  const startedAt = Date.now();
  const result = await runShellCommand({ command, env, label: name, logFile });
  const elapsedSeconds = Math.round((Date.now() - startedAt) / 1000);
  if (result.status === 0) {
    console.log(`==> [${name}] pass ${elapsedSeconds}s`);
  } else {
    console.error(`==> [${name}] fail status=${result.status} ${elapsedSeconds}s log=${logFile}`);
  }
  return {
    command,
    logFile,
    name,
    status: result.status,
  };
}

async function runLanePool(poolLanes, baseEnv, logDir, parallelism) {
  const failures = [];
  let nextIndex = 0;

  async function worker() {
    while (nextIndex < poolLanes.length) {
      const lane = poolLanes[nextIndex++];
      const result = await runLane(lane, baseEnv, logDir);
      if (result.status !== 0) {
        failures.push(result);
      }
    }
  }

  const workerCount = Math.min(parallelism, poolLanes.length);
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return failures;
}

async function runLanesSerial(serialLanes, baseEnv, logDir) {
  const failures = [];
  for (const lane of serialLanes) {
    const result = await runLane(lane, baseEnv, logDir);
    if (result.status !== 0) {
      failures.push(result);
    }
  }
  return failures;
}

async function tailFile(file, lines) {
  const content = await readFile(file, "utf8").catch(() => "");
  const tail = content.split(/\r?\n/).slice(-lines).join("\n");
  return tail.trimEnd();
}

async function printFailureSummary(failures, tailLines) {
  console.error(`ERROR: ${failures.length} Docker lane(s) failed.`);
  for (const failure of failures) {
    console.error(`---- ${failure.name} failed (status=${failure.status}): ${failure.logFile}`);
    const tail = await tailFile(failure.logFile, tailLines);
    if (tail) {
      console.error(tail);
    }
  }
}

const activeChildren = new Set();
function terminateActiveChildren(signal) {
  for (const child of activeChildren) {
    child.kill(signal);
  }
}

process.on("SIGINT", () => {
  terminateActiveChildren("SIGINT");
  process.exit(130);
});
process.on("SIGTERM", () => {
  terminateActiveChildren("SIGTERM");
  process.exit(143);
});

async function main() {
  const parallelism = parsePositiveInt(
    process.env.OPENCLAW_DOCKER_ALL_PARALLELISM,
    DEFAULT_PARALLELISM,
    "OPENCLAW_DOCKER_ALL_PARALLELISM",
  );
  const tailLines = parsePositiveInt(
    process.env.OPENCLAW_DOCKER_ALL_FAILURE_TAIL_LINES,
    DEFAULT_FAILURE_TAIL_LINES,
    "OPENCLAW_DOCKER_ALL_FAILURE_TAIL_LINES",
  );
  const runId = process.env.OPENCLAW_DOCKER_ALL_RUN_ID || utcStampForPath();
  const logDir = path.resolve(
    process.env.OPENCLAW_DOCKER_ALL_LOG_DIR ||
      path.join(ROOT_DIR, ".artifacts/docker-tests", runId),
  );
  await mkdir(logDir, { recursive: true });

  const baseEnv = commandEnv({
    OPENCLAW_DOCKER_E2E_IMAGE: process.env.OPENCLAW_DOCKER_E2E_IMAGE || DEFAULT_E2E_IMAGE,
  });
  appendExtension(baseEnv, "matrix");
  appendExtension(baseEnv, "acpx");
  appendExtension(baseEnv, "codex");

  console.log(`==> Docker test logs: ${logDir}`);
  console.log(`==> Parallelism: ${parallelism}`);
  console.log(`==> Live-test bundled plugin deps: ${baseEnv.OPENCLAW_DOCKER_BUILD_EXTENSIONS}`);

  await runForeground("Build shared live-test image once", "pnpm test:docker:live-build", baseEnv);
  await runForeground(
    `Build shared Docker E2E image once: ${baseEnv.OPENCLAW_DOCKER_E2E_IMAGE}`,
    "pnpm test:docker:e2e-build",
    baseEnv,
  );

  const failures = await runLanePool(lanes, baseEnv, logDir, parallelism);
  if (failures.length > 0) {
    await printFailureSummary(failures, tailLines);
    process.exit(1);
  }

  console.log("==> Running provider-sensitive Docker lanes exclusively");
  failures.push(...(await runLanesSerial(exclusiveLanes, baseEnv, logDir)));
  if (failures.length > 0) {
    await printFailureSummary(failures, tailLines);
    process.exit(1);
  }

  await runForeground(
    "Run cleanup smoke after parallel lanes",
    "pnpm test:docker:cleanup",
    baseEnv,
  );
  console.log("==> Docker test suite passed");
}

await main().catch((error) => {
  console.error(error instanceof Error ? error.message : String(error));
  process.exit(1);
});
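The `runLanePool` helper in the new script is a bounded worker pool over a shared index. The same shape, reduced to a self-contained sketch (the doubling tasks here are placeholder stand-ins for Docker lanes):

```javascript
// Bounded worker pool in the same shape as runLanePool: each worker claims
// tasks[nextIndex++] synchronously before awaiting, so on a single event loop
// no task runs twice and at most `parallelism` tasks are in flight at once.
async function runPool(tasks, parallelism) {
  const results = [];
  let nextIndex = 0;

  async function worker() {
    while (nextIndex < tasks.length) {
      const task = tasks[nextIndex++]; // synchronous claim before the await
      results.push(await task());
    }
  }

  const workerCount = Math.min(parallelism, tasks.length);
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return results;
}

const demoTasks = [1, 2, 3, 4, 5].map((n) => async () => n * 2);
runPool(demoTasks, 2).then((results) => {
  console.log(results.length); // 5 results; completion order may interleave
});
```

Capping workers with `Math.min(parallelism, tasks.length)` avoids spawning idle workers when there are fewer tasks than the requested concurrency.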
@@ -2,27 +2,4 @@
 set -euo pipefail

 ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
-cd "$ROOT_DIR"
-
-pnpm test:docker:live-build
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-models
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:live-gateway
-
-export OPENCLAW_DOCKER_E2E_IMAGE="${OPENCLAW_DOCKER_E2E_IMAGE:-openclaw-docker-e2e:local}"
-pnpm test:docker:e2e-build
-
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:openwebui
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:onboard
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:npm-onboard-channel-agent
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:gateway-network
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:openai-web-search-minimal
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:mcp-channels
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:pi-bundle-mcp-tools
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:cron-mcp-cleanup
-pnpm test:docker:qr
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:doctor-switch
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:plugins
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:plugin-update
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:config-reload
-OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:bundled-channel-deps
-pnpm test:docker:cleanup
+exec node "$ROOT_DIR/scripts/test-docker-all.mjs" "$@"
@@ -271,7 +271,7 @@ for ACP_AGENT in "${ACP_AGENTS[@]}"; do
   echo "==> Agent: $ACP_AGENT"
   echo "==> Auth dirs: ${AUTH_DIRS_CSV:-none}"
   echo "==> Auth files: ${AUTH_FILES_CSV:-none}"
-  docker run --rm -t \
+  DOCKER_RUN_ARGS=(docker run --rm -t \
     -u "$DOCKER_USER" \
     --entrypoint bash \
     -e ANTHROPIC_API_KEY \
@@ -292,15 +292,18 @@ for ACP_AGENT in "${ACP_AGENTS[@]}"; do
     -e OPENCLAW_LIVE_TEST=1 \
     -e OPENCLAW_LIVE_ACP_BIND=1 \
     -e OPENCLAW_LIVE_ACP_BIND_AGENT="$ACP_AGENT" \
-    -e OPENCLAW_LIVE_ACP_BIND_AGENT_COMMAND="$AGENT_COMMAND" \
-    "${DOCKER_HOME_MOUNT[@]}" \
+    -e OPENCLAW_LIVE_ACP_BIND_AGENT_COMMAND="$AGENT_COMMAND")
+  openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_HOME_MOUNT
+  DOCKER_RUN_ARGS+=(\
     -v "$CACHE_HOME_DIR":/home/node/.cache \
     -v "$ROOT_DIR":/src:ro \
     -v "$CONFIG_DIR":/home/node/.openclaw \
     -v "$WORKSPACE_DIR":/home/node/.openclaw/workspace \
-    -v "$CLI_TOOLS_DIR":/home/node/.npm-global \
-    "${EXTERNAL_AUTH_MOUNTS[@]}" \
-    "${PROFILE_MOUNT[@]}" \
+    -v "$CLI_TOOLS_DIR":/home/node/.npm-global)
+  openclaw_live_append_array DOCKER_RUN_ARGS EXTERNAL_AUTH_MOUNTS
+  openclaw_live_append_array DOCKER_RUN_ARGS PROFILE_MOUNT
+  DOCKER_RUN_ARGS+=(\
     "$LIVE_IMAGE_NAME" \
-    -lc "$LIVE_TEST_CMD"
+    -lc "$LIVE_TEST_CMD")
+  "${DOCKER_RUN_ARGS[@]}"
 done
@@ -421,7 +421,7 @@ else
   )
 fi

-docker run --rm -t \
+DOCKER_RUN_ARGS=(docker run --rm -t \
   -u "$DOCKER_USER" \
   --entrypoint bash \
   -e COREPACK_ENABLE_DOWNLOAD_PROMPT=0 \
@@ -452,16 +452,19 @@ docker run --rm -t \
   -e OPENCLAW_LIVE_CLI_BACKEND_IMAGE_PROBE="${OPENCLAW_LIVE_CLI_BACKEND_IMAGE_PROBE:-}" \
   -e OPENCLAW_LIVE_CLI_BACKEND_MCP_PROBE="${OPENCLAW_LIVE_CLI_BACKEND_MCP_PROBE:-}" \
   -e OPENCLAW_LIVE_CLI_BACKEND_IMAGE_ARG="${OPENCLAW_LIVE_CLI_BACKEND_IMAGE_ARG:-}" \
-  -e OPENCLAW_LIVE_CLI_BACKEND_IMAGE_MODE="${OPENCLAW_LIVE_CLI_BACKEND_IMAGE_MODE:-}" \
-  "${DOCKER_HOME_MOUNT[@]}" \
-  "${DOCKER_EXTRA_ENV_FILES[@]}" \
+  -e OPENCLAW_LIVE_CLI_BACKEND_IMAGE_MODE="${OPENCLAW_LIVE_CLI_BACKEND_IMAGE_MODE:-}")
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_HOME_MOUNT
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_EXTRA_ENV_FILES
+DOCKER_RUN_ARGS+=(\
   -v "$CACHE_HOME_DIR":/home/node/.cache \
   -v "$ROOT_DIR":/src:ro \
   -v "$CONFIG_DIR":/home/node/.openclaw \
   -v "$WORKSPACE_DIR":/home/node/.openclaw/workspace \
-  -v "$CLI_TOOLS_DIR":/home/node/.npm-global \
-  "${EXTERNAL_AUTH_MOUNTS[@]}" \
-  "${DOCKER_AUTH_ENV[@]}" \
-  "${PROFILE_MOUNT[@]}" \
+  -v "$CLI_TOOLS_DIR":/home/node/.npm-global)
+openclaw_live_append_array DOCKER_RUN_ARGS EXTERNAL_AUTH_MOUNTS
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_AUTH_ENV
+openclaw_live_append_array DOCKER_RUN_ARGS PROFILE_MOUNT
+DOCKER_RUN_ARGS+=(\
   "$LIVE_IMAGE_NAME" \
-  -lc "$LIVE_TEST_CMD"
+  -lc "$LIVE_TEST_CMD")
+"${DOCKER_RUN_ARGS[@]}"
@@ -196,7 +196,7 @@ echo "==> Auth mode: $CODEX_HARNESS_AUTH_MODE"
 echo "==> CI-safe Codex config: ${OPENCLAW_LIVE_CODEX_HARNESS_USE_CI_SAFE_CODEX_CONFIG:-1}"
 echo "==> Harness fallback: none"
 echo "==> Auth files: ${AUTH_FILES_CSV:-none}"
-docker run --rm -t \
+DOCKER_RUN_ARGS=(docker run --rm -t \
   -u "$DOCKER_USER" \
   --entrypoint bash \
   -e COREPACK_ENABLE_DOWNLOAD_PROMPT=0 \
@@ -216,16 +216,19 @@ docker run --rm -t \
   -e OPENCLAW_LIVE_CODEX_HARNESS_REQUEST_TIMEOUT_MS="${OPENCLAW_LIVE_CODEX_HARNESS_REQUEST_TIMEOUT_MS:-}" \
   -e OPENCLAW_LIVE_CODEX_HARNESS_USE_CI_SAFE_CODEX_CONFIG="${OPENCLAW_LIVE_CODEX_HARNESS_USE_CI_SAFE_CODEX_CONFIG:-1}" \
   -e OPENCLAW_LIVE_TEST=1 \
-  -e OPENCLAW_VITEST_FS_MODULE_CACHE=0 \
-  "${DOCKER_AUTH_ENV[@]}" \
-  "${DOCKER_EXTRA_ENV_FILES[@]}" \
-  "${DOCKER_HOME_MOUNT[@]}" \
+  -e OPENCLAW_VITEST_FS_MODULE_CACHE=0)
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_AUTH_ENV
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_EXTRA_ENV_FILES
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_HOME_MOUNT
+DOCKER_RUN_ARGS+=(\
   -v "$CACHE_HOME_DIR":/home/node/.cache \
   -v "$ROOT_DIR":/src:ro \
   -v "$CONFIG_DIR":/home/node/.openclaw \
   -v "$WORKSPACE_DIR":/home/node/.openclaw/workspace \
-  -v "$CLI_TOOLS_DIR":/home/node/.npm-global \
-  "${EXTERNAL_AUTH_MOUNTS[@]}" \
-  "${PROFILE_MOUNT[@]}" \
+  -v "$CLI_TOOLS_DIR":/home/node/.npm-global)
+openclaw_live_append_array DOCKER_RUN_ARGS EXTERNAL_AUTH_MOUNTS
+openclaw_live_append_array DOCKER_RUN_ARGS PROFILE_MOUNT
+DOCKER_RUN_ARGS+=(\
   "$LIVE_IMAGE_NAME" \
-  -lc "$LIVE_TEST_CMD"
+  -lc "$LIVE_TEST_CMD")
+"${DOCKER_RUN_ARGS[@]}"
@@ -160,7 +160,7 @@ echo "==> Run gateway live model tests (profile keys)"
 echo "==> Target: src/gateway/gateway-models.profiles.live.test.ts"
 echo "==> External auth dirs: ${AUTH_DIRS_CSV:-none}"
 echo "==> External auth files: ${AUTH_FILES_CSV:-none}"
-docker run --rm -t \
+DOCKER_RUN_ARGS=(docker run --rm -t \
   -u "$DOCKER_USER" \
   --entrypoint bash \
   -e COREPACK_ENABLE_DOWNLOAD_PROMPT=0 \
@@ -177,13 +177,16 @@ docker run --rm -t \
   -e OPENCLAW_LIVE_GATEWAY_SMOKE="${OPENCLAW_LIVE_GATEWAY_SMOKE:-1}" \
   -e OPENCLAW_LIVE_GATEWAY_MAX_MODELS="${OPENCLAW_LIVE_GATEWAY_MAX_MODELS:-8}" \
   -e OPENCLAW_LIVE_GATEWAY_STEP_TIMEOUT_MS="${OPENCLAW_LIVE_GATEWAY_STEP_TIMEOUT_MS:-45000}" \
-  -e OPENCLAW_LIVE_GATEWAY_MODEL_TIMEOUT_MS="${OPENCLAW_LIVE_GATEWAY_MODEL_TIMEOUT_MS:-90000}" \
-  "${DOCKER_HOME_MOUNT[@]}" \
+  -e OPENCLAW_LIVE_GATEWAY_MODEL_TIMEOUT_MS="${OPENCLAW_LIVE_GATEWAY_MODEL_TIMEOUT_MS:-90000}")
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_HOME_MOUNT
+DOCKER_RUN_ARGS+=(\
   -v "$CACHE_HOME_DIR":/home/node/.cache \
   -v "$ROOT_DIR":/src:ro \
   -v "$CONFIG_DIR":/home/node/.openclaw \
-  -v "$WORKSPACE_DIR":/home/node/.openclaw/workspace \
-  "${EXTERNAL_AUTH_MOUNTS[@]}" \
-  "${PROFILE_MOUNT[@]}" \
+  -v "$WORKSPACE_DIR":/home/node/.openclaw/workspace)
+openclaw_live_append_array DOCKER_RUN_ARGS EXTERNAL_AUTH_MOUNTS
+openclaw_live_append_array DOCKER_RUN_ARGS PROFILE_MOUNT
+DOCKER_RUN_ARGS+=(\
   "$LIVE_IMAGE_NAME" \
-  -lc "$LIVE_TEST_CMD"
+  -lc "$LIVE_TEST_CMD")
+"${DOCKER_RUN_ARGS[@]}"
@@ -191,7 +191,7 @@ echo "==> Target: src/agents/models.profiles.live.test.ts"
 echo "==> Profile env only: ${OPENCLAW_DOCKER_PROFILE_ENV_ONLY:-0}"
 echo "==> External auth dirs: ${AUTH_DIRS_CSV:-none}"
 echo "==> External auth files: ${AUTH_FILES_CSV:-none}"
-docker run --rm -t \
+DOCKER_RUN_ARGS=(docker run --rm -t \
   -u "$DOCKER_USER" \
   --entrypoint bash \
   -e COREPACK_ENABLE_DOWNLOAD_PROMPT=0 \
@@ -210,13 +210,16 @@ docker run --rm -t \
   -e OPENCLAW_LIVE_REQUIRE_PROFILE_KEYS="${OPENCLAW_LIVE_REQUIRE_PROFILE_KEYS:-}" \
   -e OPENCLAW_LIVE_GATEWAY_MODELS="${OPENCLAW_LIVE_GATEWAY_MODELS:-}" \
   -e OPENCLAW_LIVE_GATEWAY_PROVIDERS="${OPENCLAW_LIVE_GATEWAY_PROVIDERS:-}" \
-  -e OPENCLAW_LIVE_GATEWAY_MAX_MODELS="${OPENCLAW_LIVE_GATEWAY_MAX_MODELS:-}" \
-  "${DOCKER_HOME_MOUNT[@]}" \
+  -e OPENCLAW_LIVE_GATEWAY_MAX_MODELS="${OPENCLAW_LIVE_GATEWAY_MAX_MODELS:-}")
+openclaw_live_append_array DOCKER_RUN_ARGS DOCKER_HOME_MOUNT
+DOCKER_RUN_ARGS+=(\
   -v "$CACHE_HOME_DIR":/home/node/.cache \
   -v "$ROOT_DIR":/src:ro \
   -v "$CONFIG_DIR":/home/node/.openclaw \
-  -v "$WORKSPACE_DIR":/home/node/.openclaw/workspace \
-  "${EXTERNAL_AUTH_MOUNTS[@]}" \
-  "${PROFILE_MOUNT[@]}" \
+  -v "$WORKSPACE_DIR":/home/node/.openclaw/workspace)
+openclaw_live_append_array DOCKER_RUN_ARGS EXTERNAL_AUTH_MOUNTS
+openclaw_live_append_array DOCKER_RUN_ARGS PROFILE_MOUNT
+DOCKER_RUN_ARGS+=(\
   "$LIVE_IMAGE_NAME" \
-  -lc "$LIVE_TEST_CMD"
+  -lc "$LIVE_TEST_CMD")
+"${DOCKER_RUN_ARGS[@]}"