Merge branch 'main' into dashboard-v2-views-refactor

This commit is contained in:
Val Alexander
2026-03-10 20:39:34 -05:00
committed by GitHub
227 changed files with 7668 additions and 2565 deletions

View File

@@ -12,6 +12,7 @@ Docs: https://docs.openclaw.ai
- Exec/child commands: mark child command environments with `OPENCLAW_CLI` so subprocesses can detect when they were launched from the OpenClaw CLI. (#41411) Thanks @vincentkoc.
- iOS/Home canvas: add a bundled welcome screen with a live agent overview that refreshes on connect, reconnect, and foreground return, and move the compact connection pill off the top-left canvas overlay. (#42456) Thanks @ngutman.
- iOS/Home canvas: replace floating controls with a docked toolbar, make the bundled home scaffold adapt to smaller phones, and open chat in the resolved main session instead of a synthetic `ios` session. (#42456) Thanks @ngutman.
- Discord/auto threads: add `autoArchiveDuration` channel config for auto-created threads so Discord thread archiving can stay at 1 hour, 1 day, 3 days, or 1 week instead of always using the 1-hour default. (#35065) Thanks @davidguttman.
### Breaking
@@ -25,6 +26,8 @@ Docs: https://docs.openclaw.ai
- ACP/ACPX plugin: bump the bundled `acpx` pin to `0.1.16` so plugin-local installs and strict version checks match the latest published CLI. (#41975) Thanks @dutifulbob.
- macOS/LaunchAgent install: tighten LaunchAgent directory and plist permissions during install so launchd bootstrap does not fail when the target home path or generated plist inherited group/world-writable modes.
- Gateway/Control UI: keep dashboard auth tokens in session-scoped browser storage so same-tab refreshes preserve remote token auth without restoring long-lived localStorage token persistence, while scoping tokens to the selected gateway URL and fragment-only bootstrap flow. (#40892) Thanks @velvet-shark.
- Secret files: harden CLI and channel credential file reads against path-swap races by requiring direct regular files for `*File` secret inputs and rejecting symlink-backed secret files.
- Archive extraction: harden TAR and external `tar.bz2` installs against destination symlink and pre-existing child-symlink escapes by extracting into staging first and merging into the canonical destination with safe file opens.
- Models/Kimi Coding: send `anthropic-messages` tools in native Anthropic format again so `kimi-coding` stops degrading tool calls into XML/plain-text pseudo invocations instead of real `tool_use` blocks. (#38669, #39907, #40552) Thanks @opriz.
- Context engine/tests: add bundled-registry regression coverage for cross-chunk resolution, plugin-sdk re-exports, and concurrent chunk registration. (#40460) Thanks @dsantoreis.
- Agents/embedded runner: bound compaction retry waiting and drain embedded runs during SIGUSR1 restart so session lanes recover instead of staying blocked behind compaction. (#40324) Thanks @cgdusek.
@@ -77,6 +80,12 @@ Docs: https://docs.openclaw.ai
- Models/Alibaba Cloud Model Studio: wire `MODELSTUDIO_API_KEY` through shared env auth, implicit provider discovery, and shell-env fallback so onboarding works outside the wizard too. (#40634) Thanks @pomelo-nwu.
- ACP/sessions_spawn: implicitly stream `mode="run"` ACP spawns to parent only for eligible subagent orchestrator sessions (heartbeat `target: "last"` with a usable session-local route), restoring parent progress relays without thread binding. (#42404) Thanks @davidguttman.
- Sessions/reset model recompute: clear stale runtime model, context-token, and system-prompt metadata before session resets recompute the replacement session, so resets pick up current defaults and explicit overrides instead of reusing old runtime model state. (#41173) Thanks @PonyX-lab.
- Browser/Browserbase 429 handling: surface stable no-retry rate-limit guidance without buffering discarded HTTP 429 response bodies from remote browser services. (#40491) Thanks @mvanhorn.
- Gateway/auth: allow one trusted device-token retry on shared-token mismatch with recovery hints to prevent reconnect churn during token drift. (#42507) Thanks @joshavant.
- Channels/allowlists: remove stale matcher caching so same-array allowlist edits and wildcard replacements take effect immediately, with regression coverage for in-place mutation cases.
- Gateway/auth: fail closed when local `gateway.auth.*` SecretRefs are configured but unavailable, instead of silently falling back to `gateway.remote.*` credentials in local mode. Thanks @tdjackey.
- Sandbox/fs bridge: pin staged writes to verified parent directories so temporary write files cannot materialize outside the allowed mount before atomic replace. Thanks @tdjackey.
- Commands/config writes: enforce `configWrites` against both the originating account and the targeted account scope for `/config` and config-backed `/allowlist` edits, blocking sibling-account mutations while preserving gateway `operator.admin` flows. Thanks @tdjackey for reporting.
## 2026.3.8
@@ -132,6 +141,7 @@ Docs: https://docs.openclaw.ai
- Docs/Changelog: correct the contributor credit for the bundled Control UI global-install fix to @LarytheLord. (#40420) Thanks @velvet-shark.
- Telegram/media downloads: time out only stalled body reads so polling recovers from hung file downloads without aborting slow downloads that are still streaming data. (#40098) Thanks @tysoncung.
- Docker/runtime image: prune dev dependencies, strip build-only dist metadata for smaller Docker images. (#40307) Thanks @vincentkoc.
- Subagents/sandboxing: restrict leaf subagents to their own spawned runs and remove leaf `subagents` control access so sandboxed leaf workers can no longer steer sibling sessions. Thanks @tdjackey.
- Gateway/restart timeout recovery: exit non-zero when restart-triggered shutdown drains time out so launchd/systemd restart the gateway instead of treating the failed restart as a clean stop. Landed from contributor PR #40380. Thanks @dsantoreis.
- Gateway/config restart guard: validate config before service start/restart and keep post-SIGUSR1 startup failures from crashing the gateway process, reducing invalid-config restart loops and macOS permission loss. Landed from contributor PR #38699. Thanks @lml2468.
- Gateway/launchd respawn detection: treat `XPC_SERVICE_NAME` as a launchd supervision hint so macOS restarts exit cleanly under launchd instead of attempting detached self-respawn. Landed from contributor PR #20555. Thanks @dimat.
@@ -144,6 +154,8 @@ Docs: https://docs.openclaw.ai
- Skills/download installs: pin the validated per-skill tools root before writing downloaded archives, so rebinding the lexical tools path cannot redirect download writes outside the intended tools directory. Thanks @tdjackey.
- Control UI/Debug: replace the Manual RPC free-text method field with a sorted dropdown sourced from gateway-advertised methods, and stack the form vertically for narrower layouts. (#14967) Thanks @rixau.
- Auth/profile resolution: log debug details when auto-discovered auth profiles fail during provider API-key resolution, so `--debug` output surfaces the real refresh/keychain/credential-store failure instead of only the generic missing-key message. (#41271) Thanks @he-yufeng.
- ACP/cancel scoping: scope `chat.abort` and shared-session ACP event routing by `runId` so one session cannot cancel or consume another session's run when they share the same gateway session key. (#41331) Thanks @pejmanjohn.
- SecretRef/models: harden custom/provider secret persistence and reuse across models.json snapshots, merge behavior, runtime headers, and secret audits. (#42554) Thanks @joshavant.
## 2026.3.7
@@ -210,6 +222,7 @@ Docs: https://docs.openclaw.ai
- Onboarding/API key input hardening: strip non-Latin1 Unicode artifacts from normalized secret input (while preserving Latin-1 content and internal spaces) so malformed copied API keys cannot trigger HTTP header `ByteString` construction crashes; adds regression coverage for shared normalization and MiniMax auth header usage. (#24496) Thanks @fa6maalassaf.
- Kimi Coding/Anthropic tools compatibility: normalize `anthropic-messages` tool payloads to OpenAI-style `tools[].function` + compatible `tool_choice` when targeting Kimi Coding endpoints, restoring tool-call workflows that regressed after v2026.3.2. (#37038) Thanks @mochimochimochi-hub.
- Heartbeat/workspace-path guardrails: append explicit workspace `HEARTBEAT.md` path guidance (and `docs/heartbeat.md` avoidance) to heartbeat prompts so heartbeat runs target workspace checklists reliably across packaged install layouts. (#37037) Thanks @stofancy.
- Node/system.run approvals: bind approval prompts to the exact executed argv text and show shell payload only as a secondary preview, closing basename-spoofed wrapper approval mismatches. Thanks @tdjackey.
- Subagents/kill-complete announce race: when a late `subagent-complete` lifecycle event arrives after an earlier kill marker, clear stale kill suppression/cleanup flags and re-run announce cleanup so finished runs no longer get silently swallowed. (#37024) Thanks @cmfinlan.
- Agents/tool-result cleanup timeout hardening: on embedded runner teardown idle timeouts, clear pending tool-call state without persisting synthetic `missing tool result` entries, preventing timeout cleanups from poisoning follow-up turns; adds regression coverage for timeout clear-vs-flush behavior. (#37081) Thanks @Coyote-Den.
- Agents/openai-completions stream timeout hardening: ensure runtime undici global dispatchers use extended streaming body/header timeouts (including env-proxy dispatcher mode) before embedded runs, reducing forced mid-stream `terminated` failures on long generations; adds regression coverage for dispatcher selection and idempotent reconfiguration. (#9708) Thanks @scottchguard.

View File

@@ -131,6 +131,41 @@ private let defaultOperatorConnectScopes: [String] = [
"operator.pairing",
]
private enum GatewayConnectErrorCodes {
static let authTokenMismatch = "AUTH_TOKEN_MISMATCH"
static let authDeviceTokenMismatch = "AUTH_DEVICE_TOKEN_MISMATCH"
static let authTokenMissing = "AUTH_TOKEN_MISSING"
static let authPasswordMissing = "AUTH_PASSWORD_MISSING"
static let authPasswordMismatch = "AUTH_PASSWORD_MISMATCH"
static let authRateLimited = "AUTH_RATE_LIMITED"
static let pairingRequired = "PAIRING_REQUIRED"
static let controlUiDeviceIdentityRequired = "CONTROL_UI_DEVICE_IDENTITY_REQUIRED"
static let deviceIdentityRequired = "DEVICE_IDENTITY_REQUIRED"
}
private struct GatewayConnectAuthError: LocalizedError {
let message: String
let detailCode: String?
let canRetryWithDeviceToken: Bool
var errorDescription: String? { self.message }
var isNonRecoverable: Bool {
switch self.detailCode {
case GatewayConnectErrorCodes.authTokenMissing,
GatewayConnectErrorCodes.authPasswordMissing,
GatewayConnectErrorCodes.authPasswordMismatch,
GatewayConnectErrorCodes.authRateLimited,
GatewayConnectErrorCodes.pairingRequired,
GatewayConnectErrorCodes.controlUiDeviceIdentityRequired,
GatewayConnectErrorCodes.deviceIdentityRequired:
return true
default:
return false
}
}
}
public actor GatewayChannelActor {
private let logger = Logger(subsystem: "ai.openclaw", category: "gateway")
private var task: WebSocketTaskBox?
@@ -160,6 +195,9 @@ public actor GatewayChannelActor {
private var watchdogTask: Task<Void, Never>?
private var tickTask: Task<Void, Never>?
private var keepaliveTask: Task<Void, Never>?
private var pendingDeviceTokenRetry = false
private var deviceTokenRetryBudgetUsed = false
private var reconnectPausedForAuthFailure = false
private let defaultRequestTimeoutMs: Double = 15000
private let pushHandler: (@Sendable (GatewayPush) async -> Void)?
private let connectOptions: GatewayConnectOptions?
@@ -232,10 +270,19 @@ public actor GatewayChannelActor {
while self.shouldReconnect {
guard await self.sleepUnlessCancelled(nanoseconds: 30 * 1_000_000_000) else { return } // 30s cadence
guard self.shouldReconnect else { return }
if self.reconnectPausedForAuthFailure { continue }
if self.connected { continue }
do {
try await self.connect()
} catch {
if self.shouldPauseReconnectAfterAuthFailure(error) {
self.reconnectPausedForAuthFailure = true
self.logger.error(
"gateway watchdog reconnect paused for non-recoverable auth failure " +
"\(error.localizedDescription, privacy: .public)"
)
continue
}
let wrapped = self.wrap(error, context: "gateway watchdog reconnect")
self.logger.error("gateway watchdog reconnect failed \(wrapped.localizedDescription, privacy: .public)")
}
@@ -267,7 +314,12 @@ public actor GatewayChannelActor {
},
operation: { try await self.sendConnect() })
} catch {
let wrapped: Error
if let authError = error as? GatewayConnectAuthError {
wrapped = authError
} else {
wrapped = self.wrap(error, context: "connect to gateway @ \(self.url.absoluteString)")
}
self.connected = false
self.task?.cancel(with: .goingAway, reason: nil)
await self.disconnectHandler?("connect failed: \(wrapped.localizedDescription)")
@@ -281,6 +333,7 @@ public actor GatewayChannelActor {
}
self.listen()
self.connected = true
self.reconnectPausedForAuthFailure = false
self.backoffMs = 500
self.lastSeq = nil
self.startKeepalive()
@@ -371,11 +424,18 @@ public actor GatewayChannelActor {
(includeDeviceIdentity && identity != nil)
? DeviceAuthStore.loadToken(deviceId: identity!.deviceId, role: role)?.token
: nil
let shouldUseDeviceRetryToken =
includeDeviceIdentity && self.pendingDeviceTokenRetry &&
storedToken != nil && self.token != nil && self.isTrustedDeviceRetryEndpoint()
if shouldUseDeviceRetryToken {
self.pendingDeviceTokenRetry = false
}
// Keep shared credentials explicit when provided. Device token retry is attached
// only on a bounded second attempt after token mismatch.
let authToken = self.token ?? (includeDeviceIdentity ? storedToken : nil)
let authDeviceToken = shouldUseDeviceRetryToken ? storedToken : nil
let authSource: GatewayAuthSource
if authDeviceToken != nil || (self.token == nil && storedToken != nil) {
authSource = .deviceToken
} else if authToken != nil {
authSource = .sharedToken
@@ -386,9 +446,12 @@ public actor GatewayChannelActor {
}
self.lastAuthSource = authSource
self.logger.info("gateway connect auth=\(authSource.rawValue, privacy: .public)")
if let authToken {
var auth: [String: ProtoAnyCodable] = ["token": ProtoAnyCodable(authToken)]
if let authDeviceToken {
auth["deviceToken"] = ProtoAnyCodable(authDeviceToken)
}
params["auth"] = ProtoAnyCodable(auth)
} else if let password = self.password {
params["auth"] = ProtoAnyCodable(["password": ProtoAnyCodable(password)])
}
@@ -426,11 +489,24 @@ public actor GatewayChannelActor {
do {
let response = try await self.waitForConnectResponse(reqId: reqId)
try await self.handleConnectResponse(response, identity: identity, role: role)
self.pendingDeviceTokenRetry = false
self.deviceTokenRetryBudgetUsed = false
} catch {
let shouldRetryWithDeviceToken = self.shouldRetryWithStoredDeviceToken(
error: error,
explicitGatewayToken: self.token,
storedToken: storedToken,
attemptedDeviceTokenRetry: authDeviceToken != nil)
if shouldRetryWithDeviceToken {
self.pendingDeviceTokenRetry = true
self.deviceTokenRetryBudgetUsed = true
self.backoffMs = min(self.backoffMs, 250)
} else if authDeviceToken != nil,
let identity,
self.shouldClearStoredDeviceTokenAfterRetry(error)
{
// Retry failed with an explicit device-token mismatch; clear stale local token.
DeviceAuthStore.clearToken(deviceId: identity.deviceId, role: role)
}
throw error
}
@@ -443,7 +519,13 @@ public actor GatewayChannelActor {
) async throws {
if res.ok == false {
let msg = (res.error?["message"]?.value as? String) ?? "gateway connect failed"
let details = res.error?["details"]?.value as? [String: ProtoAnyCodable]
let detailCode = details?["code"]?.value as? String
let canRetryWithDeviceToken = details?["canRetryWithDeviceToken"]?.value as? Bool ?? false
throw GatewayConnectAuthError(
message: msg,
detailCode: detailCode,
canRetryWithDeviceToken: canRetryWithDeviceToken)
}
guard let payload = res.payload else {
throw NSError(
@@ -616,19 +698,91 @@ public actor GatewayChannelActor {
private func scheduleReconnect() async {
guard self.shouldReconnect else { return }
guard !self.reconnectPausedForAuthFailure else { return }
let delay = self.backoffMs / 1000
self.backoffMs = min(self.backoffMs * 2, 30000)
guard await self.sleepUnlessCancelled(nanoseconds: UInt64(delay * 1_000_000_000)) else { return }
guard self.shouldReconnect else { return }
guard !self.reconnectPausedForAuthFailure else { return }
do {
try await self.connect()
} catch {
if self.shouldPauseReconnectAfterAuthFailure(error) {
self.reconnectPausedForAuthFailure = true
self.logger.error(
"gateway reconnect paused for non-recoverable auth failure " +
"\(error.localizedDescription, privacy: .public)"
)
return
}
let wrapped = self.wrap(error, context: "gateway reconnect")
self.logger.error("gateway reconnect failed \(wrapped.localizedDescription, privacy: .public)")
await self.scheduleReconnect()
}
}
private func shouldRetryWithStoredDeviceToken(
error: Error,
explicitGatewayToken: String?,
storedToken: String?,
attemptedDeviceTokenRetry: Bool
) -> Bool {
if self.deviceTokenRetryBudgetUsed {
return false
}
if attemptedDeviceTokenRetry {
return false
}
guard explicitGatewayToken != nil, storedToken != nil else {
return false
}
guard self.isTrustedDeviceRetryEndpoint() else {
return false
}
guard let authError = error as? GatewayConnectAuthError else {
return false
}
return authError.canRetryWithDeviceToken ||
authError.detailCode == GatewayConnectErrorCodes.authTokenMismatch
}
private func shouldPauseReconnectAfterAuthFailure(_ error: Error) -> Bool {
guard let authError = error as? GatewayConnectAuthError else {
return false
}
if authError.isNonRecoverable {
return true
}
if authError.detailCode == GatewayConnectErrorCodes.authTokenMismatch &&
self.deviceTokenRetryBudgetUsed && !self.pendingDeviceTokenRetry
{
return true
}
return false
}
private func shouldClearStoredDeviceTokenAfterRetry(_ error: Error) -> Bool {
guard let authError = error as? GatewayConnectAuthError else {
return false
}
return authError.detailCode == GatewayConnectErrorCodes.authDeviceTokenMismatch
}
private func isTrustedDeviceRetryEndpoint() -> Bool {
// This client currently treats loopback as the only trusted retry target.
// Unlike the Node gateway client, it does not yet expose a pinned TLS-fingerprint
// trust path for remote retry, so remote fallback remains disabled by default.
guard let host = self.url.host?.trimmingCharacters(in: .whitespacesAndNewlines).lowercased(),
!host.isEmpty
else {
return false
}
if host == "localhost" || host == "::1" || host == "127.0.0.1" || host.hasPrefix("127.") {
return true
}
return false
}
private nonisolated func sleepUnlessCancelled(nanoseconds: UInt64) async -> Bool {
do {
try await Task.sleep(nanoseconds: nanoseconds)
@@ -756,7 +910,8 @@ public actor GatewayChannelActor {
return (id: id, data: data)
} catch {
self.logger.error(
"gateway \(kind) encode failed \(method, privacy: .public) " +
"error=\(error.localizedDescription, privacy: .public)")
throw error
}
}

View File

@@ -87,6 +87,8 @@ Token/secret files:
}
```
`tokenFile` and `secretFile` must point to regular files. Symlinks are rejected.
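A quick way to pre-check a candidate secret file from the shell before pointing config at it (an illustrative helper, not an OpenClaw command — it just mirrors the documented rules):

```bash
# Reject symlinks and non-regular files, mirroring the *File secret rules.
check_secret_file() {
  path="$1"
  if [ -L "$path" ]; then
    echo "rejected: symlink"
    return 1
  elif [ ! -f "$path" ]; then
    echo "rejected: not a regular file"
    return 1
  fi
  echo "ok"
}
```

For example, `check_secret_file ~/.openclaw/secrets/telegram-token` prints `ok` only for a direct regular file.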
Multiple accounts:
```json5

View File

@@ -115,7 +115,7 @@ Provider options:
- `channels.nextcloud-talk.enabled`: enable/disable channel startup.
- `channels.nextcloud-talk.baseUrl`: Nextcloud instance URL.
- `channels.nextcloud-talk.botSecret`: bot shared secret.
- `channels.nextcloud-talk.botSecretFile`: regular-file secret path. Symlinks are rejected.
- `channels.nextcloud-talk.apiUser`: API user for room lookups (DM detection).
- `channels.nextcloud-talk.apiPassword`: API/app password for room lookups.
- `channels.nextcloud-talk.apiPasswordFile`: API password file path.
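A minimal sketch combining these options (URL, username, and paths are illustrative):

```json5
{
  channels: {
    "nextcloud-talk": {
      enabled: true,
      baseUrl: "https://cloud.example.com",
      // Regular file only; symlinks are rejected.
      botSecretFile: "/home/user/.openclaw/secrets/nextcloud-bot-secret",
      apiUser: "openclaw-bot",
      apiPasswordFile: "/home/user/.openclaw/secrets/nextcloud-api-password",
    },
  },
}
```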

View File

@@ -892,7 +892,7 @@ Primary reference:
- `channels.telegram.enabled`: enable/disable channel startup.
- `channels.telegram.botToken`: bot token (BotFather).
- `channels.telegram.tokenFile`: read token from a regular file path. Symlinks are rejected.
- `channels.telegram.dmPolicy`: `pairing | allowlist | open | disabled` (default: pairing).
- `channels.telegram.allowFrom`: DM allowlist (numeric Telegram user IDs). `allowlist` requires at least one sender ID. `open` requires `"*"`. `openclaw doctor --fix` can resolve legacy `@username` entries to IDs and can recover allowlist entries from pairing-store files in allowlist migration flows.
- `channels.telegram.actions.poll`: enable or disable Telegram poll creation (default: enabled; still requires `sendMessage`).
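A minimal sketch combining the fields above (token path and user ID are illustrative):

```json5
{
  channels: {
    telegram: {
      enabled: true,
      // Must be a direct regular file; symlinked paths are rejected.
      tokenFile: "/home/user/.openclaw/secrets/telegram-token",
      dmPolicy: "allowlist",
      // allowlist requires at least one numeric sender ID.
      allowFrom: [123456789],
    },
  },
}
```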
@@ -953,7 +953,7 @@ Primary reference:
Telegram-specific high-signal fields:
- startup/auth: `enabled`, `botToken`, `tokenFile`, `accounts.*` (`tokenFile` must point to a regular file; symlinks are rejected)
- access control: `dmPolicy`, `allowFrom`, `groupPolicy`, `groupAllowFrom`, `groups`, `groups.*.topics.*`, top-level `bindings[]` (`type: "acp"`)
- exec approvals: `execApprovals`, `accounts.*.execApprovals`
- command/menu: `commands.native`, `commands.nativeSkills`, `customCommands`

View File

@@ -179,7 +179,7 @@ Provider options:
- `channels.zalo.enabled`: enable/disable channel startup.
- `channels.zalo.botToken`: bot token from Zalo Bot Platform.
- `channels.zalo.tokenFile`: read token from a regular file path. Symlinks are rejected.
- `channels.zalo.dmPolicy`: `pairing | allowlist | open | disabled` (default: pairing).
- `channels.zalo.allowFrom`: DM allowlist (user IDs). `open` requires `"*"`. The wizard will ask for numeric IDs.
- `channels.zalo.groupPolicy`: `open | allowlist | disabled` (default: allowlist).
@@ -193,7 +193,7 @@ Provider options:
Multi-account options:
- `channels.zalo.accounts.<id>.botToken`: per-account token.
- `channels.zalo.accounts.<id>.tokenFile`: per-account regular token file. Symlinks are rejected.
- `channels.zalo.accounts.<id>.name`: display name.
- `channels.zalo.accounts.<id>.enabled`: enable/disable account.
- `channels.zalo.accounts.<id>.dmPolicy`: per-account DM policy.
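A per-account sketch using these options (account id and token path are illustrative):

```json5
{
  channels: {
    zalo: {
      accounts: {
        main: {
          name: "Main bot",
          enabled: true,
          // Per-account regular token file; symlinks are rejected.
          tokenFile: "/home/user/.openclaw/secrets/zalo-main-token",
          dmPolicy: "pairing",
        },
      },
    },
  },
}
```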

View File

@@ -92,3 +92,40 @@ Pass `--token` or `--password` explicitly. Missing explicit credentials is an er
- These commands require `operator.pairing` (or `operator.admin`) scope.
- `devices clear` is intentionally gated by `--yes`.
- If pairing scope is unavailable on local loopback (and no explicit `--url` is passed), list/approve can use a local pairing fallback.
## Token drift recovery checklist
Use this when Control UI or other clients keep failing with `AUTH_TOKEN_MISMATCH` or `AUTH_DEVICE_TOKEN_MISMATCH`.
1. Confirm current gateway token source:
```bash
openclaw config get gateway.auth.token
```
2. List paired devices and identify the affected device id:
```bash
openclaw devices list
```
3. Rotate operator token for the affected device:
```bash
openclaw devices rotate --device <deviceId> --role operator
```
4. If rotation is not enough, remove stale pairing and approve again:
```bash
openclaw devices remove <deviceId>
openclaw devices list
openclaw devices approve <requestId>
```
5. Retry client connection with the current shared token/password.
Related:
- [Dashboard auth troubleshooting](/web/dashboard#if-you-see-unauthorized-1008)
- [Gateway troubleshooting](/gateway/troubleshooting#dashboard-control-ui-connectivity)

View File

@@ -203,7 +203,7 @@ WhatsApp runs through the gateway's web channel (Baileys Web). It starts automat
}
```
- Bot token: `channels.telegram.botToken` or `channels.telegram.tokenFile` (regular file only; symlinks rejected), with `TELEGRAM_BOT_TOKEN` as fallback for the default account.
- Optional `channels.telegram.defaultAccount` overrides default account selection when it matches a configured account id.
- In multi-account setups (2+ account ids), set an explicit default (`channels.telegram.defaultAccount` or `channels.telegram.accounts.default`) to avoid fallback routing; `openclaw doctor` warns when this is missing or invalid.
- `configWrites: false` blocks Telegram-initiated config writes (supergroup ID migrations, `/config set|unset`).
@@ -748,6 +748,7 @@ Include your own number in `allowFrom` to enable self-chat mode (ignores native
- `bash: true` enables `! <cmd>` for host shell. Requires `tools.elevated.enabled` and sender in `tools.elevated.allowFrom.<channel>`.
- `config: true` enables `/config` (reads/writes `openclaw.json`). For gateway `chat.send` clients, persistent `/config set|unset` writes also require `operator.admin`; read-only `/config show` stays available to normal write-scoped operator clients.
- `channels.<provider>.configWrites` gates config mutations per channel (default: true).
- For multi-account channels, `channels.<provider>.accounts.<id>.configWrites` also gates writes that target that account (for example `/allowlist --config --account <id>` or `/config set channels.<provider>.accounts.<id>...`).
- `allowFrom` is per-provider. When set, it is the **only** authorization source (channel allowlists/pairing and `useAccessGroups` are ignored).
- `useAccessGroups: false` allows commands to bypass access-group policies when `allowFrom` is not set.
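For example, a provider can keep config writes enabled globally while locking down a single account (account id illustrative):

```json5
{
  channels: {
    telegram: {
      configWrites: true,
      accounts: {
        support: {
          // Blocks /config and config-backed /allowlist edits targeting this account.
          configWrites: false,
        },
      },
    },
  },
}
```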

View File

@@ -206,6 +206,12 @@ The Gateway treats these as **claims** and enforces server-side allowlists.
persisted by the client for future connects.
- Device tokens can be rotated/revoked via `device.token.rotate` and
`device.token.revoke` (requires `operator.pairing` scope).
- Auth failures include `error.details.code` plus recovery hints:
- `error.details.canRetryWithDeviceToken` (boolean)
- `error.details.recommendedNextStep` (`retry_with_device_token`, `update_auth_configuration`, `update_auth_credentials`, `wait_then_retry`, `review_auth_configuration`)
- Client behavior for `AUTH_TOKEN_MISMATCH`:
- Trusted clients may attempt one bounded retry with a cached per-device token.
- If that retry fails, clients should stop automatic reconnect loops and surface operator action guidance.
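The retry policy above can be sketched as a small decision helper. The field names follow the documented `error.details` payload; the helper itself and its parameters are illustrative, not an actual client API:

```typescript
// Shape of the documented auth-failure recovery hints.
interface ConnectErrorDetails {
  code?: string;
  canRetryWithDeviceToken?: boolean;
  recommendedNextStep?: string;
}

// Decide whether a trusted client may spend its single bounded retry
// with a cached per-device token after a connect auth failure.
function shouldRetryWithDeviceToken(
  details: ConnectErrorDetails,
  hasCachedDeviceToken: boolean,
  retryBudgetUsed: boolean,
): boolean {
  // One retry per drift episode; never loop on auth failures.
  if (retryBudgetUsed || !hasCachedDeviceToken) return false;
  if (details.code !== "AUTH_TOKEN_MISMATCH") return false;
  return (
    details.canRetryWithDeviceToken === true ||
    details.recommendedNextStep === "retry_with_device_token"
  );
}
```

If the bounded retry also fails, the client should stop reconnecting and surface the recommended operator action instead.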
## Device identity + pairing
@@ -217,8 +223,9 @@ The Gateway treats these as **claims** and enforces server-side allowlists.
- **Local** connects include loopback and the gateway host's own tailnet address
(so same-host tailnet binds can still auto-approve).
- All WS clients must include `device` identity during `connect` (operator + node).
Control UI can omit it only in these modes:
- `gateway.controlUi.allowInsecureAuth=true` for localhost-only insecure HTTP compatibility.
- `gateway.controlUi.dangerouslyDisableDeviceAuth=true` (break-glass, severe security downgrade).
- All connections must sign the server-provided `connect.challenge` nonce.
### Device auth migration diagnostics

View File

@@ -199,7 +199,7 @@ If you run `--deep`, OpenClaw also attempts a best-effort live Gateway probe.
Use this when auditing access or deciding what to back up:
- **WhatsApp**: `~/.openclaw/credentials/whatsapp/<accountId>/creds.json`
- **Telegram bot token**: config/env or `channels.telegram.tokenFile` (regular file only; symlinks rejected)
- **Discord bot token**: config/env or SecretRef (env/file/exec providers)
- **Slack tokens**: config/env (`channels.slack.*`)
- **Pairing allowlists**:
@@ -262,9 +262,14 @@ High-signal `checkId` values you will most likely see in real deployments (not e
## Control UI over HTTP
The Control UI needs a **secure context** (HTTPS or localhost) to generate device
identity. `gateway.controlUi.allowInsecureAuth` is a local compatibility toggle:
- On localhost, it allows Control UI auth without device identity when the page
is loaded over non-secure HTTP.
- It does not bypass pairing checks.
- It does not relax remote (non-localhost) device identity requirements.
Prefer HTTPS (Tailscale Serve) or open the UI on `127.0.0.1`.
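As a config sketch, the compatibility toggle and the break-glass switch both live under `gateway.controlUi`:

```json5
{
  gateway: {
    controlUi: {
      // Localhost-only HTTP compatibility; pairing checks still apply.
      allowInsecureAuth: true,
      // Break-glass only; severe security downgrade. Leave false.
      dangerouslyDisableDeviceAuth: false,
    },
  },
}
```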
For break-glass scenarios only, `gateway.controlUi.dangerouslyDisableDeviceAuth`
disables device identity checks entirely. This is a severe security downgrade;

View File

@@ -113,9 +113,21 @@ Common signatures:
challenge-based device auth flow (`connect.challenge` + `device.nonce`).
- `device signature invalid` / `device signature expired` → client signed the wrong
payload (or stale timestamp) for the current handshake.
- `unauthorized` / reconnect loop → token/password mismatch.
- `AUTH_TOKEN_MISMATCH` with `canRetryWithDeviceToken=true` → client can do one trusted retry with cached device token.
- repeated `unauthorized` after that retry → shared token/device token drift; refresh token config and re-approve/rotate device token if needed.
- `gateway connect failed:` → wrong host/port/url target.
### Auth detail codes quick map
Use `error.details.code` from the failed `connect` response to pick the next action:
| Detail code | Meaning | Recommended action |
| ---------------------------- | -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `AUTH_TOKEN_MISSING` | Client did not send a required shared token. | Paste/set token in the client and retry. For dashboard paths: `openclaw config get gateway.auth.token` then paste into Control UI settings. |
| `AUTH_TOKEN_MISMATCH` | Shared token did not match gateway auth token. | If `canRetryWithDeviceToken=true`, allow one trusted retry. If still failing, run the [token drift recovery checklist](/cli/devices#token-drift-recovery-checklist). |
| `AUTH_DEVICE_TOKEN_MISMATCH` | Cached per-device token is stale or revoked. | Rotate/re-approve device token using [devices CLI](/cli/devices), then reconnect. |
| `PAIRING_REQUIRED` | Device identity is known but not approved for this role. | Approve pending request: `openclaw devices list` then `openclaw devices approve <requestId>`. |
Device auth v2 migration check:
```bash
@@ -135,6 +147,7 @@ Related:
- [/web/control-ui](/web/control-ui)
- [/gateway/authentication](/gateway/authentication)
- [/gateway/remote](/gateway/remote)
- [/cli/devices](/cli/devices)
## Gateway service not running

View File

@@ -2512,6 +2512,7 @@ Your gateway is running with auth enabled (`gateway.auth.*`), but the UI is not
Facts (from code):
- The Control UI keeps the token in `sessionStorage` for the current browser tab session and selected gateway URL, so same-tab refreshes keep working without restoring long-lived localStorage token persistence.
- On `AUTH_TOKEN_MISMATCH`, trusted clients can attempt one bounded retry with a cached device token when the gateway returns retry hints (`canRetryWithDeviceToken=true`, `recommendedNextStep=retry_with_device_token`).
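The session-scoped, gateway-URL-scoped storage described above can be sketched like this. The key format and helper names are illustrative assumptions, not the Control UI's actual code; in a browser you would pass `window.sessionStorage` as the store:

```typescript
// Minimal sketch of session-scoped token storage keyed by gateway URL origin.
type TokenStore = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

function tokenStorageKey(gatewayUrl: string): string {
  // Scope the token to the selected gateway so switching gateways never reuses it.
  return `openclaw.gateway.token:${new URL(gatewayUrl).origin}`;
}

function saveToken(store: TokenStore, gatewayUrl: string, token: string): void {
  // sessionStorage survives same-tab refreshes but not new tabs or restarts,
  // so no long-lived localStorage persistence is reintroduced.
  store.setItem(tokenStorageKey(gatewayUrl), token);
}

function loadToken(store: TokenStore, gatewayUrl: string): string | null {
  return store.getItem(tokenStorageKey(gatewayUrl));
}
```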
Fix:
@@ -2520,6 +2521,9 @@ Fix:
- If remote, tunnel first: `ssh -N -L 18789:127.0.0.1:18789 user@host` then open `http://127.0.0.1:18789/`.
- Set `gateway.auth.token` (or `OPENCLAW_GATEWAY_TOKEN`) on the gateway host.
- In the Control UI settings, paste the same token.
- If mismatch persists after the one retry, rotate/re-approve the paired device token:
- `openclaw devices list`
- `openclaw devices rotate --device <id> --role operator`
- Still stuck? Run `openclaw status --all` and follow [Troubleshooting](/gateway/troubleshooting). See [Dashboard](/web/dashboard) for auth details.
### I set `gateway.bind: tailnet` but it can't bind / nothing listens


@@ -136,7 +136,8 @@ flowchart TD
Common log signatures:
- `device identity required` → HTTP/non-secure context cannot complete device auth.
- `unauthorized` / reconnect loop → wrong token/password or auth mode mismatch.
- `AUTH_TOKEN_MISMATCH` with retry hints (`canRetryWithDeviceToken=true`) → one trusted device-token retry may occur automatically.
- repeated `unauthorized` after that retry → wrong token/password, auth mode mismatch, or stale paired device token.
- `gateway connect failed:` → UI is targeting the wrong URL/port or unreachable gateway.
Deep pages:


@@ -127,7 +127,7 @@ openclaw health
Use this when debugging auth or deciding what to back up:
- **WhatsApp**: `~/.openclaw/credentials/whatsapp/<accountId>/creds.json`
- **Telegram bot token**: config/env or `channels.telegram.tokenFile`
- **Telegram bot token**: config/env or `channels.telegram.tokenFile` (regular file only; symlinks rejected)
- **Discord bot token**: config/env or SecretRef (env/file/exec providers)
- **Slack tokens**: config/env (`channels.slack.*`)
- **Pairing allowlists**:


@@ -123,6 +123,7 @@ Notes:
- `/new <model>` accepts a model alias, `provider/model`, or a provider name (fuzzy match); if no match, the text is treated as the message body.
- For full provider usage breakdown, use `openclaw status --usage`.
- `/allowlist add|remove` requires `commands.config=true` and honors channel `configWrites`.
- In multi-account channels, config-targeted `/allowlist --account <id>` and `/config set channels.<provider>.accounts.<id>...` also honor the target account's `configWrites`.
- `/usage` controls the per-response usage footer; `/usage cost` prints a local cost summary from OpenClaw session logs.
- `/restart` is enabled by default; set `commands.restart: false` to disable it.
- Discord-only native command: `/vc join|leave|status` controls voice channels (requires `channels.discord.voice` and native commands; not available as text).


@@ -174,7 +174,12 @@ OpenClaw **blocks** Control UI connections without device identity.
}
```
`allowInsecureAuth` does not bypass Control UI device identity or pairing checks.
`allowInsecureAuth` is a local compatibility toggle only:
- It allows localhost Control UI sessions to proceed without device identity in
non-secure HTTP contexts.
- It does not bypass pairing checks.
- It does not relax remote (non-localhost) device identity requirements.
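The rules above can be condensed into a small predicate. This is a hedged sketch of the gating logic only; the function and parameter names are assumptions for illustration, and pairing checks (which the toggle never bypasses) are not modeled:

```typescript
// Illustrative sketch: when does a Control UI connection still require device identity?
function requiresDeviceIdentity(params: {
  isLocalhost: boolean;
  isSecureContext: boolean;
  allowInsecureAuth: boolean;
}): boolean {
  // Remote (non-localhost) connections always require device identity.
  if (!params.isLocalhost) return true;
  // Secure contexts can always complete device auth, so keep requiring it.
  if (params.isSecureContext) return true;
  // Localhost over non-secure HTTP: the compatibility toggle may relax identity.
  return !params.allowInsecureAuth;
}
```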
**Break-glass only:**


@@ -45,6 +45,8 @@ Prefer localhost, Tailscale Serve, or an SSH tunnel.
## If you see “unauthorized” / 1008
- Ensure the gateway is reachable (local: `openclaw status`; remote: SSH tunnel `ssh -N -L 18789:127.0.0.1:18789 user@host` then open `http://127.0.0.1:18789/`).
- For `AUTH_TOKEN_MISMATCH`, clients may do one trusted retry with a cached device token when the gateway returns retry hints. If auth still fails after that retry, resolve token drift manually.
- For token drift repair steps, follow [Token drift recovery checklist](/cli/devices#token-drift-recovery-checklist).
- Retrieve or supply the token from the gateway host:
- Plaintext config: `openclaw config get gateway.auth.token`
- SecretRef-managed config: resolve the external secret provider or export `OPENCLAW_GATEWAY_TOKEN` in this shell, then rerun `openclaw dashboard`
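The bounded retry described above can be sketched as follows. The `connect` callback and its error shape are assumptions for illustration, not the actual client API:

```typescript
// Illustrative sketch of the single trusted device-token retry on AUTH_TOKEN_MISMATCH.
type ConnectError = {
  details?: { code?: string; canRetryWithDeviceToken?: boolean };
};

async function connectWithBoundedRetry(
  connect: (auth: { token?: string; deviceToken?: string }) => Promise<void>,
  sharedToken: string,
  cachedDeviceToken?: string,
): Promise<void> {
  try {
    await connect({ token: sharedToken });
  } catch (err) {
    const details = (err as ConnectError).details;
    // Exactly one retry, and only when the gateway hints it may succeed.
    if (
      details?.code === "AUTH_TOKEN_MISMATCH" &&
      details.canRetryWithDeviceToken &&
      cachedDeviceToken
    ) {
      await connect({ deviceToken: cachedDeviceToken });
      return;
    }
    throw err; // anything else: resolve token drift manually
  }
}
```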


@@ -5,7 +5,6 @@ import {
ACPX_PINNED_VERSION,
createAcpxPluginConfigSchema,
resolveAcpxPluginConfig,
toAcpMcpServers,
} from "./config.js";
describe("acpx plugin config parsing", () => {
@@ -20,9 +19,9 @@ describe("acpx plugin config parsing", () => {
expect(resolved.command).toBe(ACPX_BUNDLED_BIN);
expect(resolved.expectedVersion).toBe(ACPX_PINNED_VERSION);
expect(resolved.allowPluginLocalInstall).toBe(true);
expect(resolved.stripProviderAuthEnvVars).toBe(true);
expect(resolved.cwd).toBe(path.resolve("/tmp/workspace"));
expect(resolved.strictWindowsCmdWrapper).toBe(true);
expect(resolved.mcpServers).toEqual({});
});
it("accepts command override and disables plugin-local auto-install", () => {
@@ -37,6 +36,7 @@ describe("acpx plugin config parsing", () => {
expect(resolved.command).toBe(path.resolve(command));
expect(resolved.expectedVersion).toBeUndefined();
expect(resolved.allowPluginLocalInstall).toBe(false);
expect(resolved.stripProviderAuthEnvVars).toBe(false);
});
it("resolves relative command paths against workspace directory", () => {
@@ -50,6 +50,7 @@ describe("acpx plugin config parsing", () => {
expect(resolved.command).toBe(path.resolve("/home/user/repos/openclaw", "../acpx/dist/cli.js"));
expect(resolved.expectedVersion).toBeUndefined();
expect(resolved.allowPluginLocalInstall).toBe(false);
expect(resolved.stripProviderAuthEnvVars).toBe(false);
});
it("keeps bare command names as-is", () => {
@@ -63,6 +64,7 @@ describe("acpx plugin config parsing", () => {
expect(resolved.command).toBe("acpx");
expect(resolved.expectedVersion).toBeUndefined();
expect(resolved.allowPluginLocalInstall).toBe(false);
expect(resolved.stripProviderAuthEnvVars).toBe(false);
});
it("accepts exact expectedVersion override", () => {
@@ -78,6 +80,7 @@ describe("acpx plugin config parsing", () => {
expect(resolved.command).toBe(path.resolve(command));
expect(resolved.expectedVersion).toBe("0.1.99");
expect(resolved.allowPluginLocalInstall).toBe(false);
expect(resolved.stripProviderAuthEnvVars).toBe(false);
});
it("treats expectedVersion=any as no version constraint", () => {
@@ -134,97 +137,4 @@ describe("acpx plugin config parsing", () => {
}),
).toThrow("strictWindowsCmdWrapper must be a boolean");
});
it("accepts mcp server maps", () => {
const resolved = resolveAcpxPluginConfig({
rawConfig: {
mcpServers: {
canva: {
command: "npx",
args: ["-y", "mcp-remote@latest", "https://mcp.canva.com/mcp"],
env: {
CANVA_TOKEN: "secret",
},
},
},
},
workspaceDir: "/tmp/workspace",
});
expect(resolved.mcpServers).toEqual({
canva: {
command: "npx",
args: ["-y", "mcp-remote@latest", "https://mcp.canva.com/mcp"],
env: {
CANVA_TOKEN: "secret",
},
},
});
});
it("rejects invalid mcp server definitions", () => {
expect(() =>
resolveAcpxPluginConfig({
rawConfig: {
mcpServers: {
canva: {
command: "npx",
args: ["-y", 1],
},
},
},
workspaceDir: "/tmp/workspace",
}),
).toThrow(
"mcpServers.canva must have a command string, optional args array, and optional env object",
);
});
it("schema accepts mcp server config", () => {
const schema = createAcpxPluginConfigSchema();
if (!schema.safeParse) {
throw new Error("acpx config schema missing safeParse");
}
const parsed = schema.safeParse({
mcpServers: {
canva: {
command: "npx",
args: ["-y", "mcp-remote@latest"],
env: {
CANVA_TOKEN: "secret",
},
},
},
});
expect(parsed.success).toBe(true);
});
});
describe("toAcpMcpServers", () => {
it("converts plugin config maps into ACP stdio MCP entries", () => {
expect(
toAcpMcpServers({
canva: {
command: "npx",
args: ["-y", "mcp-remote@latest", "https://mcp.canva.com/mcp"],
env: {
CANVA_TOKEN: "secret",
},
},
}),
).toEqual([
{
name: "canva",
command: "npx",
args: ["-y", "mcp-remote@latest", "https://mcp.canva.com/mcp"],
env: [
{
name: "CANVA_TOKEN",
value: "secret",
},
],
},
]);
});
});


@@ -47,6 +47,7 @@ export type ResolvedAcpxPluginConfig = {
command: string;
expectedVersion?: string;
allowPluginLocalInstall: boolean;
stripProviderAuthEnvVars: boolean;
installCommand: string;
cwd: string;
permissionMode: AcpxPermissionMode;
@@ -332,6 +333,7 @@ export function resolveAcpxPluginConfig(params: {
workspaceDir: params.workspaceDir,
});
const allowPluginLocalInstall = command === ACPX_BUNDLED_BIN;
const stripProviderAuthEnvVars = command === ACPX_BUNDLED_BIN;
const configuredExpectedVersion = normalized.expectedVersion;
const expectedVersion =
configuredExpectedVersion === ACPX_VERSION_ANY
@@ -343,6 +345,7 @@ export function resolveAcpxPluginConfig(params: {
command,
expectedVersion,
allowPluginLocalInstall,
stripProviderAuthEnvVars,
installCommand,
cwd,
permissionMode: normalized.permissionMode ?? DEFAULT_PERMISSION_MODE,


@@ -77,6 +77,7 @@ describe("acpx ensure", () => {
command: "/plugin/node_modules/.bin/acpx",
args: ["--version"],
cwd: "/plugin",
stripProviderAuthEnvVars: undefined,
});
});
@@ -148,6 +149,30 @@ describe("acpx ensure", () => {
command: "/custom/acpx",
args: ["--help"],
cwd: "/custom",
stripProviderAuthEnvVars: undefined,
});
});
it("forwards stripProviderAuthEnvVars to version checks", async () => {
spawnAndCollectMock.mockResolvedValueOnce({
stdout: "Usage: acpx [options]\n",
stderr: "",
code: 0,
error: null,
});
await checkAcpxVersion({
command: "/plugin/node_modules/.bin/acpx",
cwd: "/plugin",
expectedVersion: undefined,
stripProviderAuthEnvVars: true,
});
expect(spawnAndCollectMock).toHaveBeenCalledWith({
command: "/plugin/node_modules/.bin/acpx",
args: ["--help"],
cwd: "/plugin",
stripProviderAuthEnvVars: true,
});
});
@@ -186,6 +211,54 @@ describe("acpx ensure", () => {
});
});
it("threads stripProviderAuthEnvVars through version probes and install", async () => {
spawnAndCollectMock
.mockResolvedValueOnce({
stdout: "acpx 0.0.9\n",
stderr: "",
code: 0,
error: null,
})
.mockResolvedValueOnce({
stdout: "added 1 package\n",
stderr: "",
code: 0,
error: null,
})
.mockResolvedValueOnce({
stdout: `acpx ${ACPX_PINNED_VERSION}\n`,
stderr: "",
code: 0,
error: null,
});
await ensureAcpx({
command: "/plugin/node_modules/.bin/acpx",
pluginRoot: "/plugin",
expectedVersion: ACPX_PINNED_VERSION,
stripProviderAuthEnvVars: true,
});
expect(spawnAndCollectMock.mock.calls[0]?.[0]).toMatchObject({
command: "/plugin/node_modules/.bin/acpx",
args: ["--version"],
cwd: "/plugin",
stripProviderAuthEnvVars: true,
});
expect(spawnAndCollectMock.mock.calls[1]?.[0]).toMatchObject({
command: "npm",
args: ["install", "--omit=dev", "--no-save", `acpx@${ACPX_PINNED_VERSION}`],
cwd: "/plugin",
stripProviderAuthEnvVars: true,
});
expect(spawnAndCollectMock.mock.calls[2]?.[0]).toMatchObject({
command: "/plugin/node_modules/.bin/acpx",
args: ["--version"],
cwd: "/plugin",
stripProviderAuthEnvVars: true,
});
});
it("fails with actionable error when npm install fails", async () => {
spawnAndCollectMock
.mockResolvedValueOnce({


@@ -102,6 +102,7 @@ export async function checkAcpxVersion(params: {
command: string;
cwd?: string;
expectedVersion?: string;
stripProviderAuthEnvVars?: boolean;
spawnOptions?: SpawnCommandOptions;
}): Promise<AcpxVersionCheckResult> {
const expectedVersion = params.expectedVersion?.trim() || undefined;
@@ -113,6 +114,7 @@ export async function checkAcpxVersion(params: {
command: params.command,
args: probeArgs,
cwd,
stripProviderAuthEnvVars: params.stripProviderAuthEnvVars,
};
let result: Awaited<ReturnType<typeof spawnAndCollect>>;
try {
@@ -198,6 +200,7 @@ export async function ensureAcpx(params: {
pluginRoot?: string;
expectedVersion?: string;
allowInstall?: boolean;
stripProviderAuthEnvVars?: boolean;
spawnOptions?: SpawnCommandOptions;
}): Promise<void> {
if (pendingEnsure) {
@@ -214,6 +217,7 @@ export async function ensureAcpx(params: {
command: params.command,
cwd: pluginRoot,
expectedVersion,
stripProviderAuthEnvVars: params.stripProviderAuthEnvVars,
spawnOptions: params.spawnOptions,
});
if (precheck.ok) {
@@ -231,6 +235,7 @@ export async function ensureAcpx(params: {
command: "npm",
args: ["install", "--omit=dev", "--no-save", `acpx@${installVersion}`],
cwd: pluginRoot,
stripProviderAuthEnvVars: params.stripProviderAuthEnvVars,
});
if (install.error) {
@@ -252,6 +257,7 @@ export async function ensureAcpx(params: {
command: params.command,
cwd: pluginRoot,
expectedVersion,
stripProviderAuthEnvVars: params.stripProviderAuthEnvVars,
spawnOptions: params.spawnOptions,
});


@@ -0,0 +1,59 @@
import { describe, expect, it, vi } from "vitest";
const { spawnAndCollectMock } = vi.hoisted(() => ({
spawnAndCollectMock: vi.fn(),
}));
vi.mock("./process.js", () => ({
spawnAndCollect: spawnAndCollectMock,
}));
import { __testing, resolveAcpxAgentCommand } from "./mcp-agent-command.js";
describe("resolveAcpxAgentCommand", () => {
it("threads stripProviderAuthEnvVars through the config show probe", async () => {
spawnAndCollectMock.mockResolvedValueOnce({
stdout: JSON.stringify({
agents: {
codex: {
command: "custom-codex",
},
},
}),
stderr: "",
code: 0,
error: null,
});
const command = await resolveAcpxAgentCommand({
acpxCommand: "/plugin/node_modules/.bin/acpx",
cwd: "/plugin",
agent: "codex",
stripProviderAuthEnvVars: true,
});
expect(command).toBe("custom-codex");
expect(spawnAndCollectMock).toHaveBeenCalledWith(
{
command: "/plugin/node_modules/.bin/acpx",
args: ["--cwd", "/plugin", "config", "show"],
cwd: "/plugin",
stripProviderAuthEnvVars: true,
},
undefined,
);
});
});
describe("buildMcpProxyAgentCommand", () => {
it("escapes Windows-style proxy paths without double-escaping backslashes", () => {
const quoted = __testing.quoteCommandPart(
"C:\\repo\\extensions\\acpx\\src\\runtime-internals\\mcp-proxy.mjs",
);
expect(quoted).toBe(
'"C:\\\\repo\\\\extensions\\\\acpx\\\\src\\\\runtime-internals\\\\mcp-proxy.mjs"',
);
expect(quoted).not.toContain("\\\\\\");
});
});


@@ -37,6 +37,10 @@ function quoteCommandPart(value: string): string {
return `"${value.replace(/["\\]/g, "\\$&")}"`;
}
export const __testing = {
quoteCommandPart,
};
function toCommandLine(parts: string[]): string {
return parts.map(quoteCommandPart).join(" ");
}
@@ -62,6 +66,7 @@ function readConfiguredAgentOverrides(value: unknown): Record<string, string> {
async function loadAgentOverrides(params: {
acpxCommand: string;
cwd: string;
stripProviderAuthEnvVars?: boolean;
spawnOptions?: SpawnCommandOptions;
}): Promise<Record<string, string>> {
const result = await spawnAndCollect(
@@ -69,6 +74,7 @@ async function loadAgentOverrides(params: {
command: params.acpxCommand,
args: ["--cwd", params.cwd, "config", "show"],
cwd: params.cwd,
stripProviderAuthEnvVars: params.stripProviderAuthEnvVars,
},
params.spawnOptions,
);
@@ -87,12 +93,14 @@ export async function resolveAcpxAgentCommand(params: {
acpxCommand: string;
cwd: string;
agent: string;
stripProviderAuthEnvVars?: boolean;
spawnOptions?: SpawnCommandOptions;
}): Promise<string> {
const normalizedAgent = normalizeAgentName(params.agent);
const overrides = await loadAgentOverrides({
acpxCommand: params.acpxCommand,
cwd: params.cwd,
stripProviderAuthEnvVars: params.stripProviderAuthEnvVars,
spawnOptions: params.spawnOptions,
});
return overrides[normalizedAgent] ?? ACPX_BUILTIN_AGENT_COMMANDS[normalizedAgent] ?? params.agent;


@@ -2,7 +2,7 @@ import { spawn } from "node:child_process";
import { mkdir, mkdtemp, rm, writeFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import { afterEach, describe, expect, it, vi } from "vitest";
import { createWindowsCmdShimFixture } from "../../../shared/windows-cmd-shim-test-fixtures.js";
import {
resolveSpawnCommand,
@@ -28,6 +28,7 @@ async function createTempDir(): Promise<string> {
}
afterEach(async () => {
vi.unstubAllEnvs();
while (tempDirs.length > 0) {
const dir = tempDirs.pop();
if (!dir) {
@@ -289,4 +290,99 @@ describe("spawnAndCollect", () => {
const result = await resultPromise;
expect(result.error?.name).toBe("AbortError");
});
it("strips shared provider auth env vars from spawned acpx children", async () => {
vi.stubEnv("OPENAI_API_KEY", "openai-secret");
vi.stubEnv("GITHUB_TOKEN", "gh-secret");
vi.stubEnv("HF_TOKEN", "hf-secret");
vi.stubEnv("OPENCLAW_API_KEY", "keep-me");
const result = await spawnAndCollect({
command: process.execPath,
args: [
"-e",
"process.stdout.write(JSON.stringify({openai:process.env.OPENAI_API_KEY,github:process.env.GITHUB_TOKEN,hf:process.env.HF_TOKEN,openclaw:process.env.OPENCLAW_API_KEY,shell:process.env.OPENCLAW_SHELL}))",
],
cwd: process.cwd(),
stripProviderAuthEnvVars: true,
});
expect(result.code).toBe(0);
expect(result.error).toBeNull();
const parsed = JSON.parse(result.stdout) as {
openai?: string;
github?: string;
hf?: string;
openclaw?: string;
shell?: string;
};
expect(parsed.openai).toBeUndefined();
expect(parsed.github).toBeUndefined();
expect(parsed.hf).toBeUndefined();
expect(parsed.openclaw).toBe("keep-me");
expect(parsed.shell).toBe("acp");
});
it("strips provider auth env vars case-insensitively", async () => {
vi.stubEnv("OpenAI_Api_Key", "openai-secret");
vi.stubEnv("Github_Token", "gh-secret");
vi.stubEnv("OPENCLAW_API_KEY", "keep-me");
const result = await spawnAndCollect({
command: process.execPath,
args: [
"-e",
"process.stdout.write(JSON.stringify({openai:process.env.OpenAI_Api_Key,github:process.env.Github_Token,openclaw:process.env.OPENCLAW_API_KEY,shell:process.env.OPENCLAW_SHELL}))",
],
cwd: process.cwd(),
stripProviderAuthEnvVars: true,
});
expect(result.code).toBe(0);
expect(result.error).toBeNull();
const parsed = JSON.parse(result.stdout) as {
openai?: string;
github?: string;
openclaw?: string;
shell?: string;
};
expect(parsed.openai).toBeUndefined();
expect(parsed.github).toBeUndefined();
expect(parsed.openclaw).toBe("keep-me");
expect(parsed.shell).toBe("acp");
});
it("preserves provider auth env vars for explicit custom commands by default", async () => {
vi.stubEnv("OPENAI_API_KEY", "openai-secret");
vi.stubEnv("GITHUB_TOKEN", "gh-secret");
vi.stubEnv("HF_TOKEN", "hf-secret");
vi.stubEnv("OPENCLAW_API_KEY", "keep-me");
const result = await spawnAndCollect({
command: process.execPath,
args: [
"-e",
"process.stdout.write(JSON.stringify({openai:process.env.OPENAI_API_KEY,github:process.env.GITHUB_TOKEN,hf:process.env.HF_TOKEN,openclaw:process.env.OPENCLAW_API_KEY,shell:process.env.OPENCLAW_SHELL}))",
],
cwd: process.cwd(),
});
expect(result.code).toBe(0);
expect(result.error).toBeNull();
const parsed = JSON.parse(result.stdout) as {
openai?: string;
github?: string;
hf?: string;
openclaw?: string;
shell?: string;
};
expect(parsed.openai).toBe("openai-secret");
expect(parsed.github).toBe("gh-secret");
expect(parsed.hf).toBe("hf-secret");
expect(parsed.openclaw).toBe("keep-me");
expect(parsed.shell).toBe("acp");
});
});


@@ -7,7 +7,9 @@ import type {
} from "openclaw/plugin-sdk/acpx";
import {
applyWindowsSpawnProgramPolicy,
listKnownProviderAuthEnvVarNames,
materializeWindowsSpawnProgram,
omitEnvKeysCaseInsensitive,
resolveWindowsSpawnProgramCandidate,
} from "openclaw/plugin-sdk/acpx";
@@ -125,6 +127,7 @@ export function spawnWithResolvedCommand(
command: string;
args: string[];
cwd: string;
stripProviderAuthEnvVars?: boolean;
},
options?: SpawnCommandOptions,
): ChildProcessWithoutNullStreams {
@@ -136,9 +139,15 @@ export function spawnWithResolvedCommand(
options,
);
const childEnv = omitEnvKeysCaseInsensitive(
process.env,
params.stripProviderAuthEnvVars ? listKnownProviderAuthEnvVarNames() : [],
);
childEnv.OPENCLAW_SHELL = "acp";
return spawn(resolved.command, resolved.args, {
cwd: params.cwd,
env: { ...process.env, OPENCLAW_SHELL: "acp" },
env: childEnv,
stdio: ["pipe", "pipe", "pipe"],
shell: resolved.shell,
windowsHide: resolved.windowsHide,
@@ -180,6 +189,7 @@ export async function spawnAndCollect(
command: string;
args: string[];
cwd: string;
stripProviderAuthEnvVars?: boolean;
},
options?: SpawnCommandOptions,
runtime?: {


@@ -1,6 +1,6 @@
import os from "node:os";
import path from "node:path";
import { afterAll, beforeAll, describe, expect, it } from "vitest";
import { afterAll, beforeAll, describe, expect, it, vi } from "vitest";
import { runAcpRuntimeAdapterContract } from "../../../src/acp/runtime/adapter-contract.testkit.js";
import { AcpxRuntime, decodeAcpxRuntimeHandleState } from "./runtime.js";
import {
@@ -19,13 +19,14 @@ beforeAll(async () => {
{
command: "/definitely/missing/acpx",
allowPluginLocalInstall: false,
stripProviderAuthEnvVars: false,
installCommand: "n/a",
cwd: process.cwd(),
mcpServers: {},
permissionMode: "approve-reads",
nonInteractivePermissions: "fail",
strictWindowsCmdWrapper: true,
queueOwnerTtlSeconds: 0.1,
mcpServers: {},
},
{ logger: NOOP_LOGGER },
);
@@ -165,7 +166,7 @@ describe("AcpxRuntime", () => {
for await (const _event of runtime.runTurn({
handle,
text: "describe this image",
attachments: [{ mediaType: "image/png", data: "aW1hZ2UtYnl0ZXM=" }],
attachments: [{ mediaType: "image/png", data: "aW1hZ2UtYnl0ZXM=" }], // pragma: allowlist secret
mode: "prompt",
requestId: "req-image",
})) {
@@ -186,6 +187,40 @@ describe("AcpxRuntime", () => {
]);
});
it("preserves provider auth env vars when runtime uses a custom acpx command", async () => {
vi.stubEnv("OPENAI_API_KEY", "openai-secret"); // pragma: allowlist secret
vi.stubEnv("GITHUB_TOKEN", "gh-secret"); // pragma: allowlist secret
try {
const { runtime, logPath } = await createMockRuntimeFixture();
const handle = await runtime.ensureSession({
sessionKey: "agent:codex:acp:custom-env",
agent: "codex",
mode: "persistent",
});
for await (const _event of runtime.runTurn({
handle,
text: "custom-env",
mode: "prompt",
requestId: "req-custom-env",
})) {
// Drain events; assertions inspect the mock runtime log.
}
const logs = await readMockRuntimeLogEntries(logPath);
const prompt = logs.find(
(entry) =>
entry.kind === "prompt" &&
String(entry.sessionName ?? "") === "agent:codex:acp:custom-env",
);
expect(prompt?.openaiApiKey).toBe("openai-secret");
expect(prompt?.githubToken).toBe("gh-secret");
} finally {
vi.unstubAllEnvs();
}
});
it("preserves leading spaces across streamed text deltas", async () => {
const runtime = sharedFixture?.runtime;
expect(runtime).toBeDefined();
@@ -395,7 +430,7 @@ describe("AcpxRuntime", () => {
command: "npx",
args: ["-y", "mcp-remote@latest", "https://mcp.canva.com/mcp"],
env: {
CANVA_TOKEN: "secret",
CANVA_TOKEN: "secret", // pragma: allowlist secret
},
},
},


@@ -170,6 +170,7 @@ export class AcpxRuntime implements AcpRuntime {
command: this.config.command,
cwd: this.config.cwd,
expectedVersion: this.config.expectedVersion,
stripProviderAuthEnvVars: this.config.stripProviderAuthEnvVars,
spawnOptions: this.spawnCommandOptions,
});
if (!versionCheck.ok) {
@@ -183,6 +184,7 @@ export class AcpxRuntime implements AcpRuntime {
command: this.config.command,
args: ["--help"],
cwd: this.config.cwd,
stripProviderAuthEnvVars: this.config.stripProviderAuthEnvVars,
},
this.spawnCommandOptions,
);
@@ -309,6 +311,7 @@ export class AcpxRuntime implements AcpRuntime {
command: this.config.command,
args,
cwd: state.cwd,
stripProviderAuthEnvVars: this.config.stripProviderAuthEnvVars,
},
this.spawnCommandOptions,
);
@@ -495,6 +498,7 @@ export class AcpxRuntime implements AcpRuntime {
command: this.config.command,
cwd: this.config.cwd,
expectedVersion: this.config.expectedVersion,
stripProviderAuthEnvVars: this.config.stripProviderAuthEnvVars,
spawnOptions: this.spawnCommandOptions,
});
if (!versionCheck.ok) {
@@ -518,6 +522,7 @@ export class AcpxRuntime implements AcpRuntime {
command: this.config.command,
args: ["--help"],
cwd: this.config.cwd,
stripProviderAuthEnvVars: this.config.stripProviderAuthEnvVars,
},
this.spawnCommandOptions,
);
@@ -683,6 +688,7 @@ export class AcpxRuntime implements AcpRuntime {
acpxCommand: this.config.command,
cwd: params.cwd,
agent: params.agent,
stripProviderAuthEnvVars: this.config.stripProviderAuthEnvVars,
spawnOptions: this.spawnCommandOptions,
});
const resolved = buildMcpProxyAgentCommand({
@@ -705,6 +711,7 @@ export class AcpxRuntime implements AcpRuntime {
command: this.config.command,
args: params.args,
cwd: params.cwd,
stripProviderAuthEnvVars: this.config.stripProviderAuthEnvVars,
},
this.spawnCommandOptions,
{


@@ -89,6 +89,11 @@ describe("createAcpxRuntimeService", () => {
await vi.waitFor(() => {
expect(ensureAcpxSpy).toHaveBeenCalledOnce();
expect(ensureAcpxSpy).toHaveBeenCalledWith(
expect.objectContaining({
stripProviderAuthEnvVars: true,
}),
);
expect(probeAvailabilitySpy).toHaveBeenCalledOnce();
});


@@ -59,9 +59,8 @@ export function createAcpxRuntimeService(
});
const expectedVersionLabel = pluginConfig.expectedVersion ?? "any";
const installLabel = pluginConfig.allowPluginLocalInstall ? "enabled" : "disabled";
const mcpServerCount = Object.keys(pluginConfig.mcpServers).length;
ctx.logger.info(
`acpx runtime backend registered (command: ${pluginConfig.command}, expectedVersion: ${expectedVersionLabel}, pluginLocalInstall: ${installLabel}${mcpServerCount > 0 ? `, mcpServers: ${mcpServerCount}` : ""})`,
`acpx runtime backend registered (command: ${pluginConfig.command}, expectedVersion: ${expectedVersionLabel}, pluginLocalInstall: ${installLabel})`,
);
lifecycleRevision += 1;
@@ -73,6 +72,7 @@ export function createAcpxRuntimeService(
logger: ctx.logger,
expectedVersion: pluginConfig.expectedVersion,
allowInstall: pluginConfig.allowPluginLocalInstall,
stripProviderAuthEnvVars: pluginConfig.stripProviderAuthEnvVars,
spawnOptions: {
strictWindowsCmdWrapper: pluginConfig.strictWindowsCmdWrapper,
},


@@ -204,6 +204,8 @@ if (command === "prompt") {
sessionName: sessionFromOption,
stdinText,
openclawShell,
openaiApiKey: process.env.OPENAI_API_KEY || "",
githubToken: process.env.GITHUB_TOKEN || "",
});
const requestId = "req-1";
@@ -326,6 +328,7 @@ export async function createMockRuntimeFixture(params?: {
const config: ResolvedAcpxPluginConfig = {
command: scriptPath,
allowPluginLocalInstall: false,
stripProviderAuthEnvVars: false,
installCommand: "n/a",
cwd: dir,
permissionMode: params?.permissionMode ?? "approve-all",
@@ -378,6 +381,7 @@ export async function readMockRuntimeLogEntries(
export async function cleanupMockRuntimeFixtures(): Promise<void> {
delete process.env.MOCK_ACPX_LOG;
delete process.env.MOCK_ACPX_CONFIG_SHOW_AGENTS;
sharedMockCliScriptPath = null;
logFileSequence = 0;
while (tempDirs.length > 0) {


@@ -21,6 +21,7 @@ import {
import {
buildAccountScopedDmSecurityPolicy,
collectOpenGroupPolicyRestrictSendersWarnings,
createAccountStatusSink,
formatNormalizedAllowFromEntries,
mapAllowFromEntries,
} from "openclaw/plugin-sdk/compat";
@@ -369,8 +370,11 @@ export const bluebubblesPlugin: ChannelPlugin<ResolvedBlueBubblesAccount> = {
startAccount: async (ctx) => {
const account = ctx.account;
const webhookPath = resolveWebhookPathFromConfig(account.config);
ctx.setStatus({
accountId: account.accountId,
const statusSink = createAccountStatusSink({
accountId: ctx.accountId,
setStatus: ctx.setStatus,
});
statusSink({
baseUrl: account.baseUrl,
});
ctx.log?.info(`[${account.accountId}] starting provider (webhook=${webhookPath})`);
@@ -379,7 +383,7 @@ export const bluebubblesPlugin: ChannelPlugin<ResolvedBlueBubblesAccount> = {
config: ctx.cfg,
runtime: ctx.runtime,
abortSignal: ctx.abortSignal,
statusSink: (patch) => ctx.setStatus({ accountId: ctx.accountId, ...patch }),
statusSink,
webhookPath,
});
},


@@ -1,7 +1,9 @@
import { MarkdownConfigSchema, ToolPolicySchema } from "openclaw/plugin-sdk/bluebubbles";
import {
AllowFromEntrySchema,
AllowFromListSchema,
buildCatchallMultiAccountChannelSchema,
DmPolicySchema,
GroupPolicySchema,
} from "openclaw/plugin-sdk/compat";
import { z } from "zod";
import { buildSecretInputSchema, hasConfiguredSecretInput } from "./secret-input.js";
@@ -35,10 +37,10 @@ const bluebubblesAccountSchema = z
serverUrl: z.string().optional(),
password: buildSecretInputSchema().optional(),
webhookPath: z.string().optional(),
dmPolicy: z.enum(["pairing", "allowlist", "open", "disabled"]).optional(),
allowFrom: z.array(AllowFromEntrySchema).optional(),
groupAllowFrom: z.array(AllowFromEntrySchema).optional(),
groupPolicy: z.enum(["open", "disabled", "allowlist"]).optional(),
dmPolicy: DmPolicySchema.optional(),
allowFrom: AllowFromListSchema,
groupAllowFrom: AllowFromListSchema,
groupPolicy: GroupPolicySchema.optional(),
historyLimit: z.number().int().min(0).optional(),
dmHistoryLimit: z.number().int().min(0).optional(),
textChunkLimit: z.number().int().positive().optional(),


@@ -10,6 +10,7 @@ import {
formatDocsLink,
mergeAllowFromEntries,
normalizeAccountId,
patchScopedAccountConfig,
resolveAccountIdForConfigure,
setTopLevelChannelDmPolicyWithAllowFrom,
} from "openclaw/plugin-sdk/bluebubbles";
@@ -38,34 +39,14 @@ function setBlueBubblesAllowFrom(
accountId: string,
allowFrom: string[],
): OpenClawConfig {
if (accountId === DEFAULT_ACCOUNT_ID) {
return {
...cfg,
channels: {
...cfg.channels,
bluebubbles: {
...cfg.channels?.bluebubbles,
allowFrom,
},
},
};
}
return {
...cfg,
channels: {
...cfg.channels,
bluebubbles: {
...cfg.channels?.bluebubbles,
accounts: {
...cfg.channels?.bluebubbles?.accounts,
[accountId]: {
...cfg.channels?.bluebubbles?.accounts?.[accountId],
allowFrom,
},
},
},
},
};
return patchScopedAccountConfig({
cfg,
channelKey: channel,
accountId,
patch: { allowFrom },
ensureChannelEnabled: false,
ensureAccountEnabled: false,
});
}
function parseBlueBubblesAllowFromInput(raw: string): string[] {


@@ -1,9 +1,9 @@
import { createScopedChannelConfigBase } from "openclaw/plugin-sdk/compat";
import {
buildAccountScopedDmSecurityPolicy,
buildOpenGroupPolicyConfigureRouteAllowlistWarning,
collectAllowlistProviderGroupPolicyWarnings,
createScopedAccountConfigAccessors,
createScopedDmSecurityResolver,
formatNormalizedAllowFromEntries,
} from "openclaw/plugin-sdk/compat";
import {
@@ -12,6 +12,7 @@ import {
buildComputedAccountStatusSnapshot,
buildChannelConfigSchema,
DEFAULT_ACCOUNT_ID,
createAccountStatusSink,
getChatChannelMeta,
listDirectoryGroupEntriesFromMapKeys,
listDirectoryUserEntriesFromAllowFrom,
@@ -21,6 +22,7 @@ import {
PAIRING_APPROVED_MESSAGE,
resolveChannelMediaMaxBytes,
resolveGoogleChatGroupRequireMention,
runPassiveAccountLifecycle,
type ChannelDock,
type ChannelMessageActionAdapter,
type ChannelPlugin,
@@ -84,6 +86,14 @@ const googleChatConfigBase = createScopedChannelConfigBase<ResolvedGoogleChatAcc
],
});
const resolveGoogleChatDmPolicy = createScopedDmSecurityResolver<ResolvedGoogleChatAccount>({
channelKey: "googlechat",
resolvePolicy: (account) => account.config.dm?.policy,
resolveAllowFrom: (account) => account.config.dm?.allowFrom,
allowFromPathSuffix: "dm.",
normalizeEntry: (raw) => formatAllowFromEntry(raw),
});
export const googlechatDock: ChannelDock = {
id: "googlechat",
capabilities: {
@@ -170,18 +180,7 @@ export const googlechatPlugin: ChannelPlugin<ResolvedGoogleChatAccount> = {
...googleChatConfigAccessors,
},
security: {
resolveDmPolicy: ({ cfg, accountId, account }) => {
return buildAccountScopedDmSecurityPolicy({
cfg,
channelKey: "googlechat",
accountId,
fallbackAccountId: account.accountId ?? DEFAULT_ACCOUNT_ID,
policy: account.config.dm?.policy,
allowFrom: account.config.dm?.allowFrom ?? [],
allowFromPathSuffix: "dm.",
normalizeEntry: (raw) => formatAllowFromEntry(raw),
});
},
resolveDmPolicy: resolveGoogleChatDmPolicy,
collectWarnings: ({ account, cfg }) => {
const warnings = collectAllowlistProviderGroupPolicyWarnings({
cfg,
@@ -512,37 +511,39 @@ export const googlechatPlugin: ChannelPlugin<ResolvedGoogleChatAccount> = {
gateway: {
startAccount: async (ctx) => {
const account = ctx.account;
ctx.log?.info(`[${account.accountId}] starting Google Chat webhook`);
ctx.setStatus({
const statusSink = createAccountStatusSink({
accountId: account.accountId,
setStatus: ctx.setStatus,
});
ctx.log?.info(`[${account.accountId}] starting Google Chat webhook`);
statusSink({
running: true,
lastStartAt: Date.now(),
webhookPath: resolveGoogleChatWebhookPath({ account }),
audienceType: account.config.audienceType,
audience: account.config.audience,
});
const unregister = await startGoogleChatMonitor({
account,
config: ctx.cfg,
runtime: ctx.runtime,
await runPassiveAccountLifecycle({
abortSignal: ctx.abortSignal,
webhookPath: account.config.webhookPath,
webhookUrl: account.config.webhookUrl,
statusSink: (patch) => ctx.setStatus({ accountId: account.accountId, ...patch }),
});
// Keep the promise pending until abort (webhook mode is passive).
await new Promise<void>((resolve) => {
if (ctx.abortSignal.aborted) {
resolve();
return;
}
ctx.abortSignal.addEventListener("abort", () => resolve(), { once: true });
});
unregister?.();
ctx.setStatus({
accountId: account.accountId,
running: false,
lastStopAt: Date.now(),
start: async () =>
await startGoogleChatMonitor({
account,
config: ctx.cfg,
runtime: ctx.runtime,
abortSignal: ctx.abortSignal,
webhookPath: account.config.webhookPath,
webhookUrl: account.config.webhookUrl,
statusSink,
}),
stop: async (unregister) => {
unregister?.();
},
onStop: async () => {
statusSink({
running: false,
lastStopAt: Date.now(),
});
},
});
},
},
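The `runPassiveAccountLifecycle` helper this diff adopts is not shown here; based on its call sites above (start a monitor, stay pending until abort, then stop), a minimal sketch could look like the following — the exact signature and option names are assumptions inferred from this diff:

```typescript
// Hypothetical sketch of runPassiveAccountLifecycle, inferred from call sites:
// start a monitor, keep the returned promise pending until the abort signal
// fires (webhook/monitor mode is passive), then stop and run an onStop hook.
async function runPassiveAccountLifecycle<T>(params: {
  abortSignal: AbortSignal;
  start: () => Promise<T>;
  stop: (handle: T) => Promise<unknown> | unknown;
  onStop?: () => Promise<unknown> | unknown;
}): Promise<void> {
  const handle = await params.start();
  await new Promise<void>((resolve) => {
    if (params.abortSignal.aborted) {
      resolve();
      return;
    }
    params.abortSignal.addEventListener("abort", () => resolve(), { once: true });
  });
  await params.stop(handle);
  await params.onStop?.();
}

// Usage demo: abort right after start and observe the ordering.
const events: string[] = [];
const controller = new AbortController();
const task = runPassiveAccountLifecycle({
  abortSignal: controller.signal,
  start: async () => {
    events.push("start");
    return { stop: () => events.push("stop") };
  },
  stop: (monitor) => {
    monitor.stop();
  },
  onStop: () => {
    events.push("onStop");
  },
});
controller.abort();
await task;
```

This centralizes the "pending promise until abort" boilerplate that the old googlechat and IRC `startAccount` bodies each hand-rolled.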

View File

@@ -1,5 +1,7 @@
import type { OpenClawConfig, DmPolicy } from "openclaw/plugin-sdk/googlechat";
import {
DEFAULT_ACCOUNT_ID,
applySetupAccountConfigPatch,
addWildcardAllowFrom,
formatDocsLink,
mergeAllowFromEntries,
@@ -8,7 +10,6 @@ import {
type ChannelOnboardingAdapter,
type ChannelOnboardingDmPolicy,
type WizardPrompter,
DEFAULT_ACCOUNT_ID,
migrateBaseNameToDefaultAccount,
} from "openclaw/plugin-sdk/googlechat";
import {
@@ -83,45 +84,6 @@ const dmPolicy: ChannelOnboardingDmPolicy = {
promptAllowFrom,
};
function applyAccountConfig(params: {
cfg: OpenClawConfig;
accountId: string;
patch: Record<string, unknown>;
}): OpenClawConfig {
const { cfg, accountId, patch } = params;
if (accountId === DEFAULT_ACCOUNT_ID) {
return {
...cfg,
channels: {
...cfg.channels,
googlechat: {
...cfg.channels?.["googlechat"],
enabled: true,
...patch,
},
},
};
}
return {
...cfg,
channels: {
...cfg.channels,
googlechat: {
...cfg.channels?.["googlechat"],
enabled: true,
accounts: {
...cfg.channels?.["googlechat"]?.accounts,
[accountId]: {
...cfg.channels?.["googlechat"]?.accounts?.[accountId],
enabled: true,
...patch,
},
},
},
},
};
}
async function promptCredentials(params: {
cfg: OpenClawConfig;
prompter: WizardPrompter;
@@ -137,7 +99,7 @@ async function promptCredentials(params: {
initialValue: true,
});
if (useEnv) {
return applyAccountConfig({ cfg, accountId, patch: {} });
return applySetupAccountConfigPatch({ cfg, channelKey: channel, accountId, patch: {} });
}
}
@@ -156,8 +118,9 @@ async function promptCredentials(params: {
placeholder: "/path/to/service-account.json",
validate: (value) => (String(value ?? "").trim() ? undefined : "Required"),
});
return applyAccountConfig({
return applySetupAccountConfigPatch({
cfg,
channelKey: channel,
accountId,
patch: { serviceAccountFile: String(path).trim() },
});
@@ -168,8 +131,9 @@ async function promptCredentials(params: {
placeholder: '{"type":"service_account", ... }',
validate: (value) => (String(value ?? "").trim() ? undefined : "Required"),
});
return applyAccountConfig({
return applySetupAccountConfigPatch({
cfg,
channelKey: channel,
accountId,
patch: { serviceAccount: String(json).trim() },
});
@@ -200,8 +164,9 @@ async function promptAudience(params: {
initialValue: currentAudience || undefined,
validate: (value) => (String(value ?? "").trim() ? undefined : "Required"),
});
return applyAccountConfig({
return applySetupAccountConfigPatch({
cfg: params.cfg,
channelKey: channel,
accountId: params.accountId,
patch: { audienceType, audience: String(audience).trim() },
});

View File

@@ -1,5 +1,8 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { listIrcAccountIds, resolveDefaultIrcAccountId } from "./accounts.js";
import { listIrcAccountIds, resolveDefaultIrcAccountId, resolveIrcAccount } from "./accounts.js";
import type { CoreConfig } from "./types.js";
function asConfig(value: unknown): CoreConfig {
@@ -76,3 +79,28 @@ describe("resolveDefaultIrcAccountId", () => {
expect(resolveDefaultIrcAccountId(cfg)).toBe("aaa");
});
});
describe("resolveIrcAccount", () => {
it.runIf(process.platform !== "win32")("rejects symlinked password files", () => {
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-irc-account-"));
const passwordFile = path.join(dir, "password.txt");
const passwordLink = path.join(dir, "password-link.txt");
fs.writeFileSync(passwordFile, "secret-pass\n", "utf8");
fs.symlinkSync(passwordFile, passwordLink);
const cfg = asConfig({
channels: {
irc: {
host: "irc.example.com",
nick: "claw",
passwordFile: passwordLink,
},
},
});
const account = resolveIrcAccount({ cfg });
expect(account.password).toBe("");
expect(account.passwordSource).toBe("none");
fs.rmSync(dir, { recursive: true, force: true });
});
});

View File

@@ -1,5 +1,5 @@
import { readFileSync } from "node:fs";
import { DEFAULT_ACCOUNT_ID, normalizeAccountId } from "openclaw/plugin-sdk/account-id";
import { tryReadSecretFileSync } from "openclaw/plugin-sdk/core";
import {
createAccountListHelpers,
normalizeResolvedSecretInputString,
@@ -100,13 +100,11 @@ function resolvePassword(accountId: string, merged: IrcAccountConfig) {
}
if (merged.passwordFile?.trim()) {
try {
const filePassword = readFileSync(merged.passwordFile.trim(), "utf-8").trim();
if (filePassword) {
return { password: filePassword, source: "passwordFile" as const };
}
} catch {
// Ignore unreadable files here; status will still surface missing configuration.
const filePassword = tryReadSecretFileSync(merged.passwordFile, "IRC password file", {
rejectSymlink: true,
});
if (filePassword) {
return { password: filePassword, source: "passwordFile" as const };
}
}
@@ -137,11 +135,10 @@ function resolveNickServConfig(accountId: string, nickserv?: IrcNickServConfig):
envPassword ||
"";
if (!resolvedPassword && passwordFile) {
try {
resolvedPassword = readFileSync(passwordFile, "utf-8").trim();
} catch {
// Ignore unreadable files; monitor/probe status will surface failures.
}
resolvedPassword =
tryReadSecretFileSync(passwordFile, "IRC NickServ password file", {
rejectSymlink: true,
}) ?? "";
}
const merged: IrcNickServConfig = {

View File

@@ -0,0 +1,67 @@
import { afterEach, describe, expect, it, vi } from "vitest";
import { createStartAccountContext } from "../../test-utils/start-account-context.js";
import type { ResolvedIrcAccount } from "./accounts.js";
const hoisted = vi.hoisted(() => ({
monitorIrcProvider: vi.fn(),
}));
vi.mock("./monitor.js", async () => {
const actual = await vi.importActual<typeof import("./monitor.js")>("./monitor.js");
return {
...actual,
monitorIrcProvider: hoisted.monitorIrcProvider,
};
});
import { ircPlugin } from "./channel.js";
describe("ircPlugin gateway.startAccount", () => {
afterEach(() => {
vi.clearAllMocks();
});
it("keeps startAccount pending until abort, then stops the monitor", async () => {
const stop = vi.fn();
hoisted.monitorIrcProvider.mockResolvedValue({ stop });
const account: ResolvedIrcAccount = {
accountId: "default",
enabled: true,
name: "default",
configured: true,
host: "irc.example.com",
port: 6697,
tls: true,
nick: "openclaw",
username: "openclaw",
realname: "OpenClaw",
password: "",
passwordSource: "none",
config: {} as ResolvedIrcAccount["config"],
};
const abort = new AbortController();
const task = ircPlugin.gateway!.startAccount!(
createStartAccountContext({
account,
abortSignal: abort.signal,
}),
);
let settled = false;
void task.then(() => {
settled = true;
});
await vi.waitFor(() => {
expect(hoisted.monitorIrcProvider).toHaveBeenCalledOnce();
});
expect(settled).toBe(false);
expect(stop).not.toHaveBeenCalled();
abort.abort();
await task;
expect(stop).toHaveBeenCalledOnce();
});
});

View File

@@ -9,10 +9,12 @@ import {
buildBaseAccountStatusSnapshot,
buildBaseChannelStatusSummary,
buildChannelConfigSchema,
createAccountStatusSink,
DEFAULT_ACCOUNT_ID,
deleteAccountFromConfigSection,
getChatChannelMeta,
PAIRING_APPROVED_MESSAGE,
runPassiveAccountLifecycle,
setAccountEnabledInConfigSection,
type ChannelPlugin,
} from "openclaw/plugin-sdk/irc";
@@ -353,6 +355,10 @@ export const ircPlugin: ChannelPlugin<ResolvedIrcAccount, IrcProbe> = {
gateway: {
startAccount: async (ctx) => {
const account = ctx.account;
const statusSink = createAccountStatusSink({
accountId: ctx.accountId,
setStatus: ctx.setStatus,
});
if (!account.configured) {
throw new Error(
`IRC is not configured for account "${account.accountId}" (need host and nick in channels.irc).`,
@@ -361,14 +367,20 @@ export const ircPlugin: ChannelPlugin<ResolvedIrcAccount, IrcProbe> = {
ctx.log?.info(
`[${account.accountId}] starting IRC provider (${account.host}:${account.port}${account.tls ? " tls" : ""})`,
);
const { stop } = await monitorIrcProvider({
accountId: account.accountId,
config: ctx.cfg as CoreConfig,
runtime: ctx.runtime,
await runPassiveAccountLifecycle({
abortSignal: ctx.abortSignal,
statusSink: (patch) => ctx.setStatus({ accountId: ctx.accountId, ...patch }),
start: async () =>
await monitorIrcProvider({
accountId: account.accountId,
config: ctx.cfg as CoreConfig,
runtime: ctx.runtime,
abortSignal: ctx.abortSignal,
statusSink,
}),
stop: async (monitor) => {
monitor.stop();
},
});
return { stop };
},
},
};
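`createAccountStatusSink` is also introduced without its definition in this diff. From the call sites (`createAccountStatusSink({ accountId, setStatus: ctx.setStatus })` replacing `(patch) => ctx.setStatus({ accountId, ...patch })`), a plausible sketch is simply a partial application that pins the account id onto every status patch:

```typescript
// Sketch of createAccountStatusSink, inferred from its call sites in this
// diff (the SDK implementation is not shown; the shape is an assumption).
type StatusPatch = Record<string, unknown>;

function createAccountStatusSink(params: {
  accountId: string;
  setStatus: (patch: StatusPatch) => void;
}): (patch: StatusPatch) => void {
  return (patch) => params.setStatus({ accountId: params.accountId, ...patch });
}

// Usage: every emitted patch carries the account id automatically.
const seen: StatusPatch[] = [];
const sink = createAccountStatusSink({
  accountId: "default",
  setStatus: (patch) => seen.push(patch),
});
sink({ running: true });
sink({ running: false, lastStopAt: 0 });
```

This removes the repeated inline `{ accountId: ctx.accountId, ...patch }` closures from each channel's `startAccount`.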

View File

@@ -1,6 +1,7 @@
import {
DEFAULT_ACCOUNT_ID,
formatDocsLink,
patchScopedAccountConfig,
promptChannelAccessConfig,
resolveAccountIdForConfigure,
setTopLevelChannelAllowFrom,
@@ -59,35 +60,14 @@ function updateIrcAccountConfig(
accountId: string,
patch: Partial<IrcAccountConfig>,
): CoreConfig {
const current = cfg.channels?.irc ?? {};
if (accountId === DEFAULT_ACCOUNT_ID) {
return {
...cfg,
channels: {
...cfg.channels,
irc: {
...current,
...patch,
},
},
};
}
return {
...cfg,
channels: {
...cfg.channels,
irc: {
...current,
accounts: {
...current.accounts,
[accountId]: {
...current.accounts?.[accountId],
...patch,
},
},
},
},
};
return patchScopedAccountConfig({
cfg,
channelKey: channel,
accountId,
patch,
ensureChannelEnabled: false,
ensureAccountEnabled: false,
}) as CoreConfig;
}
function setIrcDmPolicy(cfg: CoreConfig, dmPolicy: DmPolicy): CoreConfig {
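The `patchScopedAccountConfig` helper replaces the inline default-vs-named-account branching deleted above. A sketch mirroring that removed logic (the SDK helper itself is not shown; option names like `ensureChannelEnabled` are omitted here and the type is simplified):

```typescript
// Sketch of patchScopedAccountConfig, mirroring the inline code this diff
// removes: the default account patches the channel section directly, while
// any other account patches its entry under the `accounts` map.
type AnyConfig = {
  channels?: Record<
    string,
    { accounts?: Record<string, Record<string, unknown>> } & Record<string, unknown>
  >;
};

const DEFAULT_ACCOUNT_ID = "default";

function patchScopedAccountConfig(params: {
  cfg: AnyConfig;
  channelKey: string;
  accountId: string;
  patch: Record<string, unknown>;
}): AnyConfig {
  const { cfg, channelKey, accountId, patch } = params;
  const section = cfg.channels?.[channelKey] ?? {};
  if (accountId === DEFAULT_ACCOUNT_ID) {
    return { ...cfg, channels: { ...cfg.channels, [channelKey]: { ...section, ...patch } } };
  }
  return {
    ...cfg,
    channels: {
      ...cfg.channels,
      [channelKey]: {
        ...section,
        accounts: {
          ...section.accounts,
          [accountId]: { ...section.accounts?.[accountId], ...patch },
        },
      },
    },
  };
}

// Usage: immutable updates for both the default account and a named one.
const base: AnyConfig = { channels: { irc: { host: "irc.example.com" } } };
const patchedDefault = patchScopedAccountConfig({
  cfg: base, channelKey: "irc", accountId: "default", patch: { nick: "claw" },
});
const patchedOps = patchScopedAccountConfig({
  cfg: base, channelKey: "irc", accountId: "ops", patch: { nick: "claw-ops" },
});
```

Centralizing this shape is what lets the googlechat and nextcloud-talk onboarding adapters in this commit drop their near-identical `applyAccountConfig` copies.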

View File

@@ -1,7 +1,8 @@
import {
buildAccountScopedDmSecurityPolicy,
createScopedAccountConfigAccessors,
collectAllowlistProviderRestrictSendersWarnings,
createScopedAccountConfigAccessors,
createScopedChannelConfigBase,
createScopedDmSecurityResolver,
} from "openclaw/plugin-sdk/compat";
import {
buildChannelConfigSchema,
@@ -43,6 +44,24 @@ const lineConfigAccessors = createScopedAccountConfigAccessors({
.map((entry) => entry.replace(/^line:(?:user:)?/i, "")),
});
const lineConfigBase = createScopedChannelConfigBase<ResolvedLineAccount, OpenClawConfig>({
sectionKey: "line",
listAccountIds: (cfg) => getLineRuntime().channel.line.listLineAccountIds(cfg),
resolveAccount: (cfg, accountId) =>
getLineRuntime().channel.line.resolveLineAccount({ cfg, accountId: accountId ?? undefined }),
defaultAccountId: (cfg) => getLineRuntime().channel.line.resolveDefaultLineAccountId(cfg),
clearBaseFields: ["channelSecret", "tokenFile", "secretFile"],
});
const resolveLineDmPolicy = createScopedDmSecurityResolver<ResolvedLineAccount>({
channelKey: "line",
resolvePolicy: (account) => account.config.dmPolicy,
resolveAllowFrom: (account) => account.config.allowFrom,
policyPathSuffix: "dmPolicy",
approveHint: "openclaw pairing approve line <code>",
normalizeEntry: (raw) => raw.replace(/^line:(?:user:)?/i, ""),
});
function patchLineAccountConfig(
cfg: OpenClawConfig,
lineConfig: LineConfig,
@@ -113,40 +132,7 @@ export const linePlugin: ChannelPlugin<ResolvedLineAccount> = {
reload: { configPrefixes: ["channels.line"] },
configSchema: buildChannelConfigSchema(LineConfigSchema),
config: {
listAccountIds: (cfg) => getLineRuntime().channel.line.listLineAccountIds(cfg),
resolveAccount: (cfg, accountId) =>
getLineRuntime().channel.line.resolveLineAccount({ cfg, accountId: accountId ?? undefined }),
defaultAccountId: (cfg) => getLineRuntime().channel.line.resolveDefaultLineAccountId(cfg),
setAccountEnabled: ({ cfg, accountId, enabled }) => {
const lineConfig = (cfg.channels?.line ?? {}) as LineConfig;
return patchLineAccountConfig(cfg, lineConfig, accountId, { enabled });
},
deleteAccount: ({ cfg, accountId }) => {
const lineConfig = (cfg.channels?.line ?? {}) as LineConfig;
if (accountId === DEFAULT_ACCOUNT_ID) {
// oxlint-disable-next-line no-unused-vars
const { channelSecret, tokenFile, secretFile, ...rest } = lineConfig;
return {
...cfg,
channels: {
...cfg.channels,
line: rest,
},
};
}
const accounts = { ...lineConfig.accounts };
delete accounts[accountId];
return {
...cfg,
channels: {
...cfg.channels,
line: {
...lineConfig,
accounts: Object.keys(accounts).length > 0 ? accounts : undefined,
},
},
};
},
...lineConfigBase,
isConfigured: (account) =>
Boolean(account.channelAccessToken?.trim() && account.channelSecret?.trim()),
describeAccount: (account) => ({
@@ -159,19 +145,7 @@ export const linePlugin: ChannelPlugin<ResolvedLineAccount> = {
...lineConfigAccessors,
},
security: {
resolveDmPolicy: ({ cfg, accountId, account }) => {
return buildAccountScopedDmSecurityPolicy({
cfg,
channelKey: "line",
accountId,
fallbackAccountId: account.accountId ?? DEFAULT_ACCOUNT_ID,
policy: account.config.dmPolicy,
allowFrom: account.config.allowFrom ?? [],
policyPathSuffix: "dmPolicy",
approveHint: "openclaw pairing approve line <code>",
normalizeEntry: (raw) => raw.replace(/^line:(?:user:)?/i, ""),
});
},
resolveDmPolicy: resolveLineDmPolicy,
collectWarnings: ({ account, cfg }) => {
return collectAllowlistProviderRestrictSendersWarnings({
cfg,

View File

@@ -1,8 +1,9 @@
import {
buildAccountScopedDmSecurityPolicy,
buildOpenGroupPolicyWarning,
collectAllowlistProviderGroupPolicyWarnings,
createScopedAccountConfigAccessors,
createScopedChannelConfigBase,
createScopedDmSecurityResolver,
} from "openclaw/plugin-sdk/compat";
import {
applyAccountNameToChannelSection,
@@ -10,10 +11,8 @@ import {
buildProbeChannelStatusSummary,
collectStatusIssuesFromLastError,
DEFAULT_ACCOUNT_ID,
deleteAccountFromConfigSection,
normalizeAccountId,
PAIRING_APPROVED_MESSAGE,
setAccountEnabledInConfigSection,
type ChannelPlugin,
} from "openclaw/plugin-sdk/matrix";
import { matrixMessageActions } from "./actions.js";
@@ -106,6 +105,30 @@ const matrixConfigAccessors = createScopedAccountConfigAccessors({
formatAllowFrom: (allowFrom) => normalizeMatrixAllowList(allowFrom),
});
const matrixConfigBase = createScopedChannelConfigBase<ResolvedMatrixAccount, CoreConfig>({
sectionKey: "matrix",
listAccountIds: listMatrixAccountIds,
resolveAccount: (cfg, accountId) => resolveMatrixAccount({ cfg, accountId }),
defaultAccountId: resolveDefaultMatrixAccountId,
clearBaseFields: [
"name",
"homeserver",
"userId",
"accessToken",
"password",
"deviceName",
"initialSyncLimit",
],
});
const resolveMatrixDmPolicy = createScopedDmSecurityResolver<ResolvedMatrixAccount>({
channelKey: "matrix",
resolvePolicy: (account) => account.config.dm?.policy,
resolveAllowFrom: (account) => account.config.dm?.allowFrom,
allowFromPathSuffix: "dm.",
normalizeEntry: (raw) => normalizeMatrixUserId(raw),
});
export const matrixPlugin: ChannelPlugin<ResolvedMatrixAccount> = {
id: "matrix",
meta,
@@ -127,32 +150,7 @@ export const matrixPlugin: ChannelPlugin<ResolvedMatrixAccount> = {
reload: { configPrefixes: ["channels.matrix"] },
configSchema: buildChannelConfigSchema(MatrixConfigSchema),
config: {
listAccountIds: (cfg) => listMatrixAccountIds(cfg as CoreConfig),
resolveAccount: (cfg, accountId) => resolveMatrixAccount({ cfg: cfg as CoreConfig, accountId }),
defaultAccountId: (cfg) => resolveDefaultMatrixAccountId(cfg as CoreConfig),
setAccountEnabled: ({ cfg, accountId, enabled }) =>
setAccountEnabledInConfigSection({
cfg: cfg as CoreConfig,
sectionKey: "matrix",
accountId,
enabled,
allowTopLevel: true,
}),
deleteAccount: ({ cfg, accountId }) =>
deleteAccountFromConfigSection({
cfg: cfg as CoreConfig,
sectionKey: "matrix",
accountId,
clearBaseFields: [
"name",
"homeserver",
"userId",
"accessToken",
"password",
"deviceName",
"initialSyncLimit",
],
}),
...matrixConfigBase,
isConfigured: (account) => account.configured,
describeAccount: (account) => ({
accountId: account.accountId,
@@ -164,18 +162,7 @@ export const matrixPlugin: ChannelPlugin<ResolvedMatrixAccount> = {
...matrixConfigAccessors,
},
security: {
resolveDmPolicy: ({ cfg, accountId, account }) => {
return buildAccountScopedDmSecurityPolicy({
cfg: cfg as CoreConfig,
channelKey: "matrix",
accountId,
fallbackAccountId: account.accountId ?? DEFAULT_ACCOUNT_ID,
policy: account.config.dm?.policy,
allowFrom: account.config.dm?.allowFrom ?? [],
allowFromPathSuffix: "dm.",
normalizeEntry: (raw) => normalizeMatrixUserId(raw),
});
},
resolveDmPolicy: resolveMatrixDmPolicy,
collectWarnings: ({ account, cfg }) => {
return collectAllowlistProviderGroupPolicyWarnings({
cfg: cfg as CoreConfig,

View File

@@ -1,9 +1,13 @@
import {
AllowFromListSchema,
buildNestedDmConfigSchema,
DmPolicySchema,
GroupPolicySchema,
} from "openclaw/plugin-sdk/compat";
import { MarkdownConfigSchema, ToolPolicySchema } from "openclaw/plugin-sdk/matrix";
import { z } from "zod";
import { buildSecretInputSchema } from "./secret-input.js";
const allowFromEntry = z.union([z.string(), z.number()]);
const matrixActionSchema = z
.object({
reactions: z.boolean().optional(),
@@ -14,14 +18,6 @@ const matrixActionSchema = z
})
.optional();
const matrixDmSchema = z
.object({
enabled: z.boolean().optional(),
policy: z.enum(["pairing", "allowlist", "open", "disabled"]).optional(),
allowFrom: z.array(allowFromEntry).optional(),
})
.optional();
const matrixRoomSchema = z
.object({
enabled: z.boolean().optional(),
@@ -29,7 +25,7 @@ const matrixRoomSchema = z
requireMention: z.boolean().optional(),
tools: ToolPolicySchema,
autoReply: z.boolean().optional(),
users: z.array(allowFromEntry).optional(),
users: AllowFromListSchema,
skills: z.array(z.string()).optional(),
systemPrompt: z.string().optional(),
})
@@ -49,7 +45,7 @@ export const MatrixConfigSchema = z.object({
initialSyncLimit: z.number().optional(),
encryption: z.boolean().optional(),
allowlistOnly: z.boolean().optional(),
groupPolicy: z.enum(["open", "disabled", "allowlist"]).optional(),
groupPolicy: GroupPolicySchema.optional(),
replyToMode: z.enum(["off", "first", "all"]).optional(),
threadReplies: z.enum(["off", "inbound", "always"]).optional(),
textChunkLimit: z.number().optional(),
@@ -57,9 +53,9 @@ export const MatrixConfigSchema = z.object({
responsePrefix: z.string().optional(),
mediaMaxMb: z.number().optional(),
autoJoin: z.enum(["always", "allowlist", "off"]).optional(),
autoJoinAllowlist: z.array(allowFromEntry).optional(),
groupAllowFrom: z.array(allowFromEntry).optional(),
dm: matrixDmSchema,
autoJoinAllowlist: AllowFromListSchema,
groupAllowFrom: AllowFromListSchema,
dm: buildNestedDmConfigSchema(),
groups: z.object({}).catchall(matrixRoomSchema).optional(),
rooms: z.object({}).catchall(matrixRoomSchema).optional(),
actions: matrixActionSchema,

View File

@@ -1,6 +1,7 @@
import {
compileAllowlist,
normalizeStringEntries,
resolveAllowlistMatchByCandidates,
resolveAllowlistCandidates,
type AllowlistMatch,
} from "openclaw/plugin-sdk/matrix";
@@ -75,11 +76,11 @@ export function resolveMatrixAllowListMatch(params: {
allowList: string[];
userId?: string;
}): MatrixAllowListMatch {
const allowList = params.allowList;
if (allowList.length === 0) {
const compiledAllowList = compileAllowlist(params.allowList);
if (compiledAllowList.set.size === 0) {
return { allowed: false };
}
if (allowList.includes("*")) {
if (compiledAllowList.wildcard) {
return { allowed: true, matchKey: "*", matchSource: "wildcard" };
}
const userId = normalizeMatrixUser(params.userId);
@@ -88,7 +89,10 @@ export function resolveMatrixAllowListMatch(params: {
{ value: userId ? `matrix:${userId}` : "", source: "prefixed-id" },
{ value: userId ? `user:${userId}` : "", source: "prefixed-user" },
];
return resolveAllowlistMatchByCandidates({ allowList, candidates });
return resolveAllowlistCandidates({
compiledAllowlist: compiledAllowList,
candidates,
});
}
export function resolveMatrixAllowListMatches(params: { allowList: string[]; userId?: string }) {
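The compiled-allowlist helpers used above (`compileAllowlist`, `resolveAllowlistCandidates`) come from the plugin SDK and are not part of this diff. A sketch of the idea, with normalization details (trimming, lowercasing) as assumptions: compiling once turns every candidate probe into a `Set` lookup instead of a rescan of the raw array.

```typescript
// Hypothetical sketch of the compiled-allowlist helpers; the SDK versions and
// their exact normalization rules are not shown in this diff.
type Candidate = { value: string; source: string };

function compileAllowlist(entries: string[]) {
  const set = new Set(entries.map((entry) => entry.trim().toLowerCase()).filter(Boolean));
  return { set, wildcard: set.has("*") };
}

function resolveAllowlistCandidates(params: {
  compiledAllowlist: ReturnType<typeof compileAllowlist>;
  candidates: Candidate[];
}): { allowed: boolean; matchKey?: string; matchSource?: string } {
  for (const candidate of params.candidates) {
    const key = candidate.value.trim().toLowerCase();
    if (key && params.compiledAllowlist.set.has(key)) {
      return { allowed: true, matchKey: key, matchSource: candidate.source };
    }
  }
  return { allowed: false };
}

// Usage: the first candidate form that appears in the allowlist wins.
const compiled = compileAllowlist(["@Alice:example.org"]);
const match = resolveAllowlistCandidates({
  compiledAllowlist: compiled,
  candidates: [
    { value: "@alice:example.org", source: "plain-id" },
    { value: "matrix:@alice:example.org", source: "prefixed-id" },
  ],
});
```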

View File

@@ -9,6 +9,7 @@ import {
applySetupAccountConfigPatch,
buildComputedAccountStatusSnapshot,
buildChannelConfigSchema,
createAccountStatusSink,
DEFAULT_ACCOUNT_ID,
deleteAccountFromConfigSection,
migrateBaseNameToDefaultAccount,
@@ -500,8 +501,11 @@ export const mattermostPlugin: ChannelPlugin<ResolvedMattermostAccount> = {
gateway: {
startAccount: async (ctx) => {
const account = ctx.account;
ctx.setStatus({
accountId: account.accountId,
const statusSink = createAccountStatusSink({
accountId: ctx.accountId,
setStatus: ctx.setStatus,
});
statusSink({
baseUrl: account.baseUrl,
botTokenSource: account.botTokenSource,
});
@@ -513,7 +517,7 @@ export const mattermostPlugin: ChannelPlugin<ResolvedMattermostAccount> = {
config: ctx.cfg,
runtime: ctx.runtime,
abortSignal: ctx.abortSignal,
statusSink: (patch) => ctx.setStatus({ accountId: ctx.accountId, ...patch }),
statusSink,
});
},
},

View File

@@ -0,0 +1,30 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { resolveNextcloudTalkAccount } from "./accounts.js";
import type { CoreConfig } from "./types.js";
describe("resolveNextcloudTalkAccount", () => {
it.runIf(process.platform !== "win32")("rejects symlinked botSecretFile paths", () => {
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-nextcloud-talk-"));
const secretFile = path.join(dir, "secret.txt");
const secretLink = path.join(dir, "secret-link.txt");
fs.writeFileSync(secretFile, "bot-secret\n", "utf8");
fs.symlinkSync(secretFile, secretLink);
const cfg = {
channels: {
"nextcloud-talk": {
baseUrl: "https://cloud.example.com",
botSecretFile: secretLink,
},
},
} as CoreConfig;
const account = resolveNextcloudTalkAccount({ cfg });
expect(account.secret).toBe("");
expect(account.secretSource).toBe("none");
fs.rmSync(dir, { recursive: true, force: true });
});
});

View File

@@ -1,4 +1,4 @@
import { readFileSync } from "node:fs";
import { tryReadSecretFileSync } from "openclaw/plugin-sdk/core";
import {
createAccountListHelpers,
DEFAULT_ACCOUNT_ID,
@@ -88,13 +88,13 @@ function resolveNextcloudTalkSecret(
}
if (merged.botSecretFile) {
try {
const fileSecret = readFileSync(merged.botSecretFile, "utf-8").trim();
if (fileSecret) {
return { secret: fileSecret, source: "secretFile" };
}
} catch {
// File not found or unreadable, fall through.
const fileSecret = tryReadSecretFileSync(
merged.botSecretFile,
"Nextcloud Talk bot secret file",
{ rejectSymlink: true },
);
if (fileSecret) {
return { secret: fileSecret, source: "secretFile" };
}
}

View File

@@ -2,8 +2,10 @@ import {
buildAccountScopedDmSecurityPolicy,
collectAllowlistProviderGroupPolicyWarnings,
collectOpenGroupPolicyRouteAllowlistWarnings,
createAccountStatusSink,
formatAllowFromLowercase,
mapAllowFromEntries,
runPassiveAccountLifecycle,
} from "openclaw/plugin-sdk/compat";
import {
applyAccountNameToChannelSection,
@@ -15,7 +17,6 @@ import {
deleteAccountFromConfigSection,
normalizeAccountId,
setAccountEnabledInConfigSection,
waitForAbortSignal,
type ChannelPlugin,
type OpenClawConfig,
type ChannelSetupInput,
@@ -338,17 +339,25 @@ export const nextcloudTalkPlugin: ChannelPlugin<ResolvedNextcloudTalkAccount> =
ctx.log?.info(`[${account.accountId}] starting Nextcloud Talk webhook server`);
const { stop } = await monitorNextcloudTalkProvider({
accountId: account.accountId,
config: ctx.cfg as CoreConfig,
runtime: ctx.runtime,
abortSignal: ctx.abortSignal,
statusSink: (patch) => ctx.setStatus({ accountId: ctx.accountId, ...patch }),
const statusSink = createAccountStatusSink({
accountId: ctx.accountId,
setStatus: ctx.setStatus,
});
// Keep webhook channels pending for the account lifecycle.
await waitForAbortSignal(ctx.abortSignal);
stop();
await runPassiveAccountLifecycle({
abortSignal: ctx.abortSignal,
start: async () =>
await monitorNextcloudTalkProvider({
accountId: account.accountId,
config: ctx.cfg as CoreConfig,
runtime: ctx.runtime,
abortSignal: ctx.abortSignal,
statusSink,
}),
stop: async (monitor) => {
monitor.stop();
},
});
},
logoutAccount: async ({ accountId, cfg }) => {
const nextCfg = { ...cfg } as OpenClawConfig;

View File

@@ -1,15 +1,14 @@
import {
buildSingleChannelSecretPromptState,
formatDocsLink,
hasConfiguredSecretInput,
mapAllowFromEntries,
mergeAllowFromEntries,
promptSingleChannelSecretInput,
patchScopedAccountConfig,
runSingleChannelSecretStep,
resolveAccountIdForConfigure,
DEFAULT_ACCOUNT_ID,
normalizeAccountId,
setTopLevelChannelDmPolicyWithAllowFrom,
type SecretInput,
type ChannelOnboardingAdapter,
type ChannelOnboardingDmPolicy,
type OpenClawConfig,
@@ -39,38 +38,12 @@ function setNextcloudTalkAccountConfig(
accountId: string,
updates: Record<string, unknown>,
): CoreConfig {
if (accountId === DEFAULT_ACCOUNT_ID) {
return {
...cfg,
channels: {
...cfg.channels,
"nextcloud-talk": {
...cfg.channels?.["nextcloud-talk"],
enabled: true,
...updates,
},
},
};
}
return {
...cfg,
channels: {
...cfg.channels,
"nextcloud-talk": {
...cfg.channels?.["nextcloud-talk"],
enabled: true,
accounts: {
...cfg.channels?.["nextcloud-talk"]?.accounts,
[accountId]: {
...cfg.channels?.["nextcloud-talk"]?.accounts?.[accountId],
enabled: cfg.channels?.["nextcloud-talk"]?.accounts?.[accountId]?.enabled ?? true,
...updates,
},
},
},
},
};
return patchScopedAccountConfig({
cfg,
channelKey: channel,
accountId,
patch: updates,
}) as CoreConfig;
}
async function noteNextcloudTalkSecretHelp(prompter: WizardPrompter): Promise<void> {
@@ -215,12 +188,6 @@ export const nextcloudTalkOnboardingAdapter: ChannelOnboardingAdapter = {
hasConfiguredSecretInput(resolvedAccount.config.botSecret) ||
resolvedAccount.config.botSecretFile,
);
const secretPromptState = buildSingleChannelSecretPromptState({
accountConfigured,
hasConfigToken: hasConfigSecret,
allowEnv,
envValue: process.env.NEXTCLOUD_TALK_BOT_SECRET,
});
let baseUrl = resolvedAccount.baseUrl;
if (!baseUrl) {
@@ -241,32 +208,35 @@ export const nextcloudTalkOnboardingAdapter: ChannelOnboardingAdapter = {
).trim();
}
let secret: SecretInput | null = null;
if (!accountConfigured) {
await noteNextcloudTalkSecretHelp(prompter);
}
const secretResult = await promptSingleChannelSecretInput({
const secretStep = await runSingleChannelSecretStep({
cfg: next,
prompter,
providerHint: "nextcloud-talk",
credentialLabel: "bot secret",
accountConfigured: secretPromptState.accountConfigured,
canUseEnv: secretPromptState.canUseEnv,
hasConfigToken: secretPromptState.hasConfigToken,
accountConfigured,
hasConfigToken: hasConfigSecret,
allowEnv,
envValue: process.env.NEXTCLOUD_TALK_BOT_SECRET,
envPrompt: "NEXTCLOUD_TALK_BOT_SECRET detected. Use env var?",
keepPrompt: "Nextcloud Talk bot secret already configured. Keep it?",
inputPrompt: "Enter Nextcloud Talk bot secret",
preferredEnvVar: "NEXTCLOUD_TALK_BOT_SECRET",
onMissingConfigured: async () => await noteNextcloudTalkSecretHelp(prompter),
applyUseEnv: async (cfg) =>
setNextcloudTalkAccountConfig(cfg as CoreConfig, accountId, {
baseUrl,
}),
applySet: async (cfg, value) =>
setNextcloudTalkAccountConfig(cfg as CoreConfig, accountId, {
baseUrl,
botSecret: value,
}),
});
if (secretResult.action === "set") {
secret = secretResult.value;
}
next = secretStep.cfg as CoreConfig;
if (secretResult.action === "use-env" || secret || baseUrl !== resolvedAccount.baseUrl) {
if (secretStep.action === "keep" && baseUrl !== resolvedAccount.baseUrl) {
next = setNextcloudTalkAccountConfig(next, accountId, {
baseUrl,
...(secret ? { botSecret: secret } : {}),
});
}
@@ -287,26 +257,28 @@ export const nextcloudTalkOnboardingAdapter: ChannelOnboardingAdapter = {
validate: (value) => (String(value ?? "").trim() ? undefined : "Required"),
}),
).trim();
const apiPasswordResult = await promptSingleChannelSecretInput({
const apiPasswordStep = await runSingleChannelSecretStep({
cfg: next,
prompter,
providerHint: "nextcloud-talk-api",
credentialLabel: "API password",
...buildSingleChannelSecretPromptState({
accountConfigured: Boolean(existingApiUser && existingApiPasswordConfigured),
hasConfigToken: existingApiPasswordConfigured,
allowEnv: false,
}),
accountConfigured: Boolean(existingApiUser && existingApiPasswordConfigured),
hasConfigToken: existingApiPasswordConfigured,
allowEnv: false,
envPrompt: "",
keepPrompt: "Nextcloud Talk API password already configured. Keep it?",
inputPrompt: "Enter Nextcloud Talk API password",
preferredEnvVar: "NEXTCLOUD_TALK_API_PASSWORD",
applySet: async (cfg, value) =>
setNextcloudTalkAccountConfig(cfg as CoreConfig, accountId, {
apiUser,
apiPassword: value,
}),
});
const apiPassword = apiPasswordResult.action === "set" ? apiPasswordResult.value : undefined;
next = setNextcloudTalkAccountConfig(next, accountId, {
apiUser,
...(apiPassword ? { apiPassword } : {}),
});
next =
apiPasswordStep.action === "keep"
? setNextcloudTalkAccountConfig(next, accountId, { apiUser })
: (apiPasswordStep.cfg as CoreConfig);
}
if (forceAllowFrom) {

View File

@@ -1,8 +1,7 @@
import { AllowFromListSchema, DmPolicySchema } from "openclaw/plugin-sdk/compat";
import { MarkdownConfigSchema, buildChannelConfigSchema } from "openclaw/plugin-sdk/nostr";
import { z } from "zod";
const allowFromEntry = z.union([z.string(), z.number()]);
/**
* Validates https:// URLs only (no javascript:, data:, file:, etc.)
*/
@@ -76,10 +75,10 @@ export const NostrConfigSchema = z.object({
relays: z.array(z.string()).optional(),
/** DM access policy: pairing, allowlist, open, or disabled */
dmPolicy: z.enum(["pairing", "allowlist", "open", "disabled"]).optional(),
dmPolicy: DmPolicySchema.optional(),
/** Allowed sender pubkeys (npub or hex format) */
allowFrom: z.array(allowFromEntry).optional(),
allowFrom: AllowFromListSchema,
/** Profile metadata (NIP-01 kind:0 content) */
profile: NostrProfileSchema.optional(),

View File

@@ -313,6 +313,68 @@ describe("telegramPlugin duplicate token guard", () => {
expect(result).toMatchObject({ channel: "telegram", messageId: "tg-2" });
});
it("sends outbound payload media lists and keeps buttons on the first message only", async () => {
const sendMessageTelegram = vi
.fn()
.mockResolvedValueOnce({ messageId: "tg-3", chatId: "12345" })
.mockResolvedValueOnce({ messageId: "tg-4", chatId: "12345" });
setTelegramRuntime({
channel: {
telegram: {
sendMessageTelegram,
},
},
} as unknown as PluginRuntime);
const result = await telegramPlugin.outbound!.sendPayload!({
cfg: createCfg(),
to: "12345",
text: "",
payload: {
text: "Approval required",
mediaUrls: ["https://example.com/1.jpg", "https://example.com/2.jpg"],
channelData: {
telegram: {
quoteText: "quoted",
buttons: [[{ text: "Allow Once", callback_data: "/approve abc allow-once" }]],
},
},
},
mediaLocalRoots: ["/tmp/media"],
accountId: "ops",
silent: true,
});
expect(sendMessageTelegram).toHaveBeenCalledTimes(2);
expect(sendMessageTelegram).toHaveBeenNthCalledWith(
1,
"12345",
"Approval required",
expect.objectContaining({
mediaUrl: "https://example.com/1.jpg",
mediaLocalRoots: ["/tmp/media"],
quoteText: "quoted",
silent: true,
buttons: [[{ text: "Allow Once", callback_data: "/approve abc allow-once" }]],
}),
);
expect(sendMessageTelegram).toHaveBeenNthCalledWith(
2,
"12345",
"",
expect.objectContaining({
mediaUrl: "https://example.com/2.jpg",
mediaLocalRoots: ["/tmp/media"],
quoteText: "quoted",
silent: true,
}),
);
expect(
(sendMessageTelegram.mock.calls[1]?.[2] as Record<string, unknown>)?.buttons,
).toBeUndefined();
expect(result).toMatchObject({ channel: "telegram", messageId: "tg-4" });
});
it("ignores accounts with missing tokens during duplicate-token checks", async () => {
const cfg = createCfg();
cfg.channels!.telegram!.accounts!.ops = {} as never;

View File

@@ -1,9 +1,9 @@
import { createScopedChannelConfigBase } from "openclaw/plugin-sdk/compat";
import {
collectAllowlistProviderGroupPolicyWarnings,
buildAccountScopedDmSecurityPolicy,
collectOpenGroupPolicyRouteAllowlistWarnings,
createScopedAccountConfigAccessors,
createScopedDmSecurityResolver,
formatAllowFromLowercase,
} from "openclaw/plugin-sdk/compat";
import {
@@ -31,6 +31,7 @@ import {
resolveTelegramAccount,
resolveTelegramGroupRequireMention,
resolveTelegramGroupToolPolicy,
sendTelegramPayloadMessages,
telegramOnboardingAdapter,
TelegramConfigSchema,
type ChannelMessageActionAdapter,
@@ -91,10 +92,6 @@ const telegramMessageActions: ChannelMessageActionAdapter = {
},
};
type TelegramInlineButtons = ReadonlyArray<
ReadonlyArray<{ text: string; callback_data: string; style?: "danger" | "success" | "primary" }>
>;
const telegramConfigAccessors = createScopedAccountConfigAccessors({
resolveAccount: ({ cfg, accountId }) => resolveTelegramAccount({ cfg, accountId }),
resolveAllowFrom: (account: ResolvedTelegramAccount) => account.config.allowFrom,
@@ -112,6 +109,14 @@ const telegramConfigBase = createScopedChannelConfigBase<ResolvedTelegramAccount
clearBaseFields: ["botToken", "tokenFile", "name"],
});
const resolveTelegramDmPolicy = createScopedDmSecurityResolver<ResolvedTelegramAccount>({
channelKey: "telegram",
resolvePolicy: (account) => account.config.dmPolicy,
resolveAllowFrom: (account) => account.config.allowFrom,
policyPathSuffix: "dmPolicy",
normalizeEntry: (raw) => raw.replace(/^(telegram|tg):/i, ""),
});
export const telegramPlugin: ChannelPlugin<ResolvedTelegramAccount, TelegramProbe> = {
id: "telegram",
meta: {
@@ -180,18 +185,7 @@ export const telegramPlugin: ChannelPlugin<ResolvedTelegramAccount, TelegramProb
...telegramConfigAccessors,
},
security: {
resolveDmPolicy: ({ cfg, accountId, account }) => {
return buildAccountScopedDmSecurityPolicy({
cfg,
channelKey: "telegram",
accountId,
fallbackAccountId: account.accountId ?? DEFAULT_ACCOUNT_ID,
policy: account.config.dmPolicy,
allowFrom: account.config.allowFrom ?? [],
policyPathSuffix: "dmPolicy",
normalizeEntry: (raw) => raw.replace(/^(telegram|tg):/i, ""),
});
},
resolveDmPolicy: resolveTelegramDmPolicy,
collectWarnings: ({ account, cfg }) => {
const groupAllowlistConfigured =
account.config.groups && Object.keys(account.config.groups).length > 0;
@@ -335,47 +329,21 @@ export const telegramPlugin: ChannelPlugin<ResolvedTelegramAccount, TelegramProb
const send = deps?.sendTelegram ?? getTelegramRuntime().channel.telegram.sendMessageTelegram;
const replyToMessageId = parseTelegramReplyToMessageId(replyToId);
const messageThreadId = parseTelegramThreadId(threadId);
const telegramData = payload.channelData?.telegram as
| { buttons?: TelegramInlineButtons; quoteText?: string }
| undefined;
const quoteText =
typeof telegramData?.quoteText === "string" ? telegramData.quoteText : undefined;
const text = payload.text ?? "";
const mediaUrls = payload.mediaUrls?.length
? payload.mediaUrls
: payload.mediaUrl
? [payload.mediaUrl]
: [];
const baseOpts = {
verbose: false,
cfg,
mediaLocalRoots,
messageThreadId,
replyToMessageId,
quoteText,
accountId: accountId ?? undefined,
silent: silent ?? undefined,
};
if (mediaUrls.length === 0) {
const result = await send(to, text, {
...baseOpts,
buttons: telegramData?.buttons,
});
return { channel: "telegram", ...result };
}
let finalResult: Awaited<ReturnType<typeof send>> | undefined;
for (let i = 0; i < mediaUrls.length; i += 1) {
const mediaUrl = mediaUrls[i];
const isFirst = i === 0;
finalResult = await send(to, isFirst ? text : "", {
...baseOpts,
mediaUrl,
...(isFirst ? { buttons: telegramData?.buttons } : {}),
});
}
return { channel: "telegram", ...(finalResult ?? { messageId: "unknown", chatId: to }) };
const result = await sendTelegramPayloadMessages({
send,
to,
payload,
baseOpts: {
verbose: false,
cfg,
mediaLocalRoots,
messageThreadId,
replyToMessageId,
accountId: accountId ?? undefined,
silent: silent ?? undefined,
},
});
return { channel: "telegram", ...result };
},
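The inline loop removed above now lives in `sendTelegramPayloadMessages`. A hedged sketch of the fan-out it performs, assuming a simplified send signature (the real SDK helper takes more options, e.g. quote-text resolution):

```typescript
// Hedged sketch (simplified signature; the real helper takes more options):
// one send per media URL, with text and inline buttons on the first message only.
type SendFn = (
  to: string,
  text: string,
  opts: Record<string, unknown>,
) => Promise<{ messageId: string; chatId: string }>;

async function sendPayloadMessagesSketch(params: {
  send: SendFn;
  to: string;
  text: string;
  mediaUrls: string[];
  buttons?: unknown;
  baseOpts: Record<string, unknown>;
}): Promise<{ messageId: string; chatId: string }> {
  const { send, to, text, mediaUrls, buttons, baseOpts } = params;
  if (mediaUrls.length === 0) {
    // No media: a single message carries both text and buttons.
    return send(to, text, { ...baseOpts, buttons });
  }
  let last: { messageId: string; chatId: string } | undefined;
  for (let i = 0; i < mediaUrls.length; i += 1) {
    const isFirst = i === 0;
    last = await send(to, isFirst ? text : "", {
      ...baseOpts,
      mediaUrl: mediaUrls[i],
      ...(isFirst ? { buttons } : {}), // buttons ride on the first message only
    });
  }
  return last ?? { messageId: "unknown", chatId: to };
}
```

The Telegram test added earlier in this diff asserts exactly this shape: two sends for two media URLs, with buttons present only on the first.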
sendText: async ({ cfg, to, text, accountId, deps, replyToId, threadId, silent }) => {
const send = deps?.sendTelegram ?? getTelegramRuntime().channel.telegram.sendMessageTelegram;

View File

@@ -1,6 +1,7 @@
import type { OpenClawConfig } from "openclaw/plugin-sdk/tlon";
import {
formatDocsLink,
patchScopedAccountConfig,
resolveAccountIdForConfigure,
DEFAULT_ACCOUNT_ID,
type ChannelOnboardingAdapter,
@@ -32,46 +33,30 @@ function applyAccountConfig(params: {
};
}): OpenClawConfig {
const { cfg, accountId, input } = params;
const useDefault = accountId === DEFAULT_ACCOUNT_ID;
const base = cfg.channels?.tlon ?? {};
const nextValues = {
enabled: true,
...(input.name ? { name: input.name } : {}),
...buildTlonAccountFields(input),
};
if (useDefault) {
return {
...cfg,
channels: {
...cfg.channels,
tlon: {
...base,
...nextValues,
},
},
};
if (accountId === DEFAULT_ACCOUNT_ID) {
return patchScopedAccountConfig({
cfg,
channelKey: channel,
accountId,
patch: nextValues,
ensureChannelEnabled: false,
ensureAccountEnabled: false,
});
}
return {
...cfg,
channels: {
...cfg.channels,
tlon: {
...base,
enabled: base.enabled ?? true,
accounts: {
...(base as { accounts?: Record<string, unknown> }).accounts,
[accountId]: {
...(base as { accounts?: Record<string, Record<string, unknown>> }).accounts?.[
accountId
],
...nextValues,
},
},
},
},
};
return patchScopedAccountConfig({
cfg,
channelKey: channel,
accountId,
patch: { enabled: cfg.channels?.tlon?.enabled ?? true },
accountPatch: nextValues,
ensureChannelEnabled: false,
ensureAccountEnabled: false,
});
}
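The hand-rolled spread pyramids above are replaced by `patchScopedAccountConfig`. A hedged sketch of the core behavior this refactor assumes (the real SDK helper takes more options, e.g. `ensureChannelEnabled` and a separate `accountPatch`): default-account patches merge into the channel object itself, while named-account patches merge under `channels.<key>.accounts.<id>`.

```typescript
// Hedged sketch of scoped account-config patching (name and shape assumed).
type AnyRecord = Record<string, unknown>;

function patchScopedAccountConfigSketch(params: {
  cfg: { channels?: Record<string, AnyRecord> };
  channelKey: string;
  accountId: string;
  defaultAccountId: string;
  patch: AnyRecord;
}): { channels?: Record<string, AnyRecord> } {
  const { cfg, channelKey, accountId, defaultAccountId, patch } = params;
  const base = cfg.channels?.[channelKey] ?? {};
  if (accountId === defaultAccountId) {
    // Default account: patch lands directly on the channel config.
    return { ...cfg, channels: { ...cfg.channels, [channelKey]: { ...base, ...patch } } };
  }
  // Named account: patch lands under the per-account map.
  const accounts = (base.accounts ?? {}) as Record<string, AnyRecord>;
  return {
    ...cfg,
    channels: {
      ...cfg.channels,
      [channelKey]: {
        ...base,
        accounts: { ...accounts, [accountId]: { ...accounts[accountId], ...patch } },
      },
    },
  };
}
```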
async function noteTlonHelp(prompter: WizardPrompter): Promise<void> {

View File

@@ -1,8 +1,9 @@
import {
buildAccountScopedDmSecurityPolicy,
collectOpenProviderGroupPolicyWarnings,
buildOpenGroupPolicyRestrictSendersWarning,
buildOpenGroupPolicyWarning,
collectOpenProviderGroupPolicyWarnings,
createAccountStatusSink,
mapAllowFromEntries,
} from "openclaw/plugin-sdk/compat";
import type {
@@ -357,6 +358,10 @@ export const zaloPlugin: ChannelPlugin<ResolvedZaloAccount> = {
`[${account.accountId}] Zalo probe threw before provider start: ${err instanceof Error ? (err.stack ?? err.message) : String(err)}`,
);
}
const statusSink = createAccountStatusSink({
accountId: ctx.accountId,
setStatus: ctx.setStatus,
});
ctx.log?.info(`[${account.accountId}] starting provider${zaloBotLabel} mode=${mode}`);
const { monitorZaloProvider } = await import("./monitor.js");
return monitorZaloProvider({
@@ -370,7 +375,7 @@ export const zaloPlugin: ChannelPlugin<ResolvedZaloAccount> = {
webhookSecret: normalizeSecretInputString(account.config.webhookSecret),
webhookPath: account.config.webhookPath,
fetcher,
statusSink: (patch) => ctx.setStatus({ accountId: ctx.accountId, ...patch }),
statusSink,
});
},
},
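The inline `(patch) => ctx.setStatus({ accountId, ...patch })` closures are replaced by `createAccountStatusSink`. A hedged sketch of the factory pattern this assumes (parameter names taken from the call sites above): binding the account id once lets every monitor call receive a ready-made sink.

```typescript
// Hedged sketch of the status-sink factory (shape inferred from call sites):
// pre-bind the account id so each status patch is automatically scoped.
type StatusPatch = Record<string, unknown>;

function createAccountStatusSinkSketch(params: {
  accountId: string;
  setStatus: (update: { accountId: string } & StatusPatch) => void;
}): (patch: StatusPatch) => void {
  const { accountId, setStatus } = params;
  return (patch) => setStatus({ accountId, ...patch });
}
```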

View File

@@ -1,6 +1,8 @@
import {
AllowFromEntrySchema,
AllowFromListSchema,
buildCatchallMultiAccountChannelSchema,
DmPolicySchema,
GroupPolicySchema,
} from "openclaw/plugin-sdk/compat";
import { MarkdownConfigSchema } from "openclaw/plugin-sdk/zalo";
import { z } from "zod";
@@ -15,10 +17,10 @@ const zaloAccountSchema = z.object({
webhookUrl: z.string().optional(),
webhookSecret: buildSecretInputSchema().optional(),
webhookPath: z.string().optional(),
dmPolicy: z.enum(["pairing", "allowlist", "open", "disabled"]).optional(),
allowFrom: z.array(AllowFromEntrySchema).optional(),
groupPolicy: z.enum(["disabled", "allowlist", "open"]).optional(),
groupAllowFrom: z.array(AllowFromEntrySchema).optional(),
dmPolicy: DmPolicySchema.optional(),
allowFrom: AllowFromListSchema,
groupPolicy: GroupPolicySchema.optional(),
groupAllowFrom: AllowFromListSchema,
mediaMaxMb: z.number().optional(),
proxy: z.string().optional(),
responsePrefix: z.string().optional(),

View File

@@ -12,6 +12,7 @@ import {
mergeAllowFromEntries,
normalizeAccountId,
promptSingleChannelSecretInput,
runSingleChannelSecretStep,
resolveAccountIdForConfigure,
setTopLevelChannelDmPolicyWithAllowFrom,
} from "openclaw/plugin-sdk/zalo";
@@ -255,80 +256,66 @@ export const zaloOnboardingAdapter: ChannelOnboardingAdapter = {
const hasConfigToken = Boolean(
hasConfiguredSecretInput(resolvedAccount.config.botToken) || resolvedAccount.config.tokenFile,
);
const tokenPromptState = buildSingleChannelSecretPromptState({
accountConfigured,
hasConfigToken,
allowEnv,
envValue: process.env.ZALO_BOT_TOKEN,
});
let token: SecretInput | null = null;
if (!accountConfigured) {
await noteZaloTokenHelp(prompter);
}
const tokenResult = await promptSingleChannelSecretInput({
const tokenStep = await runSingleChannelSecretStep({
cfg: next,
prompter,
providerHint: "zalo",
credentialLabel: "bot token",
accountConfigured: tokenPromptState.accountConfigured,
canUseEnv: tokenPromptState.canUseEnv,
hasConfigToken: tokenPromptState.hasConfigToken,
accountConfigured,
hasConfigToken,
allowEnv,
envValue: process.env.ZALO_BOT_TOKEN,
envPrompt: "ZALO_BOT_TOKEN detected. Use env var?",
keepPrompt: "Zalo token already configured. Keep it?",
inputPrompt: "Enter Zalo bot token",
preferredEnvVar: "ZALO_BOT_TOKEN",
});
if (tokenResult.action === "set") {
token = tokenResult.value;
}
if (tokenResult.action === "use-env" && zaloAccountId === DEFAULT_ACCOUNT_ID) {
next = {
...next,
channels: {
...next.channels,
zalo: {
...next.channels?.zalo,
enabled: true,
},
},
} as OpenClawConfig;
}
if (token) {
if (zaloAccountId === DEFAULT_ACCOUNT_ID) {
next = {
...next,
channels: {
...next.channels,
zalo: {
...next.channels?.zalo,
enabled: true,
botToken: token,
},
},
} as OpenClawConfig;
} else {
next = {
...next,
channels: {
...next.channels,
zalo: {
...next.channels?.zalo,
enabled: true,
accounts: {
...next.channels?.zalo?.accounts,
[zaloAccountId]: {
...next.channels?.zalo?.accounts?.[zaloAccountId],
onMissingConfigured: async () => await noteZaloTokenHelp(prompter),
applyUseEnv: async (cfg) =>
zaloAccountId === DEFAULT_ACCOUNT_ID
? ({
...cfg,
channels: {
...cfg.channels,
zalo: {
...cfg.channels?.zalo,
enabled: true,
botToken: token,
},
},
},
},
} as OpenClawConfig;
}
}
} as OpenClawConfig)
: cfg,
applySet: async (cfg, value) =>
zaloAccountId === DEFAULT_ACCOUNT_ID
? ({
...cfg,
channels: {
...cfg.channels,
zalo: {
...cfg.channels?.zalo,
enabled: true,
botToken: value,
},
},
} as OpenClawConfig)
: ({
...cfg,
channels: {
...cfg.channels,
zalo: {
...cfg.channels?.zalo,
enabled: true,
accounts: {
...cfg.channels?.zalo?.accounts,
[zaloAccountId]: {
...cfg.channels?.zalo?.accounts?.[zaloAccountId],
enabled: true,
botToken: value,
},
},
},
},
} as OpenClawConfig),
});
next = tokenStep.cfg;
const wantsWebhook = await prompter.confirm({
message: "Use webhook mode for Zalo?",

View File

@@ -1,3 +1,6 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { resolveZaloToken } from "./token.js";
import type { ZaloConfig } from "./types.js";
@@ -55,4 +58,20 @@ describe("resolveZaloToken", () => {
expect(res.token).toBe("work-token");
expect(res.source).toBe("config");
});
it.runIf(process.platform !== "win32")("rejects symlinked token files", () => {
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-zalo-token-"));
const tokenFile = path.join(dir, "token.txt");
const tokenLink = path.join(dir, "token-link.txt");
fs.writeFileSync(tokenFile, "file-token\n", "utf8");
fs.symlinkSync(tokenFile, tokenLink);
const cfg = {
tokenFile: tokenLink,
} as ZaloConfig;
const res = resolveZaloToken(cfg);
expect(res.token).toBe("");
expect(res.source).toBe("none");
fs.rmSync(dir, { recursive: true, force: true });
});
});

View File

@@ -1,5 +1,5 @@
import { readFileSync } from "node:fs";
import { DEFAULT_ACCOUNT_ID, normalizeAccountId } from "openclaw/plugin-sdk/account-id";
import { tryReadSecretFileSync } from "openclaw/plugin-sdk/core";
import type { BaseTokenResolution } from "openclaw/plugin-sdk/zalo";
import { normalizeResolvedSecretInputString, normalizeSecretInputString } from "./secret-input.js";
import type { ZaloConfig } from "./types.js";
@@ -9,16 +9,7 @@ export type ZaloTokenResolution = BaseTokenResolution & {
};
function readTokenFromFile(tokenFile: string | undefined): string {
const trimmedPath = tokenFile?.trim();
if (!trimmedPath) {
return "";
}
try {
return readFileSync(trimmedPath, "utf8").trim();
} catch {
// ignore read failures
return "";
}
return tryReadSecretFileSync(tokenFile, "Zalo token file", { rejectSymlink: true }) ?? "";
}
export function resolveZaloToken(

View File

@@ -1,5 +1,6 @@
import {
buildAccountScopedDmSecurityPolicy,
createAccountStatusSink,
mapAllowFromEntries,
} from "openclaw/plugin-sdk/compat";
import type {
@@ -682,6 +683,10 @@ export const zalouserPlugin: ChannelPlugin<ResolvedZalouserAccount> = {
} catch {
// ignore probe errors
}
const statusSink = createAccountStatusSink({
accountId: ctx.accountId,
setStatus: ctx.setStatus,
});
ctx.log?.info(`[${account.accountId}] starting zalouser provider${userLabel}`);
const { monitorZalouserProvider } = await import("./monitor.js");
return monitorZalouserProvider({
@@ -689,7 +694,7 @@ export const zalouserPlugin: ChannelPlugin<ResolvedZalouserAccount> = {
config: ctx.cfg,
runtime: ctx.runtime,
abortSignal: ctx.abortSignal,
statusSink: (patch) => ctx.setStatus({ accountId: ctx.accountId, ...patch }),
statusSink,
});
},
loginWithQrStart: async (params) => {

View File

@@ -1,6 +1,8 @@
import {
AllowFromEntrySchema,
AllowFromListSchema,
buildCatchallMultiAccountChannelSchema,
DmPolicySchema,
GroupPolicySchema,
} from "openclaw/plugin-sdk/compat";
import { MarkdownConfigSchema, ToolPolicySchema } from "openclaw/plugin-sdk/zalouser";
import { z } from "zod";
@@ -17,11 +19,11 @@ const zalouserAccountSchema = z.object({
enabled: z.boolean().optional(),
markdown: MarkdownConfigSchema,
profile: z.string().optional(),
dmPolicy: z.enum(["pairing", "allowlist", "open", "disabled"]).optional(),
allowFrom: z.array(AllowFromEntrySchema).optional(),
dmPolicy: DmPolicySchema.optional(),
allowFrom: AllowFromListSchema,
historyLimit: z.number().int().min(0).optional(),
groupAllowFrom: z.array(AllowFromEntrySchema).optional(),
groupPolicy: z.enum(["disabled", "allowlist", "open"]).optional(),
groupAllowFrom: AllowFromListSchema,
groupPolicy: GroupPolicySchema.optional(),
groups: z.object({}).catchall(groupConfigSchema).optional(),
messagePrefix: z.string().optional(),
responsePrefix: z.string().optional(),

View File

@@ -9,6 +9,7 @@ import {
formatResolvedUnresolvedNote,
mergeAllowFromEntries,
normalizeAccountId,
patchScopedAccountConfig,
promptChannelAccessConfig,
resolveAccountIdForConfigure,
setTopLevelChannelDmPolicyWithAllowFrom,
@@ -36,37 +37,13 @@ function setZalouserAccountScopedConfig(
defaultPatch: Record<string, unknown>,
accountPatch: Record<string, unknown> = defaultPatch,
): OpenClawConfig {
if (accountId === DEFAULT_ACCOUNT_ID) {
return {
...cfg,
channels: {
...cfg.channels,
zalouser: {
...cfg.channels?.zalouser,
enabled: true,
...defaultPatch,
},
},
} as OpenClawConfig;
}
return {
...cfg,
channels: {
...cfg.channels,
zalouser: {
...cfg.channels?.zalouser,
enabled: true,
accounts: {
...cfg.channels?.zalouser?.accounts,
[accountId]: {
...cfg.channels?.zalouser?.accounts?.[accountId],
enabled: cfg.channels?.zalouser?.accounts?.[accountId]?.enabled ?? true,
...accountPatch,
},
},
},
},
} as OpenClawConfig;
return patchScopedAccountConfig({
cfg,
channelKey: channel,
accountId,
patch: defaultPatch,
accountPatch,
}) as OpenClawConfig;
}
function setZalouserDmPolicy(

View File

@@ -299,6 +299,7 @@
"start": "node scripts/run-node.mjs",
"test": "node scripts/test-parallel.mjs",
"test:all": "pnpm lint && pnpm build && pnpm test && pnpm test:e2e && pnpm test:live && pnpm test:docker:all",
"test:auth:compat": "vitest run --config vitest.gateway.config.ts src/gateway/server.auth.compat-baseline.test.ts src/gateway/client.test.ts src/gateway/reconnect-gating.test.ts src/gateway/protocol/connect-error-details.test.ts",
"test:channels": "vitest run --config vitest.channels.config.ts",
"test:coverage": "vitest run --config vitest.unit.config.ts --coverage",
"test:docker:all": "pnpm test:docker:live-models && pnpm test:docker:live-gateway && pnpm test:docker:onboard && pnpm test:docker:gateway-network && pnpm test:docker:qr && pnpm test:docker:doctor-switch && pnpm test:docker:plugins && pnpm test:docker:cleanup",

View File

@@ -4,9 +4,11 @@ import type { RequestPermissionRequest } from "@agentclientprotocol/sdk";
import { afterEach, describe, expect, it, vi } from "vitest";
import { createTrackedTempDirs } from "../test-utils/tracked-temp-dirs.js";
import {
buildAcpClientStripKeys,
resolveAcpClientSpawnEnv,
resolveAcpClientSpawnInvocation,
resolvePermissionRequest,
shouldStripProviderAuthEnvVarsForAcpServer,
} from "./client.js";
import { extractAttachmentsFromPrompt, extractTextFromPrompt } from "./event-mapper.js";
@@ -110,6 +112,120 @@ describe("resolveAcpClientSpawnEnv", () => {
expect(env.OPENCLAW_SHELL).toBe("acp-client");
expect(env.OPENAI_API_KEY).toBeUndefined();
});
it("strips provider auth env vars for the default OpenClaw bridge", () => {
const stripKeys = new Set(["OPENAI_API_KEY", "GITHUB_TOKEN", "HF_TOKEN"]);
const env = resolveAcpClientSpawnEnv(
{
OPENAI_API_KEY: "openai-secret", // pragma: allowlist secret
GITHUB_TOKEN: "gh-secret", // pragma: allowlist secret
HF_TOKEN: "hf-secret", // pragma: allowlist secret
OPENCLAW_API_KEY: "keep-me",
PATH: "/usr/bin",
},
{ stripKeys },
);
expect(env.OPENAI_API_KEY).toBeUndefined();
expect(env.GITHUB_TOKEN).toBeUndefined();
expect(env.HF_TOKEN).toBeUndefined();
expect(env.OPENCLAW_API_KEY).toBe("keep-me");
expect(env.PATH).toBe("/usr/bin");
expect(env.OPENCLAW_SHELL).toBe("acp-client");
});
it("strips provider auth env vars case-insensitively", () => {
const env = resolveAcpClientSpawnEnv(
{
OpenAI_Api_Key: "openai-secret", // pragma: allowlist secret
Github_Token: "gh-secret", // pragma: allowlist secret
OPENCLAW_API_KEY: "keep-me",
},
{ stripKeys: new Set(["OPENAI_API_KEY", "GITHUB_TOKEN"]) },
);
expect(env.OpenAI_Api_Key).toBeUndefined();
expect(env.Github_Token).toBeUndefined();
expect(env.OPENCLAW_API_KEY).toBe("keep-me");
expect(env.OPENCLAW_SHELL).toBe("acp-client");
});
it("preserves provider auth env vars for explicit custom ACP servers", () => {
const env = resolveAcpClientSpawnEnv({
OPENAI_API_KEY: "openai-secret", // pragma: allowlist secret
GITHUB_TOKEN: "gh-secret", // pragma: allowlist secret
HF_TOKEN: "hf-secret", // pragma: allowlist secret
OPENCLAW_API_KEY: "keep-me",
});
expect(env.OPENAI_API_KEY).toBe("openai-secret");
expect(env.GITHUB_TOKEN).toBe("gh-secret");
expect(env.HF_TOKEN).toBe("hf-secret");
expect(env.OPENCLAW_API_KEY).toBe("keep-me");
expect(env.OPENCLAW_SHELL).toBe("acp-client");
});
});
describe("shouldStripProviderAuthEnvVarsForAcpServer", () => {
it("strips provider auth env vars for the default bridge", () => {
expect(shouldStripProviderAuthEnvVarsForAcpServer()).toBe(true);
expect(
shouldStripProviderAuthEnvVarsForAcpServer({
serverCommand: "openclaw",
serverArgs: ["acp"],
defaultServerCommand: "openclaw",
defaultServerArgs: ["acp"],
}),
).toBe(true);
});
it("preserves provider auth env vars for explicit custom ACP servers", () => {
expect(
shouldStripProviderAuthEnvVarsForAcpServer({
serverCommand: "custom-acp-server",
serverArgs: ["serve"],
defaultServerCommand: "openclaw",
defaultServerArgs: ["acp"],
}),
).toBe(false);
});
it("preserves provider auth env vars when an explicit override uses the default executable with different args", () => {
expect(
shouldStripProviderAuthEnvVarsForAcpServer({
serverCommand: process.execPath,
serverArgs: ["custom-entry.js"],
defaultServerCommand: process.execPath,
defaultServerArgs: ["dist/entry.js", "acp"],
}),
).toBe(false);
});
});
describe("buildAcpClientStripKeys", () => {
it("always includes active skill env keys", () => {
const stripKeys = buildAcpClientStripKeys({
stripProviderAuthEnvVars: false,
activeSkillEnvKeys: ["SKILL_SECRET", "OPENAI_API_KEY"],
});
expect(stripKeys.has("SKILL_SECRET")).toBe(true);
expect(stripKeys.has("OPENAI_API_KEY")).toBe(true);
expect(stripKeys.has("GITHUB_TOKEN")).toBe(false);
});
it("adds provider auth env vars for the default bridge", () => {
const stripKeys = buildAcpClientStripKeys({
stripProviderAuthEnvVars: true,
activeSkillEnvKeys: ["SKILL_SECRET"],
});
expect(stripKeys.has("SKILL_SECRET")).toBe(true);
expect(stripKeys.has("OPENAI_API_KEY")).toBe(true);
expect(stripKeys.has("GITHUB_TOKEN")).toBe(true);
expect(stripKeys.has("HF_TOKEN")).toBe(true);
expect(stripKeys.has("OPENCLAW_API_KEY")).toBe(false);
});
});
describe("resolveAcpClientSpawnInvocation", () => {

View File

@@ -19,6 +19,10 @@ import {
materializeWindowsSpawnProgram,
resolveWindowsSpawnProgram,
} from "../plugin-sdk/windows-spawn.js";
import {
listKnownProviderAuthEnvVarNames,
omitEnvKeysCaseInsensitive,
} from "../secrets/provider-env-vars.js";
import { DANGEROUS_ACP_TOOLS } from "../security/dangerous-tools.js";
const SAFE_AUTO_APPROVE_TOOL_IDS = new Set(["read", "search", "web_search", "memory_search"]);
@@ -346,20 +350,56 @@ function buildServerArgs(opts: AcpClientOptions): string[] {
return args;
}
type AcpClientSpawnEnvOptions = {
stripKeys?: Iterable<string>;
};
export function resolveAcpClientSpawnEnv(
baseEnv: NodeJS.ProcessEnv = process.env,
options?: { stripKeys?: ReadonlySet<string> },
options: AcpClientSpawnEnvOptions = {},
): NodeJS.ProcessEnv {
const env: NodeJS.ProcessEnv = { ...baseEnv };
if (options?.stripKeys) {
for (const key of options.stripKeys) {
delete env[key];
}
}
const env = omitEnvKeysCaseInsensitive(baseEnv, options.stripKeys ?? []);
env.OPENCLAW_SHELL = "acp-client";
return env;
}
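`resolveAcpClientSpawnEnv` now delegates removal to `omitEnvKeysCaseInsensitive`. A hedged sketch of that helper's assumed behavior (the real one lives in `src/secrets/provider-env-vars.ts`), matching the test above where `OpenAI_Api_Key` is stripped by the key `OPENAI_API_KEY`:

```typescript
// Assumed-behavior sketch: drop env entries whose names match any strip key,
// comparing case-insensitively so "OpenAI_Api_Key" matches "OPENAI_API_KEY".
function omitEnvKeysCaseInsensitiveSketch(
  env: Record<string, string | undefined>,
  keys: Iterable<string>,
): Record<string, string | undefined> {
  const lowered = new Set(Array.from(keys, (key) => key.toLowerCase()));
  const out: Record<string, string | undefined> = {};
  for (const [name, value] of Object.entries(env)) {
    if (!lowered.has(name.toLowerCase())) {
      out[name] = value; // unrelated keys pass through unchanged
    }
  }
  return out;
}
```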
export function shouldStripProviderAuthEnvVarsForAcpServer(
params: {
serverCommand?: string;
serverArgs?: string[];
defaultServerCommand?: string;
defaultServerArgs?: string[];
} = {},
): boolean {
const serverCommand = params.serverCommand?.trim();
if (!serverCommand) {
return true;
}
const defaultServerCommand = params.defaultServerCommand?.trim();
if (!defaultServerCommand || serverCommand !== defaultServerCommand) {
return false;
}
const serverArgs = params.serverArgs ?? [];
const defaultServerArgs = params.defaultServerArgs ?? [];
return (
serverArgs.length === defaultServerArgs.length &&
serverArgs.every((arg, index) => arg === defaultServerArgs[index])
);
}
export function buildAcpClientStripKeys(params: {
stripProviderAuthEnvVars?: boolean;
activeSkillEnvKeys?: Iterable<string>;
}): Set<string> {
const stripKeys = new Set<string>(params.activeSkillEnvKeys ?? []);
if (params.stripProviderAuthEnvVars) {
for (const key of listKnownProviderAuthEnvVarNames()) {
stripKeys.add(key);
}
}
return stripKeys;
}
type AcpSpawnRuntime = {
platform: NodeJS.Platform;
env: NodeJS.ProcessEnv;
@@ -456,12 +496,22 @@ export async function createAcpClient(opts: AcpClientOptions = {}): Promise<AcpC
const serverArgs = buildServerArgs(opts);
const entryPath = resolveSelfEntryPath();
const serverCommand = opts.serverCommand ?? (entryPath ? process.execPath : "openclaw");
const effectiveArgs = opts.serverCommand || !entryPath ? serverArgs : [entryPath, ...serverArgs];
const defaultServerCommand = entryPath ? process.execPath : "openclaw";
const defaultServerArgs = entryPath ? [entryPath, ...serverArgs] : serverArgs;
const serverCommand = opts.serverCommand ?? defaultServerCommand;
const effectiveArgs = opts.serverCommand || !entryPath ? serverArgs : defaultServerArgs;
const { getActiveSkillEnvKeys } = await import("../agents/skills/env-overrides.runtime.js");
const spawnEnv = resolveAcpClientSpawnEnv(process.env, {
stripKeys: getActiveSkillEnvKeys(),
const stripProviderAuthEnvVars = shouldStripProviderAuthEnvVarsForAcpServer({
serverCommand,
serverArgs: effectiveArgs,
defaultServerCommand,
defaultServerArgs,
});
const stripKeys = buildAcpClientStripKeys({
stripProviderAuthEnvVars,
activeSkillEnvKeys: getActiveSkillEnvKeys(),
});
const spawnEnv = resolveAcpClientSpawnEnv(process.env, { stripKeys });
const spawnInvocation = resolveAcpClientSpawnInvocation(
{ serverCommand, serverArgs: effectiveArgs },
{

View File

@@ -1,54 +1,12 @@
import { mkdir, symlink, writeFile } from "node:fs/promises";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import { createTrackedTempDirs } from "../test-utils/tracked-temp-dirs.js";
import { describe, expect, it } from "vitest";
import { MAX_SECRET_FILE_BYTES, readSecretFromFile } from "./secret-file.js";
const tempDirs = createTrackedTempDirs();
const createTempDir = () => tempDirs.make("openclaw-secret-file-test-");
afterEach(async () => {
await tempDirs.cleanup();
});
describe("readSecretFromFile", () => {
it("reads and trims a regular secret file", async () => {
const dir = await createTempDir();
const file = path.join(dir, "secret.txt");
await writeFile(file, " top-secret \n", "utf8");
expect(readSecretFromFile(file, "Gateway password")).toBe("top-secret");
it("keeps the shared secret-file limit", () => {
expect(MAX_SECRET_FILE_BYTES).toBe(16 * 1024);
});
it("rejects files larger than the secret-file limit", async () => {
const dir = await createTempDir();
const file = path.join(dir, "secret.txt");
await writeFile(file, "x".repeat(MAX_SECRET_FILE_BYTES + 1), "utf8");
expect(() => readSecretFromFile(file, "Gateway password")).toThrow(
`Gateway password file at ${file} exceeds ${MAX_SECRET_FILE_BYTES} bytes.`,
);
});
it("rejects non-regular files", async () => {
const dir = await createTempDir();
const nestedDir = path.join(dir, "secret-dir");
await mkdir(nestedDir);
expect(() => readSecretFromFile(nestedDir, "Gateway password")).toThrow(
`Gateway password file at ${nestedDir} must be a regular file.`,
);
});
it("rejects symlinks", async () => {
const dir = await createTempDir();
const target = path.join(dir, "target.txt");
const link = path.join(dir, "secret-link.txt");
await writeFile(target, "top-secret\n", "utf8");
await symlink(target, link);
expect(() => readSecretFromFile(link, "Gateway password")).toThrow(
`Gateway password file at ${link} must not be a symlink.`,
);
it("exposes the hardened secret reader", () => {
expect(typeof readSecretFromFile).toBe("function");
});
});

View File

@@ -1,43 +1,10 @@
import fs from "node:fs";
import { resolveUserPath } from "../utils.js";
import { DEFAULT_SECRET_FILE_MAX_BYTES, readSecretFileSync } from "../infra/secret-file.js";
export const MAX_SECRET_FILE_BYTES = 16 * 1024;
export const MAX_SECRET_FILE_BYTES = DEFAULT_SECRET_FILE_MAX_BYTES;
export function readSecretFromFile(filePath: string, label: string): string {
const resolvedPath = resolveUserPath(filePath.trim());
if (!resolvedPath) {
throw new Error(`${label} file path is empty.`);
}
let stat: fs.Stats;
try {
stat = fs.lstatSync(resolvedPath);
} catch (err) {
throw new Error(`Failed to inspect ${label} file at ${resolvedPath}: ${String(err)}`, {
cause: err,
});
}
if (stat.isSymbolicLink()) {
throw new Error(`${label} file at ${resolvedPath} must not be a symlink.`);
}
if (!stat.isFile()) {
throw new Error(`${label} file at ${resolvedPath} must be a regular file.`);
}
if (stat.size > MAX_SECRET_FILE_BYTES) {
throw new Error(`${label} file at ${resolvedPath} exceeds ${MAX_SECRET_FILE_BYTES} bytes.`);
}
let raw = "";
try {
raw = fs.readFileSync(resolvedPath, "utf8");
} catch (err) {
throw new Error(`Failed to read ${label} file at ${resolvedPath}: ${String(err)}`, {
cause: err,
});
}
const secret = raw.trim();
if (!secret) {
throw new Error(`${label} file at ${resolvedPath} is empty.`);
}
return secret;
return readSecretFileSync(filePath, label, {
maxBytes: MAX_SECRET_FILE_BYTES,
rejectSymlink: true,
});
}
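The inlined checks removed above moved into the shared `readSecretFileSync`. A minimal sketch of the lstat-based guard they rely on (helper name assumed): `lstat`, unlike `stat`, does not follow symlinks, so a symlink-backed secret file is detected before any bytes are read.

```typescript
import fs from "node:fs";

// Hedged sketch (helper name assumed): reject symlinks and non-regular files
// before reading a secret, mirroring the checks in the removed inline code.
function assertRegularSecretFile(filePath: string, label: string): void {
  const stat = fs.lstatSync(filePath); // lstat: does NOT follow symlinks
  if (stat.isSymbolicLink()) {
    throw new Error(`${label} file at ${filePath} must not be a symlink.`);
  }
  if (!stat.isFile()) {
    throw new Error(`${label} file at ${filePath} must be a regular file.`);
  }
}
```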

View File

@@ -0,0 +1,274 @@
import type { CancelNotification, PromptRequest, PromptResponse } from "@agentclientprotocol/sdk";
import { describe, expect, it, vi } from "vitest";
import type { GatewayClient } from "../gateway/client.js";
import type { EventFrame } from "../gateway/protocol/index.js";
import { createInMemorySessionStore } from "./session.js";
import { AcpGatewayAgent } from "./translator.js";
import { createAcpConnection, createAcpGateway } from "./translator.test-helpers.js";
type Harness = {
agent: AcpGatewayAgent;
requestSpy: ReturnType<typeof vi.fn>;
sessionUpdateSpy: ReturnType<typeof vi.fn>;
sessionStore: ReturnType<typeof createInMemorySessionStore>;
sentRunIds: string[];
};
function createPromptRequest(sessionId: string): PromptRequest {
return {
sessionId,
prompt: [{ type: "text", text: "hello" }],
_meta: {},
} as unknown as PromptRequest;
}
function createChatEvent(payload: Record<string, unknown>): EventFrame {
return {
type: "event",
event: "chat",
payload,
} as EventFrame;
}
function createToolEvent(payload: Record<string, unknown>): EventFrame {
return {
type: "event",
event: "agent",
payload,
} as EventFrame;
}
function createHarness(sessions: Array<{ sessionId: string; sessionKey: string }>): Harness {
const sentRunIds: string[] = [];
const requestSpy = vi.fn(async (method: string, params?: Record<string, unknown>) => {
if (method === "chat.send") {
const runId = params?.idempotencyKey;
if (typeof runId === "string") {
sentRunIds.push(runId);
}
return new Promise<never>(() => {});
}
return {};
});
const connection = createAcpConnection();
const sessionStore = createInMemorySessionStore();
for (const session of sessions) {
sessionStore.createSession({
sessionId: session.sessionId,
sessionKey: session.sessionKey,
cwd: "/tmp",
});
}
const agent = new AcpGatewayAgent(
connection,
createAcpGateway(requestSpy as unknown as GatewayClient["request"]),
{ sessionStore },
);
return {
agent,
requestSpy,
// eslint-disable-next-line @typescript-eslint/unbound-method
sessionUpdateSpy: connection.sessionUpdate as unknown as ReturnType<typeof vi.fn>,
sessionStore,
sentRunIds,
};
}
async function startPendingPrompt(
harness: Harness,
sessionId: string,
): Promise<{ promptPromise: Promise<PromptResponse>; runId: string }> {
const before = harness.sentRunIds.length;
const promptPromise = harness.agent.prompt(createPromptRequest(sessionId));
await vi.waitFor(() => {
expect(harness.sentRunIds.length).toBe(before + 1);
});
return {
promptPromise,
runId: harness.sentRunIds[before],
};
}
describe("acp translator cancel and run scoping", () => {
it("cancel passes active runId to chat.abort", async () => {
const sessionKey = "agent:main:shared";
const harness = createHarness([{ sessionId: "session-1", sessionKey }]);
const pending = await startPendingPrompt(harness, "session-1");
await harness.agent.cancel({ sessionId: "session-1" } as CancelNotification);
expect(harness.requestSpy).toHaveBeenCalledWith("chat.abort", {
sessionKey,
runId: pending.runId,
});
await expect(pending.promptPromise).resolves.toEqual({ stopReason: "cancelled" });
});
it("cancel uses pending runId when there is no active run", async () => {
const sessionKey = "agent:main:shared";
const harness = createHarness([{ sessionId: "session-1", sessionKey }]);
const pending = await startPendingPrompt(harness, "session-1");
harness.sessionStore.clearActiveRun("session-1");
await harness.agent.cancel({ sessionId: "session-1" } as CancelNotification);
expect(harness.requestSpy).toHaveBeenCalledWith("chat.abort", {
sessionKey,
runId: pending.runId,
});
await expect(pending.promptPromise).resolves.toEqual({ stopReason: "cancelled" });
});
it("cancel skips chat.abort when there is no active run and no pending prompt", async () => {
const sessionKey = "agent:main:shared";
const harness = createHarness([{ sessionId: "session-1", sessionKey }]);
await harness.agent.cancel({ sessionId: "session-1" } as CancelNotification);
const abortCalls = harness.requestSpy.mock.calls.filter(([method]) => method === "chat.abort");
expect(abortCalls).toHaveLength(0);
});
it("cancel from a session without active run does not abort another session sharing the same key", async () => {
const sessionKey = "agent:main:shared";
const harness = createHarness([
{ sessionId: "session-1", sessionKey },
{ sessionId: "session-2", sessionKey },
]);
const pending2 = await startPendingPrompt(harness, "session-2");
await harness.agent.cancel({ sessionId: "session-1" } as CancelNotification);
const abortCalls = harness.requestSpy.mock.calls.filter(([method]) => method === "chat.abort");
expect(abortCalls).toHaveLength(0);
expect(harness.sessionStore.getSession("session-2")?.activeRunId).toBe(pending2.runId);
await harness.agent.handleGatewayEvent(
createChatEvent({
runId: pending2.runId,
sessionKey,
seq: 1,
state: "final",
}),
);
await expect(pending2.promptPromise).resolves.toEqual({ stopReason: "end_turn" });
});
it("drops chat events when runId does not match the active prompt", async () => {
const sessionKey = "agent:main:shared";
const harness = createHarness([{ sessionId: "session-1", sessionKey }]);
const pending = await startPendingPrompt(harness, "session-1");
await harness.agent.handleGatewayEvent(
createChatEvent({
runId: "run-other",
sessionKey,
seq: 1,
state: "final",
}),
);
expect(harness.sessionStore.getSession("session-1")?.activeRunId).toBe(pending.runId);
await harness.agent.handleGatewayEvent(
createChatEvent({
runId: pending.runId,
sessionKey,
seq: 2,
state: "final",
}),
);
await expect(pending.promptPromise).resolves.toEqual({ stopReason: "end_turn" });
});
it("drops tool events when runId does not match the active prompt", async () => {
const sessionKey = "agent:main:shared";
const harness = createHarness([{ sessionId: "session-1", sessionKey }]);
const pending = await startPendingPrompt(harness, "session-1");
harness.sessionUpdateSpy.mockClear();
await harness.agent.handleGatewayEvent(
createToolEvent({
runId: "run-other",
sessionKey,
stream: "tool",
data: {
phase: "start",
name: "read_file",
toolCallId: "tool-1",
args: { path: "README.md" },
},
}),
);
expect(harness.sessionUpdateSpy).not.toHaveBeenCalled();
await harness.agent.handleGatewayEvent(
createChatEvent({
runId: pending.runId,
sessionKey,
seq: 1,
state: "final",
}),
);
await expect(pending.promptPromise).resolves.toEqual({ stopReason: "end_turn" });
});
it("routes events to the pending prompt that matches runId when session keys are shared", async () => {
const sessionKey = "agent:main:shared";
const harness = createHarness([
{ sessionId: "session-1", sessionKey },
{ sessionId: "session-2", sessionKey },
]);
const pending1 = await startPendingPrompt(harness, "session-1");
const pending2 = await startPendingPrompt(harness, "session-2");
harness.sessionUpdateSpy.mockClear();
await harness.agent.handleGatewayEvent(
createToolEvent({
runId: pending2.runId,
sessionKey,
stream: "tool",
data: {
phase: "start",
name: "read_file",
toolCallId: "tool-2",
args: { path: "notes.txt" },
},
}),
);
expect(harness.sessionUpdateSpy).toHaveBeenCalledWith(
expect.objectContaining({
sessionId: "session-2",
update: expect.objectContaining({
sessionUpdate: "tool_call",
toolCallId: "tool-2",
status: "in_progress",
}),
}),
);
expect(harness.sessionUpdateSpy).toHaveBeenCalledTimes(1);
await harness.agent.handleGatewayEvent(
createChatEvent({
runId: pending2.runId,
sessionKey,
seq: 1,
state: "final",
}),
);
await expect(pending2.promptPromise).resolves.toEqual({ stopReason: "end_turn" });
expect(harness.sessionStore.getSession("session-1")?.activeRunId).toBe(pending1.runId);
await harness.agent.handleGatewayEvent(
createChatEvent({
runId: pending1.runId,
sessionKey,
seq: 2,
state: "final",
}),
);
await expect(pending1.promptPromise).resolves.toEqual({ stopReason: "end_turn" });
});
});


@@ -633,14 +633,25 @@ export class AcpGatewayAgent implements Agent {
if (!session) {
return;
}
// Capture runId before cancelActiveRun clears session.activeRunId.
const activeRunId = session.activeRunId;
this.sessionStore.cancelActiveRun(params.sessionId);
const pending = this.pendingPrompts.get(params.sessionId);
const scopedRunId = activeRunId ?? pending?.idempotencyKey;
if (!scopedRunId) {
return;
}
try {
await this.gateway.request("chat.abort", {
sessionKey: session.sessionKey,
runId: scopedRunId,
});
} catch (err) {
this.log(`cancel error: ${String(err)}`);
}
if (pending) {
this.pendingPrompts.delete(params.sessionId);
pending.resolve({ stopReason: "cancelled" });
@@ -672,6 +683,7 @@ export class AcpGatewayAgent implements Agent {
return;
}
const stream = payload.stream as string | undefined;
const runId = payload.runId as string | undefined;
const data = payload.data as Record<string, unknown> | undefined;
const sessionKey = payload.sessionKey as string | undefined;
if (!stream || !data || !sessionKey) {
@@ -688,7 +700,7 @@ export class AcpGatewayAgent implements Agent {
return;
}
const pending = this.findPendingBySessionKey(sessionKey, runId);
if (!pending) {
return;
}
@@ -774,13 +786,10 @@ export class AcpGatewayAgent implements Agent {
return;
}
const pending = this.findPendingBySessionKey(sessionKey, runId);
if (!pending) {
return;
}
if (state === "delta" && messageData) {
await this.handleDeltaEvent(pending.sessionId, messageData);
@@ -853,11 +862,15 @@ export class AcpGatewayAgent implements Agent {
pending.resolve({ stopReason });
}
private findPendingBySessionKey(sessionKey: string, runId?: string): PendingPrompt | undefined {
for (const pending of this.pendingPrompts.values()) {
if (pending.sessionKey !== sessionKey) {
continue;
}
if (runId && pending.idempotencyKey !== runId) {
continue;
}
return pending;
}
return undefined;
}

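The runId-scoped lookup introduced above can be illustrated in isolation. A minimal sketch (hypothetical `PendingPrompt` shape, not the real class internals): when an event carries a runId, only the pending prompt whose `idempotencyKey` matches may receive it; without a runId, the first prompt on the session key wins.

```typescript
// Hypothetical minimal model of runId-scoped pending-prompt matching.
type PendingPrompt = { sessionId: string; sessionKey: string; idempotencyKey: string };

function findPendingBySessionKey(
  prompts: Map<string, PendingPrompt>,
  sessionKey: string,
  runId?: string,
): PendingPrompt | undefined {
  for (const pending of prompts.values()) {
    if (pending.sessionKey !== sessionKey) continue;
    // Events that carry a runId must match the prompt's idempotencyKey.
    if (runId && pending.idempotencyKey !== runId) continue;
    return pending;
  }
  return undefined;
}

const prompts = new Map<string, PendingPrompt>([
  ["session-1", { sessionId: "session-1", sessionKey: "agent:main:shared", idempotencyKey: "run-1" }],
  ["session-2", { sessionId: "session-2", sessionKey: "agent:main:shared", idempotencyKey: "run-2" }],
]);

console.log(findPendingBySessionKey(prompts, "agent:main:shared", "run-2")?.sessionId); // "session-2"
console.log(findPendingBySessionKey(prompts, "agent:main:shared", "run-x")); // undefined
```

This is why the shared-session-key tests above can route a final event to `session-2` while `session-1` keeps its own active run.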
18
src/agents/lanes.test.ts Normal file

@@ -0,0 +1,18 @@
import { describe, expect, it } from "vitest";
import { AGENT_LANE_NESTED, resolveNestedAgentLane } from "./lanes.js";
describe("resolveNestedAgentLane", () => {
it("defaults to the nested lane when no lane is provided", () => {
expect(resolveNestedAgentLane()).toBe(AGENT_LANE_NESTED);
});
it("moves cron lane callers onto the nested lane", () => {
expect(resolveNestedAgentLane("cron")).toBe(AGENT_LANE_NESTED);
expect(resolveNestedAgentLane(" cron ")).toBe(AGENT_LANE_NESTED);
});
it("preserves non-cron lanes", () => {
expect(resolveNestedAgentLane("subagent")).toBe("subagent");
expect(resolveNestedAgentLane(" custom-lane ")).toBe("custom-lane");
});
});


@@ -2,3 +2,13 @@ import { CommandLane } from "../process/lanes.js";
export const AGENT_LANE_NESTED = CommandLane.Nested;
export const AGENT_LANE_SUBAGENT = CommandLane.Subagent;
export function resolveNestedAgentLane(lane?: string): string {
const trimmed = lane?.trim();
// Nested agent runs should not inherit the cron execution lane. Cron jobs
// already occupy that lane while they dispatch inner work.
if (!trimmed || trimmed === "cron") {
return AGENT_LANE_NESTED;
}
return trimmed;
}

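Restated standalone (the `AGENT_LANE_NESTED` string here is an assumed placeholder for `CommandLane.Nested`), the lane remapping behaves like:

```typescript
const AGENT_LANE_NESTED = "nested"; // placeholder for CommandLane.Nested

function resolveNestedAgentLane(lane?: string): string {
  const trimmed = lane?.trim();
  // Nested runs must not inherit the cron execution lane, which the
  // dispatching cron job already occupies.
  if (!trimmed || trimmed === "cron") return AGENT_LANE_NESTED;
  return trimmed;
}

console.log(resolveNestedAgentLane(" cron "));   // "nested"
console.log(resolveNestedAgentLane("subagent")); // "subagent"
```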

@@ -12,7 +12,7 @@ vi.mock("./auth-profiles.js", () => ({
}));
vi.mock("./model-auth.js", () => ({
resolveUsableCustomProviderApiKey: () => null,
resolveEnvApiKey: () => null,
}));


@@ -5,7 +5,7 @@ import {
resolveAuthProfileDisplayLabel,
resolveAuthProfileOrder,
} from "./auth-profiles.js";
import { resolveEnvApiKey, resolveUsableCustomProviderApiKey } from "./model-auth.js";
import { normalizeProviderId } from "./model-selection.js";
export function resolveModelAuthLabel(params: {
@@ -59,7 +59,10 @@ export function resolveModelAuthLabel(params: {
return `api-key (${envKey.source})`;
}
const customKey = resolveUsableCustomProviderApiKey({
cfg: params.cfg,
provider: providerKey,
});
if (customKey) {
return `api-key (models.json)`;
}


@@ -1,6 +1,10 @@
import { describe, expect, it } from "vitest";
import { listKnownProviderEnvApiKeyNames } from "./model-auth-env-vars.js";
import {
isKnownEnvApiKeyMarker,
isNonSecretApiKeyMarker,
NON_ENV_SECRETREF_MARKER,
} from "./model-auth-markers.js";
describe("model auth markers", () => {
it("recognizes explicit non-secret markers", () => {
@@ -23,4 +27,9 @@ describe("model auth markers", () => {
it("can exclude env marker-name interpretation for display-only paths", () => {
expect(isNonSecretApiKeyMarker("OPENAI_API_KEY", { includeEnvVarName: false })).toBe(false);
});
it("excludes aws-sdk env markers from known api key env marker helper", () => {
expect(isKnownEnvApiKeyMarker("OPENAI_API_KEY")).toBe(true);
expect(isKnownEnvApiKeyMarker("AWS_PROFILE")).toBe(false);
});
});


@@ -35,6 +35,11 @@ export function isAwsSdkAuthMarker(value: string): boolean {
return AWS_SDK_ENV_MARKERS.has(value.trim());
}
export function isKnownEnvApiKeyMarker(value: string): boolean {
const trimmed = value.trim();
return KNOWN_ENV_API_KEY_MARKERS.has(trimmed) && !isAwsSdkAuthMarker(trimmed);
}
export function resolveNonEnvSecretRefApiKeyMarker(_source: SecretRefSource): string {
return NON_ENV_SECRETREF_MARKER;
}


@@ -1,6 +1,13 @@
import { describe, expect, it } from "vitest";
import type { AuthProfileStore } from "./auth-profiles.js";
import { NON_ENV_SECRETREF_MARKER } from "./model-auth-markers.js";
import {
hasUsableCustomProviderApiKey,
requireApiKey,
resolveAwsSdkEnvVarName,
resolveModelAuthMode,
resolveUsableCustomProviderApiKey,
} from "./model-auth.js";
describe("resolveAwsSdkEnvVarName", () => {
it("prefers bearer token over access keys and profile", () => {
@@ -117,3 +124,102 @@ describe("requireApiKey", () => {
).toThrow('No API key resolved for provider "openai"');
});
});
describe("resolveUsableCustomProviderApiKey", () => {
it("returns literal custom provider keys", () => {
const resolved = resolveUsableCustomProviderApiKey({
cfg: {
models: {
providers: {
custom: {
baseUrl: "https://example.com/v1",
apiKey: "sk-custom-runtime", // pragma: allowlist secret
models: [],
},
},
},
},
provider: "custom",
});
expect(resolved).toEqual({
apiKey: "sk-custom-runtime",
source: "models.json",
});
});
it("does not treat non-env markers as usable credentials", () => {
const resolved = resolveUsableCustomProviderApiKey({
cfg: {
models: {
providers: {
custom: {
baseUrl: "https://example.com/v1",
apiKey: NON_ENV_SECRETREF_MARKER,
models: [],
},
},
},
},
provider: "custom",
});
expect(resolved).toBeNull();
});
it("resolves known env marker names from process env for custom providers", () => {
const previous = process.env.OPENAI_API_KEY;
process.env.OPENAI_API_KEY = "sk-from-env"; // pragma: allowlist secret
try {
const resolved = resolveUsableCustomProviderApiKey({
cfg: {
models: {
providers: {
custom: {
baseUrl: "https://example.com/v1",
apiKey: "OPENAI_API_KEY",
models: [],
},
},
},
},
provider: "custom",
});
expect(resolved?.apiKey).toBe("sk-from-env");
expect(resolved?.source).toContain("OPENAI_API_KEY");
} finally {
if (previous === undefined) {
delete process.env.OPENAI_API_KEY;
} else {
process.env.OPENAI_API_KEY = previous;
}
}
});
it("does not treat known env marker names as usable when env value is missing", () => {
const previous = process.env.OPENAI_API_KEY;
delete process.env.OPENAI_API_KEY;
try {
expect(
hasUsableCustomProviderApiKey(
{
models: {
providers: {
custom: {
baseUrl: "https://example.com/v1",
apiKey: "OPENAI_API_KEY",
models: [],
},
},
},
},
"custom",
),
).toBe(false);
} finally {
if (previous === undefined) {
delete process.env.OPENAI_API_KEY;
} else {
process.env.OPENAI_API_KEY = previous;
}
}
});
});


@@ -18,7 +18,11 @@ import {
resolveAuthStorePathForDisplay,
} from "./auth-profiles.js";
import { PROVIDER_ENV_API_KEY_CANDIDATES } from "./model-auth-env-vars.js";
import {
isKnownEnvApiKeyMarker,
isNonSecretApiKeyMarker,
OLLAMA_LOCAL_AUTH_MARKER,
} from "./model-auth-markers.js";
import { normalizeProviderId } from "./model-selection.js";
export { ensureAuthProfileStore, resolveAuthProfileOrder } from "./auth-profiles.js";
@@ -60,6 +64,49 @@ export function getCustomProviderApiKey(
return normalizeOptionalSecretInput(entry?.apiKey);
}
type ResolvedCustomProviderApiKey = {
apiKey: string;
source: string;
};
export function resolveUsableCustomProviderApiKey(params: {
cfg: OpenClawConfig | undefined;
provider: string;
env?: NodeJS.ProcessEnv;
}): ResolvedCustomProviderApiKey | null {
const customKey = getCustomProviderApiKey(params.cfg, params.provider);
if (!customKey) {
return null;
}
if (!isNonSecretApiKeyMarker(customKey)) {
return { apiKey: customKey, source: "models.json" };
}
if (!isKnownEnvApiKeyMarker(customKey)) {
return null;
}
const envValue = normalizeOptionalSecretInput((params.env ?? process.env)[customKey]);
if (!envValue) {
return null;
}
const applied = new Set(getShellEnvAppliedKeys());
return {
apiKey: envValue,
source: resolveEnvSourceLabel({
applied,
envVars: [customKey],
label: `${customKey} (models.json marker)`,
}),
};
}
export function hasUsableCustomProviderApiKey(
cfg: OpenClawConfig | undefined,
provider: string,
env?: NodeJS.ProcessEnv,
): boolean {
return Boolean(resolveUsableCustomProviderApiKey({ cfg, provider, env }));
}
function resolveProviderAuthOverride(
cfg: OpenClawConfig | undefined,
provider: string,
@@ -238,9 +285,9 @@ export async function resolveApiKeyForProvider(params: {
};
}
const customKey = resolveUsableCustomProviderApiKey({ cfg, provider });
if (customKey) {
return { apiKey: customKey.apiKey, source: customKey.source, mode: "api-key" };
}
const syntheticLocalAuth = resolveSyntheticLocalProviderAuth({ cfg, provider });
@@ -360,7 +407,7 @@ export function resolveModelAuthMode(
return envKey.source.includes("OAUTH_TOKEN") ? "oauth" : "api-key";
}
if (hasUsableCustomProviderApiKey(cfg, resolved)) {
return "api-key";
}

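The resolution order above can be summarized with a simplified sketch. The marker classification below is an assumption (stand-ins for `isNonSecretApiKeyMarker` and the known-env-marker set); the real helpers consult curated marker lists:

```typescript
const NON_ENV_SECRETREF_MARKER = "secretref-managed"; // assumed marker value
const KNOWN_ENV_API_KEY_MARKERS = new Set(["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]); // assumed subset

function resolveUsableKey(
  configured: string,
  env: Record<string, string | undefined>,
): { apiKey: string; source: string } | null {
  const isMarker =
    configured === NON_ENV_SECRETREF_MARKER || KNOWN_ENV_API_KEY_MARKERS.has(configured);
  // Literal keys pass through untouched.
  if (!isMarker) return { apiKey: configured, source: "models.json" };
  // Opaque secret-ref markers are never usable as literal credentials.
  if (!KNOWN_ENV_API_KEY_MARKERS.has(configured)) return null;
  // Known env marker names resolve through the environment, or fail closed.
  const value = env[configured]?.trim();
  return value ? { apiKey: value, source: `${configured} (models.json marker)` } : null;
}

console.log(resolveUsableKey("sk-literal", {})?.source); // "models.json"
console.log(resolveUsableKey("OPENAI_API_KEY", { OPENAI_API_KEY: "sk-env" })?.apiKey); // "sk-env"
console.log(resolveUsableKey("OPENAI_API_KEY", {})); // null
```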

@@ -477,6 +477,51 @@ describe("models-config", () => {
});
});
it("replaces stale merged apiKey when config key normalizes to a known env marker", async () => {
await withEnvVar("OPENAI_API_KEY", "sk-plaintext-should-not-appear", async () => {
await withTempHome(async () => {
await writeAgentModelsJson({
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
apiKey: "STALE_AGENT_KEY", // pragma: allowlist secret
api: "openai-completions",
models: [{ id: "gpt-4.1", name: "GPT-4.1", input: ["text"] }],
},
},
});
const cfg: OpenClawConfig = {
models: {
mode: "merge",
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
apiKey: "sk-plaintext-should-not-appear", // pragma: allowlist secret; simulates resolved ${OPENAI_API_KEY}
api: "openai-completions",
models: [
{
id: "gpt-4.1",
name: "GPT-4.1",
input: ["text"],
reasoning: false,
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128000,
maxTokens: 16384,
},
],
},
},
},
};
await ensureOpenClawModelsJson(cfg);
const result = await readGeneratedModelsJson<{
providers: Record<string, { apiKey?: string }>;
}>();
expect(result.providers.openai?.apiKey).toBe("OPENAI_API_KEY"); // pragma: allowlist secret
});
});
});
it("preserves explicit larger token limits when they exceed implicit catalog defaults", async () => {
await withTempHome(async () => {
await withEnvVar("MOONSHOT_API_KEY", "sk-moonshot-test", async () => {


@@ -92,4 +92,25 @@ describe("models-config merge helpers", () => {
}),
);
});
it("does not preserve stale plaintext apiKey when next entry is a marker", () => {
const merged = mergeWithExistingProviderSecrets({
nextProviders: {
custom: {
apiKey: "OPENAI_API_KEY", // pragma: allowlist secret
models: [{ id: "model", api: "openai-responses" }],
} as ProviderConfig,
},
existingProviders: {
custom: {
apiKey: preservedApiKey,
models: [{ id: "model", api: "openai-responses" }],
} as ExistingProviderConfig,
},
secretRefManagedProviders: new Set<string>(),
explicitBaseUrlProviders: new Set<string>(),
});
expect(merged.custom?.apiKey).toBe("OPENAI_API_KEY"); // pragma: allowlist secret
});
});


@@ -148,9 +148,14 @@ function resolveProviderApiSurface(
function shouldPreserveExistingApiKey(params: {
providerKey: string;
existing: ExistingProviderConfig;
nextEntry: ProviderConfig;
secretRefManagedProviders: ReadonlySet<string>;
}): boolean {
const { providerKey, existing, nextEntry, secretRefManagedProviders } = params;
const nextApiKey = typeof nextEntry.apiKey === "string" ? nextEntry.apiKey : "";
if (nextApiKey && isNonSecretApiKeyMarker(nextApiKey)) {
return false;
}
return (
!secretRefManagedProviders.has(providerKey) &&
typeof existing.apiKey === "string" &&
@@ -198,7 +203,14 @@ export function mergeWithExistingProviderSecrets(params: {
continue;
}
const preserved: Record<string, unknown> = {};
if (
shouldPreserveExistingApiKey({
providerKey: key,
existing,
nextEntry: newEntry,
secretRefManagedProviders,
})
) {
preserved.apiKey = existing.apiKey;
}
if (

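The rule the new `nextEntry` parameter enforces reduces to: never carry a stale literal key forward when the incoming entry is itself a marker. A hedged sketch (`isMarker` is an assumed stand-in for `isNonSecretApiKeyMarker`):

```typescript
// Assumed marker shape: env-var-style names or secretref-prefixed values.
const isMarker = (v: string) => /^[A-Z][A-Z0-9_]*$/.test(v) || v.startsWith("secretref-");

function shouldPreserveExistingApiKey(
  existingKey: string | undefined,
  nextKey: string | undefined,
): boolean {
  // Marker in the incoming entry wins; the stale literal must be dropped.
  if (nextKey && isMarker(nextKey)) return false;
  return typeof existingKey === "string" && existingKey.length > 0;
}

console.log(shouldPreserveExistingApiKey("sk-old-literal", "OPENAI_API_KEY")); // false
console.log(shouldPreserveExistingApiKey("sk-old-literal", undefined)); // true
```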

@@ -78,6 +78,7 @@ describe("normalizeProviders", () => {
const agentDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-agent-"));
const original = process.env.OPENAI_API_KEY;
process.env.OPENAI_API_KEY = "sk-test-secret-value-12345"; // pragma: allowlist secret
const secretRefManagedProviders = new Set<string>();
try {
const providers: NonNullable<NonNullable<OpenClawConfig["models"]>["providers"]> = {
openai: {
@@ -97,8 +98,9 @@ describe("normalizeProviders", () => {
],
},
};
const normalized = normalizeProviders({ providers, agentDir, secretRefManagedProviders });
expect(normalized?.openai?.apiKey).toBe("OPENAI_API_KEY");
expect(secretRefManagedProviders.has("openai")).toBe(true);
} finally {
if (original === undefined) {
delete process.env.OPENAI_API_KEY;


@@ -347,6 +347,9 @@ export function normalizeProviders(params: {
apiKey: normalizedConfiguredApiKey,
};
}
if (isNonSecretApiKeyMarker(normalizedConfiguredApiKey)) {
params.secretRefManagedProviders?.add(normalizedKey);
}
if (
profileApiKey &&
profileApiKey.source !== "plaintext" &&
@@ -370,6 +373,7 @@ export function normalizeProviders(params: {
if (envVarName && env[envVarName] === currentApiKey) {
mutated = true;
normalizedProvider = { ...normalizedProvider, apiKey: envVarName };
params.secretRefManagedProviders?.add(normalizedKey);
}
}


@@ -101,6 +101,56 @@ describe("models-config runtime source snapshot", () => {
});
});
it("projects cloned runtime configs onto source snapshot when preserving provider auth", async () => {
await withTempHome(async () => {
const sourceConfig: OpenClawConfig = {
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
apiKey: { source: "env", provider: "default", id: "OPENAI_API_KEY" }, // pragma: allowlist secret
api: "openai-completions" as const,
models: [],
},
},
},
};
const runtimeConfig: OpenClawConfig = {
models: {
providers: {
openai: {
baseUrl: "https://api.openai.com/v1",
apiKey: "sk-runtime-resolved", // pragma: allowlist secret
api: "openai-completions" as const,
models: [],
},
},
},
};
const clonedRuntimeConfig: OpenClawConfig = {
...runtimeConfig,
agents: {
defaults: {
imageModel: "openai/gpt-image-1",
},
},
};
try {
setRuntimeConfigSnapshot(runtimeConfig, sourceConfig);
await ensureOpenClawModelsJson(clonedRuntimeConfig);
const parsed = await readGeneratedModelsJson<{
providers: Record<string, { apiKey?: string }>;
}>();
expect(parsed.providers.openai?.apiKey).toBe("OPENAI_API_KEY"); // pragma: allowlist secret
} finally {
clearRuntimeConfigSnapshot();
clearConfigCache();
}
});
});
it("uses header markers from runtime source snapshot instead of resolved runtime values", async () => {
await withTempHome(async () => {
const sourceConfig: OpenClawConfig = {


@@ -1,8 +1,8 @@
import fs from "node:fs/promises";
import path from "node:path";
import {
getRuntimeConfigSourceSnapshot,
projectConfigOntoRuntimeSourceSnapshot,
type OpenClawConfig,
loadConfig,
} from "../config/config.js";
@@ -44,17 +44,13 @@ async function writeModelsFileAtomic(targetPath: string, contents: string): Prom
function resolveModelsConfigInput(config?: OpenClawConfig): OpenClawConfig {
const runtimeSource = getRuntimeConfigSourceSnapshot();
if (!config) {
return runtimeSource ?? loadConfig();
}
if (!runtimeSource) {
return config;
}
return projectConfigOntoRuntimeSourceSnapshot(config);
}
async function withModelsJsonWriteLock<T>(targetPath: string, run: () => Promise<T>): Promise<T> {


@@ -63,7 +63,7 @@ vi.mock("../agents/auth-profiles.js", () => ({
vi.mock("../agents/model-auth.js", () => ({
resolveEnvApiKey: () => null,
resolveUsableCustomProviderApiKey: () => null,
resolveModelAuthMode: () => "api-key",
}));


@@ -0,0 +1,152 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { beforeEach, describe, expect, it } from "vitest";
import {
callGatewayMock,
resetSubagentsConfigOverride,
setSubagentsConfigOverride,
} from "./openclaw-tools.subagents.test-harness.js";
import { addSubagentRunForTests, resetSubagentRegistryForTests } from "./subagent-registry.js";
import "./test-helpers/fast-core-tools.js";
import { createPerSenderSessionConfig } from "./test-helpers/session-config.js";
import { createSubagentsTool } from "./tools/subagents-tool.js";
function writeStore(storePath: string, store: Record<string, unknown>) {
fs.mkdirSync(path.dirname(storePath), { recursive: true });
fs.writeFileSync(storePath, JSON.stringify(store, null, 2), "utf-8");
}
describe("openclaw-tools: subagents scope isolation", () => {
let storePath = "";
beforeEach(() => {
resetSubagentRegistryForTests();
resetSubagentsConfigOverride();
callGatewayMock.mockReset();
storePath = path.join(
os.tmpdir(),
`openclaw-subagents-scope-${Date.now()}-${Math.random().toString(16).slice(2)}.json`,
);
setSubagentsConfigOverride({
session: createPerSenderSessionConfig({ store: storePath }),
});
writeStore(storePath, {});
});
it("leaf subagents do not inherit parent sibling control scope", async () => {
const leafKey = "agent:main:subagent:leaf";
const siblingKey = "agent:main:subagent:unsandboxed";
writeStore(storePath, {
[leafKey]: {
sessionId: "leaf-session",
updatedAt: Date.now(),
spawnedBy: "agent:main:main",
},
[siblingKey]: {
sessionId: "sibling-session",
updatedAt: Date.now(),
spawnedBy: "agent:main:main",
},
});
addSubagentRunForTests({
runId: "run-leaf",
childSessionKey: leafKey,
requesterSessionKey: "agent:main:main",
requesterDisplayKey: "main",
task: "sandboxed leaf",
cleanup: "keep",
createdAt: Date.now() - 30_000,
startedAt: Date.now() - 30_000,
});
addSubagentRunForTests({
runId: "run-sibling",
childSessionKey: siblingKey,
requesterSessionKey: "agent:main:main",
requesterDisplayKey: "main",
task: "unsandboxed sibling",
cleanup: "keep",
createdAt: Date.now() - 20_000,
startedAt: Date.now() - 20_000,
});
const tool = createSubagentsTool({ agentSessionKey: leafKey });
const result = await tool.execute("call-leaf-list", { action: "list" });
expect(result.details).toMatchObject({
status: "ok",
requesterSessionKey: leafKey,
callerSessionKey: leafKey,
callerIsSubagent: true,
total: 0,
active: [],
recent: [],
});
expect(callGatewayMock).not.toHaveBeenCalled();
});
it("orchestrator subagents still see children they spawned", async () => {
const orchestratorKey = "agent:main:subagent:orchestrator";
const workerKey = `${orchestratorKey}:subagent:worker`;
const siblingKey = "agent:main:subagent:sibling";
writeStore(storePath, {
[orchestratorKey]: {
sessionId: "orchestrator-session",
updatedAt: Date.now(),
spawnedBy: "agent:main:main",
},
[workerKey]: {
sessionId: "worker-session",
updatedAt: Date.now(),
spawnedBy: orchestratorKey,
},
[siblingKey]: {
sessionId: "sibling-session",
updatedAt: Date.now(),
spawnedBy: "agent:main:main",
},
});
addSubagentRunForTests({
runId: "run-worker",
childSessionKey: workerKey,
requesterSessionKey: orchestratorKey,
requesterDisplayKey: orchestratorKey,
task: "worker child",
cleanup: "keep",
createdAt: Date.now() - 30_000,
startedAt: Date.now() - 30_000,
});
addSubagentRunForTests({
runId: "run-sibling",
childSessionKey: siblingKey,
requesterSessionKey: "agent:main:main",
requesterDisplayKey: "main",
task: "sibling of orchestrator",
cleanup: "keep",
createdAt: Date.now() - 20_000,
startedAt: Date.now() - 20_000,
});
const tool = createSubagentsTool({ agentSessionKey: orchestratorKey });
const result = await tool.execute("call-orchestrator-list", { action: "list" });
const details = result.details as {
status?: string;
requesterSessionKey?: string;
total?: number;
active?: Array<{ sessionKey?: string }>;
};
expect(details.status).toBe("ok");
expect(details.requesterSessionKey).toBe(orchestratorKey);
expect(details.total).toBe(1);
expect(details.active).toEqual([
expect.objectContaining({
sessionKey: workerKey,
}),
]);
});
});


@@ -6,6 +6,11 @@ import { addSubagentRunForTests, resetSubagentRegistryForTests } from "./subagen
import { createPerSenderSessionConfig } from "./test-helpers/session-config.js";
import { createSessionsSpawnTool } from "./tools/sessions-spawn-tool.js";
vi.mock("@mariozechner/pi-ai/oauth", () => ({
getOAuthApiKey: () => undefined,
getOAuthProviders: () => [],
}));
const callGatewayMock = vi.fn();
vi.mock("../gateway/call.js", () => ({


@@ -5,6 +5,11 @@ export type LoadedConfig = ReturnType<(typeof import("../config/config.js"))["lo
export const callGatewayMock: MockFn = vi.fn();
vi.mock("@mariozechner/pi-ai/oauth", () => ({
getOAuthApiKey: () => undefined,
getOAuthProviders: () => [],
}));
const defaultConfig: LoadedConfig = {
session: {
mainKey: "main",


@@ -180,7 +180,7 @@ describe("buildInlineProviderModels", () => {
expect(result[0].headers).toBeUndefined();
});
it("drops SecretRef marker headers in inline provider models", () => {
const providers: Parameters<typeof buildInlineProviderModels>[0] = {
custom: {
headers: {
@@ -196,8 +196,6 @@ describe("buildInlineProviderModels", () => {
expect(result).toHaveLength(1);
expect(result[0].headers).toEqual({
"X-Static": "tenant-a",
});
});
@@ -245,7 +243,7 @@ describe("resolveModel", () => {
});
});
it("drops SecretRef marker provider headers in fallback models", () => {
const cfg = {
models: {
providers: {
@@ -266,8 +264,6 @@ describe("resolveModel", () => {
expect(result.error).toBeUndefined();
expect((result.model as unknown as { headers?: Record<string, string> }).headers).toEqual({
"X-Custom-Auth": "token-123",
});
});


@@ -81,8 +81,12 @@ function applyConfiguredProviderOverrides(params: {
const discoveredHeaders = sanitizeModelHeaders(discoveredModel.headers, {
stripSecretRefMarkers: true,
});
const providerHeaders = sanitizeModelHeaders(providerConfig.headers, {
stripSecretRefMarkers: true,
});
const configuredHeaders = sanitizeModelHeaders(configuredModel?.headers, {
stripSecretRefMarkers: true,
});
if (!configuredModel && !providerConfig.baseUrl && !providerConfig.api && !providerHeaders) {
return {
...discoveredModel,
@@ -118,14 +122,18 @@ export function buildInlineProviderModels(
if (!trimmed) {
return [];
}
const providerHeaders = sanitizeModelHeaders(entry?.headers, {
stripSecretRefMarkers: true,
});
return (entry?.models ?? []).map((model) => ({
...model,
provider: trimmed,
baseUrl: entry?.baseUrl,
api: model.api ?? entry?.api,
headers: (() => {
const modelHeaders = sanitizeModelHeaders((model as InlineModelEntry).headers, {
stripSecretRefMarkers: true,
});
if (!providerHeaders && !modelHeaders) {
return undefined;
}
@@ -205,8 +213,12 @@ export function resolveModelWithRegistry(params: {
}
const configuredModel = providerConfig?.models?.find((candidate) => candidate.id === modelId);
const providerHeaders = sanitizeModelHeaders(providerConfig?.headers, {
stripSecretRefMarkers: true,
});
const modelHeaders = sanitizeModelHeaders(configuredModel?.headers, {
stripSecretRefMarkers: true,
});
if (providerConfig || modelId.startsWith("mock-")) {
return normalizeResolvedModel({
provider,

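The `stripSecretRefMarkers` behavior exercised by the updated tests can be sketched as follows (hypothetical helper; the real `sanitizeModelHeaders` and its marker detection live in the source, and the `secretref-` prefix check here is an assumption):

```typescript
function sanitizeHeaders(
  headers: Record<string, string> | undefined,
  opts?: { stripSecretRefMarkers?: boolean },
): Record<string, string> | undefined {
  if (!headers) return undefined;
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    // Unresolved SecretRef markers must never reach the wire as header values.
    if (opts?.stripSecretRefMarkers && value.startsWith("secretref-")) continue;
    out[name] = value;
  }
  return Object.keys(out).length > 0 ? out : undefined;
}

console.log(
  sanitizeHeaders(
    { Authorization: "secretref-env:OPENAI_HEADER_TOKEN", "X-Static": "tenant-a" },
    { stripSecretRefMarkers: true },
  ),
); // { "X-Static": "tenant-a" }
```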

@@ -144,9 +144,9 @@ describe("resolveSubagentToolPolicy depth awareness", () => {
expect(isToolAllowedByPolicyName("sessions_spawn", policy)).toBe(false);
});
it("depth-2 leaf denies subagents", () => {
const policy = resolveSubagentToolPolicy(baseCfg, 2);
expect(isToolAllowedByPolicyName("subagents", policy)).toBe(false);
});
it("depth-2 leaf denies sessions_list and sessions_history", () => {


@@ -64,15 +64,20 @@ const SUBAGENT_TOOL_DENY_ALWAYS = [
* Additional tools denied for leaf sub-agents (depth >= maxSpawnDepth).
* These are tools that only make sense for orchestrator sub-agents that can spawn children.
*/
const SUBAGENT_TOOL_DENY_LEAF = [
"subagents",
"sessions_list",
"sessions_history",
"sessions_spawn",
];
/**
* Build the deny list for a sub-agent at a given depth.
*
* - Depth 1 with maxSpawnDepth >= 2 (orchestrator): allowed to use sessions_spawn,
* subagents, sessions_list, sessions_history so it can manage its children.
* - Depth >= maxSpawnDepth (leaf): denied subagents, sessions_spawn, and
* session management tools.
*/
function resolveSubagentDenyList(depth: number, maxSpawnDepth: number): string[] {
const isLeaf = depth >= Math.max(1, Math.floor(maxSpawnDepth));

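The depth rule documented above amounts to unioning a base deny list with the leaf-only list once a sub-agent can no longer spawn children. A sketch (the `DENY_ALWAYS` contents are a hypothetical placeholder; `DENY_LEAF` mirrors the list in this diff):

```typescript
const DENY_ALWAYS = ["gateway_admin"]; // hypothetical placeholder for SUBAGENT_TOOL_DENY_ALWAYS
const DENY_LEAF = ["subagents", "sessions_list", "sessions_history", "sessions_spawn"];

function resolveSubagentDenyList(depth: number, maxSpawnDepth: number): string[] {
  // A sub-agent at or beyond the spawn-depth limit is a leaf: it cannot
  // spawn children, so child-management tools are withheld.
  const isLeaf = depth >= Math.max(1, Math.floor(maxSpawnDepth));
  return isLeaf ? [...DENY_ALWAYS, ...DENY_LEAF] : [...DENY_ALWAYS];
}

console.log(resolveSubagentDenyList(1, 2).includes("subagents")); // false: orchestrator keeps it
console.log(resolveSubagentDenyList(2, 2).includes("subagents")); // true: leaf loses it
```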

@@ -23,6 +23,12 @@ export type AnchoredSandboxEntry = {
basename: string;
};
export type PinnedSandboxWriteEntry = {
mountRootPath: string;
relativeParentPath: string;
basename: string;
};
type RunCommand = (
script: string,
options?: {
@@ -144,6 +150,26 @@ export class SandboxFsPathGuard {
};
}
resolvePinnedWriteEntry(target: SandboxResolvedFsPath, action: string): PinnedSandboxWriteEntry {
const basename = path.posix.basename(target.containerPath);
if (!basename || basename === "." || basename === "/") {
throw new Error(`Invalid sandbox entry target: ${target.containerPath}`);
}
const parentPath = normalizeContainerPath(path.posix.dirname(target.containerPath));
const mount = this.resolveRequiredMount(parentPath, action);
const relativeParentPath = path.posix.relative(mount.containerRoot, parentPath);
if (relativeParentPath.startsWith("..") || path.posix.isAbsolute(relativeParentPath)) {
throw new Error(
`Sandbox path escapes allowed mounts; cannot ${action}: ${target.containerPath}`,
);
}
return {
mountRootPath: mount.containerRoot,
relativeParentPath: relativeParentPath === "." ? "" : relativeParentPath,
basename,
};
}
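As a concrete illustration of the escape check in `resolvePinnedWriteEntry` above (the mount root and target path are hypothetical values, not from this diff):

```typescript
import path from "node:path";

// Hypothetical mount root and an in-mount target.
const containerRoot = "/workspace";
const target = "/workspace/nested/deeper/note.txt";

const basename = path.posix.basename(target);   // "note.txt"
const parentPath = path.posix.dirname(target);  // "/workspace/nested/deeper"
const relativeParentPath = path.posix.relative(containerRoot, parentPath);

// A parent outside the mount would produce a ".."-prefixed (or absolute)
// relative path, which resolvePinnedWriteEntry rejects.
const escapes =
  relativeParentPath.startsWith("..") || path.posix.isAbsolute(relativeParentPath);
console.log({ basename, relativeParentPath, escapes });
```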
private pathIsExistingDirectory(hostPath: string): boolean {
try {
return fs.statSync(hostPath).isDirectory();

View File

@@ -10,18 +10,6 @@ export type SandboxFsCommandPlan = {
allowFailure?: boolean;
};
export function buildWriteCommitPlan(
target: SandboxResolvedFsPath,
tempPath: string,
): SandboxFsCommandPlan {
return {
checks: [{ target, options: { action: "write files", requireWritable: true } }],
recheckBeforeCommand: true,
script: 'set -eu; mv -f -- "$1" "$2"',
args: [tempPath, target.containerPath],
};
}
export function buildMkdirpPlan(
target: SandboxResolvedFsPath,
anchoredTarget: AnchoredSandboxEntry,

View File

@@ -0,0 +1,86 @@
import { spawnSync } from "node:child_process";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import { SANDBOX_PINNED_WRITE_PYTHON } from "./fs-bridge-write-helper.js";
async function withTempRoot<T>(prefix: string, run: (root: string) => Promise<T>): Promise<T> {
const root = await fs.mkdtemp(path.join(os.tmpdir(), prefix));
try {
return await run(root);
} finally {
await fs.rm(root, { recursive: true, force: true });
}
}
function runPinnedWrite(params: {
mountRoot: string;
relativeParentPath: string;
basename: string;
mkdir: boolean;
input: string;
}) {
return spawnSync(
"python3",
[
"-c",
SANDBOX_PINNED_WRITE_PYTHON,
params.mountRoot,
params.relativeParentPath,
params.basename,
params.mkdir ? "1" : "0",
],
{
input: params.input,
encoding: "utf8",
stdio: ["pipe", "pipe", "pipe"],
},
);
}
describe("sandbox pinned write helper", () => {
it("creates missing parents and writes through a pinned directory fd", async () => {
await withTempRoot("openclaw-write-helper-", async (root) => {
const workspace = path.join(root, "workspace");
await fs.mkdir(workspace, { recursive: true });
const result = runPinnedWrite({
mountRoot: workspace,
relativeParentPath: "nested/deeper",
basename: "note.txt",
mkdir: true,
input: "hello",
});
expect(result.status).toBe(0);
await expect(
fs.readFile(path.join(workspace, "nested", "deeper", "note.txt"), "utf8"),
).resolves.toBe("hello");
});
});
it.runIf(process.platform !== "win32")(
"rejects symlink-parent writes instead of materializing a temp file outside the mount",
async () => {
await withTempRoot("openclaw-write-helper-", async (root) => {
const workspace = path.join(root, "workspace");
const outside = path.join(root, "outside");
await fs.mkdir(workspace, { recursive: true });
await fs.mkdir(outside, { recursive: true });
await fs.symlink(outside, path.join(workspace, "alias"));
const result = runPinnedWrite({
mountRoot: workspace,
relativeParentPath: "alias",
basename: "escape.txt",
mkdir: false,
input: "owned",
});
expect(result.status).not.toBe(0);
await expect(fs.readFile(path.join(outside, "escape.txt"), "utf8")).rejects.toThrow();
});
},
);
});

View File

@@ -0,0 +1,109 @@
import type { PathSafetyCheck, PinnedSandboxWriteEntry } from "./fs-bridge-path-safety.js";
import type { SandboxFsCommandPlan } from "./fs-bridge-shell-command-plans.js";
export const SANDBOX_PINNED_WRITE_PYTHON = [
"import errno",
"import os",
"import secrets",
"import sys",
"",
"mount_root = sys.argv[1]",
"relative_parent = sys.argv[2]",
"basename = sys.argv[3]",
'mkdir_enabled = sys.argv[4] == "1"',
"",
"DIR_FLAGS = os.O_RDONLY",
"if hasattr(os, 'O_DIRECTORY'):",
" DIR_FLAGS |= os.O_DIRECTORY",
"if hasattr(os, 'O_NOFOLLOW'):",
" DIR_FLAGS |= os.O_NOFOLLOW",
"",
"WRITE_FLAGS = os.O_WRONLY | os.O_CREAT | os.O_EXCL",
"if hasattr(os, 'O_NOFOLLOW'):",
" WRITE_FLAGS |= os.O_NOFOLLOW",
"",
"def open_dir(path, dir_fd=None):",
" return os.open(path, DIR_FLAGS, dir_fd=dir_fd)",
"",
"def walk_parent(root_fd, rel_parent, mkdir_enabled):",
" current_fd = os.dup(root_fd)",
" try:",
" segments = [segment for segment in rel_parent.split('/') if segment and segment != '.']",
" for segment in segments:",
" if segment == '..':",
" raise OSError(errno.EPERM, 'path traversal is not allowed', segment)",
" try:",
" next_fd = open_dir(segment, dir_fd=current_fd)",
" except FileNotFoundError:",
" if not mkdir_enabled:",
" raise",
" os.mkdir(segment, 0o777, dir_fd=current_fd)",
" next_fd = open_dir(segment, dir_fd=current_fd)",
" os.close(current_fd)",
" current_fd = next_fd",
" return current_fd",
" except Exception:",
" os.close(current_fd)",
" raise",
"",
"def create_temp_file(parent_fd, basename):",
" prefix = '.openclaw-write-' + basename + '.'",
" for _ in range(128):",
" candidate = prefix + secrets.token_hex(6)",
" try:",
" fd = os.open(candidate, WRITE_FLAGS, 0o600, dir_fd=parent_fd)",
" return candidate, fd",
" except FileExistsError:",
" continue",
" raise RuntimeError('failed to allocate sandbox temp file')",
"",
"root_fd = open_dir(mount_root)",
"parent_fd = None",
"temp_fd = None",
"temp_name = None",
"try:",
" parent_fd = walk_parent(root_fd, relative_parent, mkdir_enabled)",
" temp_name, temp_fd = create_temp_file(parent_fd, basename)",
" while True:",
" chunk = sys.stdin.buffer.read(65536)",
" if not chunk:",
" break",
" os.write(temp_fd, chunk)",
" os.fsync(temp_fd)",
" os.close(temp_fd)",
" temp_fd = None",
" os.replace(temp_name, basename, src_dir_fd=parent_fd, dst_dir_fd=parent_fd)",
" os.fsync(parent_fd)",
"except Exception:",
" if temp_fd is not None:",
" os.close(temp_fd)",
" temp_fd = None",
" if temp_name is not None and parent_fd is not None:",
" try:",
" os.unlink(temp_name, dir_fd=parent_fd)",
" except FileNotFoundError:",
" pass",
" raise",
"finally:",
" if parent_fd is not None:",
" os.close(parent_fd)",
" os.close(root_fd)",
].join("\n");
export function buildPinnedWritePlan(params: {
check: PathSafetyCheck;
pinned: PinnedSandboxWriteEntry;
mkdir: boolean;
}): SandboxFsCommandPlan & { stdin?: Buffer | string } {
return {
checks: [params.check],
recheckBeforeCommand: true,
script: ["set -eu", "python3 - \"$@\" <<'PY'", SANDBOX_PINNED_WRITE_PYTHON, "PY"].join("\n"),
args: [
params.pinned.mountRootPath,
params.pinned.relativeParentPath,
params.pinned.basename,
params.mkdir ? "1" : "0",
],
};
}

View File

@@ -130,11 +130,11 @@ describe("sandbox fs bridge shell compatibility", () => {
const scripts = getScriptsFromCalls();
expect(scripts.some((script) => script.includes('cat >"$1"'))).toBe(false);
expect(scripts.some((script) => script.includes('cat >"$tmp"'))).toBe(true);
expect(scripts.some((script) => script.includes('mv -f -- "$1" "$2"'))).toBe(true);
expect(scripts.some((script) => script.includes('cat >"$tmp"'))).toBe(false);
expect(scripts.some((script) => script.includes("os.replace("))).toBe(true);
});
it("re-validates target before final rename and cleans temp file on failure", async () => {
it("re-validates target before the pinned write helper runs", async () => {
const { mockedOpenBoundaryFile } = await import("./fs-bridge.test-helpers.js");
mockedOpenBoundaryFile
.mockImplementationOnce(async () => ({ ok: false, reason: "path" }))
@@ -150,8 +150,6 @@ describe("sandbox fs bridge shell compatibility", () => {
);
const scripts = getScriptsFromCalls();
expect(scripts.some((script) => script.includes("mktemp"))).toBe(true);
expect(scripts.some((script) => script.includes('mv -f -- "$1" "$2"'))).toBe(false);
expect(scripts.some((script) => script.includes('rm -f -- "$1"'))).toBe(true);
expect(scripts.some((script) => script.includes("os.replace("))).toBe(false);
});
});

View File

@@ -6,15 +6,14 @@ import {
buildRemovePlan,
buildRenamePlan,
buildStatPlan,
buildWriteCommitPlan,
type SandboxFsCommandPlan,
} from "./fs-bridge-shell-command-plans.js";
import { buildPinnedWritePlan } from "./fs-bridge-write-helper.js";
import {
buildSandboxFsMounts,
resolveSandboxFsPathWithMounts,
type SandboxResolvedFsPath,
} from "./fs-paths.js";
import { normalizeContainerPath } from "./path-utils.js";
import type { SandboxContext, SandboxWorkspaceAccess } from "./types.js";
type RunCommandOptions = {
@@ -112,26 +111,24 @@ class SandboxFsBridgeImpl implements SandboxFsBridge {
}): Promise<void> {
const target = this.resolveResolvedPath(params);
this.ensureWriteAccess(target, "write files");
await this.pathGuard.assertPathSafety(target, { action: "write files", requireWritable: true });
const writeCheck = {
target,
options: { action: "write files", requireWritable: true } as const,
};
await this.pathGuard.assertPathSafety(target, writeCheck.options);
const buffer = Buffer.isBuffer(params.data)
? params.data
: Buffer.from(params.data, params.encoding ?? "utf8");
const tempPath = await this.writeFileToTempPath({
targetContainerPath: target.containerPath,
mkdir: params.mkdir !== false,
data: buffer,
const pinnedWriteTarget = this.pathGuard.resolvePinnedWriteEntry(target, "write files");
await this.runCheckedCommand({
...buildPinnedWritePlan({
check: writeCheck,
pinned: pinnedWriteTarget,
mkdir: params.mkdir !== false,
}),
stdin: buffer,
signal: params.signal,
});
try {
await this.runCheckedCommand({
...buildWriteCommitPlan(target, tempPath),
signal: params.signal,
});
} catch (error) {
await this.cleanupTempPath(tempPath, params.signal);
throw error;
}
}
async mkdirp(params: { filePath: string; cwd?: string; signal?: AbortSignal }): Promise<void> {
@@ -265,58 +262,6 @@ class SandboxFsBridgeImpl implements SandboxFsBridge {
return await this.runCheckedCommand({ ...plan, signal });
}
private async writeFileToTempPath(params: {
targetContainerPath: string;
mkdir: boolean;
data: Buffer;
signal?: AbortSignal;
}): Promise<string> {
const script = params.mkdir
? [
"set -eu",
'target="$1"',
'dir=$(dirname -- "$target")',
'if [ "$dir" != "." ]; then mkdir -p -- "$dir"; fi',
'base=$(basename -- "$target")',
'tmp=$(mktemp "$dir/.openclaw-write-$base.XXXXXX")',
'cat >"$tmp"',
'printf "%s\\n" "$tmp"',
].join("\n")
: [
"set -eu",
'target="$1"',
'dir=$(dirname -- "$target")',
'base=$(basename -- "$target")',
'tmp=$(mktemp "$dir/.openclaw-write-$base.XXXXXX")',
'cat >"$tmp"',
'printf "%s\\n" "$tmp"',
].join("\n");
const result = await this.runCommand(script, {
args: [params.targetContainerPath],
stdin: params.data,
signal: params.signal,
});
const tempPath = result.stdout.toString("utf8").trim().split(/\r?\n/).at(-1)?.trim();
if (!tempPath || !tempPath.startsWith("/")) {
throw new Error(
`Failed to create temporary sandbox write path for ${params.targetContainerPath}`,
);
}
return normalizeContainerPath(tempPath);
}
private async cleanupTempPath(tempPath: string, signal?: AbortSignal): Promise<void> {
try {
await this.runCommand('set -eu; rm -f -- "$1"', {
args: [tempPath],
signal,
allowFailure: true,
});
} catch {
// Best-effort cleanup only.
}
}
private ensureWriteAccess(target: SandboxResolvedFsPath, action: string) {
if (!allowsWrites(this.sandbox.workspaceAccess) || !target.writable) {
throw new Error(`Sandbox path is read-only; cannot ${action}: ${target.containerPath}`);

View File

@@ -1,14 +1,21 @@
import { createHash } from "node:crypto";
import fs from "node:fs";
import {
createTarEntrySafetyChecker,
createTarEntryPreflightChecker,
extractArchive as extractArchiveSafe,
mergeExtractedTreeIntoDestination,
prepareArchiveDestinationDir,
withStagedArchiveDestination,
} from "../infra/archive.js";
import { runCommandWithTimeout } from "../process/exec.js";
import { parseTarVerboseMetadata } from "./skills-install-tar-verbose.js";
import { hasBinary } from "./skills.js";
export type ArchiveExtractResult = { stdout: string; stderr: string; code: number | null };
type TarPreflightResult = {
entries: string[];
metadata: ReturnType<typeof parseTarVerboseMetadata>;
};
async function hashFileSha256(filePath: string): Promise<string> {
const hash = createHash("sha256");
@@ -24,6 +31,112 @@ async function hashFileSha256(filePath: string): Promise<string> {
});
}
function commandFailureResult(
result: { stdout: string; stderr: string; code: number | null },
fallbackStderr: string,
): ArchiveExtractResult {
return {
stdout: result.stdout,
stderr: result.stderr || fallbackStderr,
code: result.code,
};
}
function buildTarExtractArgv(params: {
archivePath: string;
targetDir: string;
stripComponents: number;
}): string[] {
const argv = ["tar", "xf", params.archivePath, "-C", params.targetDir];
if (params.stripComponents > 0) {
argv.push("--strip-components", String(params.stripComponents));
}
return argv;
}
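For a quick sense of the argv shape this builder emits (the function is copied from the diff so the example is self-contained; the paths are hypothetical):

```typescript
function buildTarExtractArgv(params: {
  archivePath: string;
  targetDir: string;
  stripComponents: number;
}): string[] {
  const argv = ["tar", "xf", params.archivePath, "-C", params.targetDir];
  if (params.stripComponents > 0) {
    argv.push("--strip-components", String(params.stripComponents));
  }
  return argv;
}

// Hypothetical paths, just to show where the strip flag lands.
console.log(
  buildTarExtractArgv({ archivePath: "/tmp/a.tbz2", targetDir: "/tmp/stage", stripComponents: 1 }),
);
```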
async function readTarPreflight(params: {
archivePath: string;
timeoutMs: number;
}): Promise<TarPreflightResult | ArchiveExtractResult> {
const listResult = await runCommandWithTimeout(["tar", "tf", params.archivePath], {
timeoutMs: params.timeoutMs,
});
if (listResult.code !== 0) {
return commandFailureResult(listResult, "tar list failed");
}
const entries = listResult.stdout
.split("\n")
.map((line) => line.trim())
.filter(Boolean);
const verboseResult = await runCommandWithTimeout(["tar", "tvf", params.archivePath], {
timeoutMs: params.timeoutMs,
});
if (verboseResult.code !== 0) {
return commandFailureResult(verboseResult, "tar verbose list failed");
}
const metadata = parseTarVerboseMetadata(verboseResult.stdout);
if (metadata.length !== entries.length) {
return {
stdout: verboseResult.stdout,
stderr: `tar verbose/list entry count mismatch (${metadata.length} vs ${entries.length})`,
code: 1,
};
}
return { entries, metadata };
}
function isArchiveExtractFailure(
value: TarPreflightResult | ArchiveExtractResult,
): value is ArchiveExtractResult {
return "code" in value;
}
async function verifyArchiveHashStable(params: {
archivePath: string;
expectedHash: string;
}): Promise<ArchiveExtractResult | null> {
const postPreflightHash = await hashFileSha256(params.archivePath);
if (postPreflightHash === params.expectedHash) {
return null;
}
return {
stdout: "",
stderr: "tar archive changed during safety preflight; refusing to extract",
code: 1,
};
}
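The hash-stability guard above boils down to comparing two SHA-256 digests taken before and after the preflight; a standalone sketch of that comparison (the archive bytes here are hypothetical placeholders):

```typescript
import { createHash } from "node:crypto";

function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Hypothetical archive contents: unchanged vs. swapped mid-preflight.
const preflightHash = sha256("archive-bytes-v1");
const stable = sha256("archive-bytes-v1") === preflightHash;  // safe to extract
const swapped = sha256("archive-bytes-v2") === preflightHash; // refuse to extract

console.log({ stable, swapped });
```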
async function extractTarBz2WithStaging(params: {
archivePath: string;
destinationRealDir: string;
stripComponents: number;
timeoutMs: number;
}): Promise<ArchiveExtractResult> {
return await withStagedArchiveDestination({
destinationRealDir: params.destinationRealDir,
run: async (stagingDir) => {
const extractResult = await runCommandWithTimeout(
buildTarExtractArgv({
archivePath: params.archivePath,
targetDir: stagingDir,
stripComponents: params.stripComponents,
}),
{ timeoutMs: params.timeoutMs },
);
if (extractResult.code !== 0) {
return extractResult;
}
await mergeExtractedTreeIntoDestination({
sourceDir: stagingDir,
destinationDir: params.destinationRealDir,
destinationRealDir: params.destinationRealDir,
});
return extractResult;
},
});
}
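`withStagedArchiveDestination` comes from `../infra/archive.js` and is not shown in this diff; a minimal sketch of the stage-then-merge idea it implements (synchronous for brevity, and not the real helper):

```typescript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

// Sketch: tar writes into a throwaway staging directory first, so only a
// verified tree is ever merged into the real destination; the staging dir is
// always cleaned up, whether the run succeeds or throws.
function withStagingSync<T>(run: (stagingDir: string) => T): T {
  const stagingDir = fs.mkdtempSync(path.join(os.tmpdir(), "openclaw-stage-"));
  try {
    return run(stagingDir);
  } finally {
    fs.rmSync(stagingDir, { recursive: true, force: true });
  }
}

const content = withStagingSync((dir) => {
  fs.writeFileSync(path.join(dir, "note.txt"), "staged");
  return fs.readFileSync(path.join(dir, "note.txt"), "utf8");
});
console.log(content); // "staged"
```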
export async function extractArchive(params: {
archivePath: string;
archiveType: string;
@@ -66,49 +179,25 @@ export async function extractArchive(params: {
return { stdout: "", stderr: "tar not found on PATH", code: null };
}
const destinationRealDir = await prepareArchiveDestinationDir(targetDir);
const preflightHash = await hashFileSha256(archivePath);
// Preflight list to prevent zip-slip style traversal before extraction.
const listResult = await runCommandWithTimeout(["tar", "tf", archivePath], { timeoutMs });
if (listResult.code !== 0) {
return {
stdout: listResult.stdout,
stderr: listResult.stderr || "tar list failed",
code: listResult.code,
};
const preflight = await readTarPreflight({ archivePath, timeoutMs });
if (isArchiveExtractFailure(preflight)) {
return preflight;
}
const entries = listResult.stdout
.split("\n")
.map((line) => line.trim())
.filter(Boolean);
const verboseResult = await runCommandWithTimeout(["tar", "tvf", archivePath], { timeoutMs });
if (verboseResult.code !== 0) {
return {
stdout: verboseResult.stdout,
stderr: verboseResult.stderr || "tar verbose list failed",
code: verboseResult.code,
};
}
const metadata = parseTarVerboseMetadata(verboseResult.stdout);
if (metadata.length !== entries.length) {
return {
stdout: verboseResult.stdout,
stderr: `tar verbose/list entry count mismatch (${metadata.length} vs ${entries.length})`,
code: 1,
};
}
const checkTarEntrySafety = createTarEntrySafetyChecker({
rootDir: targetDir,
const checkTarEntrySafety = createTarEntryPreflightChecker({
rootDir: destinationRealDir,
stripComponents: strip,
escapeLabel: "targetDir",
});
for (let i = 0; i < entries.length; i += 1) {
const entryPath = entries[i];
const entryMeta = metadata[i];
for (let i = 0; i < preflight.entries.length; i += 1) {
const entryPath = preflight.entries[i];
const entryMeta = preflight.metadata[i];
if (!entryPath || !entryMeta) {
return {
stdout: verboseResult.stdout,
stdout: "",
stderr: "tar metadata parse failure",
code: 1,
};
@@ -120,20 +209,20 @@ export async function extractArchive(params: {
});
}
const postPreflightHash = await hashFileSha256(archivePath);
if (postPreflightHash !== preflightHash) {
return {
stdout: "",
stderr: "tar archive changed during safety preflight; refusing to extract",
code: 1,
};
const hashFailure = await verifyArchiveHashStable({
archivePath,
expectedHash: preflightHash,
});
if (hashFailure) {
return hashFailure;
}
const argv = ["tar", "xf", archivePath, "-C", targetDir];
if (strip > 0) {
argv.push("--strip-components", String(strip));
}
return await runCommandWithTimeout(argv, { timeoutMs });
return await extractTarBz2WithStaging({
archivePath,
destinationRealDir,
stripComponents: strip,
timeoutMs,
});
}
return { stdout: "", stderr: `unsupported archive type: ${archiveType}`, code: null };

View File

@@ -425,4 +425,47 @@ describe("installDownloadSpec extraction safety (tar.bz2)", () => {
.some((call) => (call[0] as string[])[1] === "xf");
expect(extractionAttempted).toBe(false);
});
it("rejects tar.bz2 entries that traverse pre-existing targetDir symlinks", async () => {
const entry = buildEntry("tbz2-targetdir-symlink");
const targetDir = path.join(resolveSkillToolsRootDir(entry), "target");
const outsideDir = path.join(workspaceDir, "tbz2-targetdir-outside");
await fs.mkdir(targetDir, { recursive: true });
await fs.mkdir(outsideDir, { recursive: true });
await fs.symlink(
outsideDir,
path.join(targetDir, "escape"),
process.platform === "win32" ? "junction" : undefined,
);
mockArchiveResponse(new Uint8Array([1, 2, 3]));
runCommandWithTimeoutMock.mockImplementation(async (...argv: unknown[]) => {
const cmd = (argv[0] ?? []) as string[];
if (cmd[0] === "tar" && cmd[1] === "tf") {
return runCommandResult({ stdout: "escape/pwn.txt\n" });
}
if (cmd[0] === "tar" && cmd[1] === "tvf") {
return runCommandResult({ stdout: "-rw-r--r-- 0 0 0 0 Jan 1 00:00 escape/pwn.txt\n" });
}
if (cmd[0] === "tar" && cmd[1] === "xf") {
const stagingDir = String(cmd[cmd.indexOf("-C") + 1] ?? "");
await fs.mkdir(path.join(stagingDir, "escape"), { recursive: true });
await fs.writeFile(path.join(stagingDir, "escape", "pwn.txt"), "owned");
return runCommandResult({ stdout: "ok" });
}
return runCommandResult();
});
const result = await installDownloadSkill({
name: "tbz2-targetdir-symlink",
url: "https://example.invalid/evil.tbz2",
archive: "tar.bz2",
targetDir,
});
expect(result.ok).toBe(false);
expect(result.stderr.toLowerCase()).toContain("archive entry traverses symlink in destination");
expect(await fileExists(path.join(outsideDir, "pwn.txt"))).toBe(false);
});
});

View File

@@ -7,7 +7,6 @@ import {
sortSubagentRuns,
type SubagentTargetResolution,
} from "../../auto-reply/reply/subagents-utils.js";
import { DEFAULT_SUBAGENT_MAX_SPAWN_DEPTH } from "../../config/agent-limits.js";
import { loadConfig } from "../../config/config.js";
import type { SessionEntry } from "../../config/sessions.js";
import { loadSessionStore, resolveStorePath, updateSessionStore } from "../../config/sessions.js";
@@ -28,7 +27,6 @@ import { INTERNAL_MESSAGE_CHANNEL } from "../../utils/message-channel.js";
import { AGENT_LANE_SUBAGENT } from "../lanes.js";
import { abortEmbeddedPiRun } from "../pi-embedded.js";
import { optionalStringEnum } from "../schema/typebox.js";
import { getSubagentDepthFromSessionStore } from "../subagent-depth.js";
import {
clearSubagentRunSteerRestart,
countPendingDescendantRuns,
@@ -204,36 +202,28 @@ function resolveRequesterKey(params: {
};
}
// Check if this sub-agent can spawn children (orchestrator).
// If so, it should see its own children, not its parent's children.
const callerDepth = getSubagentDepthFromSessionStore(callerSessionKey, { cfg: params.cfg });
const maxSpawnDepth =
params.cfg.agents?.defaults?.subagents?.maxSpawnDepth ?? DEFAULT_SUBAGENT_MAX_SPAWN_DEPTH;
if (callerDepth < maxSpawnDepth) {
// Orchestrator sub-agent: use its own session key as requester
// so it sees children it spawned.
return {
requesterSessionKey: callerSessionKey,
callerSessionKey,
callerIsSubagent: true,
};
}
// Leaf sub-agent: walk up to its parent so it can see sibling runs.
const cache = new Map<string, Record<string, SessionEntry>>();
const callerEntry = resolveSessionEntryForKey({
cfg: params.cfg,
key: callerSessionKey,
cache,
}).entry;
const spawnedBy = typeof callerEntry?.spawnedBy === "string" ? callerEntry.spawnedBy.trim() : "";
return {
requesterSessionKey: spawnedBy || callerSessionKey,
// Subagents can only control runs spawned from their own session key.
// Announce routing still uses SubagentRunRecord.requesterSessionKey elsewhere.
requesterSessionKey: callerSessionKey,
callerSessionKey,
callerIsSubagent: true,
};
}
function ensureSubagentControlsOwnDescendants(params: {
requester: ResolvedRequesterKey;
entry: SubagentRunRecord;
}) {
if (!params.requester.callerIsSubagent) {
return undefined;
}
if (params.entry.requesterSessionKey === params.requester.callerSessionKey) {
return undefined;
}
return "Subagents can only control runs spawned from their own session.";
}
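The ownership guard above is self-contained enough to run in isolation; with its parameter types trimmed to just the fields it reads (the type shapes are assumptions, the logic is copied from the diff):

```typescript
// Hypothetical minimal shapes for the two inputs the guard inspects.
type Requester = { callerIsSubagent: boolean; callerSessionKey: string };
type RunRecord = { requesterSessionKey: string };

function ensureSubagentControlsOwnDescendants(params: {
  requester: Requester;
  entry: RunRecord;
}): string | undefined {
  if (!params.requester.callerIsSubagent) {
    return undefined; // top-level callers are unrestricted
  }
  if (params.entry.requesterSessionKey === params.requester.callerSessionKey) {
    return undefined; // the run was spawned from the caller's own session
  }
  return "Subagents can only control runs spawned from their own session.";
}

const self = { callerIsSubagent: true, callerSessionKey: "agent:a" };
console.log(ensureSubagentControlsOwnDescendants({ requester: self, entry: { requesterSessionKey: "agent:a" } })); // undefined: allowed
console.log(ensureSubagentControlsOwnDescendants({ requester: self, entry: { requesterSessionKey: "agent:b" } })); // forbidden message
```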
async function killSubagentRun(params: {
cfg: ReturnType<typeof loadConfig>;
entry: SubagentRunRecord;
@@ -499,6 +489,20 @@ export function createSubagentsTool(opts?: { agentSessionKey?: string }): AnyAge
error: resolved.error ?? "Unknown subagent target.",
});
}
const ownershipError = ensureSubagentControlsOwnDescendants({
requester,
entry: resolved.entry,
});
if (ownershipError) {
return jsonResult({
status: "forbidden",
action: "kill",
target,
runId: resolved.entry.runId,
sessionKey: resolved.entry.childSessionKey,
error: ownershipError,
});
}
const killCache = new Map<string, Record<string, SessionEntry>>();
const stopResult = await killSubagentRun({
cfg,
@@ -568,6 +572,20 @@ export function createSubagentsTool(opts?: { agentSessionKey?: string }): AnyAge
error: resolved.error ?? "Unknown subagent target.",
});
}
const ownershipError = ensureSubagentControlsOwnDescendants({
requester,
entry: resolved.entry,
});
if (ownershipError) {
return jsonResult({
status: "forbidden",
action: "steer",
target,
runId: resolved.entry.runId,
sessionKey: resolved.entry.childSessionKey,
error: ownershipError,
});
}
if (resolved.entry.endedAt) {
return jsonResult({
status: "done",

Some files were not shown because too many files have changed in this diff.