Mirror of https://github.com/openclaw/openclaw.git (synced 2026-05-06 09:30:43 +00:00)

Merge branch 'main' into docs/fix-gogcli-binary-urls
@@ -16,6 +16,19 @@ warm caches, local build state, and fast feedback.

Testbox is the expensive path. Reach for it deliberately.

OpenClaw maintainers can opt into Testbox-first validation by setting
`OPENCLAW_TESTBOX=1` in their environment or standing agent rules. This mode is
maintainers-only and requires Blacksmith access.

When `OPENCLAW_TESTBOX=1` is set in OpenClaw:

- Pre-warm a Testbox early for longer, wider, or uncertain work.
- Prefer Testbox for `pnpm` gates, e2e, package-like proof, and broad suites.
- Reuse the same Testbox ID for every run command in the same task/session.
- Use local commands only when the task explicitly sets
  `OPENCLAW_LOCAL_CHECK_MODE=throttled|full`, or when the user asks for local
  proof.
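
A minimal shell sketch of that routing, assuming only the two env vars named above (the `path` value and `echo` are illustrative, not part of any real tooling):

```bash
# Illustrative sketch only: choose the validation path from the env vars above.
OPENCLAW_TESTBOX=1
OPENCLAW_LOCAL_CHECK_MODE=""   # "throttled" or "full" is the explicit local escape hatch

if [ -n "$OPENCLAW_LOCAL_CHECK_MODE" ]; then
  path="local ($OPENCLAW_LOCAL_CHECK_MODE)"
elif [ "${OPENCLAW_TESTBOX:-0}" = "1" ]; then
  path="testbox"
else
  path="local"
fi
echo "$path"   # prints: testbox
```

The local escape hatch deliberately wins over the Testbox default, matching the bullet above.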

## Install the CLI

If `blacksmith` is not installed, install it:
@@ -81,7 +94,8 @@ Prefer Testbox when:

- you are reproducing CI-only failures
- you need the exact workflow image/job environment from GitHub Actions

For OpenClaw specifically, normal local iteration stays local unless maintainer
Testbox mode is enabled with `OPENCLAW_TESTBOX=1`:

- `pnpm check:changed`
- `pnpm test:changed`
@@ -89,27 +103,49 @@ For OpenClaw specifically, normal local iteration should stay local:

- `pnpm test:serial`
- `pnpm build`

If `OPENCLAW_TESTBOX=1` is enabled, run those same repo commands inside the
warm Testbox. If the user wants laptop-friendly local proof for one command, use
the explicit escape hatch `OPENCLAW_LOCAL_CHECK_MODE=throttled`.

For installable-package product proof, prefer the GitHub `Package Acceptance`
workflow over an ad hoc Testbox command. It resolves one package candidate
(`source=npm`, `source=ref`, `source=url`, or `source=artifact`), uploads it as
`package-under-test`, and runs the reusable Docker E2E lanes against that exact
tarball on GitHub/Blacksmith runners. Use `workflow_ref` for the trusted
workflow/harness code and `package_ref` for the source ref to pack when testing
an older trusted branch, tag, or SHA.

## Setup: Warmup before coding

If you decided Testbox is warranted, warm one up early. This returns an ID
instantly and boots the CI environment in the background while you work:

    blacksmith testbox warmup ci-check-testbox.yml
    # → tbx_01jkz5b3t9...

Save this ID. You need it for every `run` command.
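
One way to keep the ID handy for the rest of the session (a sketch; the file path is arbitrary, the ID is the placeholder from the example output, and the final `echo` only prints the command it would run):

```bash
# Sketch: stash the warmup ID once, then reuse it for every `run` in the session.
# "tbx_01jkz5b3t9" stands in for whatever `blacksmith testbox warmup` printed.
echo "tbx_01jkz5b3t9" > /tmp/openclaw-testbox-id
TBX_ID="$(cat /tmp/openclaw-testbox-id)"
echo "blacksmith testbox run --id $TBX_ID \"pnpm check:changed\""
```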

For OpenClaw maintainer Testbox mode, pre-warm at the start of longer or wider
tasks:

    blacksmith testbox warmup ci-check-testbox.yml --ref main --idle-timeout 90

Use the build-artifact warmup when e2e/package/build proof benefits from seeded
`dist/`, `dist-runtime/`, and build-all caches:

    blacksmith testbox warmup ci-build-artifacts-testbox.yml --ref main --idle-timeout 90

Warmup dispatches a GitHub Actions workflow that provisions a VM with the
full CI environment: dependencies installed, services started, secrets
injected, and a clean checkout of the repo at the default branch.

In OpenClaw, raw commit SHAs are not reliable dispatch refs for `warmup --ref`;
use a branch or tag. The build-artifact workflow resolves `openclaw@beta` and
`openclaw@latest` to SHA cache keys internally.

Options:

    --ref <branch|tag>    Git ref to dispatch against (default: repo's default branch)
    --job <name>          Specific job within the workflow (if it has multiple)
    --idle-timeout <min>  Idle timeout in minutes (default: 30)
@@ -226,6 +262,11 @@ services, CI-only runners, or reproducibility against the workflow image.

If the repo says local tests/builds are the normal path, follow the repo.

OpenClaw maintainer exception: if `OPENCLAW_TESTBOX=1` is set by the user or
agent environment, treat Testbox as the normal validation path for this repo.
Use `OPENCLAW_LOCAL_CHECK_MODE=throttled|full` as the explicit local escape
hatch.

## When to use

Use Testbox when:
@@ -242,12 +283,13 @@ checks that need parity or remote state.

## Workflow

1. Decide whether the repo's local loop is the right default. For OpenClaw,
   `OPENCLAW_TESTBOX=1` makes Testbox the maintainer default.
2. If Testbox is warranted, warm up early:
   `blacksmith testbox warmup ci-check-testbox.yml --ref main --idle-timeout 90` → save the ID
3. Write code while the testbox boots in the background.
4. Run the remote command when needed:
   `blacksmith testbox run --id <ID> "npm test"`
   `blacksmith testbox run --id <ID> "pnpm check:changed"`
5. If tests fail, fix code and re-run against the same warm box.
6. If you changed dependency manifests (package.json, etc.), prepend
   the install command: `blacksmith testbox run --id <ID> "npm install && npm test"`
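
The run/re-run loop in steps 4-5 can be sketched as a dry run (the commands are echoed, never executed, and the ID is a placeholder):

```bash
# Dry-run sketch of steps 4-5: one warm box ID for every command (echo only).
TBX_ID="tbx_01jkz5b3t9"   # placeholder; use the ID your warmup actually printed
run_cmd="blacksmith testbox run --id $TBX_ID"
echo "$run_cmd \"pnpm check:changed\""
echo "$run_cmd \"pnpm test:changed\""
```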

@@ -268,9 +310,9 @@ Observed full-suite time on Blacksmith Testbox is about 3-4 minutes:

- 173-180s on a warmed box
- 219s on a fresh 32-vCPU box

When validating before commit/push in maintainer Testbox mode, run
`pnpm check:changed` inside the warmed box first when appropriate, then the full
suite with the profile above if broad confidence is needed.

## Examples
@@ -324,12 +366,14 @@ timeout is reached). Default timeout is 5m; use `--wait-timeout` for longer

    blacksmith testbox stop --id <ID>

Testboxes automatically shut down after being idle (default: 30 minutes).
If you need a longer session, increase the timeout at warmup time. For OpenClaw
maintainer work, use 90 minutes for long-running sessions:

    blacksmith testbox warmup ci-check-testbox.yml --idle-timeout 90
    blacksmith testbox warmup ci-build-artifacts-testbox.yml --idle-timeout 90

## With options

    blacksmith testbox warmup ci-check-testbox.yml --ref main
    blacksmith testbox warmup ci-check-testbox.yml --idle-timeout 90
    blacksmith testbox run --id <ID> "go test ./..."

`.agents/skills/discord-clawd/SKILL.md` (new file, 37 lines)
@@ -0,0 +1,37 @@
---
name: discord-clawd
description: Use to talk to the Discord-backed OpenClaw agent/session; not for archive search.
---

# Discord Clawd

Use this when the task is to talk with the Discord-backed agent/session, ask it a question, or post through that route.

For Discord archive/history/search, use `$discrawl` instead.

## Transport

Use the OpenClaw relay helper:

```bash
cd ~/Projects/agent-scripts
python3 skills/openclaw-relay/scripts/openclaw_relay.py targets
python3 skills/openclaw-relay/scripts/openclaw_relay.py resolve --target maintainers
```

If the target alias exists, prefer a private ask first:

```bash
python3 skills/openclaw-relay/scripts/openclaw_relay.py ask \
  --target maintainers \
  --message "Reply with exactly OK."
```

Use `publish` when the session should decide whether to post. Use `force-send` only when the user explicitly wants a message posted.

## Guardrails

- Resolve the target before sending real content.
- Report the target and delivery mode used.
- Do not use this for local Discord archive queries.
- Do not expose gateway tokens or session secrets.

`.agents/skills/discord-clawd/agents/openai.yaml` (new file, 4 lines)
@@ -0,0 +1,4 @@
interface:
  display_name: "Discord Clawd"
  short_description: "Talk to the Discord-backed OpenClaw agent"
  default_prompt: "Use $discord-clawd to route a private ask or explicit post through the Discord-backed OpenClaw agent/session."
@@ -7,20 +7,20 @@ description: Review, triage, close, label, comment on, or land OpenClaw PRs/issu

Use this skill for maintainer-facing GitHub workflow, not for ordinary code changes.

## Start issue and PR triage with gitcrawl

- Anytime you inspect OpenClaw issues or PRs, check local `gitcrawl` data first for related threads, duplicate attempts, and already-landed fixes.
- Use `gitcrawl` for candidate discovery and clustering; use `gh`, `gh api`, and the current checkout to verify live state before commenting, labeling, closing, or landing.
- If `gitcrawl` is missing, stale, lacks the target thread, or has no embeddings for neighbor/search commands, fall back to the GitHub search workflow below.
- Do not run expensive/update commands such as `gitcrawl sync --include-comments`, future enrichment commands, or broad reclustering unless the user asked to update the local store or stale data is blocking the decision.

Common read-only path:

```bash
gitcrawl threads openclaw/openclaw --numbers <issue-or-pr-number> --include-closed --json
gitcrawl neighbors openclaw/openclaw --number <issue-or-pr-number> --limit 12 --json
gitcrawl search openclaw/openclaw --query "<scope or title keywords>" --mode hybrid --json
gitcrawl cluster-detail openclaw/openclaw --id <cluster-id> --member-limit 20 --body-chars 280 --json
```

## Apply close and triage labels correctly
@@ -75,7 +75,7 @@ ghcrawl cluster-detail openclaw/openclaw --id <cluster-id> --member-limit 20 --b

## Search broadly before deciding

- Prefer `gitcrawl` first. Then use targeted GitHub keyword search to verify gaps, live status, comments, and candidates not present in the local store.
- Use `--repo openclaw/openclaw` with `--match title,body` first when using `gh search`.
- Add `--match comments` when triaging follow-up discussion or closed-as-duplicate chains.
- Do not stop at the first 500 results when the task requires a full search.
@@ -62,6 +62,24 @@ scenario through qa-channel, decodes the emitted protobuf spans, and verifies
the exported trace names and privacy contract. It does not require Opik,
Langfuse, or external collector credentials.

## Matrix live profiles

`pnpm openclaw qa matrix` defaults to the full `all` profile. Use explicit
profiles for faster CI/release proof:

```bash
OPENCLAW_QA_MATRIX_NO_REPLY_WINDOW_MS=3000 \
pnpm openclaw qa matrix --profile fast --fail-fast
```

- `fast`: release-critical transport contract, excluding generated image and
  deep E2EE recovery inventory.
- `transport`, `media`, `e2ee-smoke`, `e2ee-deep`, `e2ee-cli`: sharded full
  Matrix coverage.
- `QA-Lab - All Lanes` uses explicit `fast` Matrix on scheduled runs. Manual
  dispatch keeps `matrix_profile=all` as the default and always shards that full
  Matrix selection.

## QA credentials and 1Password

- Use `op` only inside `tmux` for QA secret lookup in this repo.
@@ -325,9 +325,11 @@ node --import tsx scripts/openclaw-npm-postpublish-verify.ts <published-version>

- Docker install/update coverage that exercises the published beta package
- published npm Telegram proof: dispatch Actions > `NPM Telegram Beta E2E`
  from `main` with `package_spec=openclaw@<beta-version>` and
  `provider_mode=mock-openai`, and require success. This workflow is
  maintainer-dispatched and intentionally has no `npm-release` approval gate;
  `qa-live-shared` only supplies the shared QA secrets. This is the default
  button path for installed-package onboarding, Telegram setup, and real
  Telegram E2E against the published npm package.
  Use the local `pnpm test:docker:npm-telegram-live` lane with the matching
  `OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC` and Convex CI env only as a fallback
  or debugging path.

`.agents/skills/openclaw-testing/SKILL.md` (new file, 492 lines)
@@ -0,0 +1,492 @@
---
name: openclaw-testing
description: Choose, run, rerun, or debug OpenClaw tests, CI checks, Docker E2E lanes, release validation, and the cheapest safe verification path.
---

# OpenClaw Testing

Use this skill when deciding what to test, debugging failures, rerunning CI,
or validating a change without wasting hours.

## Read First

- `docs/reference/test.md` for local test commands.
- `docs/ci.md` for CI scope, release checks, Docker chunks, and runner behavior.
- Scoped `AGENTS.md` files before editing code under a subtree.

## Default Rule

Prove the touched surface first. Do not reflexively run the whole suite.

1. Inspect the diff and classify the touched surface:
   - source: `pnpm changed:lanes --json`, then `pnpm check:changed`
   - tests only: `pnpm test:changed`
   - one failing file: `pnpm test <path-or-filter> -- --reporter=verbose`
   - workflow-only: `git diff --check`, workflow syntax/lint (`actionlint` when available)
   - docs-only: `pnpm docs:list`, docs formatter/lint only if docs tooling changed or requested
2. Reproduce narrowly before fixing.
3. Fix root cause.
4. Rerun the same narrow proof.
5. Broaden only when the touched contract demands it.

## Guardrails

- Do not kill unrelated processes or tests. If something is running elsewhere, treat it as owned by the user or another agent.
- Do not run expensive local Docker, full release checks, full `pnpm test`, or full `pnpm check` unless the user asks or the change genuinely requires it.
- Prefer GitHub Actions for release/Docker proof when the workflow already has the prepared image and secrets.
- Use `scripts/committer "<msg>" <paths...>` when committing; stage only your files.
- If deps are missing, run `pnpm install`, retry once, then report the first actionable error.

## Local Test Shortcuts

```bash
pnpm changed:lanes --json
pnpm check:changed   # changed typecheck/lint/guards; no Vitest
pnpm test:changed    # cheap smart changed Vitest targets
OPENCLAW_TEST_CHANGED_BROAD=1 pnpm test:changed
pnpm test <path-or-filter> -- --reporter=verbose
OPENCLAW_VITEST_MAX_WORKERS=1 pnpm test <path-or-filter>
```

Use targeted file paths whenever possible. Avoid raw `vitest`; use the repo
`pnpm test` wrapper so project routing, workers, and setup stay correct.
## Command Semantics

- `pnpm check` and `pnpm check:changed` do not run Vitest tests. They are for
  typecheck, lint, and guard proof.
- `pnpm test` and `pnpm test:changed` run Vitest tests.
- `pnpm test:changed` is intentionally cheap by default: direct test edits,
  sibling tests, explicit source mappings, and import-graph dependents.
- `OPENCLAW_TEST_CHANGED_BROAD=1 pnpm test:changed` is the explicit broad
  fallback for harness/config/package edits that genuinely need it.
- Do not run extension sweeps just because core changed. If a core edit is for a
  specific plugin bug, run that plugin's tests explicitly. If a public SDK or
  contract change needs consumer proof, choose the smallest representative
  plugin/contract tests first, then broaden only when the risk justifies it.
- The test wrapper prints a short `[test] passed|failed|skipped ... in ...`
  line. Vitest's own duration is still the per-shard detail.
## Routing Model

- `pnpm changed:lanes --json` answers "which check lanes does this diff touch?"
  It is used by `pnpm check:changed` for typecheck/lint/guard selection.
- `pnpm test:changed` answers "which Vitest targets are worth running now?" It
  uses the same changed path list, but applies a cheaper test-target resolver.
- Direct test edits run themselves. Source edits prefer explicit mappings,
  sibling `*.test.ts`, then import-graph dependents. Shared harness/config/root
  edits are skipped by default unless they have precise mapped tests.
- Public SDK or contract edits do not automatically run every plugin test.
  `check:changed` proves extension type contracts; the agent chooses the
  smallest plugin/contract Vitest proof that matches the actual risk.
- Use `OPENCLAW_TEST_CHANGED_BROAD=1 pnpm test:changed` only when a harness,
  config, package, or unknown-root edit really needs the broad Vitest fallback.
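
A toy sketch of the routing idea above (the path list and `case` patterns are illustrative; the real resolver uses explicit mappings and the import graph):

```bash
# Toy sketch: map changed paths to the cheapest proof (NOT the real resolver).
changed="src/agents/router.ts src/agents/router.test.ts"
need_check=0; need_test=0
for f in $changed; do
  case "$f" in
    *.test.ts) need_test=1 ;;                 # direct test edits run themselves
    *.ts)      need_check=1; need_test=1 ;;   # source edits also need lane checks
  esac
done
cmd=""
if [ "$need_check" = "1" ]; then cmd="pnpm check:changed"; fi
if [ "$need_test" = "1" ]; then cmd="${cmd:+$cmd && }pnpm test:changed"; fi
echo "$cmd"   # prints: pnpm check:changed && pnpm test:changed
```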

## CI Debugging

Start with current run state, not logs for everything:

```bash
gh run list --branch main --limit 10
gh run view <run-id> --json status,conclusion,headSha,url,jobs
gh run view <run-id> --job <job-id> --log
```

- Check exact SHA. Ignore newer unrelated `main` unless asked.
- For cancelled same-branch runs, confirm whether a newer run superseded it.
- Fetch full logs only for failed or relevant jobs.
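
One POSIX-tools way to pull just the failed job names out of a `gh run view --json jobs` payload (the inline JSON is a stand-in; a real session would pipe `gh` output in):

```bash
# Sketch: filter a `gh run view --json jobs` payload down to failed job names.
# The inline JSON stands in for real `gh` output.
json='{"jobs":[{"name":"build","conclusion":"success"},{"name":"e2e","conclusion":"failure"}]}'
failed="$(printf '%s' "$json" \
  | tr '{' '\n' \
  | grep '"conclusion":"failure"' \
  | sed 's/.*"name":"\([^"]*\)".*/\1/')"
echo "$failed"   # prints: e2e
```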

## GitHub Release Workflows

Use the smallest workflow that proves the current risk. The full umbrella is
available, but it is usually the last step after narrower proof, not the first
rerun after a focused patch.

### Full Release Validation

`Full Release Validation` (`.github/workflows/full-release-validation.yml`) is
the manual "everything before release" umbrella. It resolves a target ref, then
dispatches:

- manual `CI` for the full normal CI graph
- `OpenClaw Release Checks` for install smoke, cross-OS release checks, live and
  E2E checks, Docker release-path suites, OpenWebUI, QA Lab, fast Matrix, and
  Telegram release lanes
- optional post-publish Telegram E2E when a package spec is supplied

Run it only when validating an actual release candidate, after broad shared CI
or release orchestration changes, or when explicitly asked:

```bash
gh workflow run full-release-validation.yml \
  --repo openclaw/openclaw \
  --ref main \
  -f ref=<branch-or-sha> \
  -f workflow_ref=main \
  -f provider=openai \
  -f mode=both
```

If a full run is already active on a newer `origin/main`, prefer watching that
run over dispatching a duplicate. If you accidentally dispatch a stale duplicate,
cancel it and monitor the current run.
### Release Evidence

After release-candidate validation or before a release decision, record the
important run ids in the private `openclaw/releases-private` evidence ledger.
Use the manual `OpenClaw Release Evidence`
(`openclaw-release-evidence.yml`) workflow there. It writes durable summaries
under `evidence/<release-id>/` and commits:

- `release-evidence.md`
- `release-evidence.json`
- `index.json`
- `runs/<label>.json`

Use one run per line:

```text
full-release-validation openclaw/openclaw <run-id> blocking
package-acceptance openclaw/openclaw <run-id> blocking
release-checks openclaw/openclaw <run-id> blocking
```

Store summaries, run URLs, artifact metadata, timings, pass/fail state, and
short release-manager notes there. Do not store raw logs, provider
prompts/responses, channel transcripts, signing material, or secret-bearing
config in git; raw logs stay in Actions artifacts.

When `Full Release Validation` completes and
`OPENCLAW_RELEASES_PRIVATE_DISPATCH_TOKEN` is configured in the public repo, it
requests the private `OpenClaw Release Evidence From Full Validation` workflow.
That private workflow reads the parent full-validation run, extracts the child
CI/release-checks/Telegram run ids from the parent logs, and opens the evidence
PR automatically. If the token is absent or the run predates this wiring, trigger
that private workflow manually with the full-validation run id.
### Release Checks

`OpenClaw Release Checks` (`openclaw-release-checks.yml`) is the release child
workflow. It is broader than normal CI but narrower than the umbrella because it
does not dispatch the separate full normal CI child. It runs Package Acceptance
with `telegram_mode=mock-openai`, so the release package tarball also goes
through Telegram package QA. Use it when release-path validation is needed
without rerunning the entire umbrella.

```bash
gh workflow run openclaw-release-checks.yml \
  --repo openclaw/openclaw \
  --ref main \
  -f ref=<branch-or-sha> \
  -f provider=openai \
  -f mode=both
```
### QA Lab Matrix Profiles

`pnpm openclaw qa matrix` defaults to `--profile all`. Do not assume the CLI
default is the fast release path. Use explicit profiles:

- `--profile fast --fail-fast`: release-critical Matrix transport contract
- `--profile transport|media|e2ee-smoke|e2ee-deep|e2ee-cli`: sharded full
  Matrix proof
- `OPENCLAW_QA_MATRIX_NO_REPLY_WINDOW_MS=3000`: CI-friendly no-reply quiet
  window when paired with fast or sharded gates

`QA-Lab - All Lanes` uses explicit fast Matrix on scheduled runs; manual
dispatch keeps `matrix_profile=all` as the default and always shards that full
Matrix selection. `OpenClaw Release Checks` uses explicit fast Matrix; run the
all-lanes workflow when release investigation needs full Matrix media/E2EE
inventory.
### Reusable Live/E2E Checks

`OpenClaw Live And E2E Checks (Reusable)`
(`openclaw-live-and-e2e-checks-reusable.yml`) is the preferred entry point for
targeted live, Docker, model, and E2E proof. Inputs let you turn off unrelated
lanes:

```bash
gh workflow run openclaw-live-and-e2e-checks-reusable.yml \
  --repo openclaw/openclaw \
  --ref main \
  -f ref=<sha> \
  -f include_repo_e2e=false \
  -f include_release_path_suites=false \
  -f include_openwebui=false \
  -f include_live_suites=true \
  -f live_models_only=true \
  -f live_model_providers=fireworks
```

Useful knobs:

- `docker_lanes='<lane[,lane]>'`: run selected Docker scheduler lanes against
  prepared artifacts instead of the three release chunks.
- `include_live_suites=false`: skip live/provider suites when testing Docker
  scheduler or release packaging only.
- `live_models_only=true`: run only Docker live model coverage.
- `live_model_providers=fireworks` (or comma/space separated providers): run one
  targeted Docker live model job instead of the full provider matrix.
- blank `live_model_providers`: run the full live-model provider matrix.

For model-list or provider-selection fixes, use `live_models_only=true` plus the
specific `live_model_providers` allowlist. Confirm logs show the expected
`OPENCLAW_LIVE_PROVIDERS` and selected model ids before declaring proof.
## Docker

Docker is expensive. First inspect the scheduler without running Docker:

```bash
OPENCLAW_DOCKER_ALL_DRY_RUN=1 pnpm test:docker:all
OPENCLAW_DOCKER_ALL_DRY_RUN=1 OPENCLAW_DOCKER_ALL_LANES=install-e2e pnpm test:docker:all
OPENCLAW_DOCKER_ALL_LANES=install-e2e node scripts/test-docker-all.mjs --plan-json
```

Run one failed lane locally only when explicitly asked or when GitHub is not
usable:

```bash
OPENCLAW_DOCKER_ALL_LANES=<lane> \
OPENCLAW_DOCKER_ALL_BUILD=0 \
OPENCLAW_DOCKER_ALL_PREFLIGHT=0 \
OPENCLAW_SKIP_DOCKER_BUILD=1 \
OPENCLAW_DOCKER_E2E_BARE_IMAGE='<prepared-bare-image>' \
OPENCLAW_DOCKER_E2E_FUNCTIONAL_IMAGE='<prepared-functional-image>' \
pnpm test:docker:all
```

For release validation, prefer the reusable GitHub workflow input:

```yaml
docker_lanes: install-e2e
```

Multiple lanes are allowed:

```yaml
docker_lanes: install-e2e bundled-channel-update-acpx
```

That skips the release chunk matrix and runs one targeted Docker job against the
prepared GHCR images and the selected package artifact. Rerun commands
generated inside GitHub artifacts include `package_artifact_run_id`,
`package_artifact_name`, `docker_e2e_bare_image`, and
`docker_e2e_functional_image` when available, so failed lanes can reuse the
exact tarball and prepared images from the failed run. When the fix changes
package contents, omit those reuse inputs so the workflow packs a new tarball.
Live-only targeted reruns skip the E2E images and build only the live-test
image. Release-path normal mode remains max three Docker chunk jobs:

- `core`
- `package-update`
- `plugins-integrations`

OpenWebUI is folded into `plugins-integrations` for full release-path coverage
and keeps a standalone `openwebui` chunk only for OpenWebUI-only dispatches.
The bundled-channel runtime-dependency coverage inside `plugins-integrations`
uses the split `bundled-channel-*` and `bundled-channel-update-*` lanes rather
than the serial `bundled-channel-deps` lane, so failures produce cheap targeted
reruns for the exact channel/update scenario.
## Package Acceptance

Use the manual `Package Acceptance` workflow when the question is "does this
installable package work as a product?" rather than "does this source diff pass
Vitest?"

In release validation, treat Package Acceptance as the package-candidate shard
inside the larger release umbrella, not as a competing full-test path. Full
Release Validation and private release gauntlets should call Package Acceptance
for tarball resolution, Docker product/package proof, and optional Telegram QA
against the same resolved `package-under-test` artifact; keep orchestration,
secret policy, blocking/advisory status, and evidence rollup in the caller.

Good defaults:

```bash
gh workflow run package-acceptance.yml --ref main \
  -f source=npm \
  -f workflow_ref=main \
  -f package_spec=openclaw@beta \
  -f suite_profile=product \
  -f telegram_mode=mock-openai
```
Npm candidate selection:

- Resolve the registry immediately before dispatch:
  `npm view openclaw dist-tags --json --prefer-online --cache /tmp/openclaw-npm-cache-verify-$$`
  and `npm view openclaw@beta version dist.tarball dist.integrity --json --prefer-online --cache /tmp/openclaw-npm-cache-verify-$$`.
- If Peter asks for "latest beta", use `source=npm` with
  `package_spec=openclaw@beta`, then record the resolved version from `npm view`
  or the workflow summary.
- For reruns, release proof, or comparing one known package, prefer the exact
  immutable spec: `package_spec=openclaw@YYYY.M.D-beta.N` or
  `package_spec=openclaw@YYYY.M.D`.
- For stable package proof, use `package_spec=openclaw@latest` only when the
  question is explicitly the current stable dist-tag; otherwise pin the exact
  version.
- `source=npm` only accepts registry specs for `openclaw@beta`,
  `openclaw@latest`, or exact OpenClaw release versions. Do not pass semver
  ranges, git refs, file paths, tarball URLs, or plugin package names there.
- If the candidate is a tarball URL, use `source=url` with `package_sha256`. If
  it is an Actions tarball artifact, use `source=artifact`. If it is an
  unpublished source candidate, use `source=ref` with a trusted ref or SHA.
- Package acceptance tests exactly the selected package candidate. Do not apply
  `openclaw update --channel beta` fallback semantics here; if `beta` is absent,
  stale, older than `latest`, or points at a broken tarball, report that tag
  state instead of silently testing `latest`.
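
A tiny sketch of pinning the rerun to the resolved version rather than the moving tag (the version string is a made-up example):

```bash
# Sketch: pin the rerun to the exact version you resolved, not the moving tag.
resolved="2026.4.2-beta.3"   # e.g. what `npm view openclaw@beta version` printed
spec="openclaw@$resolved"
printf '%s\n' "-f package_spec=$spec"   # prints: -f package_spec=openclaw@2026.4.2-beta.3
```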

Profiles:

- `smoke`: quick confidence that the tarball installs, can onboard a channel,
  can run an agent turn, and basic gateway/config lanes work.
- `package`: release-package contract. Adds installer/update, doctor install
  switching, bundled plugin runtime deps, plugin install/update, and package
  repair lanes. This is the default native replacement for most Parallels
  package/update coverage.
- `product`: package profile plus broader product surfaces: MCP channels,
  cron/subagent cleanup, OpenAI web search, and OpenWebUI.
- `full`: split Docker release-path chunks with OpenWebUI.
- `custom`: exact `docker_lanes` list for a focused rerun.

Candidate sources:

- `source=npm`: `openclaw@beta`, `openclaw@latest`, or an exact release version.
- `source=ref`: pack `package_ref` using the trusted `workflow_ref` harness.
  This intentionally separates old package commits from new workflow/test code.
- `source=url`: HTTPS `.tgz` plus required `package_sha256`.
- `source=artifact`: download one `.tgz` from `artifact_run_id`/`artifact_name`.
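A minimal sketch of preparing a `source=url` dispatch. The tarball is a local stand-in for a downloaded HTTPS `.tgz`, and the `package_url` input name in the commented dispatch is an assumption for illustration:

```shell
# Compute the required package_sha256 for a source=url candidate.
printf 'candidate tarball bytes' > /tmp/openclaw-candidate.tgz
package_sha256="$(sha256sum /tmp/openclaw-candidate.tgz | awk '{print $1}')"
echo "package_sha256=${package_sha256}"
# gh workflow run package-acceptance.yml --ref main \
#   -f source=url \
#   -f package_url="https://example.com/openclaw-candidate.tgz" \
#   -f package_sha256="$package_sha256" \
#   -f suite_profile=package
```

Computing the digest locally before dispatch catches a corrupted or swapped tarball before any CI minutes are spent.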

Ref model:

- `gh workflow run ... --ref <workflow-ref>` selects the workflow file revision
  GitHub executes.
- `workflow_ref` is the trusted harness/script ref passed to reusable Docker
  E2E.
- `package_ref` is the source ref to build when `source=ref`. It can be an
  older branch/tag/SHA as long as it is reachable from an OpenClaw branch or
  release tag.

Example: run the latest package acceptance harness against an older trusted commit:

```bash
gh workflow run package-acceptance.yml --ref main \
  -f workflow_ref=main \
  -f source=ref \
  -f package_ref=<branch-or-sha> \
  -f suite_profile=package \
  -f telegram_mode=mock-openai
```

Use `telegram_mode=mock-openai` or `telegram_mode=live-frontier` when the same
resolved `package-under-test` tarball should also run through the Telegram QA
workflow in the `qa-live-shared` environment. The standalone Telegram workflow
still accepts a published npm spec for post-publish checks, but Package
Acceptance passes the resolved artifact for `source=npm`, `ref`, `url`, and
`artifact`. Use `telegram_mode=none` only when intentionally skipping Telegram
credentialed package proof for a focused rerun.

Docker E2E images never copy repo sources as the app under test: the bare image
is a Node/Git runner, and the functional image installs the same prebuilt npm
tarball that bare lanes mount. `scripts/package-openclaw-for-docker.mjs` is the
single packer for local scripts and CI, and validates the tarball inventory
before Docker consumes it. `scripts/test-docker-all.mjs --plan-json` is the
scheduler-owned CI plan for image kind, package, live image, lane, and
credential needs. Docker lane definitions live in the single scenario catalog
`scripts/lib/docker-e2e-scenarios.mjs`; planner logic lives in
`scripts/lib/docker-e2e-plan.mjs`. `scripts/docker-e2e.mjs` converts plan and
summary JSON into GitHub outputs and step summaries. Every scheduler run writes
`.artifacts/docker-tests/**/summary.json` plus `failures.json`. Read those
before rerunning. Lane entries include `command`, `rerunCommand`, status,
timing, timeout state, image kind, and log file path. The summary also includes
top-level phase timings for preflight, image build, package prep, lane pools,
and cleanup. Use `pnpm test:docker:timings <summary.json>` to rank slow lanes
and phases before deciding whether a broader rerun is justified.
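Reading the summary before any rerun can be a few lines of Node. The field names used below (`lanes[].name`, `status`, `logFile`) are assumptions inferred from the lane-entry description above, so check a real `summary.json` before scripting against it:

```shell
# Build a tiny fixture summary, then list failed lanes with their log paths.
mkdir -p /tmp/docker-tests-demo
cat > /tmp/docker-tests-demo/summary.json <<'EOF'
{"lanes":[
  {"name":"install-e2e","status":"failed","logFile":"logs/install-e2e.log"},
  {"name":"onboard","status":"passed","logFile":"logs/onboard.log"}
]}
EOF
node -e '
  const summary = require("/tmp/docker-tests-demo/summary.json");
  const failed = summary.lanes.filter((lane) => lane.status === "failed");
  for (const lane of failed) console.log(`${lane.name} -> ${lane.logFile}`);
'
```

The same pattern works against a real run artifact once the path and field names are confirmed.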

## Cheap Docker Reruns

First derive the smallest rerun command from artifacts:

```bash
pnpm test:docker:rerun <github-run-id>
pnpm test:docker:rerun .artifacts/docker-tests/<run>/failures.json
```

The script downloads Docker E2E artifacts for a GitHub run, reads
`summary.json`/`failures.json`, and prints a combined targeted workflow command
plus per-lane commands. Prefer the combined targeted command when several lanes
failed for the same patch:

```bash
gh workflow run openclaw-live-and-e2e-checks-reusable.yml \
  -f ref=<sha> \
  -f include_repo_e2e=false \
  -f include_release_path_suites=false \
  -f include_openwebui=false \
  -f docker_lanes='install-e2e bundled-channel-update-acpx' \
  -f include_live_suites=false \
  -f live_models_only=false
```

That path still runs the prepare job, so it creates a new tarball for `<sha>`.
If the SHA-tagged GHCR bare/functional image already exists, CI skips rebuilding
that image and only uploads the fresh package artifact before the targeted lane
job. Do not rerun the full three-chunk release path unless the failed lane list
or touched surface really requires it.

## Docker Expected Timings

Treat these as ballpark. Blacksmith queue time, GHCR pull speed, provider
latency, npm cache state, and Docker daemon health can dominate.

Current local timing artifact (`.artifacts/docker-tests/lane-timings.json`) has
these rough bands:

- Tiny lanes, seconds to under 1 minute:
  `agents-delete-shared-workspace` ~3s, `plugin-update` ~7s,
  `config-reload` ~14s, `pi-bundle-mcp-tools` ~15s, `onboard` ~18s,
  `session-runtime-context` ~20s, `gateway-network` ~34s, `qr` ~44s.
- Medium deterministic lanes, ~1-5 minutes:
  `npm-onboard-channel-agent` ~96s, `openai-image-auth` ~99s,
  bundled channel/update lanes usually ~90-300s when split, `openwebui` ~225s,
  `mcp-channels` ~274s.
- Heavy deterministic lanes, ~6-10 minutes:
  `bundled-channel-root-owned` ~429s,
  `bundled-channel-setup-entry` ~420s,
  `bundled-channel-load-failure` ~383s,
  `cron-mcp-cleanup` ~567s.
- Live provider lanes, often ~15-20 minutes:
  `live-gateway` ~958s, `live-models` ~1054s.
- Installer/release lanes:
  `install-e2e` and package-update paths can vary widely with npm, provider,
  and package registry behavior. Budget tens of minutes; prefer GitHub targeted
  reruns over local repeats.
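Ranking lanes slowest-first is a one-liner once the timing artifact exists. The flat `{lane: seconds}` shape below is an assumption for illustration; the real `lane-timings.json` schema may differ:

```shell
# Rank lanes slowest-first from a timing artifact (fixture data).
cat > /tmp/lane-timings.json <<'EOF'
{"qr":44,"onboard":18,"mcp-channels":274}
EOF
node -e '
  const timings = require("/tmp/lane-timings.json");
  const ranked = Object.entries(timings).sort((a, b) => b[1] - a[1]);
  for (const [lane, seconds] of ranked) console.log(`${seconds}s\t${lane}`);
'
```

`pnpm test:docker:timings <summary.json>` remains the supported path; this is only a quick manual cross-check.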

Default fallback lane timeout is 120 minutes. A timeout usually means debug the
lane log/artifacts first, not "run the whole thing again."

## Failure Workflow

1. Identify the exact failing job, SHA, lane, and artifact path.
2. Read `failures.json`, `summary.json`, and the failed lane log tail.
3. Use `pnpm test:docker:rerun <run-id|failures.json>` to generate targeted
   GitHub rerun commands.
4. If the lane has `rerunCommand`, use that only as a local starting point.
5. For Docker release failures, dispatch targeted `docker_lanes=<failed-lane>`
   on GitHub before considering local Docker.
6. Patch narrowly, then rerun the failed file/lane only.
7. Broaden to `pnpm check:changed` or CI only after the isolated proof passes.
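Steps 2-4 can be sketched as one small triage snippet. The `lanes[].rerunCommand` field name is taken from the lane-entry description earlier, and the fixture stands in for a downloaded `failures.json`:

```shell
# Print each failed lane's rerun command without executing anything.
cat > /tmp/failures.json <<'EOF'
{"lanes":[{"name":"install-e2e","rerunCommand":"pnpm test:docker install-e2e"}]}
EOF
node -e '
  const failures = require("/tmp/failures.json");
  for (const lane of failures.lanes) {
    console.log(`${lane.name}: ${lane.rerunCommand}`);
  }
'
```

Printing rather than executing keeps the human in the loop for step 5's GitHub-first dispatch decision.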

## When To Escalate

- Public SDK/plugin contract changes: run the changed gate plus relevant extension
  validation.
- Build output, lazy imports, package boundaries, or published surfaces:
  include `pnpm build`.
- Workflow edits: run `pnpm check:workflows`.
- Release branch or tag validation: use release docs and GitHub workflows; avoid
  local Docker unless Peter explicitly asks.
4
.agents/skills/openclaw-testing/agents/openai.yaml
Normal file
@@ -0,0 +1,4 @@
interface:
  display_name: "OpenClaw Testing"
  short_description: "Choose cheap, targeted OpenClaw validation"
  default_prompt: "Use $openclaw-testing to choose the cheapest safe test or CI verification path, inspect failures, and rerun only the relevant OpenClaw lane."
149
.github/actions/docker-e2e-plan/action.yml
vendored
Normal file
@@ -0,0 +1,149 @@
name: Docker E2E plan and hydrate
description: >
  Create a Docker E2E lane plan, expose GitHub outputs, and optionally hydrate
  the prebuilt package artifact plus shared Docker images needed by the plan.
inputs:
  mode:
    description: prepare, chunk, or targeted.
    required: true
  chunk:
    description: Release-path chunk for mode=chunk.
    required: false
    default: ""
  lanes:
    description: Comma/space separated lane names for targeted or prepare mode.
    required: false
    default: ""
  include-openwebui:
    description: Whether Open WebUI is included when planning release/prepare coverage.
    required: false
    default: "true"
  include-release-path-suites:
    description: Whether prepare mode should plan all release-path suites.
    required: false
    default: "false"
  hydrate-artifacts:
    description: Whether to download/pull artifacts required by the plan.
    required: false
    default: "true"
  package-artifact-name:
    description: Workflow artifact name containing openclaw-current.tgz.
    required: false
    default: docker-e2e-package
outputs:
  credentials:
    description: Comma-separated credential groups required by selected lanes.
    value: ${{ steps.plan.outputs.credentials }}
  needs_bare_image:
    description: "1 when selected lanes require the bare Docker E2E image."
    value: ${{ steps.plan.outputs.needs_bare_image }}
  needs_e2e_image:
    description: "1 when selected lanes require any Docker E2E image."
    value: ${{ steps.plan.outputs.needs_e2e_image }}
  needs_functional_image:
    description: "1 when selected lanes require the functional Docker E2E image."
    value: ${{ steps.plan.outputs.needs_functional_image }}
  needs_live_image:
    description: "1 when selected lanes require building the live Docker image."
    value: ${{ steps.plan.outputs.needs_live_image }}
  needs_package:
    description: "1 when selected lanes require the OpenClaw package tarball."
    value: ${{ steps.plan.outputs.needs_package }}
  plan_json:
    description: Path to the generated plan JSON.
    value: ${{ steps.plan.outputs.plan_json }}
runs:
  using: composite
  steps:
    - name: Plan Docker E2E lanes
      id: plan
      shell: bash
      env:
        MODE: ${{ inputs.mode }}
        CHUNK: ${{ inputs.chunk }}
        LANES: ${{ inputs.lanes }}
        INCLUDE_OPENWEBUI: ${{ inputs.include-openwebui }}
        INCLUDE_RELEASE_PATH_SUITES: ${{ inputs.include-release-path-suites }}
      run: |
        set -euo pipefail
        mkdir -p .artifacts/docker-tests

        case "$MODE" in
          prepare)
            plan_path=".artifacts/docker-tests/plan.json"
            if [[ "$INCLUDE_RELEASE_PATH_SUITES" == "true" ]]; then
              export OPENCLAW_DOCKER_ALL_PROFILE=release-path
              export OPENCLAW_DOCKER_ALL_PLAN_RELEASE_ALL=1
            elif [[ -n "$LANES" ]]; then
              export OPENCLAW_DOCKER_ALL_LANES="$LANES"
            elif [[ "$INCLUDE_OPENWEBUI" == "true" ]]; then
              export OPENCLAW_DOCKER_ALL_LANES=openwebui
            fi
            ;;
          chunk)
            if [[ -z "$CHUNK" ]]; then
              echo "chunk input is required for Docker E2E chunk planning." >&2
              exit 1
            fi
            export OPENCLAW_DOCKER_ALL_PROFILE=release-path
            export OPENCLAW_DOCKER_ALL_CHUNK="$CHUNK"
            plan_path=".artifacts/docker-tests/release-${CHUNK}-plan.json"
            ;;
          targeted)
            if [[ -z "$LANES" ]]; then
              echo "lanes input is required for Docker E2E targeted planning." >&2
              exit 1
            fi
            export OPENCLAW_DOCKER_ALL_LANES="$LANES"
            plan_path=".artifacts/docker-tests/targeted-plan.json"
            ;;
          *)
            echo "mode must be prepare, chunk, or targeted. Got: $MODE" >&2
            exit 1
            ;;
        esac

        export OPENCLAW_DOCKER_ALL_INCLUDE_OPENWEBUI="$INCLUDE_OPENWEBUI"
        node scripts/test-docker-all.mjs --plan-json > "$plan_path"
        node scripts/docker-e2e.mjs github-outputs "$plan_path" >> "$GITHUB_OUTPUT"
        echo "plan_json=$plan_path" >> "$GITHUB_OUTPUT"

    - name: Download OpenClaw Docker E2E package
      if: inputs.hydrate-artifacts == 'true' && steps.plan.outputs.needs_package == '1'
      uses: actions/download-artifact@v8
      with:
        name: ${{ inputs.package-artifact-name }}
        path: .artifacts/docker-e2e-package

    - name: Pull shared bare Docker E2E image
      if: inputs.hydrate-artifacts == 'true' && steps.plan.outputs.needs_bare_image == '1'
      shell: bash
      run: |
        set -euo pipefail
        docker pull "${OPENCLAW_DOCKER_E2E_BARE_IMAGE}"

    - name: Pull shared functional Docker E2E image
      if: inputs.hydrate-artifacts == 'true' && steps.plan.outputs.needs_functional_image == '1'
      shell: bash
      run: |
        set -euo pipefail
        docker pull "${OPENCLAW_DOCKER_E2E_FUNCTIONAL_IMAGE}"

    - name: Validate Docker E2E credentials
      if: inputs.hydrate-artifacts == 'true'
      shell: bash
      env:
        CREDENTIALS: ${{ steps.plan.outputs.credentials }}
      run: |
        set -euo pipefail
        credentials=",$CREDENTIALS,"
        if [[ "$credentials" == *",openai,"* ]]; then
          [[ -n "${OPENAI_API_KEY:-}" ]] || {
            echo "OPENAI_API_KEY is required for selected Docker E2E lanes." >&2
            exit 1
          }
        fi
        if [[ "$credentials" == *",anthropic,"* && -z "${ANTHROPIC_API_TOKEN:-}" && -z "${ANTHROPIC_API_KEY:-}" ]]; then
          echo "ANTHROPIC_API_TOKEN or ANTHROPIC_API_KEY is required for selected Docker E2E lanes." >&2
          exit 1
        fi
10
.github/labeler.yml
vendored
@@ -35,6 +35,11 @@
       - any-glob-to-any-file:
           - "extensions/google-meet/**"
           - "docs/plugins/google-meet.md"
+"plugin: migrate-hermes":
+  - changed-files:
+      - any-glob-to-any-file:
+          - "extensions/migrate-hermes/**"
+          - "docs/cli/migrate.md"
 "plugin: bonjour":
   - changed-files:
       - any-glob-to-any-file:
@@ -101,6 +106,11 @@
       - any-glob-to-any-file:
           - "extensions/slack/**"
           - "docs/channels/slack.md"
+"channel: synology-chat":
+  - changed-files:
+      - any-glob-to-any-file:
+          - "extensions/synology-chat/**"
+          - "docs/channels/synology-chat.md"
 "channel: telegram":
   - changed-files:
       - any-glob-to-any-file:
198
.github/workflows/ci-build-artifacts-testbox.yml
vendored
Normal file
@@ -0,0 +1,198 @@
name: Blacksmith Build Artifacts Testbox

on:
  workflow_dispatch:
    inputs:
      testbox_id:
        type: string
        description: "Testbox session ID"
        required: true
  pull_request:
    paths:
      - ".github/workflows/**"

permissions:
  contents: read

env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"

jobs:
  build-artifacts:
    permissions:
      contents: read
    name: "build-artifacts"
    runs-on: blacksmith-8vcpu-ubuntu-2404
    timeout-minutes: 35
    steps:
      - name: Begin Testbox
        uses: useblacksmith/begin-testbox@v2
        with:
          testbox_id: ${{ inputs.testbox_id }}

      - name: Checkout
        shell: bash
        env:
          CHECKOUT_REPO: ${{ github.repository }}
          CHECKOUT_SHA: ${{ github.sha }}
          CHECKOUT_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail

          workdir="$GITHUB_WORKSPACE"
          auth_header="$(printf 'x-access-token:%s' "$CHECKOUT_TOKEN" | base64 | tr -d '\n')"

          reset_checkout_dir() {
            mkdir -p "$workdir"
            find "$workdir" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
          }

          checkout_attempt() {
            local attempt="$1"

            reset_checkout_dir
            git init "$workdir" >/dev/null
            git config --global --add safe.directory "$workdir"
            git -C "$workdir" remote add origin "https://github.com/${CHECKOUT_REPO}"
            git -C "$workdir" config gc.auto 0

            timeout --signal=TERM 30s git -C "$workdir" \
              -c protocol.version=2 \
              -c "http.https://github.com/.extraheader=AUTHORIZATION: basic ${auth_header}" \
              fetch --no-tags --prune --no-recurse-submodules --depth=1 origin \
              "+${CHECKOUT_SHA}:refs/remotes/origin/ci-target" || return 1

            git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
            test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
            echo "checkout attempt ${attempt}/5 succeeded"
          }

          for attempt in 1 2 3 4 5; do
            if checkout_attempt "$attempt"; then
              exit 0
            fi
            echo "checkout attempt ${attempt}/5 failed"
            sleep $((attempt * 5))
          done

          echo "checkout failed after 5 attempts" >&2
          exit 1

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          install-bun: "false"

      - name: Resolve release dist cache seeds
        id: dist-cache-seeds
        shell: bash
        run: |
          set -euo pipefail

          cache_prefix="${RUNNER_OS}-dist-build-"
          declare -A seen=()

          resolve_tag_sha() {
            local tag="$1"
            local direct=""
            local peeled=""

            while read -r sha ref; do
              if [[ "$ref" == "refs/tags/${tag}^{}" ]]; then
                peeled="$sha"
              elif [[ "$ref" == "refs/tags/${tag}" ]]; then
                direct="$sha"
              fi
            done < <(git ls-remote --tags origin "refs/tags/${tag}" "refs/tags/${tag}^{}")

            printf '%s\n' "${peeled:-$direct}"
          }

          {
            echo "restore-keys<<EOF"
            for dist_tag in beta latest; do
              version="$(npm view "openclaw@${dist_tag}" version 2>/dev/null || true)"
              if [[ -z "$version" ]]; then
                echo "Could not resolve npm dist-tag ${dist_tag}; skipping cache seed." >&2
                continue
              fi

              sha="$(resolve_tag_sha "v${version}")"
              if [[ -z "$sha" ]]; then
                echo "Could not resolve git tag v${version}; skipping cache seed." >&2
                continue
              fi

              key="${cache_prefix}${sha}"
              if [[ -z "${seen[$key]+x}" ]]; then
                echo "$key"
                seen[$key]=1
              fi
            done
            echo "${cache_prefix}"
            echo "EOF"
          } >> "$GITHUB_OUTPUT"

      - name: Restore dist build cache
        id: dist-cache
        uses: actions/cache/restore@v5
        with:
          path: |
            .artifacts/build-all-cache/
            dist/
            dist-runtime/
          key: ${{ runner.os }}-dist-build-${{ github.sha }}
          restore-keys: ${{ steps.dist-cache-seeds.outputs.restore-keys }}

      - name: Build dist on cache miss
        if: steps.dist-cache.outputs.cache-hit != 'true'
        run: pnpm build:ci-artifacts

      - name: Build Control UI on cache miss
        if: steps.dist-cache.outputs.cache-hit != 'true'
        run: pnpm ui:build

      - name: Verify build artifacts
        shell: bash
        run: |
          set -euo pipefail

          test -d dist
          test -d dist-runtime
          if [[ ! -f dist/index.js && ! -f dist/index.mjs ]]; then
            echo "Missing dist/index.js or dist/index.mjs" >&2
            exit 1
          fi
          test -f dist/build-info.json
          test -f dist/control-ui/index.html

      - name: Save dist build cache
        if: steps.dist-cache.outputs.cache-hit != 'true'
        uses: actions/cache/save@v5
        with:
          path: |
            .artifacts/build-all-cache/
            dist/
            dist-runtime/
          key: ${{ runner.os }}-dist-build-${{ github.sha }}

      - name: Prepare Testbox shell
        shell: bash
        run: |
          set -euo pipefail

          git fetch --no-tags --depth=50 origin "+refs/heads/main:refs/remotes/origin/main"

          node_bin="$(dirname "$(node -p 'process.execPath')")"
          pnpm_bin="$(command -v pnpm)"
          sudo ln -sf "$node_bin/node" /usr/local/bin/node
          sudo ln -sf "$node_bin/npm" /usr/local/bin/npm
          sudo ln -sf "$node_bin/npx" /usr/local/bin/npx
          sudo ln -sf "$node_bin/corepack" /usr/local/bin/corepack
          sudo ln -sf "$pnpm_bin" /usr/local/bin/pnpm

      - name: Run Testbox
        uses: useblacksmith/run-testbox@v2
        if: always()
        env:
          FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
209
.github/workflows/ci.yml
vendored
@@ -1,6 +1,13 @@
 name: CI
 
 on:
+  workflow_dispatch:
+    inputs:
+      target_ref:
+        description: Optional branch, tag, or full commit SHA to validate instead of the workflow ref
+        required: false
+        default: ""
+        type: string
   push:
     branches: [main]
     paths-ignore:
@@ -13,8 +20,8 @@ permissions:
   contents: read
 
 concurrency:
-  group: ${{ github.event_name == 'pull_request' && format('{0}-v7-{1}', github.workflow, github.event.pull_request.number) || (github.repository == 'openclaw/openclaw' && format('{0}-v7-{1}', github.workflow, github.ref) || format('{0}-v7-{1}-{2}', github.workflow, github.ref, github.sha)) }}
-  cancel-in-progress: true
+  group: ${{ github.event_name == 'workflow_dispatch' && format('{0}-manual-v1-{1}', github.workflow, github.run_id) || (github.event_name == 'pull_request' && format('{0}-v7-{1}', github.workflow, github.event.pull_request.number) || (github.repository == 'openclaw/openclaw' && format('{0}-v7-{1}', github.workflow, github.ref) || format('{0}-v7-{1}-{2}', github.workflow, github.ref, github.sha))) }}
+  cancel-in-progress: ${{ github.event_name != 'workflow_dispatch' }}
 
 env:
   FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
@@ -29,6 +36,7 @@ jobs:
     runs-on: ubuntu-24.04
     timeout-minutes: 20
     outputs:
+      checkout_sha: ${{ steps.checkout_ref.outputs.sha }}
       docs_only: ${{ steps.manifest.outputs.docs_only }}
       docs_changed: ${{ steps.manifest.outputs.docs_changed }}
       run_node: ${{ steps.manifest.outputs.run_node }}
@@ -37,8 +45,6 @@ jobs:
       run_skills_python: ${{ steps.manifest.outputs.run_skills_python }}
       run_skills_python_job: ${{ steps.manifest.outputs.run_skills_python_job }}
       run_windows: ${{ steps.manifest.outputs.run_windows }}
-      has_changed_extensions: ${{ steps.manifest.outputs.has_changed_extensions }}
-      changed_extensions_matrix: ${{ steps.manifest.outputs.changed_extensions_matrix }}
       run_build_artifacts: ${{ steps.manifest.outputs.run_build_artifacts }}
       run_checks_fast_core: ${{ steps.manifest.outputs.run_checks_fast_core }}
       run_checks_fast: ${{ steps.manifest.outputs.run_checks_fast }}
@@ -51,8 +57,6 @@ jobs:
       checks_node_core_nondist_matrix: ${{ steps.manifest.outputs.checks_node_core_nondist_matrix }}
       run_checks_node_core_dist: ${{ steps.manifest.outputs.run_checks_node_core_dist }}
       checks_node_core_dist_matrix: ${{ steps.manifest.outputs.checks_node_core_dist_matrix }}
-      run_extension_fast: ${{ steps.manifest.outputs.run_extension_fast }}
-      extension_fast_matrix: ${{ steps.manifest.outputs.extension_fast_matrix }}
       run_check: ${{ steps.manifest.outputs.run_check }}
       run_check_additional: ${{ steps.manifest.outputs.run_check_additional }}
       run_build_smoke: ${{ steps.manifest.outputs.run_build_smoke }}
@@ -69,12 +73,18 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v6
         with:
+          ref: ${{ inputs.target_ref || github.sha }}
           fetch-depth: 1
           fetch-tags: false
           persist-credentials: false
           submodules: false
 
+      - name: Resolve checkout SHA
+        id: checkout_ref
+        run: echo "sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
+
       - name: Ensure preflight base commit
+        if: github.event_name != 'workflow_dispatch'
         uses: ./.github/actions/ensure-base-commit
         with:
           base-sha: ${{ github.event_name == 'push' && github.event.before || github.event.pull_request.base.sha }}
@@ -82,11 +92,12 @@ jobs:
 
       - name: Detect docs-only changes
         id: docs_scope
+        if: github.event_name != 'workflow_dispatch'
         uses: ./.github/actions/detect-docs-changes
 
       - name: Detect changed scopes
         id: changed_scope
-        if: steps.docs_scope.outputs.docs_only != 'true'
+        if: github.event_name != 'workflow_dispatch' && steps.docs_scope.outputs.docs_only != 'true'
         shell: bash
         run: |
           set -euo pipefail
@@ -99,45 +110,20 @@ jobs:
 
           node scripts/ci-changed-scope.mjs --base "$BASE" --head HEAD
 
-      - name: Detect changed extensions
-        id: changed_extensions
-        if: steps.docs_scope.outputs.docs_only != 'true' && steps.changed_scope.outputs.run_node == 'true'
-        env:
-          BASE_SHA: ${{ github.event_name == 'push' && github.event.before || github.event.pull_request.base.sha }}
-          BASE_REF: ${{ github.event_name == 'push' && github.ref_name || github.event.pull_request.base.ref }}
-        run: |
-          node --input-type=module <<'EOF'
-          import { appendFileSync } from "node:fs";
-          import { listChangedExtensionIds } from "./scripts/lib/changed-extensions.mjs";
-
-          const extensionIds = listChangedExtensionIds({
-            base: process.env.BASE_SHA,
-            head: "HEAD",
-            fallbackBaseRef: process.env.BASE_REF,
-            unavailableBaseBehavior: "all",
-          });
-          const matrix = JSON.stringify({ include: extensionIds.map((extension) => ({ extension })) });
-
-          appendFileSync(process.env.GITHUB_OUTPUT, `has_changed_extensions=${extensionIds.length > 0}\n`, "utf8");
-          appendFileSync(process.env.GITHUB_OUTPUT, `changed_extensions_matrix=${matrix}\n`, "utf8");
-          EOF
-
       - name: Build CI manifest
         id: manifest
         env:
-          OPENCLAW_CI_DOCS_ONLY: ${{ steps.docs_scope.outputs.docs_only }}
-          OPENCLAW_CI_DOCS_CHANGED: ${{ steps.docs_scope.outputs.docs_changed }}
-          OPENCLAW_CI_RUN_NODE: ${{ steps.changed_scope.outputs.run_node || 'false' }}
-          OPENCLAW_CI_RUN_MACOS: ${{ steps.changed_scope.outputs.run_macos || 'false' }}
-          OPENCLAW_CI_RUN_ANDROID: ${{ steps.changed_scope.outputs.run_android || 'false' }}
-          OPENCLAW_CI_RUN_WINDOWS: ${{ steps.changed_scope.outputs.run_windows || 'false' }}
-          OPENCLAW_CI_RUN_NODE_FAST_ONLY: ${{ steps.changed_scope.outputs.run_node_fast_only || 'false' }}
-          OPENCLAW_CI_RUN_NODE_FAST_PLUGIN_CONTRACTS: ${{ steps.changed_scope.outputs.run_node_fast_plugin_contracts || 'false' }}
-          OPENCLAW_CI_RUN_NODE_FAST_CI_ROUTING: ${{ steps.changed_scope.outputs.run_node_fast_ci_routing || 'false' }}
-          OPENCLAW_CI_RUN_SKILLS_PYTHON: ${{ steps.changed_scope.outputs.run_skills_python || 'false' }}
-          OPENCLAW_CI_RUN_CONTROL_UI_I18N: ${{ steps.changed_scope.outputs.run_control_ui_i18n || 'false' }}
-          OPENCLAW_CI_HAS_CHANGED_EXTENSIONS: ${{ steps.changed_extensions.outputs.has_changed_extensions || 'false' }}
-          OPENCLAW_CI_CHANGED_EXTENSIONS_MATRIX: ${{ steps.changed_extensions.outputs.changed_extensions_matrix || '{"include":[]}' }}
+          OPENCLAW_CI_DOCS_ONLY: ${{ github.event_name == 'workflow_dispatch' && 'false' || steps.docs_scope.outputs.docs_only }}
+          OPENCLAW_CI_DOCS_CHANGED: ${{ github.event_name == 'workflow_dispatch' && 'true' || steps.docs_scope.outputs.docs_changed }}
+          OPENCLAW_CI_RUN_NODE: ${{ github.event_name == 'workflow_dispatch' && 'true' || steps.changed_scope.outputs.run_node || 'false' }}
+          OPENCLAW_CI_RUN_MACOS: ${{ github.event_name == 'workflow_dispatch' && 'true' || steps.changed_scope.outputs.run_macos || 'false' }}
+          OPENCLAW_CI_RUN_ANDROID: ${{ github.event_name == 'workflow_dispatch' && 'true' || steps.changed_scope.outputs.run_android || 'false' }}
+          OPENCLAW_CI_RUN_WINDOWS: ${{ github.event_name == 'workflow_dispatch' && 'true' || steps.changed_scope.outputs.run_windows || 'false' }}
+          OPENCLAW_CI_RUN_NODE_FAST_ONLY: ${{ github.event_name == 'workflow_dispatch' && 'false' || steps.changed_scope.outputs.run_node_fast_only || 'false' }}
+          OPENCLAW_CI_RUN_NODE_FAST_PLUGIN_CONTRACTS: ${{ github.event_name == 'workflow_dispatch' && 'false' || steps.changed_scope.outputs.run_node_fast_plugin_contracts || 'false' }}
+          OPENCLAW_CI_RUN_NODE_FAST_CI_ROUTING: ${{ github.event_name == 'workflow_dispatch' && 'false' || steps.changed_scope.outputs.run_node_fast_ci_routing || 'false' }}
+          OPENCLAW_CI_RUN_SKILLS_PYTHON: ${{ github.event_name == 'workflow_dispatch' && 'true' || steps.changed_scope.outputs.run_skills_python || 'false' }}
+          OPENCLAW_CI_RUN_CONTROL_UI_I18N: ${{ github.event_name == 'workflow_dispatch' && 'true' || steps.changed_scope.outputs.run_control_ui_i18n || 'false' }}
           OPENCLAW_CI_REPOSITORY: ${{ github.repository }}
         run: |
           node --input-type=module <<'EOF'
@@ -161,18 +147,8 @@ jobs:
             return fallback;
           };
 
-          const parseJson = (value, fallback) => {
-            try {
-              return value ? JSON.parse(value) : fallback;
-            } catch {
-              return fallback;
-            }
-          };
-
           const createMatrix = (include) => ({ include });
           const outputPath = process.env.GITHUB_OUTPUT;
-          const eventName = process.env.GITHUB_EVENT_NAME ?? "pull_request";
-          const isPush = eventName === "push";
           const isCanonicalRepository = process.env.OPENCLAW_CI_REPOSITORY === "openclaw/openclaw";
           const docsOnly = parseBoolean(process.env.OPENCLAW_CI_DOCS_ONLY);
           const docsChanged = parseBoolean(process.env.OPENCLAW_CI_DOCS_CHANGED);
@@ -197,11 +173,6 @@ jobs:
           const runSkillsPython = parseBoolean(process.env.OPENCLAW_CI_RUN_SKILLS_PYTHON) && !docsOnly;
           const runControlUiI18n =
             parseBoolean(process.env.OPENCLAW_CI_RUN_CONTROL_UI_I18N) && !docsOnly;
-          const hasChangedExtensions =
-            parseBoolean(process.env.OPENCLAW_CI_HAS_CHANGED_EXTENSIONS) && !docsOnly;
-          const changedExtensionsMatrix = hasChangedExtensions
-            ? parseJson(process.env.OPENCLAW_CI_CHANGED_EXTENSIONS_MATRIX, { include: [] })
-            : { include: [] };
           const extensionTestShardCount = isCanonicalRepository
             ? DEFAULT_EXTENSION_TEST_SHARD_COUNT
             : Math.max(DEFAULT_EXTENSION_TEST_SHARD_COUNT, 36);
@@ -271,8 +242,6 @@ jobs:
             run_android: runAndroid,
             run_skills_python: runSkillsPython,
             run_windows: runWindows,
-            has_changed_extensions: hasChangedExtensions,
-            changed_extensions_matrix: changedExtensionsMatrix,
             run_build_artifacts: runNodeFull,
             run_checks_fast_core: runChecksFastCore,
             run_checks_fast: runNodeFull,
@@ -293,15 +262,6 @@ jobs:
             checks_node_core_nondist_matrix: createMatrix(nodeTestNonDistShards),
             run_checks_node_core_dist: nodeTestDistShards.length > 0,
             checks_node_core_dist_matrix: createMatrix(nodeTestDistShards),
-            run_extension_fast: hasChangedExtensions && !isPush,
-            extension_fast_matrix: createMatrix(
-              hasChangedExtensions && !isPush
-                ? (changedExtensionsMatrix.include ?? []).map((entry) => ({
-                    check_name: `extension-fast-${entry.extension}`,
-                    extension: entry.extension,
-                  }))
-                : [],
-            ),
             run_check: runNodeFull,
             run_check_additional: runNodeFull,
             run_build_smoke: runNodeFull,
@@ -354,12 +314,14 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v6
         with:
+          ref: ${{ inputs.target_ref || github.sha }}
           fetch-depth: 1
           fetch-tags: false
           persist-credentials: false
           submodules: false
 
       - name: Ensure security base commit
+        if: github.event_name != 'workflow_dispatch'
         uses: ./.github/actions/ensure-base-commit
         with:
           base-sha: ${{ github.event_name == 'push' && github.event.before || github.event.pull_request.base.sha }}
@@ -443,6 +405,7 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v6
         with:
+          ref: ${{ inputs.target_ref || github.sha }}
           fetch-depth: 1
           fetch-tags: false
           persist-credentials: false
@@ -505,7 +468,7 @@ jobs:
         shell: bash
         env:
           CHECKOUT_REPO: ${{ github.repository }}
-          CHECKOUT_SHA: ${{ github.sha }}
+          CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
           CHECKOUT_TOKEN: ${{ github.token }}
         run: |
           set -euo pipefail
@@ -577,7 +540,7 @@ jobs:
           path: |
             dist/
             dist-runtime/
-          key: ${{ runner.os }}-dist-build-${{ github.sha }}
+          key: ${{ runner.os }}-dist-build-${{ needs.preflight.outputs.checkout_sha }}
 
       - name: Pack built runtime artifacts
         run: tar --posix -cf dist-runtime-build.tar.zst --use-compress-program zstdmt dist dist-runtime
@@ -706,7 +669,7 @@ jobs:
         shell: bash
         env:
           CHECKOUT_REPO: ${{ github.repository }}
-          CHECKOUT_SHA: ${{ github.sha }}
+          CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
           CHECKOUT_TOKEN: ${{ github.token }}
         run: |
           set -euo pipefail
@@ -801,7 +764,7 @@
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -904,7 +867,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -972,7 +935,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -1084,7 +1047,7 @@ jobs:
|
||||
contents: read
|
||||
name: checks-node-compat-node22
|
||||
needs: [preflight]
|
||||
if: needs.preflight.outputs.run_build_artifacts == 'true' && github.event_name == 'push'
|
||||
if: needs.preflight.outputs.run_build_artifacts == 'true' && github.event_name == 'workflow_dispatch'
|
||||
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-4vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
|
||||
timeout-minutes: 60
|
||||
steps:
|
||||
@@ -1092,7 +1055,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -1172,7 +1135,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -1323,84 +1286,6 @@ jobs:
|
||||
exit 1
|
||||
fi
|
||||
|
||||
extension-fast:
|
||||
permissions:
|
||||
contents: read
|
||||
name: "extension-fast"
|
||||
needs: [preflight]
|
||||
if: needs.preflight.outputs.run_extension_fast == 'true'
|
||||
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-8vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
|
||||
timeout-minutes: 60
|
||||
strategy:
|
||||
fail-fast: false
|
||||
matrix: ${{ fromJson(needs.preflight.outputs.extension_fast_matrix) }}
|
||||
steps:
|
||||
- name: Checkout
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
|
||||
workdir="$GITHUB_WORKSPACE"
|
||||
auth_header="$(printf 'x-access-token:%s' "$CHECKOUT_TOKEN" | base64 | tr -d '\n')"
|
||||
|
||||
reset_checkout_dir() {
|
||||
mkdir -p "$workdir"
|
||||
find "$workdir" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
|
||||
}
|
||||
|
||||
checkout_attempt() {
|
||||
local attempt="$1"
|
||||
|
||||
reset_checkout_dir
|
||||
git init "$workdir" >/dev/null
|
||||
git config --global --add safe.directory "$workdir"
|
||||
git -C "$workdir" remote add origin "https://github.com/${CHECKOUT_REPO}"
|
||||
git -C "$workdir" config gc.auto 0
|
||||
|
||||
timeout --signal=TERM 30s git -C "$workdir" \
|
||||
-c protocol.version=2 \
|
||||
-c "http.https://github.com/.extraheader=AUTHORIZATION: basic ${auth_header}" \
|
||||
fetch --no-tags --prune --no-recurse-submodules --depth=1 origin \
|
||||
"+${CHECKOUT_SHA}:refs/remotes/origin/ci-target" || return 1
|
||||
|
||||
git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
|
||||
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
|
||||
echo "checkout attempt ${attempt}/5 succeeded"
|
||||
}
|
||||
|
||||
for attempt in 1 2 3 4 5; do
|
||||
if checkout_attempt "$attempt"; then
|
||||
exit 0
|
||||
fi
|
||||
echo "checkout attempt ${attempt}/5 failed"
|
||||
sleep $((attempt * 5))
|
||||
done
|
||||
|
||||
echo "checkout failed after 5 attempts" >&2
|
||||
exit 1
|
||||
|
||||
- name: Setup Node environment
|
||||
uses: ./.github/actions/setup-node-env
|
||||
with:
|
||||
install-bun: "false"
|
||||
|
||||
- name: Run changed extension tests
|
||||
env:
|
||||
OPENCLAW_CHANGED_EXTENSION: ${{ matrix.extension }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
if [ "$OPENCLAW_CHANGED_EXTENSION" = "telegram" ]; then
|
||||
export OPENCLAW_VITEST_MAX_WORKERS=1
|
||||
export NODE_OPTIONS="${NODE_OPTIONS:+$NODE_OPTIONS }--max-old-space-size=6144"
|
||||
pnpm test:extension "$OPENCLAW_CHANGED_EXTENSION" -- --pool=forks
|
||||
exit 0
|
||||
fi
|
||||
pnpm test:extension "$OPENCLAW_CHANGED_EXTENSION"
|
||||
|
||||
# Types, lint, and format check shards.
|
||||
check-shard:
|
||||
permissions:
|
||||
@@ -1437,7 +1322,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -1569,7 +1454,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -1767,7 +1652,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
@@ -1830,6 +1715,7 @@ jobs:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v6
|
||||
with:
|
||||
ref: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
persist-credentials: false
|
||||
submodules: false
|
||||
|
||||
@@ -1872,6 +1758,7 @@ jobs:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v6
|
||||
with:
|
||||
ref: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
persist-credentials: false
|
||||
submodules: false
|
||||
|
||||
@@ -1976,6 +1863,7 @@ jobs:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v6
|
||||
with:
|
||||
ref: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
persist-credentials: false
|
||||
submodules: false
|
||||
|
||||
@@ -2016,6 +1904,7 @@ jobs:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v6
|
||||
with:
|
||||
ref: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
persist-credentials: false
|
||||
submodules: false
|
||||
|
||||
@@ -2116,7 +2005,7 @@ jobs:
|
||||
shell: bash
|
||||
env:
|
||||
CHECKOUT_REPO: ${{ github.repository }}
|
||||
CHECKOUT_SHA: ${{ github.sha }}
|
||||
CHECKOUT_SHA: ${{ needs.preflight.outputs.checkout_sha }}
|
||||
CHECKOUT_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
|
||||
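The checkout retry above loops up to five attempts with a linearly growing sleep between failures. The same shape can be sketched standalone; here `flaky_fetch` is a hypothetical stand-in for the real `checkout_attempt`, and the sleep multiplier is zeroed so the sketch runs instantly (the workflow uses `attempt * 5` seconds):

```shell
#!/usr/bin/env bash
# Retry-with-linear-backoff sketch, assuming a transient failure that
# clears on the third attempt (flaky_fetch is hypothetical).
set -euo pipefail

calls=0
flaky_fetch() {
  calls=$((calls + 1))
  # Fail the first two calls to simulate a transient network error.
  [ "$calls" -ge 3 ]
}

retry_with_backoff() {
  local attempt
  for attempt in 1 2 3 4 5; do
    if flaky_fetch; then
      echo "attempt ${attempt}/5 succeeded"
      return 0
    fi
    echo "attempt ${attempt}/5 failed" >&2
    sleep $((attempt * 0))  # the real workflow sleeps $((attempt * 5))
  done
  echo "failed after 5 attempts" >&2
  return 1
}

retry_with_backoff
```

The `exit 0` inside the workflow's loop plays the role of `return 0` here: success short-circuits the remaining attempts.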
169 .github/workflows/docker-release.yml vendored
@@ -63,7 +63,7 @@ jobs:

# KEEP THIS WORKFLOW ON GITHUB-HOSTED RUNNERS.
# DO NOT MOVE IT BACK TO BLACKSMITH WITHOUT RE-VALIDATING TAG BUILDS AND BACKFILLS.
# Build amd64 images (default + slim share the build stage cache)
# Build amd64 image. Default and slim tags point to the same slim runtime.
build-amd64:
needs: [approve_manual_backfill]
if: ${{ always() && (github.event_name != 'workflow_dispatch' || needs.approve_manual_backfill.result == 'success') }}
@@ -74,7 +74,6 @@ jobs:
contents: read
outputs:
digest: ${{ steps.build.outputs.digest }}
slim-digest: ${{ steps.build-slim.outputs.digest }}
steps:
- name: Checkout
uses: actions/checkout@v6
@@ -117,12 +116,7 @@ jobs:
fi
{
echo "value<<EOF"
printf "%s\n" "${tags[@]}"
echo "EOF"
} >> "$GITHUB_OUTPUT"
{
echo "slim<<EOF"
printf "%s\n" "${slim_tags[@]}"
printf "%s\n" "${tags[@]}" "${slim_tags[@]}"
echo "EOF"
} >> "$GITHUB_OUTPUT"

@@ -163,27 +157,11 @@ jobs:
OPENCLAW_EXTENSIONS=diagnostics-otel
tags: ${{ steps.tags.outputs.value }}
labels: ${{ steps.labels.outputs.value }}
provenance: false
sbom: true
provenance: mode=max
push: true

- name: Build and push amd64 slim image
id: build-slim
# WARNING: KEEP THE OFFICIAL DOCKER ACTION HERE; DO NOT SWITCH THIS BACK TO BLACKSMITH BLINDLY.
uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0
with:
context: .
platforms: linux/amd64
cache-from: type=gha,scope=docker-release-amd64
cache-to: type=gha,mode=max,scope=docker-release-amd64
build-args: |
OPENCLAW_EXTENSIONS=diagnostics-otel
OPENCLAW_VARIANT=slim
tags: ${{ steps.tags.outputs.slim }}
labels: ${{ steps.labels.outputs.value }}
provenance: false
push: true

# Build arm64 images (default + slim share the build stage cache)
# Build arm64 image. Default and slim tags point to the same slim runtime.
build-arm64:
needs: [approve_manual_backfill]
if: ${{ always() && (github.event_name != 'workflow_dispatch' || needs.approve_manual_backfill.result == 'success') }}
@@ -194,7 +172,6 @@ jobs:
contents: read
outputs:
digest: ${{ steps.build.outputs.digest }}
slim-digest: ${{ steps.build-slim.outputs.digest }}
steps:
- name: Checkout
uses: actions/checkout@v6
@@ -237,12 +214,7 @@ jobs:
fi
{
echo "value<<EOF"
printf "%s\n" "${tags[@]}"
echo "EOF"
} >> "$GITHUB_OUTPUT"
{
echo "slim<<EOF"
printf "%s\n" "${slim_tags[@]}"
printf "%s\n" "${tags[@]}" "${slim_tags[@]}"
echo "EOF"
} >> "$GITHUB_OUTPUT"

@@ -283,24 +255,8 @@ jobs:
OPENCLAW_EXTENSIONS=diagnostics-otel
tags: ${{ steps.tags.outputs.value }}
labels: ${{ steps.labels.outputs.value }}
provenance: false
push: true

- name: Build and push arm64 slim image
id: build-slim
# WARNING: KEEP THE OFFICIAL DOCKER ACTION HERE; DO NOT SWITCH THIS BACK TO BLACKSMITH BLINDLY.
uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0
with:
context: .
platforms: linux/arm64
cache-from: type=gha,scope=docker-release-arm64
cache-to: type=gha,mode=max,scope=docker-release-arm64
build-args: |
OPENCLAW_EXTENSIONS=diagnostics-otel
OPENCLAW_VARIANT=slim
tags: ${{ steps.tags.outputs.slim }}
labels: ${{ steps.labels.outputs.value }}
provenance: false
sbom: true
provenance: mode=max
push: true

# Create multi-platform manifests
@@ -357,16 +313,11 @@ jobs:
fi
{
echo "value<<EOF"
printf "%s\n" "${tags[@]}"
echo "EOF"
} >> "$GITHUB_OUTPUT"
{
echo "slim<<EOF"
printf "%s\n" "${slim_tags[@]}"
printf "%s\n" "${tags[@]}" "${slim_tags[@]}"
echo "EOF"
} >> "$GITHUB_OUTPUT"

- name: Create and push default manifest
- name: Create and push manifest
shell: bash
env:
TAGS: ${{ steps.tags.outputs.value }}
@@ -384,20 +335,94 @@ jobs:
"${AMD64_DIGEST}" \
"${ARM64_DIGEST}"

- name: Create and push slim manifest
verify-attestations:
needs: [create-manifest]
if: ${{ always() && needs.create-manifest.result == 'success' }}
runs-on: ubuntu-24.04
permissions:
contents: read
packages: read
steps:
- name: Checkout
uses: actions/checkout@v6
with:
fetch-depth: 1

- name: Set up Docker Builder
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4

- name: Login to GitHub Container Registry
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Resolve image refs
id: refs
shell: bash
env:
SLIM_TAGS: ${{ steps.tags.outputs.slim }}
AMD64_SLIM_DIGEST: ${{ needs.build-amd64.outputs.slim-digest }}
ARM64_SLIM_DIGEST: ${{ needs.build-arm64.outputs.slim-digest }}
IMAGE: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
SOURCE_REF: ${{ github.event_name == 'workflow_dispatch' && format('refs/tags/{0}', inputs.tag) || github.ref }}
IS_MANUAL_BACKFILL: ${{ github.event_name == 'workflow_dispatch' && '1' || '0' }}
run: |
set -euo pipefail
mapfile -t tags <<< "${SLIM_TAGS}"
args=()
for tag in "${tags[@]}"; do
[ -z "$tag" ] && continue
args+=("-t" "$tag")
done
docker buildx imagetools create "${args[@]}" \
"${AMD64_SLIM_DIGEST}" \
"${ARM64_SLIM_DIGEST}"
multi_refs=()
slim_multi_refs=()
amd64_refs=()
arm64_refs=()
if [[ "${SOURCE_REF}" == "refs/heads/main" ]]; then
multi_refs+=("${IMAGE}:main")
slim_multi_refs+=("${IMAGE}:main-slim")
amd64_refs+=("${IMAGE}:main-amd64" "${IMAGE}:main-slim-amd64")
arm64_refs+=("${IMAGE}:main-arm64" "${IMAGE}:main-slim-arm64")
fi
if [[ "${SOURCE_REF}" == refs/tags/v* ]]; then
version="${SOURCE_REF#refs/tags/v}"
multi_refs+=("${IMAGE}:${version}")
slim_multi_refs+=("${IMAGE}:${version}-slim")
amd64_refs+=("${IMAGE}:${version}-amd64" "${IMAGE}:${version}-slim-amd64")
arm64_refs+=("${IMAGE}:${version}-arm64" "${IMAGE}:${version}-slim-arm64")
if [[ "${IS_MANUAL_BACKFILL}" != "1" && "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[0-9]+)?$ ]]; then
multi_refs+=("${IMAGE}:latest")
slim_multi_refs+=("${IMAGE}:slim")
fi
fi
if [[ ${#multi_refs[@]} -eq 0 || ${#amd64_refs[@]} -eq 0 || ${#arm64_refs[@]} -eq 0 ]]; then
echo "::error::No Docker image refs resolved for ref ${SOURCE_REF}"
exit 1
fi
{
echo "multi<<EOF"
printf "%s\n" "${multi_refs[@]}" "${slim_multi_refs[@]}"
echo "EOF"
echo "amd64<<EOF"
printf "%s\n" "${amd64_refs[@]}"
echo "EOF"
echo "arm64<<EOF"
printf "%s\n" "${arm64_refs[@]}"
echo "EOF"
} >> "$GITHUB_OUTPUT"

- name: Verify Docker attestations
shell: bash
env:
MULTI_REFS: ${{ steps.refs.outputs.multi }}
AMD64_REFS: ${{ steps.refs.outputs.amd64 }}
ARM64_REFS: ${{ steps.refs.outputs.arm64 }}
run: |
set -euo pipefail
mapfile -t multi_refs <<< "${MULTI_REFS}"
mapfile -t amd64_refs <<< "${AMD64_REFS}"
mapfile -t arm64_refs <<< "${ARM64_REFS}"

node scripts/verify-docker-attestations.mjs \
--platform linux/amd64 \
--platform linux/arm64 \
"${multi_refs[@]}"
node scripts/verify-docker-attestations.mjs \
--platform linux/amd64 \
"${amd64_refs[@]}"
node scripts/verify-docker-attestations.mjs \
--platform linux/arm64 \
"${arm64_refs[@]}"
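The tag-assembly steps above write multiline step outputs using GitHub Actions' `name<<EOF` heredoc convention for `$GITHUB_OUTPUT`. A minimal standalone sketch, with `GITHUB_OUTPUT` pointed at a scratch file and hypothetical tag values so it runs outside Actions:

```shell
#!/usr/bin/env bash
# Emulate a workflow step emitting a multiline output named "value".
set -euo pipefail

# Outside Actions there is no runner-provided file, so use a temp file.
GITHUB_OUTPUT="$(mktemp)"

# Hypothetical tags, standing in for the workflow's computed arrays.
tags=("ghcr.io/example/app:main" "ghcr.io/example/app:1.2.3")
slim_tags=("ghcr.io/example/app:main-slim")

# One heredoc block per output name; the runner parses everything
# between "value<<EOF" and the matching "EOF" as the output's value.
{
  echo "value<<EOF"
  printf "%s\n" "${tags[@]}" "${slim_tags[@]}"
  echo "EOF"
} >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

Grouping the `echo`/`printf` calls in `{ ... } >> "$GITHUB_OUTPUT"` opens the output file once for the whole block, which is why the workflow uses that shape.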
391 .github/workflows/full-release-validation.yml vendored Normal file
@@ -0,0 +1,391 @@
name: Full Release Validation

on:
workflow_dispatch:
inputs:
ref:
description: Branch, tag, or full commit SHA to validate
required: true
default: main
type: string
workflow_ref:
description: Trusted workflow ref used to run child workflows
required: false
default: main
type: string
provider:
description: Provider lane for cross-OS onboarding and the end-to-end agent turn
required: false
default: openai
type: choice
options:
- openai
- anthropic
- minimax
mode:
description: Which cross-OS release lanes to run
required: false
default: both
type: choice
options:
- fresh
- upgrade
- both
npm_telegram_package_spec:
description: Optional published package spec for the post-publish Telegram E2E lane
required: false
default: ""
type: string
npm_telegram_provider_mode:
description: Provider mode for the optional post-publish Telegram E2E lane
required: false
default: mock-openai
type: choice
options:
- mock-openai
- live-frontier
npm_telegram_scenario:
description: Optional comma-separated Telegram scenario ids for the post-publish lane
required: false
default: ""
type: string

permissions:
actions: write
contents: read

concurrency:
group: full-release-validation-${{ inputs.ref }}
cancel-in-progress: false

env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
GH_REPO: ${{ github.repository }}

jobs:
resolve_target:
name: Resolve target ref
runs-on: ubuntu-24.04
timeout-minutes: 10
outputs:
sha: ${{ steps.resolve.outputs.sha }}
steps:
- name: Checkout target ref
uses: actions/checkout@v6
with:
ref: ${{ inputs.ref }}
fetch-depth: 0
persist-credentials: false
submodules: false

- name: Resolve target SHA
id: resolve
run: echo "sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"

- name: Summarize target
env:
TARGET_REF: ${{ inputs.ref }}
TARGET_SHA: ${{ steps.resolve.outputs.sha }}
WORKFLOW_REF: ${{ inputs.workflow_ref }}
NPM_TELEGRAM_PACKAGE_SPEC: ${{ inputs.npm_telegram_package_spec }}
run: |
{
echo "## Full release validation"
echo
echo "- Target ref: \`${TARGET_REF}\`"
echo "- Target SHA: \`${TARGET_SHA}\`"
echo "- Child workflow ref: \`${WORKFLOW_REF}\`"
echo "- Normal CI: \`CI\` with \`target_ref=${TARGET_REF}\`"
echo "- Release/live/Docker/package/QA: \`OpenClaw Release Checks\`"
if [[ -n "${NPM_TELEGRAM_PACKAGE_SPEC// }" ]]; then
echo "- Post-publish Telegram E2E: \`${NPM_TELEGRAM_PACKAGE_SPEC}\`"
else
echo "- Post-publish Telegram E2E: skipped because no published package spec was provided"
fi
} >> "$GITHUB_STEP_SUMMARY"

normal_ci:
name: Run normal full CI
needs: [resolve_target]
runs-on: ubuntu-24.04
timeout-minutes: 240
steps:
- name: Dispatch and monitor CI
env:
GH_TOKEN: ${{ github.token }}
TARGET_REF: ${{ inputs.ref }}
TARGET_SHA: ${{ needs.resolve_target.outputs.sha }}
WORKFLOW_REF: ${{ inputs.workflow_ref }}
run: |
set -euo pipefail

dispatch_and_wait() {
local workflow="$1"
local workflow_ref="$2"
shift 2

local before_json run_id status conclusion url
before_json="$(gh run list --workflow "$workflow" --event workflow_dispatch --limit 100 --json databaseId --jq '[.[].databaseId]')"

gh workflow run "$workflow" --ref "$workflow_ref" "$@"

for _ in $(seq 1 60); do
run_id="$(
BEFORE_IDS="$before_json" gh run list --workflow "$workflow" --event workflow_dispatch --limit 50 --json databaseId,createdAt \
--jq 'map(select(.databaseId as $id | (env.BEFORE_IDS | fromjson | index($id) | not))) | sort_by(.createdAt) | reverse | .[0].databaseId // empty'
)"
if [[ -n "$run_id" ]]; then
break
fi
sleep 5
done

if [[ -z "${run_id:-}" ]]; then
echo "Could not find dispatched run for ${workflow}." >&2
exit 1
fi

echo "Dispatched ${workflow}: https://github.com/${GITHUB_REPOSITORY}/actions/runs/${run_id}"

while true; do
status="$(gh run view "$run_id" --json status --jq '.status')"
if [[ "$status" == "completed" ]]; then
break
fi
sleep 30
done

conclusion="$(gh run view "$run_id" --json conclusion --jq '.conclusion')"
url="$(gh run view "$run_id" --json url --jq '.url')"
echo "${workflow} finished with ${conclusion}: ${url}"
if [[ "$conclusion" != "success" ]]; then
gh run view "$run_id" --json jobs --jq '.jobs[] | select(.conclusion != "success" and .conclusion != "skipped") | {name, conclusion, url}'
exit 1
fi
}

{
echo "### Normal CI"
echo
echo "- Target ref: \`${TARGET_REF}\`"
echo "- Target SHA: \`${TARGET_SHA}\`"
} >> "$GITHUB_STEP_SUMMARY"

dispatch_and_wait ci.yml "$WORKFLOW_REF" -f target_ref="$TARGET_REF"

release_checks:
name: Run release/live/Docker/QA validation
needs: [resolve_target]
runs-on: ubuntu-24.04
timeout-minutes: 720
steps:
- name: Dispatch and monitor release checks
env:
GH_TOKEN: ${{ github.token }}
TARGET_REF: ${{ inputs.ref }}
TARGET_SHA: ${{ needs.resolve_target.outputs.sha }}
WORKFLOW_REF: ${{ inputs.workflow_ref }}
PROVIDER: ${{ inputs.provider }}
MODE: ${{ inputs.mode }}
run: |
set -euo pipefail

dispatch_and_wait() {
local workflow="$1"
local workflow_ref="$2"
shift 2

local before_json run_id status conclusion url
before_json="$(gh run list --workflow "$workflow" --event workflow_dispatch --limit 100 --json databaseId --jq '[.[].databaseId]')"

gh workflow run "$workflow" --ref "$workflow_ref" "$@"

for _ in $(seq 1 60); do
run_id="$(
BEFORE_IDS="$before_json" gh run list --workflow "$workflow" --event workflow_dispatch --limit 50 --json databaseId,createdAt \
--jq 'map(select(.databaseId as $id | (env.BEFORE_IDS | fromjson | index($id) | not))) | sort_by(.createdAt) | reverse | .[0].databaseId // empty'
)"
if [[ -n "$run_id" ]]; then
break
fi
sleep 5
done

if [[ -z "${run_id:-}" ]]; then
echo "Could not find dispatched run for ${workflow}." >&2
exit 1
fi

echo "Dispatched ${workflow}: https://github.com/${GITHUB_REPOSITORY}/actions/runs/${run_id}"

while true; do
status="$(gh run view "$run_id" --json status --jq '.status')"
if [[ "$status" == "completed" ]]; then
break
fi
sleep 60
done

conclusion="$(gh run view "$run_id" --json conclusion --jq '.conclusion')"
url="$(gh run view "$run_id" --json url --jq '.url')"
echo "${workflow} finished with ${conclusion}: ${url}"
if [[ "$conclusion" != "success" ]]; then
gh run view "$run_id" --json jobs --jq '.jobs[] | select(.conclusion != "success" and .conclusion != "skipped") | {name, conclusion, url}'
exit 1
fi
}

{
echo "### Release/live/Docker/QA validation"
echo
echo "- Target ref: \`${TARGET_REF}\`"
echo "- Target SHA: \`${TARGET_SHA}\`"
echo "- Provider: \`${PROVIDER}\`"
echo "- Cross-OS mode: \`${MODE}\`"
} >> "$GITHUB_STEP_SUMMARY"

dispatch_and_wait openclaw-release-checks.yml "$WORKFLOW_REF" \
-f ref="$TARGET_REF" \
-f provider="$PROVIDER" \
-f mode="$MODE"

npm_telegram:
name: Run post-publish Telegram E2E
needs: [resolve_target]
if: inputs.npm_telegram_package_spec != ''
runs-on: ubuntu-24.04
timeout-minutes: 120
steps:
- name: Dispatch and monitor npm Telegram E2E
env:
GH_TOKEN: ${{ github.token }}
WORKFLOW_REF: ${{ inputs.workflow_ref }}
PACKAGE_SPEC: ${{ inputs.npm_telegram_package_spec }}
PROVIDER_MODE: ${{ inputs.npm_telegram_provider_mode }}
SCENARIO: ${{ inputs.npm_telegram_scenario }}
run: |
set -euo pipefail

before_json="$(gh run list --workflow npm-telegram-beta-e2e.yml --event workflow_dispatch --limit 100 --json databaseId --jq '[.[].databaseId]')"

args=(-f package_spec="$PACKAGE_SPEC" -f provider_mode="$PROVIDER_MODE")
if [[ -n "${SCENARIO// }" ]]; then
args+=(-f scenario="$SCENARIO")
fi

gh workflow run npm-telegram-beta-e2e.yml --ref "$WORKFLOW_REF" "${args[@]}"

run_id=""
for _ in $(seq 1 60); do
run_id="$(
BEFORE_IDS="$before_json" gh run list --workflow npm-telegram-beta-e2e.yml --event workflow_dispatch --limit 50 --json databaseId,createdAt \
--jq 'map(select(.databaseId as $id | (env.BEFORE_IDS | fromjson | index($id) | not))) | sort_by(.createdAt) | reverse | .[0].databaseId // empty'
)"
if [[ -n "$run_id" ]]; then
break
fi
sleep 5
done

if [[ -z "$run_id" ]]; then
echo "Could not find dispatched run for npm-telegram-beta-e2e.yml." >&2
exit 1
fi

echo "Dispatched npm-telegram-beta-e2e.yml: https://github.com/${GITHUB_REPOSITORY}/actions/runs/${run_id}"

while true; do
status="$(gh run view "$run_id" --json status --jq '.status')"
if [[ "$status" == "completed" ]]; then
break
fi
sleep 60
done

conclusion="$(gh run view "$run_id" --json conclusion --jq '.conclusion')"
url="$(gh run view "$run_id" --json url --jq '.url')"
echo "npm-telegram-beta-e2e.yml finished with ${conclusion}: ${url}"
if [[ "$conclusion" != "success" ]]; then
gh run view "$run_id" --json jobs --jq '.jobs[] | select(.conclusion != "success" and .conclusion != "skipped") | {name, conclusion, url}'
exit 1
fi

summary:
name: Verify full validation
needs: [normal_ci, release_checks, npm_telegram]
if: always()
runs-on: ubuntu-24.04
timeout-minutes: 5
steps:
- name: Request private evidence update
env:
RELEASE_PRIVATE_DISPATCH_TOKEN: ${{ secrets.OPENCLAW_RELEASES_PRIVATE_DISPATCH_TOKEN }}
TARGET_REF: ${{ inputs.ref }}
PACKAGE_SPEC: ${{ inputs.npm_telegram_package_spec }}
GITHUB_RUN_ID_VALUE: ${{ github.run_id }}
run: |
set -euo pipefail
if [[ -z "${RELEASE_PRIVATE_DISPATCH_TOKEN// }" ]]; then
echo "OPENCLAW_RELEASES_PRIVATE_DISPATCH_TOKEN is not configured; skipping automatic private evidence update."
exit 0
fi

release_id="${TARGET_REF#refs/tags/}"
release_id="${release_id#v}"
if [[ "$PACKAGE_SPEC" =~ ^openclaw@(.+)$ ]]; then
release_id="${BASH_REMATCH[1]}"
fi
release_id="$(printf '%s' "$release_id" | tr '/:@ ' '----' | tr -cd 'A-Za-z0-9._-')"
if [[ -z "$release_id" ]]; then
echo "::error::Could not derive release evidence id from target ref '${TARGET_REF}'."
exit 1
fi

payload="$(
jq -cn \
--arg full_validation_run_id "$GITHUB_RUN_ID_VALUE" \
--arg release_id "$release_id" \
--arg release_ref "$TARGET_REF" \
--arg package_spec "$PACKAGE_SPEC" \
--arg notes "Automatically requested by Full Release Validation ${GITHUB_RUN_ID_VALUE} after child workflows completed." \
'{
event_type: "openclaw_full_release_validation_completed",
client_payload: {
full_validation_run_id: $full_validation_run_id,
release_id: $release_id,
release_ref: $release_ref,
package_spec: $package_spec,
notes: $notes
}
}'
)"

curl --fail-with-body \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${RELEASE_PRIVATE_DISPATCH_TOKEN}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/openclaw/releases-private/dispatches \
-d "$payload"

- name: Verify child workflow results
env:
NORMAL_CI_RESULT: ${{ needs.normal_ci.result }}
RELEASE_CHECKS_RESULT: ${{ needs.release_checks.result }}
NPM_TELEGRAM_RESULT: ${{ needs.npm_telegram.result }}
run: |
set -euo pipefail
failed=0
for item in \
"normal_ci=${NORMAL_CI_RESULT}" \
"release_checks=${RELEASE_CHECKS_RESULT}" \
"npm_telegram=${NPM_TELEGRAM_RESULT}"
do
name="${item%%=*}"
result="${item#*=}"
if [[ "$result" != "success" && "$result" != "skipped" ]]; then
echo "::error::${name} ended with ${result}"
failed=1
fi
done
exit "$failed"
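The summary job's verification step fans in the child job results and treats `skipped` as acceptable (the Telegram lane only runs when a package spec is supplied). Its loop can be exercised standalone with illustrative result values in place of the workflow's `needs.<job>.result` expressions:

```shell
#!/usr/bin/env bash
# Standalone run of the fan-in check; result values are illustrative,
# not taken from a real workflow run.
set -euo pipefail

NORMAL_CI_RESULT="success"
RELEASE_CHECKS_RESULT="success"
NPM_TELEGRAM_RESULT="skipped"   # lane skipped: no package spec supplied

failed=0
for item in \
  "normal_ci=${NORMAL_CI_RESULT}" \
  "release_checks=${RELEASE_CHECKS_RESULT}" \
  "npm_telegram=${NPM_TELEGRAM_RESULT}"
do
  name="${item%%=*}"     # text before the first '='
  result="${item#*=}"    # text after the first '='
  if [[ "$result" != "success" && "$result" != "skipped" ]]; then
    echo "::error::${name} ended with ${result}"
    failed=1
  fi
done
echo "failed=${failed}"
```

Accepting `skipped` alongside `success` is what lets the optional lane opt out without failing the whole validation run.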
14 .github/workflows/install-smoke.yml vendored
@@ -10,6 +10,11 @@ on:
        required: false
        default: false
        type: boolean
      update_baseline_version:
        description: Baseline openclaw version or dist-tag for installer update smoke
        required: false
        default: latest
        type: string
  workflow_call:
    inputs:
      ref:
@@ -21,6 +26,11 @@ on:
        required: false
        default: true
        type: boolean
      update_baseline_version:
        description: Baseline openclaw version or dist-tag for installer update smoke
        required: false
        default: latest
        type: string

permissions:
  contents: read
@@ -103,7 +113,6 @@ jobs:
          context: .
          file: ./Dockerfile
          build-args: |
            OPENCLAW_DOCKER_APT_UPGRADE=0
            OPENCLAW_EXTENSIONS=matrix
          tags: |
            openclaw-dockerfile-smoke:local
@@ -218,7 +227,6 @@ jobs:
          context: .
          file: ./Dockerfile
          build-args: |
            OPENCLAW_DOCKER_APT_UPGRADE=0
            OPENCLAW_EXTENSIONS=matrix
          tags: |
            openclaw-dockerfile-smoke:local
@@ -332,7 +340,7 @@ jobs:
          OPENCLAW_INSTALL_SMOKE_SKIP_NONROOT: "0"
          OPENCLAW_INSTALL_SMOKE_SKIP_NPM_GLOBAL: "1"
          OPENCLAW_INSTALL_SMOKE_SKIP_PREVIOUS: "1"
          OPENCLAW_INSTALL_SMOKE_UPDATE_BASELINE: latest
          OPENCLAW_INSTALL_SMOKE_UPDATE_BASELINE: ${{ inputs.update_baseline_version || 'latest' }}
          OPENCLAW_INSTALL_SMOKE_UPDATE_DIST_IMAGE: openclaw-dockerfile-smoke:local
          OPENCLAW_INSTALL_SMOKE_UPDATE_SKIP_LOCAL_BUILD: "1"
        run: bash scripts/test-install-sh-docker.sh
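The `${{ inputs.update_baseline_version || 'latest' }}` expression means an empty or omitted input falls back to the `latest` dist-tag. The same defaulting can be sketched in shell with parameter expansion; the function name here is an illustrative stand-in, not part of the workflow.

```shell
# Shell analogue of the Actions expression
# `inputs.update_baseline_version || 'latest'`:
# an empty or unset value falls back to the "latest" dist-tag.
set -euo pipefail

resolve_baseline() {
  local input="${1:-}"
  echo "${input:-latest}"
}

resolve_baseline ""          # falls back to latest
resolve_baseline "beta"      # passes the explicit dist-tag through
```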
121 .github/workflows/npm-telegram-beta-e2e.yml vendored
@@ -4,10 +4,20 @@ on:
  workflow_dispatch:
    inputs:
      package_spec:
        description: Published OpenClaw package spec to test
        description: Published OpenClaw package spec to test when no artifact is supplied
        required: true
        default: openclaw@beta
        type: string
      package_label:
        description: Optional display label for an artifact-backed package candidate
        required: false
        default: ""
        type: string
      package_artifact_name:
        description: Advanced package-under-test artifact name; leave blank for registry install
        required: false
        default: ""
        type: string
      provider_mode:
        description: QA provider mode
        required: true
@@ -20,6 +30,39 @@ on:
        description: Optional comma-separated Telegram scenario ids
        required: false
        type: string
  workflow_call:
    inputs:
      package_spec:
        description: Published OpenClaw package spec to test when no artifact is supplied
        required: true
        type: string
      package_artifact_name:
        description: Optional package-under-test artifact from the current workflow run
        required: false
        default: ""
        type: string
      package_label:
        description: Optional display label for an artifact-backed package candidate
        required: false
        default: ""
        type: string
      provider_mode:
        description: QA provider mode
        required: false
        default: mock-openai
        type: string
      scenario:
        description: Optional comma-separated Telegram scenario ids
        required: false
        default: ""
        type: string
    secrets:
      OPENAI_API_KEY:
        required: false
      OPENCLAW_QA_CONVEX_SITE_URL:
        required: false
      OPENCLAW_QA_CONVEX_SECRET_CI:
        required: false

permissions:
  contents: read
@@ -34,34 +77,8 @@ env:
  PNPM_VERSION: "10.33.0"

jobs:
  validate_dispatch_ref:
    name: Validate dispatch ref
    runs-on: blacksmith-8vcpu-ubuntu-2404
    steps:
      - name: Require main workflow ref
        env:
          WORKFLOW_REF: ${{ github.ref }}
        run: |
          set -euo pipefail
          if [[ "${WORKFLOW_REF}" != "refs/heads/main" ]]; then
            echo "NPM Telegram beta E2E must be dispatched from main so workflow logic stays controlled." >&2
            exit 1
          fi

  approve_release_manager:
    name: Approve npm Telegram beta E2E
    needs: validate_dispatch_ref
    runs-on: ubuntu-latest
    environment: npm-release
    steps:
      - name: Record approval
        env:
          PACKAGE_SPEC: ${{ inputs.package_spec }}
        run: echo "Approved npm Telegram beta E2E for ${PACKAGE_SPEC}"

  run_npm_telegram_beta_e2e:
    name: Run published npm Telegram E2E
    needs: approve_release_manager
  run_package_telegram_e2e:
    name: Run package Telegram E2E
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 60
    environment: qa-live-shared
@@ -71,7 +88,7 @@ jobs:
      DOCKER_BUILD_SUMMARY: "false"
      DOCKER_BUILD_RECORD_UPLOAD: "false"
    steps:
      - name: Checkout main
      - name: Checkout dispatch ref
        uses: actions/checkout@v6
        with:
          ref: ${{ github.sha }}
@@ -79,6 +96,8 @@ jobs:

      - name: Set up Blacksmith Docker Builder
        uses: useblacksmith/setup-docker-builder@ac083cc84672d01c60d5e8561d0a939b697de542 # v1
        with:
          max-cache-size-mb: 800000

      - name: Build Docker E2E image
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
@@ -102,6 +121,7 @@ jobs:
      - name: Validate inputs and secrets
        env:
          PACKAGE_SPEC: ${{ inputs.package_spec }}
          PACKAGE_ARTIFACT_NAME: ${{ inputs.package_artifact_name || '' }}
          PROVIDER_MODE: ${{ inputs.provider_mode }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
@@ -110,10 +130,19 @@ jobs:
        run: |
          set -euo pipefail

          if [[ ! "${PACKAGE_SPEC}" =~ ^openclaw@(beta|latest|[0-9]{4}\.[1-9][0-9]*\.[1-9][0-9]*(-[1-9][0-9]*|-beta\.[1-9][0-9]*)?)$ ]]; then
            echo "package_spec must be openclaw@beta, openclaw@latest, or an exact OpenClaw release version; got: ${PACKAGE_SPEC}" >&2
            exit 1
          if [[ -z "${PACKAGE_ARTIFACT_NAME// }" ]]; then
            if [[ ! "${PACKAGE_SPEC}" =~ ^openclaw@(beta|latest|[0-9]{4}\.[1-9][0-9]*\.[1-9][0-9]*(-[1-9][0-9]*|-beta\.[1-9][0-9]*)?)$ ]]; then
              echo "package_spec must be openclaw@beta, openclaw@latest, or an exact OpenClaw release version; got: ${PACKAGE_SPEC}" >&2
              exit 1
            fi
          fi
          case "${PROVIDER_MODE}" in
            mock-openai | live-frontier) ;;
            *)
              echo "provider_mode must be mock-openai or live-frontier; got: ${PROVIDER_MODE}" >&2
              exit 1
              ;;
          esac

          require_var() {
            local key="$1"
@@ -129,7 +158,14 @@ jobs:
            require_var OPENAI_API_KEY
          fi

      - name: Run npm Telegram beta E2E
      - name: Download package-under-test artifact
        if: inputs.package_artifact_name != ''
        uses: actions/download-artifact@v8
        with:
          name: ${{ inputs.package_artifact_name }}
          path: .artifacts/telegram-package-under-test

      - name: Run package Telegram E2E
        id: run_lane
        shell: bash
        env:
@@ -137,13 +173,16 @@ jobs:
          OPENCLAW_SKIP_DOCKER_BUILD: "1"
          OPENCLAW_DOCKER_E2E_IMAGE: openclaw-docker-e2e:local
          OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC: ${{ inputs.package_spec }}
          OPENCLAW_NPM_TELEGRAM_PACKAGE_LABEL: ${{ inputs.package_label }}
          OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE: ${{ inputs.provider_mode }}
          OPENCLAW_NPM_TELEGRAM_CREDENTIAL_SOURCE: convex
          OPENCLAW_NPM_TELEGRAM_CREDENTIAL_ROLE: ci
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
          OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
          OPENCLAW_QA_TELEGRAM_CAPTURE_CONTENT: "1"
          INPUT_SCENARIO: ${{ inputs.scenario }}
          PACKAGE_ARTIFACT_NAME: ${{ inputs.package_artifact_name || '' }}
        run: |
          set -euo pipefail

@@ -151,6 +190,20 @@ jobs:
          echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"
          export OPENCLAW_NPM_TELEGRAM_OUTPUT_DIR="${output_dir}"

          if [[ -n "${PACKAGE_ARTIFACT_NAME// }" ]]; then
            mapfile -t package_tgzs < <(find .artifacts/telegram-package-under-test -type f -name "*.tgz" | sort)
            if [[ "${#package_tgzs[@]}" -ne 1 ]]; then
              echo "package artifact ${PACKAGE_ARTIFACT_NAME} must contain exactly one .tgz; found ${#package_tgzs[@]}" >&2
              exit 1
            fi
            export OPENCLAW_NPM_TELEGRAM_PACKAGE_TGZ="${package_tgzs[0]}"
            if [[ -z "${OPENCLAW_NPM_TELEGRAM_PACKAGE_LABEL// }" ]]; then
              export OPENCLAW_NPM_TELEGRAM_PACKAGE_LABEL="$(basename "${package_tgzs[0]}")"
            fi
          elif [[ -z "${OPENCLAW_NPM_TELEGRAM_PACKAGE_LABEL// }" ]]; then
            export OPENCLAW_NPM_TELEGRAM_PACKAGE_LABEL="${OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC}"
          fi

          if [[ -n "${INPUT_SCENARIO// }" ]]; then
            export OPENCLAW_NPM_TELEGRAM_SCENARIOS="${INPUT_SCENARIO}"
          fi

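The `package_spec` guard in this workflow accepts `openclaw@beta`, `openclaw@latest`, or an exact versioned release spec. The regex can be exercised standalone like this; the sample specs are illustrative inputs, not values taken from any real release.

```shell
# Standalone check mirroring the package_spec validation in the workflow:
# accept openclaw@beta, openclaw@latest, or an exact release version
# (four-digit year major, no leading zeros, optional -N / -beta.N suffix).
set -euo pipefail

spec_re='^openclaw@(beta|latest|[0-9]{4}\.[1-9][0-9]*\.[1-9][0-9]*(-[1-9][0-9]*|-beta\.[1-9][0-9]*)?)$'

valid_spec() {
  [[ "$1" =~ $spec_re ]]
}

valid_spec "openclaw@beta" && echo "ok: dist-tag"
valid_spec "openclaw@2026.2.3-beta.4" && echo "ok: exact beta build"
valid_spec "openclaw@0.1.0" || echo "rejected: four-digit major required"
```

Keeping the pattern in an unquoted variable (`$spec_re`) matters: quoting the right-hand side of `=~` in bash would match it literally instead of as a regex.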
@@ -23,6 +23,31 @@ on:
        required: false
        default: true
        type: boolean
      docker_lanes:
        description: Comma/space separated Docker scheduler lane names to run against the prepared image
        required: false
        default: ""
        type: string
      package_artifact_name:
        description: Existing workflow artifact containing openclaw-current.tgz; blank packs the selected ref
        required: false
        default: ""
        type: string
      package_artifact_run_id:
        description: Prior run id containing package_artifact_name; blank uses this run or packs the selected ref
        required: false
        default: ""
        type: string
      docker_e2e_bare_image:
        description: Existing bare Docker E2E image to reuse; blank derives from package SHA/ref
        required: false
        default: ""
        type: string
      docker_e2e_functional_image:
        description: Existing functional Docker E2E image to reuse; blank derives from package SHA/ref
        required: false
        default: ""
        type: string
      include_live_suites:
        description: Whether to run live-provider coverage
        required: false
@@ -33,6 +58,11 @@ on:
        required: false
        default: false
        type: boolean
      live_model_providers:
        description: Comma/space separated provider ids for the Docker live model matrix; blank runs all providers
        required: false
        default: ""
        type: string
  workflow_call:
    inputs:
      ref:
@@ -54,6 +84,31 @@ on:
        required: false
        default: true
        type: boolean
      docker_lanes:
        description: Comma/space separated Docker scheduler lane names to run against the prepared image
        required: false
        default: ""
        type: string
      package_artifact_name:
        description: Existing workflow artifact containing openclaw-current.tgz; blank packs the selected ref
        required: false
        default: ""
        type: string
      package_artifact_run_id:
        description: Prior run id containing package_artifact_name; blank uses this run or packs the selected ref
        required: false
        default: ""
        type: string
      docker_e2e_bare_image:
        description: Existing bare Docker E2E image to reuse; blank derives from package SHA/ref
        required: false
        default: ""
        type: string
      docker_e2e_functional_image:
        description: Existing functional Docker E2E image to reuse; blank derives from package SHA/ref
        required: false
        default: ""
        type: string
      include_live_suites:
        description: Whether to run live-provider coverage
        required: false
@@ -64,6 +119,11 @@ on:
        required: false
        default: false
        type: boolean
      live_model_providers:
        description: Comma/space separated provider ids for the Docker live model matrix; blank runs all providers
        required: false
        default: ""
        type: string
    secrets:
      OPENAI_API_KEY:
        required: false
@@ -171,44 +231,42 @@ jobs:
      selected_sha: ${{ steps.validate.outputs.selected_sha }}
      trusted_reason: ${{ steps.validate.outputs.trusted_reason }}
    steps:
      - name: Checkout selected ref
      - name: Checkout workflow repository
        uses: actions/checkout@v6
        with:
          ref: ${{ inputs.ref }}
          fetch-depth: 0

      - name: Validate selected ref
        id: validate
        env:
          GH_TOKEN: ${{ github.token }}
          INPUT_REF: ${{ inputs.ref }}
        shell: bash
        run: |
          set -euo pipefail
          selected_sha="$(git rev-parse HEAD)"
          trusted_reason=""

          git fetch --no-tags origin +refs/heads/main:refs/remotes/origin/main
          git fetch --no-tags origin '+refs/heads/*:refs/remotes/origin/*'
          git fetch --tags origin '+refs/tags/*:refs/tags/*'

          # Resolve here instead of in actions/checkout so short SHAs work too.
          if ! selected_sha="$(git rev-parse --verify "${INPUT_REF}^{commit}")"; then
            echo "Ref '${INPUT_REF}' could not be resolved to a commit." >&2
            exit 1
          fi

          if git merge-base --is-ancestor "$selected_sha" refs/remotes/origin/main; then
            trusted_reason="main-ancestor"
          elif git tag --points-at "$selected_sha" | grep -Eq '^v'; then
            trusted_reason="release-tag"
          elif git for-each-ref --format='%(refname:short)' --contains "$selected_sha" refs/remotes/origin | grep -Eq '^origin/'; then
            trusted_reason="repository-branch-history"
          else
            pr_head_count="$(
              gh api \
                -H "Accept: application/vnd.github+json" \
                "repos/${GITHUB_REPOSITORY}/commits/${selected_sha}/pulls" \
                --jq '[.[] | select(.state == "open" and .head.repo.full_name == "'"${GITHUB_REPOSITORY}"'" and .head.sha == "'"${selected_sha}"'")] | length'
            )"
            if [[ "$pr_head_count" != "0" ]]; then
              trusted_reason="open-pr-head"
            fi
            trusted_reason=""
          fi

          if [[ -z "$trusted_reason" ]]; then
            echo "Ref '${INPUT_REF}' resolved to $selected_sha, which is not trusted for secret-bearing live/E2E checks." >&2
            echo "Allowed refs must be on main, point to a release tag, or match an open PR head in ${GITHUB_REPOSITORY}." >&2
            echo "Allowed refs must be reachable from an OpenClaw branch or release tag." >&2
            exit 1
          fi

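The trust check above reduces to a priority chain: a ref is accepted only when one of the probes (main ancestry, a `v`-prefixed release tag, or repository branch history) yields a reason. A git-free sketch of just that decision logic, where the boolean flags are stand-ins for the real `git merge-base`, `git tag --points-at`, and `git for-each-ref` probes:

```shell
# Git-free sketch of the ref-trust decision: each flag (1 or 0) stands in
# for one of the real git probes run in the workflow. An empty reason
# means the ref would be rejected.
set -euo pipefail

trusted_reason() {
  local on_main="$1" has_release_tag="$2" in_branch_history="$3"
  if [[ "$on_main" == 1 ]]; then
    echo "main-ancestor"
  elif [[ "$has_release_tag" == 1 ]]; then
    echo "release-tag"
  elif [[ "$in_branch_history" == 1 ]]; then
    echo "repository-branch-history"
  else
    echo ""  # empty reason => reject the ref
  fi
}

trusted_reason 1 0 0
trusted_reason 0 1 0
trusted_reason 0 0 0
```

The ordering matters: a commit that is both on main and tagged reports `main-ancestor`, since the probes are evaluated first-match-wins.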
@@ -303,7 +361,7 @@ jobs:
            requires_live_suites: false
          - suite_id: openai-ws-stream-live-e2e
            label: OpenAI WebSocket live E2E
            command: pnpm test:e2e -- src/agents/openai-ws-stream.e2e.test.ts
            command: pnpm test:e2e src/agents/openai-ws-stream.e2e.test.ts
            timeout_minutes: 90
            requires_repo_e2e: false
            requires_live_suites: true
@@ -363,93 +421,23 @@ jobs:

  validate_docker_e2e:
    needs: [validate_selected_ref, prepare_docker_e2e_image]
    if: inputs.include_release_path_suites
    if: inputs.include_release_path_suites && inputs.docker_lanes == ''
    name: Docker E2E (${{ matrix.label }})
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: ${{ matrix.timeout_minutes }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - suite_id: docker-onboard
            label: Onboarding Docker E2E
            command: pnpm test:docker:onboard
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-npm-onboard-channel-agent
            label: Npm Onboard Channel Agent Docker E2E
            command: pnpm test:docker:npm-onboard-channel-agent
            timeout_minutes: 90
            release_path: true
          - suite_id: docker-gateway-network
            label: Gateway Network Docker E2E
            command: pnpm test:docker:gateway-network
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-openai-web-search-minimal
            label: OpenAI Web Search Minimal Docker E2E
            command: pnpm test:docker:openai-web-search-minimal
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-mcp-channels
            label: MCP Channels Docker E2E
            command: pnpm test:docker:mcp-channels
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-pi-bundle-mcp-tools
            label: Pi Bundle MCP Tools Docker E2E
            command: pnpm test:docker:pi-bundle-mcp-tools
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-cron-mcp-cleanup
            label: Cron MCP Cleanup Docker E2E
            command: pnpm test:docker:cron-mcp-cleanup
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-plugins
            label: Plugins Docker E2E
            command: pnpm test:docker:plugins
            timeout_minutes: 75
            release_path: true
          - suite_id: docker-plugin-update
            label: Plugin Update Docker E2E
            command: pnpm test:docker:plugin-update
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-config-reload
            label: Config Reload Docker E2E
            command: pnpm test:docker:config-reload
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-bundled-channel-deps
            label: Bundled Channel Runtime Deps Docker E2E
            command: pnpm test:docker:bundled-channel-deps
            timeout_minutes: 75
            release_path: true
          - suite_id: docker-doctor-switch
            label: Doctor Install Switch Docker E2E
            command: pnpm test:docker:doctor-switch
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-update-channel-switch
            label: Update Channel Switch Docker E2E
            command: pnpm test:docker:update-channel-switch
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-session-runtime-context
            label: Session Runtime Context Docker E2E
            command: pnpm test:docker:session-runtime-context
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-qr
            label: QR Import Docker E2E
            command: pnpm test:docker:qr
            timeout_minutes: 60
            release_path: true
          - suite_id: docker-install-e2e
            label: Installer Docker E2E
            command: pnpm test:install:e2e
          - chunk_id: core
            label: core
            timeout_minutes: 120
            release_path: true
          - chunk_id: package-update
            label: package/update
            timeout_minutes: 180
          - chunk_id: plugins-integrations
            label: plugins/integrations
            timeout_minutes: 180
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
@@ -496,7 +484,14 @@ jobs:
      OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
      OPENCLAW_DOCKER_E2E_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.image }}
      OPENCLAW_DOCKER_E2E_BARE_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.bare_image }}
      OPENCLAW_DOCKER_E2E_FUNCTIONAL_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.functional_image }}
      OPENCLAW_DOCKER_E2E_PACKAGE_ARTIFACT_NAME: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}
      OPENCLAW_DOCKER_E2E_SELECTED_SHA: ${{ needs.validate_selected_ref.outputs.selected_sha }}
      OPENCLAW_CURRENT_PACKAGE_TGZ: .artifacts/docker-e2e-package/openclaw-current.tgz
      OPENCLAW_SKIP_DOCKER_BUILD: "1"
      INCLUDE_OPENWEBUI: ${{ inputs.include_openwebui }}
      DOCKER_E2E_CHUNK: ${{ matrix.chunk_id }}
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
@@ -521,45 +516,196 @@ jobs:
      - name: Hydrate live auth/profile inputs
        run: bash scripts/ci-hydrate-live-auth.sh

      - name: Configure suite-specific env
      - name: Plan and hydrate Docker E2E chunk
        id: plan
        uses: ./.github/actions/docker-e2e-plan
        with:
          mode: chunk
          chunk: ${{ matrix.chunk_id }}
          include-openwebui: ${{ inputs.include_openwebui }}
          package-artifact-name: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}

      - name: Run Docker E2E chunk
        shell: bash
        run: |
          set -euo pipefail
          case "${{ matrix.suite_id }}" in
            docker-install-e2e)
              echo "OPENCLAW_E2E_MODELS=both" >> "$GITHUB_ENV"
              ;;
          esac
          export OPENCLAW_DOCKER_ALL_PROFILE=release-path
          export OPENCLAW_DOCKER_ALL_CHUNK="${DOCKER_E2E_CHUNK}"
          export OPENCLAW_DOCKER_ALL_BUILD=0
          export OPENCLAW_DOCKER_ALL_PREFLIGHT=0
          export OPENCLAW_DOCKER_ALL_FAIL_FAST=0
          export OPENCLAW_DOCKER_ALL_INCLUDE_OPENWEBUI="${INCLUDE_OPENWEBUI}"
          export OPENCLAW_DOCKER_ALL_LOG_DIR=".artifacts/docker-tests/release-${DOCKER_E2E_CHUNK}"
          export OPENCLAW_DOCKER_ALL_TIMINGS_FILE=".artifacts/docker-tests/release-${DOCKER_E2E_CHUNK}-timings.json"
          export OPENCLAW_DOCKER_ALL_PNPM_COMMAND="$(command -v pnpm)"

      - name: Validate suite credentials
          pnpm test:docker:all

      - name: Summarize Docker E2E chunk
        if: always()
        shell: bash
        run: |
          set -euo pipefail
          case "${{ matrix.suite_id }}" in
            docker-install-e2e)
              [[ -n "${OPENAI_API_KEY:-}" ]] || {
                echo "OPENAI_API_KEY is required for installer Docker E2E." >&2
                exit 1
              }
              if [[ -z "${ANTHROPIC_API_TOKEN:-}" && -z "${ANTHROPIC_API_KEY:-}" ]]; then
                echo "ANTHROPIC_API_TOKEN or ANTHROPIC_API_KEY is required for installer Docker E2E." >&2
                exit 1
              fi
              ;;
          esac
          summary=".artifacts/docker-tests/release-${DOCKER_E2E_CHUNK}/summary.json"
          if [[ ! -f "$summary" ]]; then
            echo "Docker chunk summary missing: \`$summary\`" >> "$GITHUB_STEP_SUMMARY"
            exit 0
          fi
          node scripts/docker-e2e.mjs summary "$summary" "Docker E2E chunk: ${DOCKER_E2E_CHUNK:-unknown}" >> "$GITHUB_STEP_SUMMARY"

      - name: Run ${{ matrix.label }}
        run: ${{ matrix.command }}
      - name: Upload Docker E2E chunk artifacts
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: docker-e2e-${{ matrix.chunk_id }}
          path: .artifacts/docker-tests/
          if-no-files-found: ignore

  validate_docker_lanes:
    needs: [validate_selected_ref, prepare_docker_e2e_image]
    if: inputs.docker_lanes != ''
    name: Docker E2E targeted lanes
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 180
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      ANTHROPIC_API_TOKEN: ${{ secrets.ANTHROPIC_API_TOKEN }}
      ANTHROPIC_API_KEY_OLD: ${{ secrets.ANTHROPIC_API_KEY_OLD }}
      BYTEPLUS_API_KEY: ${{ secrets.BYTEPLUS_API_KEY }}
      CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
      DASHSCOPE_API_KEY: ${{ secrets.DASHSCOPE_API_KEY }}
      GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
      KIMI_API_KEY: ${{ secrets.KIMI_API_KEY }}
      MODELSTUDIO_API_KEY: ${{ secrets.MODELSTUDIO_API_KEY }}
      MOONSHOT_API_KEY: ${{ secrets.MOONSHOT_API_KEY }}
      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
      MINIMAX_API_KEY: ${{ secrets.MINIMAX_API_KEY }}
      OPENCODE_API_KEY: ${{ secrets.OPENCODE_API_KEY }}
      OPENCODE_ZEN_API_KEY: ${{ secrets.OPENCODE_ZEN_API_KEY }}
      OPENCLAW_LIVE_BROWSER_CDP_URL: ${{ secrets.OPENCLAW_LIVE_BROWSER_CDP_URL }}
      OPENCLAW_LIVE_SETUP_TOKEN: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN }}
      OPENCLAW_LIVE_SETUP_TOKEN_MODEL: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_MODEL }}
      OPENCLAW_LIVE_SETUP_TOKEN_PROFILE: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_PROFILE }}
      OPENCLAW_LIVE_SETUP_TOKEN_VALUE: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_VALUE }}
      GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
      GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
      OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
      QWEN_API_KEY: ${{ secrets.QWEN_API_KEY }}
      FAL_KEY: ${{ secrets.FAL_KEY }}
      RUNWAY_API_KEY: ${{ secrets.RUNWAY_API_KEY }}
      DEEPGRAM_API_KEY: ${{ secrets.DEEPGRAM_API_KEY }}
      TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
      VYDRA_API_KEY: ${{ secrets.VYDRA_API_KEY }}
      XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
      ZAI_API_KEY: ${{ secrets.ZAI_API_KEY }}
      Z_AI_API_KEY: ${{ secrets.Z_AI_API_KEY }}
      BYTEPLUS_ACCESS_KEY_ID: ${{ secrets.BYTEPLUS_ACCESS_KEY_ID }}
      BYTEPLUS_SECRET_ACCESS_KEY: ${{ secrets.BYTEPLUS_SECRET_ACCESS_KEY }}
      CLAUDE_CODE_OAUTH_TOKEN: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
      OPENCLAW_CODEX_AUTH_JSON: ${{ secrets.OPENCLAW_CODEX_AUTH_JSON }}
      OPENCLAW_CODEX_CONFIG_TOML: ${{ secrets.OPENCLAW_CODEX_CONFIG_TOML }}
      OPENCLAW_CLAUDE_JSON: ${{ secrets.OPENCLAW_CLAUDE_JSON }}
      OPENCLAW_CLAUDE_CREDENTIALS_JSON: ${{ secrets.OPENCLAW_CLAUDE_CREDENTIALS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
      OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
      OPENCLAW_DOCKER_E2E_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.image }}
      OPENCLAW_DOCKER_E2E_BARE_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.bare_image }}
      OPENCLAW_DOCKER_E2E_FUNCTIONAL_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.functional_image }}
      OPENCLAW_DOCKER_E2E_PACKAGE_ARTIFACT_NAME: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}
      OPENCLAW_DOCKER_E2E_SELECTED_SHA: ${{ needs.validate_selected_ref.outputs.selected_sha }}
      OPENCLAW_CURRENT_PACKAGE_TGZ: .artifacts/docker-e2e-package/openclaw-current.tgz
      OPENCLAW_SKIP_DOCKER_BUILD: "1"
      INCLUDE_OPENWEBUI: ${{ inputs.include_openwebui }}
      DOCKER_E2E_LANES: ${{ inputs.docker_lanes }}
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
        with:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Log in to GHCR for shared Docker E2E image
        uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ github.token }}

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Hydrate live auth/profile inputs
        run: bash scripts/ci-hydrate-live-auth.sh

      - name: Plan and hydrate targeted Docker E2E lanes
        id: plan
        uses: ./.github/actions/docker-e2e-plan
        with:
          mode: targeted
          lanes: ${{ inputs.docker_lanes }}
          include-openwebui: ${{ inputs.include_openwebui }}
          package-artifact-name: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}

      - name: Run targeted Docker E2E lanes
        shell: bash
        run: |
          set -euo pipefail
          export OPENCLAW_DOCKER_ALL_LANES="${DOCKER_E2E_LANES}"
          export OPENCLAW_DOCKER_ALL_PREFLIGHT=0
          export OPENCLAW_DOCKER_ALL_FAIL_FAST=0
          export OPENCLAW_DOCKER_ALL_INCLUDE_OPENWEBUI="${INCLUDE_OPENWEBUI}"
          export OPENCLAW_DOCKER_ALL_LOG_DIR=".artifacts/docker-tests/targeted"
          export OPENCLAW_DOCKER_ALL_TIMINGS_FILE=".artifacts/docker-tests/targeted-timings.json"
          export OPENCLAW_DOCKER_ALL_PNPM_COMMAND="$(command -v pnpm)"
          if [[ "${{ steps.plan.outputs.needs_live_image }}" == "1" ]]; then
            pnpm test:docker:live-build
          fi
          export OPENCLAW_DOCKER_ALL_BUILD=0

          pnpm test:docker:all

      - name: Summarize targeted Docker E2E lanes
        if: always()
        shell: bash
        run: |
          set -euo pipefail
          summary=".artifacts/docker-tests/targeted/summary.json"
          if [[ ! -f "$summary" ]]; then
            echo "Docker targeted summary missing: \`$summary\`" >> "$GITHUB_STEP_SUMMARY"
            exit 0
          fi
          node scripts/docker-e2e.mjs summary "$summary" "Docker E2E targeted lanes" >> "$GITHUB_STEP_SUMMARY"

      - name: Upload targeted Docker E2E artifacts
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: docker-e2e-targeted
          path: .artifacts/docker-tests/
          if-no-files-found: ignore

  validate_docker_openwebui:
    needs: [validate_selected_ref, prepare_docker_e2e_image]
    if: inputs.include_openwebui
    if: inputs.include_openwebui && !inputs.include_release_path_suites && inputs.docker_lanes == ''
    name: Docker E2E (openwebui)
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 75
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
      OPENCLAW_DOCKER_E2E_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.image }}
      OPENCLAW_DOCKER_E2E_FUNCTIONAL_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.functional_image }}
      OPENCLAW_DOCKER_E2E_PACKAGE_ARTIFACT_NAME: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}
      OPENCLAW_DOCKER_E2E_SELECTED_SHA: ${{ needs.validate_selected_ref.outputs.selected_sha }}
      OPENCLAW_CURRENT_PACKAGE_TGZ: .artifacts/docker-e2e-package/openclaw-current.tgz
      OPENCLAW_SKIP_DOCKER_BUILD: "1"
    steps:
      - name: Checkout selected ref
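The `docker_lanes` input is documented as a comma/space separated lane list, which the plan action consumes via `OPENCLAW_DOCKER_ALL_LANES`. A minimal sketch of splitting such a value into individual lane names; the parsing below is an assumption about the documented format, not the scheduler's actual implementation, and the lane names are illustrative.

```shell
# Sketch of splitting a comma/space separated lane list like docker_lanes.
# Assumption: commas and whitespace are interchangeable separators.
set -euo pipefail

split_lanes() {
  local raw="$1"
  # Normalize commas to spaces, then let word splitting produce the array.
  read -r -a lanes <<< "${raw//,/ }"
  printf '%s\n' "${lanes[@]}"
}

split_lanes "docker-onboard,docker-qr docker-plugins"
```

Consecutive separators collapse, so `"a,,b"` and `"a, b"` both yield two lanes.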
@@ -591,19 +737,69 @@ jobs:
            exit 1
          }

      - name: Run Open WebUI Docker E2E
        run: pnpm test:docker:openwebui
      - name: Plan and hydrate Open WebUI Docker E2E chunk
        id: plan
        uses: ./.github/actions/docker-e2e-plan
        with:
          mode: chunk
          chunk: openwebui
          include-openwebui: "true"
          package-artifact-name: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}

      - name: Run Open WebUI Docker E2E chunk
        shell: bash
        run: |
          set -euo pipefail
          export OPENCLAW_DOCKER_ALL_PROFILE=release-path
          export OPENCLAW_DOCKER_ALL_CHUNK=openwebui
          export OPENCLAW_DOCKER_ALL_BUILD=0
          export OPENCLAW_DOCKER_ALL_PREFLIGHT=0
          export OPENCLAW_DOCKER_ALL_FAIL_FAST=0
          export OPENCLAW_DOCKER_ALL_INCLUDE_OPENWEBUI=1
          export OPENCLAW_DOCKER_ALL_LOG_DIR=".artifacts/docker-tests/release-openwebui"
          export OPENCLAW_DOCKER_ALL_TIMINGS_FILE=".artifacts/docker-tests/release-openwebui-timings.json"
          export OPENCLAW_DOCKER_ALL_PNPM_COMMAND="$(command -v pnpm)"

          pnpm test:docker:all

      - name: Summarize Open WebUI Docker E2E chunk
        if: always()
        shell: bash
        run: |
          set -euo pipefail
          summary=".artifacts/docker-tests/release-openwebui/summary.json"
          if [[ ! -f "$summary" ]]; then
            echo "Docker Open WebUI summary missing: \`$summary\`" >> "$GITHUB_STEP_SUMMARY"
            exit 0
          fi
          node scripts/docker-e2e.mjs summary "$summary" "Docker E2E chunk: openwebui" >> "$GITHUB_STEP_SUMMARY"

      - name: Upload Open WebUI Docker E2E artifacts
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: docker-e2e-openwebui
          path: .artifacts/docker-tests/
          if-no-files-found: ignore

  prepare_docker_e2e_image:
    needs: validate_selected_ref
    if: inputs.include_release_path_suites || inputs.include_openwebui
    if: inputs.include_release_path_suites || inputs.include_openwebui || inputs.docker_lanes != ''
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 90
    permissions:
      actions: read
      contents: read
      packages: write
    outputs:
      image: ${{ steps.image.outputs.image }}
      bare_image: ${{ steps.image.outputs.bare_image }}
      functional_image: ${{ steps.image.outputs.functional_image }}
      needs_bare_image: ${{ steps.plan.outputs.needs_bare_image }}
      needs_e2e_image: ${{ steps.plan.outputs.needs_e2e_image }}
      needs_functional_image: ${{ steps.plan.outputs.needs_functional_image }}
      needs_live_image: ${{ steps.plan.outputs.needs_live_image }}
      needs_package: ${{ steps.plan.outputs.needs_package }}
    env:
      DOCKER_BUILD_SUMMARY: "false"
      DOCKER_BUILD_RECORD_UPLOAD: "false"
@@ -614,45 +810,191 @@ jobs:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Resolve shared Docker E2E image tag
      - name: Plan Docker E2E images
        id: plan
        uses: ./.github/actions/docker-e2e-plan
        with:
          mode: prepare
          lanes: ${{ inputs.docker_lanes }}
          include-release-path-suites: ${{ inputs.include_release_path_suites }}
          include-openwebui: ${{ inputs.include_openwebui }}
          hydrate-artifacts: "false"

      - name: Setup Node environment
        if: steps.plan.outputs.needs_package == '1' && inputs.package_artifact_name == '' && inputs.package_artifact_run_id == ''
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Download current-run OpenClaw Docker E2E package
        if: steps.plan.outputs.needs_package == '1' && inputs.package_artifact_name != '' && inputs.package_artifact_run_id == ''
        uses: actions/download-artifact@v8
        with:
          name: ${{ inputs.package_artifact_name }}
          path: .artifacts/docker-e2e-package

      - name: Download previous-run OpenClaw Docker E2E package
        if: steps.plan.outputs.needs_package == '1' && inputs.package_artifact_run_id != ''
        uses: actions/download-artifact@v8
        with:
          name: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}
          path: .artifacts/docker-e2e-package
          run-id: ${{ inputs.package_artifact_run_id }}
          github-token: ${{ github.token }}

      - name: Pack OpenClaw package for Docker E2E
        if: steps.plan.outputs.needs_package == '1' && inputs.package_artifact_name == '' && inputs.package_artifact_run_id == ''
        shell: bash
        run: |
          set -euo pipefail
          mkdir -p .artifacts/docker-e2e-package
          node scripts/package-openclaw-for-docker.mjs \
            --output-dir .artifacts/docker-e2e-package \
            --output-name openclaw-current.tgz

      - name: Validate OpenClaw Docker E2E package
        id: package
        if: steps.plan.outputs.needs_package == '1'
        shell: bash
        run: |
          set -euo pipefail
          mkdir -p .artifacts/docker-e2e-package
          target=".artifacts/docker-e2e-package/openclaw-current.tgz"
          if [[ ! -f "$target" ]]; then
            mapfile -t tgzs < <(find .artifacts/docker-e2e-package -type f -name '*.tgz' | sort)
            if [[ "${#tgzs[@]}" -ne 1 ]]; then
              echo "Expected exactly one package tarball in .artifacts/docker-e2e-package; found ${#tgzs[@]}." >&2
              printf '%s\n' "${tgzs[@]}" >&2
              exit 1
            fi
            cp "${tgzs[0]}" "$target"
          fi
          node scripts/check-openclaw-package-tarball.mjs "$target"
          digest="$(sha256sum "$target" | awk '{print $1}')"
          tag="pkg-${digest:0:32}"
          echo "sha256=$digest" >> "$GITHUB_OUTPUT"
          echo "tag=$tag" >> "$GITHUB_OUTPUT"
          {
            echo "Docker E2E package: \`$target\`"
            echo "Docker E2E package SHA-256: \`$digest\`"
          } >> "$GITHUB_STEP_SUMMARY"
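As a standalone sketch (run against a stand-in file rather than the real tarball), the content-addressed tag derived in the step above is just the first 32 hex characters of the tarball's SHA-256 with a `pkg-` prefix:

```shell
# Derive a pkg-<32 hex chars> tag from a file's SHA-256, as the
# "Validate OpenClaw Docker E2E package" step does for the tarball.
tmpfile="$(mktemp)"
printf 'example tarball bytes' > "$tmpfile"
digest="$(sha256sum "$tmpfile" | awk '{print $1}')"  # 64 hex chars
tag="pkg-${digest:0:32}"                              # keep the first 32
echo "$tag"
rm -f "$tmpfile"
```

Because the tag is derived from the package digest, re-runs over an identical tarball resolve to the same image tag, which is what lets the later steps skip rebuilds.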

      - name: Upload OpenClaw Docker E2E package
        if: steps.plan.outputs.needs_package == '1' && (inputs.package_artifact_name == '' || inputs.package_artifact_run_id != '')
        uses: actions/upload-artifact@v7
        with:
          name: ${{ inputs.package_artifact_name || 'docker-e2e-package' }}
          path: .artifacts/docker-e2e-package/openclaw-current.tgz
          if-no-files-found: error

      - name: Resolve shared Docker E2E image tags
        id: image
        shell: bash
        env:
          PACKAGE_TAG: ${{ steps.package.outputs.tag }}
          SELECTED_SHA: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          PROVIDED_BARE_IMAGE: ${{ inputs.docker_e2e_bare_image }}
          PROVIDED_FUNCTIONAL_IMAGE: ${{ inputs.docker_e2e_functional_image }}
        run: |
          set -euo pipefail
          repository="${GITHUB_REPOSITORY,,}"
          image="ghcr.io/${repository}-docker-e2e:${SELECTED_SHA}"
          image_tag="${PACKAGE_TAG:-$SELECTED_SHA}"
          bare_image="${PROVIDED_BARE_IMAGE:-ghcr.io/${repository}-docker-e2e-bare:${image_tag}}"
          functional_image="${PROVIDED_FUNCTIONAL_IMAGE:-ghcr.io/${repository}-docker-e2e-functional:${image_tag}}"
          image="$functional_image"
          echo "image=$image" >> "$GITHUB_OUTPUT"
          echo "Shared Docker E2E image: \`$image\`" >> "$GITHUB_STEP_SUMMARY"
          echo "bare_image=$bare_image" >> "$GITHUB_OUTPUT"
          echo "functional_image=$functional_image" >> "$GITHUB_OUTPUT"
          echo "Shared Docker E2E bare image: \`$bare_image\`" >> "$GITHUB_STEP_SUMMARY"
          echo "Shared Docker E2E functional image: \`$functional_image\`" >> "$GITHUB_STEP_SUMMARY"

      - name: Log in to GHCR
        if: steps.plan.outputs.needs_e2e_image == '1'
        uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ github.token }}

      - name: Check existing shared Docker E2E images
        id: image_exists
        if: steps.plan.outputs.needs_e2e_image == '1'
        shell: bash
        env:
          PROVIDED_BARE_IMAGE: ${{ inputs.docker_e2e_bare_image }}
          PROVIDED_FUNCTIONAL_IMAGE: ${{ inputs.docker_e2e_functional_image }}
        run: |
          set -euo pipefail
          bare_exists=0
          functional_exists=0
          needs_build=0

          if [[ "${{ steps.plan.outputs.needs_bare_image }}" == "1" ]]; then
            if docker manifest inspect "${{ steps.image.outputs.bare_image }}" >/dev/null 2>&1; then
              bare_exists=1
              echo "Shared Docker E2E bare image already exists: ${{ steps.image.outputs.bare_image }}"
            elif [[ -n "$PROVIDED_BARE_IMAGE" ]]; then
              echo "Provided bare Docker E2E image does not exist: $PROVIDED_BARE_IMAGE" >&2
              exit 1
            else
              needs_build=1
            fi
          fi

          if [[ "${{ steps.plan.outputs.needs_functional_image }}" == "1" ]]; then
            if docker manifest inspect "${{ steps.image.outputs.functional_image }}" >/dev/null 2>&1; then
              functional_exists=1
              echo "Shared Docker E2E functional image already exists: ${{ steps.image.outputs.functional_image }}"
            elif [[ -n "$PROVIDED_FUNCTIONAL_IMAGE" ]]; then
              echo "Provided functional Docker E2E image does not exist: $PROVIDED_FUNCTIONAL_IMAGE" >&2
              exit 1
            else
              needs_build=1
            fi
          fi

          echo "bare_exists=$bare_exists" >> "$GITHUB_OUTPUT"
          echo "functional_exists=$functional_exists" >> "$GITHUB_OUTPUT"
          echo "needs_build=$needs_build" >> "$GITHUB_OUTPUT"

      - name: Setup Docker builder
        if: steps.image_exists.outputs.needs_build == '1'
        uses: useblacksmith/setup-docker-builder@ac083cc84672d01c60d5e8561d0a939b697de542 # v1

      - name: Build and push shared Docker E2E image
        uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0
      - name: Build and push bare Docker E2E image
        if: steps.plan.outputs.needs_bare_image == '1' && steps.image_exists.outputs.bare_exists != '1'
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
        with:
          context: .
          file: ./scripts/e2e/Dockerfile
          target: build
          target: bare
          platforms: linux/amd64
          cache-from: type=gha,scope=docker-e2e
          cache-to: type=gha,mode=max,scope=docker-e2e
          tags: ${{ steps.image.outputs.image }}
          provenance: false
          tags: ${{ steps.image.outputs.bare_image }}
          sbom: true
          provenance: mode=max
          push: true

      - name: Build and push functional Docker E2E image
        if: steps.plan.outputs.needs_functional_image == '1' && steps.image_exists.outputs.functional_exists != '1'
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
        with:
          context: .
          file: ./scripts/e2e/Dockerfile
          target: functional
          build-contexts: |
            openclaw_package=.artifacts/docker-e2e-package
          platforms: linux/amd64
          tags: ${{ steps.image.outputs.functional_image }}
          sbom: true
          provenance: mode=max
          push: true

  validate_live_models_docker:
    name: Docker live models (${{ matrix.provider_label }})
    needs: validate_selected_ref
    if: inputs.include_live_suites
    if: inputs.include_live_suites && inputs.live_model_providers == ''
    runs-on: ubuntu-24.04
    timeout-minutes: 75
    strategy:
@@ -766,6 +1108,163 @@ jobs:
      - name: Run Docker live model sweep
        run: pnpm test:docker:live-models

  validate_live_models_docker_targeted:
    name: Docker live models (selected providers)
    needs: validate_selected_ref
    if: inputs.include_live_suites && inputs.live_model_providers != ''
    runs-on: ubuntu-24.04
    timeout-minutes: 75
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      ANTHROPIC_API_TOKEN: ${{ secrets.ANTHROPIC_API_TOKEN }}
      ANTHROPIC_API_KEY_OLD: ${{ secrets.ANTHROPIC_API_KEY_OLD }}
      BYTEPLUS_API_KEY: ${{ secrets.BYTEPLUS_API_KEY }}
      CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
      DASHSCOPE_API_KEY: ${{ secrets.DASHSCOPE_API_KEY }}
      GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
      KIMI_API_KEY: ${{ secrets.KIMI_API_KEY }}
      MODELSTUDIO_API_KEY: ${{ secrets.MODELSTUDIO_API_KEY }}
      MOONSHOT_API_KEY: ${{ secrets.MOONSHOT_API_KEY }}
      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
      MINIMAX_API_KEY: ${{ secrets.MINIMAX_API_KEY }}
      OPENCODE_API_KEY: ${{ secrets.OPENCODE_API_KEY }}
      OPENCODE_ZEN_API_KEY: ${{ secrets.OPENCODE_ZEN_API_KEY }}
      GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
      GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
      OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
      QWEN_API_KEY: ${{ secrets.QWEN_API_KEY }}
      XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
      ZAI_API_KEY: ${{ secrets.ZAI_API_KEY }}
      Z_AI_API_KEY: ${{ secrets.Z_AI_API_KEY }}
      CLAUDE_CODE_OAUTH_TOKEN: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
      OPENCLAW_CODEX_AUTH_JSON: ${{ secrets.OPENCLAW_CODEX_AUTH_JSON }}
      OPENCLAW_CODEX_CONFIG_TOML: ${{ secrets.OPENCLAW_CODEX_CONFIG_TOML }}
      OPENCLAW_CLAUDE_JSON: ${{ secrets.OPENCLAW_CLAUDE_JSON }}
      OPENCLAW_CLAUDE_CREDENTIALS_JSON: ${{ secrets.OPENCLAW_CLAUDE_CREDENTIALS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
      OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
      REQUESTED_LIVE_MODEL_PROVIDERS: ${{ inputs.live_model_providers }}
      OPENCLAW_VITEST_MAX_WORKERS: "2"
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
        with:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Normalize provider allowlist
        shell: bash
        run: |
          set -euo pipefail

          all_providers=(anthropic google minimax openai opencode-go openrouter xai zai fireworks)

          normalize_provider() {
            local value="${1,,}"
            case "$value" in
              z.ai|z-ai) echo "zai" ;;
              opencode|opencode-go) echo "opencode-go" ;;
              open-router|openrouter) echo "openrouter" ;;
              *) echo "$value" ;;
            esac
          }

          is_known_provider() {
            local value="$1"
            local provider
            for provider in "${all_providers[@]}"; do
              [[ "$provider" == "$value" ]] && return 0
            done
            return 1
          }

          selected=()
          declare -A seen=()
          raw="${REQUESTED_LIVE_MODEL_PROVIDERS:-}"
          normalized_all="${raw,,}"
          normalized_all="${normalized_all//[[:space:],]/}"
          if [[ -z "$normalized_all" || "$normalized_all" == "all" ]]; then
            selected=("${all_providers[@]}")
          else
            while IFS= read -r entry; do
              [[ -z "$entry" ]] && continue
              provider="$(normalize_provider "$entry")"
              if ! is_known_provider "$provider"; then
                echo "Unknown live model provider '${entry}'. Expected one of: ${all_providers[*]}" >&2
                exit 1
              fi
              if [[ -z "${seen[$provider]:-}" ]]; then
                selected+=("$provider")
                seen[$provider]=1
              fi
            done < <(printf '%s\n' "$raw" | tr ',' '\n' | tr '[:space:]' '\n')
          fi

          if [[ "${#selected[@]}" -eq 0 ]]; then
            echo "No live model providers selected." >&2
            exit 1
          fi

          providers_csv="$(IFS=,; echo "${selected[*]}")"
          echo "OPENCLAW_LIVE_PROVIDERS=$providers_csv" >> "$GITHUB_ENV"
          {
            echo "Live model providers: \`$providers_csv\`"
          } >> "$GITHUB_STEP_SUMMARY"
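Reduced to a standalone snippet over a hypothetical input string, the alias folding and de-duplication in the step above behaves like this:

```shell
# Fold provider aliases (z.ai/z-ai -> zai, opencode -> opencode-go,
# open-router -> openrouter), lowercase everything, drop duplicates,
# and join the survivors into a CSV, as the allowlist step does.
normalize_provider() {
  local value="${1,,}"
  case "$value" in
    z.ai|z-ai) echo "zai" ;;
    opencode|opencode-go) echo "opencode-go" ;;
    open-router|openrouter) echo "openrouter" ;;
    *) echo "$value" ;;
  esac
}

selected=()
declare -A seen=()
raw="Z.AI, opencode z-ai"   # hypothetical user input
while IFS= read -r entry; do
  [[ -z "$entry" ]] && continue
  provider="$(normalize_provider "$entry")"
  if [[ -z "${seen[$provider]:-}" ]]; then
    selected+=("$provider")
    seen[$provider]=1
  fi
done < <(printf '%s\n' "$raw" | tr ',' '\n' | tr '[:space:]' '\n')
providers_csv="$(IFS=,; echo "${selected[*]}")"
echo "$providers_csv"   # prints: zai,opencode-go
```

The known-provider check is omitted here for brevity; in the workflow an unknown entry fails the step before anything is selected.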

      - name: Hydrate live auth/profile inputs
        run: bash scripts/ci-hydrate-live-auth.sh

      - name: Validate provider credentials
        shell: bash
        run: |
          set -euo pipefail

          require_any() {
            local label="$1"
            shift
            local key
            for key in "$@"; do
              if [[ -n "${!key:-}" ]]; then
                return 0
              fi
            done
            echo "Missing credential for ${label}: expected one of $*" >&2
            exit 1
          }

          IFS=',' read -r -a providers <<<"${OPENCLAW_LIVE_PROVIDERS}"
          for provider in "${providers[@]}"; do
            case "$provider" in
              anthropic) require_any Anthropic ANTHROPIC_API_KEY ANTHROPIC_API_KEY_OLD ANTHROPIC_API_TOKEN ;;
              google) require_any Google GEMINI_API_KEY GOOGLE_API_KEY ;;
              minimax) require_any MiniMax MINIMAX_API_KEY ;;
              openai) require_any OpenAI OPENAI_API_KEY ;;
              opencode-go) require_any OpenCode OPENCODE_API_KEY OPENCODE_ZEN_API_KEY ;;
              openrouter) require_any OpenRouter OPENROUTER_API_KEY ;;
              xai) require_any xAI XAI_API_KEY ;;
              zai) require_any Z.ai ZAI_API_KEY Z_AI_API_KEY ;;
              fireworks) require_any Fireworks FIREWORKS_API_KEY ;;
              *)
                echo "Unhandled live model provider shard: ${provider}" >&2
                exit 1
                ;;
            esac
          done
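The `require_any` helper above passes if any one of the named environment variables is set. A standalone sketch (with `return 1` substituted for the workflow's `exit 1`, and stand-in `DEMO_KEY_*` variables) shows both outcomes:

```shell
# require_any <label> <var>...: succeed when at least one named
# environment variable is non-empty, via ${!key} indirect expansion.
require_any() {
  local label="$1"
  shift
  local key
  for key in "$@"; do
    if [[ -n "${!key:-}" ]]; then
      return 0
    fi
  done
  echo "Missing credential for ${label}: expected one of $*" >&2
  return 1   # the workflow uses exit 1 here to fail the job
}

export DEMO_KEY_B="set"   # stand-in credential
require_any Demo DEMO_KEY_A DEMO_KEY_B && ok=1 || ok=0
unset DEMO_KEY_B
require_any Demo DEMO_KEY_A DEMO_KEY_B 2>/dev/null && missing_ok=1 || missing_ok=0
```

This lets a provider accept any of several credential spellings (e.g. `ZAI_API_KEY` or `Z_AI_API_KEY`) without requiring all of them.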

      - name: Run Docker live model sweep
        run: pnpm test:docker:live-models

  validate_live_provider_suites:
    needs: validate_selected_ref
    if: inputs.include_live_suites && !inputs.live_models_only

.github/workflows/openclaw-release-checks.yml (139 lines changed, vendored)
@@ -4,7 +4,7 @@ on:
  workflow_dispatch:
    inputs:
      ref:
        description: Existing release tag or current full 40-character workflow-branch commit SHA to validate (for example v2026.4.12 or 0123456789abcdef0123456789abcdef01234567)
        description: Branch, tag, or full commit SHA to validate
        required: true
        type: string
      provider:
@@ -63,8 +63,8 @@ jobs:
          RELEASE_REF: ${{ inputs.ref }}
        run: |
          set -euo pipefail
          if [[ ! "${RELEASE_REF}" =~ ^v[0-9]{4}\.[1-9][0-9]*\.[1-9][0-9]*((-beta\.[1-9][0-9]*)|(-[1-9][0-9]*))?$ ]] && [[ ! "${RELEASE_REF}" =~ ^[0-9a-fA-F]{40}$ ]]; then
            echo "Expected an existing release tag or current full 40-character workflow-branch commit SHA, got: ${RELEASE_REF}" >&2
          if [[ -z "${RELEASE_REF// }" ]] || [[ "${RELEASE_REF}" == -* ]]; then
            echo "Expected a branch, tag, or full commit SHA; got: ${RELEASE_REF}" >&2
            exit 1
          fi

@@ -78,24 +78,27 @@ jobs:
        id: ref
        run: echo "sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"

      - name: Validate selected ref is on workflow branch
      - name: Validate selected ref belongs to this repository
        env:
          RELEASE_REF: ${{ inputs.ref }}
          WORKFLOW_REF_NAME: ${{ github.ref_name }}
        run: |
          set -euo pipefail
          RELEASE_BRANCH_REF="refs/remotes/origin/${WORKFLOW_REF_NAME}"
          git fetch --no-tags origin "+refs/heads/${WORKFLOW_REF_NAME}:refs/remotes/origin/${WORKFLOW_REF_NAME}"
          if [[ "${RELEASE_REF}" =~ ^[0-9a-fA-F]{40}$ ]]; then
            BRANCH_SHA="$(git rev-parse "${RELEASE_BRANCH_REF}")"
            if [[ "$(git rev-parse HEAD)" != "${BRANCH_SHA}" ]]; then
              echo "Commit SHA mode only supports the current ${WORKFLOW_REF_NAME} HEAD. Use a release tag for older commits." >&2
              exit 1
            fi
          else
            git merge-base --is-ancestor HEAD "${RELEASE_BRANCH_REF}"
          SELECTED_SHA="$(git rev-parse HEAD)"
          git fetch --no-tags origin '+refs/heads/*:refs/remotes/origin/*'
          git fetch --tags origin '+refs/tags/*:refs/tags/*'

          if git tag --points-at "${SELECTED_SHA}" | grep -Eq '^v'; then
            exit 0
          fi

          if git for-each-ref --format='%(refname:short)' --contains "${SELECTED_SHA}" refs/remotes/origin | grep -Eq '^origin/'; then
            exit 0
          fi

          echo "Ref '${RELEASE_REF}' resolved to ${SELECTED_SHA}, but that commit is not reachable from an OpenClaw branch or release tag." >&2
          echo "Secret-bearing release checks only run repository-owned branch/tag history, not arbitrary unreferenced commits." >&2
          exit 1

      - name: Capture selected inputs
        id: inputs
        env:
@@ -155,6 +158,7 @@ jobs:
  live_and_e2e_release_checks:
    needs: [resolve_target]
    permissions:
      actions: read
      contents: read
      packages: write
      pull-requests: read
@@ -211,6 +215,69 @@ jobs:
      OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}

  package_acceptance_release_checks:
    name: Run package acceptance
    needs: [resolve_target]
    permissions:
      actions: read
      contents: read
      packages: write
      pull-requests: read
    uses: ./.github/workflows/package-acceptance.yml
    with:
      workflow_ref: ${{ github.ref_name }}
      source: ref
      package_ref: ${{ needs.resolve_target.outputs.ref }}
      suite_profile: package
      telegram_mode: mock-openai
    secrets:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      ANTHROPIC_API_KEY_OLD: ${{ secrets.ANTHROPIC_API_KEY_OLD }}
      ANTHROPIC_API_TOKEN: ${{ secrets.ANTHROPIC_API_TOKEN }}
      BYTEPLUS_API_KEY: ${{ secrets.BYTEPLUS_API_KEY }}
      CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
      DASHSCOPE_API_KEY: ${{ secrets.DASHSCOPE_API_KEY }}
      GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
      KIMI_API_KEY: ${{ secrets.KIMI_API_KEY }}
      MODELSTUDIO_API_KEY: ${{ secrets.MODELSTUDIO_API_KEY }}
      MOONSHOT_API_KEY: ${{ secrets.MOONSHOT_API_KEY }}
      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
      MINIMAX_API_KEY: ${{ secrets.MINIMAX_API_KEY }}
      OPENCODE_API_KEY: ${{ secrets.OPENCODE_API_KEY }}
      OPENCODE_ZEN_API_KEY: ${{ secrets.OPENCODE_ZEN_API_KEY }}
      OPENCLAW_LIVE_BROWSER_CDP_URL: ${{ secrets.OPENCLAW_LIVE_BROWSER_CDP_URL }}
      OPENCLAW_LIVE_SETUP_TOKEN: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN }}
      OPENCLAW_LIVE_SETUP_TOKEN_MODEL: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_MODEL }}
      OPENCLAW_LIVE_SETUP_TOKEN_PROFILE: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_PROFILE }}
      OPENCLAW_LIVE_SETUP_TOKEN_VALUE: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_VALUE }}
      GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
      GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
      OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
      QWEN_API_KEY: ${{ secrets.QWEN_API_KEY }}
      FAL_KEY: ${{ secrets.FAL_KEY }}
      RUNWAY_API_KEY: ${{ secrets.RUNWAY_API_KEY }}
      DEEPGRAM_API_KEY: ${{ secrets.DEEPGRAM_API_KEY }}
      TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
      VYDRA_API_KEY: ${{ secrets.VYDRA_API_KEY }}
      XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
      ZAI_API_KEY: ${{ secrets.ZAI_API_KEY }}
      Z_AI_API_KEY: ${{ secrets.Z_AI_API_KEY }}
      BYTEPLUS_ACCESS_KEY_ID: ${{ secrets.BYTEPLUS_ACCESS_KEY_ID }}
      BYTEPLUS_SECRET_ACCESS_KEY: ${{ secrets.BYTEPLUS_SECRET_ACCESS_KEY }}
      CLAUDE_CODE_OAUTH_TOKEN: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
      OPENCLAW_CODEX_AUTH_JSON: ${{ secrets.OPENCLAW_CODEX_AUTH_JSON }}
      OPENCLAW_CODEX_CONFIG_TOML: ${{ secrets.OPENCLAW_CODEX_CONFIG_TOML }}
      OPENCLAW_CLAUDE_JSON: ${{ secrets.OPENCLAW_CLAUDE_JSON }}
      OPENCLAW_CLAUDE_CREDENTIALS_JSON: ${{ secrets.OPENCLAW_CLAUDE_CREDENTIALS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
      OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
      OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
      OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}

  qa_lab_parity_release_checks:
    name: Run QA Lab parity gate
    needs: [resolve_target]
@@ -332,6 +399,7 @@ jobs:
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
          OPENCLAW_QA_MATRIX_NO_REPLY_WINDOW_MS: "3000"
        run: |
          set -euo pipefail

@@ -344,7 +412,9 @@ jobs:
            --provider-mode live-frontier \
            --model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --alt-model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --fast
            --profile fast \
            --fast \
            --fail-fast

      - name: Upload Matrix QA artifacts
        if: always()
@@ -438,3 +508,40 @@ jobs:
          path: ${{ steps.run_lane.outputs.output_dir }}
          retention-days: 14
          if-no-files-found: warn

  summary:
    name: Verify release checks
    needs:
      - install_smoke_release_checks
      - cross_os_release_checks
      - live_and_e2e_release_checks
      - package_acceptance_release_checks
      - qa_lab_parity_release_checks
      - qa_live_matrix_release_checks
      - qa_live_telegram_release_checks
    if: always()
    runs-on: ubuntu-24.04
    timeout-minutes: 5
    steps:
      - name: Verify release check results
        shell: bash
        run: |
          set -euo pipefail
          failed=0
          for item in \
            "install_smoke_release_checks=${{ needs.install_smoke_release_checks.result }}" \
            "cross_os_release_checks=${{ needs.cross_os_release_checks.result }}" \
            "live_and_e2e_release_checks=${{ needs.live_and_e2e_release_checks.result }}" \
            "package_acceptance_release_checks=${{ needs.package_acceptance_release_checks.result }}" \
            "qa_lab_parity_release_checks=${{ needs.qa_lab_parity_release_checks.result }}" \
            "qa_live_matrix_release_checks=${{ needs.qa_live_matrix_release_checks.result }}"\
            "qa_live_telegram_release_checks=${{ needs.qa_live_telegram_release_checks.result }}"
          do
            name="${item%%=*}"
            result="${item#*=}"
            if [[ "$result" != "success" && "$result" != "skipped" ]]; then
              echo "::error::${name} ended with ${result}"
              failed=1
            fi
          done
          exit "$failed"
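The gate treats `success` and `skipped` as passing and anything else (`failure`, `cancelled`) as a failure. A standalone sketch over hypothetical `job=result` pairs:

```shell
# Scan name=result pairs as the verify step does: split on the first
# '=', then flag any result that is neither success nor skipped.
failed=0
for item in "job_a=success" "job_b=skipped" "job_c=failure"; do
  name="${item%%=*}"    # part before the first '='
  result="${item#*=}"   # part after the first '='
  if [[ "$result" != "success" && "$result" != "skipped" ]]; then
    failed=1
  fi
done
```

Accepting `skipped` is what lets conditionally disabled lanes (e.g. live suites turned off for a run) pass the aggregate gate.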

.github/workflows/package-acceptance.yml (517 lines, vendored, new file)
@@ -0,0 +1,517 @@
|
||||
name: Package Acceptance
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
workflow_ref:
|
||||
description: Trusted repo ref for workflow scripts and Docker E2E harness
|
||||
required: true
|
||||
default: main
|
||||
type: string
|
||||
source:
|
||||
description: Package candidate source
|
||||
required: true
|
||||
default: npm
|
||||
type: choice
|
||||
options:
|
||||
- npm
|
||||
- ref
|
||||
- url
|
||||
- artifact
|
||||
package_ref:
|
||||
description: Trusted package source ref when source=ref
|
||||
required: true
|
||||
default: main
|
||||
type: string
|
||||
package_spec:
|
||||
description: Published package spec when source=npm
|
||||
required: false
|
||||
default: openclaw@beta
|
||||
type: string
|
||||
package_url:
|
||||
description: HTTPS .tgz URL when source=url
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
package_sha256:
|
||||
description: Expected package SHA-256; required for source=url
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
artifact_run_id:
|
||||
description: GitHub Actions run id when source=artifact
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
artifact_name:
|
||||
description: Artifact name containing one .tgz when source=artifact
|
||||
required: false
|
||||
default: package-under-test
|
||||
type: string
|
||||
suite_profile:
|
||||
description: Acceptance profile
|
||||
required: true
|
||||
default: package
|
||||
type: choice
|
||||
options:
|
||||
- smoke
|
||||
- package
|
||||
- product
|
||||
- full
|
||||
- custom
|
||||
docker_lanes:
|
||||
description: Comma/space separated Docker lanes when suite_profile=custom
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
telegram_mode:
|
||||
description: Optional Telegram QA lane for the resolved package candidate
|
||||
required: true
|
||||
default: none
|
||||
type: choice
|
||||
options:
|
||||
- none
|
||||
- mock-openai
|
||||
- live-frontier
|
||||
workflow_call:
|
||||
inputs:
|
||||
workflow_ref:
|
||||
description: Trusted repo ref for workflow scripts and Docker E2E harness
|
||||
required: false
|
||||
default: main
|
||||
type: string
|
||||
source:
|
||||
description: "Package candidate source: npm, ref, url, or artifact"
|
||||
required: true
|
||||
type: string
|
||||
package_ref:
|
||||
description: Trusted package source ref when source=ref
|
||||
required: false
|
||||
default: main
|
||||
type: string
|
||||
package_spec:
|
||||
description: Published package spec when source=npm
|
||||
required: false
|
||||
default: openclaw@beta
|
||||
type: string
|
||||
package_url:
|
||||
description: HTTPS .tgz URL when source=url
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
package_sha256:
|
||||
description: Expected package SHA-256; required for source=url
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
artifact_run_id:
|
||||
description: GitHub Actions run id when source=artifact
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
artifact_name:
|
||||
description: Artifact name containing one .tgz when source=artifact
|
||||
required: false
|
||||
default: package-under-test
|
||||
type: string
|
||||
suite_profile:
|
||||
description: "Acceptance profile: smoke, package, product, full, or custom"
|
||||
required: false
|
||||
default: package
|
||||
type: string
|
||||
docker_lanes:
|
||||
description: Comma/space separated Docker lanes when suite_profile=custom
|
||||
required: false
|
||||
default: ""
|
||||
type: string
|
||||
telegram_mode:
|
||||
description: Optional Telegram QA lane for the resolved package candidate
|
||||
required: false
|
||||
default: none
|
||||
type: string
|
||||
secrets:
|
||||
OPENAI_API_KEY:
|
||||
required: false
|
||||
OPENAI_BASE_URL:
|
||||
required: false
|
||||
ANTHROPIC_API_KEY:
|
||||
required: false
|
||||
ANTHROPIC_API_KEY_OLD:
|
||||
required: false
|
||||
ANTHROPIC_API_TOKEN:
|
||||
required: false
|
||||
BYTEPLUS_API_KEY:
|
||||
required: false
|
||||
CEREBRAS_API_KEY:
|
||||
required: false
|
||||
DASHSCOPE_API_KEY:
|
||||
required: false
|
||||
GROQ_API_KEY:
|
||||
required: false
|
||||
KIMI_API_KEY:
|
||||
required: false
|
||||
MODELSTUDIO_API_KEY:
|
||||
required: false
|
||||
MOONSHOT_API_KEY:
|
||||
required: false
|
||||
MISTRAL_API_KEY:
|
||||
required: false
|
||||
MINIMAX_API_KEY:
|
||||
required: false
|
||||
OPENCODE_API_KEY:
|
||||
required: false
|
||||
OPENCODE_ZEN_API_KEY:
|
||||
required: false
|
||||
OPENCLAW_LIVE_BROWSER_CDP_URL:
|
||||
required: false
|
||||
OPENCLAW_LIVE_SETUP_TOKEN:
|
||||
required: false
|
||||
OPENCLAW_LIVE_SETUP_TOKEN_MODEL:
|
||||
required: false
|
||||
OPENCLAW_LIVE_SETUP_TOKEN_PROFILE:
|
||||
required: false
|
||||
OPENCLAW_LIVE_SETUP_TOKEN_VALUE:
|
||||
required: false
|
||||
GEMINI_API_KEY:
|
||||
required: false
|
||||
GOOGLE_API_KEY:
|
||||
required: false
|
||||
OPENROUTER_API_KEY:
|
||||
required: false
|
||||
QWEN_API_KEY:
|
||||
required: false
|
||||
FAL_KEY:
|
||||
required: false
|
||||
RUNWAY_API_KEY:
|
||||
required: false
|
||||
DEEPGRAM_API_KEY:
|
||||
required: false
|
||||
TOGETHER_API_KEY:
|
||||
required: false
|
||||
VYDRA_API_KEY:
|
||||
required: false
|
||||
XAI_API_KEY:
|
||||
required: false
|
||||
ZAI_API_KEY:
|
||||
required: false
|
||||
Z_AI_API_KEY:
|
||||
required: false
|
||||
BYTEPLUS_ACCESS_KEY_ID:
|
||||
required: false
|
||||
BYTEPLUS_SECRET_ACCESS_KEY:
|
||||
required: false
|
||||
CLAUDE_CODE_OAUTH_TOKEN:
|
||||
required: false
|
||||
OPENCLAW_CODEX_AUTH_JSON:
|
||||
required: false
|
      OPENCLAW_CODEX_CONFIG_TOML:
        required: false
      OPENCLAW_CLAUDE_JSON:
        required: false
      OPENCLAW_CLAUDE_CREDENTIALS_JSON:
        required: false
      OPENCLAW_CLAUDE_SETTINGS_JSON:
        required: false
      OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON:
        required: false
      OPENCLAW_GEMINI_SETTINGS_JSON:
        required: false
      FIREWORKS_API_KEY:
        required: false
      OPENCLAW_QA_CONVEX_SITE_URL:
        required: false
      OPENCLAW_QA_CONVEX_SECRET_CI:
        required: false

permissions:
  actions: read
  contents: read
  packages: write
  pull-requests: read

concurrency:
  group: package-acceptance-${{ github.run_id }}
  cancel-in-progress: false

env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
  NODE_VERSION: "24.x"
  PNPM_VERSION: "10.33.0"
  PACKAGE_ARTIFACT_NAME: package-under-test

jobs:
  resolve_package:
    name: Resolve package candidate
    runs-on: ubuntu-24.04
    timeout-minutes: 60
    outputs:
      docker_lanes: ${{ steps.profile.outputs.docker_lanes }}
      include_live_suites: ${{ steps.profile.outputs.include_live_suites }}
      include_openwebui: ${{ steps.profile.outputs.include_openwebui }}
      include_release_path_suites: ${{ steps.profile.outputs.include_release_path_suites }}
      package_artifact_name: ${{ steps.profile.outputs.package_artifact_name }}
      package_sha256: ${{ steps.resolve.outputs.sha256 }}
      package_version: ${{ steps.resolve.outputs.package_version }}
      telegram_enabled: ${{ steps.profile.outputs.telegram_enabled }}
      telegram_mode: ${{ steps.profile.outputs.telegram_mode }}
    steps:
      - name: Checkout package workflow ref
        uses: actions/checkout@v6
        with:
          ref: ${{ inputs.workflow_ref }}
          fetch-depth: 0

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: ${{ inputs.source == 'ref' && 'true' || 'false' }}
          install-deps: "false"

      - name: Download package artifact input
        if: inputs.source == 'artifact'
        env:
          GH_TOKEN: ${{ github.token }}
          ARTIFACT_RUN_ID: ${{ inputs.artifact_run_id }}
          ARTIFACT_NAME: ${{ inputs.artifact_name }}
        shell: bash
        run: |
          set -euo pipefail
          if [[ -z "${ARTIFACT_RUN_ID// }" ]]; then
            echo "artifact_run_id is required when source=artifact." >&2
            exit 1
          fi
          if [[ -z "${ARTIFACT_NAME// }" ]]; then
            echo "artifact_name is required when source=artifact." >&2
            exit 1
          fi
          mkdir -p .artifacts/package-candidate-input
          gh run download "$ARTIFACT_RUN_ID" -n "$ARTIFACT_NAME" -D .artifacts/package-candidate-input
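The guards in the download step above reject unset and whitespace-only inputs via `${VAR// }`, which deletes every space from the value before the emptiness test. A standalone sketch of the same check (the `require_nonblank` helper name is ours, not something the workflow defines):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mirrors the workflow's `[[ -z "${ARTIFACT_RUN_ID// }" ]]` guard: the
# expansion strips all spaces, so "   " and "" both fail the check.
# Note it strips spaces only, not tabs or newlines.
require_nonblank() {
  local name="$1" value="$2"
  if [[ -z "${value// }" ]]; then
    echo "${name} is required." >&2
    return 1
  fi
}

require_nonblank artifact_run_id "1234567890"   # non-blank: passes
! require_nonblank artifact_run_id "   "        # spaces only: rejected
! require_nonblank artifact_name ""             # empty: rejected
echo ok
```

In the workflow the same pattern is reused for `docker_lanes` under the `custom` profile, so blank dispatch inputs fail fast instead of producing an empty lane matrix.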
      - name: Resolve package candidate
        id: resolve
        env:
          SOURCE: ${{ inputs.source }}
          PACKAGE_REF: ${{ inputs.package_ref }}
          PACKAGE_SPEC: ${{ inputs.package_spec }}
          PACKAGE_URL: ${{ inputs.package_url }}
          PACKAGE_SHA256: ${{ inputs.package_sha256 }}
        shell: bash
        run: |
          set -euo pipefail
          artifact_dir=""
          if [[ "$SOURCE" == "artifact" ]]; then
            artifact_dir=".artifacts/package-candidate-input"
          fi

          node scripts/resolve-openclaw-package-candidate.mjs \
            --source "$SOURCE" \
            --package-ref "$PACKAGE_REF" \
            --package-spec "$PACKAGE_SPEC" \
            --package-url "$PACKAGE_URL" \
            --package-sha256 "$PACKAGE_SHA256" \
            --artifact-dir "${artifact_dir:-.}" \
            --output-dir .artifacts/docker-e2e-package \
            --output-name openclaw-current.tgz \
            --metadata .artifacts/docker-e2e-package/package-candidate.json \
            --github-output "$GITHUB_OUTPUT"

      - name: Select acceptance profile
        id: profile
        env:
          SOURCE: ${{ inputs.source }}
          SUITE_PROFILE: ${{ inputs.suite_profile }}
          CUSTOM_DOCKER_LANES: ${{ inputs.docker_lanes }}
          TELEGRAM_MODE: ${{ inputs.telegram_mode }}
        shell: bash
        run: |
          set -euo pipefail

          include_release_path_suites=false
          include_openwebui=false
          include_live_suites=false
          docker_lanes=""

          case "$SUITE_PROFILE" in
            smoke)
              docker_lanes="npm-onboard-channel-agent gateway-network config-reload"
              ;;
            package)
              docker_lanes="npm-onboard-channel-agent doctor-switch update-channel-switch bundled-channel-deps-compat plugins-offline plugin-update"
              ;;
            product)
              docker_lanes="npm-onboard-channel-agent doctor-switch update-channel-switch bundled-channel-deps-compat plugins plugin-update mcp-channels cron-mcp-cleanup openai-web-search-minimal openwebui"
              include_openwebui=true
              ;;
            full)
              include_release_path_suites=true
              include_openwebui=true
              ;;
            custom)
              docker_lanes="$CUSTOM_DOCKER_LANES"
              if [[ -z "${docker_lanes// }" ]]; then
                echo "docker_lanes is required when suite_profile=custom." >&2
                exit 1
              fi
              if [[ "$docker_lanes" == *"openwebui"* ]]; then
                include_openwebui=true
              fi
              ;;
            *)
              echo "Unknown suite_profile: $SUITE_PROFILE" >&2
              exit 1
              ;;
          esac

          telegram_enabled=false
          if [[ "$TELEGRAM_MODE" != "none" ]]; then
            telegram_enabled=true
          fi

          {
            echo "docker_lanes=$docker_lanes"
            echo "include_release_path_suites=$include_release_path_suites"
            echo "include_openwebui=$include_openwebui"
            echo "include_live_suites=$include_live_suites"
            echo "telegram_enabled=$telegram_enabled"
            echo "telegram_mode=$TELEGRAM_MODE"
            echo "package_artifact_name=${PACKAGE_ARTIFACT_NAME}"
          } >> "$GITHUB_OUTPUT"
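The profile switch above maps each `suite_profile` to a fixed docker-lane list, with `custom` passing lanes through and rejecting blank input. A trimmed, runnable sketch of that mapping (only the smoke lane list is reproduced verbatim; the other named profiles are elided, and `lanes_for_profile` is our name for the helper):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Named profiles pin a lane list; `custom` passes lanes through but must
# be non-empty; anything else is an error, as in the workflow's case switch.
lanes_for_profile() {
  local profile="$1" custom="${2-}"
  case "$profile" in
    smoke)
      echo "npm-onboard-channel-agent gateway-network config-reload"
      ;;
    custom)
      if [[ -z "${custom// }" ]]; then
        echo "docker_lanes is required when suite_profile=custom." >&2
        return 1
      fi
      echo "$custom"
      ;;
    *)
      echo "Unknown suite_profile: $profile" >&2
      return 1
      ;;
  esac
}

[[ "$(lanes_for_profile smoke)" == *"gateway-network"* ]]
[[ "$(lanes_for_profile custom "openwebui plugins")" == "openwebui plugins" ]]
echo ok
```

The real step additionally flips `include_openwebui` when the lane string mentions `openwebui`, so the reusable workflow only provisions Open WebUI services for profiles that exercise them.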
      - name: Upload package-under-test artifact
        uses: actions/upload-artifact@v7
        with:
          name: ${{ env.PACKAGE_ARTIFACT_NAME }}
          path: |
            .artifacts/docker-e2e-package/openclaw-current.tgz
            .artifacts/docker-e2e-package/package-candidate.json
          retention-days: 14
          if-no-files-found: error

      - name: Summarize package candidate
        env:
          PACKAGE_SHA256: ${{ steps.resolve.outputs.sha256 }}
          PACKAGE_VERSION: ${{ steps.resolve.outputs.package_version }}
          PACKAGE_REF: ${{ inputs.package_ref }}
          SOURCE: ${{ inputs.source }}
          SUITE_PROFILE: ${{ inputs.suite_profile }}
          WORKFLOW_REF: ${{ inputs.workflow_ref }}
        shell: bash
        run: |
          {
            echo "## Package acceptance"
            echo
            echo "- Source: \`${SOURCE}\`"
            echo "- Workflow ref: \`${WORKFLOW_REF}\`"
            if [[ "${SOURCE}" == "ref" ]]; then
              echo "- Package ref: \`${PACKAGE_REF}\`"
            fi
            echo "- Version: \`${PACKAGE_VERSION}\`"
            echo "- SHA-256: \`${PACKAGE_SHA256}\`"
            echo "- Profile: \`${SUITE_PROFILE}\`"
          } >> "$GITHUB_STEP_SUMMARY"

  docker_acceptance:
    name: Docker product acceptance
    needs: resolve_package
    uses: ./.github/workflows/openclaw-live-and-e2e-checks-reusable.yml
    with:
      ref: ${{ inputs.workflow_ref }}
      include_repo_e2e: false
      include_release_path_suites: ${{ needs.resolve_package.outputs.include_release_path_suites == 'true' }}
      include_openwebui: ${{ needs.resolve_package.outputs.include_openwebui == 'true' }}
      docker_lanes: ${{ needs.resolve_package.outputs.docker_lanes }}
      package_artifact_name: ${{ needs.resolve_package.outputs.package_artifact_name }}
      include_live_suites: ${{ needs.resolve_package.outputs.include_live_suites == 'true' }}
      live_models_only: false
    secrets:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      ANTHROPIC_API_KEY_OLD: ${{ secrets.ANTHROPIC_API_KEY_OLD }}
      ANTHROPIC_API_TOKEN: ${{ secrets.ANTHROPIC_API_TOKEN }}
      BYTEPLUS_API_KEY: ${{ secrets.BYTEPLUS_API_KEY }}
      CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
      DASHSCOPE_API_KEY: ${{ secrets.DASHSCOPE_API_KEY }}
      GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
      KIMI_API_KEY: ${{ secrets.KIMI_API_KEY }}
      MODELSTUDIO_API_KEY: ${{ secrets.MODELSTUDIO_API_KEY }}
      MOONSHOT_API_KEY: ${{ secrets.MOONSHOT_API_KEY }}
      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
      MINIMAX_API_KEY: ${{ secrets.MINIMAX_API_KEY }}
      OPENCODE_API_KEY: ${{ secrets.OPENCODE_API_KEY }}
      OPENCODE_ZEN_API_KEY: ${{ secrets.OPENCODE_ZEN_API_KEY }}
      OPENCLAW_LIVE_BROWSER_CDP_URL: ${{ secrets.OPENCLAW_LIVE_BROWSER_CDP_URL }}
      OPENCLAW_LIVE_SETUP_TOKEN: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN }}
      OPENCLAW_LIVE_SETUP_TOKEN_MODEL: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_MODEL }}
      OPENCLAW_LIVE_SETUP_TOKEN_PROFILE: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_PROFILE }}
      OPENCLAW_LIVE_SETUP_TOKEN_VALUE: ${{ secrets.OPENCLAW_LIVE_SETUP_TOKEN_VALUE }}
      GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
      GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
      OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
      QWEN_API_KEY: ${{ secrets.QWEN_API_KEY }}
      FAL_KEY: ${{ secrets.FAL_KEY }}
      RUNWAY_API_KEY: ${{ secrets.RUNWAY_API_KEY }}
      DEEPGRAM_API_KEY: ${{ secrets.DEEPGRAM_API_KEY }}
      TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
      VYDRA_API_KEY: ${{ secrets.VYDRA_API_KEY }}
      XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
      ZAI_API_KEY: ${{ secrets.ZAI_API_KEY }}
      Z_AI_API_KEY: ${{ secrets.Z_AI_API_KEY }}
      BYTEPLUS_ACCESS_KEY_ID: ${{ secrets.BYTEPLUS_ACCESS_KEY_ID }}
      BYTEPLUS_SECRET_ACCESS_KEY: ${{ secrets.BYTEPLUS_SECRET_ACCESS_KEY }}
      CLAUDE_CODE_OAUTH_TOKEN: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
      OPENCLAW_CODEX_AUTH_JSON: ${{ secrets.OPENCLAW_CODEX_AUTH_JSON }}
      OPENCLAW_CODEX_CONFIG_TOML: ${{ secrets.OPENCLAW_CODEX_CONFIG_TOML }}
      OPENCLAW_CLAUDE_JSON: ${{ secrets.OPENCLAW_CLAUDE_JSON }}
      OPENCLAW_CLAUDE_CREDENTIALS_JSON: ${{ secrets.OPENCLAW_CLAUDE_CREDENTIALS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
      OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
      OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}

  package_telegram:
    name: Telegram package acceptance
    needs: resolve_package
    if: needs.resolve_package.outputs.telegram_enabled == 'true'
    uses: ./.github/workflows/npm-telegram-beta-e2e.yml
    with:
      package_spec: ${{ inputs.package_spec }}
      package_artifact_name: ${{ needs.resolve_package.outputs.package_artifact_name }}
      package_label: openclaw@${{ needs.resolve_package.outputs.package_version }}
      provider_mode: ${{ needs.resolve_package.outputs.telegram_mode }}
    secrets:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
      OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}

  summary:
    name: Verify package acceptance
    needs: [resolve_package, docker_acceptance, package_telegram]
    if: always()
    runs-on: ubuntu-24.04
    timeout-minutes: 5
    steps:
      - name: Verify package acceptance results
        env:
          DOCKER_RESULT: ${{ needs.docker_acceptance.result }}
          PACKAGE_TELEGRAM_RESULT: ${{ needs.package_telegram.result }}
          RESOLVE_RESULT: ${{ needs.resolve_package.result }}
        shell: bash
        run: |
          set -euo pipefail
          failed=0
          for item in \
            "resolve_package=${RESOLVE_RESULT}" \
            "docker_acceptance=${DOCKER_RESULT}" \
            "package_telegram=${PACKAGE_TELEGRAM_RESULT}"
          do
            name="${item%%=*}"
            result="${item#*=}"
            if [[ "$result" != "success" && "$result" != "skipped" ]]; then
              echo "::error::${name} ended with ${result}"
              failed=1
            fi
          done
          exit "$failed"
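The `summary` job's pass rule treats `success` and `skipped` as passing and anything else (`failure`, `cancelled`) as failing, which is why the optional Telegram job can be skipped without breaking the gate. The aggregation loop can be exercised standalone with stubbed results; `verify_results` is our name for the extracted loop, not a workflow helper:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Same name=result parsing and pass/fail rule as the summary job:
# %%=* keeps the text before the first '=', #*= keeps the text after it.
verify_results() {
  local failed=0 item name result
  for item in "$@"; do
    name="${item%%=*}"
    result="${item#*=}"
    if [[ "$result" != "success" && "$result" != "skipped" ]]; then
      echo "::error::${name} ended with ${result}" >&2
      failed=1
    fi
  done
  return "$failed"
}

verify_results "resolve_package=success" "package_telegram=skipped"     # passes
! verify_results "docker_acceptance=failure" "resolve_package=success"  # fails
echo ok
```

Running the gate as a separate `if: always()` job means a single required status check can represent the whole fan-out, including jobs that were legitimately skipped.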
97 .github/workflows/qa-live-transports-convex.yml vendored
@@ -18,6 +18,19 @@ on:
        description: Optional comma-separated Discord scenario ids
        required: false
        type: string
      matrix_profile:
        description: Matrix QA profile for the live Matrix lane
        required: false
        default: all
        type: choice
        options:
          - fast
          - all
          - transport
          - media
          - e2ee-smoke
          - e2ee-deep
          - e2ee-cli

permissions:
  contents: read
@@ -199,6 +212,7 @@ jobs:
  run_live_matrix:
    name: Run Matrix live QA lane
    needs: [authorize_actor, validate_selected_ref]
    if: ${{ !(github.event_name == 'workflow_dispatch' && inputs.matrix_profile == 'all') }}
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 60
    environment: qa-live-shared
@@ -236,7 +250,9 @@ jobs:
        shell: bash
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          INPUT_MATRIX_PROFILE: ${{ github.event_name == 'workflow_dispatch' && inputs.matrix_profile || 'fast' }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
          OPENCLAW_QA_MATRIX_NO_REPLY_WINDOW_MS: "3000"
        run: |
          set -euo pipefail

@@ -249,7 +265,9 @@ jobs:
            --provider-mode live-frontier \
            --model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --alt-model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --fast
            --profile "${INPUT_MATRIX_PROFILE}" \
            --fast \
            --fail-fast

      - name: Upload Matrix QA artifacts
        if: always()
@@ -260,6 +278,83 @@ jobs:
          retention-days: 14
          if-no-files-found: warn

  run_live_matrix_sharded:
    name: Run Matrix live QA lane (${{ matrix.profile }})
    needs: [authorize_actor, validate_selected_ref]
    if: ${{ github.event_name == 'workflow_dispatch' && inputs.matrix_profile == 'all' }}
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 60
    environment: qa-live-shared
    strategy:
      fail-fast: false
      matrix:
        profile:
          - transport
          - media
          - e2ee-smoke
          - e2ee-deep
          - e2ee-cli
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
        with:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Validate required QA credential env
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        shell: bash
        run: |
          set -euo pipefail

          if [[ -z "${OPENAI_API_KEY:-}" ]]; then
            echo "Missing required OPENAI_API_KEY." >&2
            exit 1
          fi

      - name: Build private QA runtime
        run: pnpm build

      - name: Run Matrix live lane shard
        id: run_lane
        shell: bash
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
          OPENCLAW_QA_MATRIX_NO_REPLY_WINDOW_MS: "3000"
        run: |
          set -euo pipefail

          output_dir=".artifacts/qa-e2e/matrix-live-${{ matrix.profile }}-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
          echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"

          pnpm openclaw qa matrix \
            --repo-root . \
            --output-dir "${output_dir}" \
            --provider-mode live-frontier \
            --model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --alt-model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --profile "${{ matrix.profile }}" \
            --fast \
            --fail-fast

      - name: Upload Matrix QA shard artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: qa-live-matrix-${{ matrix.profile }}-${{ github.run_id }}-${{ github.run_attempt }}
          path: ${{ steps.run_lane.outputs.output_dir }}
          retention-days: 14
          if-no-files-found: warn

  run_live_telegram:
    name: Run Telegram live QA lane with Convex leases
    needs: [authorize_actor, validate_selected_ref]
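Each shard of the `run_live_matrix_sharded` job above writes its artifacts to a profile- and run-scoped directory and exports the path via `GITHUB_OUTPUT` so the upload step can pick it up. A minimal sketch of that naming scheme (the run id and attempt values are stand-ins):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Builds the same per-shard output path the workflow uses, so parallel
# matrix shards and re-run attempts never collide on an artifact directory.
shard_output_dir() {
  local profile="$1" run_id="$2" attempt="$3"
  echo ".artifacts/qa-e2e/matrix-live-${profile}-${run_id}-${attempt}"
}

dir="$(shard_output_dir e2ee-smoke 123456789 1)"
[[ "$dir" == ".artifacts/qa-e2e/matrix-live-e2ee-smoke-123456789-1" ]]
# In the workflow this value is appended to "$GITHUB_OUTPUT" as
# output_dir=..., and the upload step reads steps.run_lane.outputs.output_dir.
echo ok
```

Scoping the directory by `GITHUB_RUN_ATTEMPT` as well as `GITHUB_RUN_ID` matters for re-runs: the artifact name uses the same pair, so a retried shard uploads a fresh artifact instead of clobbering the first attempt's.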
16 .github/workflows/stale.yml vendored
@@ -41,7 +41,7 @@ jobs:
          days-before-pr-close: 3
          stale-issue-label: stale
          stale-pr-label: stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale,bad-barnacle
          exempt-pr-labels: maintainer,no-stale,bad-barnacle
          operations-per-run: 2000
          ascending: true
@@ -60,7 +60,7 @@ jobs:
          close-issue-reason: not_planned
          close-pr-message: |
            Closing due to inactivity.
            If you believe this PR should be revived, post in #pr-thunderdome-dangerzone on Discord to talk to a maintainer.
            If you believe this PR should be revived, post in #clawtributors on Discord to talk to a maintainer.
            That channel is the escape hatch for high-quality PRs that get auto-closed.
      - name: Mark stale assigned issues (primary)
        id: assigned-issue-stale-primary
@@ -73,7 +73,7 @@ jobs:
          days-before-pr-stale: -1
          days-before-pr-close: -1
          stale-issue-label: stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale,bad-barnacle
          operations-per-run: 2000
          ascending: true
          include-only-assigned: true
@@ -108,7 +108,7 @@ jobs:
            Please add updates or it will be closed.
          close-pr-message: |
            Closing due to inactivity.
            If you believe this PR should be revived, post in #pr-thunderdome-dangerzone on Discord to talk to a maintainer.
            If you believe this PR should be revived, post in #clawtributors on Discord to talk to a maintainer.
            That channel is the escape hatch for high-quality PRs that get auto-closed.
      - name: Check stale state cache
        id: stale-state
@@ -145,7 +145,7 @@ jobs:
          days-before-pr-close: 3
          stale-issue-label: stale
          stale-pr-label: stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale,bad-barnacle
          exempt-pr-labels: maintainer,no-stale,bad-barnacle
          operations-per-run: 2000
          ascending: true
@@ -164,7 +164,7 @@ jobs:
          close-issue-reason: not_planned
          close-pr-message: |
            Closing due to inactivity.
            If you believe this PR should be revived, post in #pr-thunderdome-dangerzone on Discord to talk to a maintainer.
            If you believe this PR should be revived, post in #clawtributors on Discord to talk to a maintainer.
            That channel is the escape hatch for high-quality PRs that get auto-closed.
      - name: Mark stale assigned issues (fallback)
        if: (steps.assigned-issue-stale-primary.outcome == 'failure' || steps.stale-state.outputs.has_state == 'true') && steps.app-token-fallback.outputs.token != ''
@@ -176,7 +176,7 @@ jobs:
          days-before-pr-stale: -1
          days-before-pr-close: -1
          stale-issue-label: stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale
          exempt-issue-labels: enhancement,maintainer,pinned,security,no-stale,bad-barnacle
          operations-per-run: 2000
          ascending: true
          include-only-assigned: true
@@ -210,7 +210,7 @@ jobs:
            Please add updates or it will be closed.
          close-pr-message: |
            Closing due to inactivity.
            If you believe this PR should be revived, post in #pr-thunderdome-dangerzone on Discord to talk to a maintainer.
            If you believe this PR should be revived, post in #clawtributors on Discord to talk to a maintainer.
            That channel is the escape hatch for high-quality PRs that get auto-closed.

  lock-closed-issues:
2 .gitignore vendored
@@ -118,6 +118,8 @@ USER.md
!.agents/skills/openclaw-test-heap-leaks/**
!.agents/skills/openclaw-test-performance/
!.agents/skills/openclaw-test-performance/**
!.agents/skills/openclaw-testing/
!.agents/skills/openclaw-testing/**
!.agents/skills/optimizetests/
!.agents/skills/optimizetests/**
!.agents/skills/parallels-discord-roundtrip/
13 AGENTS.md
@@ -29,6 +29,7 @@ Telegraph style. Root rules only. Read scoped `AGENTS.md` before subtree work.
- Extension prod code: no core `src/**`, `src/plugin-sdk-internal/**`, other extension `src/**`, or relative outside package.
- Core/tests: no deep plugin internals (`extensions/*/src/**`, `onboard.js`). Use `api.ts`, SDK facade, generic contracts.
- Extension-owned behavior stays extension-owned: repair, detection, onboarding, auth/provider defaults, provider tools/settings.
- Owner boundary: fix owner-specific behavior in the owner module. Shared/core gets generic seams only; no owner ids, dependency strings, defaults, migrations, or recovery policy. If a bug names an extension or its dependency, start in that extension and add a generic core seam only when multiple owners need it.
- Legacy config repair: doctor/fix paths, not startup/load-time core migrations.
- Core test asserting extension-specific behavior: move to owner extension or generic contract test.
- New seams: backwards-compatible, documented, versioned. Third-party plugins exist.
@@ -49,15 +50,21 @@ Telegraph style. Root rules only. Read scoped `AGENTS.md` before subtree work.
- Prod sweep: `pnpm check`; tests: `pnpm test`, `pnpm test:changed`, `pnpm test:serial`, `pnpm test:coverage`.
- Extension tests: `pnpm test:extensions`, `pnpm test extensions`, `pnpm test extensions/<id>`.
- Targeted tests: `pnpm test <path-or-filter> [vitest args...]`; never raw `vitest`.
- Vitest flags only; no Jest flags like `--runInBand`. For serial runs use `pnpm test:serial` or `OPENCLAW_VITEST_MAX_WORKERS=1 pnpm test ...`.
- Typecheck: `tsgo` lanes only (`pnpm tsgo*`, `pnpm check:test-types`); do not add `tsc --noEmit`, `typecheck`, `check:types`.
- Format/lint: `pnpm format:check`/`pnpm format`; `pnpm lint*` lanes.
- Formatting: use `oxfmt`, not Prettier. Prefer `pnpm format:check` / `pnpm format`; for targeted files use `pnpm exec oxfmt --check --threads=1 <files...>` or `pnpm exec oxfmt --write --threads=1 <files...>`.
- Linting: use repo wrappers (`pnpm lint:*`, `scripts/run-oxlint.mjs`); do not invoke generic JS formatters/lints unless a repo script uses them.
- Heavy checks: `OPENCLAW_LOCAL_CHECK=1`, mode `OPENCLAW_LOCAL_CHECK_MODE=throttled|full`; CI/shared use `OPENCLAW_LOCAL_CHECK=0`.
- Local first. Use repo `pnpm` lanes before Blacksmith/Testbox. Remote only for parity-only failures, secrets/services, or explicit ask.
- Blacksmith/Testbox is maintainer opt-in, not a repo-wide default. If Blacksmith access is available and `OPENCLAW_TESTBOX=1` is set, or a maintainer's personal AGENTS rules ask for it, use Testbox for broad, slow, Docker, live, E2E, full-suite, or CI-parity validation instead of running those heavy lanes locally. Use `OPENCLAW_LOCAL_CHECK_MODE=throttled|full` as the explicit local escape hatch.
- Testbox use: run from repo root, pre-warm early with `blacksmith testbox warmup ci-check-testbox.yml --ref main --idle-timeout 90`, reuse the returned `tbx_...` id for all `run`/`download` commands, and stop boxes you created before handoff. Timeout bins: `90` minutes default, `240` multi-hour, `720` all-day, `1440` overnight; anything above `1440` needs explicit approval and cleanup.
- Testbox full-suite profile: `blacksmith testbox run --id <ID> "env NODE_OPTIONS=--max-old-space-size=4096 OPENCLAW_TEST_PROJECTS_PARALLEL=6 OPENCLAW_VITEST_MAX_WORKERS=1 pnpm test"`. For installable package proof, prefer the GitHub `Package Acceptance` workflow over ad hoc Testbox commands.

## GitHub / CI

- Triage: list first, hydrate few. Use bounded `gh --json --jq`; avoid repeated full comment scans.
- Automatic PR/issue discovery: skip maintainer-owned items unless directly relevant. Do not comment, close, label, retitle, rebase, fix up, or land them without Peter asking.
- PR scan/triage: no unsolicited PR comments/reviews. Report in chat only unless explicitly asked, or a close/duplicate action needs a reason comment.
- Search/dedupe: prefer `gh search issues 'repo:openclaw/openclaw is:open <terms>' --json number,title,state,updatedAt --limit 20`.
- GitHub search boolean text is fussy. If `OR` queries return empty, split exact terms and search title/body/comments separately before concluding no hits.
- PR shortlist: `gh pr list ...`; then `gh pr view <n> --json number,title,body,closingIssuesReferences,files,statusCheckRollup,reviewDecision`.
@@ -117,6 +124,7 @@ Telegraph style. Root rules only. Read scoped `AGENTS.md` before subtree work.
## Tests

- Vitest. Colocated `*.test.ts`; e2e `*.e2e.test.ts`; example models `sonnet-4.6`, `gpt-5.4`.
- Avoid brittle tests that grep workflow/docs strings for operator policy. Prefer executable behavior, parsed config/schema checks, or live run proof; put release/CI policy reminders in AGENTS/docs instead.
- Clean timers/env/globals/mocks/sockets/temp dirs/module state; `--isolate=false` safe.
- Hot tests: avoid per-test `vi.resetModules()` + heavy imports. Measure with `pnpm test:perf:imports <file>` / `pnpm test:perf:hotspots --limit N`.
- Seam depth: pure helper/contract unit tests; one integration smoke per boundary.
@@ -124,6 +132,7 @@ Telegraph style. Root rules only. Read scoped `AGENTS.md` before subtree work.
- Prefer injection; if module mocking, mock narrow local `*.runtime.ts`, not broad barrels or `openclaw/plugin-sdk/*`.
- Share fixtures/builders; delete duplicate assertions; assert behavior that can regress here.
- Do not edit baseline/inventory/ignore/snapshot/expected-failure files to silence checks without explicit approval.
- Do not run multiple independent `pnpm test`/Vitest commands concurrently in the same worktree. They can race on `node_modules/.experimental-vitest-cache` and fail with `ENOTEMPTY`. Use one grouped `pnpm test ...` invocation, run targeted lanes sequentially, or set distinct `OPENCLAW_VITEST_FS_MODULE_CACHE_PATH` values when true parallel Vitest processes are needed.
- Test workers max 16. Memory pressure: `OPENCLAW_VITEST_MAX_WORKERS=1 pnpm test`.
- Live: `OPENCLAW_LIVE_TEST=1 pnpm test:live`; verbose `OPENCLAW_LIVE_TEST_QUIET=0`.
- Guide: `docs/help/testing.md`.
@@ -132,7 +141,7 @@ Telegraph style. Root rules only. Read scoped `AGENTS.md` before subtree work.

- Docs change with behavior/API. Use docs list/read_when hints; docs links per `docs/AGENTS.md`.
- Changelog user-facing only; pure test/internal usually no entry.
- Changelog placement: active version `### Changes`/`### Fixes`; every added entry must include at least one `Thanks @author` attribution, using credited GitHub username(s). Never add `Thanks @steipete`.
- Changelog placement: active version `### Changes`/`### Fixes`; every added entry must include at least one `Thanks @author` attribution, using credited GitHub username(s). Never add `Thanks @steipete` or `Thanks @codex`.
- Changelog bullets are always single-line. No wrapping/continuation across multiple lines. Long entries stay on one long line so dedupe, PR-ref, and credit-audit tooling work and so the visual style stays uniform.

## Git
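The AGENTS rule above warns that concurrent `pnpm test` runs race on the shared `node_modules/.experimental-vitest-cache` and names `OPENCLAW_VITEST_FS_MODULE_CACHE_PATH` as the escape hatch for true parallelism. A hedged sketch of that isolation idea (the env var name comes from the rule; the path-derivation helper and lane names are ours):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Derive a distinct module-cache path per lane so truly parallel Vitest
# processes never share node_modules/.experimental-vitest-cache and
# cannot fail each other with ENOTEMPTY.
cache_path_for_lane() {
  echo "/tmp/openclaw-vitest-cache-$1"
}

a="$(cache_path_for_lane core)"
b="$(cache_path_for_lane extensions)"
[[ "$a" != "$b" ]]
# Usage per the AGENTS.md guidance (lane filters are illustrative):
#   OPENCLAW_VITEST_FS_MODULE_CACHE_PATH="$a" pnpm test src/core &
#   OPENCLAW_VITEST_FS_MODULE_CACHE_PATH="$b" pnpm test extensions &
#   wait
echo ok
```

The simpler options the rule lists — one grouped `pnpm test ...` invocation, or sequential targeted lanes — avoid the race without any extra configuration.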
133
CHANGELOG.md
133
CHANGELOG.md
@@ -4,8 +4,127 @@ Docs: https://docs.openclaw.ai
|
||||
|
||||
## Unreleased
|
||||
|
||||
### Changes
|
||||
|
||||
- Control UI: polish the quick settings dashboard grid so common cards align across desktop, tablet, and mobile layouts without wasting horizontal space. Thanks @BunsDev.
|
||||
- Matrix/E2EE: add `openclaw matrix encryption setup` to enable Matrix encryption, bootstrap recovery, and print verification status from one setup flow. Thanks @gumadeiras.
|
||||
- Agents/compaction: add an opt-in `agents.defaults.compaction.maxActiveTranscriptBytes` preflight trigger that runs normal local compaction when the active JSONL grows too large, requiring transcript rotation so successful compaction moves future turns onto a smaller successor file instead of raw byte-splitting history. Thanks @vincentkoc.
|
||||
- CLI/migration: add `openclaw migrate` with plan, dry-run, JSON, pre-migration backup, onboarding detection, archive-only report copies, and a bundled Hermes importer for configuration, memory/plugin hints, model providers, MCP servers, skills, and supported credentials. Thanks @NousResearch.
|
||||
|
||||
### Fixes
|
||||
|
||||
- CLI/plugins: stop security-blocked plugin installs from retrying as hook packs, so normal plugin packages report the scanner failure without a misleading "not a valid hook pack" follow-up. Fixes #61175; supersedes #64102. Thanks @KonsultDigital and @ziyincody.
|
||||
- CLI/update: keep the automatic post-update completion refresh on the core-command tree so it no longer stages bundled plugin runtime deps before the Gateway restart path, avoiding `.24` update hangs and 1006 disconnect cascades. Fixes #72665. Thanks @sakalaboator and @He-Pin.
|
||||
- Agents/Bedrock: stop heartbeat runs from persisting blank user transcript turns and repair existing blank user text messages before replay, preventing AWS Bedrock `ContentBlock` blank-text validation failures. Fixes #72640 and #72622. Thanks @goldzulu.
|
||||
- Agents/LM Studio: promote standalone bracketed local-model tool requests into registered tool calls and hide unsupported bracket blocks from visible replies, so MemPalace MCP lookups do not print raw `[tool]` JSON scaffolding in chat. Fixes #66178. Thanks @detroit357.
|
||||
- LM Studio: trust configured LM Studio loopback, LAN, and tailnet endpoints for guarded model requests by default, preserving explicit private-network opt-outs. Refs #60994. Thanks @tnowakow.
|
||||
- Docker/setup: route Docker onboarding defaults for host-side LM Studio and Ollama through `host.docker.internal` and add the Linux host-gateway mapping to the bundled Compose file, so containerized gateways can reach local providers without using container loopback. Fixes #68684; supersedes #68702. Thanks @safrano9999 and @skolez.
|
||||
- Agents/LM Studio: strip prior-turn Gemma 4 reasoning from OpenAI-compatible replay while preserving active tool-call continuation reasoning. Fixes #68704. Thanks @chip-snomo and @Kailigithub.
|
||||
- LM Studio: allow interactive onboarding to leave the API key blank for unauthenticated local servers, using local synthetic auth while clearing stale LM Studio auth profiles. Fixes #66937. Thanks @olamedia.
|
||||
- Plugins/startup: use a `PluginLookUpTable` during Gateway startup so channel ownership, deferred channel loading, and startup plugin IDs reuse the same installed manifest registry instead of rebuilding manifest metadata on the boot path. Thanks @shakkernerd.
|
||||
- Plugins/startup: pass the Gateway `PluginLookUpTable` through plugin loading so auto-enable checks and startup-scope fallback reuse the same manifest registry instead of doing another manifest pass. Thanks @shakkernerd.
|
||||
- Plugins/startup: carry the Gateway `PluginLookUpTable` into deferred channel full-runtime reloads so post-listen startup does not rebuild manifest metadata after the provisional setup-runtime load. Thanks @shakkernerd.
|
||||
- Gateway/startup: extend `OPENCLAW_GATEWAY_STARTUP_TRACE=1` with per-phase event-loop delay plus plugin lookup-table timing and count metrics for installed-index, manifest, startup-plan, and owner-map work, and include the new timing fields in startup benchmark summaries. Thanks @shakkernerd.
|
||||
- Plugins/registry: resolve lookup-table owner maps for providers, CLI backends, setup providers, command aliases, model catalogs, channel configs, and manifest contracts while preserving setup-only CLI backend ownership. Thanks @shakkernerd.
|
||||
- Process/Windows: decode command stdout and stderr from raw bytes with console-codepage awareness, while preserving valid UTF-8 output and multibyte characters split across chunks. Fixes #50519. Thanks @iready, @kevinten10, @zhangyongjie1997, @knightplat-blip, @heiqishi666, and @slepybear.
|
||||
- Agents/bootstrap: dedupe hook-injected bootstrap context files by workspace-relative path and store normalized resolved paths so duplicate relative and absolute hook paths no longer depend on the process cwd. (#59344; fixes #59319; related #56721, #56725, and #57587) Thanks @koen666.
|
||||
- Agents/bootstrap: refresh cached workspace bootstrap snapshots on long-lived main-session turns when `AGENTS.md`, `SOUL.md`, `MEMORY.md`, or `TOOLS.md` change on disk, while preserving unchanged snapshot identity through the workspace file cache. (#64871; related #43901, #26497, #28594, #30896) Thanks @aimqwest and @mikejuyoon.
- macOS Gateway: detect installed-but-unloaded LaunchAgent split-brain states during status, doctor, and restart, and re-bootstrap launchd supervision before falling back to unmanaged listener restarts. Fixes #67335, #53475, and #71060; refs #58890, #60885, and #70801. Thanks @ze1tgeist88, @dafacto, and @vishutdhar.
- Plugins/install: treat mirrored core logger dependencies as staged bundled runtime deps so packaged Gateway starts do not crash when the external plugin-runtime-deps root is missing `tslog`. Fixes #72228; supersedes #72493. Thanks @deepujain.
- Build/plugins: preserve active bundled runtime-dependency staging temp directories owned by live build processes so overlapping postbuild runs no longer delete each other's staged deps mid-prune. Supersedes #72220. Thanks @VACInc.
- Plugins/install: hide bundled runtime-dependency npm child windows on Windows across Gateway startup, postinstall, and packaged staging paths so Telegram/Anthropic dependency repair no longer flashes shell windows. Fixes #72315. Thanks @athuljayaram and @joshfeng.
- Plugins/Windows: normalize lazy plugin service override imports before Node ESM loading so drive-letter browser-control module paths no longer fail with `ERR_UNSUPPORTED_ESM_URL_SCHEME`. Fixes #72573; supersedes #72599 and #72582. Thanks @llzzww316, @feineryonah-byte, and @WuKongAI-CMU.
- Browser/plugins: load `playwright-core` through the browser runtime shim so packaged installs can run Playwright actions from staged plugin runtime deps after doctor/startup repair. Fixes #72168; supersedes #72238. Thanks @zdg1110 and @yetval.
- Plugins/install: stage bundled plugin runtime dependencies before Gateway startup, drain update restarts, and materialize plugin-owned root chunks in external mirrors so staged deps resolve under native ESM. Fixes #72058; supersedes #72084. Thanks @amnesia106 and @drvoss.
- TTS/SecretRef: resolve `messages.tts.providers.*.apiKey` from the active runtime snapshot so SecretRef-backed MiniMax and other TTS provider keys work in runtime reply/audio paths. Fixes #68690. Thanks @joshavant.
- Gateway/install: surface systemd user-bus recovery hints during Linux service activation and retry via the machine user scope when `systemctl --user` reports no-medium bus failures. Fixes #39673; refs #44417 and #63561. Thanks @Arbor4, @myrsu, and @mssteuer.
- CLI/startup: read generated startup metadata from the bundled `dist` layout before falling back to live help rendering, so root/browser help and channel-option bootstrap stay on the fast path. Thanks @vincentkoc.
- CLI/help: treat positional `help` invocations like `openclaw channels help` as help paths for startup gating, avoiding model/auth warmup while preserving positional arguments such as `openclaw docs help`. Thanks @gumadeiras.
- Web search: route plugin-scoped web_search SecretRefs through the active runtime config snapshot so provider execution receives resolved credentials across app/runtime paths, including `plugins.entries.brave.config.webSearch.apiKey`. Fixes #68690. Thanks @VACInc.
- Voice Call: allow SecretRef-backed Twilio auth tokens and call-specific OpenAI/ElevenLabs TTS API keys through the plugin config surface. Fixes #68690. Thanks @joshavant.
- Google Meet: clean stale chrome-node realtime audio bridges by URL before rejoining, expose active node bridge inspection, and tolerate transient node input pull failures instead of dropping the Meet session. Fixes #72371. (#72372) Thanks @BsnizND.
- Google Meet: clear queued Gemini Live playback when realtime interruptions arrive, restart Chrome command-pair audio output after clears, and expose Google Live interruption/VAD config knobs for Meet and Voice Call realtime bridges. Fixes #72523. (#72524) Thanks @BsnizND.
- Google Meet: add `realtime.agentId` so live meeting consults can target a named OpenClaw agent instead of always using `main`. (#72381) Thanks @BsnizND.
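
  A minimal sketch of the new knob. Only the `realtime.agentId` key comes from this entry; the surrounding plugin-config nesting is an assumption for illustration:

  ```json5
  // Illustrative only: realtime.agentId is the documented key; the
  // "google-meet" plugin-entry nesting around it is assumed, not confirmed.
  {
    plugins: {
      entries: {
        "google-meet": {
          config: {
            realtime: {
              agentId: "research", // consult a named agent instead of "main"
            },
          },
        },
      },
    },
  }
  ```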
- Matrix/E2EE: stabilize recovery and broken-device QA flows while avoiding Matrix device-cleanup sync races that could leave shutdown-time crypto work running. Thanks @gumadeiras.
- Cron: treat isolated run-level agent failures as job errors even when no reply payload is produced, synthesizing a safe error payload so model/provider failures increment error counters and trigger failure notifications instead of clearing as successful. Fixes #43604; carries forward #43631. Thanks @SPFAdvisors.
- Cron: preserve exact `NO_REPLY` tool results from isolated jobs with empty final assistant turns as quiet successes instead of surfacing incomplete-turn errors. Fixes #68452; carries forward #68453. Thanks @anyech.
- Cron: resolve failure alerts and failure-destination announcements against `session:<id>` targets before falling back to the creator session, so jobs created from group chats can notify the targeted direct session without cross-account routing errors. Refs #62777; carries forward #68535. Thanks @slideshow-dingo and @likewen-tech.
- Discord: preserve explicit `user:` and `channel:` delivery targets through plugin routing so cron announcements and failure alerts keep their intended recipient kind. Refs #62777; carries forward #62798. Thanks @neeravmakwana.
- Cron: add `failureAlert.includeSkipped` and `openclaw cron edit --failure-alert-include-skipped` so persistently skipped jobs can alert without counting skips as execution errors or affecting retry backoff. Fixes #60846. Thanks @slideshow-dingo.
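
  A hedged sketch of the job-side setting. The `failureAlert.includeSkipped` key is documented above; its placement on a per-job entry in `jobs.json` is an assumption:

  ```json5
  // Illustrative only: key name from the entry above, nesting assumed.
  {
    failureAlert: {
      includeSkipped: true, // alert on persistently skipped runs
    },
  }
  ```

  The same toggle is exposed on the CLI via `openclaw cron edit --failure-alert-include-skipped`.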
- Cron: invalidate stale pending runtime slots after live or offline `jobs.json` schedule edits, while preserving due slots for formatting-only rewrites. Fixes #27996 and #71607; carries forward #71651. Thanks @xialonglee and @fagnersouza666.
- Cron: keep legacy flat `jobs.json` rows loadable while comparing split-state schedule identities, so old cron stores do not crash before in-memory hydration can normalize them. Thanks @codex.
- Cron: start isolated agent-turn execution timeouts after the runner enters its effective execution lane, so queued cron/manual runs no longer spend their whole timeout budget before useful work begins. Fixes #41783. Thanks @ayanesakura and @Hurray0.
- Cron/Telegram: preserve direct-chat thread IDs and optional account IDs when inferring reminder delivery from Telegram direct-thread session keys. Fixes #44270; carries forward #44325, #44351, #44412, and #72657. Thanks @RunMintOn, @arkyu2077, @0xsline, and @vincentkoc.
- Cron: omit synthetic `delivery.resolved` errors from `--no-deliver` run records while preserving explicit no-deliver target traces for agent-initiated messages. Fixes #72210; carries forward #72219. Thanks @hatemclawbot-collab and @xydigit-sj.
- Cron: classify isolated runs as errors from structured embedded-run execution-denial metadata, with final-output marker fallback for `SYSTEM_RUN_DENIED`, `INVALID_REQUEST`, and approval-binding refusals, so blocked commands no longer appear green in cron history. Fixes #67172; carries forward #67186. Thanks @oc-gh-dr, @hclsys, and @1yihui.
- Subagents: keep the delegated task only in the subagent system prompt and send a short initial kickoff message, avoiding duplicate task tokens while preserving multiline task formatting. Fixes #72019; carries forward #72053. Thanks @Wizongod and @ly85206559.
- Onboarding/GitHub Copilot: add manifest-owned `--github-copilot-token` support for non-interactive setup, including env fallback, tokenRef storage in ref mode, saved-profile reuse, and current Copilot default-model wiring. Refs #50002 and supersedes #50003. Thanks @scottgl9.
- Gateway/install: add a validated `--wrapper`/`OPENCLAW_WRAPPER` service install path that persists executable LaunchAgent/systemd wrappers across forced reinstalls, updates, and doctor repairs instead of falling back to raw node/bun `ProgramArguments`. Fixes #69400. (#72445) Thanks @willtmc.
- Plugins: fail plugin registration when loader-owned acceptance gates reject missing hook names or memory-only capability registration from non-memory plugins, surfacing the issue through plugin status and doctor instead of silently dropping the registration. Fixes #72459. Thanks @1fanwang and @amknight.
- macOS Gateway: write launchd services with a state-dir `WorkingDirectory`, use a durable state-dir temp path instead of freezing macOS session `TMPDIR`, create that temp directory before bootstrap, and label abort-shaped launchd exits as `SIGABRT/abort` in status output. Fixes #53679 and #70223; refs #71848. Thanks @dlturock, @stammi922, and @palladius.
- Exec approvals: accept runtime-owned `source: "allow-always"` and `commandText` allowlist metadata in gateway and node approval-set payloads so Control UI round-trips no longer fail with `unexpected property 'source'`. Fixes #60000; carries forward #60064. Thanks @sd1471123, @sharkqwy, and @luoyanglang.
- Exec/node: skip approval-plan preparation for full-trust `host=node` runs so interpreter and script commands no longer fail with `SYSTEM_RUN_DENIED: approval cannot safely bind` when effective policy is `security=full` and `ask=off`. Fixes #48457 and duplicate #69251. Thanks @ajtran303, @jaserNo1, @Blakeshannon, @lesliefag, and @AvIsBeastMC.
- Exec/node: synthesize a local approval plan when a paired node advertises `system.run` without `system.run.prepare`, unblocking approval-required `host=node` exec on current macOS companion nodes while preserving remote prepare for node hosts that support it. Fixes #37591 and duplicate #66839; carries forward #69725. Thanks @soloclz.
- Memory/QMD: prefer QMD's `--mask` collection pattern flag so root memory indexing stays scoped to `MEMORY.md` instead of widening to every markdown file in the workspace. Thanks @codex.
- Memory/doctor: treat the specific `gateway timeout after ...` gateway memory probe result as inconclusive instead of reporting embeddings not ready, while preserving warnings for explicit failures. Fixes #44426; carries forward #46576 with the Greptile review feedback applied. Thanks Cengiz (@ghost).
- Gateway/memory: defer QMD startup for implicit non-default agents and scope memory runtime loading to the selected memory slot so Gateway boot and first memory recall avoid broad plugin runtime fanout. Thanks @vincentkoc.
- Gateway/startup: keep core request handlers, setup wizard, and channel runtime helpers off the boot path until the first matching request, wizard run, or channel start, reducing no-plugin Gateway ready RSS and avoidable startup imports. Thanks @vincentkoc.
- Gateway/startup: keep CLI outbound channel send dependencies as lazy request-time senders so Gateway boot no longer imports channel plugin registration just to construct default deps. Thanks @vincentkoc.
- Gateway/startup: split lightweight HTTP auth helpers away from model-override helpers so Gateway bind no longer imports model catalog selection while wiring base HTTP routes. Thanks @vincentkoc.
- Gateway/startup: lazy-load plugin HTTP route dispatch when active plugin routes exist so no-plugin Gateway boot skips plugin route runtime scope setup. Thanks @vincentkoc.
- Gateway/startup: move chat run/subscriber registries onto a lightweight state module and defer chat/session event projection until the first event so Gateway boot skips session IO imports. Thanks @vincentkoc.
- Gateway/startup: keep node session runtime on a lightweight JSON parser instead of importing gateway method validation helpers during boot. Thanks @vincentkoc.
- Gateway/startup: read embedded-run activity from a lightweight shared state module so restart deferral no longer imports the embedded runner during Gateway boot. Thanks @vincentkoc.
- Gateway/startup: defer MCP loopback server imports until Gateway shutdown so normal boot no longer loads the loopback HTTP/tool schema stack just to register close handlers. Thanks @vincentkoc.
- Gateway/startup: resolve channel runtime helpers asynchronously only when an enabled/configured channel starts, so no-channel Gateway boot skips auto-reply, media, pairing, and outbound channel helper imports. Thanks @vincentkoc.
- Gateway/startup: lazy-load HTTP auth, canvas auth, and plugin route scope helpers from their request paths so Gateway bind no longer pays those utility graphs during boot. Thanks @vincentkoc.
- Gateway/startup: defer isolated cron runner imports until `/hooks/agent` dispatch so Gateway boot skips the agent-turn runtime on installs that only need normal HTTP bind. Thanks @vincentkoc.
- CLI/Gateway: use a parse-only config snapshot for plain `gateway status` reads and reuse same-path service config context so status no longer spends tens of seconds in full config validation before printing. Thanks @vincentkoc.
- Lobster/Gateway: memoize repeated Ajv schema compilation before loading the embedded Lobster runtime so scheduled workflows and `llm.invoke` loops stop growing gateway heap on content-identical schemas. Fixes #71148. Thanks @cmi525, @vsolaz, and @vincentkoc.
- Codex harness: normalize cached input tokens before session/context accounting so prompt cache reads are not double-counted in `/status`, `session_status`, or persisted `sessionEntry.totalTokens`. Fixes #69298. Thanks @richardmqq.
- Hooks/session-memory: use the host local timezone for memory filenames, fallback timestamp slugs, and markdown headers instead of UTC dates. Fixes #46703. (#46721) Thanks @Astro-Han.
- Feishu: extract quoted/replied interactive-card text across schema 1.0, schema 2.0, i18n, template-variable, and post-format fallback shapes without carrying broad generated/config churn from related parser experiments. (#38776, #60383, #42218, #45936) Thanks @lishuaigit, @lskun, @just2gooo, and @Br1an67.
- Telegram/agents: hide raw failed write/edit warning messages in Telegram when the assistant already explicitly acknowledges the failed action, while keeping warnings when the reply claims success or omits the failure; #39406 remains the broader configurable delivery-policy follow-up. Fixes #51065; covers #39631. Thanks @Bartok9 and @Bortlesboat.
- Exec approvals: accept a symlinked `OPENCLAW_HOME` as the trusted approvals root while still rejecting symlinked `.openclaw` path components below it. (#64663) Thanks @FunJim.
- Logging: add top-level `hostname`, flattened `message`, and available `agent_id`, `session_id`, and `channel` fields to file-log JSONL records for multi-agent filtering without removing existing structured log arguments. Fixes #51075. Thanks @stevengonsalvez.
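
  A sketch of what a file-log JSONL record might look like with the new fields. Field names come from the entry above; the values are hypothetical:

  ```json5
  // Hypothetical record values; only the field names are documented above.
  {
    hostname: "gateway-host",
    message: "inbound message routed",
    agent_id: "main",
    session_id: "telegram:123",
    channel: "telegram",
  }
  ```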
- ACP: route server logs to stderr before Gateway config/bootstrap work so ACP stdout remains JSON-RPC only for IDE integrations. Fixes #49060. Thanks @Hollychou924.
- Logging: propagate internal request trace scopes through Gateway HTTP requests and WebSocket frames so file logs, diagnostic events, agent run traces, model-call traces, OTEL spans, and trusted provider `traceparent` headers share a correlatable `traceId` without logging raw request or model content. Fixes #40353. Thanks @liangruochong44-ui.
- Diagnostics/OTEL: capture privacy-safe model-call request payload bytes, streamed response bytes, first-response latency, and total duration in diagnostic events, plugin hooks, stability snapshots, and OTEL model-call spans/metrics without logging raw model content. Fixes #33832. Thanks @wwh830.
- Logging: write validated diagnostic trace context as top-level `traceId`, `spanId`, `parentSpanId`, and `traceFlags` fields in file-log JSONL records so traced requests and model calls are easier to correlate in log processors. Refs #40353. Thanks @liangruochong44-ui.
- Logging/sessions: apply configured redaction patterns to persisted session transcript text and accept escaped character classes in safe custom redaction regexes, so transcript JSONL no longer keeps matching sensitive text in the clear. Fixes #42982. Thanks @panpan0000.
- Agents/sessions: let `sessions_spawn runtime="subagent"` ignore ACP-only `streamTo` and `resumeSessionId` fields while keeping ACP passthrough and documenting `streamTo` as ACP-only. Fixes #43556 and #63120; covers #56326, #61724, #64714, and #67248. Thanks @skernelx, @damselem, @Br1an67, @Mintalix, @IsaacAPerez, @vvitovec, and @Sanjays2402.
- Providers/Ollama: honor `/api/show` capabilities when registering local models so non-tool Ollama models no longer receive the agent tool surface, and keep native Ollama thinking opt-in instead of enabling it by default. Fixes #64710 and duplicate #65343. Thanks @yuan-b, @netherby, @xilopaint, and @Diyforfun2026.
- Image tool/media: honor `tools.media.image.timeoutSeconds` and matching per-model image timeouts in explicit image analysis, including the MiniMax VLM fallback path, so slow local vision models are not capped by hardcoded 30s/60s aborts. Fixes #67889; supersedes #67929. Thanks @AllenT22 and @alchip.
- Providers/Ollama: read larger custom Modelfile `PARAMETER num_ctx` values from `/api/show` so auto-discovered Ollama models with expanded context no longer stay pinned to the base model context. Fixes #68344. Thanks @neeravmakwana.
- Providers/Ollama: honor configured model `params.num_ctx` in native and OpenAI-compatible Ollama requests so local models can cap runtime context without rebuilding Modelfiles. Fixes #44550 and #52206; supersedes #69464. Thanks @taitruong, @armi0024, and @LokiCode404.
- Providers/Ollama: stop forcing native Ollama requests to use the full configured `contextWindow` as `options.num_ctx` unless `params.num_ctx` is explicit, so local models can keep Ollama's VRAM/env default instead of looking hung on first turns. Fixes #49684 and #68662. Thanks @zhouZcong and @dshenster-byte.
- Providers/Ollama: forward whitelisted native Ollama model params such as `temperature`, `top_p`, and top-level `think` so users can disable API-level thinking or tune local models from config without proxy shims. Fixes #48010. Thanks @tangzhi, @pandego, @maweibin, @Adam-Researchh, and @EmpireCreator.
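
  Taken together with the `num_ctx` entries above, the forwarded params might be configured like this. The parameter names come from these entries; the exact provider/model nesting is an assumption for illustration:

  ```json5
  // Illustrative only: param names are documented above; the
  // models.providers.ollama nesting and model id are assumed.
  {
    models: {
      providers: {
        ollama: {
          models: [
            {
              id: "llama3.2:latest",
              params: {
                num_ctx: 16384,   // cap runtime context without rebuilding a Modelfile
                temperature: 0.7, // forwarded native Ollama sampling param
                top_p: 0.9,
                think: false,     // disable API-level thinking
              },
            },
          ],
        },
      },
    },
  }
  ```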
- Providers/Ollama: expose native Ollama thinking effort levels so `/think max` is accepted for reasoning-capable Ollama models and maps to Ollama's highest supported `think` effort. Fixes #71584. Thanks @g0st1n.
- Providers/Ollama: strip the active custom Ollama provider prefix before native chat and embedding requests, so custom provider ids like `ollama-spark/qwen3:32b` reach Ollama as the real model name. Fixes #72353. Thanks @maximus-dss and @hclsys.
- Providers/Ollama: parse stringified native tool-call arguments before dispatch, preserving unsafe integer values so Ollama tool use receives structured parameters. Fixes #69735; supersedes #69910. Thanks @rongshuzhao and @yfge.
- Providers/Ollama: skip ambient localhost discovery unless Ollama auth or meaningful config opts in, preventing unexpected probes to `127.0.0.1:11434` for users who are not using Ollama. Fixes #56939; supersedes #57116. Thanks @IanxDev and @tsukhani.
- Providers/Ollama: skip implicit localhost discovery when a custom remote `api: "ollama"` provider is configured, while still treating `127/8` loopback hosts as local. Carries forward #43224. Thanks @issacthekaylon.
- Providers/models: honor provider-level `contextWindow`, `contextTokens`, and `maxTokens` as defaults when resolving discovered models, so local Ollama and other self-hosted providers can cap all models without repeating per-model entries. Fixes #44786; carries forward #44955. Thanks @voltwake and @maweibin.
- Providers/Ollama: move memory embeddings to Ollama's current `/api/embed` endpoint with batched `input` requests while preserving vector normalization and custom provider auth/header overrides. Fixes #39983. Thanks @sskkcc and @LiudengZhang.
- Providers/Ollama: route local web search through Ollama's signed `/api/experimental/web_search` daemon proxy, use hosted `/api/web_search` directly for `ollama.com`, and keep `OLLAMA_API_KEY` scoped to cloud fallback auth. Fixes #69132. Thanks @yoon1012 and @hyspacex.
- Providers/Ollama: accept OpenAI SDK-style `baseURL` as an alias for `baseUrl` across discovery, streaming, setup pulls, embeddings, and web search so remote Ollama hosts are not silently ignored. Fixes #62533; supersedes #62549. Thanks @Julien-BKK and @Linux2010.
- Providers/Ollama: scope synthetic local auth and embedding bearer headers to declared Ollama host boundaries so cloud keys are not sent to local/self-hosted embedding endpoints and remote/cloud Ollama endpoints no longer receive the `ollama-local` marker as if it were a real token. Supersedes #69261 and #69857; refs #43945. Thanks @hyspacex, @maxramsay, and @Meli73.
- Providers/Ollama: resolve custom-named local Ollama providers such as `ollama-remote` through the Ollama synthetic-auth hook so subagents no longer miss `ollama-local` auth and silently fall back to cloud models. Fixes #43945. Thanks @Meli73 and @maxramsay.
- Providers/Ollama: add provider-scoped model request timeouts, thread them through guarded fetch connect/header/body/abort handling, and document `params.keep_alive` for cold local models so first-turn Ollama loads no longer require global agent timeout changes. Fixes #64541 and #68796; supersedes #65143 and #66511. Thanks @LittleJakub, @Juankcba, @uninhibite-scholar, and @yfge.
- Providers/Ollama: preserve explicit configured model input modalities when merging discovered provider metadata so custom vision models keep image support instead of silently dropping attachments. Fixes #39690; carries forward #39785. Thanks @Skrblik and @Mriris.
- Providers/Ollama: estimate native Ollama transcript usage when `/api/chat` omits prompt/eval counters while preserving exact zero counters, keeping local model runs visible in usage surfaces. Carries forward #39112. Thanks @TylonHH.
- Agents/Ollama: retry native Ollama turns that finish without user-visible text, including unsigned thinking-only responses, so constrained reasoning turns can continue instead of surfacing an empty reply. Carries forward #66552 and #61223. Thanks @yfge and @L3G.
- Docs/Ollama: expand setup recipes for local, LAN, cloud, multi-host, web search, embeddings, thinking control, and large-context troubleshooting. Thanks @codex.
- Providers/PDF/Ollama: add bounded network timeouts for Ollama model pulls and native Anthropic/Gemini PDF analysis requests so unresponsive provider endpoints no longer hang sessions indefinitely. Fixes #54142; supersedes #54144 and #54145. Thanks @jinduwang1001-max and @arkyu2077.
- LLM Task/Ollama: accept model overrides that already include the selected provider prefix, avoiding doubled ids such as `ollama/ollama/llama3.2:latest`, and live-verify local Ollama JSON tasks return parsed output. Fixes #50052. Thanks @ralphy-maplebots and @Hollychou924.
- Memory/doctor: treat Ollama memory embeddings as key-optional so `openclaw doctor` no longer warns about a missing API key when the gateway reports embeddings are ready. Fixes #46584. Thanks @fengly78.
- Agents/Ollama: apply provider-owned replay turn normalization to native Ollama chat so Cloud models no longer reject non-alternating replay history in agent/Gateway runs. Fixes #71697. Thanks @ismael-81.
- Control UI/Ollama: show the resolved configured thinking default in chat and session thinking dropdowns so inherited `adaptive`/per-model thinking config no longer appears as `Default (off)` or a generic inherit value. Fixes #72407. Thanks @NotecAG.
- Agents/Ollama: validate explicit `--thinking max` against catalog-discovered Ollama reasoning metadata so local agent runs accept the same native thinking levels shown in the model catalog. Fixes #71584. Thanks @g0st1n.
- CLI/models: include explicitly configured provider models in `openclaw models list --provider <id>` without requiring the full catalog path, so configured Ollama models are visible. Fixes #65207. Thanks @drzeast-png.
- Docker/QA: add observability coverage to the normal Docker aggregate so QA-lab OTEL and Prometheus diagnostics run inside Docker. Thanks @vincentkoc.
- Auto-reply: poison inbound message dedupe after replay-unsafe provider/runtime failures so retries stay safe before visible progress but cannot duplicate messages after block output, tool side effects, or session progress. Fixes #69303; keeps #58549 and #64606 as duplicate validation. Thanks @martingarramon, @NikolaFC, and @zeroth-blip.
- Agents/model fallback: jump directly to a known later live-session model redirect instead of walking unrelated fallback candidates, while preserving the already-landed live-session/fallback loop guard. Fixes #57471; related loop family already closed via #58496. Thanks @yuxiaoyang2007-prog.
- Gateway/Bonjour: keep @homebridge/ciao cancellation handlers registered across advertiser restarts so late probing cancellations cannot crash Linux and other mDNS-churned gateways. Thanks @codex.
- Plugins/startup: load the default `memory-core` slot during Gateway startup when permitted so active-memory recall can call `memory_search` and `memory_get` without requiring an explicit `plugins.slots.memory` entry, while preserving `plugins.slots.memory: "none"`. Thanks @codex.
- Plugins/CLI: prefer native require for compiled bundled plugin JavaScript before jiti so read-only config, status, device, and node commands avoid unnecessary transform overhead on slow hosts. Fixes #62842. Thanks @Effet.
@@ -14,8 +133,20 @@ Docs: https://docs.openclaw.ai
- Plugins/CLI: refresh the persisted registry after managed plugin files are removed so ClawHub uninstall cannot leave stale `plugins list` entries. Thanks @codex.
- Plugins/CLI: make plugin install and uninstall config writes conflict-aware, clear stale denylist entries on explicit reinstall/removal, and delete managed plugin files only after config/index commit succeeds. Thanks @codex.
- Plugins: fail `plugins update` when tracked plugin or hook updates error, keep bundled runtime-dependency repair behind restrictive allowlists, and reject package installs with unloadable extension entries. Thanks @codex.
- WebChat/Control UI: support non-video file attachments in chat uploads while preserving the existing image attachment path and MIME-sniff fallback for generic image uploads. (#70947) Thanks @IAMSamuelRodda.
- Skills/memory: restore Chokidar v5 hot reloads by watching concrete skill and memory roots with filters, including SKILL.md removals and deleted skill folders without broad workspace recursion. Fixes #27404, #33585, and #41606. Thanks @shelvenzhou, @08820048, and @rocke2020.
- Gateway/chat: keep duplicate attachment-backed `chat.send` retries with the same idempotency key on the documented in-flight path so aborts still target the real active run. Fixes #70139. Thanks @Feelw00.
- Gateway/session rows: report the same config-resolved thinking default that runtime sessions use, including global and per-agent defaults, so Control UI and TUI default labels stay aligned. (#71779, #70981, #71033, #70302) Thanks @chen-zhang-cs-code, @SymbolStar, and @cholaolu-boop.
- Plugins: share package entrypoint resolution between install and discovery, reject mismatched `runtimeExtensions`, and cache bundled runtime-dependency manifest reads during scans. Thanks @codex.
- WhatsApp/Web: keep quiet but healthy linked-device sessions connected by basing the watchdog on WhatsApp Web transport activity, while retaining a longer app-silence cap so frame activity cannot mask a stuck session forever. Fixes #70678; carries forward the focused #71466 approach and keeps #63939 as related configurable-timeout follow-up. Thanks @vincentkoc and @oromeis.
- Discord/gateway: count failed health-monitor restart attempts toward cooldown and hourly caps, and evict stale account lifecycle state during channel reloads so repeated Discord gateway recovery cannot loop on old status. Fixes #38596. (#40413) Thanks @jellyAI-dev and @vashquez.
- Cron/context engine: run isolated cron jobs under run-scoped context-engine session keys so prior runs of the same job are not inherited unless the job is explicitly session-bound. (#72292) Thanks @jalehman.
- Control UI: localize command palette labels, categories, skill shortcuts, footer hints, and connect-command copy labels while preserving localized command palette search matching. (#61130, #61119) Thanks @rubensfox20.
- Plugins/memory-lancedb: request float embedding responses from OpenAI-compatible servers so local providers that default SDK requests to base64 no longer return dimension-mismatched LanceDB vectors while preserving configured dimensions. Fixes #45982. (#59048, #46069, #45986) Thanks @deep-introspection, @xiaokhkh, @caicongyang, and @thiswind.
- Plugins/memory-lancedb: advance auto-capture cursors per session only after messages are processed or intentionally skipped, retry failed messages, survive compacted histories, and clear cursor state on session end. Fixes #71349; carries forward #42083. Thanks @as775116191.
- Plugins/memory-core: respect configured memory-search embedding concurrency during non-batch indexing so local Ollama embedding backends can serialize indexing instead of flooding the server. Fixes #66822. (#66931) Thanks @oliviareid-svg and @LyraInTheFlesh.
- Docker/update smoke: keep the package-derived update-channel fixture on package-shipped files and make its UI build stub create the asset the updater verifies. Thanks @vincentkoc.
- Gateway/models: repair legacy `models.providers.*.api = "openai"` config values to `openai-completions`, and skip providers with future stale API enum values during startup instead of bricking the gateway. Fixes #72477. (#72542) Thanks @JooyoungChoi14 and @obviyus.

## 2026.4.26

@@ -28,6 +159,7 @@ Docs: https://docs.openclaw.ai
- Onboarding/models: keep skip-auth and provider-scoped model picker prompts off the full global model catalog path, and cache provider catalog hook resolution so setup no longer stalls after auth on large plugin registries. Thanks @shakkernerd.
- Gateway/Bonjour: suppress known @homebridge/ciao cancellation and network assertion failures through scoped process handlers so malformed mDNS packets or restricted VPS networking disable/restart Bonjour instead of crashing the gateway. Fixes #67578. Thanks @zenassist26-create.
- Discord: keep late clicks on already-resolved exec approval buttons quiet when elevated mode auto-resolved the request, while still surfacing real approval submission failures. Fixes #66906. Thanks @rlerikse.
- Telegram: send a fresh final message for long-lived preview-streamed replies so the visible Telegram timestamp reflects completion time instead of the preview creation time. Thanks @rubencu.

## 2026.4.25

@@ -348,6 +480,7 @@ Docs: https://docs.openclaw.ai
- CLI/models: make `openclaw models scan` fall back to public OpenRouter free-model metadata when no `OPENROUTER_API_KEY` is configured, avoid config secret resolution for explicit `--no-probe` scans, and apply the scan timeout to the OpenRouter catalog request.
- Feishu: keep streaming cards to one live card per turn, flush throttled card edits after meaningful text boundaries, and skip exact block/partial repeats so tool-heavy replies do not duplicate card output. Thanks @allan0509.
- Feishu: finish the streaming-card duplicate closeout by stripping leaked reasoning tags, preserving cross-block partial snapshots, enabling topic-thread streaming cards, omitting the generic `main` card header, surfacing transient tool/compaction status, and cleaning streaming state after close failures. Thanks @sesame437, @Vicky-v7, @maoku-family, @Pengxiao-Wang, and @Maple778.
|
||||
- Telegram: keep final-only answers on the normal final-send path instead of creating synthetic draft previews, while preserving real partial preview finalization. Credited from #39213. Thanks @chalawbot.
|
||||
- Telegram: recover incomplete partial-stream previews by falling back to a final send when an ambiguous final edit failure would otherwise retain a strict prefix of the answer. Fixes #71525. (#71554) Thanks @sahilsatralkar.
|
||||
- Control UI/chat: collapse assistant token/model context details behind an explicit Context disclosure and show full dates in message footers, making historical transcript timing clear without noisy default metadata. (#71337) Thanks @BunsDev.
|
||||
- OpenAI/Codex OAuth: explain `unsupported_country_region_territory` token-exchange failures with a proxy/region hint instead of surfacing a generic OAuth error. Fixes #51175. (#71501) Thanks @vincentkoc and @wulala-xjj.
|
||||
|
||||
52
Dockerfile
52
Dockerfile
@@ -9,22 +9,19 @@
# bundled plugin workspace tree, so the main build layer is not invalidated by
# unrelated plugin source changes.
#
# Two runtime variants:
# Default (bookworm): docker build .
# Slim (bookworm-slim): docker build --build-arg OPENCLAW_VARIANT=slim .
# Build stages use full bookworm; the runtime image is always bookworm-slim.
ARG OPENCLAW_EXTENSIONS=""
ARG OPENCLAW_VARIANT=default
ARG OPENCLAW_BUNDLED_PLUGIN_DIR=extensions
ARG OPENCLAW_DOCKER_APT_UPGRADE=1
ARG OPENCLAW_NODE_BOOKWORM_IMAGE="node:24-bookworm@sha256:3a09aa6354567619221ef6c45a5051b671f953f0a1924d1f819ffb236e520e6b"
ARG OPENCLAW_NODE_BOOKWORM_DIGEST="sha256:3a09aa6354567619221ef6c45a5051b671f953f0a1924d1f819ffb236e520e6b"
ARG OPENCLAW_NODE_BOOKWORM_SLIM_IMAGE="node:24-bookworm-slim@sha256:e8e2e91b1378f83c5b2dd15f0247f34110e2fe895f6ca7719dbb780f929368eb"
ARG OPENCLAW_NODE_BOOKWORM_SLIM_DIGEST="sha256:e8e2e91b1378f83c5b2dd15f0247f34110e2fe895f6ca7719dbb780f929368eb"

# Base images are pinned to SHA256 digests for reproducible builds.
# Trade-off: digests must be updated manually when upstream tags move.
# To update, run: docker buildx imagetools inspect node:24-bookworm (or podman)
# and replace the digest below with the current multi-arch manifest list entry.
# Dependabot refreshes these blessed digests; release builds consume the
# reviewed base snapshot instead of mutating distro state on every build.
# To update, run: docker buildx imagetools inspect node:24-bookworm and
# node:24-bookworm-slim (or podman) and replace the digests below with the
# current multi-arch manifest list entries.

FROM ${OPENCLAW_NODE_BOOKWORM_IMAGE} AS ext-deps
ARG OPENCLAW_EXTENSIONS
@@ -75,10 +72,20 @@ RUN --mount=type=cache,id=openclaw-pnpm-store,target=/root/.local/share/pnpm/sto
    NODE_OPTIONS=--max-old-space-size=2048 pnpm install --frozen-lockfile

# pnpm v10+ may append peer-resolution hashes to virtual-store folder names; do not hardcode `.pnpm/...`
# paths. Fail fast here if the Matrix native binding did not materialize after install.
RUN echo "==> Verifying critical native addons..." && \
# paths. Matrix's native downloader can hit transient release CDN errors while
# still exiting successfully, so retry the package downloader before failing.
RUN set -eux; \
    echo "==> Verifying critical native addons..."; \
    for attempt in 1 2 3 4 5; do \
      if find /app/node_modules -name "matrix-sdk-crypto*.node" 2>/dev/null | grep -q .; then \
        exit 0; \
      fi; \
      echo "matrix-sdk-crypto native addon missing; retrying download (${attempt}/5)"; \
      node /app/node_modules/@matrix-org/matrix-sdk-crypto-nodejs/download-lib.js || true; \
      sleep $((attempt * 2)); \
    done; \
    find /app/node_modules -name "matrix-sdk-crypto*.node" 2>/dev/null | grep -q . || \
    (echo "ERROR: matrix-sdk-crypto native addon missing (pnpm install may have silently failed on this arch)" >&2 && exit 1)
    (echo "ERROR: matrix-sdk-crypto native addon missing after retries" >&2 && exit 1)

COPY . .

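The retry-then-fail shape in that hunk generalizes beyond the Matrix addon: attempt, back off, and only fail once every attempt is exhausted. A minimal sketch of the same pattern, using hypothetical helper names (`retry`, `flaky`) that are not part of the Dockerfile:

```shell
# Attempt a command up to N times, then fail loudly instead of
# letting a transient error pass silently.
retry() {
  max="$1"; shift
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then return 0; fi
    echo "attempt $attempt/$max failed; retrying" >&2
    attempt=$((attempt + 1))
  done
  return 1
}

n=0
flaky() { n=$((n + 1)); [ "$n" -ge 3 ]; }  # stand-in: succeeds on the third call

retry 5 flaky && echo "ok after $n attempts"
```

The Dockerfile version additionally sleeps `attempt * 2` seconds between tries, which spaces out hits on the flaky CDN.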
@@ -125,22 +132,15 @@ RUN printf 'packages:\n - .\n - ui\n' > /tmp/pnpm-workspace.runtime.yaml && \
    node scripts/postinstall-bundled-plugins.mjs && \
    find dist -type f \( -name '*.d.ts' -o -name '*.d.mts' -o -name '*.d.cts' -o -name '*.map' \) -delete

# ── Runtime base images ─────────────────────────────────────────
FROM ${OPENCLAW_NODE_BOOKWORM_IMAGE} AS base-default
ARG OPENCLAW_NODE_BOOKWORM_DIGEST
LABEL org.opencontainers.image.base.name="docker.io/library/node:24-bookworm" \
      org.opencontainers.image.base.digest="${OPENCLAW_NODE_BOOKWORM_DIGEST}"

FROM ${OPENCLAW_NODE_BOOKWORM_SLIM_IMAGE} AS base-slim
# ── Runtime base image ──────────────────────────────────────────
FROM ${OPENCLAW_NODE_BOOKWORM_SLIM_IMAGE} AS base-runtime
ARG OPENCLAW_NODE_BOOKWORM_SLIM_DIGEST
LABEL org.opencontainers.image.base.name="docker.io/library/node:24-bookworm-slim" \
      org.opencontainers.image.base.digest="${OPENCLAW_NODE_BOOKWORM_SLIM_DIGEST}"

# ── Stage 3: Runtime ────────────────────────────────────────────
FROM base-${OPENCLAW_VARIANT}
ARG OPENCLAW_VARIANT
FROM base-runtime
ARG OPENCLAW_BUNDLED_PLUGIN_DIR
ARG OPENCLAW_DOCKER_APT_UPGRADE

# OCI base-image metadata for downstream image consumers.
# If you change these annotations, also update:
@@ -155,16 +155,10 @@ LABEL org.opencontainers.image.source="https://github.com/openclaw/openclaw" \

WORKDIR /app

# Install system utilities present in bookworm but missing in bookworm-slim.
# On the full bookworm image these are already installed (apt-get is a no-op).
# Smoke workflows can opt out of distro upgrades to cut repeated CI time while
# keeping the default runtime image behavior unchanged.
# Install runtime system utilities missing from bookworm-slim.
RUN --mount=type=cache,id=openclaw-bookworm-apt-cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,id=openclaw-bookworm-apt-lists,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    if [ "${OPENCLAW_DOCKER_APT_UPGRADE}" != "0" ]; then \
      DEBIAN_FRONTEND=noninteractive apt-get upgrade -y --no-install-recommends; \
    fi && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
      procps hostname curl git lsof openssl

@@ -7,7 +7,6 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN --mount=type=cache,id=openclaw-sandbox-bookworm-apt-cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,id=openclaw-sandbox-bookworm-apt-lists,target=/var/lib/apt,sharing=locked \
    apt-get update \
    && apt-get upgrade -y --no-install-recommends \
    && apt-get install -y --no-install-recommends \
    bash \
    ca-certificates \

@@ -7,7 +7,6 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN --mount=type=cache,id=openclaw-sandbox-bookworm-apt-cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,id=openclaw-sandbox-bookworm-apt-lists,target=/var/lib/apt,sharing=locked \
    apt-get update \
    && apt-get upgrade -y --no-install-recommends \
    && apt-get install -y --no-install-recommends \
    bash \
    ca-certificates \

@@ -24,7 +24,6 @@ ENV PATH=${BUN_INSTALL_DIR}/bin:${BREW_INSTALL_DIR}/bin:${BREW_INSTALL_DIR}/sbin
RUN --mount=type=cache,id=openclaw-sandbox-common-apt-cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,id=openclaw-sandbox-common-apt-lists,target=/var/lib/apt,sharing=locked \
    apt-get update \
    && apt-get upgrade -y --no-install-recommends \
    && apt-get install -y --no-install-recommends ${PACKAGES}

RUN if [ "${INSTALL_PNPM}" = "1" ]; then npm install -g pnpm; fi

@@ -6,9 +6,9 @@ services:
      TERM: xterm-256color
      OPENCLAW_GATEWAY_TOKEN: ${OPENCLAW_GATEWAY_TOKEN:-}
      OPENCLAW_ALLOW_INSECURE_PRIVATE_WS: ${OPENCLAW_ALLOW_INSECURE_PRIVATE_WS:-}
      # Docker bridge networks usually do not carry mDNS multicast reliably.
      # Set OPENCLAW_DISABLE_BONJOUR=0 only on host/macvlan/mDNS-capable networks.
      OPENCLAW_DISABLE_BONJOUR: ${OPENCLAW_DISABLE_BONJOUR:-1}
      # Empty means auto: Bonjour disables itself in detected containers.
      # Set 0 only on host/macvlan/mDNS-capable networks; set 1 to force off.
      OPENCLAW_DISABLE_BONJOUR: ${OPENCLAW_DISABLE_BONJOUR:-}
      # OpenTelemetry export is outbound OTLP/HTTP from the Gateway. Prometheus
      # uses the existing authenticated Gateway route; it does not need a port.
      OTEL_EXPORTER_OTLP_ENDPOINT: ${OTEL_EXPORTER_OTLP_ENDPOINT:-}
@@ -34,6 +34,11 @@ services:
    # - /var/run/docker.sock:/var/run/docker.sock
    # group_add:
    #   - "${DOCKER_GID:-999}"
    # Let bundled local-model providers reach host-side LM Studio/Ollama via
    # http://host.docker.internal:<port>. Docker Desktop usually provides this
    # alias; the host-gateway mapping makes it work on Linux Docker Engine too.
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:
      - "${OPENCLAW_GATEWAY_PORT:-18789}:18789"
      - "${OPENCLAW_BRIDGE_PORT:-18790}:18790"
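The new compose default turns `OPENCLAW_DISABLE_BONJOUR` into a tri-state switch. That contract can be sketched as a small case dispatch; `bonjour_mode` is an illustrative helper, not an OpenClaw internal:

```shell
# Tri-state semantics from the compose comments: empty = auto-detect,
# 0 = force Bonjour on, 1 = force it off.
bonjour_mode() {
  case "${OPENCLAW_DISABLE_BONJOUR:-}" in
    1)  echo "off (forced)" ;;
    0)  echo "on (forced)" ;;
    "") echo "auto (disabled when a container is detected)" ;;
    *)  echo "unknown value" >&2; return 1 ;;
  esac
}

OPENCLAW_DISABLE_BONJOUR=  bonjour_mode   # auto (disabled when a container is detected)
OPENCLAW_DISABLE_BONJOUR=0 bonjour_mode   # on (forced)
OPENCLAW_DISABLE_BONJOUR=1 bonjour_mode   # off (forced)
```

Leaving the variable unset in `.env` therefore keeps the auto-detect path; the old compose file forced it to `1` inside containers.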
@@ -1,4 +1,4 @@
7fa6e35bb9f9d3096d6281f141488be0dcfe15de40dc4f5c0305eb1ff2bc60b6 config-baseline.json
5f5fb87fd46f9cbb84d8af17e00ae3c4b74062e8ad517bc2260ba83da2e9014f config-baseline.core.json
7cd9c908f066c143eab2a201efbc9640f483ab28bba92ddeca1d18cc2b528bc3 config-baseline.channel.json
f9e0174988718959fe1923a54496ec5b9262721fe1e7306f32ccb1316d9d9c3f config-baseline.plugin.json
d2b40fe44761f9e412ce3d4336f341c9c4406f990d09219898cb97cd12c0fdd1 config-baseline.json
200c156a074a1eec03bb04b3852b4fd5f1fa4ffa140cc5acdc5e412a33600f14 config-baseline.core.json
07963db49502132f26db396c56b36e018b110e6c55a68b3cb012d3ec96f43901 config-baseline.channel.json
74b74cb18ac37c0acaa765f398f1f9edbcee4c43567f02d45c89598a1e13afb4 config-baseline.plugin.json

@@ -1,2 +1,2 @@
fd941e0485a92ebb8256cf2256330b58c2d5bd94189f4a05d7394353ef7bed88 plugin-sdk-api-baseline.json
11ef8362518a0d9f221dc1958b25db46956d1916f278b53e52199bf6c2cbc65b plugin-sdk-api-baseline.jsonl
8371f19a19ceeae4eb20fbfe8e68e51f6f54f42c487d7d5c75f214ab1ba0922a plugin-sdk-api-baseline.json
a5f5e15e75f8cf27ebaa1302cfe0488974edd53121279c1e90705bc531a4761a plugin-sdk-api-baseline.jsonl

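These baseline files pair each artifact with a SHA-256 digest, so they can be checked with standard tooling. A sketch of the round trip, with a stand-in file rather than the real baselines:

```shell
# Create a stand-in baseline file, record its digest, then verify it.
cd "$(mktemp -d)"
printf '{"demo": true}\n' > config-baseline.json
sha256sum config-baseline.json > baseline.sha256
sha256sum -c baseline.sha256
```

`sha256sum -c` exits non-zero as soon as any listed file's content drifts from its recorded digest, which is what makes these files useful as CI guards.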
@@ -435,6 +435,22 @@
    "source": "Setup",
    "target": "设置"
  },
  {
    "source": "Migrate",
    "target": "迁移"
  },
  {
    "source": "Migration",
    "target": "迁移"
  },
  {
    "source": "Hermes",
    "target": "Hermes"
  },
  {
    "source": "Archive-only",
    "target": "仅归档"
  },
  {
    "source": "Channel Plugin SDK",
    "target": "渠道插件 SDK"

@@ -15,7 +15,7 @@ This document defines the canonical credential eligibility and resolution semant

The goal is to keep selection-time and runtime behavior aligned.

## Stable Probe Reason Codes
## Stable probe reason codes

- `ok`
- `excluded_by_auth_order`
@@ -25,7 +25,7 @@ The goal is to keep selection-time and runtime behavior aligned.
- `unresolved_ref`
- `no_model`

## Token Credentials
## Token credentials

Token credentials (`type: "token"`) support inline `token` and/or `tokenRef`.

@@ -44,7 +44,7 @@ Token credentials (`type: "token"`) support inline `token` and/or `tokenRef`.
2. For eligible profiles, token material may be resolved from inline value or `tokenRef`.
3. Unresolvable refs produce `unresolved_ref` in `models status --probe` output.

## Explicit Auth Order Filtering
## Explicit auth order filtering

- When `auth.order.<provider>` or the auth-store order override is set for a
  provider, `models status --probe` only probes profile ids that remain in the
@@ -54,7 +54,7 @@ Token credentials (`type: "token"`) support inline `token` and/or `tokenRef`.
  `reasonCode: excluded_by_auth_order` and the detail
  `Excluded by auth.order for this provider.`

## Probe Target Resolution
## Probe target resolution

- Probe targets can come from auth profiles, environment credentials, or
  `models.json`.

@@ -3,7 +3,7 @@ summary: "Redirect to /gateway/authentication"
title: "Auth monitoring"
---

This page moved to [Authentication](/gateway/authentication). See [Authentication](/gateway/authentication) for auth monitoring documentation.
Auth monitoring lives under [Authentication](/gateway/authentication).

## Related


@@ -3,7 +3,7 @@ summary: "Redirect to Task Flow"
title: "ClawFlow"
---

ClawFlow was renamed to [Task Flow](/automation/taskflow). See [Task Flow](/automation/taskflow) for the current documentation.
ClawFlow was renamed to [Task flow](/automation/taskflow).

## Related

@@ -43,10 +43,13 @@ Cron is the Gateway's built-in scheduler. It persists jobs, wakes the agent at t
- Job definitions persist at `~/.openclaw/cron/jobs.json` so restarts do not lose schedules.
- Runtime execution state persists next to it in `~/.openclaw/cron/jobs-state.json`. If you track cron definitions in git, track `jobs.json` and gitignore `jobs-state.json`.
- After the split, older OpenClaw versions can read `jobs.json` but may treat jobs as fresh because runtime fields now live in `jobs-state.json`.
- When `jobs.json` is edited while the Gateway is running or stopped, OpenClaw compares the changed schedule fields with pending runtime slot metadata and clears stale `nextRunAtMs` values. Pure formatting or key-order-only rewrites preserve the pending slot.
- All cron executions create [background task](/automation/tasks) records.
- One-shot jobs (`--at`) auto-delete after success by default.
- Isolated cron runs best-effort close tracked browser tabs/processes for their `cron:<jobId>` session when the run completes, so detached browser automation does not leave orphaned processes behind.
- Isolated cron runs also guard against stale acknowledgement replies. If the first result is just an interim status update (`on it`, `pulling everything together`, and similar hints) and no descendant subagent run is still responsible for the final answer, OpenClaw re-prompts once for the actual result before delivery.
- Isolated cron runs prefer structured execution-denial metadata from the embedded run, then fall back to known final summary/output markers such as `SYSTEM_RUN_DENIED` and `INVALID_REQUEST`, so a blocked command is not reported as a green run.
- Isolated cron runs also treat run-level agent failures as job errors even when no reply payload is produced, so model/provider failures increment error counters and trigger failure notifications instead of clearing the job as successful.

<a id="maintenance"></a>

@@ -159,6 +162,7 @@ Failure notifications follow a separate destination path:
- `job.delivery.failureDestination` overrides that per job.
- If neither is set and the job already delivers via `announce`, failure notifications now fall back to that primary announce target.
- `delivery.failureDestination` is only supported on `sessionTarget="isolated"` jobs unless the primary delivery mode is `webhook`.
- `failureAlert.includeSkipped: true` opts a job or global cron alert policy into repeated skipped-run alerts. Skipped runs keep a separate consecutive skip counter, so they do not affect execution-error backoff.

## CLI examples

@@ -396,6 +400,8 @@ Model override note:

The runtime state sidecar is derived from `cron.store`: a `.json` store such as `~/clawd/cron/jobs.json` uses `~/clawd/cron/jobs-state.json`, while a store path without a `.json` suffix appends `-state.json`.

If you hand-edit `jobs.json`, leave `jobs-state.json` out of source control. OpenClaw uses that sidecar for pending slots, active markers, last-run metadata, and the schedule identity that tells the scheduler when an externally edited job needs a fresh `nextRunAtMs`.

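The sidecar derivation rule above can be sketched in a few lines of shell; `state_path` is a hypothetical helper for illustration, not an OpenClaw command:

```shell
# Map a cron.store path to its runtime-state sidecar:
# *.json stores swap the suffix; anything else appends -state.json.
state_path() {
  case "$1" in
    *.json) printf '%s\n' "${1%.json}-state.json" ;;
    *)      printf '%s\n' "$1-state.json" ;;
  esac
}

state_path /home/user/clawd/cron/jobs.json  # /home/user/clawd/cron/jobs-state.json
state_path /var/openclaw/cron-store         # /var/openclaw/cron-store-state.json
```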
Disable cron: `cron.enabled: false` or `OPENCLAW_SKIP_CRON=1`.

<AccordionGroup>

@@ -3,7 +3,7 @@ summary: "Redirect to /automation"
title: "Cron vs heartbeat"
---

This page moved to [Automation & Tasks](/automation). See [Automation & Tasks](/automation) for the decision guide comparing cron and heartbeat.
The decision guide for cron vs heartbeat lives under [Automation and tasks](/automation).

## Related


@@ -173,7 +173,7 @@ openclaw hooks enable <hook-name>

### session-memory details

Extracts the last 15 user/assistant messages, generates a descriptive filename slug via LLM, and saves to `<workspace>/memory/YYYY-MM-DD-slug.md`. Requires `workspace.dir` to be configured.
Extracts the last 15 user/assistant messages, generates a descriptive filename slug via LLM, and saves to `<workspace>/memory/YYYY-MM-DD-slug.md` using the host local date. Requires `workspace.dir` to be configured.

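The path scheme from that hunk can be sketched as follows; the slug here is a made-up stand-in, since the real one is LLM-generated:

```shell
# Build the memory file path from workspace dir, host local date, and slug.
workspace=/tmp/workspace
slug="weekly-report-notes"
memory_file="$workspace/memory/$(date +%F)-$slug.md"
echo "$memory_file"
```

`date +%F` emits the host's local `YYYY-MM-DD`, which is the "host local date" wording the hunk adds.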
<a id="bootstrap-extra-files"></a>

@@ -11,7 +11,7 @@ Standing orders grant your agent **permanent operating authority** for defined p

This is the difference between telling your assistant "send the weekly report" every Friday vs. granting standing authority: "You own the weekly report. Compile it every Friday, send it, and only escalate if something looks wrong."

## Why Standing Orders?
## Why standing orders

**Without standing orders:**

@@ -44,7 +44,7 @@ The agent loads these instructions every session via the workspace bootstrap fil
Put standing orders in `AGENTS.md` to guarantee they're loaded every session. The workspace bootstrap automatically injects `AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md`, and `MEMORY.md` — but not arbitrary files in subdirectories.
</Tip>

## Anatomy of a Standing Order
## Anatomy of a standing order

```markdown
## Program: Weekly Status Report
@@ -54,7 +54,7 @@ Put standing orders in `AGENTS.md` to guarantee they're loaded every session. Th
**Approval gate:** None for standard reports. Flag anomalies for human review.
**Escalation:** If data source is unavailable or metrics look unusual (>2σ from norm)

### Execution Steps
### Execution steps

1. Pull metrics from configured sources
2. Compare to prior week and targets
@@ -62,14 +62,14 @@ Put standing orders in `AGENTS.md` to guarantee they're loaded every session. Th
4. Deliver summary via configured channel
5. Log completion to Agent/Logs/

### What NOT to Do
### What NOT to do

- Do not send reports to external parties
- Do not modify source data
- Do not skip delivery if metrics look bad — report accurately
```

## Standing Orders + Cron Jobs
## Standing orders plus cron jobs

Standing orders define **what** the agent is authorized to do. [Cron jobs](/automation/cron-jobs) define **when** it happens. They work together:

@@ -97,7 +97,7 @@ openclaw cron add \

## Examples

### Example 1: Content & Social Media (Weekly Cycle)
### Example 1: content and social media (weekly cycle)

```markdown
## Program: Content & Social Media
@@ -106,13 +106,13 @@ openclaw cron add \
**Approval gate:** All posts require owner review for first 30 days, then standing approval
**Trigger:** Weekly cycle (Monday review → mid-week drafts → Friday brief)

### Weekly Cycle
### Weekly cycle

- **Monday:** Review platform metrics and audience engagement
- **Tuesday–Thursday:** Draft social posts, create blog content
- **Friday:** Compile weekly marketing brief → deliver to owner

### Content Rules
### Content rules

- Voice must match the brand (see SOUL.md or brand voice guide)
- Never identify as AI in public-facing content
@@ -120,7 +120,7 @@ openclaw cron add \
- Focus on value to audience, not self-promotion
```

### Example 2: Finance Operations (Event-Triggered)
### Example 2: finance operations (event-triggered)

```markdown
## Program: Financial Processing
@@ -129,7 +129,7 @@ openclaw cron add \
**Approval gate:** None for analysis. Recommendations require owner approval.
**Trigger:** New data file detected OR scheduled monthly cycle

### When New Data Arrives
### When new data arrives

1. Detect new file in designated input directory
2. Parse and categorize all transactions
@@ -138,7 +138,7 @@ openclaw cron add \
5. Generate report in designated output directory
6. Deliver summary to owner via configured channel

### Escalation Rules
### Escalation rules

- Single item > $500: immediate alert
- Category > budget by 20%: flag in report
@@ -146,7 +146,7 @@ openclaw cron add \
- Failed processing after 2 retries: report failure, do not guess
```

### Example 3: Monitoring & Alerts (Continuous)
### Example 3: monitoring and alerts (continuous)

```markdown
## Program: System Monitoring
@@ -162,7 +162,7 @@ openclaw cron add \
- Pending tasks not stale (>24 hours)
- Delivery channels operational

### Response Matrix
### Response matrix

| Condition | Action | Escalate? |
| ---------------- | ------------------------ | ------------------------ |
@@ -172,7 +172,7 @@ openclaw cron add \
| Channel offline | Log and retry next cycle | If offline > 2 hours |
```

## The Execute-Verify-Report Pattern
## Execute-verify-report pattern

Standing orders work best when combined with strict execution discipline. Every task in a standing order should follow this loop:

@@ -181,7 +181,7 @@ Standing orders work best when combined with strict execution discipline. Every
3. **Report** — Tell the owner what was done and what was verified

```markdown
### Execution Rules
### Execution rules

- Every task follows Execute-Verify-Report. No exceptions.
- "I'll do that" is not execution. Do it, then report.
@@ -193,7 +193,7 @@ Standing orders work best when combined with strict execution discipline. Every

This pattern prevents the most common agent failure mode: acknowledging a task without completing it.

## Multi-Program Architecture
## Multi-program architecture

For agents managing multiple concerns, organize standing orders as separate programs with clear boundaries:

@@ -222,7 +222,7 @@ Each program should have:
- Its own **approval gates** (some programs need more oversight than others)
- Clear **boundaries** (the agent should know where one program ends and another begins)

## Best Practices
## Best practices

### Do

@@ -243,8 +243,8 @@ Each program should have:

## Related

- [Automation & Tasks](/automation) — all automation mechanisms at a glance
- [Cron Jobs](/automation/cron-jobs) — schedule enforcement for standing orders
- [Hooks](/automation/hooks) — event-driven scripts for agent lifecycle events
- [Webhooks](/automation/cron-jobs#webhooks) — inbound HTTP event triggers
- [Agent Workspace](/concepts/agent-workspace) — where standing orders live, including the full list of auto-injected bootstrap files (AGENTS.md, SOUL.md, etc.)
- [Automation and tasks](/automation): all automation mechanisms at a glance.
- [Cron jobs](/automation/cron-jobs): schedule enforcement for standing orders.
- [Hooks](/automation/hooks): event-driven scripts for agent lifecycle events.
- [Webhooks](/automation/cron-jobs#webhooks): inbound HTTP event triggers.
- [Agent workspace](/concepts/agent-workspace): where standing orders live, including the full list of auto-injected bootstrap files (`AGENTS.md`, `SOUL.md`, etc.).

@@ -9,7 +9,7 @@ sidebarTitle: "Background tasks"
---

<Note>
Looking for scheduling? See [Automation & Tasks](/automation) for choosing the right mechanism. This page covers **tracking** background work, not scheduling it.
Looking for scheduling? See [Automation and tasks](/automation) for choosing the right mechanism. This page is the activity ledger for background work, not the scheduler.
</Note>

Background tasks track work that runs **outside your main conversation session**: ACP runs, subagent spawns, isolated cron job executions, and CLI-initiated operations.

@@ -16,7 +16,9 @@ Feishu/Lark is an all-in-one collaboration platform where teams chat, share docu

## Quick start

> **Requires OpenClaw 2026.4.25 or above.** Run `openclaw --version` to check. Upgrade with `openclaw update`.
<Note>
Requires OpenClaw 2026.4.25 or above. Run `openclaw --version` to check. Upgrade with `openclaw update`.
</Note>

<Steps>
  <Step title="Run the channel setup wizard">
@@ -169,7 +171,9 @@ openclaw pairing list feishu
| `/reset` | Reset the current session |
| `/model` | Show or switch the AI model |

> Feishu/Lark does not support native slash-command menus, so send these as plain text messages.
<Note>
Feishu/Lark does not support native slash-command menus, so send these as plain text messages.
</Note>

---

@@ -7,7 +7,9 @@ title: "Group messages"

Goal: let Clawd sit in WhatsApp groups, wake up only when pinged, and keep that thread separate from the personal DM session.

Note: `agents.list[].groupChat.mentionPatterns` is now used by Telegram/Discord/Slack/iMessage as well; this doc focuses on WhatsApp-specific behavior. For multi-agent setups, set `agents.list[].groupChat.mentionPatterns` per agent (or use `messages.groupChat.mentionPatterns` as a global fallback).
<Note>
`agents.list[].groupChat.mentionPatterns` is also used by Telegram, Discord, Slack, and iMessage. This doc focuses on WhatsApp-specific behavior. For multi-agent setups, set `agents.list[].groupChat.mentionPatterns` per agent, or use `messages.groupChat.mentionPatterns` as a global fallback.
</Note>

## Current implementation (2025-12-03)

@@ -132,6 +132,8 @@ New user-defined `override` rules are inserted ahead of default suppress rules,

If you run Synapse behind a reverse proxy or workers, make sure `/_matrix/client/.../pushrules/` reaches Synapse correctly. Push delivery is handled by the main process or `synapse.app.pusher` / configured pusher workers — ensure those are healthy.

The rule uses the `event_property_is` push-rule condition (MSC3758, push rule v1.10), which was added to Synapse in 2023. Older Synapse releases accept the `PUT pushrules/...` call but silently never match the condition — upgrade Synapse if no notification arrives on a finalized preview edit.

</Accordion>

<Accordion title="Tuwunel">

File diff suppressed because it is too large
@@ -39,7 +39,9 @@ teams login
teams status # verify you're logged in and see your tenant info
```

> **Note:** The Teams CLI is currently in preview. Commands and flags may change between releases.
<Note>
The Teams CLI is currently in preview. Commands and flags may change between releases.
</Note>

**2. Start a tunnel** (Teams can't reach localhost)

@@ -55,7 +57,9 @@ devtunnel host my-openclaw-bot
# Your endpoint: https://<tunnel-id>.devtunnels.ms/api/messages
```

> **Note:** `--allow-anonymous` is required because Teams can't authenticate with devtunnels. Each incoming bot request is still validated by the Teams SDK automatically.
<Note>
`--allow-anonymous` is required because Teams cannot authenticate with devtunnels. Each incoming bot request is still validated by the Teams SDK automatically.
</Note>

Alternatives: `ngrok http 3978` or `tailscale funnel 3978` (but these may change URLs each session).

@@ -110,9 +114,11 @@ teams app doctor <teamsAppId>

This runs diagnostics across bot registration, AAD app config, manifest validity, and SSO setup.

For production deployments, consider using [federated authentication](#federated-authentication-certificate--managed-identity) (certificate or managed identity) instead of client secrets.
For production deployments, consider using [federated authentication](/channels/msteams#federated-authentication-certificate-plus-managed-identity) (certificate or managed identity) instead of client secrets.

Note: group chats are blocked by default (`channels.msteams.groupPolicy: "allowlist"`). To allow group replies, set `channels.msteams.groupAllowFrom` (or use `groupPolicy: "open"` to allow any member, mention-gated).
<Note>
Group chats are blocked by default (`channels.msteams.groupPolicy: "allowlist"`). To allow group replies, set `channels.msteams.groupAllowFrom`, or use `groupPolicy: "open"` to allow any member (mention-gated).
</Note>

## Goals

@@ -217,7 +223,7 @@ If you can't use the Teams CLI, you can set up the bot manually through the Azur
| **Type of App** | **Single Tenant** (recommended - see note below) |
| **Creation type** | **Create new Microsoft App ID** |

> **Deprecation notice:** Creation of new multi-tenant bots was deprecated after 2025-07-31. Use **Single Tenant** for new bots.
<Warning>
Creation of new multi-tenant bots was deprecated after 2025-07-31. Use **Single Tenant** for new bots.
</Warning>

3. Click **Review + create** → **Create** (wait ~1-2 minutes)

@@ -275,7 +283,7 @@ The Teams channel starts automatically when the plugin is available and `msteams

</details>

## Federated Authentication (Certificate + Managed Identity)
## Federated authentication (certificate plus managed identity)

> Added in 2026.3.24

@@ -417,7 +425,7 @@ For AKS deployments using workload identity:

**Default behavior:** When `authType` is not set, OpenClaw defaults to client secret authentication. Existing configurations continue to work without changes.

## Local Development (Tunneling)
## Local development (tunneling)

Teams can't reach `localhost`. Use a persistent dev tunnel so your URL stays the same across sessions:

||||
@@ -487,7 +495,7 @@ The action is gated by `channels.msteams.actions.memberInfo` (default: enabled w
|
||||
- In other words, allowlists gate who can trigger the agent; only specific supplemental context paths are filtered today.
|
||||
- DM history can be limited with `channels.msteams.dmHistoryLimit` (user turns). Per-user overrides: `channels.msteams.dms["<user_id>"].historyLimit`.
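A hedged config sketch of those DM history knobs (the numeric values are illustrative only; the key names come from the text above):

```json
{
  "channels": {
    "msteams": {
      "dmHistoryLimit": 20,
      "dms": {
        "<user_id>": { "historyLimit": 5 }
      }
    }
  }
}
```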

## Current Teams RSC Permissions (Manifest)

## Current Teams RSC permissions (manifest)

These are the **existing resourceSpecific permissions** in our Teams app manifest. They only apply inside the team/chat where the app is installed.

@@ -511,7 +519,7 @@ To add RSC permissions via the Teams CLI:

teams app rsc add <teamsAppId> ChannelMessage.Read.Group --type Application
```

## Example Teams Manifest (redacted)

## Example Teams manifest (redacted)

Minimal, valid example with the required fields. Replace IDs and URLs.

@@ -643,7 +651,7 @@ If you need images/files in **channels** or want to fetch **message history**, y

**Additional permission for user mentions:** User @mentions work out of the box for users in the conversation. However, if you want to dynamically search and mention users who are **not in the current conversation**, add `User.Read.All` (Application) permission and grant admin consent.

## Known Limitations

## Known limitations

### Webhook timeouts

@@ -706,7 +714,7 @@ Key settings (see `/gateway/configuration` for shared channel patterns):

- `agent:<agentId>:msteams:channel:<conversationId>`
- `agent:<agentId>:msteams:group:<conversationId>`

## Reply Style: Threads vs Posts

## Reply style: threads vs posts

Teams recently introduced two channel UI styles over the same underlying data model:

@@ -833,7 +841,7 @@ OpenClaw sends Teams polls as Adaptive Cards (there is no native Teams poll API)

- The gateway must stay online to record votes.
- Polls do not auto-post result summaries yet (inspect the store file if needed).

## Presentation Cards

## Presentation cards

Send semantic presentation payloads to Teams users or conversations using the `message` tool or CLI. OpenClaw renders them as Teams Adaptive Cards from the generic presentation contract.

@@ -914,7 +922,9 @@ openclaw message send --channel msteams --target "conversation:19:abc...@thread.

}
```

Note: Without the `user:` prefix, names default to group/team resolution. Always use `user:` when targeting people by display name.

<Note>
Without the `user:` prefix, names default to group or team resolution. Always use `user:` when targeting people by display name.
</Note>

## Proactive messaging

@@ -947,7 +957,7 @@ https://teams.microsoft.com/l/channel/19%3A15bc...%40thread.tacv2/ChannelName?gr

- Channel ID = path segment after `/channel/` (URL-decoded)
- **Ignore** the `groupId` query parameter
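The extraction rule above can be sketched in plain shell (the link here is a made-up example, not a real team; real channel IDs look like `19%3A...%40thread.tacv2`):

```shell
# Illustrative Teams channel link (not a real team)
link='https://teams.microsoft.com/l/channel/19%3Aexample%40thread.tacv2/General?groupId=ignore-me'

# Channel ID = path segment after /channel/
enc=${link#*/l/channel/}   # drop everything up to and including /l/channel/
enc=${enc%%/*}             # keep only the first path segment

# URL-decode the escapes Teams uses in channel IDs; the groupId query parameter is ignored
channel_id=$(printf '%s' "$enc" | sed 's/%3A/:/g; s/%40/@/g')
echo "$channel_id"   # 19:example@thread.tacv2
```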

## Private Channels

## Private channels

Bots have limited support in private channels:

@@ -52,8 +52,9 @@ Account scoping behavior:

Treat these as sensitive (they gate access to your assistant).

Important: this store is for DM access. Group authorization is separate.
Approving a DM pairing code does not automatically allow that sender to run group commands or control the bot in groups. For group access, configure the channel's explicit group allowlists (for example `groupAllowFrom`, `groups`, or per-group/per-topic overrides depending on the channel).

<Note>
This store is for DM access. Group authorization is separate. Approving a DM pairing code does not automatically allow that sender to run group commands or control the bot in groups. For group access, configure the channel's explicit group allowlists (for example `groupAllowFrom`, `groups`, or per-group or per-topic overrides depending on the channel).
</Note>

## 2) Node device pairing (iOS/Android/macOS/headless nodes)

@@ -100,11 +101,9 @@ If the same device retries with different auth details (for example different

role/scopes/public key), the previous pending request is superseded and a new
`requestId` is created.

Important: an already paired device does not get broader access silently. If it
reconnects asking for more scopes or a broader role, OpenClaw keeps the
existing approval as-is and creates a fresh pending upgrade request. Use
`openclaw devices list` to compare the currently approved access with the newly
requested access before you approve.

<Note>
An already paired device does not get broader access silently. If it reconnects asking for more scopes or a broader role, OpenClaw keeps the existing approval as-is and creates a fresh pending upgrade request. Use `openclaw devices list` to compare the currently approved access with the newly requested access before you approve.
</Note>

### Optional trusted-CIDR node auto-approve

@@ -152,7 +152,9 @@ openclaw channels status --probe

- Approve code on the server: `openclaw pairing approve signal <PAIRING_CODE>`.
- Save the bot number as a contact on your phone to avoid "Unknown contact".

Important: registering a phone number account with `signal-cli` can de-authenticate the main Signal app session for that number. Prefer a dedicated bot number, or use QR link mode if you need to keep your existing phone app setup.

<Warning>
Registering a phone number account with `signal-cli` can de-authenticate the main Signal app session for that number. Prefer a dedicated bot number, or use QR link mode if you need to keep your existing phone app setup.
</Warning>

Upstream references:

@@ -530,7 +530,9 @@ Manual reply tags are supported:

- `[[reply_to_current]]`
- `[[reply_to:<id>]]`

Note: `replyToMode="off"` disables **all** reply threading in Slack, including explicit `[[reply_to_*]]` tags. This differs from Telegram, where explicit tags are still honored in `"off"` mode — Slack threads hide messages from the channel while Telegram replies stay visible inline.

<Note>
`replyToMode="off"` disables **all** reply threading in Slack, including explicit `[[reply_to_*]]` tags. This differs from Telegram, where explicit tags are still honored in `"off"` mode. Slack threads hide messages from the channel while Telegram replies stay visible inline.
</Note>
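For reference, a config sketch (the `channels.slack` nesting is an assumption based on the other channel examples in these docs; only the `replyToMode` key and its `"off"` value come from the note above):

```json
{
  "channels": {
    "slack": {
      "replyToMode": "off"
    }
  }
}
```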

## Ack reactions

@@ -298,8 +298,8 @@ curl "https://api.telegram.org/bot<bot_token>/getUpdates"

For text-only replies:

- DM: OpenClaw keeps the same preview message and performs a final edit in place (no second message)
- group/topic: OpenClaw keeps the same preview message and performs a final edit in place (no second message)
- short DM/group/topic previews: OpenClaw keeps the same preview message and performs a final edit in place
- previews older than about one minute: OpenClaw sends the completed reply as a fresh final message and then cleans up the preview, so Telegram's visible timestamp reflects completion time instead of the preview creation time

For complex replies (for example media payloads), OpenClaw falls back to normal final delivery and then cleans up the preview message.

@@ -146,6 +146,7 @@ OpenClaw recommends running WhatsApp on a separate number when possible. (The ch

## Runtime model

- Gateway owns the WhatsApp socket and reconnect loop.
- The reconnect watchdog uses WhatsApp Web transport activity, not only inbound app-message volume, so a quiet linked-device session is not restarted solely because nobody has sent a message recently. A longer application-silence cap still forces a reconnect if transport frames keep arriving but no application messages are handled for the watchdog window.
- Outbound sends require an active WhatsApp listener for the target account.
- Status and broadcast chats are ignored (`@status`, `@broadcast`).
- Direct chats use DM session rules (`session.dmScope`; default `main` collapses DMs to the agent main session).

@@ -510,6 +511,10 @@ Behavior notes:

<Accordion title="Linked but disconnected / reconnect loop">
Symptom: linked account with repeated disconnects or reconnect attempts.

Quiet accounts can stay connected past the normal message timeout; the watchdog
restarts when WhatsApp Web transport activity stops, the socket closes, or
application-level activity stays silent beyond the longer safety window.

Fix:

```bash

@@ -562,7 +567,9 @@ The effective `direct` map is determined first: if the account defines its own `

1. **Direct-specific system prompt** (`direct["<peerId>"].systemPrompt`): used when the specific peer entry exists in the map **and** its `systemPrompt` key is defined. If `systemPrompt` is an empty string (`""`), the wildcard is suppressed and no system prompt is applied.
2. **Direct wildcard system prompt** (`direct["*"].systemPrompt`): used when the specific peer entry is absent from the map entirely, or when it exists but defines no `systemPrompt` key.

Note: `dms` remains the lightweight per-DM history override bucket (`dms.<id>.historyLimit`); prompt overrides live under `direct`.

<Note>
`dms` remains the lightweight per-DM history override bucket (`dms.<id>.historyLimit`). Prompt overrides live under `direct`.
</Note>
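A sketch of the precedence rules above (the `channels.whatsapp` nesting, the prompt strings, and the numeric value are assumptions; the `direct` and `dms` keys come from the text):

```json
{
  "channels": {
    "whatsapp": {
      "direct": {
        "*": { "systemPrompt": "Default DM prompt" },
        "<peerId>": { "systemPrompt": "" }
      },
      "dms": {
        "<peerId>": { "historyLimit": 10 }
      }
    }
  }
}
```

Here the specific peer's empty `systemPrompt` suppresses the wildcard, so that DM runs with no system prompt at all.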

**Difference from Telegram multi-account behavior:** In Telegram, root `groups` is intentionally suppressed for all accounts in a multi-account setup — even accounts that define no `groups` of their own — to prevent a bot from receiving group messages for groups it does not belong to. WhatsApp does not apply this guard: root `groups` and root `direct` are always inherited by accounts that define no account-level override, regardless of how many accounts are configured. In a multi-account WhatsApp setup, if you want per-account group or direct prompts, define the full map under each account explicitly rather than relying on root-level defaults.

@@ -8,7 +8,9 @@ title: "Zalo personal"

Status: experimental. This integration automates a **personal Zalo account** via native `zca-js` inside OpenClaw.

> **Warning:** This is an unofficial integration and may result in account suspension/ban. Use at your own risk.

<Warning>
This is an unofficial integration and may result in account suspension or ban. Use at your own risk.
</Warning>

## Bundled plugin

245 docs/ci.md
File diff suppressed because one or more lines are too long
@@ -11,9 +11,9 @@ Manage isolated agents (workspaces + auth + routing).

Related:

- Multi-agent routing: [Multi-Agent Routing](/concepts/multi-agent)
- Agent workspace: [Agent workspace](/concepts/agent-workspace)
- Skill visibility config: [Skills config](/tools/skills-config)

- [Multi-agent routing](/concepts/multi-agent)
- [Agent workspace](/concepts/agent-workspace)
- [Skills config](/tools/skills-config): skill visibility configuration.

## Examples

@@ -34,10 +34,7 @@ openclaw agents delete work

Use routing bindings to pin inbound channel traffic to a specific agent.

If you also want different visible skills per agent, configure
`agents.defaults.skills` and `agents.list[].skills` in `openclaw.json`. See
[Skills config](/tools/skills-config) and
[Configuration Reference](/gateway/config-agents#agents-defaults-skills).

If you also want different visible skills per agent, configure `agents.defaults.skills` and `agents.list[].skills` in `openclaw.json`. See [Skills config](/tools/skills-config) and [Configuration reference](/gateway/config-agents#agents-defaults-skills).

List bindings:

@@ -12,7 +12,7 @@ Manage chat channel accounts and their runtime status on the Gateway.

Related docs:

- Channel guides: [Channels](/channels/index)
- Channel guides: [Channels](/channels)
- Gateway configuration: [Configuration](/gateway/configuration)

## Common commands

@@ -47,7 +47,9 @@ openclaw channels add --channel nostr --private-key "$NOSTR_PRIVATE_KEY"

openclaw channels remove --channel telegram --delete
```

Tip: `openclaw channels add --help` shows per-channel flags (token, private key, app token, signal-cli paths, etc).

<Tip>
`openclaw channels add --help` shows per-channel flags (token, private key, app token, signal-cli paths, etc).
</Tip>

Common non-interactive add surfaces include:

@@ -81,17 +83,15 @@ Routing behavior stays consistent:

If your config was already in a mixed state (named accounts present and top-level single-account values still set), run `openclaw doctor --fix` to move account-scoped values into the promoted account chosen for that channel. Most channels promote into `accounts.default`; Matrix can preserve an existing named/default target instead.

## Login / logout (interactive)

## Login and logout (interactive)

```bash
openclaw channels login --channel whatsapp
openclaw channels logout --channel whatsapp
```

Notes:

- `channels login` supports `--verbose`.
- `channels login` / `logout` can infer the channel when only one supported login target is configured.
- `channels login` and `logout` can infer the channel when only one supported login target is configured.

## Troubleshooting

@@ -9,23 +9,15 @@ title: "Configure"

Interactive prompt to set up credentials, devices, and agent defaults.

Note: The **Model** section now includes a multi-select for the
`agents.defaults.models` allowlist (what shows up in `/model` and the model picker).
Provider-scoped setup choices merge their selected models into the existing
allowlist instead of replacing unrelated providers already in the config.
Re-running provider auth from configure preserves an existing
`agents.defaults.model.primary`; use `openclaw models auth login --provider <id> --set-default`
or `openclaw models set <model>` when you intentionally want to change the default model.

<Note>
The **Model** section includes a multi-select for the `agents.defaults.models` allowlist (what shows up in `/model` and the model picker). Provider-scoped setup choices merge their selected models into the existing allowlist instead of replacing unrelated providers already in the config. Re-running provider auth from configure preserves an existing `agents.defaults.model.primary`. Use `openclaw models auth login --provider <id> --set-default` or `openclaw models set <model>` when you intentionally want to change the default model.
</Note>

When configure starts from a provider auth choice, the default-model and
allowlist pickers prefer that provider automatically. For paired providers such
as Volcengine/BytePlus, the same preference also matches their coding-plan
variants (`volcengine-plan/*`, `byteplus-plan/*`). If the preferred-provider
filter would produce an empty list, configure falls back to the unfiltered
catalog instead of showing a blank picker.

When configure starts from a provider auth choice, the default-model and allowlist pickers prefer that provider automatically. For paired providers such as Volcengine and BytePlus, the same preference also matches their coding-plan variants (`volcengine-plan/*`, `byteplus-plan/*`). If the preferred-provider filter would produce an empty list, configure falls back to the unfiltered catalog instead of showing a blank picker.

Tip: `openclaw config` without a subcommand opens the same wizard. Use
`openclaw config get|set|unset` for non-interactive edits.

<Tip>
`openclaw config` without a subcommand opens the same wizard. Use `openclaw config get|set|unset` for non-interactive edits.
</Tip>

For web search, `openclaw configure --section web` lets you choose a provider
and configure its credentials. Some providers also show provider-specific

@@ -129,7 +129,7 @@ Discovery is not audited. Only applied operations and writes are logged.

`openclaw onboard --modern` starts Crestodian as the modern onboarding preview.
Plain `openclaw onboard` still runs classic onboarding.

## Setup Bootstrap

## Setup bootstrap

`setup` is the chat-first onboarding bootstrap. It writes only through typed
config operations and asks for approval first.

199 docs/cli/cron.md

@@ -2,7 +2,7 @@

summary: "CLI reference for `openclaw cron` (schedule and run background jobs)"
read_when:
- You want scheduled jobs and wakeups
- You’re debugging cron execution and logs
- You are debugging cron execution and logs
title: "Cron"
---

@@ -10,86 +10,142 @@ title: "Cron"

Manage cron jobs for the Gateway scheduler.

Related:

<Tip>
Run `openclaw cron --help` for the full command surface. See [Cron jobs](/automation/cron-jobs) for the conceptual guide.
</Tip>

- Cron jobs: [Cron jobs](/automation/cron-jobs)

## Sessions

Tip: run `openclaw cron --help` for the full command surface.

`--session` accepts `main`, `isolated`, `current`, or `session:<id>`.

Note: `openclaw cron list` and `openclaw cron show <job-id>` preview the
resolved delivery route. For `channel: "last"`, the preview shows whether the
route resolved from the main/current session or will fail closed.

<AccordionGroup>
<Accordion title="Session keys">
- `main` binds to the agent's main session.
- `isolated` creates a fresh transcript and session id for each run.
- `current` binds to the active session at creation time.
- `session:<id>` pins to an explicit persistent session key.
</Accordion>
<Accordion title="Isolated session semantics">
Isolated runs reset ambient conversation context. Channel and group routing, send/queue policy, elevation, origin, and ACP runtime binding are reset for the new run. Safe preferences and explicit user-selected model or auth overrides can carry across runs.
</Accordion>
</AccordionGroup>

Note: isolated `cron add` jobs default to `--announce` delivery. Use `--no-deliver` to keep
output internal. `--deliver` remains as a deprecated alias for `--announce`.

## Delivery

Note: isolated cron chat delivery is shared. `--announce` is runner fallback
delivery for the final reply; `--no-deliver` disables that fallback but does
not remove the agent's `message` tool when a chat route is available.

`openclaw cron list` and `openclaw cron show <job-id>` preview the resolved delivery route. For `channel: "last"`, the preview shows whether the route resolved from the main or current session, or will fail closed.

Note: one-shot (`--at`) jobs delete after success by default. Use `--keep-after-run` to keep them.

<Note>
Isolated `cron add` jobs default to `--announce` delivery. Use `--no-deliver` to keep output internal. `--deliver` remains as a deprecated alias for `--announce`.
</Note>

Note: `--session` supports `main`, `isolated`, `current`, and `session:<id>`.
Use `current` to bind to the active session at creation time, or `session:<id>` for
an explicit persistent session key.

### Delivery ownership

Note: `--session isolated` creates a fresh transcript/session id for each run.
Safe preferences and explicit user-selected model/auth overrides can carry, but
ambient conversation context does not: channel/group routing, send/queue policy,
elevation, origin, and ACP runtime binding are reset for the new isolated run.

Isolated cron chat delivery is shared between the agent and the runner:

Note: for one-shot CLI jobs, offset-less `--at` datetimes are treated as UTC unless you also pass
`--tz <iana>`, which interprets that local wall-clock time in the given timezone.

- The agent can send directly using the `message` tool when a chat route is available.
- `announce` fallback-delivers the final reply only when the agent did not send directly to the resolved target.
- `webhook` posts the finished payload to a URL.
- `none` disables runner fallback delivery.

Note: recurring jobs now use exponential retry backoff after consecutive errors (30s → 1m → 5m → 15m → 60m), then return to normal schedule after the next successful run.

`--announce` is runner fallback delivery for the final reply. `--no-deliver` disables that fallback but does not remove the agent's `message` tool when a chat route is available.

Note: `openclaw cron run` now returns as soon as the manual run is queued for execution. Successful responses include `{ ok: true, enqueued: true, runId }`; use `openclaw cron runs --id <job-id>` to follow the eventual outcome.

Reminders created from an active chat preserve the live chat delivery target for fallback announce delivery. Internal session keys may be lowercase; do not use them as a source of truth for case-sensitive provider IDs such as Matrix room IDs.

Note: `openclaw cron run <job-id>` force-runs by default. Use `--due` to keep the
older "only run if due" behavior.

### Failure delivery

Note: isolated cron turns suppress stale acknowledgement-only replies. If the
first result is just an interim status update and no descendant subagent run is
responsible for the eventual answer, cron re-prompts once for the real result
before delivery.

Failure notifications resolve in this order:

Note: if an isolated cron run returns only the silent token (`NO_REPLY` /
`no_reply`), cron suppresses direct outbound delivery and the fallback queued
summary path as well, so nothing is posted back to chat.

1. `delivery.failureDestination` on the job.
2. Global `cron.failureDestination`.
3. The job's primary announce target (when no explicit failure destination is set).

Note: `cron add|edit --model ...` uses that selected allowed model for the job.
If the model is not allowed, cron warns and falls back to the job's agent/default
model selection instead. Configured fallback chains still apply, but a plain
model override with no explicit per-job fallback list no longer appends the
agent primary as a hidden extra retry target.

<Note>
Main-session jobs may only use `delivery.failureDestination` when primary delivery mode is `webhook`. Isolated jobs accept it in all modes.
</Note>

Note: isolated cron model precedence is Gmail-hook override first, then per-job
`--model`, then any user-selected stored cron-session model override, then the
normal agent/default selection.

Note: isolated cron runs treat run-level agent failures as job errors even when
no reply payload is produced, so model/provider failures still increment error
counters and trigger failure notifications.

Note: isolated cron fast mode follows the resolved live model selection. Model
config `params.fastMode` applies by default, but a stored session `fastMode`
override still wins over config.

## Scheduling

Note: if an isolated run throws `LiveSessionModelSwitchError`, cron persists the
switched provider/model (and switched auth profile override when present) for
the active run before retrying. The outer retry loop is bounded to 2 switch
retries after the initial attempt, then aborts instead of looping forever.

### One-shot jobs

Note: failure notifications use `delivery.failureDestination` first, then
global `cron.failureDestination`, and finally fall back to the job's primary
announce target when no explicit failure destination is configured.

`--at <datetime>` schedules a one-shot run. Offset-less datetimes are treated as UTC unless you also pass `--tz <iana>`, which interprets the wall-clock time in the given timezone.

Note: retention/pruning is controlled in config:

<Note>
One-shot jobs delete after success by default. Use `--keep-after-run` to preserve them.
</Note>

### Recurring jobs

Recurring jobs use exponential retry backoff after consecutive errors: 30s, 1m, 5m, 15m, 60m. The schedule returns to normal after the next successful run.

Skipped runs are tracked separately from execution errors. They do not affect retry backoff, but `openclaw cron edit <job-id> --failure-alert-include-skipped` can opt failure alerts into repeated skipped-run notifications.

Note: cron job definitions live in `jobs.json`, while pending runtime state lives in `jobs-state.json`. If `jobs.json` is edited externally, the Gateway reloads changed schedules and clears stale pending slots; formatting-only rewrites do not clear the pending slot.

### Manual runs

`openclaw cron run` returns as soon as the manual run is queued. Successful responses include `{ ok: true, enqueued: true, runId }`. Use `openclaw cron runs --id <job-id>` to follow the eventual outcome.

<Note>
`openclaw cron run <job-id>` force-runs by default. Use `--due` to keep the older "only run if due" behavior.
</Note>

## Models

`cron add|edit --model <ref>` selects an allowed model for the job.

<Warning>
If the model is not allowed, cron warns and falls back to the job's agent or default model selection. Configured fallback chains still apply, but a plain model override with no explicit per-job fallback list no longer appends the agent primary as a hidden extra retry target.
</Warning>

### Isolated cron model precedence

Isolated cron resolves the active model in this order:

1. Gmail-hook override.
2. Per-job `--model`.
3. Stored cron-session model override (when the user selected one).
4. Agent or default model selection.

### Fast mode

Isolated cron fast mode follows the resolved live model selection. Model config `params.fastMode` applies by default, but a stored session `fastMode` override still wins over config.

### Live model switch retries

If an isolated run throws `LiveSessionModelSwitchError`, cron persists the switched provider and model (and switched auth profile override when present) for the active run before retrying. The outer retry loop is bounded to two switch retries after the initial attempt, then aborts instead of looping forever.

## Run output and denials

### Stale acknowledgement suppression

Isolated cron turns suppress stale acknowledgement-only replies. If the first result is just an interim status update and no descendant subagent run is responsible for the eventual answer, cron re-prompts once for the real result before delivery.

### Silent token suppression

If an isolated cron run returns only the silent token (`NO_REPLY` or `no_reply`), cron suppresses both direct outbound delivery and the fallback queued summary path, so nothing is posted back to chat.

### Structured denials

Isolated cron runs prefer structured execution-denial metadata from the embedded run, then fall back to known denial markers in final output, such as `SYSTEM_RUN_DENIED`, `INVALID_REQUEST`, and approval-binding refusal phrases.

`cron list` and run history surface the denial reason instead of reporting a blocked command as `ok`.

## Retention

Retention and pruning are controlled in config:

- `cron.sessionRetention` (default `24h`) prunes completed isolated run sessions.
- `cron.runLog.maxBytes` + `cron.runLog.keepLines` prune `~/.openclaw/cron/runs/<jobId>.jsonl`.
- `cron.runLog.maxBytes` and `cron.runLog.keepLines` prune `~/.openclaw/cron/runs/<jobId>.jsonl`.
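A config sketch of those retention knobs (`24h` is the documented default for `sessionRetention`; the `runLog` numbers are illustrative assumptions, not documented defaults):

```json
{
  "cron": {
    "sessionRetention": "24h",
    "runLog": {
      "maxBytes": 1048576,
      "keepLines": 500
    }
  }
}
```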
|
||||
|
||||
Upgrade note: if you have older cron jobs from before the current delivery/store format, run
|
||||
`openclaw doctor --fix`. Doctor now normalizes legacy cron fields (`jobId`, `schedule.cron`,
|
||||
top-level delivery fields including legacy `threadId`, payload `provider` delivery aliases) and migrates simple
|
||||
`notify: true` webhook fallback jobs to explicit webhook delivery when `cron.webhook` is
|
||||
configured.
|
||||
## Migrating older jobs

<Note>
If you have cron jobs from before the current delivery and store format, run `openclaw doctor --fix`. Doctor normalizes legacy cron fields (`jobId`, `schedule.cron`, top-level delivery fields including legacy `threadId`, payload `provider` delivery aliases) and migrates simple `notify: true` webhook fallback jobs to explicit webhook delivery when `cron.webhook` is configured.
</Note>

## Common edits

@@ -131,21 +187,9 @@ openclaw cron add \

`--light-context` applies to isolated agent-turn jobs only. For cron runs, lightweight mode keeps bootstrap context empty instead of injecting the full workspace bootstrap set.

Delivery ownership note:

- Isolated cron chat delivery is shared. The agent can send directly with the
`message` tool when a chat route is available.
- `announce` fallback-delivers the final reply only when the agent did not send
directly to the resolved target. `webhook` posts the finished payload to a URL.
`none` disables runner fallback delivery.
- Reminders created from an active chat preserve the live chat delivery target
for fallback announce delivery. Internal session keys may be lowercase; do not
use them as a source of truth for case-sensitive provider IDs such as Matrix
room IDs.

## Common admin commands

Manual run:
Manual run and inspection:

```bash
openclaw cron list
@@ -155,10 +199,9 @@ openclaw cron run <job-id> --due
openclaw cron runs --id <job-id> --limit 50
```

`cron runs` entries include delivery diagnostics with the intended cron target,
the resolved target, message-tool sends, fallback use, and delivered state.
`cron runs` entries include delivery diagnostics with the intended cron target, the resolved target, message-tool sends, fallback use, and delivered state.

Agent/session retargeting:
Agent and session retargeting:

```bash
openclaw cron edit <job-id> --agent ops
@@ -176,14 +219,6 @@ openclaw cron edit <job-id> --no-best-effort-deliver
openclaw cron edit <job-id> --no-deliver
```

Failure-delivery note:

- `delivery.failureDestination` is supported for isolated jobs.
- Main-session jobs may only use `delivery.failureDestination` when primary
delivery mode is `webhook`.
- If you do not set any failure destination and the job already announces to a
channel, failure notifications reuse that same announce target.

## Related

- [CLI reference](/cli)

@@ -55,10 +55,9 @@ is omitted or `--latest` is passed, OpenClaw only prints the selected pending
request and exits; rerun approval with the exact request ID after verifying
the details.

Note: if a device retries pairing with changed auth details (role/scopes/public
key), OpenClaw supersedes the previous pending entry and issues a new
`requestId`. Run `openclaw devices list` right before approval to use the
current ID.
<Note>
If a device retries pairing with changed auth details (role, scopes, or public key), OpenClaw supersedes the previous pending entry and issues a new `requestId`. Run `openclaw devices list` right before approval to use the current ID.
</Note>

If the device is already paired and asks for broader scopes or a broader role,
OpenClaw keeps the existing approval in place and creates a new pending upgrade
@@ -128,8 +127,9 @@ Returns the revoke result as JSON.
- `--timeout <ms>`: RPC timeout.
- `--json`: JSON output (recommended for scripting).

Note: when you set `--url`, the CLI does not fall back to config or environment credentials.
Pass `--token` or `--password` explicitly. Missing explicit credentials is an error.
<Warning>
When you set `--url`, the CLI does not fall back to config or environment credentials. Pass `--token` or `--password` explicitly. Missing explicit credentials is an error.
</Warning>

## Notes

@@ -110,8 +110,8 @@ Inline `--password` can be exposed in local process listings. Prefer `--password

### Startup profiling

- Set `OPENCLAW_GATEWAY_STARTUP_TRACE=1` to log phase timings during Gateway startup.
- Run `pnpm test:startup:gateway -- --runs 5 --warmup 1` to benchmark Gateway startup. The benchmark records first process output, `/healthz`, `/readyz`, and startup trace timings.
- Set `OPENCLAW_GATEWAY_STARTUP_TRACE=1` to log phase timings during Gateway startup, including per-phase `eventLoopMax` delay and plugin lookup-table timings for installed-index, manifest registry, startup planning, and owner-map work.
- Run `pnpm test:startup:gateway -- --runs 5 --warmup 1` to benchmark Gateway startup. The benchmark records first process output, `/healthz`, `/readyz`, startup trace timings, event-loop delay, and plugin lookup-table timing details.

## Query a running Gateway

@@ -422,21 +422,57 @@ openclaw gateway restart
openclaw gateway uninstall
```

### Install with a wrapper

Use `--wrapper` when the managed service must start through another executable, for example a
secrets manager shim or a run-as helper. The wrapper receives the normal Gateway args and is
responsible for eventually exec'ing `openclaw` or Node with those args.

```bash
cat > ~/.local/bin/openclaw-doppler <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
exec doppler run --project my-project --config production -- openclaw "$@"
EOF
chmod +x ~/.local/bin/openclaw-doppler

openclaw gateway install --wrapper ~/.local/bin/openclaw-doppler --force
openclaw gateway restart
```

You can also set the wrapper through the environment. `gateway install` validates that the path is
an executable file, writes the wrapper into service `ProgramArguments`, and persists
`OPENCLAW_WRAPPER` in the service environment for later forced reinstalls, updates, and doctor
repairs.

```bash
OPENCLAW_WRAPPER="$HOME/.local/bin/openclaw-doppler" openclaw gateway install --force
openclaw doctor
```

To remove a persisted wrapper, clear `OPENCLAW_WRAPPER` while reinstalling:

```bash
OPENCLAW_WRAPPER= openclaw gateway install --force
openclaw gateway restart
```

<AccordionGroup>
<Accordion title="Command options">
- `gateway status`: `--url`, `--token`, `--password`, `--timeout`, `--no-probe`, `--require-rpc`, `--deep`, `--json`
- `gateway install`: `--port`, `--runtime <node|bun>`, `--token`, `--force`, `--json`
- `gateway install`: `--port`, `--runtime <node|bun>`, `--token`, `--wrapper <path>`, `--force`, `--json`
- `gateway uninstall|start|stop|restart`: `--json`
</Accordion>
<Accordion title="Service install and lifecycle notes">
- `gateway install` supports `--port`, `--runtime`, `--token`, `--force`, `--json`.
<Accordion title="Lifecycle behavior">
- Use `gateway restart` to restart a managed service. Do not chain `gateway stop` and `gateway start` as a restart substitute; on macOS, `gateway stop` intentionally disables the LaunchAgent before stopping it.
- Lifecycle commands accept `--json` for scripting.
</Accordion>
<Accordion title="Auth and SecretRefs at install time">
- When token auth requires a token and `gateway.auth.token` is SecretRef-managed, `gateway install` validates that the SecretRef is resolvable but does not persist the resolved token into service environment metadata.
- If token auth requires a token and the configured token SecretRef is unresolved, install fails closed instead of persisting fallback plaintext.
- For password auth on `gateway run`, prefer `OPENCLAW_GATEWAY_PASSWORD`, `--password-file`, or a SecretRef-backed `gateway.auth.password` over inline `--password`.
- In inferred auth mode, shell-only `OPENCLAW_GATEWAY_PASSWORD` does not relax install token requirements; use durable config (`gateway.auth.password` or config `env`) when installing a managed service.
- If both `gateway.auth.token` and `gateway.auth.password` are configured and `gateway.auth.mode` is unset, install is blocked until mode is set explicitly.
- Lifecycle commands accept `--json` for scripting.
</Accordion>
</AccordionGroup>

@@ -17,7 +17,7 @@ Related:
- Hooks: [Hooks](/automation/hooks)
- Plugin hooks: [Plugin hooks](/plugins/hooks)

## List All Hooks
## List all hooks

```bash
openclaw hooks list
@@ -60,7 +60,7 @@ openclaw hooks list --json

Returns structured JSON for programmatic use.

## Get Hook Information
## Get hook information

```bash
openclaw hooks info <name>
@@ -100,7 +100,7 @@ Requirements:
Config: ✓ workspace.dir
```

## Check Hooks Eligibility
## Check hooks eligibility

```bash
openclaw hooks check
@@ -194,7 +194,7 @@ openclaw hooks disable command-logger
- `openclaw hooks list --json`, `info --json`, and `check --json` write structured JSON directly to stdout.
- Plugin-managed hooks cannot be enabled or disabled here; enable or disable the owning plugin instead.

## Install Hook Packs
## Install hook packs

```bash
openclaw plugins install <package> # ClawHub first, then npm
@@ -248,7 +248,7 @@ openclaw plugins install -l ./my-hook-pack
Linked hook packs are treated as managed hooks from an operator-configured
directory, not as workspace hooks.

## Update Hook Packs
## Update hook packs

```bash
openclaw plugins update <id>
@@ -269,7 +269,7 @@ When a stored integrity hash exists and the fetched artifact hash changes,
OpenClaw prints a warning and asks for confirmation before proceeding. Use
global `--yes` to bypass prompts in CI/non-interactive runs.

## Bundled Hooks
## Bundled hooks

### session-memory

75
docs/cli/migrate.md
Normal file
@@ -0,0 +1,75 @@
---
summary: "CLI reference for importing state from another agent system"
read_when:
- You want to migrate from Hermes or another agent system into OpenClaw
- You are adding a plugin-owned migration provider
title: "Migrate"
---

# `openclaw migrate`

Import state from another agent system through a plugin-owned migration provider.

```bash
openclaw migrate list
openclaw migrate hermes --dry-run
openclaw migrate hermes
openclaw migrate apply hermes --yes
openclaw migrate apply hermes --include-secrets --yes
openclaw onboard --flow import
openclaw onboard --import-from hermes --import-source ~/.hermes
```

## Safety model

`openclaw migrate` is preview-first. The provider returns an itemized plan before anything changes, including conflicts, skipped items, and sensitive items. JSON plans, apply output, and migration reports redact nested secret-looking keys such as API keys, tokens, authorization headers, cookies, and passwords.

`openclaw migrate apply <provider>` previews the plan and prompts before changing state unless `--yes` is set. In non-interactive mode, apply requires `--yes`. With `--json` and no `--yes`, apply prints the JSON plan and does not mutate state.

Apply creates and verifies an OpenClaw backup before applying the migration. If no local OpenClaw state exists yet, the backup step is skipped and the migration can continue. To skip a backup when state exists, pass both `--no-backup` and `--force`.

Apply mode refuses to continue when the plan has conflicts. Review the plan, then rerun with `--overwrite` if replacing existing targets is intentional. Providers may still write item-level backups for overwritten files in the migration report directory.

Secrets are never imported by default. Use `--include-secrets` to import supported credentials.

## Hermes

The bundled Hermes provider detects Hermes state at `~/.hermes` by default. Use `--from <path>` when Hermes lives elsewhere.

The Hermes migration can import:

- default model configuration from `config.yaml`
- configured model providers and custom OpenAI-compatible endpoints from `providers` and `custom_providers`
- MCP server definitions from `mcp_servers` or `mcp.servers`
- `SOUL.md` and `AGENTS.md` into the OpenClaw agent workspace
- `memories/MEMORY.md` and `memories/USER.md` by appending them to workspace memory files
- memory config defaults for OpenClaw file memory, plus archive/manual-review items for external memory providers such as Honcho
- skills with a `SKILL.md` file from `skills/<name>/`
- per-skill config values from `skills.config`
- supported API keys from `.env`, only with `--include-secrets`

Archive-only Hermes state is copied into the migration report for manual review, but it is not loaded into live OpenClaw config or credentials. This preserves opaque or unsafe state such as `plugins/`, `sessions/`, `logs/`, `cron/`, `mcp-tokens/`, `auth.json`, and `state.db` without pretending OpenClaw can execute or trust it automatically.

Supported Hermes `.env` keys include `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `OPENROUTER_API_KEY`, `GOOGLE_API_KEY`, `GEMINI_API_KEY`, `GROQ_API_KEY`, `XAI_API_KEY`, `MISTRAL_API_KEY`, and `DEEPSEEK_API_KEY`.

After applying a migration, run:

```bash
openclaw doctor
```

## Plugin contract

Migration sources are plugins. A plugin declares its provider ids in `openclaw.plugin.json`:

```json
{
  "contracts": {
    "migrationProviders": ["hermes"]
  }
}
```

At runtime the plugin calls `api.registerMigrationProvider(...)`. The provider implements `detect`, `plan`, and `apply`; core owns CLI orchestration, backup policy, prompts, JSON output, and conflict preflight. Core passes the reviewed plan into `apply(ctx, plan)`, and providers may rebuild the plan only when that argument is absent for compatibility. Provider plugins can use `openclaw/plugin-sdk/migration` for item construction and summary counts, plus `openclaw/plugin-sdk/migration-runtime` for conflict-aware file copies, archive-only report copies, and migration reports.
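A minimal sketch of that `detect`/`plan`/`apply` surface, with a local stand-in for `api.registerMigrationProvider(...)`. All type names here are illustrative assumptions, not the real `openclaw/plugin-sdk` types:

```typescript
// Illustrative shapes only; the real SDK types live in openclaw/plugin-sdk/migration.
type MigrationPlan = { providerId: string; items: { kind: string; source: string }[] };
type ApplyCtx = { sourceHome: string };

const hermesProvider = {
  id: "hermes",
  detect: (ctx: ApplyCtx) => ctx.sourceHome.endsWith(".hermes"),
  plan: (ctx: ApplyCtx): MigrationPlan => ({
    providerId: "hermes",
    items: [{ kind: "config", source: `${ctx.sourceHome}/config.yaml` }],
  }),
  // Core passes the reviewed plan back in; rebuild it only when absent.
  apply: (ctx: ApplyCtx, plan?: MigrationPlan): MigrationPlan =>
    plan ?? hermesProvider.plan(ctx),
};

// Stand-in for api.registerMigrationProvider(...) during plugin activation.
const registry = new Map<string, typeof hermesProvider>();
registry.set(hermesProvider.id, hermesProvider);

const ctx = { sourceHome: "/home/user/.hermes" };
console.log(registry.get("hermes")!.apply(ctx).items[0].kind); // → config
```

The point of the optional `plan` argument is that core previews and possibly edits the plan before `apply` runs, so the provider must not silently regenerate it when one is supplied.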

Onboarding can also offer migration when a provider detects a known source. `openclaw onboard --flow import` and `openclaw setup --wizard --import-from hermes` use the same plugin migration provider and still show a preview before applying. Onboarding imports require a fresh OpenClaw setup; reset config, credentials, sessions, and the workspace first if you already have local state. Backup plus overwrite or merge imports are feature-gated for existing setups.
@@ -67,7 +67,7 @@ Notes:
stale removed-provider default.
- `models status` may show `marker(<value>)` in auth output for non-secret placeholders (for example `OPENAI_API_KEY`, `secretref-managed`, `minimax-oauth`, `oauth:chutes`, `ollama-local`) instead of masking them as secrets.

### `models scan`
### Models scan

`models scan` reads OpenRouter's public `:free` catalog and ranks candidates for
fallback use. The catalog itself is public, so metadata-only scans do not need
@@ -96,7 +96,7 @@ Options:
`--set-default` and `--set-image` require live probes; metadata-only scan
results are informational and are not applied to config.

### `models status`
### Models status

Options:

@@ -11,11 +11,23 @@ Interactive onboarding for local or remote Gateway setup.

## Related guides

- CLI onboarding hub: [Onboarding (CLI)](/start/wizard)
- Onboarding overview: [Onboarding Overview](/start/onboarding-overview)
- CLI onboarding reference: [CLI Setup Reference](/start/wizard-cli-reference)
- CLI automation: [CLI Automation](/start/wizard-cli-automation)
- macOS onboarding: [Onboarding (macOS App)](/start/onboarding)
<CardGroup cols={2}>
<Card title="CLI onboarding hub" href="/start/wizard" icon="rocket">
Walkthrough of the interactive CLI flow.
</Card>
<Card title="Onboarding overview" href="/start/onboarding-overview" icon="map">
How OpenClaw onboarding fits together.
</Card>
<Card title="CLI setup reference" href="/start/wizard-cli-reference" icon="book">
Outputs, internals, and per-step behavior.
</Card>
<Card title="CLI automation" href="/start/wizard-cli-automation" icon="terminal">
Non-interactive flags and scripted setups.
</Card>
<Card title="macOS app onboarding" href="/start/onboarding" icon="apple">
Onboarding flow for the macOS menu bar app.
</Card>
</CardGroup>

## Examples

@@ -24,10 +36,14 @@ openclaw onboard
openclaw onboard --modern
openclaw onboard --flow quickstart
openclaw onboard --flow manual
openclaw onboard --flow import
openclaw onboard --import-from hermes --import-source ~/.hermes
openclaw onboard --skip-bootstrap
openclaw onboard --mode remote --remote-url wss://gateway-host:18789
```

`--flow import` uses plugin-owned migration providers such as Hermes. It only runs against a fresh OpenClaw setup; if existing config, credentials, sessions, or workspace memory/identity files are present, reset or choose a fresh setup before importing.

`--modern` starts the Crestodian conversational onboarding preview. Without
`--modern`, `openclaw onboard` keeps the classic onboarding flow.

@@ -132,10 +148,11 @@ Interactive onboarding behavior with reference mode:
- Onboarding performs a fast preflight validation before saving the ref.
- If validation fails, onboarding shows the error and lets you retry.

Non-interactive Z.AI endpoint choices:
### Non-interactive Z.AI endpoint choices

Note: `--auth-choice zai-api-key` now auto-detects the best Z.AI endpoint for your key (prefers the general API with `zai/glm-5.1`).
If you specifically want the GLM Coding Plan endpoints, pick `zai-coding-global` or `zai-coding-cn`.
<Note>
`--auth-choice zai-api-key` auto-detects the best Z.AI endpoint for your key (prefers the general API with `zai/glm-5.1`). If you specifically want the GLM Coding Plan endpoints, pick `zai-coding-global` or `zai-coding-cn`.
</Note>

```bash
# Promptless endpoint selection
@@ -157,26 +174,34 @@ openclaw onboard --non-interactive \
--mistral-api-key "$MISTRAL_API_KEY"
```

Flow notes:
## Flow notes

- `quickstart`: minimal prompts, auto-generates a gateway token.
- `manual`: full prompts for port/bind/auth (alias of `advanced`).
- When an auth choice implies a preferred provider, onboarding prefilters the
default-model and allowlist pickers to that provider. For Volcengine and
BytePlus, this also matches the coding-plan variants
(`volcengine-plan/*`, `byteplus-plan/*`).
- If the preferred-provider filter yields no loaded models yet, onboarding
falls back to the unfiltered catalog instead of leaving the picker empty.
- In the web-search step, some providers can trigger provider-specific
follow-up prompts:
- **Grok** can offer optional `x_search` setup with the same `XAI_API_KEY`
and an `x_search` model choice.
- **Kimi** can ask for the Moonshot API region (`api.moonshot.ai` vs
`api.moonshot.cn`) and the default Kimi web-search model.
- Local onboarding DM scope behavior: [CLI Setup Reference](/start/wizard-cli-reference#outputs-and-internals).
- Fastest first chat: `openclaw dashboard` (Control UI, no channel setup).
- Custom Provider: connect any OpenAI or Anthropic compatible endpoint,
including hosted providers not listed. Use Unknown to auto-detect.
<AccordionGroup>
<Accordion title="Flow types">
- `quickstart`: minimal prompts, auto-generates a gateway token.
- `manual`: full prompts for port, bind, and auth (alias of `advanced`).
- `import`: runs a detected migration provider, previews the plan, then applies after confirmation.
</Accordion>
<Accordion title="Provider prefiltering">
When an auth choice implies a preferred provider, onboarding prefilters the default-model and allowlist pickers to that provider. For Volcengine and BytePlus, this also matches the coding-plan variants (`volcengine-plan/*`, `byteplus-plan/*`).

If the preferred-provider filter yields no loaded models yet, onboarding falls back to the unfiltered catalog instead of leaving the picker empty.

</Accordion>
<Accordion title="Web-search follow-ups">
Some web-search providers trigger provider-specific follow-up prompts:

- **Grok** can offer optional `x_search` setup with the same `XAI_API_KEY` and an `x_search` model choice.
- **Kimi** can ask for the Moonshot API region (`api.moonshot.ai` vs `api.moonshot.cn`) and the default Kimi web-search model.

</Accordion>
<Accordion title="Other behaviors">
- Local onboarding DM scope behavior: [CLI setup reference](/start/wizard-cli-reference#outputs-and-internals).
- Fastest first chat: `openclaw dashboard` (Control UI, no channel setup).
- Custom provider: connect any OpenAI or Anthropic compatible endpoint, including hosted providers not listed. Use Unknown to auto-detect.
- If Hermes state is detected, onboarding offers a migration flow. Use [Migrate](/cli/migrate) for dry-run plans, overwrite mode, reports, and exact mappings.
</Accordion>
</AccordionGroup>

## Common follow-up commands

@@ -75,9 +75,11 @@ openclaw sandbox recreate --all --force # Skip confirmation
- `--browser`: Only recreate browser containers
- `--force`: Skip confirmation prompt

**Important:** Runtimes are automatically recreated when the agent is next used.
<Note>
Runtimes are automatically recreated when the agent is next used.
</Note>

## Use Cases
## Use cases

### After updating a Docker image

@@ -148,18 +150,19 @@ openclaw sandbox recreate --agent family
openclaw sandbox recreate --agent alfred
```

## Why is this needed?
## Why this is needed

**Problem:** When you update sandbox configuration:
When you update sandbox configuration:

- Existing runtimes continue running with old settings
- Runtimes are only pruned after 24h of inactivity
- Regularly-used agents keep old runtimes alive indefinitely
- Existing runtimes continue running with old settings.
- Runtimes are only pruned after 24h of inactivity.
- Regularly-used agents keep old runtimes alive indefinitely.

**Solution:** Use `openclaw sandbox recreate` to force removal of old runtimes. They'll be recreated automatically with current settings when next needed.
Use `openclaw sandbox recreate` to force removal of old runtimes. They are recreated automatically with current settings when next needed.

Tip: prefer `openclaw sandbox recreate` over manual backend-specific cleanup.
It uses the Gateway’s runtime registry and avoids mismatches when scope/session keys change.
<Tip>
Prefer `openclaw sandbox recreate` over manual backend-specific cleanup. It uses the Gateway's runtime registry and avoids mismatches when scope or session keys change.
</Tip>

## Configuration

@@ -193,4 +196,4 @@ Sandbox settings live in `~/.openclaw/openclaw.json` under `agents.defaults.sand
- [CLI reference](/cli)
- [Sandboxing](/gateway/sandboxing)
- [Agent workspace](/concepts/agent-workspace)
- [Doctor](/gateway/doctor) — checks sandbox setup
- [Doctor](/gateway/doctor): checks sandbox setup.

@@ -21,6 +21,7 @@ Related:
openclaw setup
openclaw setup --workspace ~/.openclaw/workspace
openclaw setup --wizard
openclaw setup --wizard --import-from hermes --import-source ~/.hermes
openclaw setup --non-interactive --mode remote --remote-url wss://gateway-host:18789 --remote-token <token>
```

@@ -30,6 +31,9 @@ openclaw setup --non-interactive --mode remote --remote-url wss://gateway-host:1
- `--wizard`: run onboarding
- `--non-interactive`: run onboarding without prompts
- `--mode <local|remote>`: onboarding mode
- `--import-from <provider>`: migration provider to run during onboarding
- `--import-source <path>`: source agent home for `--import-from`
- `--import-secrets`: import supported secrets during onboarding migration
- `--remote-url <url>`: remote Gateway WebSocket URL
- `--remote-token <token>`: remote Gateway token

@@ -42,7 +46,8 @@ openclaw setup --wizard
Notes:

- Plain `openclaw setup` initializes config + workspace without the full onboarding flow.
- Onboarding auto-runs when any onboarding flags are present (`--wizard`, `--non-interactive`, `--mode`, `--remote-url`, `--remote-token`).
- Onboarding auto-runs when any onboarding flags are present (`--wizard`, `--non-interactive`, `--mode`, `--import-from`, `--import-source`, `--import-secrets`, `--remote-url`, `--remote-token`).
- If Hermes state is detected, interactive onboarding can offer migration automatically. Import onboarding requires a fresh setup; use [Migrate](/cli/migrate) for dry-run plans, backups, and overwrite mode outside onboarding.

## Related

@@ -40,9 +40,11 @@ openclaw --update
`postUpdate.plugins.integrityDrifts` when npm plugin artifact drift is
detected during post-update plugin sync.
- `--timeout <seconds>`: per-step timeout (default is 1800s).
- `--yes`: skip confirmation prompts (for example downgrade confirmation)
- `--yes`: skip confirmation prompts (for example downgrade confirmation).

Note: downgrades require confirmation because older versions can break configuration.
<Warning>
Downgrades require confirmation because older versions can break configuration.
</Warning>

## `update status`

@@ -85,41 +87,62 @@ The Gateway core auto-updater (when enabled via config) reuses this same update
For package-manager installs, `openclaw update` resolves the target package
version before invoking the package manager. Even when the installed version
already matches the target, the command refreshes the global package install,
then runs plugin sync, completion refresh, and restart work. This keeps packaged
sidecars and channel-owned plugin records aligned with the installed OpenClaw
build.
then runs plugin sync, a core-command completion refresh, and restart work. This
keeps packaged sidecars and channel-owned plugin records aligned with the
installed OpenClaw build while leaving full plugin-command completion rebuilds to
explicit `openclaw completion --write-state` runs.

## Git checkout flow

Channels:
### Channel selection

- `stable`: checkout the latest non-beta tag, then build + doctor.
- `beta`: prefer the latest `-beta` tag, but fall back to the latest stable tag
when beta is missing or older.
- `dev`: checkout `main`, then fetch + rebase.
- `stable`: checkout the latest non-beta tag, then build and doctor.
- `beta`: prefer the latest `-beta` tag, but fall back to the latest stable tag when beta is missing or older.
- `dev`: checkout `main`, then fetch and rebase.

High-level:
### Update steps

1. Requires a clean worktree (no uncommitted changes).
2. Switches to the selected channel (tag or branch).
3. Fetches upstream (dev only).
4. Dev only: preflight lint + TypeScript build in a temp worktree; if the tip fails, walks back up to 10 commits to find the newest clean build.
5. Rebases onto the selected commit (dev only).
6. Installs deps with the repo package manager. For pnpm checkouts, the updater bootstraps `pnpm` on demand (via `corepack` first, then a temporary `npm install pnpm@10` fallback) instead of running `npm run build` inside a pnpm workspace.
7. Builds + builds the Control UI.
8. Runs `openclaw doctor` as the final “safe update” check.
9. Syncs plugins to the active channel (dev uses bundled plugins; stable/beta uses npm) and updates npm-installed plugins.
<Steps>
<Step title="Verify clean worktree">
Requires no uncommitted changes.
</Step>
<Step title="Switch channel">
Switches to the selected channel (tag or branch).
</Step>
<Step title="Fetch upstream">
Dev only.
</Step>
<Step title="Preflight build (dev only)">
Runs lint and TypeScript build in a temp worktree. If the tip fails, walks back up to 10 commits to find the newest clean build.
</Step>
<Step title="Rebase">
Rebases onto the selected commit (dev only).
</Step>
<Step title="Install dependencies">
Uses the repo package manager. For pnpm checkouts, the updater bootstraps `pnpm` on demand (via `corepack` first, then a temporary `npm install pnpm@10` fallback) instead of running `npm run build` inside a pnpm workspace.
</Step>
<Step title="Build Control UI">
Builds the gateway and the Control UI.
</Step>
<Step title="Run doctor">
`openclaw doctor` runs as the final safe-update check.
</Step>
<Step title="Sync plugins">
Syncs plugins to the active channel. Dev uses bundled plugins; stable and beta use npm. Updates npm-installed plugins.
</Step>
</Steps>

If an exact pinned npm plugin update resolves to an artifact whose integrity
differs from the stored install record, `openclaw update` aborts that plugin
artifact update instead of installing it. Reinstall or update the plugin
explicitly only after verifying that you trust the new artifact.
<Warning>
If an exact pinned npm plugin update resolves to an artifact whose integrity differs from the stored install record, `openclaw update` aborts that plugin artifact update instead of installing it. Reinstall or update the plugin explicitly only after verifying that you trust the new artifact.
</Warning>

Post-update plugin sync failures fail the update result and stop restart
follow-up work. Fix the plugin install/update error, then rerun
`openclaw update`.
<Note>
Post-update plugin sync failures fail the update result and stop restart follow-up work. Fix the plugin install or update error, then rerun `openclaw update`.

If pnpm bootstrap still fails, the updater now stops early with a package-manager-specific error instead of trying `npm run build` inside the checkout.
When the updated Gateway starts, enabled bundled plugin runtime dependencies are staged before plugin activation. Update-triggered restarts drain any active runtime-dependency staging before closing the Gateway, so service-manager restarts do not interrupt an in-flight npm install.

If pnpm bootstrap still fails, the updater stops early with a package-manager-specific error instead of trying `npm run build` inside the checkout.
</Note>

## `--update` shorthand

@@ -163,6 +163,7 @@ surfaces, while Codex native hooks remain a separate lower-level Codex mechanism
- `agent.wait` default: 30s (just the wait). `timeoutMs` param overrides.
- Agent runtime: `agents.defaults.timeoutSeconds` default 172800s (48 hours); enforced in `runEmbeddedPiAgent` abort timer.
- LLM idle timeout: `agents.defaults.llm.idleTimeoutSeconds` aborts a model request when no response chunks arrive before the idle window. Set it explicitly for slow local models or reasoning/tool-call providers; set it to 0 to disable. If it is not set, OpenClaw uses `agents.defaults.timeoutSeconds` when configured, otherwise 120s. Cron-triggered runs with no explicit LLM or agent timeout disable the idle watchdog and rely on the cron outer timeout.
- Provider HTTP request timeout: `models.providers.<id>.timeoutSeconds` applies only to that provider's model HTTP fetches, including connect, headers, body, and total guarded-fetch abort handling. Use this for slow local/self-hosted providers such as Ollama before raising the whole agent runtime timeout.
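Taken together, those timeout knobs might look like this in config (a sketch only; the values and the `ollama` provider id are illustrative, not defaults):

```json
{
  "agents": {
    "defaults": {
      "timeoutSeconds": 172800,
      "llm": { "idleTimeoutSeconds": 300 }
    }
  },
  "models": {
    "providers": {
      "ollama": { "timeoutSeconds": 600 }
    }
  }
}
```

Here the per-provider HTTP timeout is raised for a slow local provider without touching the much larger agent runtime timeout.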
|
||||
|
||||
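These timeout layers can be set independently in `openclaw.json`. A minimal sketch with illustrative values (not defaults; the `ollama` provider id is just an example):

```json5
{
  agents: {
    defaults: {
      timeoutSeconds: 172800, // agent runtime cap
      llm: {
        idleTimeoutSeconds: 300, // abort when no chunks arrive for 5 minutes
      },
    },
  },
  models: {
    providers: {
      ollama: {
        timeoutSeconds: 600, // slow local provider HTTP requests
      },
    },
  },
}
```
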
## Where things can end early
@@ -6,9 +6,7 @@ read_when:
title: "Compaction"
---

Every model has a context window: the maximum number of tokens it can process. When a conversation approaches that limit, OpenClaw **compacts** older messages into a summary so the chat can continue.

## How it works

@@ -16,33 +14,54 @@ into a summary so the chat can continue.
2. The summary is saved in the session transcript.
3. Recent messages are kept intact.

When OpenClaw splits history into compaction chunks, it keeps assistant tool calls paired with their matching `toolResult` entries. If a split point lands inside a tool block, OpenClaw moves the boundary so the pair stays together and the current unsummarized tail is preserved.

The full conversation history stays on disk. Compaction only changes what the model sees on the next turn.

## Auto-compaction

Auto-compaction is on by default. It runs when the session nears the context limit, or when the model returns a context-overflow error (in which case OpenClaw compacts and retries).

You will see:

- `🧹 Auto-compaction complete` in verbose mode.
- `/status` showing `🧹 Compactions: <count>`.

<Info>
Before compacting, OpenClaw automatically reminds the agent to save important notes to [memory](/concepts/memory) files. This prevents context loss.
</Info>

<AccordionGroup>
<Accordion title="Recognized overflow signatures">
OpenClaw detects context overflow from these provider error patterns:

- `request_too_large`
- `context length exceeded`
- `input exceeds the maximum number of tokens`
- `input token count exceeds the maximum number of input tokens`
- `input is too long for the model`
- `ollama error: context length exceeded`
</Accordion>
</AccordionGroup>

## Manual compaction

Type `/compact` in any chat to force a compaction. Add instructions to guide the summary:

```
/compact Focus on the API design decisions
```

When `agents.defaults.compaction.keepRecentTokens` is set, manual compaction honors that Pi cut-point and keeps the recent tail in rebuilt context. Without an explicit keep budget, manual compaction behaves as a hard checkpoint and continues from the new summary alone.

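A sketch of an explicit keep budget in `openclaw.json` (the token value is illustrative, not a default):

```json5
{
  agents: {
    defaults: {
      compaction: {
        keepRecentTokens: 4000, // keep roughly this many recent tokens intact
      },
    },
  },
}
```
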
## Configuration

Configure compaction under `agents.defaults.compaction` in your `openclaw.json`. The most common knobs are listed below; for the full reference, see [Session management deep dive](/reference/session-management-compaction).

### Using a different model

By default, compaction uses the agent's primary model. Set `agents.defaults.compaction.model` to delegate summarization to a more capable or specialized model. The override accepts any `provider/model-id` string:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "model": "openrouter/anthropic/claude-sonnet-4-6"
      }
    }
  }
}
```

This works with local models too, for example a second Ollama model dedicated to summarization:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "model": "ollama/<model-id>"
      }
    }
  }
}
```

When unset, compaction uses the agent's primary model.

### Identifier preservation

Compaction summarization preserves opaque identifiers by default (`identifierPolicy: "strict"`). Override with `identifierPolicy: "off"` to disable, or `identifierPolicy: "custom"` plus `identifierInstructions` for custom guidance.

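For example, a custom policy might look like this (the instruction text is illustrative):

```json5
{
  agents: {
    defaults: {
      compaction: {
        identifierPolicy: "custom",
        identifierInstructions: "Preserve ticket IDs like PROJ-1234 verbatim.",
      },
    },
  },
}
```
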
### Active transcript byte guard

When `agents.defaults.compaction.maxActiveTranscriptBytes` is set, OpenClaw triggers normal local compaction before a run if the active JSONL reaches that size. This is useful for long-running sessions where provider-side context management may keep model context healthy while the local transcript keeps growing. It does not split raw JSONL bytes; it asks the normal compaction pipeline to create a semantic summary.

<Warning>
The byte guard requires `truncateAfterCompaction: true`. Without transcript rotation, the active file would not shrink and the guard remains inactive.
</Warning>

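A sketch combining both settings (the byte threshold is illustrative, not a default):

```json5
{
  agents: {
    defaults: {
      compaction: {
        truncateAfterCompaction: true, // required for the byte guard
        maxActiveTranscriptBytes: 5242880, // compact when the active JSONL reaches ~5 MB
      },
    },
  },
}
```
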
### Successor transcripts

When `agents.defaults.compaction.truncateAfterCompaction` is enabled, OpenClaw does not rewrite the existing transcript in place. It creates a new active successor transcript from the compaction summary, preserved state, and unsummarized tail, then keeps the previous JSONL as the archived checkpoint source.

### Compaction notices

By default, compaction runs silently. Set `notifyUser` to show brief status messages when compaction starts and completes:

```json5
{
  agents: {
    defaults: {
      compaction: {
        notifyUser: true,
      },
    },
  },
}
```

When enabled, the user sees short status messages around each compaction run (for example, "Compacting context..." and "Compaction complete").

### Memory flush

Before compaction, OpenClaw can run a **silent memory flush** turn to store durable notes to disk. See [Memory](/concepts/memory) for details and config.

## Pluggable compaction providers

Plugins can register a custom compaction provider via `registerCompactionProvider()` on the plugin API. When a provider is registered and configured, OpenClaw delegates summarization to it instead of the built-in LLM pipeline.

To use a registered provider, set its id in your config:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "provider": "my-provider"
      }
    }
  }
}
```

Setting a `provider` automatically forces `mode: "safeguard"`. Providers receive the same compaction instructions and identifier-preservation policy as the built-in path, and OpenClaw still preserves recent-turn and split-turn suffix context after provider output.

<Note>
If the provider fails or returns an empty result, OpenClaw falls back to built-in LLM summarization.
</Note>

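The exact provider contract is not documented here; as a rough, hypothetical sketch of the shape a plugin might register (the type names and fields are assumptions, not the actual OpenClaw plugin API):

```typescript
// Hypothetical shapes -- the real OpenClaw plugin API may differ.
type CompactionRequest = {
  messages: { role: string; content: string }[];
  instructions?: string;
};

type CompactionResult = { summary: string };

// A trivial extractive provider: keep the first sentence of each
// message as the "summary" instead of calling an LLM.
const myProvider = {
  id: "my-provider",
  async compact(req: CompactionRequest): Promise<CompactionResult> {
    const lines = req.messages.map(
      (m) => `${m.role}: ${m.content.split(". ")[0]}`,
    );
    return { summary: lines.join("\n") };
  },
};

// In a real plugin, this object would be passed to
// registerCompactionProvider(myProvider) during plugin setup.
```
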
## Compaction vs pruning
@@ -163,28 +159,21 @@ When enabled, the user sees short status messages around each compaction run
| **Saved?** | Yes (in session transcript) | No (in-memory only, per request) |
| **Scope** | Entire conversation | Tool results only |

[Session pruning](/concepts/session-pruning) is a lighter-weight complement that trims tool output without summarizing.

## Troubleshooting

**Compacting too often?** The model's context window may be small, or tool outputs may be large. Try enabling [session pruning](/concepts/session-pruning).

**Context feels stale after compaction?** Use `/compact Focus on <topic>` to guide the summary, or enable the [memory flush](/concepts/memory) so notes survive.

**Need a clean slate?** `/new` starts a fresh session without compacting.

For advanced configuration (reserve tokens, identifier preservation, custom context engines, OpenAI server-side compaction), see the [Session management deep dive](/reference/session-management-compaction).

## Related

- [Session](/concepts/session): session management and lifecycle.
- [Session pruning](/concepts/session-pruning): trimming tool results.
- [Context](/concepts/context): how context is built for agent turns.
- [Hooks](/automation/hooks): compaction lifecycle hooks (`before_compaction`, `after_compaction`).

@@ -194,6 +194,10 @@ Required members:
Prepended to the system prompt.
</ParamField>

`compact` returns a `CompactResult`. When compaction rotates the active transcript, `result.sessionId` and `result.sessionFile` identify the successor session that the next retry or turn must use.

Optional members:

| Member | Kind | Purpose |

@@ -70,11 +70,15 @@ The delegate operates **autonomously** on a schedule, executing standing orders
This tier combines Tier 2 permissions with [Cron Jobs](/automation/cron-jobs) and [Standing Orders](/automation/standing-orders).

<Warning>
Tier 3 requires careful configuration of hard blocks: actions the agent must never take regardless of instruction. Complete the prerequisites below before granting any identity provider permissions.
</Warning>

## Prerequisites: isolation and hardening

<Note>
**Do this first.** Before you grant any credentials or identity provider access, lock down the delegate's boundaries. The steps in this section define what the agent **cannot** do. Establish these constraints before giving it the ability to do anything.
</Note>

### Hard blocks (non-negotiable)
@@ -180,7 +184,9 @@ New-ApplicationAccessPolicy `
  -AccessRight RestrictAccess
```

<Warning>
Without an application access policy, `Mail.Read` application permission grants access to **every mailbox in the tenant**. Always create the access policy before the application reads any mail. Test by confirming the app returns `403` for mailboxes outside the security group.
</Warning>

#### Google Workspace
@@ -196,7 +202,9 @@ https://www.googleapis.com/auth/calendar # Tier 2
The service account impersonates the delegate user (not the principal), preserving the "on behalf of" model.

<Warning>
Domain-wide delegation allows the service account to impersonate **any user in the entire domain**. Restrict the scopes to the minimum required, and limit the service account's client ID to only the scopes listed above in the Admin Console (Security > API controls > Domain-wide delegation). A leaked service account key with broad scopes grants full access to every mailbox and calendar in the organization. Rotate keys on a schedule and monitor the Admin Console audit log for unexpected impersonation events.
</Warning>

### 3. Bind the delegate to channels
@@ -7,18 +7,18 @@ read_when:
---

OpenClaw remembers things by writing **plain Markdown files** in your agent's workspace. The model only "remembers" what gets saved to disk — there is no hidden state.

## How it works

Your agent has three memory-related files:

- **`MEMORY.md`** — long-term memory. Durable facts, preferences, and decisions. Loaded at the start of every DM session.
- **`memory/YYYY-MM-DD.md`** — daily notes. Running context and observations. Today and yesterday's notes are loaded automatically.
- **`DREAMS.md`** (optional) — Dream Diary and dreaming sweep summaries for human review, including grounded historical backfill entries.

These files live in the agent workspace (default `~/.openclaw/workspace`).

@@ -32,9 +32,9 @@ prefer TypeScript." It will write it to the appropriate file.
The agent has two tools for working with memory:

- **`memory_search`** — finds relevant notes using semantic search, even when the wording differs from the original.
- **`memory_get`** — reads a specific memory file or line range.

Both tools are provided by the active memory plugin (default: `memory-core`).

@@ -61,7 +61,7 @@ See [Memory Wiki](/plugins/memory-wiki).
## Memory search

When an embedding provider is configured, `memory_search` uses **hybrid search** — combining vector similarity (semantic meaning) with keyword matching (exact terms like IDs and code symbols). This works out of the box once you have an API key for any supported provider.

@@ -104,7 +104,7 @@ dashboards, bridge mode, and Obsidian-friendly workflows.
Before [compaction](/concepts/compaction) summarizes your conversation, OpenClaw runs a silent turn that reminds the agent to save important context to memory files. This is on by default — you do not need to configure anything.

<Tip>
The memory flush prevents context loss during compaction. If your agent has
@@ -176,16 +176,14 @@ openclaw memory index --force # Rebuild the index
## Further reading

- [Builtin memory engine](/concepts/memory-builtin): default SQLite backend.
- [QMD memory engine](/concepts/memory-qmd): advanced local-first sidecar.
- [Honcho memory](/concepts/memory-honcho): AI-native cross-session memory.
- [Memory Wiki](/plugins/memory-wiki): compiled knowledge vault and wiki-native tools.
- [Memory search](/concepts/memory-search): search pipeline, providers, and tuning.
- [Dreaming](/concepts/dreaming): background promotion from short-term recall to long-term memory.
- [Memory configuration reference](/reference/memory-config): all config knobs.
- [Compaction](/concepts/compaction): how compaction interacts with memory.

## Related

@@ -7,8 +7,7 @@ read_when:
title: "Messages"
---

OpenClaw handles inbound messages through a pipeline of session resolution, queueing, streaming, tool execution, and reasoning visibility. This page maps the path from inbound message to reply.

## Message flow (high level)

@@ -265,6 +265,7 @@ That means fallback retries have to coordinate with live model switching:
- System-driven model changes such as fallback rotation, heartbeat overrides, or compaction never mark a pending live switch on their own.
- Before a fallback retry starts, the reply runner persists the selected fallback override fields to the session entry.
- Live-session reconciliation prefers persisted session overrides over stale runtime model fields.
- If a live-switch error points at a later candidate in the active fallback chain, OpenClaw jumps directly to that selected model instead of walking unrelated candidates first.
- If the fallback attempt fails, the runner rolls back only the override fields it wrote, and only if they still match that failed candidate.

This prevents the classic race:

@@ -16,7 +16,7 @@ Reference for **LLM/model providers** (not chat channels like WhatsApp/Telegram)
- Model refs use `provider/model` (example: `opencode/claude-opus-4-6`).
- `agents.defaults.models` acts as an allowlist when set.
- CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
- `models.providers.*.contextWindow` / `contextTokens` / `maxTokens` set provider-level defaults; `models.providers.*.models[].contextWindow` / `contextTokens` / `maxTokens` override them per model.
- Fallback rules, cooldown probes, and session-override persistence: [Model failover](/concepts/model-failover).
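A sketch of the provider-level default and per-model override (the provider id and token counts are illustrative):

```json5
{
  models: {
    providers: {
      ollama: {
        contextTokens: 16384, // provider-level default
        models: [
          {
            id: "my-local-model",
            contextTokens: 32768, // per-model override wins
          },
        ],
      },
    },
  },
}
```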
</Accordion>
<Accordion title="OpenAI provider/runtime split">
@@ -367,7 +367,7 @@ Kimi K2 model IDs:
}
```

### Kimi coding

Kimi Coding uses Moonshot AI's Anthropic-compatible endpoint:

@@ -625,6 +625,7 @@ Example (OpenAI‑compatible):
baseUrl: "http://localhost:1234/v1",
apiKey: "${LM_API_TOKEN}",
api: "openai-completions",
timeoutSeconds: 300,
models: [
  {
    id: "my-local-model",
@@ -660,6 +661,7 @@ Example (OpenAI‑compatible):
- Proxy-style OpenAI-compatible routes also skip native OpenAI-only request shaping: no `service_tier`, no Responses `store`, no Completions `store`, no prompt-cache hints, no OpenAI reasoning-compat payload shaping, and no hidden OpenClaw attribution headers.
- For OpenAI-compatible Completions proxies that need vendor-specific fields, set `agents.defaults.models["provider/model"].params.extra_body` (or `extraBody`) to merge extra JSON into the outbound request body.
- For vLLM chat-template controls, set `agents.defaults.models["provider/model"].params.chat_template_kwargs`. OpenClaw automatically sends `enable_thinking: false` and `force_nonempty_content: true` for `vllm/nemotron-3-*` when the session thinking level is off.
- For slow local models or remote LAN/tailnet hosts, set `models.providers.<id>.timeoutSeconds`. This extends provider model HTTP request handling, including connect, headers, body streaming, and the total guarded-fetch abort, without increasing the whole agent runtime timeout.
- If `baseUrl` is empty/omitted, OpenClaw keeps the default OpenAI behavior (which resolves to `api.openai.com`).
- For safety, an explicit `compat.supportsDeveloperRole: true` is still overridden on non-native `openai-completions` endpoints.
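A sketch of the `extra_body` merge (the model ref and the vendor field are hypothetical placeholders, not real keys of any particular proxy):

```json5
{
  agents: {
    defaults: {
      models: {
        "myproxy/my-local-model": {
          params: {
            extra_body: { top_k: 40 }, // merged into the outbound request body
          },
        },
      },
    },
  },
}
```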
</Accordion>
@@ -65,10 +65,15 @@ model calls must not export `StreamAbandoned` on successful turns; raw diagnosti
`openclaw.content.*` attributes must stay out of the trace. It writes `otel-smoke-summary.json` next to the QA suite artifacts.

Observability QA stays source-checkout only. The npm tarball intentionally omits QA Lab, so package Docker release lanes do not run `qa` commands. Use `pnpm qa:otel:smoke` from a built source checkout when changing diagnostics instrumentation.

For a transport-real Matrix smoke lane, run:

```bash
pnpm openclaw qa matrix
pnpm openclaw qa matrix --profile fast --fail-fast
```

That lane provisions a disposable Tuwunel homeserver in Docker, registers
@@ -79,9 +84,15 @@ the child config scoped to the transport under test, so Matrix runs without
a combined stdout/stderr log into the selected Matrix QA output directory. To capture the outer `scripts/run-node.mjs` build/launcher output too, set `OPENCLAW_RUN_NODE_OUTPUT_LOG=<path>` to a repo-local log file.

Matrix progress is printed by default. The CLI default profile is `all`, so plain `pnpm openclaw qa matrix` still runs the full catalog. Use `--profile fast` for the release-critical transport contract, or shard full coverage with `transport`, `media`, `e2ee-smoke`, `e2ee-deep`, and `e2ee-cli`. `--fail-fast` stops after the first failed scenario when you want a release gate instead of a full inventory. `OPENCLAW_QA_MATRIX_TIMEOUT_MS` bounds the full run, `OPENCLAW_QA_MATRIX_NO_REPLY_WINDOW_MS` can shorten no-reply quiet windows for CI, and `OPENCLAW_QA_MATRIX_CLEANUP_TIMEOUT_MS` bounds cleanup so a stuck Docker teardown reports the exact recovery command instead of hanging.

For a transport-real Telegram smoke lane, run:

@@ -152,6 +152,7 @@ Legacy key migration:
Telegram:

- Uses `sendMessage` + `editMessageText` preview updates across DMs and group/topics.
- Sends a fresh final message instead of editing in place when a preview has been visible for about one minute, then cleans up the preview so Telegram's timestamp reflects reply completion.
- Preview streaming is skipped when Telegram block streaming is explicitly enabled (to avoid double-streaming).
- `/reasoning stream` can write reasoning to preview.

@@ -116,12 +116,9 @@ heartbeats are disabled for the default agent or
files concise — especially `MEMORY.md`, which can grow over time and lead to unexpectedly high context usage and more frequent compaction.

<Note>
`memory/*.md` daily files are **not** part of the normal bootstrap Project Context. On ordinary turns they are accessed on demand via the `memory_search` and `memory_get` tools, so they do not count against the context window unless the model explicitly reads them. Bare `/new` and `/reset` turns are the exception: the runtime can prepend recent daily memory as a one-shot startup-context block for that first turn.
</Note>

Large files are truncated with a marker. The max per-file size is controlled by `agents.defaults.bootstrapMaxChars` (default: 12000). Total injected bootstrap

@@ -1545,6 +1545,7 @@
"cli/gateway",
"cli/health",
"cli/logs",
"cli/migrate",
"cli/onboard",
"cli/reset",
"cli/secrets",

@@ -7,7 +7,7 @@ title: "Authentication"
---

<Note>
This page is the **model provider** authentication reference (API keys, OAuth, Claude CLI reuse, and Anthropic setup-token). For **gateway connection** authentication (token, password, trusted-proxy), see [Configuration](/gateway/configuration) and [Trusted Proxy Auth](/gateway/trusted-proxy-auth).
</Note>

OpenClaw supports OAuth and API keys for model providers. For always-on gateway

@@ -179,11 +179,10 @@ openclaw plugins disable bonjour
## Docker gotchas

The bundled Bonjour plugin auto-disables LAN multicast advertising in detected containers when `OPENCLAW_DISABLE_BONJOUR` is unset. Docker bridge networks usually do not forward mDNS multicast (`224.0.0.251:5353`) between the container and the LAN, so advertising from the container rarely makes discovery work.

Important gotchas:

@@ -193,16 +192,16 @@ Important gotchas:
`OPENCLAW_GATEWAY_BIND=lan` so the published host port can work.
- Disabling Bonjour does not disable wide-area DNS-SD. Use wide-area discovery
or Tailnet when the Gateway and node are not on the same LAN.
- Reusing the same `OPENCLAW_CONFIG_DIR` outside Docker does not inherit the
Compose default unless the environment still sets `OPENCLAW_DISABLE_BONJOUR`.
- Reusing the same `OPENCLAW_CONFIG_DIR` outside Docker does not persist the
container auto-disable policy.
- Set `OPENCLAW_DISABLE_BONJOUR=0` only for host networking, macvlan, or another
network where mDNS multicast is known to pass.
network where mDNS multicast is known to pass; set it to `1` to force-disable.

## Troubleshooting disabled Bonjour

If a node no longer auto-discovers the Gateway after Docker setup:

1. Confirm whether the Gateway is intentionally suppressing LAN advertising:
1. Confirm whether the Gateway is running in auto, forced-on, or forced-off mode:

```bash
docker compose config | grep OPENCLAW_DISABLE_BONJOUR
```

@@ -239,9 +238,9 @@ If a node no longer auto-discovers the Gateway after Docker setup:
container bridges, WSL, or interface churn can leave the ciao advertiser in a
non-announced state. OpenClaw retries a few times and then disables Bonjour
for the current Gateway process instead of restarting the advertiser forever.
- **Docker bridge networking**: bundled Docker Compose disables Bonjour by
default with `OPENCLAW_DISABLE_BONJOUR=1`. Set it to `0` only for host,
macvlan, or another mDNS-capable network.
- **Docker bridge networking**: Bonjour auto-disables in detected containers.
Set `OPENCLAW_DISABLE_BONJOUR=0` only for host, macvlan, or another
mDNS-capable network.
- **Sleep / interface churn**: macOS may temporarily drop mDNS results; retry.
- **Browse works but resolve fails**: keep machine names simple (avoid emojis or
punctuation), then restart the Gateway. The service instance name derives from

@@ -260,7 +259,8 @@ sequences (e.g. spaces become `\032`).
- `openclaw plugins disable bonjour` disables LAN multicast advertising by disabling the bundled plugin.
- `openclaw plugins enable bonjour` restores the default LAN discovery plugin.
- `OPENCLAW_DISABLE_BONJOUR=1` disables LAN multicast advertising without changing plugin config; accepted truthy values are `1`, `true`, `yes`, and `on` (legacy: `OPENCLAW_DISABLE_BONJOUR`).
- Docker Compose sets `OPENCLAW_DISABLE_BONJOUR=1` by default for bridge networking; override with `OPENCLAW_DISABLE_BONJOUR=0` only when mDNS multicast is available.
- `OPENCLAW_DISABLE_BONJOUR=0` forces LAN multicast advertising on, including inside detected containers; accepted falsy values are `0`, `false`, `no`, and `off`.
- When `OPENCLAW_DISABLE_BONJOUR` is unset, Bonjour advertises on normal hosts and auto-disables inside detected containers.
- `gateway.bind` in `~/.openclaw/openclaw.json` controls the Gateway bind mode.
- `OPENCLAW_SSH_PORT` overrides the SSH port when `sshPort` is advertised (legacy: `OPENCLAW_SSH_PORT`).
- `OPENCLAW_TAILNET_DNS` publishes a MagicDNS hint in TXT when mDNS full mode is enabled (legacy: `OPENCLAW_TAILNET_DNS`).
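
The truthy/falsy parsing described above can be sketched in shell. The `is_truthy` helper is hypothetical (not part of OpenClaw); it only mirrors the documented accepted values:

```shell
# Hypothetical helper mirroring the documented truthy parsing for
# OPENCLAW_DISABLE_BONJOUR: 1/true/yes/on (any case) count as truthy,
# everything else (including 0/false/no/off and unset) as falsy.
is_truthy() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    1|true|yes|on) return 0 ;;
    *) return 1 ;;
  esac
}
```

Under this sketch, `is_truthy 1` and `is_truthy YES` succeed, while `is_truthy 0` and `is_truthy off` fail.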
@@ -554,6 +554,8 @@ Periodic heartbeat runs.
qualityGuard: { enabled: true, maxRetries: 1 },
postCompactionSections: ["Session Startup", "Red Lines"], // [] disables reinjection
model: "openrouter/anthropic/claude-sonnet-4-6", // optional compaction-only model override
truncateAfterCompaction: true, // rotate to a smaller successor JSONL after compaction
maxActiveTranscriptBytes: "20mb", // optional preflight local compaction trigger
notifyUser: true, // send brief notices when compaction starts and completes (default: false)
memoryFlush: {
enabled: true,

@@ -576,6 +578,7 @@ Periodic heartbeat runs.
- `qualityGuard`: retry-on-malformed-output checks for safeguard summaries. Enabled by default in safeguard mode; set `enabled: false` to skip the audit.
- `postCompactionSections`: optional AGENTS.md H2/H3 section names to re-inject after compaction. Defaults to `["Session Startup", "Red Lines"]`; set `[]` to disable reinjection. When unset or explicitly set to that default pair, older `Every Session`/`Safety` headings are also accepted as a legacy fallback.
- `model`: optional `provider/model-id` override for compaction summarization only. Use this when the main session should keep one model but compaction summaries should run on another; when unset, compaction uses the session's primary model.
- `maxActiveTranscriptBytes`: optional byte threshold (`number` or strings like `"20mb"`) that triggers normal local compaction before a run when the active JSONL grows past the threshold. Requires `truncateAfterCompaction` so successful compaction can rotate to a smaller successor transcript. Disabled when unset or `0`.
- `notifyUser`: when `true`, sends brief notices to the user when compaction starts and when it completes (for example, "Compacting context..." and "Compaction complete"). Disabled by default to keep compaction silent.
- `memoryFlush`: silent agentic turn before auto-compaction to store durable memories. Skipped when workspace is read-only.

@@ -215,6 +215,11 @@ Configures inbound media understanding (image/audio/video):
{ type: "cli", command: "whisper", args: ["--model", "base", "{{MediaPath}}"] },
],
},
image: {
enabled: true,
timeoutSeconds: 180,
models: [{ provider: "ollama", model: "gemma4:26b", timeoutSeconds: 300 }],
},
video: {
enabled: true,
maxBytes: 52428800,

@@ -242,6 +247,7 @@ Configures inbound media understanding (image/audio/video):

- `capabilities`: optional list (`image`, `audio`, `video`). Defaults: `openai`/`anthropic`/`minimax` → image, `google` → image+audio+video, `groq` → audio.
- `prompt`, `maxChars`, `maxBytes`, `timeoutSeconds`, `language`: per-entry overrides.
- `tools.media.image.timeoutSeconds` and matching image model `timeoutSeconds` entries also apply when the agent calls the explicit `image` tool.
- Failures fall back to the next entry.

Provider auth follows standard order: `auth-profiles.json` → env vars → `models.providers.*.apiKey`.

@@ -429,6 +435,10 @@ OpenClaw uses the built-in model catalog. Add custom providers via `models.provi
- `models.providers.*.api`: request adapter (`openai-completions`, `openai-responses`, `anthropic-messages`, `google-generative-ai`, etc).
- `models.providers.*.apiKey`: provider credential (prefer SecretRef/env substitution).
- `models.providers.*.auth`: auth strategy (`api-key`, `token`, `oauth`, `aws-sdk`).
- `models.providers.*.contextWindow`: default native context window for models under this provider when the model entry does not set `contextWindow`.
- `models.providers.*.contextTokens`: default effective runtime context cap for models under this provider when the model entry does not set `contextTokens`.
- `models.providers.*.maxTokens`: default output-token cap for models under this provider when the model entry does not set `maxTokens`.
- `models.providers.*.timeoutSeconds`: optional per-provider model HTTP request timeout in seconds, including connect, headers, body, and total request abort handling.
- `models.providers.*.injectNumCtxForOpenAICompat`: for Ollama + `openai-completions`, inject `options.num_ctx` into requests (default: `true`).
- `models.providers.*.authHeader`: force credential transport in the `Authorization` header when required.
- `models.providers.*.baseUrl`: upstream API base URL.

@@ -446,8 +456,8 @@ OpenClaw uses the built-in model catalog. Add custom providers via `models.provi
</Accordion>
<Accordion title="Model catalog entries">
- `models.providers.*.models`: explicit provider model catalog entries.
- `models.providers.*.models.*.contextWindow`: native model context window metadata.
- `models.providers.*.models.*.contextTokens`: optional runtime context cap. Use this when you want a smaller effective context budget than the model's native `contextWindow`; `openclaw models list` shows both values when they differ.
- `models.providers.*.models.*.contextWindow`: native model context window metadata. This overrides provider-level `contextWindow` for that model.
- `models.providers.*.models.*.contextTokens`: optional runtime context cap. This overrides provider-level `contextTokens`; use it when you want a smaller effective context budget than the model's native `contextWindow`; `openclaw models list` shows both values when they differ.
- `models.providers.*.models.*.compat.supportsDeveloperRole`: optional compatibility hint. For `api: "openai-completions"` with a non-empty non-native `baseUrl` (host not `api.openai.com`), OpenClaw forces this to `false` at runtime. Empty/omitted `baseUrl` keeps default OpenAI behavior.
- `models.providers.*.models.*.compat.requiresStringContent`: optional compatibility hint for string-only OpenAI-compatible chat endpoints. When `true`, OpenClaw flattens pure text `messages[].content` arrays into plain strings before sending the request.
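
A sketch of how the per-model overrides interact with provider-level defaults (the `lmstudio` / `my-local-model` ids reuse the illustrative names from elsewhere in these docs; the numbers are arbitrary):

```
{
  models: {
    providers: {
      lmstudio: {
        contextWindow: 131072, // provider-level default
        models: [
          {
            id: "my-local-model",
            contextWindow: 196608, // overrides the provider default for this model
            contextTokens: 65536, // smaller effective runtime cap than the native window
          },
        ],
      },
    },
  },
}
```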
</Accordion>

@@ -859,6 +859,7 @@ Notes:
- Set `logging.file` for a stable path.
- `consoleLevel` bumps to `debug` when `--verbose`.
- `maxFileBytes`: maximum active log file size in bytes before rotation (positive integer; default: `104857600` = 100 MB). OpenClaw keeps up to five numbered archives beside the active file.
- `redactSensitive` / `redactPatterns`: best-effort masking for console output, file logs, OTLP log records, and persisted session transcript text.

---

@@ -1125,6 +1126,7 @@ Applies only to one-shot cron jobs. Recurring jobs use separate failure handling
enabled: false,
after: 3,
cooldownMs: 3600000,
includeSkipped: false,
mode: "announce",
accountId: "main",
},

@@ -1135,6 +1137,7 @@ Applies only to one-shot cron jobs. Recurring jobs use separate failure handling
- `enabled`: enable failure alerts for cron jobs (default: `false`).
- `after`: consecutive failures before an alert fires (positive integer, min: `1`).
- `cooldownMs`: minimum milliseconds between repeated alerts for the same job (non-negative integer).
- `includeSkipped`: count consecutive skipped runs toward the alert threshold (default: `false`). Skipped runs are tracked separately and do not affect execution-error backoff.
- `mode`: delivery mode — `"announce"` sends via a channel message; `"webhook"` posts to the configured webhook.
- `accountId`: optional account or channel id to scope alert delivery.

@@ -86,9 +86,9 @@ Security notes:
Disable/override:

- `OPENCLAW_DISABLE_BONJOUR=1` disables advertising.
- Docker Compose defaults `OPENCLAW_DISABLE_BONJOUR=1` because bridge networks
usually do not carry mDNS multicast reliably; use `0` only on host, macvlan,
or another mDNS-capable network.
- When `OPENCLAW_DISABLE_BONJOUR` is unset, Bonjour advertises on normal hosts
and auto-disables inside detected containers. Use `0` only on host, macvlan,
or another mDNS-capable network; use `1` to force-disable.
- `gateway.bind` in `~/.openclaw/openclaw.json` controls the Gateway bind mode.
- `OPENCLAW_SSH_PORT` overrides the SSH port advertised when `sshPort` is emitted.
- `OPENCLAW_TAILNET_DNS` publishes a `tailnetDns` hint (MagicDNS).

@@ -197,6 +197,7 @@ That stages grounded durable candidates into the short-term dreaming store while
- `browser.ssrfPolicy.allowPrivateNetwork` → `browser.ssrfPolicy.dangerouslyAllowPrivateNetwork`
- `browser.profiles.*.driver: "extension"` → `"existing-session"`
- remove `browser.relayBindHost` (legacy extension relay setting)
- legacy `models.providers.*.api: "openai"` → `"openai-completions"` (gateway startup also skips providers whose `api` is set to a future or unknown enum value rather than failing closed)

Doctor warnings also include account-default guidance for multi-account channels:

@@ -19,26 +19,26 @@ Best current local stack. Load a large model in LM Studio (for example, a full-s
{
agents: {
defaults: {
model: { primary: “lmstudio/my-local-model” },
model: { primary: "lmstudio/my-local-model" },
models: {
“anthropic/claude-opus-4-6”: { alias: “Opus” },
“lmstudio/my-local-model”: { alias: “Local” },
"anthropic/claude-opus-4-6": { alias: "Opus" },
"lmstudio/my-local-model": { alias: "Local" },
},
},
},
models: {
mode: “merge”,
mode: "merge",
providers: {
lmstudio: {
baseUrl: “http://127.0.0.1:1234/v1”,
apiKey: “lmstudio”,
api: “openai-responses”,
baseUrl: "http://127.0.0.1:1234/v1",
apiKey: "lmstudio",
api: "openai-responses",
models: [
{
id: “my-local-model”,
name: “Local Model”,
id: "my-local-model",
name: "Local Model",
reasoning: false,
input: [“text”],
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 196608,
maxTokens: 8192,

@@ -124,6 +124,7 @@ vLLM, LiteLLM, OAI-proxy, or custom gateways work if they expose an OpenAI-style
baseUrl: "http://127.0.0.1:8000/v1",
apiKey: "sk-local",
api: "openai-responses",
timeoutSeconds: 300,
models: [
{
id: "my-local-model",

@@ -142,6 +143,10 @@ vLLM, LiteLLM, OAI-proxy, or custom gateways work if they expose an OpenAI-style
```

Keep `models.mode: "merge"` so hosted models stay available as fallbacks.
Use `models.providers.<id>.timeoutSeconds` for slow local or remote model
servers before raising `agents.defaults.timeoutSeconds`. The provider timeout
applies only to model HTTP requests, including connect, headers, body streaming,
and the total guarded-fetch abort.
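
A sketch combining the two knobs (the `vllm` provider id is illustrative; the values are arbitrary):

```
{
  agents: {
    defaults: {
      timeoutSeconds: 600, // overall agent budget; raise this last
    },
  },
  models: {
    providers: {
      vllm: {
        baseUrl: "http://127.0.0.1:8000/v1",
        timeoutSeconds: 300, // model HTTP requests only (connect, headers, body, abort)
      },
    },
  },
}
```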

Behavior note for local/proxied `/v1` backends:

@@ -159,6 +164,11 @@ Compatibility notes for stricter OpenAI-compatible backends:
structured content-part arrays. Set
`models.providers.<provider>.models[].compat.requiresStringContent: true` for
those endpoints.
- Some local models emit standalone bracketed tool requests as text, such as
`[tool_name]` followed by JSON and `[END_TOOL_REQUEST]`. OpenClaw promotes
those into real tool calls only when the name exactly matches a registered
tool for the turn; otherwise the block is treated as unsupported text and is
hidden from user-visible replies.
- Some smaller or stricter local backends are unstable with OpenClaw's full
agent-runtime prompt shape, especially when tool schemas are included. If the
backend works for tiny direct `/v1/chat/completions` calls but fails on normal

@@ -52,10 +52,12 @@ You can tune console verbosity independently via:
- `logging.consoleLevel` (default `info`)
- `logging.consoleStyle` (`pretty` | `compact` | `json`)

## Tool summary redaction
## Redaction

Verbose tool summaries (e.g. `🛠️ Exec: ...`) can mask sensitive tokens before they hit the
console stream. This is **tools-only** and does not alter file logs.
OpenClaw can mask sensitive tokens before log or transcript output leaves the
process. The same redaction policy is applied at console, file-log, OTLP
log-record, and session transcript text sinks, so matching secret values are
masked before JSONL lines or messages are written to disk.

- `logging.redactSensitive`: `off` | `tools` (default: `tools`)
- `logging.redactPatterns`: array of regex strings (overrides defaults)
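
For example, a minimal config sketch (the patterns are illustrative placeholders, not defaults):

```
{
  logging: {
    redactSensitive: "tools",
    redactPatterns: [
      "sk-[A-Za-z0-9]{8,}", // API-key-shaped tokens
      "internal\\.example\\.com", // internal hostnames
    ],
  },
}
```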
@@ -8,7 +8,7 @@ title: "Multiple gateways"

Most setups should use one Gateway because a single Gateway can handle multiple messaging connections and agents. If you need stronger isolation or redundancy (e.g., a rescue bot), run separate Gateways with isolated profiles/ports.

## Best Recommended Setup
## Best recommended setup

For most users, the simplest rescue-bot setup is:

@@ -44,7 +44,7 @@ During `openclaw --profile rescue onboard`:
If onboarding already installed the rescue service for you, the final
`gateway install` is not needed.

## Why This Works
## Why this works

The rescue bot stays independent because it has its own:

@@ -75,7 +75,7 @@ In practice, that means the rescue bot gets its own:

The prompts are otherwise the same as normal onboarding.

## General Multi-Gateway Setup
## General multi-gateway setup

The rescue-bot layout above is the easiest default, but the same isolation
pattern works for any pair or group of Gateways on one host.

@@ -114,7 +114,7 @@ Use the rescue-bot quickstart when you want a fallback operator lane. Use the
general profile pattern when you want multiple long-lived Gateways for
different channels, tenants, workspaces, or operational roles.

## Isolation Checklist
## Isolation checklist

Keep these unique per Gateway instance:

@@ -110,9 +110,9 @@ Best for:
- You want lower per-turn sync overhead.
- You do not want host-local edits to silently overwrite remote sandbox state.

Important: if you edit files on the host outside OpenClaw after the initial seed,
the remote sandbox does **not** see those changes. Use
`openclaw sandbox recreate` to re-seed.
<Warning>
If you edit files on the host outside OpenClaw after the initial seed, the remote sandbox does **not** see those changes. Use `openclaw sandbox recreate` to re-seed.
</Warning>

### Choosing a mode

@@ -147,9 +147,17 @@ When any subkey is enabled, model and tool spans get bounded, redacted
- **Traces:** `diagnostics.otel.sampleRate` (root-span only, `0.0` drops all,
`1.0` keeps all).
- **Metrics:** `diagnostics.otel.flushIntervalMs` (minimum `1000`).
- **Logs:** OTLP logs respect `logging.level` (file log level). Console
redaction does **not** apply to OTLP logs. High-volume installs should
prefer OTLP collector sampling/filtering over local sampling.
- **Logs:** OTLP logs respect `logging.level` (file log level). They use the
diagnostic log-record redaction path, not console formatting. High-volume
installs should prefer OTLP collector sampling/filtering over local sampling.
- **File-log correlation:** JSONL file logs include top-level `traceId`,
`spanId`, `parentSpanId`, and `traceFlags` when the log call carries a valid
diagnostic trace context, which lets log processors join local log lines with
exported spans.
- **Request correlation:** Gateway HTTP requests and WebSocket frames create an
internal request trace scope. Logs and diagnostic events inside that scope
inherit the request trace by default, while agent run and model-call spans are
created as children so provider `traceparent` headers stay on the same trace.

## Exported metrics

@@ -161,6 +169,10 @@ When any subkey is enabled, model and tool spans get bounded, redacted
- `openclaw.context.tokens` (histogram, attrs: `openclaw.context`, `openclaw.channel`, `openclaw.provider`, `openclaw.model`)
- `gen_ai.client.token.usage` (histogram, GenAI semantic-conventions metric, attrs: `gen_ai.token.type` = `input`/`output`, `gen_ai.provider.name`, `gen_ai.operation.name`, `gen_ai.request.model`)
- `gen_ai.client.operation.duration` (histogram, seconds, GenAI semantic-conventions metric, attrs: `gen_ai.provider.name`, `gen_ai.operation.name`, `gen_ai.request.model`, optional `error.type`)
- `openclaw.model_call.duration_ms` (histogram, attrs: `openclaw.provider`, `openclaw.model`, `openclaw.api`, `openclaw.transport`)
- `openclaw.model_call.request_bytes` (histogram, UTF-8 byte size of the final model request payload; no raw payload content)
- `openclaw.model_call.response_bytes` (histogram, UTF-8 byte size of streamed model response events; no raw response content)
- `openclaw.model_call.time_to_first_byte_ms` (histogram, elapsed time before the first streamed response event)

### Message flow

@@ -212,6 +224,7 @@ When any subkey is enabled, model and tool spans get bounded, redacted
- `openclaw.model.call`
- `gen_ai.system` by default, or `gen_ai.provider.name` when the latest GenAI semantic conventions are opted in
- `gen_ai.request.model`, `gen_ai.operation.name`, `openclaw.provider`, `openclaw.model`, `openclaw.api`, `openclaw.transport`
- `openclaw.model_call.request_bytes`, `openclaw.model_call.response_bytes`, `openclaw.model_call.time_to_first_byte_ms`
- `openclaw.provider.request_id_hash` (bounded SHA-based hash of the upstream provider request id; raw ids are not exported)
- `openclaw.harness.run`
- `openclaw.harness.id`, `openclaw.harness.plugin`, `openclaw.outcome`, `openclaw.provider`, `openclaw.model`, `openclaw.channel`

@@ -75,15 +75,12 @@ Notes:
- `system.run` / `system.run.prepare` / `system.which` request:
`operator.pairing` + `operator.admin`

Important:
<Warning>
Node pairing is a trust and identity flow plus token issuance. It does **not** pin the live node command surface per node.

- Node pairing is a trust/identity flow plus token issuance.
- It does **not** pin the live node command surface per node.
- Live node commands come from what the node declares on connect after the
gateway's global node command policy (`gateway.nodes.allowCommands` /
`denyCommands`) is applied.
- Per-node `system.run` allow/ask policy lives on the node in
`exec.approvals.node.*`, not in the pairing record.
- Live node commands come from what the node declares on connect after the gateway's global node command policy (`gateway.nodes.allowCommands` and `denyCommands`) is applied.
- Per-node `system.run` allow and ask policy lives on the node in `exec.approvals.node.*`, not in the pairing record.
</Warning>

## Node command gating (2026.3.31+)

@@ -33,7 +33,7 @@ flowchart TB
T --> C
```

## Quick Setup
## Quick setup

### Step 1: Add SSH Config

@@ -152,7 +152,7 @@ launchctl bootout gui/$UID/ai.openclaw.ssh-tunnel

---

## How It Works
## How it works

| Component | What It Does |
| ------------------------------------ | ------------------------------------------------------------ |

@@ -15,38 +15,37 @@ This repo supports “remote over SSH” by keeping a single Gateway (the master
- The Gateway WebSocket binds to **loopback** on your configured port (defaults to 18789).
- For remote use, you forward that loopback port over SSH (or use a tailnet/VPN and tunnel less).

## Common VPN/tailnet setups (where the agent lives)
## Common VPN and tailnet setups

Think of the **Gateway host** as “where the agent lives.” It owns sessions, auth profiles, channels, and state.
Your laptop/desktop (and nodes) connect to that host.
Think of the **Gateway host** as where the agent lives. It owns sessions, auth profiles, channels, and state. Your laptop, desktop, and nodes connect to that host.

### 1) Always-on Gateway in your tailnet (VPS or home server)
### Always-on Gateway in your tailnet

Run the Gateway on a persistent host and reach it via **Tailscale** or SSH.
Run the Gateway on a persistent host (VPS or home server) and reach it via **Tailscale** or SSH.

- **Best UX:** keep `gateway.bind: "loopback"` and use **Tailscale Serve** for the Control UI.
- **Fallback:** keep loopback + SSH tunnel from any machine that needs access.
- **Fallback:** keep loopback plus SSH tunnel from any machine that needs access.
- **Examples:** [exe.dev](/install/exe-dev) (easy VM) or [Hetzner](/install/hetzner) (production VPS).

This is ideal when your laptop sleeps often but you want the agent always-on.
Ideal when your laptop sleeps often but you want the agent always-on.

### 2) Home desktop runs the Gateway, laptop is remote control
### Home desktop runs the Gateway

The laptop does **not** run the agent. It connects remotely:

- Use the macOS app’s **Remote over SSH** mode (Settings → General → “OpenClaw runs”).
- The app opens and manages the tunnel, so WebChat + health checks “just work.”
- Use the macOS app's **Remote over SSH** mode (Settings → General → OpenClaw runs).
- The app opens and manages the tunnel, so WebChat and health checks just work.

Runbook: [macOS remote access](/platforms/mac/remote).

### 3) Laptop runs the Gateway, remote access from other machines
### Laptop runs the Gateway

Keep the Gateway local but expose it safely:

- SSH tunnel to the laptop from other machines, or
- Tailscale Serve the Control UI and keep the Gateway loopback-only.

Guide: [Tailscale](/gateway/tailscale) and [Web overview](/web).
Guides: [Tailscale](/gateway/tailscale) and [Web overview](/web).

## Command flow (what runs where)

@@ -77,9 +76,13 @@ With the tunnel up:
- `openclaw health` and `openclaw status --deep` now reach the remote gateway via `ws://127.0.0.1:18789`.
- `openclaw gateway status`, `openclaw gateway health`, `openclaw gateway probe`, and `openclaw gateway call` can also target the forwarded URL via `--url` when needed.

Note: replace `18789` with your configured `gateway.port` (or `--port`/`OPENCLAW_GATEWAY_PORT`).
Note: when you pass `--url`, the CLI does not fall back to config or environment credentials.
Include `--token` or `--password` explicitly. Missing explicit credentials is an error.
<Note>
Replace `18789` with your configured `gateway.port` (or `--port` or `OPENCLAW_GATEWAY_PORT`).
</Note>

<Warning>
When you pass `--url`, the CLI does not fall back to config or environment credentials. Include `--token` or `--password` explicitly. Missing explicit credentials is an error.
</Warning>

## CLI remote defaults

@@ -126,7 +129,7 @@ WebChat no longer uses a separate HTTP port. The SwiftUI chat UI connects direct
- Forward `18789` over SSH (see above), then connect clients to `ws://127.0.0.1:18789`.
- On macOS, prefer the app’s “Remote over SSH” mode, which manages the tunnel automatically.

## macOS app "Remote over SSH"
## macOS app Remote over SSH

The macOS menu bar app can drive the same setup end-to-end (remote status checks, WebChat, and Voice Wake forwarding).

@@ -222,7 +225,9 @@ launchctl bootstrap gui/$UID ~/Library/LaunchAgents/ai.openclaw.ssh-tunnel.plist

The tunnel will start automatically at login, restart on crash, and keep the forwarded port live.

Note: if you have a leftover `com.openclaw.ssh-tunnel` LaunchAgent from an older setup, unload and delete it.
<Note>
If you have a leftover `com.openclaw.ssh-tunnel` LaunchAgent from an older setup, unload and delete it.
</Note>

#### Troubleshooting

@@ -856,12 +856,9 @@ Set a token so **all** WS clients must authenticate:

Doctor can generate one for you: `openclaw doctor --generate-gateway-token`.

Note: `gateway.remote.token` / `.password` are client credential sources. They
do **not** protect local WS access by themselves.
Local call paths can use `gateway.remote.*` as fallback only when `gateway.auth.*`
is unset.
If `gateway.auth.token` / `gateway.auth.password` is explicitly configured via
SecretRef and unresolved, resolution fails closed (no remote fallback masking).
<Note>
`gateway.remote.token` and `gateway.remote.password` are client credential sources. They do **not** protect local WS access by themselves. Local call paths can use `gateway.remote.*` as fallback only when `gateway.auth.*` is unset. If `gateway.auth.token` or `gateway.auth.password` is explicitly configured via SecretRef and unresolved, resolution fails closed (no remote fallback masking).
</Note>
Optional: pin remote TLS with `gateway.remote.tlsFingerprint` when using `wss://`.
Plaintext `ws://` is loopback-only by default. For trusted private-network
paths, set `OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1` on the client process as

@@ -999,7 +996,7 @@ Logs and transcripts can leak sensitive info even when access controls are corre

Recommendations:

- Keep tool summary redaction on (`logging.redactSensitive: "tools"`; default).
- Keep log and transcript redaction on (`logging.redactSensitive: "tools"`; default).
- Add custom patterns for your environment via `logging.redactPatterns` (tokens, hostnames, internal URLs).
- When sharing diagnostics, prefer `openclaw status --all` (pasteable, secrets redacted) over raw logs.
- Prune old session transcripts and log files if you don’t need long retention.

@@ -1092,9 +1089,9 @@ Two complementary approaches:
- **Run the full Gateway in Docker** (container boundary): [Docker](/install/docker)
- **Tool sandbox** (`agents.defaults.sandbox`, host gateway + sandbox-isolated tools; Docker is the default backend): [Sandboxing](/gateway/sandboxing)

Note: to prevent cross-agent access, keep `agents.defaults.sandbox.scope` at `"agent"` (default)
or `"session"` for stricter per-session isolation. `scope: "shared"` uses a
single container/workspace.
<Note>
To prevent cross-agent access, keep `agents.defaults.sandbox.scope` at `"agent"` (default) or `"session"` for stricter per-session isolation. `scope: "shared"` uses a single container or workspace.
</Note>
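
A minimal config sketch of the scope setting, with the documented values noted inline:

```
{
  agents: {
    defaults: {
      sandbox: {
        scope: "agent", // default; "session" is stricter, "shared" uses one container
      },
    },
  },
}
```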

Also consider agent workspace access inside the sandbox:

@@ -1103,7 +1100,9 @@ Also consider agent workspace access inside the sandbox:
|
||||
- `agents.defaults.sandbox.workspaceAccess: "rw"` mounts the agent workspace read/write at `/workspace`
|
||||
- Extra `sandbox.docker.binds` are validated against normalized and canonicalized source paths. Parent-symlink tricks and canonical home aliases still fail closed if they resolve into blocked roots such as `/etc`, `/var/run`, or credential directories under the OS home.
|
||||
|
||||
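To see why canonicalization matters for bind validation, note how a parent symlink resolves back into a blocked root. A standalone illustration using `realpath`, not the validator itself:

```bash
# A bind source that looks harmless but aliases /etc via a symlink.
mkdir -p /tmp/openclaw-bind-demo
ln -sfn /etc /tmp/openclaw-bind-demo/conf
# Canonicalization reveals the real target, so a blocked-roots check fails closed.
realpath /tmp/openclaw-bind-demo/conf
```

Checking the resolved path rather than the literal bind string is what defeats the parent-symlink trick.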
Important: `tools.elevated` is the global baseline escape hatch that runs exec outside the sandbox. The effective host is `gateway` by default, or `node` when the exec target is configured to `node`. Keep `tools.elevated.allowFrom` tight and don’t enable it for strangers. You can further restrict elevated per agent via `agents.list[].tools.elevated`. See [Elevated Mode](/tools/elevated).
<Warning>
`tools.elevated` is the global baseline escape hatch that runs exec outside the sandbox. The effective host is `gateway` by default, or `node` when the exec target is configured to `node`. Keep `tools.elevated.allowFrom` tight and do not enable it for strangers. You can further restrict elevated per agent via `agents.list[].tools.elevated`. See [Elevated mode](/tools/elevated).
</Warning>

### Sub-agent delegation guardrail


@@ -85,7 +85,9 @@ Connect from another Tailnet device:
- Control UI: `http://<tailscale-ip>:18789/`
- WebSocket: `ws://<tailscale-ip>:18789`

Note: loopback (`http://127.0.0.1:18789`) will **not** work in this mode.
<Note>
Loopback (`http://127.0.0.1:18789`) will **not** work in this mode.
</Note>

### Public internet (Funnel + shared password)


@@ -7,8 +7,7 @@ read_when:
title: "Debugging"
---

This page covers debugging helpers for streaming output, especially when a
provider mixes reasoning into normal text.
Debugging helpers for streaming output, especially when a provider mixes reasoning into normal text.

## Runtime debug overrides

@@ -249,22 +248,27 @@ Reset flow (fresh start):
pnpm gateway:dev:reset
```

Note: `--dev` is a **global** profile flag and gets eaten by some runners.
If you need to spell it out, use the env var form:
<Note>
`--dev` is a **global** profile flag and gets eaten by some runners. If you need to spell it out, use the env var form:

```bash
OPENCLAW_PROFILE=dev openclaw gateway --dev --reset
```

</Note>

`--reset` wipes config, credentials, sessions, and the dev workspace (using
`trash`, not `rm`), then recreates the default dev setup.

Tip: if a non‑dev gateway is already running (launchd/systemd), stop it first:
<Tip>
If a non-dev gateway is already running (launchd or systemd), stop it first:

```bash
openclaw gateway stop
```

</Tip>

## Raw stream logging (OpenClaw)

OpenClaw can log the **raw assistant stream** before any filtering/formatting.

@@ -127,13 +127,16 @@ Live tests are split into two layers so we can isolate failures:
- Embedded agent forwards a multimodal user message to the model
- Assertion: reply contains `cat` + the code (OCR tolerance: minor mistakes allowed)

Tip: to see what you can test on your machine (and the exact `provider/model` ids), run:
<Tip>
To see what you can test on your machine (and the exact `provider/model` ids), run:

```bash
openclaw models list
openclaw models list --json
```

</Tip>

## Live: CLI backend smoke (Claude, Codex, Gemini, or other local CLIs)

- Test: `src/gateway/gateway-cli-backend.live.test.ts`
@@ -227,10 +230,12 @@ Notes:
- `OPENCLAW_LIVE_ACP_BIND_CODEX_MODEL=gpt-5.2`
- `OPENCLAW_LIVE_ACP_BIND_OPENCODE_MODEL=opencode/kimi-k2.6`
- `OPENCLAW_LIVE_ACP_BIND_REQUIRE_TRANSCRIPT=1`
- `OPENCLAW_LIVE_ACP_BIND_REQUIRE_CRON=1`
- `OPENCLAW_LIVE_ACP_BIND_PARENT_MODEL=openai/gpt-5.2`
- Notes:
- This lane uses the gateway `chat.send` surface with admin-only synthetic originating-route fields so tests can attach message-channel context without pretending to deliver externally.
- When `OPENCLAW_LIVE_ACP_BIND_AGENT_COMMAND` is unset, the test uses the embedded `acpx` plugin's built-in agent registry for the selected ACP harness agent.
- Bound-session cron MCP creation is best-effort by default because external ACP harnesses can cancel MCP calls after the bind/image proof has passed; set `OPENCLAW_LIVE_ACP_BIND_REQUIRE_CRON=1` to make that post-bind cron probe strict.

Example:

@@ -337,7 +342,7 @@ Narrow, explicit allowlists are fastest and least flaky:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`

- Tool calling across several providers:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,deepseek/deepseek-v4-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,deepseek/deepseek-v4-flash,zai/glm-5.1,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`

- Google focus (Gemini API key + Antigravity):
- Gemini (API key): `OPENCLAW_LIVE_GATEWAY_MODELS="google/gemini-3-flash-preview" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
@@ -371,11 +376,11 @@ This is the “common models” run we expect to keep working:
- Google (Gemini API): `google/gemini-3.1-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
- Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
- DeepSeek: `deepseek/deepseek-v4-flash` and `deepseek/deepseek-v4-pro`
- Z.AI (GLM): `zai/glm-4.7`
- Z.AI (GLM): `zai/glm-5.1`
- MiniMax: `minimax/MiniMax-M2.7`

Run gateway smoke with tools + image:
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,deepseek/deepseek-v4-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,deepseek/deepseek-v4-flash,zai/glm-5.1,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`

### Baseline: tool calling (Read + optional Exec)

@@ -385,7 +390,7 @@ Pick at least one per provider family:
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
- Google: `google/gemini-3-flash-preview` (or `google/gemini-3.1-pro-preview`)
- DeepSeek: `deepseek/deepseek-v4-flash`
- Z.AI (GLM): `zai/glm-4.7`
- Z.AI (GLM): `zai/glm-5.1`
- MiniMax: `minimax/MiniMax-M2.7`

Optional additional coverage (nice to have):
@@ -411,7 +416,9 @@ More providers you can include in the live matrix (if you have creds/config):
- Built-in: `openai`, `openai-codex`, `anthropic`, `google`, `google-vertex`, `google-antigravity`, `google-gemini-cli`, `zai`, `openrouter`, `opencode`, `opencode-go`, `xai`, `groq`, `cerebras`, `mistral`, `github-copilot`
- Via `models.providers` (custom endpoints): `minimax` (cloud/API), plus any OpenAI/Anthropic-compatible proxy (LM Studio, vLLM, LiteLLM, etc.)

Tip: don’t try to hardcode “all models” in docs. The authoritative list is whatever `discoverModels(...)` returns on your machine + whatever keys are available.
<Tip>
Do not hardcode "all models" in docs. The authoritative list is whatever `discoverModels(...)` returns on your machine plus whatever keys are available.
</Tip>

## Credentials (never commit)


@@ -84,7 +84,9 @@ When debugging real providers/models (requires real creds):
against `moonshot/kimi-k2.6`. Verify the JSON reports Moonshot/K2.6 and the
assistant transcript stores normalized `usage.cost`.

Tip: when you only need one failing case, prefer narrowing live tests via the allowlist env vars described below.
<Tip>
When you only need one failing case, prefer narrowing live tests via the allowlist env vars described below.
</Tip>

## QA-specific runners

@@ -92,9 +94,13 @@ These commands sit beside the main test suites when you need QA-lab realism:

CI runs QA Lab in dedicated workflows. `Parity gate` runs on matching PRs and
from manual dispatch with mock providers. `QA-Lab - All Lanes` runs nightly on
`main` and from manual dispatch with the mock parity gate, live Matrix lane, and
Convex-managed live Telegram lane as parallel jobs. `OpenClaw Release Checks`
runs the same lanes before release approval.
`main` and from manual dispatch with the mock parity gate, live Matrix lane,
Convex-managed live Telegram lane, and Convex-managed live Discord lane as
parallel jobs. Scheduled QA and release checks pass Matrix `--profile fast`
explicitly, while the Matrix CLI and manual workflow input default remain
`all`; manual dispatch can shard `all` into `transport`, `media`, `e2ee-smoke`,
`e2ee-deep`, and `e2ee-cli` jobs. `OpenClaw Release Checks` runs parity plus
the fast Matrix and Telegram lanes before release approval.

- `pnpm openclaw qa suite`
- Runs repo-backed QA scenarios directly on the host.
@@ -136,10 +142,13 @@ runs the same lanes before release approval.
then seeds an affected broken session JSONL and verifies
`openclaw doctor --fix` rewrites it to the active branch with a backup.
- `pnpm test:docker:npm-telegram-live`
- Installs a published OpenClaw package in Docker, runs installed-package
- Installs an OpenClaw package candidate in Docker, runs installed-package
onboarding, configures Telegram through the installed CLI, then reuses the
live Telegram QA lane with that installed package as the SUT Gateway.
- Defaults to `OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC=openclaw@beta`.
- Defaults to `OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC=openclaw@beta`; set
`OPENCLAW_NPM_TELEGRAM_PACKAGE_TGZ=/path/to/openclaw-current.tgz` or
`OPENCLAW_CURRENT_PACKAGE_TGZ` to test a resolved local tarball instead of
installing from the registry.
- Uses the same Telegram env credentials or Convex credential source as
`pnpm openclaw qa telegram`. For CI/release automation, set
`OPENCLAW_NPM_TELEGRAM_CREDENTIAL_SOURCE=convex` plus
@@ -151,6 +160,43 @@ runs the same lanes before release approval.
- GitHub Actions exposes this lane as the manual maintainer workflow
`NPM Telegram Beta E2E`. It does not run on merge. The workflow uses the
`qa-live-shared` environment and Convex CI credential leases.
- GitHub Actions also exposes `Package Acceptance` for side-run product proof
against one candidate package. It accepts a trusted ref, published npm spec,
HTTPS tarball URL plus SHA-256, or tarball artifact from another run, uploads
the normalized `openclaw-current.tgz` as `package-under-test`, then runs the
existing Docker E2E scheduler with smoke, package, product, full, or custom
lane profiles. Set `telegram_mode=mock-openai` or `live-frontier` to run the
Telegram QA workflow against the same `package-under-test` artifact.
- Latest beta product proof:

```bash
gh workflow run package-acceptance.yml --ref main \
-f source=npm \
-f package_spec=openclaw@beta \
-f suite_profile=product \
-f telegram_mode=mock-openai
```

- Exact tarball URL proof requires a digest:

```bash
gh workflow run package-acceptance.yml --ref main \
-f source=url \
-f package_url=https://registry.npmjs.org/openclaw/-/openclaw-VERSION.tgz \
-f package_sha256=<sha256> \
-f suite_profile=package
```

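One way to produce the `package_sha256` value for the URL-proof dispatch above (a sketch with standard tools; the demo file stands in for the tarball you actually downloaded from the registry URL):

```bash
# Compute a SHA-256 digest in the format the workflow input expects.
# Replace the stand-in file with the real downloaded tarball.
printf 'stand-in tarball bytes' > /tmp/openclaw-demo.tgz
sha256sum /tmp/openclaw-demo.tgz | awk '{print $1}'
```

The printed 64-character hex string is what goes into `-f package_sha256=...`.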
- Artifact proof downloads a tarball artifact from another Actions run:

```bash
gh workflow run package-acceptance.yml --ref main \
-f source=artifact \
-f artifact_run_id=<run-id> \
-f artifact_name=<artifact-name> \
-f suite_profile=smoke
```

- `pnpm test:docker:bundled-channel-deps`
- Packs and installs the current OpenClaw build in Docker, starts the Gateway
with OpenAI configured, then enables bundled channel/plugins via config
@@ -208,10 +254,11 @@ runs the same lanes before release approval.
- Repo checkouts load the bundled runner directly; no separate plugin install
step is needed.
- Provisions three temporary Matrix users (`driver`, `sut`, `observer`) plus one private room, then starts a QA gateway child with the real Matrix plugin as the SUT transport.
- Defaults to `--profile all`. Use `--profile fast --fail-fast` for release-critical transport proof, or `--profile transport|media|e2ee-smoke|e2ee-deep|e2ee-cli` when sharding the full catalog.
- Uses the pinned stable Tuwunel image `ghcr.io/matrix-construct/tuwunel:v1.5.1` by default. Override with `OPENCLAW_QA_MATRIX_TUWUNEL_IMAGE` when you need to test a different image.
- Matrix does not expose shared credential-source flags because the lane provisions disposable users locally.
- Writes a Matrix QA report, summary, observed-events artifact, and combined stdout/stderr output log under `.artifacts/qa-e2e/...`.
- Emits progress by default and enforces a hard run timeout with `OPENCLAW_QA_MATRIX_TIMEOUT_MS` (default 30 minutes). Cleanup is bounded by `OPENCLAW_QA_MATRIX_CLEANUP_TIMEOUT_MS` and failures include the recovery `docker compose ... down --remove-orphans` command.
- Emits progress by default and enforces a hard run timeout with `OPENCLAW_QA_MATRIX_TIMEOUT_MS` (default 30 minutes). `OPENCLAW_QA_MATRIX_NO_REPLY_WINDOW_MS` tunes negative no-reply quiet windows, and cleanup is bounded by `OPENCLAW_QA_MATRIX_CLEANUP_TIMEOUT_MS` with failures including the recovery `docker compose ... down --remove-orphans` command.
- `pnpm openclaw qa telegram`
- Runs the Telegram live QA lane against a real private group using the driver and SUT bot tokens from env.
- Requires `OPENCLAW_QA_TELEGRAM_GROUP_ID`, `OPENCLAW_QA_TELEGRAM_DRIVER_BOT_TOKEN`, and `OPENCLAW_QA_TELEGRAM_SUT_BOT_TOKEN`. The group id must be the numeric Telegram chat id.
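The Matrix timeout knobs above take milliseconds; shell arithmetic keeps the conversion honest (the 45-minute figure is only an example, not a recommended value):

```bash
# Express a 45-minute hard run timeout in the milliseconds the env var expects.
timeout_ms=$((45 * 60 * 1000))
echo "$timeout_ms"
# Export before invoking the lane so the runner picks it up.
export OPENCLAW_QA_MATRIX_TIMEOUT_MS="$timeout_ms"
```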
@@ -227,10 +274,11 @@ Live transport lanes share one standard contract so new transports do not drift:
|
||||
`qa-channel` remains the broad synthetic QA suite and is not part of the live
|
||||
transport coverage matrix.
|
||||
|
||||
| Lane | Canary | Mention gating | Allowlist block | Top-level reply | Restart resume | Thread follow-up | Thread isolation | Reaction observation | Help command |
|
||||
| -------- | ------ | -------------- | --------------- | --------------- | -------------- | ---------------- | ---------------- | -------------------- | ------------ |
|
||||
| Matrix | x | x | x | x | x | x | x | x | |
|
||||
| Telegram | x | | | | | | | | x |
|
||||
| Lane | Canary | Mention gating | Allowlist block | Top-level reply | Restart resume | Thread follow-up | Thread isolation | Reaction observation | Help command | Native command registration |
|
||||
| -------- | ------ | -------------- | --------------- | --------------- | -------------- | ---------------- | ---------------- | -------------------- | ------------ | --------------------------- |
|
||||
| Matrix | x | x | x | x | x | x | x | x | | |
|
||||
| Telegram | x | x | | | | | | | x | |
|
||||
| Discord | x | x | | | | | | | | x |
|
||||
|
||||
### Shared Telegram credentials via Convex (v1)
|
||||
|
||||
@@ -395,7 +443,7 @@ Think of the suites as “increasing realism” (and increasing flakiness/cost):
|
||||
|
||||
- Command: `pnpm test`
|
||||
- Config: untargeted runs use the `vitest.full-*.config.ts` shard set and may expand multi-project shards into per-project configs for parallel scheduling
|
||||
- Files: core/unit inventories under `src/**/*.test.ts`, `packages/**/*.test.ts`, `test/**/*.test.ts`, and the whitelisted `ui` node tests covered by `vitest.unit.config.ts`
|
||||
- Files: core/unit inventories under `src/**/*.test.ts`, `packages/**/*.test.ts`, and `test/**/*.test.ts`; UI unit tests run in the dedicated `unit-ui` shard
|
||||
- Scope:
|
||||
- Pure unit tests
|
||||
- In-process integration tests (gateway auth, routing, tooling, parsing, config)
|
||||
@@ -411,9 +459,9 @@ Think of the suites as “increasing realism” (and increasing flakiness/cost):
|
||||
- Untargeted `pnpm test` runs twelve smaller shard configs (`core-unit-fast`, `core-unit-src`, `core-unit-security`, `core-unit-ui`, `core-unit-support`, `core-support-boundary`, `core-contracts`, `core-bundled`, `core-runtime`, `agentic`, `auto-reply`, `extensions`) instead of one giant native root-project process. This cuts peak RSS on loaded machines and avoids auto-reply/extension work starving unrelated suites.
|
||||
- `pnpm test --watch` still uses the native root `vitest.config.ts` project graph, because a multi-shard watch loop is not practical.
|
||||
- `pnpm test`, `pnpm test:watch`, and `pnpm test:perf:imports` route explicit file/directory targets through scoped lanes first, so `pnpm test extensions/discord/src/monitor/message-handler.preflight.test.ts` avoids paying the full root project startup tax.
|
||||
- `pnpm test:changed` expands changed git paths into the same scoped lanes when the diff only touches routable source/test files; config/setup edits still fall back to the broad root-project rerun.
|
||||
- `pnpm check:changed` is the normal smart local gate for narrow work. It classifies the diff into core, core tests, extensions, extension tests, apps, docs, release metadata, live Docker tooling, and tooling, then runs the matching typecheck/lint/test lanes. Public Plugin SDK and plugin-contract changes include one extension validation pass because extensions depend on those core contracts. Release metadata-only version bumps run targeted version/config/root-dependency checks instead of the full suite, with a guard that rejects package changes outside the top-level version field.
|
||||
- Live Docker ACP harness edits run a focused local gate: shell syntax for the live Docker auth scripts, live Docker scheduler dry-run, ACP bind unit tests, and the ACPX extension tests. `package.json` changes are included only when the diff is limited to `scripts["test:docker:live-*"]`; dependency, export, version, and other package-surface edits still use the broader guards.
|
||||
- `pnpm test:changed` expands changed git paths into cheap scoped lanes by default: direct test edits, sibling `*.test.ts` files, explicit source mappings, and local import-graph dependents. Config/setup/package edits do not broad-run tests unless you explicitly use `OPENCLAW_TEST_CHANGED_BROAD=1 pnpm test:changed`.
|
||||
- `pnpm check:changed` is the normal smart local check gate for narrow work. It classifies the diff into core, core tests, extensions, extension tests, apps, docs, release metadata, live Docker tooling, and tooling, then runs the matching typecheck, lint, and guard commands. It does not run Vitest tests; call `pnpm test:changed` or explicit `pnpm test <target>` for test proof. Release metadata-only version bumps run targeted version/config/root-dependency checks, with a guard that rejects package changes outside the top-level version field.
|
||||
- Live Docker ACP harness edits run focused checks: shell syntax for the live Docker auth scripts and a live Docker scheduler dry-run. `package.json` changes are included only when the diff is limited to `scripts["test:docker:live-*"]`; dependency, export, version, and other package-surface edits still use the broader guards.
|
||||
- Import-light unit tests from agents, commands, plugins, auto-reply helpers, `plugin-sdk`, and similar pure utility areas route through the `unit-fast` lane, which skips `test/setup-openclaw-runtime.ts`; stateful/runtime-heavy files stay on the existing lanes.
|
||||
- Selected `plugin-sdk` and `commands` helper source files also map changed-mode runs to explicit sibling tests in those light lanes, so helper edits avoid rerunning the full heavy suite for that directory.
|
||||
- `auto-reply` has dedicated buckets for top-level core helpers, top-level `reply.*` integration tests, and the `src/auto-reply/reply/**` subtree. CI further splits the reply subtree into agent-runner, dispatch, and commands/state-routing shards so one import-heavy bucket does not own the full Node tail.
|
||||
@@ -458,10 +506,11 @@ Think of the suites as “increasing realism” (and increasing flakiness/cost):
|
||||
- The pre-commit hook is formatting-only. It restages formatted files and
|
||||
does not run lint, typecheck, or tests.
|
||||
- Run `pnpm check:changed` explicitly before handoff or push when you
|
||||
need the smart local gate. Public Plugin SDK and plugin-contract
|
||||
changes include one extension validation pass.
|
||||
- `pnpm test:changed` routes through scoped lanes when the changed paths
|
||||
map cleanly to a smaller suite.
|
||||
need the smart local check gate.
|
||||
- `pnpm test:changed` routes through cheap scoped lanes by default. Use
|
||||
`OPENCLAW_TEST_CHANGED_BROAD=1 pnpm test:changed` only when the agent
|
||||
decides a harness, config, package, or contract edit really needs broader
|
||||
Vitest coverage.
|
||||
- `pnpm test:max` and `pnpm test:changed:max` keep the same routing
|
||||
behavior, just with a higher worker cap.
|
||||
- Local worker auto-scaling is intentionally conservative and backs off
|
||||
@@ -606,7 +655,9 @@ These Docker runners split into two buckets:
|
||||
`OPENCLAW_LIVE_GATEWAY_STEP_TIMEOUT_MS=45000`, and
|
||||
`OPENCLAW_LIVE_GATEWAY_MODEL_TIMEOUT_MS=90000`. Override those env vars when you
|
||||
explicitly want the larger exhaustive scan.
|
||||
- `test:docker:all` builds the live Docker image once via `test:docker:live-build`, then reuses it for the live Docker lanes. It also builds one shared `scripts/e2e/Dockerfile` image via `test:docker:e2e-build` and reuses it for the E2E container smoke runners that exercise the built app. The aggregate uses a weighted local scheduler: `OPENCLAW_DOCKER_ALL_PARALLELISM` controls process slots, while resource caps keep heavy live, npm-install, and multi-service lanes from all starting at once. Defaults are 10 slots, `OPENCLAW_DOCKER_ALL_LIVE_LIMIT=6`, `OPENCLAW_DOCKER_ALL_NPM_LIMIT=8`, and `OPENCLAW_DOCKER_ALL_SERVICE_LIMIT=7`; tune `OPENCLAW_DOCKER_ALL_WEIGHT_LIMIT` or `OPENCLAW_DOCKER_ALL_DOCKER_LIMIT` only when the Docker host has more headroom. The runner performs a Docker preflight by default, removes stale OpenClaw E2E containers, prints status every 30 seconds, stores successful lane timings in `.artifacts/docker-tests/lane-timings.json`, and uses those timings to start longer lanes first on later runs. Use `OPENCLAW_DOCKER_ALL_DRY_RUN=1` to print the weighted lane manifest without building or running Docker.
|
||||
- `test:docker:all` builds the live Docker image once via `test:docker:live-build`, packs OpenClaw once as an npm tarball through `scripts/package-openclaw-for-docker.mjs`, then builds/reuses two `scripts/e2e/Dockerfile` images. The bare image is only the Node/Git runner for install/update/plugin-dependency lanes; those lanes mount the prebuilt tarball. The functional image installs the same tarball into `/app` for built-app functionality lanes. Docker lane definitions live in `scripts/lib/docker-e2e-scenarios.mjs`; planner logic lives in `scripts/lib/docker-e2e-plan.mjs`; `scripts/test-docker-all.mjs` executes the selected plan. The aggregate uses a weighted local scheduler: `OPENCLAW_DOCKER_ALL_PARALLELISM` controls process slots, while resource caps keep heavy live, npm-install, and multi-service lanes from all starting at once. If a single lane is heavier than the active caps, the scheduler can still start it when the pool is empty and then keeps it running alone until capacity is available again. Defaults are 10 slots, `OPENCLAW_DOCKER_ALL_LIVE_LIMIT=9`, `OPENCLAW_DOCKER_ALL_NPM_LIMIT=10`, and `OPENCLAW_DOCKER_ALL_SERVICE_LIMIT=7`; tune `OPENCLAW_DOCKER_ALL_WEIGHT_LIMIT` or `OPENCLAW_DOCKER_ALL_DOCKER_LIMIT` only when the Docker host has more headroom. The runner performs a Docker preflight by default, removes stale OpenClaw E2E containers, prints status every 30 seconds, stores successful lane timings in `.artifacts/docker-tests/lane-timings.json`, and uses those timings to start longer lanes first on later runs. Use `OPENCLAW_DOCKER_ALL_DRY_RUN=1` to print the weighted lane manifest without building or running Docker, or `node scripts/test-docker-all.mjs --plan-json` to print the CI plan for selected lanes, package/image needs, and credentials.
|
||||
- `Package Acceptance` is the GitHub-native package gate for "does this installable tarball work as a product?" It resolves one candidate package from `source=npm`, `source=ref`, `source=url`, or `source=artifact`, uploads it as `package-under-test`, then runs the reusable Docker E2E lanes against that exact tarball instead of repacking the selected ref. `workflow_ref` selects the trusted workflow/harness scripts, while `package_ref` selects the source commit/branch/tag to pack when `source=ref`; this lets current acceptance logic validate older trusted commits. Profiles are ordered by breadth: `smoke` is quick install/channel/agent plus gateway/config, `package` is the package/update/plugin contract and the default native replacement for most Parallels package/update coverage, `product` adds MCP channels, cron/subagent cleanup, OpenAI web search, and OpenWebUI, and `full` runs the release-path Docker chunks with OpenWebUI. Release validation runs the `package` profile for the target ref with Telegram package QA enabled. Targeted GitHub Docker rerun commands generated from artifacts include prior package artifact and prepared image inputs when available, so failed lanes can avoid rebuilding the package and images.
|
||||
- Package Acceptance legacy compatibility is capped at `2026.4.25` (`2026.4.25-beta.*` included). Through that cutoff, the harness tolerates only shipped-package metadata gaps: omitted private QA inventory entries, missing `gateway install --wrapper`, missing patch files in the tarball-derived git fixture, missing persisted `update.channel`, legacy plugin install-record locations, missing marketplace install-record persistence, and config metadata migration during `plugins update`. For packages after `2026.4.25`, those paths are strict failures.
|
||||
- Container smoke runners: `test:docker:openwebui`, `test:docker:onboard`, `test:docker:npm-onboard-channel-agent`, `test:docker:update-channel-switch`, `test:docker:session-runtime-context`, `test:docker:agents-delete-shared-workspace`, `test:docker:gateway-network`, `test:docker:browser-cdp-snapshot`, `test:docker:mcp-channels`, `test:docker:pi-bundle-mcp-tools`, `test:docker:cron-mcp-cleanup`, `test:docker:plugins`, `test:docker:plugin-update`, and `test:docker:config-reload` boot one or more real containers and verify higher-level integration paths.
|
||||
|
||||
The live-model Docker runners also bind-mount only the needed CLI auth homes (or all supported ones when the run is not narrowed), then copy them into the container home before the run so external-CLI OAuth can refresh tokens without mutating the host auth store:
|
||||
@@ -616,13 +667,14 @@ The live-model Docker runners also bind-mount only the needed CLI auth homes (or
|
||||
- CLI backend smoke: `pnpm test:docker:live-cli-backend` (script: `scripts/test-live-cli-backend-docker.sh`)
|
||||
- Codex app-server harness smoke: `pnpm test:docker:live-codex-harness` (script: `scripts/test-live-codex-harness-docker.sh`)
|
||||
- Gateway + dev agent: `pnpm test:docker:live-gateway` (script: `scripts/test-live-gateway-models-docker.sh`)
|
||||
- Observability smoke: `pnpm qa:otel:smoke` is a private QA source-checkout lane. It is intentionally not part of package Docker release lanes because the npm tarball omits QA Lab.
|
||||
- Open WebUI live smoke: `pnpm test:docker:openwebui` (script: `scripts/e2e/openwebui-docker.sh`)
|
||||
- Onboarding wizard (TTY, full scaffolding): `pnpm test:docker:onboard` (script: `scripts/e2e/onboard-docker.sh`)
|
||||
- Npm tarball onboarding/channel/agent smoke: `pnpm test:docker:npm-onboard-channel-agent` installs the packed OpenClaw tarball globally in Docker, configures OpenAI via env-ref onboarding plus Telegram by default, verifies doctor repairs activated plugin runtime deps, and runs one mocked OpenAI agent turn. Reuse a prebuilt tarball with `OPENCLAW_NPM_ONBOARD_PACKAGE_TGZ=/path/to/openclaw-*.tgz`, skip the host rebuild with `OPENCLAW_NPM_ONBOARD_HOST_BUILD=0`, or switch channel with `OPENCLAW_NPM_ONBOARD_CHANNEL=discord`.
|
||||
- Npm tarball onboarding/channel/agent smoke: `pnpm test:docker:npm-onboard-channel-agent` installs the packed OpenClaw tarball globally in Docker, configures OpenAI via env-ref onboarding plus Telegram by default, verifies doctor repairs activated plugin runtime deps, and runs one mocked OpenAI agent turn. Reuse a prebuilt tarball with `OPENCLAW_CURRENT_PACKAGE_TGZ=/path/to/openclaw-*.tgz`, skip the host rebuild with `OPENCLAW_NPM_ONBOARD_HOST_BUILD=0`, or switch channel with `OPENCLAW_NPM_ONBOARD_CHANNEL=discord`.
|
||||
- Update channel switch smoke: `pnpm test:docker:update-channel-switch` installs the packed OpenClaw tarball globally in Docker, switches from package `stable` to git `dev`, verifies the persisted channel and plugin post-update work, then switches back to package `stable` and checks update status.
|
||||
- Session runtime context smoke: `pnpm test:docker:session-runtime-context` verifies hidden runtime context transcript persistence plus doctor repair of affected duplicated prompt-rewrite branches.
|
||||
- Bun global install smoke: `bash scripts/e2e/bun-global-install-smoke.sh` packs the current tree, installs it with `bun install -g` in an isolated home, and verifies `openclaw infer image providers --json` returns bundled image providers instead of hanging. Reuse a prebuilt tarball with `OPENCLAW_BUN_GLOBAL_SMOKE_PACKAGE_TGZ=/path/to/openclaw-*.tgz`, skip the host build with `OPENCLAW_BUN_GLOBAL_SMOKE_HOST_BUILD=0`, or copy `dist/` from a built Docker image with `OPENCLAW_BUN_GLOBAL_SMOKE_DIST_IMAGE=openclaw-dockerfile-smoke:local`.
|
||||
- Installer Docker smoke: `bash scripts/test-install-sh-docker.sh` shares one npm cache across its root, update, and direct-npm containers. Update smoke defaults to npm `latest` as the stable baseline before upgrading to the candidate tarball. Non-root installer checks keep an isolated npm cache so root-owned cache entries do not mask user-local install behavior. Set `OPENCLAW_INSTALL_SMOKE_NPM_CACHE_DIR=/path/to/cache` to reuse the root/update/direct-npm cache across local reruns.
|
||||
- Installer Docker smoke: `bash scripts/test-install-sh-docker.sh` shares one npm cache across its root, update, and direct-npm containers. Update smoke defaults to npm `latest` as the stable baseline before upgrading to the candidate tarball. Override with `OPENCLAW_INSTALL_SMOKE_UPDATE_BASELINE=2026.4.22` locally, or with the Install Smoke workflow's `update_baseline_version` input on GitHub. Non-root installer checks keep an isolated npm cache so root-owned cache entries do not mask user-local install behavior. Set `OPENCLAW_INSTALL_SMOKE_NPM_CACHE_DIR=/path/to/cache` to reuse the root/update/direct-npm cache across local reruns.
- Install Smoke CI skips the duplicate direct-npm global update with `OPENCLAW_INSTALL_SMOKE_SKIP_NPM_GLOBAL=1`; run the script locally without that env when direct `npm install -g` coverage is needed.
- Agents delete shared workspace CLI smoke: `pnpm test:docker:agents-delete-shared-workspace` (script: `scripts/e2e/agents-delete-shared-workspace-docker.sh`) builds the root Dockerfile image by default, seeds two agents with one workspace in an isolated container home, runs `agents delete --json`, and verifies valid JSON plus retained workspace behavior. Reuse the install-smoke image with `OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_IMAGE=openclaw-dockerfile-smoke:local OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_SKIP_BUILD=1`.
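  Reusing the prebuilt install-smoke image, the invocation looks like:

  ```bash
  OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_IMAGE=openclaw-dockerfile-smoke:local \
  OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_SKIP_BUILD=1 \
  pnpm test:docker:agents-delete-shared-workspace
  ```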
- Gateway networking (two containers, WS auth + health): `pnpm test:docker:gateway-network` (script: `scripts/e2e/gateway-network-docker.sh`)
@@ -635,15 +687,15 @@ The live-model Docker runners also bind-mount only the needed CLI auth homes (or
Set `OPENCLAW_PLUGINS_E2E_CLAWHUB=0` to skip the live ClawHub block, or override the default package with `OPENCLAW_PLUGINS_E2E_CLAWHUB_SPEC` and `OPENCLAW_PLUGINS_E2E_CLAWHUB_ID`.
- Plugin update unchanged smoke: `pnpm test:docker:plugin-update` (script: `scripts/e2e/plugin-update-unchanged-docker.sh`)
- Config reload metadata smoke: `pnpm test:docker:config-reload` (script: `scripts/e2e/config-reload-source-docker.sh`)
- Bundled plugin runtime deps: `pnpm test:docker:bundled-channel-deps` builds a small Docker runner image by default, builds and packs OpenClaw once on the host, then mounts that tarball into each Linux install scenario. Reuse the image with `OPENCLAW_SKIP_DOCKER_BUILD=1`, skip the host rebuild after a fresh local build with `OPENCLAW_BUNDLED_CHANNEL_HOST_BUILD=0`, or point at an existing tarball with `OPENCLAW_CURRENT_PACKAGE_TGZ=/path/to/openclaw-*.tgz`. The full Docker aggregate and release-path `plugins-integrations` chunk pre-pack this tarball once, then shard bundled channel checks into independent lanes, including separate update lanes for Telegram, Discord, Slack, Feishu, memory-lancedb, and ACPX. Use `OPENCLAW_BUNDLED_CHANNELS=telegram,slack` to narrow the channel matrix when running the bundled lane directly, or `OPENCLAW_BUNDLED_CHANNEL_UPDATE_TARGETS=telegram,acpx` to narrow the update scenario. The lane also verifies that `channels.<id>.enabled=false` and `plugins.entries.<id>.enabled=false` suppress doctor/runtime-dependency repair.
- Narrow bundled plugin runtime deps while iterating by disabling unrelated scenarios, for example:
`OPENCLAW_BUNDLED_CHANNEL_SCENARIOS=0 OPENCLAW_BUNDLED_CHANNEL_UPDATE_SCENARIO=0 OPENCLAW_BUNDLED_CHANNEL_ROOT_OWNED_SCENARIO=0 OPENCLAW_BUNDLED_CHANNEL_SETUP_ENTRY_SCENARIO=0 pnpm test:docker:bundled-channel-deps`.
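The reuse and narrowing flags above should also compose (assuming they combine as documented); for example, skipping the image build, pointing at an existing tarball, and testing only two channels:

```bash
OPENCLAW_SKIP_DOCKER_BUILD=1 \
OPENCLAW_CURRENT_PACKAGE_TGZ=/path/to/openclaw-*.tgz \
OPENCLAW_BUNDLED_CHANNELS=telegram,slack \
pnpm test:docker:bundled-channel-deps
```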
To prebuild and reuse the shared functional image manually:
```bash
OPENCLAW_DOCKER_E2E_IMAGE=openclaw-docker-e2e-functional:local pnpm test:docker:e2e-build
OPENCLAW_DOCKER_E2E_IMAGE=openclaw-docker-e2e-functional:local OPENCLAW_SKIP_DOCKER_BUILD=1 pnpm test:docker:mcp-channels
```
Suite-specific image overrides such as `OPENCLAW_GATEWAY_NETWORK_E2E_IMAGE` still win when set. When `OPENCLAW_SKIP_DOCKER_BUILD=1` points at a remote shared image, the scripts pull it if it is not already local. The QR and installer Docker tests keep their own Dockerfiles because they validate package/install behavior rather than the shared built-app runtime.
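For instance, a suite-specific override takes precedence for its own suite even when the shared variable is also set (the image tag here is hypothetical):

```bash
OPENCLAW_GATEWAY_NETWORK_E2E_IMAGE=openclaw-gateway-e2e:local \
OPENCLAW_SKIP_DOCKER_BUILD=1 \
pnpm test:docker:gateway-network
```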
@@ -24,7 +24,7 @@ The [openclaw-ansible](https://github.com/openclaw/openclaw-ansible) repo is the
| **Network** | Internet connection for package installation |
| **Ansible** | 2.14+ (installed automatically by the quick-start script) |
## What you get
- **Firewall-first security** -- UFW + Docker isolation (only SSH + Tailscale accessible)
- **Tailscale VPN** -- secure remote access without exposing services publicly
@@ -33,7 +33,7 @@ The [openclaw-ansible](https://github.com/openclaw/openclaw-ansible) repo is the
- **Systemd integration** -- auto-start on boot with hardening
- **One-command setup** -- complete deployment in minutes
## Quick start
One-command install:
@@ -41,7 +41,7 @@ One-command install:
curl -fsSL https://raw.githubusercontent.com/openclaw/openclaw-ansible/main/install.sh | bash
```
## What gets installed
The Ansible playbook installs and configures:
@@ -86,7 +86,7 @@ backend. See [Sandboxing](/gateway/sandboxing) for details and other backends.
</Step>
</Steps>
### Quick commands
```bash
# Check service status
@@ -103,7 +103,7 @@ sudo -i -u openclaw
openclaw channels login
```
## Security architecture
The deployment uses a 4-layer defense model:
@@ -122,7 +122,7 @@ Only port 22 (SSH) should be open. All other services (gateway, Docker) are lock
Docker is installed for agent sandboxes (isolated tool execution), not for running the gateway itself. See [Multi-Agent Sandbox and Tools](/tools/multi-agent-sandbox-tools) for sandbox configuration.
## Manual installation
If you prefer manual control over the automation:
@@ -216,7 +216,7 @@ This is idempotent and safe to run multiple times.
</Accordion>
</AccordionGroup>
## Advanced configuration
For detailed security architecture and troubleshooting, see the openclaw-ansible repo:
@@ -35,7 +35,7 @@ Bun is an optional local runtime for running TypeScript directly (`bun run ...`,
</Step>
</Steps>
## Lifecycle scripts
Bun blocks dependency lifecycle scripts unless explicitly trusted. For this repo, the commonly blocked scripts are not required: