mirror of https://github.com/openclaw/openclaw.git
synced 2026-05-06 18:50:42 +00:00

Merge branch 'main' into meow/markdown-preview-polish
@@ -1,11 +1,6 @@
---
name: blacksmith-testbox
description: >
  Validate code changes against real CI when local execution is not
  enough. Use for CI-parity checks, secrets/services, migrations, or
  builds/tests that cannot run reliably on the local machine. Do not
  replace repo-documented local test/build loops just because this
  skill exists.
description: Run Blacksmith Testbox for CI-parity checks, secrets, hosted services, migrations, or builds local cannot reproduce.
---

# Blacksmith Testbox

@@ -1,6 +1,6 @@
---
name: openclaw-ghsa-maintainer
description: Maintainer workflow for OpenClaw GitHub Security Advisories (GHSA). Use when Codex needs to inspect, patch, validate, or publish a repo advisory, verify private-fork state, prepare advisory Markdown or JSON payloads safely, handle GHSA API-specific publish constraints, or confirm advisory publish success.
description: Inspect, patch, validate, publish, or confirm OpenClaw GHSA security advisories and private-fork state.
---

# OpenClaw GHSA Maintainer

@@ -1,6 +1,6 @@
---
name: openclaw-parallels-smoke
description: End-to-end Parallels smoke, upgrade, and rerun workflow for OpenClaw across macOS, Windows, and Linux guests. Use when Codex needs to run, rerun, debug, or interpret VM-based install, onboarding, gateway smoke tests, latest-release-to-main upgrade checks, fresh snapshot retests, or optional Discord roundtrip verification under Parallels.
description: Run, rerun, debug, or interpret OpenClaw Parallels install, onboarding, gateway smoke, and upgrade checks.
---

# OpenClaw Parallels Smoke
@@ -45,6 +45,9 @@ Use this skill for Parallels guest workflows and smoke interpretation. Do not lo
## npm install then update

- Preferred entrypoint: `pnpm test:parallels:npm-update`
- For a macOS-only published release update check, use:
  - `timeout --foreground 75m pnpm test:parallels:npm-update -- --platform macos --package-spec openclaw@<old-version> --update-target <target-version-or-tag> --json`

  This keeps the same-guest `openclaw update --tag ...` coverage and uses the shared macOS current-user/sudo fallback without starting Windows/Linux lanes.
- Required coverage: every release/update regression run must include both lanes:
  - fresh snapshot -> install requested package/baseline -> smoke
  - same guest baseline -> run the guest's installed `openclaw update ...` command -> smoke again
@@ -75,6 +78,7 @@ Use this skill for Parallels guest workflows and smoke interpretation. Do not lo
## macOS flow

- Preferred entrypoint: `pnpm test:parallels:macos`
- `parallels-macos-smoke.sh --mode fresh --target-package-spec openclaw@<version>` is an install smoke only. For published old-version -> new-version update coverage on macOS, prefer the npm-update wrapper with `--platform macos`; `parallels-macos-smoke.sh --mode upgrade --target-package-spec ...` installs the target package and does not exercise the baseline CLI's updater.
- Default upgrade coverage on macOS should now include: fresh snapshot -> site installer pinned to the latest stable tag -> `openclaw update --channel dev` on the guest. Treat this as part of the default Tahoe regression plan, not an optional side quest.
- `parallels-macos-smoke.sh --mode upgrade` should run that release-to-dev lane by default. Keep the older host-tgz upgrade path only when the caller explicitly passes `--target-package-spec`.
- Because the default upgrade lane no longer needs a host tgz, skip `npm pack` + host HTTP server startup for `--mode upgrade` unless `--target-package-spec` is set. Keep the pack/server path for `fresh` and `both`.
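The pack-or-skip rule above can be sketched as a small guard; the function name and argument handling here are illustrative, not harness API:

```shell
# Sketch of the rule above: which --mode needs the host tgz
# (npm pack + host HTTP server)? Helper name is hypothetical.
needs_host_tgz() {
  local mode="$1" target_spec="$2"
  case "$mode" in
    fresh|both) return 0 ;;                # always pack and serve
    upgrade)    [ -n "$target_spec" ] ;;   # only with --target-package-spec
    *)          return 1 ;;
  esac
}
```

Under this sketch, `--mode upgrade` without an explicit spec runs the release-to-dev lane with no host tarball at all.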
@@ -144,6 +148,7 @@ Use this skill for Parallels guest workflows and smoke interpretation. Do not lo
- `--discord-token-env`
- `--discord-guild-id`
- `--discord-channel-id`
- After a successful Discord smoke/roundtrip, shut down the guest VM before handoff (`prlctl stop "$VM_NAME"` or the concrete VM name). The macOS smoke harness should do this automatically after successful Discord proof; still stop the VM manually after ad-hoc Discord checks. Do not leave the Discord-configured guest running; it can keep reading/posting in `#maintainer` and spam Discord after the proof is complete.
- Keep the Discord token only in a host env var.
- Use installed `openclaw message send/read`, not `node openclaw.mjs message ...`.
- Set `channels.discord.guilds` as one JSON object, not dotted config paths with snowflakes.

@@ -1,6 +1,6 @@
---
name: openclaw-pr-maintainer
description: Maintainer workflow for reviewing, triaging, preparing, closing, or landing OpenClaw pull requests and related issues. Use when Codex needs to validate bug-fix claims, search for related issues or PRs, apply or recommend close/reason labels, prepare GitHub comments safely, check review-thread follow-up, or perform maintainer-style PR decision making before merge or closure.
description: Review, triage, close, label, comment on, or land OpenClaw PRs/issues with maintainer evidence checks.
---

# OpenClaw PR Maintainer

@@ -1,6 +1,6 @@
---
name: openclaw-qa-testing
description: Run, watch, debug, and extend OpenClaw QA testing with qa-lab and qa-channel. Use when Codex needs to execute the repo-backed QA suite, inspect live QA artifacts, debug failing scenarios, add new QA scenarios, or explain the OpenClaw QA workflow. Prefer the live OpenAI lane with regular openai/gpt-5.4 in fast mode; do not use gpt-5.4-pro or gpt-5.4-mini unless the user explicitly overrides that policy.
description: Run, watch, debug, extend, or explain OpenClaw qa-lab and qa-channel scenarios, artifacts, and live lanes.
---

# OpenClaw QA Testing
@@ -49,6 +49,66 @@ pnpm openclaw qa suite \
5. If the user wants to watch the live UI, find the current `openclaw-qa` listen port and report `http://127.0.0.1:<port>`.
6. If a scenario fails, fix the product or harness root cause, then rerun the full lane.

## QA credentials and 1Password

- Use `op` only inside `tmux` for QA secret lookup in this repo.
- Quick auth check inside tmux:

  ```bash
  op account list
  ```

- Direct Telegram npm live test secrets currently live in 1Password item:
  - vault: `OpenClaw`
  - item: `Telegram E2E`
- That item is the first place to look for:
  - `OPENCLAW_QA_TELEGRAM_DRIVER_BOT_TOKEN`
  - `OPENCLAW_QA_TELEGRAM_SUT_BOT_TOKEN`
  - `OPENCLAW_QA_PROVIDER_MODE`
  - `OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC`
- Convex QA secrets currently live in 1Password items:
  - vault: `OpenClaw`
  - item: `OPENCLAW_QA_CONVEX_SITE_URL`
  - item: `OPENCLAW_QA_CONVEX_SECRET_MAINTAINER`
  - item: `OPENCLAW_QA_CONVEX_SECRET_CI`
- Additional related notes/login items seen during QA credential work:
  - vault: `Private`
  - items: `OPENCLAW QA`, `Convex`, `Telegram`
- If a required value is missing from those notes:
  - do not guess
  - ask the maintainer/operator for the current value or the current 1Password item name
  - for Telegram direct runs, `OPENCLAW_QA_TELEGRAM_GROUP_ID` may be stored separately from `Telegram E2E`
  - for Convex runs, the leased Telegram credential should provide the Telegram group id and bot tokens together; do not require a separate `OPENCLAW_QA_TELEGRAM_GROUP_ID`
  - for Convex runs, prefer `OpenClaw/OPENCLAW_QA_CONVEX_SITE_URL`; if that is stale or unclear, ask for the active pool URL before running
- Prefer direct Telegram envs for the npm Telegram Docker lane when available:

  ```bash
  OPENCLAW_QA_TELEGRAM_GROUP_ID="..." \
  OPENCLAW_QA_TELEGRAM_DRIVER_BOT_TOKEN="..." \
  OPENCLAW_QA_TELEGRAM_SUT_BOT_TOKEN="..." \
  OPENCLAW_QA_PROVIDER_MODE="mock-openai" \
  OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC="openclaw@beta" \
  pnpm test:docker:npm-telegram-live
  ```

- Prefer Convex mode when the goal is stable shared QA infra:
  - round-robin credential leasing
  - thinner wrapper for channel-specific setup
  - CLI/admin flows around the pooled credentials
- Live npm Telegram Docker lane note:
  - `scripts/e2e/npm-telegram-live-runner.ts` reads `OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE`
  - do not assume `OPENCLAW_QA_PROVIDER_MODE` is consumed by that wrapper
  - if a 1Password note only gives `OPENCLAW_QA_PROVIDER_MODE`, map it explicitly to `OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE` before running the Docker lane
- Verified live shape:
  - Convex mode can pass the real Docker lane without direct Telegram env vars
  - leased Telegram payload includes the group id coupled to the driver/SUT tokens
  - a real run of `pnpm test:docker:npm-telegram-live` passed with:
    - `OPENCLAW_QA_CREDENTIAL_SOURCE=convex`
    - `OPENCLAW_QA_CREDENTIAL_ROLE=maintainer`
    - `OPENCLAW_QA_CONVEX_SITE_URL`
    - `OPENCLAW_QA_CONVEX_SECRET_MAINTAINER`
    - `OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE=mock-openai`
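The provider-mode mapping noted above can be done with a plain parameter-expansion fallback so an explicitly exported wrapper value still wins; this is a sketch, and only the two env var names come from this skill:

```shell
# Start clean so the fallback below is what gets demonstrated.
unset OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE

# Value as it might arrive from a 1Password note.
OPENCLAW_QA_PROVIDER_MODE="mock-openai"

# Map it onto the variable the npm Telegram wrapper actually reads,
# without clobbering an explicitly exported value.
export OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE="${OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE:-$OPENCLAW_QA_PROVIDER_MODE}"
```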

## Character evals

Use `qa character-eval` for style/persona/vibe checks across multiple live models.

@@ -1,6 +1,6 @@
---
name: openclaw-release-maintainer
description: Maintainer workflow for OpenClaw releases, prereleases, changelog release notes, and publish validation. Use when Codex needs to prepare or verify stable or beta release steps, align version naming, assemble release notes, check release auth requirements, or validate publish-time commands and artifacts.
description: Prepare or verify OpenClaw stable/beta releases, changelogs, release notes, publish commands, and artifacts.
---

# OpenClaw Release Maintainer
@@ -70,6 +70,22 @@ Use this skill for release and publish-time workflow. Keep ordinary development
- Every stable OpenClaw release ships the npm package and macOS app together.
  Beta releases normally ship npm/package artifacts first and skip mac app
  build/sign/notarize unless the operator requests mac beta validation.
- Do not let the slower macOS signing/notary path block npm publication once
  the npm preflight has passed. Keep mac validation/publish running in
  parallel, publish npm from the successful npm preflight, then start published
  npm install/update, Docker, and Parallels verification while mac artifacts
  continue.
- Mac packaging may be built from a slight release-branch variation of the
  tagged commit when the delta is mac packaging, signing, workflow, or
  validation-only release machinery. If mac packaging needs release-branch-only
  fixes after the stable npm package or GitHub tag is already published, do not
  create a `vYYYY.M.D-N` correction tag just to change the workflow source.
  Dispatch the private mac workflows for the original `tag=vYYYY.M.D` with
  `source_ref=release/YYYY.M.D` and `public_release_branch=release/YYYY.M.D`;
  provenance checks must prove the source SHA descends from the tag and
  validation/preflight use the same source. Reserve `vYYYY.M.D-N` correction
  tags for emergency hotfixes that must publish a new npm package/release
  identity, not for ordinary mac-only packaging recovery.
- The production Sparkle feed lives at `https://raw.githubusercontent.com/openclaw/openclaw/main/appcast.xml`, and the canonical published file is `appcast.xml` on `main` in the `openclaw` repo.
- That shared production Sparkle feed is stable-only. Beta mac releases may
  upload assets to the GitHub prerelease, but they must not replace the shared
@@ -82,6 +98,10 @@ Use this skill for release and publish-time workflow. Keep ordinary development
## Build changelog-backed release notes

- Changelog entries should be user-facing, not internal release-process notes.
- GitHub release and prerelease bodies must use the full matching
  `CHANGELOG.md` version section, not highlights or an excerpt. When creating
  or editing a release, extract from `## YYYY.M.D` through the line before the
  next level-2 heading and use that complete block as the release notes.
- When cutting a mac release with a beta GitHub prerelease:
  - tag `vYYYY.M.D-beta.N` from the release commit
  - create a prerelease titled `openclaw YYYY.M.D-beta.N`
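The extraction rule above (the full `## YYYY.M.D` section, stopping before the next level-2 heading) can be sketched with awk; the helper name is made up:

```shell
# Print the complete `## <version>` section from a changelog file,
# stopping before the next level-2 heading (helper name illustrative).
extract_release_notes() {
  local version="$1" file="$2"
  awk -v v="$version" '
    $0 == "## " v { grab = 1; print; next }
    grab && /^## / { exit }   # next level-2 heading: stop
    grab           { print }
  ' "$file"
}
```

Feeding that output straight into the release body keeps the notes a complete block rather than an excerpt.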
@@ -104,14 +124,33 @@ live`; keep it clearly beta and avoid implying stable promotion.
- Lead with user-visible capabilities, then important integrations, then
  reliability/security/install fixes. Compress "lots of fixes" into one
  readable bullet.
- Read the full changelog section before drafting. Do not lead with coverage,
  CI, validation, or internal release mechanics unless the release is explicitly
  about those. Peter prefers concrete user wins: features, integrations,
  workflow improvements, and practical reliability fixes.
- Tone: high-signal, slightly cheeky, confident, not corporate. One joke is
  enough. Avoid punching down, insulting users, or promising what was not
  verified.
- Length: release tweets are always standard tweets under 280 characters. Trim
  to 3-4 bullets and count the final text before posting.
- Links/media: include the GitHub release or changelog link at the end. Add a
  short docs follow-up reply only when there is a standout feature that needs
  setup instructions.
- Peter likes dry, compact taglines when they feel earned. Good example:
  `Big release, tiny release notes... kidding.` Keep the joke short and let the
  feature bullets carry the tweet; do not turn the punchline into a second
  paragraph or a forced bit.
- Length: release tweets are always standard tweets under 280 characters, with
  room for one URL. Trim to 3-4 bullets and count the final text before posting.
- Links/media: include the GitHub release or changelog link at the end of the
  first release tweet.
- Thread follow-ups: if doing a thread, keep the first release tweet as the
  compact launch post, then publish one focused feature explainer per reply.
  Follow-up replies should not repeat "new in VERSION" or the version number
  when the thread context already makes it obvious.
- Every follow-up tweet should include a docs URL for that specific feature.
  Prefer a bare URL over `Docs: <url>` unless the label is needed for clarity.
  Keep follow-ups concise: around 160-220 raw characters is usually the sweet
  spot; under 280 is the hard cap. If a URL makes a tweet fail, trim prose
  before dropping the URL.
  Prefer explaining diagnostics, trajectory/export, provider setup, model
  commands, or other setup-heavy features in follow-ups instead of overloading
  the first release tweet.
- Hotfix/correction: be direct and accountable. State what slipped, what is
  fixed, and the new version. Keep jokes out of incident-style posts.
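The 280-character cap above is easy to check mechanically before posting; this sketch counts raw characters and does not model X's URL shortening, so treat it as a conservative pre-check:

```shell
# Rough pre-post length gate for a release tweet draft.
tweet_ok() {
  local text="$1"
  [ "${#text}" -le 280 ]   # raw character count; hard cap from the doc
}
```

Typical use: `tweet_ok "$draft" || echo "trim before posting"`.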

@@ -201,9 +240,18 @@ node --import tsx scripts/openclaw-npm-postpublish-verify.ts <published-version>
- Source Peter's profile before live release validation so OpenAI and Anthropic
  credentials are available without printing secrets:
  `set -a; source "$HOME/.profile"; set +a`.
- Release QA and Parallels validation for this train must use both
- Parallels validation and any local live model QA for this train must use both
  `OPENAI_API_KEY` and `ANTHROPIC_API_KEY`. If either is missing after sourcing
  `.profile`, stop before starting the long lanes and report the missing key.
  `.profile`, stop before starting those local long lanes and report the
  missing key.
- Live credentialed channel QA is the GitHub Actions workflow
  `QA-Lab - All Lanes` (`.github/workflows/qa-live-telegram-convex.yml`), not a
  local substitute. Dispatch it from Actions against the release tag and wait
  for it to pass before npm preflight/publish readiness. Use a SHA only when it
  satisfies the workflow's secret-bearing trust gate: main ancestor or open PR
  head. It runs the QA Lab mock parity gate plus live Matrix and live Telegram
  lanes using the `qa-live-shared` environment; Telegram uses Convex CI
  credential leases.
- Default release checks:
  - `pnpm check`
  - `pnpm check:test-types`
@@ -221,23 +269,42 @@ node --import tsx scripts/openclaw-npm-postpublish-verify.ts <published-version>
- all Parallels install/update tests:
  `pnpm test:parallels:npm-update -- --json` plus any needed individual
  rerun lanes from `openclaw-parallels-smoke`
- all QA release validation:
  OpenAI live suite with `openai/gpt-5.4` in fast mode, Anthropic live suite
  with `anthropic/claude-opus-4-6`, and the repo-backed character evals
- all QA release validation: dispatch GitHub Actions > `QA-Lab - All Lanes`
  against the release tag and require success. This is the release gate for
  live credentialed Matrix/Telegram channel coverage. Use a SHA only when it
  satisfies the workflow trust gate. Run local OpenAI/Anthropic suites or
  repo-backed character evals only when the operator asks for extra model
  coverage or a failure needs local debugging.
- Post-published beta verification roster:
  - `node --import tsx scripts/openclaw-npm-postpublish-verify.ts <beta-version>`
  - install/update smoke against the published beta channel
  - Docker install/update coverage that exercises the published beta package
  - published npm Telegram proof: dispatch Actions > `NPM Telegram Beta E2E`
    from `main` with `package_spec=openclaw@<beta-version>` and
    `provider_mode=mock-openai`, approve `npm-release`, and require success.
    This is the default button path for installed-package onboarding,
    Telegram setup, and real Telegram E2E against the published npm package.
    Use the local `pnpm test:docker:npm-telegram-live` lane with the matching
    `OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC` and Convex CI env only as a fallback
    or debugging path.
  - Parallels published beta install/update coverage with both OpenAI and
    Anthropic provider keys available
- Parallels install/update proof must keep plugin installs enabled unless the
  operator explicitly scopes a harness-only isolation check; a lane that
  disables bundled plugin installs is not valid plugin/dependency release
  evidence.
- targeted QA reruns only for areas touched by fixes after the full pre-npm
  roster, unless the operator requests the full QA roster again
  roster, unless the operator requests the full QA roster again. If the fix
  touches live channel QA, credential plumbing, Matrix, Telegram, or the QA
  harness, rerun Actions > `QA-Lab - All Lanes`.
- Check all release-related build surfaces touched by the release, not only the npm package.
- For beta-style full e2e batteries, hard-cap top-level long lanes instead of letting them run indefinitely. Use host `timeout --foreground`/`gtimeout --foreground` caps such as:
  - `45m` for `OPENCLAW_INSTALL_SMOKE_SKIP_NONROOT=1 pnpm test:install:smoke`
  - `90m` for `pnpm test:docker:all`
  - `60m` each for standalone Docker live lanes
  - `180m` for the full QA live OpenAI + Anthropic roster
  - `180m` for local full QA live OpenAI + Anthropic rosters when explicitly
    requested; the default release channel QA gate is Actions >
    `QA-Lab - All Lanes`
  - Parallels caps from the `openclaw-parallels-smoke` skill

  If a lane hits its cap, stop and inspect/fix the affected lane before continuing; do not continue to wait on the same process.
- Actual npm install/update phases are capped at 5 minutes. If `npm install -g`, installer package install, or `openclaw update` takes longer than 300s in release e2e, stop treating the run as healthy progress and debug the installer/updater or harness.
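The hard caps above all follow one pattern: wrap the lane in GNU `timeout`, which exits with status 124 when the cap is hit. A minimal sketch (the wrapper name is illustrative):

```shell
# Run a lane under a hard cap; exit status 124 means the cap was hit
# and the lane should be inspected, not waited on further.
run_capped() {
  local cap="$1"; shift
  timeout --foreground "$cap" "$@"
}

# Example shape, using a cap from the roster above:
# run_capped 90m pnpm test:docker:all
```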
@@ -257,7 +324,14 @@ node --import tsx scripts/openclaw-npm-postpublish-verify.ts <published-version>
  public release assets so the updater feed cannot lag the published binaries.
- Serialize stable appcast-producing runs across tags so two releases do not
  generate replacement `appcast.xml` files from the same stale seed.
- For stable releases, confirm the latest beta already passed the broader release workflows before cutting stable.
- For stable releases, rely primarily on the latest beta's broader release
  workflow confidence. When promoting the matching non-beta build to npm
  `latest`, prefer a light time-bounded verification pass: published npm
  postpublish verify, Docker install/update smoke, macOS-only Parallels
  install/update smoke, and required QA signal. Do not rerun the full
  Docker/Parallels matrix unless the beta evidence is stale, the stable build
  differs materially from beta, or the operator explicitly asks for full
  retesting.
- If any required build, packaging step, or release workflow is red, do not say the release is ready.

## Use the right auth flow

@@ -267,6 +341,29 @@ node --import tsx scripts/openclaw-npm-postpublish-verify.ts <published-version>
  `openclaw/releases-private/.github/workflows/openclaw-npm-dist-tags.yml`
  workflow because `npm dist-tag` management needs `NPM_TOKEN`, while the
  public npm release workflow stays OIDC-only.
- Prefer fixing the private workflow token path over any local 1Password
  fallback. The desired setup is a granular npm token stored as the private
  repo's `NPM_TOKEN` secret, scoped to the `openclaw` package with read/write
  and 2FA bypass for automation.
- If the private dist-tag workflow cannot promote because `NPM_TOKEN` is absent
  or stale, use the local tmux + 1Password fallback:
  - Start or reuse a tmux session so interactive `npm login` and OTP prompts
    are observable and recoverable.
  - Hard rule: never run `op` directly in the main agent shell during release
    work. Any 1Password CLI use must happen inside that tmux session so prompts
    and alerts are contained and observable.
  - Use the 1Password item `op://Private/Npmjs` for npm credentials and OTP.
    Do not print passwords, tokens, or OTPs to the transcript; send them through
    tmux buffers, env vars scoped to the tmux command, or `expect` with
    `log_user 0`.
  - Re-authenticate npm inside that tmux session with
    `npm login --auth-type=legacy`, then confirm `npm whoami` reports
    `steipete`.
  - Promote with a fresh OTP:
    `npm dist-tag add openclaw@YYYY.M.D latest --otp "$OTP"`.
  - Verify with a cache-bypassed registry read, for example:
    `npm view openclaw dist-tags --json --prefer-online --cache /tmp/openclaw-npm-cache-verify-$$`
    and `npm view openclaw@latest version dist.tarball --json --prefer-online`.
- Direct stable publishes can also use that private dist-tag workflow to point
  `beta` at the already-published `latest` version when the operator wants both
  tags aligned immediately.
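Before burning a fresh code on the promote step, a tiny shape check catches paste mistakes; npm OTPs are 6-digit TOTP codes, and the guard name here is made up:

```shell
# Sanity-check an OTP's shape before the promote command consumes it.
# Never echo the value itself; only the pass/fail result should surface.
valid_otp() {
  case "$1" in
    [0-9][0-9][0-9][0-9][0-9][0-9]) return 0 ;;
    *) return 1 ;;
  esac
}
```

Typical use: `valid_otp "$OTP" && npm dist-tag add openclaw@YYYY.M.D latest --otp "$OTP"`, keeping the OTP out of the transcript as required above.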
@@ -383,73 +480,82 @@ node --import tsx scripts/openclaw-npm-postpublish-verify.ts <published-version>
6. Create `release/YYYY.M.D` from that post-changelog `main` commit.
7. Make every repo version location match the beta tag before creating it.
8. Commit release preparation changes on the release branch and push the branch.
9. Run the full pre-npm beta test roster from the release branch before any npm
    preflight or publish.
9. Run the local build, Docker, and Parallels parts of the full pre-npm beta
    test roster from the release branch before any npm preflight or publish.
10. For beta releases, skip mac app build/sign/notarize unless beta scope or a
    release blocker specifically requires it. For stable releases, include the
    mac app, signing, notarization, and appcast path.
11. Confirm the target npm version is not already published.
12. Create and push the git tag from the release branch.
13. Create or refresh the matching GitHub release.
14. Start `.github/workflows/openclaw-npm-release.yml` from the release branch
14. Dispatch Actions > `QA-Lab - All Lanes` against the release tag and wait
    for the mock parity, live Matrix, and live Telegram credentialed-channel
    lanes to pass.
15. Start `.github/workflows/openclaw-npm-release.yml` from the release branch
    with `preflight_only=true`
    and choose the intended `npm_dist_tag` (`beta` default; `latest` only for
    an intentional direct stable publish). Wait for it to pass. Save that run id
    because the real publish requires it to reuse the prepared npm tarball.
15. For stable releases, start `.github/workflows/macos-release.yml` in
16. For stable releases, start `.github/workflows/macos-release.yml` in
    `openclaw/openclaw` and wait for the public validation-only run to pass.
16. For stable releases, start
17. For stable releases, start
    `openclaw/releases-private/.github/workflows/openclaw-macos-validate.yml`
    with the same tag and wait for the private mac validation lane to pass.
17. For stable releases, start
18. For stable releases, start
    `openclaw/releases-private/.github/workflows/openclaw-macos-publish.yml`
    with `preflight_only=true` and wait for it to pass. Save that run id because
    the real publish requires it to reuse the notarized mac artifacts.
18. If any preflight or validation run fails, fix the issue on a new commit,
19. If any preflight or validation run fails, fix the issue on a new commit,
    delete the tag and matching GitHub release, recreate them from the fixed
    commit, and rerun all relevant preflights from scratch before continuing.
    Never reuse old preflight results after the commit changes. For pushed or
    published beta tags, do not delete/recreate; increment to the next beta tag.
19. Start `.github/workflows/openclaw-npm-release.yml` from the same branch with
20. Start `.github/workflows/openclaw-npm-release.yml` from the same branch with
    the same tag for the real publish, choose `npm_dist_tag` (`beta` default,
    `latest` only when you intentionally want direct stable publish), keep it
    the same as the preflight run, and pass the successful npm
    `preflight_run_id`.
20. Wait for `npm-release` approval from `@openclaw/openclaw-release-managers`.
21. Run postpublish verification:
21. Wait for `npm-release` approval from `@openclaw/openclaw-release-managers`.
22. Run postpublish verification:
    `node --import tsx scripts/openclaw-npm-postpublish-verify.ts <published-version>`.
22. Run the post-published beta verification roster. If any lane fails after
23. Run the post-published beta verification roster. If any lane fails after
    the beta tag/package is pushed or published, fix, commit/push/pull,
    increment to the next beta tag, and restart at the full pre-npm beta test
    roster for the new beta. If a pre-npm lane fails before any tag/package
    leaves the machine, fix and rerun the same intended beta attempt. Repeat up
    to the operator's authorized beta-attempt limit, normally 4.
23. Announce the beta/stable release on Discord best-effort using Peter's bot
    roster for the new beta. The roster includes the manual Actions >
    `NPM Telegram Beta E2E` workflow against the exact published beta package.
    If a pre-npm lane fails before any tag/package leaves the machine, fix and
    rerun the same intended beta attempt. Repeat up to the operator's
    authorized beta-attempt limit, normally 4.
24. Announce the beta/stable release on Discord best-effort using Peter's bot
    token from `.profile`.
24. If the operator requested beta only, stop after beta verification and the
25. If the operator requested beta only, stop after beta verification and the
    announcement.
25. If the stable release was published to `beta`, start the private
26. If the stable release was published to `beta`, use the light stable
    promotion roster when the matching beta already carried the full confidence
    pass: published npm postpublish verify, Docker install/update smoke,
    macOS-only Parallels install/update smoke, and required QA signal.
    Then start the private
    `openclaw/releases-private/.github/workflows/openclaw-npm-dist-tags.yml`
    workflow after beta validation passes to promote that stable version from
    `beta` to `latest`, then verify `latest` now points at that version.
26. If the stable release was published directly to `latest` and `beta` should
    workflow to promote that stable version from `beta` to `latest`, then
    verify `latest` now points at that version.
27. If the stable release was published directly to `latest` and `beta` should
    follow it, start that same private dist-tag workflow to point `beta` at the
    stable version, then verify both `latest` and `beta` point at that version.
27. For stable releases, start
28. For stable releases, start
    `openclaw/releases-private/.github/workflows/openclaw-macos-publish.yml`
    for the real publish with the successful private mac `preflight_run_id` and
    wait for success.
28. Verify the successful real private mac run uploaded the `.zip`, `.dmg`,
29. Verify the successful real private mac run uploaded the `.zip`, `.dmg`,
    and `.dSYM.zip` artifacts to the existing GitHub release in
    `openclaw/openclaw`.
29. For stable releases, download `macos-appcast-<tag>` from the successful
30. For stable releases, download `macos-appcast-<tag>` from the successful
    private mac run, update `appcast.xml` on `main`, and verify the feed. Merge
    or cherry-pick release branch changes back to `main` after stable succeeds.
30. For beta releases, publish the mac assets only when intentionally requested;
31. For beta releases, publish the mac assets only when intentionally requested;
    expect no shared production
    `appcast.xml` artifact and do not update the shared production feed unless a
    separate beta feed exists.
31. After publish, verify npm and the attached release artifacts.
32. After publish, verify npm and the attached release artifacts.

## GHSA advisory work

@@ -1,6 +1,6 @@
---
name: openclaw-secret-scanning-maintainer
description: Maintainer-only workflow for handling GitHub Secret Scanning alerts on OpenClaw. Use when Codex needs to triage, redact, clean up, and resolve secret leakage found in issue comments, issue bodies, PR comments, or other GitHub content.
description: Triage, redact, clean up, and resolve OpenClaw GitHub Secret Scanning alerts in issues or PRs.
---

# OpenClaw Secret Scanning Maintainer

@@ -1,6 +1,6 @@
---
name: openclaw-test-heap-leaks
description: Investigate `pnpm test` memory growth, Vitest worker OOMs, and suspicious RSS increases in OpenClaw using the `scripts/test-parallel.mjs` heap snapshot tooling. Use when Codex needs to reproduce test-lane memory growth, collect repeated `.heapsnapshot` files, compare snapshots from the same worker PID, triage likely transformed-module retention versus likely runtime leaks, and fix or reduce the impact by patching cleanup logic or isolating hotspot tests.
description: Investigate OpenClaw pnpm test memory growth, Vitest OOMs, RSS spikes, and heap snapshot deltas.
---

# OpenClaw Test Heap Leaks

@@ -1,6 +1,6 @@
---
name: openclaw-test-performance
description: Benchmark, diagnose, and optimize OpenClaw test performance without losing coverage. Use when Codex needs to reassess `pnpm test`, compare grouped Vitest reports, identify CPU/memory/import hotspots, fix slow tests or cold runtime paths, preserve behavior proofs, update the performance report, add AGENTS guardrails, and make scoped commits/pushes for OpenClaw test-speed work.
description: Benchmark, diagnose, and optimize OpenClaw test runtime, import hotspots, CPU/RSS, and slow coverage paths.
---

# OpenClaw Test Performance

@@ -1,6 +1,6 @@
---
name: optimizetests
description: Optimize OpenClaw test runtime end to end. Use when the user asks for /optimizetests, slow-test review, import optimization, deduping tests, moving misplaced core coverage to extensions, or reducing CI/test wall time without adding shards or dropping coverage.
description: Optimize OpenClaw slow tests, imports, misplaced coverage, and CI wall time without dropping coverage.
---

# Optimize Tests

@@ -1,6 +1,6 @@
---
name: parallels-discord-roundtrip
description: Run the macOS Parallels smoke harness with Discord end-to-end roundtrip verification, including guest send, host verification, host reply, and guest readback.
description: Run macOS Parallels smoke with Discord send, host verification, host reply, and guest readback proof.
---

# Parallels Discord Roundtrip

@@ -50,6 +50,7 @@ pnpm test:parallels:macos \
- Avoid `prlctl enter` / expect for long Discord setup scripts; it line-wraps/corrupts long commands. Use `prlctl exec --current-user /bin/sh -lc ...` for the Discord config phase.
- Full 3-OS sweeps: the shared build lock is safe in parallel, but snapshot restore is still a Parallels bottleneck. Prefer serialized Windows/Linux restore-heavy reruns if the host is already under load.
- Harness cleanup deletes the temporary Discord smoke messages at exit.
- After a successful Discord roundtrip, shut down the macOS guest before handoff (`prlctl stop "macOS Tahoe"`). The macOS smoke harness should do this automatically after successful Discord proof; still stop the VM manually after ad-hoc Discord checks. Do not leave the Discord-configured VM running; it can keep reading/posting in `#maintainer` and spam Discord after the proof is complete.
- Per-phase logs: `/tmp/openclaw-parallels-smoke.*`
- Machine summary: pass `--json`
- If roundtrip flakes, inspect `fresh.discord-roundtrip.log` and `discord-last-readback.json` in the run dir first.

@@ -1,6 +1,6 @@
---
name: security-triage
description: Triage GitHub security advisories for OpenClaw with high-confidence close/keep decisions, exact tag and commit verification, trust-model checks, optional hardening notes, and a final reply ready to post and copy to clipboard.
description: Triage OpenClaw security advisories, drafts, and GHSA reports with shipped-tag and trust-model proof.
---

# Security Triage

@@ -45,6 +45,17 @@ For each advisory, decide:
- `keep open`
- `keep open but narrow`

Default to one advisory at a time when comments/closures are involved:

1. Review exactly one GHSA.
2. Print the GHSA URL first.
3. Summarize the decision and evidence for discussion.
4. Draft one maintainer-ready comment.
5. Copy only that one comment to the clipboard.
6. Stop and wait for Peter to post/discuss before moving to the next GHSA.

Do not batch multiple close comments unless Peter explicitly asks for a batch.

Check in this order:

1. Trust model
@@ -60,6 +71,11 @@ Check in this order:
4. Functional tradeoff
   - If a hardening change would reduce intended user functionality, call that out before proposing it.
   - Prefer fixes that preserve user workflows over deny-by-default regressions unless the boundary demands it.
5. Hardening follow-up
   - Even when the GHSA should close, ask whether a narrow hardening change would reduce footguns without changing the documented trust boundary.
   - Separate hardening from vulnerability status. Phrase it as "not required for GHSA closure, but worth considering".
   - Bring up hardening only if it is concrete, low-risk, and preserves intended maintainer/operator workflows.
   - If hardening would require a product/security model change, say that explicitly and do not imply it is a required fix for closure.

## Response Format

@@ -76,9 +92,22 @@ When preparing a maintainer-ready close reply:

Keep tone firm, specific, non-defensive.

## Discussion Mode

When Peter is manually posting GHSA comments, use this flow:

1. Show the URL.
2. Give a terse verdict (`close`, `keep open`, or `keep open but narrow`).
3. List the strongest evidence bullets.
4. State any optional hardening follow-up separately from the close reason.
5. Copy the proposed comment body with `pbcopy`.
6. End the reply after the one advisory. Do not continue to the next advisory until Peter says to continue.

If the GitHub API cannot post comments for private advisories, say so once and keep using clipboard/UI paste.

## Clipboard Step

After drafting the final post body, copy it:
After drafting the final post body for the current advisory, copy it:

```bash
pbcopy <<'EOF'
@@ -86,7 +115,7 @@ pbcopy <<'EOF'
EOF
```

Tell the user that the clipboard now contains the proposed response.
Tell the user that the clipboard now contains the proposed response for that advisory.

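The clipboard step above assumes macOS `pbcopy`. A hedged sketch of the same step with a file fallback for hosts without it (the fallback path and the `FORCE_FILE` switch are illustrative assumptions, not part of the skill):

```shell
# Copy the drafted reply via pbcopy when present; otherwise save it to a
# file so the text is still retrievable. FORCE_FILE=1 forces the fallback
# so the demo behaves the same on any host.
copy_reply() {
  if [ "${FORCE_FILE:-0}" != "1" ] && command -v pbcopy >/dev/null 2>&1; then
    pbcopy && echo "copied to clipboard"
  else
    cat > /tmp/advisory-reply.txt && echo "saved to /tmp/advisory-reply.txt"
  fi
}

FORCE_FILE=1 copy_reply <<'EOF'
Closing: the report targets behavior inside the documented trust boundary.
EOF
```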
## Useful Commands

@@ -1,6 +1,6 @@
---
name: tag-duplicate-prs-issues
description: Maintainer workflow for deciding whether an OpenClaw pull request or issue is a duplicate, gathering evidence with ghreplica and pr-search-cli, grouping related work in prtags, and syncing the duplicate grouping back to GitHub through prtags. Use when Codex needs to search for duplicate PRs or issues, create or reuse a duplicate group, enforce one-group-per-target discipline, save duplicate judgments in prtags, or prepare group state for comment sync.
description: Search duplicate OpenClaw PRs/issues, group related work in prtags, and sync duplicate state to GitHub.
---

# Tag Duplicate PRs and Issues

@@ -8,6 +8,14 @@

.bun-cache
.bun
.artifacts
**/.artifacts
.local
**/.local
.pi
**/.pi
__openclaw_vitest__
**/__openclaw_vitest__
.tmp
**/.tmp
.DS_Store
@@ -38,6 +46,9 @@ docs/.generated
*.log
tmp
**/tmp
dist-runtime
**/dist-runtime
openclaw-path-alias-*

# build artifacts
dist

9 .github/actions/setup-node-env/action.yml vendored
@@ -37,6 +37,7 @@ runs:
check-latest: false

- name: Setup pnpm + cache store
id: pnpm-cache
uses: ./.github/actions/setup-pnpm-store-cache
with:
pnpm-version: ${{ inputs.pnpm-version }}
@@ -97,3 +98,11 @@ runs:
install_args+=("$LOCKFILE_FLAG")
fi
pnpm "${install_args[@]}" || pnpm "${install_args[@]}"

- name: Save pnpm store cache
if: inputs.install-deps == 'true' && steps.pnpm-cache.outputs.cache-enabled == 'true' && steps.pnpm-cache.outputs.cache-hit != 'true'
uses: actions/cache/save@v5
continue-on-error: true
with:
path: ${{ steps.pnpm-cache.outputs.store-path }}
key: ${{ steps.pnpm-cache.outputs.primary-key }}

@@ -14,9 +14,25 @@ inputs:
required: false
default: "true"
use-actions-cache:
description: Whether to restore/save pnpm store with actions/cache.
description: Whether to restore pnpm store with actions/cache.
required: false
default: "true"
outputs:
cache-enabled:
description: Whether actions/cache restore was enabled.
value: ${{ steps.pnpm-cache-config.outputs.enabled }}
cache-hit:
description: Whether the pnpm store cache had an exact key hit.
value: ${{ steps.pnpm-cache-restore.outputs.cache-hit }}
cache-matched-key:
description: Cache key matched by restore, if any.
value: ${{ steps.pnpm-cache-restore.outputs.cache-matched-key }}
primary-key:
description: Primary pnpm store cache key.
value: ${{ steps.pnpm-cache-config.outputs.primary-key }}
store-path:
description: Resolved pnpm store path.
value: ${{ steps.pnpm-store.outputs.path }}
runs:
using: composite
steps:
@@ -46,18 +62,29 @@ runs:
shell: bash
run: echo "path=$(pnpm store path --silent)" >> "$GITHUB_OUTPUT"

- name: Restore pnpm store cache (exact key only)
if: inputs.use-actions-cache == 'true' && inputs.use-restore-keys != 'true'
uses: actions/cache@v5
with:
path: ${{ steps.pnpm-store.outputs.path }}
key: ${{ runner.os }}-pnpm-store-${{ inputs.cache-key-suffix }}-${{ hashFiles('pnpm-lock.yaml') }}
- name: Resolve pnpm store cache keys
id: pnpm-cache-config
shell: bash
env:
CACHE_KEY_SUFFIX: ${{ inputs.cache-key-suffix }}
LOCKFILE_HASH: ${{ hashFiles('pnpm-lock.yaml') }}
USE_ACTIONS_CACHE: ${{ inputs.use-actions-cache }}
USE_RESTORE_KEYS: ${{ inputs.use-restore-keys }}
run: |
set -euo pipefail
echo "enabled=$USE_ACTIONS_CACHE" >> "$GITHUB_OUTPUT"
echo "primary-key=${RUNNER_OS}-pnpm-store-${CACHE_KEY_SUFFIX}-${LOCKFILE_HASH}" >> "$GITHUB_OUTPUT"
if [ "$USE_RESTORE_KEYS" = "true" ]; then
echo "restore-keys=${RUNNER_OS}-pnpm-store-${CACHE_KEY_SUFFIX}-" >> "$GITHUB_OUTPUT"
else
echo "restore-keys=" >> "$GITHUB_OUTPUT"
fi

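The key-resolution step above emits an exact primary key plus an optional prefix for fallback restores. A minimal offline sketch of that exact-then-prefix lookup (the key names are illustrative, and the real matching happens inside the cache backend, not in shell):

```shell
primary_key="Linux-pnpm-store-core-abc123"
restore_prefix="Linux-pnpm-store-core-"

# Keys that hypothetically already exist in the cache backend.
available_keys="Linux-pnpm-store-core-old999
Linux-pnpm-store-extras-abc123"

# Try an exact match first; fall back to the first key sharing the
# restore-keys prefix, mirroring actions/cache restore semantics.
match=$(printf '%s\n' "$available_keys" | grep -Fx "$primary_key" || true)
if [ -z "$match" ]; then
  match=$(printf '%s\n' "$available_keys" | grep -F "$restore_prefix" | head -n 1)
fi
echo "restored from: ${match:-nothing}"
```

An exact hit would set the `cache-hit` output to `true`, which is what gates the separate save step.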
- name: Restore pnpm store cache (with fallback keys)
if: inputs.use-actions-cache == 'true' && inputs.use-restore-keys == 'true'
uses: actions/cache@v5
- name: Restore pnpm store cache
id: pnpm-cache-restore
if: inputs.use-actions-cache == 'true'
uses: actions/cache/restore@v5
with:
path: ${{ steps.pnpm-store.outputs.path }}
key: ${{ runner.os }}-pnpm-store-${{ inputs.cache-key-suffix }}-${{ hashFiles('pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-${{ inputs.cache-key-suffix }}-
key: ${{ steps.pnpm-cache-config.outputs.primary-key }}
restore-keys: ${{ steps.pnpm-cache-config.outputs.restore-keys }}

@@ -12,6 +12,9 @@ paths-ignore:
- docs
- "**/node_modules"
- "**/coverage"
- "**/*.generated.ts"
- "**/*.bundle.js"
- "**/*-runtime.js"
- "**/*.test.ts"
- "**/*.test.tsx"
- "**/*.e2e.test.ts"

33 .github/codex/prompts/docs-agent.md vendored Normal file
@@ -0,0 +1,33 @@
# OpenClaw Docs Agent

You are maintaining OpenClaw documentation after a main-branch commit.

Goal: inspect the code changes and existing documentation, then update existing docs only when they are stale, incomplete, or misleading.

Hard limits:

- Edit existing files only.
- Do not create new docs pages, images, assets, scripts, code files, or workflow files.
- Do not delete or rename files.
- Do not change production code, tests, package metadata, generated baselines, lockfiles, or CI config.
- Keep changes minimal and factual.
- Use "plugin/plugins" in user-facing docs/UI/changelog; `extensions/` is only the internal workspace layout.
- Do not add a changelog entry unless the docs update describes a user-facing behavior/API change from the triggering commit.

Allowed paths:

- `docs/**`
- `README.md`
- `CHANGELOG.md`

Required workflow:

1. Run `pnpm docs:list` if available and read relevant docs based on `read_when` hints.
2. Inspect the triggering event via `$GITHUB_EVENT_PATH`, then review `$DOCS_AGENT_BASE_SHA..$DOCS_AGENT_HEAD_SHA` and its changed files. If either env var is missing, fall back to the event payload.
3. Update stale existing documentation, if needed.
4. Run `pnpm check:docs` if dependencies are available.
5. Leave the worktree clean if no docs need changes.

If `pnpm docs:check-mdx` or `pnpm check:docs` reports MDX parse errors, fix only the syntax needed for the listed existing docs files. Preserve prose meaning, frontmatter, code fences, and links; do not broadly rewrite translated or source content while repairing parser failures.

When uncertain, prefer no edit and explain the uncertainty in the final message.
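Step 2's base/head resolution can be sketched offline. The payload here is a minimal stand-in for the real push event (the `before`/`after` field names follow the GitHub push event shape; real tooling would use a JSON parser rather than `sed`):

```shell
# Demo starts from a clean slate so the env-var fallback is exercised.
unset DOCS_AGENT_BASE_SHA DOCS_AGENT_HEAD_SHA

EVENT_JSON='{"before":"abc1234","after":"def5678"}'

# Prefer the env vars; fall back to the event payload when either is missing.
DOCS_AGENT_BASE_SHA="${DOCS_AGENT_BASE_SHA:-$(printf '%s' "$EVENT_JSON" | sed -n 's/.*"before":"\([^"]*\)".*/\1/p')}"
DOCS_AGENT_HEAD_SHA="${DOCS_AGENT_HEAD_SHA:-$(printf '%s' "$EVENT_JSON" | sed -n 's/.*"after":"\([^"]*\)".*/\1/p')}"

echo "review range: ${DOCS_AGENT_BASE_SHA}..${DOCS_AGENT_HEAD_SHA}"
```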
25 .github/codex/prompts/docs-mdx-repair.md vendored Normal file
@@ -0,0 +1,25 @@
# OpenClaw Docs MDX Repair Agent

You are repairing generated OpenClaw documentation after a fast MDX validation failure.

Goal: fix only the MDX syntax errors reported by the checker.

Hard limits:

- Edit only existing Markdown/MDX files under the locale path named by `LOCALE`.
- Do not edit source English docs unless `LOCALE=en`.
- Do not edit code, workflows, package metadata, generated sync metadata, translation memory, or assets.
- Do not add, delete, or rename files.
- Preserve the meaning of translated prose.
- Preserve frontmatter, `x-i18n.source_hash`, links, code fences, JSX component names, and existing page structure.
- Avoid broad formatting or retranslation.

Required workflow:

1. Read `.openclaw-sync/mdx/${LOCALE}.json` when it exists.
2. Inspect only the listed files and nearby lines.
3. Fix the minimal syntax issue, such as broken JSX attribute quoting, mismatched component closing tags, raw `<` text, raw HTML comments, or accidental top-level `import`/`export` text.
4. Run `node source/scripts/check-docs-mdx.mjs "docs/${LOCALE}" --json-out ".openclaw-sync/mdx/${LOCALE}.json"`.
5. Leave no changes outside `docs/${LOCALE}`.

When uncertain, prefer the smallest escaping fix: backticks for literal words, `<` for literal `<`, double quotes around JSX attribute values, and balanced component tags.
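The smallest-escaping guidance can be illustrated with a one-line fix for raw `<` in prose, which MDX otherwise parses as a JSX tag start (assuming the checker accepts the HTML-entity form):

```shell
# Escape every raw '<' in a prose line with its HTML entity.
line='retry when threshold < 5'
fixed=$(printf '%s\n' "$line" | sed 's/</\&lt;/g')
echo "$fixed"
```

The same minimal-touch principle applies to the other fixes: quote one JSX attribute, close one tag, backtick one literal word.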
44 .github/codex/prompts/test-performance-agent.md vendored Normal file
@@ -0,0 +1,44 @@
# OpenClaw Test Performance Agent

You are maintaining OpenClaw test performance after a trusted main-branch CI run.

Goal: inspect the full-suite test performance report, then make small, coverage-preserving improvements to slow tests when the fix is clear. If the baseline report shows failing tests and the fix is obvious, fix those too.

Inputs:

- Baseline grouped report: `.artifacts/test-perf/baseline-before.json`
- Per-config Vitest JSON reports: `.artifacts/test-perf/baseline-before/vitest-json/`
- Per-config logs: `.artifacts/test-perf/baseline-before/logs/`

Hard limits:

- Preserve test coverage and behavioral intent.
- Do not delete, skip, weaken, or narrow test cases to make the suite faster.
- Do not add `test.skip`, `it.skip`, `describe.skip`, `test.only`, `it.only`, or `describe.only`.
- Do not update snapshots, generated baselines, inventories, ignore files, lockfiles, package metadata, CI workflows, or release metadata.
- Do not add dependencies.
- Do not create, delete, or rename files.
- Do not do broad refactors or style-only rewrites.
- Keep changes minimal and focused on the slow or failing tests you can justify from the report.
- Prefer no edit when a performance improvement is speculative.
- If `.artifacts/test-perf/baseline-before.json` has `"failed": true`, do not make performance-only edits. First inspect the failed config logs. Edit only when the test failure has an obvious, coverage-preserving fix. If no obvious failure fix exists, leave the worktree clean.

Good fixes:

- Replace broad partial module mocks, especially `importOriginal()` mocks, with narrow injected dependencies or local runtime seams.
- Avoid importing heavy barrels in hot tests when a narrow module or helper covers the same behavior.
- Add or adjust a production lazy/injection seam only when that is the narrowest way to preserve coverage while removing expensive imports or fixing an obvious mock/import failure.
- Move expensive setup from per-test hooks to shared setup only when state isolation remains correct.
- Reuse existing fixtures/builders instead of recreating expensive work per case.
- Mock expensive runtime boundaries directly: filesystem crawls, package registries, provider SDKs, network/process launch, browser/runtime scanners.
- Keep one integration smoke per boundary and test pure helpers directly, but only when the same behavior remains covered.

Required workflow:

1. Run `pnpm docs:list` if available, then read `docs/reference/test.md` and `docs/help/testing.md` sections about test performance.
2. Inspect `.artifacts/test-perf/baseline-before.json`. If `failed` is true, inspect the failed config logs before looking at slow files.
3. Pick at most a few low-risk files. When baseline failed, pick only files needed for the obvious failure fix; otherwise focus on the slowest files/configs. Explain the coverage-preserving reason in comments only if the code would otherwise be unclear.
4. Run targeted tests for changed files where possible. Use `pnpm test <path>` and optionally `pnpm test:perf:imports <path>`.
5. Leave the worktree clean if no safe improvement exists.

When uncertain, make no edit and explain the uncertainty in the final message.
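The `"failed": true` gate in the hard limits can be sketched as a small shell check. The JSON here is a hypothetical baseline shape, and real tooling would use a JSON parser rather than `grep`:

```shell
# Decide the agent's first move from the baseline report.
baseline='{"failed":false,"durationMs":812000}'
failed=$(printf '%s' "$baseline" | grep -o '"failed":[a-z]*' | cut -d: -f2)

if [ "$failed" = "true" ]; then
  echo "inspect failed config logs before any performance edit"
else
  echo "focus on the slowest files and configs"
fi
```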
14 .github/labeler.yml vendored
@@ -24,6 +24,16 @@
- any-glob-to-any-file:
- "extensions/googlechat/**"
- "docs/channels/googlechat.md"
"plugin: google-meet":
- changed-files:
- any-glob-to-any-file:
- "extensions/google-meet/**"
- "docs/plugins/google-meet.md"
"plugin: bonjour":
- changed-files:
- any-glob-to-any-file:
- "extensions/bonjour/**"
- "docs/gateway/bonjour.md"
"channel: imessage":
- changed-files:
- any-glob-to-any-file:
@@ -377,3 +387,7 @@
- changed-files:
- any-glob-to-any-file:
- "extensions/fal/**"
"extensions: gradium":
- changed-files:
- any-glob-to-any-file:
- "extensions/gradium/**"

8 .github/workflows/ci-check-testbox.yml vendored
@@ -62,18 +62,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1
- name: Setup Node environment
uses: ./.github/actions/setup-node-env

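The checkout loop this change widens from 2 to 5 attempts follows a generic retry-with-linear-backoff shape. A standalone sketch with a stubbed flaky command (the `sleep` is shortened so the demo runs instantly):

```shell
retry() {
  max="$1"; shift
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      echo "attempt ${attempt}/${max} succeeded"
      return 0
    fi
    echo "attempt ${attempt}/${max} failed"
    sleep 0  # the workflow uses: sleep $((attempt * 5))
    attempt=$((attempt + 1))
  done
  echo "failed after ${max} attempts" >&2
  return 1
}

# Stub that fails twice and then succeeds, like a flaky checkout.
calls=0
flaky_checkout() { calls=$((calls + 1)); [ "$calls" -ge 3 ]; }

retry 5 flaky_checkout
```

With a 2-attempt budget the stub above would exhaust its retries, which is exactly the failure mode the bump to 5 addresses.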
561 .github/workflows/ci.yml vendored
@@ -3,6 +3,9 @@ name: CI
on:
push:
branches: [main]
paths-ignore:
- "**/*.md"
- "docs/**"
pull_request:
types: [opened, reopened, synchronize, ready_for_review, converted_to_draft]

@@ -10,7 +13,7 @@ permissions:
contents: read

concurrency:
group: ${{ github.event_name == 'pull_request' && format('{0}-v6-{1}', github.workflow, github.event.pull_request.number) || (github.repository == 'openclaw/openclaw' && format('{0}-v6-{1}', github.workflow, github.ref) || format('{0}-v6-{1}-{2}', github.workflow, github.ref, github.sha)) }}
group: ${{ github.event_name == 'pull_request' && format('{0}-v7-{1}', github.workflow, github.event.pull_request.number) || (github.repository == 'openclaw/openclaw' && format('{0}-v7-{1}', github.workflow, github.ref) || format('{0}-v7-{1}-{2}', github.workflow, github.ref, github.sha)) }}
cancel-in-progress: true

env:
@@ -194,6 +197,11 @@ jobs:
}).map((shard) => ({
check_name: shard.checkName,
extensions_csv: shard.extensionIds.join(","),
runner: isCanonicalRepository && [0, 3, 4].includes(shard.index)
? "blacksmith-8vcpu-ubuntu-2404"
: isCanonicalRepository
? "blacksmith-4vcpu-ubuntu-2404"
: "ubuntu-24.04",
shard_index: shard.index + 1,
task: "extensions-batch",
}))
@@ -208,6 +216,7 @@ jobs:
configs: shard.configs,
includePatterns: shard.includePatterns,
requires_dist: shard.requiresDist,
runner: shard.runner,
}))
: [];
const nodeTestNonDistShards = nodeTestShards.filter((shard) => !shard.requires_dist);

@@ -251,9 +260,9 @@ jobs:
checks_node_core_nondist_matrix: createMatrix(nodeTestNonDistShards),
run_checks_node_core_dist: nodeTestDistShards.length > 0,
checks_node_core_dist_matrix: createMatrix(nodeTestDistShards),
run_extension_fast: hasChangedExtensions,
run_extension_fast: hasChangedExtensions && !isPush,
extension_fast_matrix: createMatrix(
hasChangedExtensions
hasChangedExtensions && !isPush
? (changedExtensionsMatrix.include ?? []).map((entry) => ({
check_name: `extension-fast-${entry.extension}`,
extension: entry.extension,
@@ -284,7 +293,6 @@ jobs:
{ check_name: "android-test-play", task: "test-play" },
{ check_name: "android-test-third-party", task: "test-third-party" },
{ check_name: "android-build-play", task: "build-play" },
{ check_name: "android-build-third-party", task: "build-third-party" },
]
: [],
),

@@ -455,6 +463,10 @@ jobs:
if: needs.preflight.outputs.run_build_artifacts == 'true'
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-8vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
timeout-minutes: 20
outputs:
channels-result: ${{ steps.built_artifact_checks.outputs['channels-result'] }}
core-support-boundary-result: ${{ steps.built_artifact_checks.outputs['core-support-boundary-result'] }}
gateway-watch-result: ${{ steps.built_artifact_checks.outputs['gateway-watch-result'] }}
steps:
- name: Checkout
shell: bash
@@ -490,18 +502,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Ensure secrets base commit (PR fast path)
@@ -522,12 +534,28 @@ jobs:
- name: Build Control UI
run: pnpm ui:build

- name: Check Control UI i18n
if: needs.preflight.outputs.run_control_ui_i18n == 'true'
run: pnpm ui:i18n:check

- name: Cache dist build
uses: actions/cache@v5
with:
path: dist/
path: |
dist/
dist-runtime/
key: ${{ runner.os }}-dist-build-${{ github.sha }}

- name: Pack built runtime artifacts
run: tar --posix -cf dist-runtime-build.tar.zst --use-compress-program zstdmt dist dist-runtime

- name: Upload built runtime artifacts
uses: actions/upload-artifact@v7
with:
name: dist-runtime-build
path: dist-runtime-build.tar.zst
retention-days: 1

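The pack step above bundles `dist` and `dist-runtime` into one compressed tar for transfer between jobs. A minimal offline sketch of the same pack-then-verify shape, using gzip since `zstdmt` may not be installed locally (the file names are illustrative):

```shell
# Build a throwaway dist layout, pack it, and list the archive contents.
root=$(mktemp -d)
mkdir -p "$root/dist" "$root/dist-runtime"
echo "bundle" > "$root/dist/app.js"
echo "runtime" > "$root/dist-runtime/host.js"

# --posix keeps the archive format portable, as in the workflow.
tar --posix -C "$root" -czf "$root/build.tar.gz" dist dist-runtime
tar -tzf "$root/build.tar.gz" | sort
```

Listing the archive before upload is a cheap way to confirm both directory trees actually made it in.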
- name: Upload A2UI bundle artifact
uses: actions/upload-artifact@v7
with:
@@ -551,13 +579,91 @@ jobs:
- name: Check CLI startup memory
run: pnpm test:startup:memory

- name: Run built artifact checks
id: built_artifact_checks
if: needs.preflight.outputs.run_checks == 'true' || needs.preflight.outputs.run_checks_node_core_dist == 'true' || needs.preflight.outputs.run_check_additional == 'true'
env:
RUN_CHANNELS: ${{ needs.preflight.outputs.run_checks }}
RUN_CORE_SUPPORT_BOUNDARY: ${{ needs.preflight.outputs.run_checks_node_core_dist }}
RUN_GATEWAY_WATCH: ${{ needs.preflight.outputs.run_check_additional }}
shell: bash
run: |
set -uo pipefail

names=()
pids=()
logs=()
declare -A results=(
["channels"]="skipped"
["core-support-boundary"]="skipped"
["gateway-watch"]="skipped"
)

start_check() {
local name="$1"
shift
local log="${RUNNER_TEMP}/${name}.log"
names+=("$name")
logs+=("$log")
echo "starting ${name}: $*"
"$@" >"$log" 2>&1 &
pids+=("$!")
}

if [ "$RUN_CHANNELS" = "true" ]; then
start_check "channels" env \
NODE_OPTIONS=--max-old-space-size=6144 \
OPENCLAW_VITEST_MAX_WORKERS=1 \
pnpm test:channels
fi

if [ "$RUN_CORE_SUPPORT_BOUNDARY" = "true" ]; then
start_check "core-support-boundary" env \
NODE_OPTIONS=--max-old-space-size=6144 \
OPENCLAW_VITEST_MAX_WORKERS=2 \
node scripts/run-vitest.mjs run --config test/vitest/vitest.full-core-support-boundary.config.ts
fi

if [ "$RUN_GATEWAY_WATCH" = "true" ]; then
start_check "gateway-watch" node scripts/check-gateway-watch-regression.mjs --skip-build --ready-timeout-ms 5000
fi

for index in "${!pids[@]}"; do
name="${names[$index]}"
log="${logs[$index]}"
pid="${pids[$index]}"

if wait "$pid"; then
result="success"
else
result="failure"
fi

echo "::group::${name} log"
cat "$log"
echo "::endgroup::"
results["$name"]="$result"
done

for name in channels core-support-boundary gateway-watch; do
echo "${name}-result=${results[$name]}" >> "$GITHUB_OUTPUT"
done

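The fan-out/fan-in shape of that step — start each check in the background, then `wait` on each PID and record success or failure per check — reduces to a few lines (stub commands stand in for the real test runs):

```shell
# Start two stub checks in parallel; `true` passes, `false` fails.
true &  pid_channels=$!
false & pid_gateway=$!

# `wait <pid>` returns the exit status of the finished background job.
wait "$pid_channels" && channels_result=success || channels_result=failure
wait "$pid_gateway" && gateway_result=success || gateway_result=failure

echo "channels-result=${channels_result}"
echo "gateway-watch-result=${gateway_result}"
```

Because each result is recorded instead of aborting on first failure, every check's log still gets printed before the job reports per-check outputs.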
- name: Upload gateway watch regression artifacts
if: always() && needs.preflight.outputs.run_check_additional == 'true'
uses: actions/upload-artifact@v7
with:
name: gateway-watch-regression
path: .local/gateway-watch-regression/
retention-days: 7

checks-fast-core:
permissions:
contents: read
name: ${{ matrix.check_name }}
needs: [preflight]
if: needs.preflight.outputs.run_checks_fast == 'true'
runs-on: ubuntu-24.04
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-4vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
timeout-minutes: 60
strategy:
fail-fast: false
@@ -597,18 +703,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -685,18 +791,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -788,18 +894,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -816,7 +922,7 @@ jobs:
name: ${{ matrix.check_name }}
needs: [preflight]
if: needs.preflight.outputs.run_checks_fast == 'true'
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-8vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
runs-on: ${{ matrix.runner }}
timeout-minutes: 60
strategy:
fail-fast: false
@@ -856,18 +962,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -877,7 +983,9 @@ jobs:
|
||||
|
||||
- name: Run extension shard
|
||||
env:
|
||||
NODE_OPTIONS: --max-old-space-size=6144
|
||||
OPENCLAW_EXTENSION_BATCH_PARALLEL: 2
|
||||
OPENCLAW_VITEST_MAX_WORKERS: 1
|
||||
OPENCLAW_EXTENSION_BATCH: ${{ matrix.extensions_csv }}
|
||||
run: pnpm test:extensions:batch -- "$OPENCLAW_EXTENSION_BATCH"
|
||||
|
||||
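The hunks above keep rewriting one pattern: a fixed-attempt checkout loop whose retry budget grows from 2 to 5, with a linearly growing sleep between attempts. A minimal standalone sketch of that loop (`flaky_cmd` is a hypothetical stand-in for `checkout_attempt`, and the sleep is shortened so the sketch runs instantly):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-in for checkout_attempt: fails twice, then succeeds.
calls=0
flaky_cmd() {
  calls=$((calls + 1))
  [ "$calls" -ge 3 ]
}

retry_with_backoff() {
  local max="$1"
  for attempt in $(seq 1 "$max"); do
    if flaky_cmd; then
      echo "attempt ${attempt}/${max} succeeded"
      return 0
    fi
    echo "attempt ${attempt}/${max} failed"
    sleep 0 # the workflow sleeps $((attempt * 5)); shortened here
  done
  echo "failed after ${max} attempts" >&2
  return 1
}

retry_with_backoff 5
```

With a budget of 5 the transient double failure is absorbed; the old budget of 2 would have exhausted the loop and hit the `failed after` branch.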
@@ -905,115 +1013,25 @@ jobs:
name: ${{ matrix.check_name }}
needs: [preflight, build-artifacts]
if: ${{ !cancelled() && always() && needs.preflight.outputs.run_checks == 'true' && needs.build-artifacts.result == 'success' }}
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-8vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
timeout-minutes: 60
runs-on: ubuntu-24.04
timeout-minutes: 5
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.preflight.outputs.checks_matrix) }}
steps:
- name: Checkout
shell: bash
env:
CHECKOUT_REPO: ${{ github.repository }}
CHECKOUT_SHA: ${{ github.sha }}
CHECKOUT_TOKEN: ${{ github.token }}
run: |
set -euo pipefail

workdir="$GITHUB_WORKSPACE"
auth_header="$(printf 'x-access-token:%s' "$CHECKOUT_TOKEN" | base64 | tr -d '\n')"

reset_checkout_dir() {
mkdir -p "$workdir"
find "$workdir" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
}

checkout_attempt() {
local attempt="$1"

reset_checkout_dir
git init "$workdir" >/dev/null
git config --global --add safe.directory "$workdir"
git -C "$workdir" remote add origin "https://github.com/${CHECKOUT_REPO}"
git -C "$workdir" config gc.auto 0

timeout --signal=TERM 30s git -C "$workdir" \
-c protocol.version=2 \
-c "http.https://github.com/.extraheader=AUTHORIZATION: basic ${auth_header}" \
fetch --no-tags --prune --no-recurse-submodules --depth=1 origin \
"+${CHECKOUT_SHA}:refs/remotes/origin/ci-target" || return 1

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
}

for attempt in 1 2; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
exit 1

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
node-version: "${{ matrix.node_version || '24.x' }}"
cache-key-suffix: "${{ matrix.cache_key_suffix || 'node24' }}"
install-bun: "false"

- name: Configure Node test resources
if: matrix.runtime == 'node' && (matrix.task == 'test' || matrix.task == 'channels')
- name: Verify ${{ matrix.task }} (${{ matrix.runtime }})
env:
TASK: ${{ matrix.task }}
run: |
echo "OPENCLAW_VITEST_MAX_WORKERS=2" >> "$GITHUB_ENV"
if [ "$TASK" = "test" ]; then
echo "OPENCLAW_TEST_PROJECTS_LEAF_SHARDS=1" >> "$GITHUB_ENV"
echo "OPENCLAW_TEST_SKIP_FULL_EXTENSIONS_SHARD=1" >> "$GITHUB_ENV"
fi
if [ "$TASK" = "channels" ]; then
echo "OPENCLAW_VITEST_MAX_WORKERS=1" >> "$GITHUB_ENV"
fi

- name: Restore dist cache
if: matrix.task == 'test'
id: checks-dist-cache
uses: actions/cache@v5
with:
path: dist/
key: ${{ runner.os }}-dist-build-${{ github.sha }}

- name: Verify dist cache
if: matrix.task == 'test' && steps.checks-dist-cache.outputs.cache-hit != 'true'
run: |
echo "Missing same-run dist cache for ${RUNNER_OS}-dist-build-${GITHUB_SHA}" >&2
exit 1

- name: Download A2UI bundle artifact
if: matrix.task == 'test' || matrix.task == 'channels'
uses: actions/download-artifact@v8
with:
name: canvas-a2ui-bundle
path: src/canvas-host/a2ui/

- name: Run ${{ matrix.task }} (${{ matrix.runtime }})
env:
TASK: ${{ matrix.task }}
NODE_OPTIONS: --max-old-space-size=6144
CHANNELS_RESULT: ${{ needs.build-artifacts.outputs['channels-result'] }}
shell: bash
run: |
set -euo pipefail
case "$TASK" in
test)
pnpm test
;;
channels)
pnpm test:channels
if [ "$CHANNELS_RESULT" != "success" ]; then
echo "Channel tests failed in build-artifacts: $CHANNELS_RESULT" >&2
exit 1
fi
;;
*)
echo "Unsupported checks task: $TASK" >&2
@@ -1027,7 +1045,7 @@ jobs:
name: checks-node-compat-node22
needs: [preflight]
if: needs.preflight.outputs.run_node == 'true' && github.event_name == 'push'
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-8vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-4vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
timeout-minutes: 60
steps:
- name: Checkout
@@ -1064,18 +1082,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -1104,7 +1122,7 @@ jobs:
name: ${{ matrix.check_name }}
needs: [preflight]
if: needs.preflight.outputs.run_checks_node_core_nondist == 'true'
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-8vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
runs-on: ${{ github.repository == 'openclaw/openclaw' && (matrix.runner || 'ubuntu-24.04') || 'ubuntu-24.04' }}
timeout-minutes: 60
strategy:
fail-fast: false
@@ -1144,18 +1162,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -1173,6 +1191,7 @@ jobs:
NODE_OPTIONS: --max-old-space-size=6144
OPENCLAW_NODE_TEST_CONFIGS_JSON: ${{ toJson(matrix.configs) }}
OPENCLAW_NODE_TEST_INCLUDE_PATTERNS_JSON: ${{ toJson(matrix.includePatterns) }}
OPENCLAW_TEST_PROJECTS_PARALLEL: "2"
shell: bash
run: |
set -euo pipefail
@@ -1180,7 +1199,6 @@ jobs:
import { spawnSync } from "node:child_process";
import { writeFileSync } from "node:fs";
import { join } from "node:path";
import { resolveVitestCliEntry, resolveVitestNodeArgs } from "./scripts/run-vitest.mjs";

const configs = JSON.parse(process.env.OPENCLAW_NODE_TEST_CONFIGS_JSON ?? "[]");
if (!Array.isArray(configs) || configs.length === 0) {
@@ -1198,27 +1216,12 @@ jobs:
childEnv.OPENCLAW_VITEST_INCLUDE_FILE = includeFile;
}

for (const config of configs) {
console.error(`[test] starting ${config}`);
const result = spawnSync(
"pnpm",
[
"exec",
"node",
...resolveVitestNodeArgs(process.env),
resolveVitestCliEntry(),
"run",
"--config",
config,
],
{
env: childEnv,
stdio: "inherit",
},
);
if ((result.status ?? 1) !== 0) {
process.exit(result.status ?? 1);
}
const result = spawnSync("pnpm", ["exec", "node", "scripts/test-projects.mjs", ...configs], {
env: childEnv,
stdio: "inherit",
});
if ((result.status ?? 1) !== 0) {
process.exit(result.status ?? 1);
}
EOF

@@ -1228,142 +1231,31 @@ jobs:
name: ${{ matrix.check_name }}
needs: [preflight, build-artifacts]
if: ${{ !cancelled() && always() && needs.preflight.outputs.run_checks_node_core_dist == 'true' && needs.build-artifacts.result == 'success' }}
runs-on: ${{ github.repository == 'openclaw/openclaw' && 'blacksmith-8vcpu-ubuntu-2404' || 'ubuntu-24.04' }}
timeout-minutes: 60
runs-on: ubuntu-24.04
timeout-minutes: 5
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.preflight.outputs.checks_node_core_dist_matrix) }}
steps:
- name: Checkout
shell: bash
- name: Verify Node test shard
env:
CHECKOUT_REPO: ${{ github.repository }}
CHECKOUT_SHA: ${{ github.sha }}
CHECKOUT_TOKEN: ${{ github.token }}
run: |
set -euo pipefail

workdir="$GITHUB_WORKSPACE"
auth_header="$(printf 'x-access-token:%s' "$CHECKOUT_TOKEN" | base64 | tr -d '\n')"

reset_checkout_dir() {
mkdir -p "$workdir"
find "$workdir" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
}

checkout_attempt() {
local attempt="$1"

reset_checkout_dir
git init "$workdir" >/dev/null
git config --global --add safe.directory "$workdir"
git -C "$workdir" remote add origin "https://github.com/${CHECKOUT_REPO}"
git -C "$workdir" config gc.auto 0

timeout --signal=TERM 30s git -C "$workdir" \
-c protocol.version=2 \
-c "http.https://github.com/.extraheader=AUTHORIZATION: basic ${auth_header}" \
fetch --no-tags --prune --no-recurse-submodules --depth=1 origin \
"+${CHECKOUT_SHA}:refs/remotes/origin/ci-target" || return 1

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
}

for attempt in 1 2; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
exit 1

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
node-version: "${{ matrix.node_version || '24.x' }}"
cache-key-suffix: "${{ matrix.cache_key_suffix || 'node24' }}"
install-bun: "false"

- name: Configure Node test resources
run: echo "OPENCLAW_VITEST_MAX_WORKERS=2" >> "$GITHUB_ENV"

- name: Restore dist cache
id: dist-cache
uses: actions/cache@v5
with:
path: dist/
key: ${{ runner.os }}-dist-build-${{ github.sha }}

- name: Verify dist cache
if: steps.dist-cache.outputs.cache-hit != 'true'
run: |
echo "Missing same-run dist cache for ${RUNNER_OS}-dist-build-${GITHUB_SHA}" >&2
exit 1

- name: Download A2UI bundle artifact
uses: actions/download-artifact@v8
with:
name: canvas-a2ui-bundle
path: src/canvas-host/a2ui/

- name: Run Node test shard
env:
NODE_OPTIONS: --max-old-space-size=6144
OPENCLAW_NODE_TEST_CONFIGS_JSON: ${{ toJson(matrix.configs) }}
OPENCLAW_NODE_TEST_INCLUDE_PATTERNS_JSON: ${{ toJson(matrix.includePatterns) }}
CORE_SUPPORT_BOUNDARY_RESULT: ${{ needs.build-artifacts.outputs['core-support-boundary-result'] }}
SHARD_NAME: ${{ matrix.shard_name }}
shell: bash
run: |
set -euo pipefail
node --input-type=module <<'EOF'
import { spawnSync } from "node:child_process";
import { writeFileSync } from "node:fs";
import { join } from "node:path";
import { resolveVitestCliEntry, resolveVitestNodeArgs } from "./scripts/run-vitest.mjs";

const configs = JSON.parse(process.env.OPENCLAW_NODE_TEST_CONFIGS_JSON ?? "[]");
if (!Array.isArray(configs) || configs.length === 0) {
console.error("Missing node test shard configs");
process.exit(1);
}
const includePatterns = JSON.parse(process.env.OPENCLAW_NODE_TEST_INCLUDE_PATTERNS_JSON ?? "null");
const childEnv = { ...process.env };
if (Array.isArray(includePatterns) && includePatterns.length > 0) {
const includeFile = join(
process.env.RUNNER_TEMP ?? ".",
`node-test-include-${process.env.GITHUB_JOB ?? "local"}-${Date.now()}.json`,
);
writeFileSync(includeFile, JSON.stringify(includePatterns), "utf8");
childEnv.OPENCLAW_VITEST_INCLUDE_FILE = includeFile;
}

for (const config of configs) {
console.error(`[test] starting ${config}`);
const result = spawnSync(
"pnpm",
[
"exec",
"node",
...resolveVitestNodeArgs(process.env),
resolveVitestCliEntry(),
"run",
"--config",
config,
],
{
env: childEnv,
stdio: "inherit",
},
);
if ((result.status ?? 1) !== 0) {
process.exit(result.status ?? 1);
}
}
EOF
case "$SHARD_NAME" in
core-support-boundary)
if [ "$CORE_SUPPORT_BOUNDARY_RESULT" != "success" ]; then
echo "Core support boundary shard failed in build-artifacts: $CORE_SUPPORT_BOUNDARY_RESULT" >&2
exit 1
fi
;;
*)
echo "Unsupported built-artifact shard: $SHARD_NAME" >&2
exit 1
;;
esac

checks-node-core-test:
permissions:
@@ -1436,18 +1328,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -1458,7 +1350,15 @@ jobs:
- name: Run changed extension tests
env:
OPENCLAW_CHANGED_EXTENSION: ${{ matrix.extension }}
run: pnpm test:extension "$OPENCLAW_CHANGED_EXTENSION"
run: |
set -euo pipefail
if [ "$OPENCLAW_CHANGED_EXTENSION" = "telegram" ]; then
export OPENCLAW_VITEST_MAX_WORKERS=1
export NODE_OPTIONS="${NODE_OPTIONS:+$NODE_OPTIONS }--max-old-space-size=6144"
pnpm test:extension "$OPENCLAW_CHANGED_EXTENSION" -- --pool=forks
exit 0
fi
pnpm test:extension "$OPENCLAW_CHANGED_EXTENSION"

# Types, lint, and format check shards.
check-shard:
@@ -1478,7 +1378,7 @@ jobs:
runner: ubuntu-24.04
- check_name: check-prod-types
task: prod-types
runner: ubuntu-24.04
runner: blacksmith-4vcpu-ubuntu-2404
- check_name: check-lint
task: lint
runner: blacksmith-16vcpu-ubuntu-2404
@@ -1487,7 +1387,7 @@ jobs:
runner: ubuntu-24.04
- check_name: check-test-types
task: test-types
runner: ubuntu-24.04
runner: blacksmith-4vcpu-ubuntu-2404
- check_name: check-strict-smoke
task: strict-smoke
runner: ubuntu-24.04
@@ -1526,18 +1426,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -1562,7 +1462,7 @@ jobs:
pnpm tsgo:prod
;;
lint)
pnpm lint
pnpm lint --threads=8
;;
policy-guards)
pnpm lint:webhook:no-low-level-body-read
@@ -1574,7 +1474,8 @@ jobs:
pnpm check:test-types
;;
strict-smoke)
pnpm build:strict-smoke
# build-artifacts already runs the tsdown/runtime build for the same Node-relevant changes.
pnpm build:plugin-sdk:strict-smoke
;;
*)
echo "Unsupported check task: $TASK" >&2
@@ -1620,8 +1521,6 @@ jobs:
group: extension-bundled
- check_name: check-additional-extension-package-boundary
group: extension-package-boundary
- check_name: check-additional-runtime-topology-gateway
group: runtime-topology-gateway
- check_name: check-additional-runtime-topology-architecture
group: runtime-topology-architecture
steps:
@@ -1659,18 +1558,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -1757,12 +1656,6 @@ jobs:
run_check "test:extensions:package-boundary:compile" pnpm run test:extensions:package-boundary:compile
run_check "test:extensions:package-boundary:canary" pnpm run test:extensions:package-boundary:canary
;;
runtime-topology-gateway)
if [ "$RUN_CONTROL_UI_I18N" = "true" ]; then
run_check "ui:i18n:check" pnpm ui:i18n:check
fi
run_check "gateway-watch-regression" pnpm test:gateway:watch-regression
;;
runtime-topology-architecture)
run_check "check:architecture" pnpm check:architecture
;;
@@ -1774,19 +1667,11 @@ jobs:

exit "$failures"

- name: Upload gateway watch regression artifacts
if: always() && matrix.group == 'runtime-topology-gateway'
uses: actions/upload-artifact@v7
with:
name: gateway-watch-regression
path: .local/gateway-watch-regression/
retention-days: 7

check-additional:
permissions:
contents: read
name: "check-additional"
needs: [preflight, check-additional-shard]
needs: [preflight, check-additional-shard, build-artifacts]
if: ${{ !cancelled() && always() && needs.preflight.outputs.run_check_additional == 'true' }}
runs-on: ubuntu-24.04
timeout-minutes: 5
@@ -1794,11 +1679,21 @@ jobs:
- name: Verify additional check shards
env:
SHARD_RESULT: ${{ needs.check-additional-shard.result }}
BUILD_ARTIFACTS_RESULT: ${{ needs.build-artifacts.result }}
GATEWAY_RESULT: ${{ needs.build-artifacts.outputs.gateway-watch-result }}
run: |
if [ "$SHARD_RESULT" != "success" ]; then
echo "Additional check shards failed: $SHARD_RESULT" >&2
exit 1
fi
if [ "$BUILD_ARTIFACTS_RESULT" != "success" ]; then
echo "Build artifact job failed: $BUILD_ARTIFACTS_RESULT" >&2
exit 1
fi
if [ "$GATEWAY_RESULT" != "success" ]; then
echo "Gateway topology check failed: $GATEWAY_RESULT" >&2
exit 1
fi

build-smoke:
permissions:
@@ -1861,18 +1756,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -f "$workdir/.github/actions/setup-node-env/action.yml" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Node environment
@@ -1965,6 +1860,7 @@ jobs:
check-latest: false

- name: Setup pnpm + cache store
id: pnpm-cache
uses: ./.github/actions/setup-pnpm-store-cache
with:
pnpm-version: "10.33.0"
@@ -1998,6 +1894,14 @@ jobs:
# caches can skip repeated rebuild/download work on later shards/runs.
pnpm install --frozen-lockfile --prefer-offline --ignore-scripts=false --config.engine-strict=false --config.enable-pre-post-scripts=true --config.side-effects-cache=true || pnpm install --frozen-lockfile --prefer-offline --ignore-scripts=false --config.engine-strict=false --config.enable-pre-post-scripts=true --config.side-effects-cache=true

- name: Save pnpm store cache
if: steps.pnpm-cache.outputs.cache-enabled == 'true' && steps.pnpm-cache.outputs.cache-hit != 'true'
uses: actions/cache/save@v5
continue-on-error: true
with:
path: ${{ steps.pnpm-cache.outputs.store-path }}
key: ${{ steps.pnpm-cache.outputs.primary-key }}

- name: Run ${{ matrix.task }} (${{ matrix.runtime }})
env:
TASK: ${{ matrix.task }}
@@ -2201,18 +2105,18 @@ jobs:

git -C "$workdir" checkout --force --detach "$CHECKOUT_SHA" || return 1
test -x "$workdir/apps/android/gradlew" || return 1
echo "checkout attempt ${attempt}/2 succeeded"
echo "checkout attempt ${attempt}/5 succeeded"
}

for attempt in 1 2; do
for attempt in 1 2 3 4 5; do
if checkout_attempt "$attempt"; then
exit 0
fi
echo "checkout attempt ${attempt}/2 failed"
echo "checkout attempt ${attempt}/5 failed"
sleep $((attempt * 5))
done

echo "checkout failed after 2 attempts" >&2
echo "checkout failed after 5 attempts" >&2
exit 1

- name: Setup Java
@@ -2271,9 +2175,6 @@ jobs:
build-play)
./gradlew --no-daemon --build-cache :app:assemblePlayDebug
;;
build-third-party)
./gradlew --no-daemon --build-cache :app:assembleThirdPartyDebug
;;
*)
echo "Unsupported Android task: $TASK" >&2
exit 1

@@ -49,7 +49,7 @@ jobs:
run: |
set -euo pipefail

all_locales_json='["zh-CN","zh-TW","pt-BR","de","es","ja-JP","ko","fr","tr","uk","id","pl"]'
all_locales_json='["zh-CN","zh-TW","pt-BR","de","es","ja-JP","ko","fr","tr","uk","id","pl","th"]'

if [ "$EVENT_NAME" != "push" ]; then
echo "has_locales=true" >> "$GITHUB_OUTPUT"
@@ -137,7 +137,7 @@ jobs:
env:
OPENAI_API_KEY: ${{ secrets.OPENCLAW_DOCS_I18N_OPENAI_API_KEY || secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENCLAW_CONTROL_UI_I18N_MODEL: gpt-5.4
OPENCLAW_CONTROL_UI_I18N_MODEL: ${{ vars.OPENCLAW_CI_OPENAI_MODEL_BARE }}
OPENCLAW_CONTROL_UI_I18N_THINKING: low
LOCALE: ${{ matrix.locale }}
run: node --import tsx scripts/control-ui-i18n.ts sync --locale "${LOCALE}" --write

250 .github/workflows/docs-agent.yml vendored Normal file
@@ -0,0 +1,250 @@
name: Docs Agent

on:
workflow_run: # zizmor: ignore[dangerous-triggers] main-only docs repair after trusted CI; job gates repository, event, branch, actor, conclusion, exact current main SHA, and hourly cadence before using write token
workflows:
- CI
types:
- completed
workflow_dispatch:

permissions:
actions: read
contents: write

concurrency:
group: docs-agent-main
cancel-in-progress: false

env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"

jobs:
update-docs:
if: >
github.repository == 'openclaw/openclaw' &&
github.actor != 'github-actions[bot]' &&
(github.event_name != 'workflow_run' ||
(github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.event == 'push' &&
github.event.workflow_run.head_branch == 'main' &&
github.event.workflow_run.actor.login != 'github-actions[bot]'))
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@v6
with:
ref: main
fetch-depth: 0
persist-credentials: false
submodules: false

- name: Gate trusted main activity and hourly cadence
id: gate
env:
EVENT_NAME: ${{ github.event_name }}
GH_TOKEN: ${{ github.token }}
WORKFLOW_HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
run: |
set -euo pipefail

if [ "$EVENT_NAME" != "workflow_run" ]; then
head_sha="$(git rev-parse HEAD)"
review_base="$(git rev-parse "${head_sha}^" 2>/dev/null || printf '%s' "$head_sha")"
{
echo "run_agent=true"
echo "base_sha=${head_sha}"
echo "review_base_sha=${review_base}"
echo "review_head_sha=${head_sha}"
} >> "$GITHUB_OUTPUT"
exit 0
fi

for attempt in 1 2 3 4 5; do
if git fetch --no-tags origin main; then
break
fi
if [ "$attempt" = "5" ]; then
echo "Failed to fetch main after retries." >&2
exit 1
fi
echo "Fetch attempt ${attempt} failed; retrying."
sleep $((attempt * 2))
done
remote_main="$(git rev-parse origin/main)"
if [ "$remote_main" != "$WORKFLOW_HEAD_SHA" ]; then
echo "CI run is superseded by ${remote_main}; skipping docs agent for ${WORKFLOW_HEAD_SHA}."
echo "run_agent=false" >> "$GITHUB_OUTPUT"
exit 0
fi

runs_json="$RUNNER_TEMP/docs-agent-runs.json"
gh api --method GET "repos/${GITHUB_REPOSITORY}/actions/workflows/docs-agent.yml/runs" \
-f branch=main \
-f event=workflow_run \
-f per_page=100 > "$runs_json"

one_hour_ago="$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)"
recent_runs="$(
jq -r \
--argjson current_run_id "$GITHUB_RUN_ID" \
--arg one_hour_ago "$one_hour_ago" \
'.workflow_runs[]
| select(.database_id != $current_run_id)
| select(.created_at >= $one_hour_ago)
| select(.status != "cancelled")
| select((.conclusion // "") != "skipped")
| [.database_id, .status, (.conclusion // ""), .created_at, .head_sha]
| @tsv' "$runs_json"
)"

if [ -n "$recent_runs" ]; then
echo "Docs agent already ran or is running within the last hour; skipping."
printf '%s\n' "$recent_runs"
echo "run_agent=false" >> "$GITHUB_OUTPUT"
exit 0
fi

review_base="$(
jq -r \
--argjson current_run_id "$GITHUB_RUN_ID" \
--arg remote_main "$remote_main" \
'.workflow_runs[]
| select(.database_id != $current_run_id)
| select(.status != "cancelled")
| select((.conclusion // "") != "skipped")
| .head_sha
| select(. != null and . != "")
| select(. != $remote_main)
' "$runs_json" | head -n 1
)"
if [ -z "$review_base" ] || ! git cat-file -e "${review_base}^{commit}" 2>/dev/null; then
review_base="$(git rev-parse "${remote_main}^" 2>/dev/null || printf '%s' "$remote_main")"
fi

{
echo "run_agent=true"
echo "base_sha=${remote_main}"
echo "review_base_sha=${review_base}"
echo "review_head_sha=${remote_main}"
} >> "$GITHUB_OUTPUT"

- name: Setup Node environment
if: steps.gate.outputs.run_agent == 'true'
uses: ./.github/actions/setup-node-env
with:
install-bun: "false"

- name: Ensure docs agent key exists
if: steps.gate.outputs.run_agent == 'true'
env:
OPENAI_API_KEY: ${{ secrets.OPENCLAW_DOCS_AGENT_OPENAI_API_KEY || secrets.OPENAI_API_KEY }}
run: |
set -euo pipefail
if [ -z "${OPENAI_API_KEY:-}" ]; then
echo "Missing OPENCLAW_DOCS_AGENT_OPENAI_API_KEY or OPENAI_API_KEY secret." >&2
exit 1
fi

- name: Run Codex docs agent
if: steps.gate.outputs.run_agent == 'true'
uses: openai/codex-action@v1
env:
DOCS_AGENT_BASE_SHA: ${{ steps.gate.outputs.review_base_sha }}
DOCS_AGENT_HEAD_SHA: ${{ steps.gate.outputs.review_head_sha }}
with:
openai-api-key: ${{ secrets.OPENCLAW_DOCS_AGENT_OPENAI_API_KEY || secrets.OPENAI_API_KEY }}
prompt-file: .github/codex/prompts/docs-agent.md
model: ${{ vars.OPENCLAW_CI_OPENAI_MODEL_BARE }}
effort: medium
sandbox: workspace-write
safety-strategy: drop-sudo
codex-args: '["--full-auto"]'

- name: Enforce existing-docs-only patch
if: steps.gate.outputs.run_agent == 'true'
run: |
set -euo pipefail

untracked="$(git ls-files --others --exclude-standard)"
if [ -n "$untracked" ]; then
echo "Docs agent created untracked files; forbidden:"
printf '%s\n' "$untracked"
exit 1
fi

added_or_deleted="$(git diff --name-status --diff-filter=AD)"
if [ -n "$added_or_deleted" ]; then
echo "Docs agent added or deleted tracked files; forbidden:"
printf '%s\n' "$added_or_deleted"
exit 1
fi

bad_paths="$(
git diff --name-only | while IFS= read -r path; do
case "$path" in
docs/*|README.md|CHANGELOG.md) ;;
*) printf '%s\n' "$path" ;;
esac
done
)"
if [ -n "$bad_paths" ]; then
echo "Docs agent touched non-doc paths; forbidden:"
printf '%s\n' "$bad_paths"
exit 1
fi

- name: Restore Node 24 path
if: steps.gate.outputs.run_agent == 'true'
run: | # zizmor: ignore[github-env] NODE_BIN is set by the trusted local setup-node-env action in this same job
set -euo pipefail
export PATH="${NODE_BIN}:${PATH}"
echo "${NODE_BIN}" >> "$GITHUB_PATH"
node -v
corepack enable
pnpm -v

- name: Check docs
if: steps.gate.outputs.run_agent == 'true'
run: pnpm check:docs

- name: Commit docs updates
if: steps.gate.outputs.run_agent == 'true'
env:
BASE_SHA: ${{ steps.gate.outputs.base_sha }}
GITHUB_TOKEN: ${{ github.token }}
TARGET_BRANCH: main
run: |
set -euo pipefail

if git diff --quiet; then
echo "No docs changes."
exit 0
fi

git config user.name "openclaw-docs-agent[bot]"
git config user.email "openclaw-docs-agent[bot]@users.noreply.github.com"
git add docs README.md CHANGELOG.md
|
||||
git commit --no-verify -m "docs: refresh documentation"
|
||||
|
||||
for attempt in 1 2 3 4 5; do
|
||||
if ! git fetch --no-tags origin "${TARGET_BRANCH}"; then
|
||||
echo "Fetch attempt ${attempt} failed; retrying."
|
||||
sleep $((attempt * 2))
|
||||
continue
|
||||
fi
|
||||
if git push "https://x-access-token:${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git" HEAD:"${TARGET_BRANCH}"; then
|
||||
exit 0
|
||||
fi
|
||||
remote_main="$(git rev-parse "origin/${TARGET_BRANCH}")"
|
||||
if [ "$remote_main" != "$BASE_SHA" ]; then
|
||||
echo "main advanced from ${BASE_SHA} to ${remote_main}; skipping stale docs update."
|
||||
exit 0
|
||||
fi
|
||||
echo "Docs update attempt ${attempt} failed; retrying."
|
||||
sleep $((attempt * 2))
|
||||
done
|
||||
|
||||
echo "Failed to push docs updates after retries." >&2
|
||||
exit 1
|
||||
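The commit step above wraps `git push` in a bounded retry loop with linear backoff and a staleness bail-out. A minimal standalone sketch of that retry shape, with a hypothetical `attempt_push` stand-in for the real `git push` that fails twice and then succeeds so the loop behavior is observable (the real step sleeps `$((attempt * 2))` seconds between attempts):

```shell
#!/usr/bin/env sh
# Retry-with-backoff sketch mirroring the docs push loop above.
# `attempt_push` is a made-up stand-in, not part of the workflow.
tries=0
attempt_push() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

push_with_retry() {
  for attempt in 1 2 3 4 5; do
    if attempt_push; then
      echo "pushed on attempt ${attempt}"
      return 0
    fi
    echo "attempt ${attempt} failed; retrying"
    # the workflow sleeps $((attempt * 2)) seconds here
  done
  echo "failed after retries" >&2
  return 1
}

push_with_retry   # prints two failures, then "pushed on attempt 3"
```

The bounded loop keeps a transient remote hiccup from failing the job, while the `BASE_SHA` comparison in the real step turns a lost race against a newer `main` into a clean skip instead of an error.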
.github/workflows/docs-sync-publish.yml: 56 lines changed (vendored)
@@ -32,9 +32,19 @@ jobs:
          OPENCLAW_DOCS_SYNC_TOKEN: ${{ secrets.OPENCLAW_DOCS_SYNC_TOKEN }}
        run: |
          set -euo pipefail
          git clone \
            "https://x-access-token:${OPENCLAW_DOCS_SYNC_TOKEN}@github.com/openclaw/docs.git" \
            publish
          for attempt in 1 2 3 4 5; do
            rm -rf publish
            if git clone \
              "https://x-access-token:${OPENCLAW_DOCS_SYNC_TOKEN}@github.com/openclaw/docs.git" \
              publish; then
              exit 0
            fi
            echo "Clone attempt ${attempt} failed; retrying."
            sleep $((attempt * 2))
          done

          echo "Failed to clone publish repo after retries." >&2
          exit 1

      - name: Sync docs into publish repo
        run: |
@@ -43,26 +53,56 @@ jobs:
            --source-repo "$GITHUB_REPOSITORY" \
            --source-sha "$GITHUB_SHA"

      - name: Install docs MDX checker dependency
        run: npm install --no-save --package-lock=false @mdx-js/mdx@3.1.1

      - name: Check publish docs MDX
        run: node "$GITHUB_WORKSPACE/publish/.openclaw-sync/check-docs-mdx.mjs" "$GITHUB_WORKSPACE/publish/docs"

      - name: Commit publish repo sync
        working-directory: publish
        run: |
          set -euo pipefail
          remote_source_sha() {
            git show refs/remotes/origin/main:.openclaw-sync/source.json 2>/dev/null \
              | node -e 'const fs = require("node:fs"); try { const data = JSON.parse(fs.readFileSync(0, "utf8")); if (data.sha) process.stdout.write(data.sha); } catch {}' \
              || true
          }

          skip_stale_source() {
            current_source_sha="$(remote_source_sha)"
            if [ -z "$current_source_sha" ] || [ "$current_source_sha" = "$GITHUB_SHA" ]; then
              return
            fi

            if git -C "$GITHUB_WORKSPACE" merge-base --is-ancestor "$GITHUB_SHA" "$current_source_sha"; then
              echo "Skipping stale publish sync for $GITHUB_SHA; origin/main already mirrors $current_source_sha."
              exit 0
            fi
          }

          if git diff --quiet -- docs .openclaw-sync; then
            echo "No publish-repo changes."
            exit 0
          fi

          if git fetch origin main:refs/remotes/origin/main; then
            skip_stale_source
          fi

          git config user.name "openclaw-docs-sync[bot]"
          git config user.email "openclaw-docs-sync[bot]@users.noreply.github.com"
          git add docs .openclaw-sync
          git commit -m "chore(sync): mirror docs from $GITHUB_REPOSITORY@$GITHUB_SHA"
          for attempt in 1 2 3 4 5; do
            git fetch origin main
            git rebase origin/main
            if git push origin HEAD:main; then
              exit 0
            if git fetch origin main:refs/remotes/origin/main; then
              skip_stale_source
              if git rebase -X theirs origin/main && git push origin HEAD:main; then
                exit 0
              fi
            fi
            echo "Push attempt ${attempt} failed; retrying."
            git rebase --abort >/dev/null 2>&1 || true
            echo "Publish sync attempt ${attempt} failed; retrying."
            sleep $((attempt * 2))
          done

@@ -31,7 +31,8 @@ jobs:
            translate-tr-release \
            translate-uk-release \
            translate-id-release \
            translate-pl-release
            translate-pl-release \
            translate-th-release
          do
            gh api repos/openclaw/docs/dispatches \
              --method POST \
.github/workflows/docs.yml: 39 lines (new file, vendored)
@@ -0,0 +1,39 @@
name: Docs

on:
  push:
    branches: [main]
    paths:
      - "**/*.md"
      - "docs/**"

permissions:
  contents: read

concurrency:
  group: ${{ format('{0}-{1}', github.workflow, github.ref) }}
  cancel-in-progress: true

env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"

jobs:
  docs:
    runs-on: ubuntu-24.04
    timeout-minutes: 20
    steps:
      - name: Checkout
        uses: actions/checkout@v6
        with:
          fetch-depth: 1
          fetch-tags: false
          persist-credentials: false
          submodules: false

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          install-bun: "false"

      - name: Check docs
        run: pnpm check:docs
.github/workflows/duplicate-after-merge.yml: 59 lines (new file, vendored)
@@ -0,0 +1,59 @@
name: Duplicate PRs After Merge

on:
  workflow_dispatch:
    inputs:
      landed_pr:
        description: "Merged PR number that supersedes the duplicates"
        required: true
        type: string
      duplicate_prs:
        description: "Comma or whitespace separated duplicate PR numbers to close"
        required: true
        type: string
      apply:
        description: "When true, label/comment/close; otherwise dry-run only"
        required: true
        type: boolean
        default: false

permissions:
  contents: read
  issues: write
  pull-requests: write

concurrency:
  group: duplicate-after-merge-${{ github.event.inputs.landed_pr }}
  cancel-in-progress: false

env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
  GH_TOKEN: ${{ github.token }}

jobs:
  close-duplicates:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Close confirmed duplicates
        env:
          APPLY: ${{ inputs.apply }}
          DUPLICATE_PRS: ${{ inputs.duplicate_prs }}
          LANDED_PR: ${{ inputs.landed_pr }}
          REPO: ${{ github.repository }}
        run: |
          set -euo pipefail

          args=(
            --repo "$REPO"
            --landed-pr "$LANDED_PR"
            --duplicates "$DUPLICATE_PRS"
          )

          if [[ "$APPLY" == "true" ]]; then
            args+=(--apply)
          fi

          node scripts/close-duplicate-prs-after-merge.mjs "${args[@]}"
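The `args=(...)` array in the step above builds the CLI invocation incrementally so `--apply` is only passed when the operator opted out of the dry run. A minimal sketch of that pattern, using a `printf` stand-in instead of the real `node` script and made-up PR numbers for illustration:

```shell
#!/usr/bin/env bash
# Conditional argument-array sketch mirroring the close-duplicates step.
# The repo name and PR numbers here are illustrative placeholders.
set -euo pipefail

build_args() {
  local apply="$1"
  local args=(--repo "openclaw/openclaw" --landed-pr "123" --duplicates "124 125")
  if [[ "$apply" == "true" ]]; then
    args+=(--apply)
  fi
  # Stand-in for: node scripts/close-duplicate-prs-after-merge.mjs "${args[@]}"
  printf '%s\n' "${args[@]}"
}

build_args false | tail -n 1   # dry run: last argument is the duplicates list
build_args true | tail -n 1    # apply run: last argument is --apply
```

Quoting each value through the array (rather than string concatenation) keeps whitespace-separated PR lists like `"124 125"` intact as a single argument.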
.github/workflows/install-smoke.yml: 255 lines changed (vendored)
@@ -1,17 +1,32 @@
name: Install Smoke

on:
  push:
    branches: [main]
  pull_request:
    types: [opened, reopened, synchronize, ready_for_review, converted_to_draft]
  schedule:
    - cron: "17 3 * * *"
  workflow_dispatch:
    inputs:
      run_bun_global_install_smoke:
        description: Run the Bun global install image-provider smoke
        required: false
        default: false
        type: boolean
  workflow_call:
    inputs:
      ref:
        description: Git ref to validate
        required: false
        type: string
      run_bun_global_install_smoke:
        description: Run the Bun global install image-provider smoke
        required: false
        default: true
        type: boolean

permissions:
  contents: read

concurrency:
  group: ${{ github.event_name == 'pull_request' && format('{0}-{1}', github.workflow, github.event.pull_request.number) || format('{0}-{1}', github.workflow, github.ref) }}
  group: ${{ github.event_name == 'workflow_dispatch' && format('{0}-manual-{1}', github.workflow, github.run_id) || format('{0}-{1}', github.workflow, github.ref) }}
  cancel-in-progress: true

env:
@@ -19,65 +34,54 @@ env:

jobs:
  preflight:
    if: github.event_name != 'pull_request' || !github.event.pull_request.draft
    runs-on: ubuntu-24.04
    outputs:
      docs_only: ${{ steps.manifest.outputs.docs_only }}
      run_install_smoke: ${{ steps.manifest.outputs.run_install_smoke }}
      run_fast_install_smoke: ${{ steps.manifest.outputs.run_fast_install_smoke }}
      run_full_install_smoke: ${{ steps.manifest.outputs.run_full_install_smoke }}
      run_bun_global_install_smoke: ${{ steps.manifest.outputs.run_bun_global_install_smoke }}
    steps:
      - name: Checkout
        uses: actions/checkout@v6
        with:
          ref: ${{ inputs.ref || github.ref }}
          fetch-depth: 1
          fetch-tags: false
          persist-credentials: false
          submodules: false

      - name: Ensure preflight base commit
        uses: ./.github/actions/ensure-base-commit
        with:
          base-sha: ${{ github.event_name == 'push' && github.event.before || github.event.pull_request.base.sha }}
          fetch-ref: ${{ github.event_name == 'push' && github.ref_name || github.event.pull_request.base.ref }}

      - name: Detect docs-only changes
        id: docs_scope
        uses: ./.github/actions/detect-docs-changes

      - name: Detect changed smoke scope
        id: changed_scope
        if: steps.docs_scope.outputs.docs_only != 'true'
        shell: bash
        run: |
          set -euo pipefail

          if [ "${{ github.event_name }}" = "push" ]; then
            BASE="${{ github.event.before }}"
          else
            BASE="${{ github.event.pull_request.base.sha }}"
          fi

          node scripts/ci-changed-scope.mjs --base "$BASE" --head HEAD

      - name: Build install-smoke CI manifest
        id: manifest
        env:
          OPENCLAW_CI_DOCS_ONLY: ${{ steps.docs_scope.outputs.docs_only }}
          OPENCLAW_CI_RUN_CHANGED_SMOKE: ${{ steps.changed_scope.outputs.run_changed_smoke || 'false' }}
          OPENCLAW_CI_EVENT_NAME: ${{ github.event_name }}
          OPENCLAW_CI_WORKFLOW_BUN_GLOBAL_INSTALL_SMOKE: ${{ inputs.run_bun_global_install_smoke || 'false' }}
        run: |
          docs_only="${OPENCLAW_CI_DOCS_ONLY:-false}"
          run_changed_smoke="${OPENCLAW_CI_RUN_CHANGED_SMOKE:-false}"
          run_install_smoke=false
          if [ "$docs_only" != "true" ] && [ "$run_changed_smoke" = "true" ]; then
            run_install_smoke=true
          event_name="${OPENCLAW_CI_EVENT_NAME:-}"
          workflow_bun_global_install_smoke="${OPENCLAW_CI_WORKFLOW_BUN_GLOBAL_INSTALL_SMOKE:-false}"
          docs_only=false
          run_fast_install_smoke=true
          run_full_install_smoke=true
          run_bun_global_install_smoke=false
          run_install_smoke=true
          if [ "$event_name" = "schedule" ]; then
            run_bun_global_install_smoke=true
          elif [ "$event_name" = "workflow_dispatch" ] || [ "$event_name" = "workflow_call" ]; then
            if [ "$workflow_bun_global_install_smoke" = "true" ]; then
              run_bun_global_install_smoke=true
            fi
          fi
          {
            echo "docs_only=$docs_only"
            echo "run_install_smoke=$run_install_smoke"
            echo "run_fast_install_smoke=$run_fast_install_smoke"
            echo "run_full_install_smoke=$run_full_install_smoke"
            echo "run_bun_global_install_smoke=$run_bun_global_install_smoke"
          } >> "$GITHUB_OUTPUT"

  install-smoke:
  install-smoke-fast:
    needs: [preflight]
    if: needs.preflight.outputs.run_install_smoke == 'true'
    if: needs.preflight.outputs.run_fast_install_smoke == 'true' && needs.preflight.outputs.run_full_install_smoke != 'true'
    runs-on: blacksmith-16vcpu-ubuntu-2404
    env:
      DOCKER_BUILD_SUMMARY: "false"
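The manifest step above reduces to a small gate: the Bun global install smoke turns on for scheduled runs, is opt-in for `workflow_dispatch`/`workflow_call`, and stays off otherwise. A standalone sketch of that gate (the function name is illustrative, not from the workflow):

```shell
#!/usr/bin/env sh
# Gate sketch for run_bun_global_install_smoke, mirroring the manifest logic.
bun_smoke_enabled() {
  event_name="$1"   # e.g. schedule, push, workflow_dispatch
  opt_in="$2"       # the run_bun_global_install_smoke input, "true"/"false"
  case "$event_name" in
    schedule) echo true ;;
    workflow_dispatch|workflow_call)
      [ "$opt_in" = "true" ] && echo true || echo false ;;
    *) echo false ;;
  esac
}

bun_smoke_enabled schedule false           # → true
bun_smoke_enabled workflow_dispatch true   # → true
bun_smoke_enabled push true                # → false
```

Centralizing the decision in one step and exporting it through `$GITHUB_OUTPUT` lets every downstream job key off the same `needs.preflight.outputs.*` values instead of re-deriving event logic per job.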
@@ -85,6 +89,102 @@ jobs:
    steps:
      - name: Checkout CLI
        uses: actions/checkout@v6
        with:
          ref: ${{ inputs.ref || github.ref }}

      - name: Set up Blacksmith Docker Builder
        uses: useblacksmith/setup-docker-builder@ac083cc84672d01c60d5e8561d0a939b697de542 # v1

      # Blacksmith's builder owns the Docker layer cache; keep smoke builds off
      # explicit gha cache directives so local tags still load cleanly.
      - name: Build root Dockerfile smoke image
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
        with:
          context: .
          file: ./Dockerfile
          build-args: |
            OPENCLAW_DOCKER_APT_UPGRADE=0
            OPENCLAW_EXTENSIONS=matrix
          tags: |
            openclaw-dockerfile-smoke:local
            openclaw-ext-smoke:local
          load: true
          push: false
          provenance: false

      - name: Run root Dockerfile CLI smoke
        run: |
          docker run --rm --entrypoint sh openclaw-dockerfile-smoke:local -lc 'which openclaw && openclaw --version'

      - name: Run agents delete shared workspace Docker CLI smoke
        env:
          OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_IMAGE: openclaw-dockerfile-smoke:local
          OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_SKIP_BUILD: "1"
        run: bash scripts/e2e/agents-delete-shared-workspace-docker.sh

      - name: Run Docker gateway network e2e
        env:
          OPENCLAW_GATEWAY_NETWORK_E2E_IMAGE: openclaw-dockerfile-smoke:local
          OPENCLAW_GATEWAY_NETWORK_E2E_SKIP_BUILD: "1"
        run: bash scripts/e2e/gateway-network-docker.sh

      - name: Smoke test Dockerfile with matrix extension build arg
        run: |
          docker run --rm --entrypoint sh openclaw-ext-smoke:local -lc '
            which openclaw &&
            openclaw --version &&
            node -e "
              const Module = require(\"node:module\");
              const matrixPackage = require(\"/app/extensions/matrix/package.json\");
              const requireFromMatrix = Module.createRequire(\"/app/extensions/matrix/package.json\");
              const runtimeDeps = Object.keys(matrixPackage.dependencies ?? {});
              if (runtimeDeps.length === 0) {
                throw new Error(
                  \"matrix package has no declared runtime dependencies; smoke cannot validate install mirroring\",
                );
              }
              for (const dep of runtimeDeps) {
                requireFromMatrix.resolve(dep);
              }
              const { spawnSync } = require(\"node:child_process\");
              const run = spawnSync(\"openclaw\", [\"plugins\", \"list\", \"--json\"], { encoding: \"utf8\" });
              if (run.status !== 0) {
                process.stderr.write(run.stderr || run.stdout || \"plugins list failed\\n\");
                process.exit(run.status ?? 1);
              }
              const parsed = JSON.parse(run.stdout);
              const matrix = (parsed.plugins || []).find((entry) => entry.id === \"matrix\");
              if (!matrix) {
                throw new Error(\"matrix plugin missing from bundled plugin list\");
              }
              const matrixDiag = (parsed.diagnostics || []).filter(
                (diag) =>
                  typeof diag.source === \"string\" &&
                  diag.source.includes(\"/extensions/matrix\") &&
                  typeof diag.message === \"string\" &&
                  diag.message.includes(\"extension entry escapes package directory\"),
              );
              if (matrixDiag.length > 0) {
                throw new Error(
                  \"unexpected matrix diagnostics: \" +
                    matrixDiag.map((diag) => diag.message).join(\"; \"),
                );
              }
            "
          '

  install-smoke:
    needs: [preflight]
    if: needs.preflight.outputs.run_full_install_smoke == 'true'
    runs-on: blacksmith-16vcpu-ubuntu-2404
    env:
      DOCKER_BUILD_SUMMARY: "false"
      DOCKER_BUILD_RECORD_UPLOAD: "false"
    steps:
      - name: Checkout CLI
        uses: actions/checkout@v6
        with:
          ref: ${{ inputs.ref || github.ref }}

      - name: Set up Blacksmith Docker Builder
        uses: useblacksmith/setup-docker-builder@ac083cc84672d01c60d5e8561d0a939b697de542 # v1
@@ -96,6 +196,8 @@ jobs:
          OPENCLAW_QR_SMOKE_FORCE_INSTALL: "1"
        run: bash scripts/e2e/qr-import-docker.sh

      # Build once with the matrix extension and tag both smoke names. This
      # keeps the build-arg coverage without a second Blacksmith build action.
      - name: Build root Dockerfile smoke image
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
        with:
@@ -103,7 +205,10 @@ jobs:
          file: ./Dockerfile
          build-args: |
            OPENCLAW_DOCKER_APT_UPGRADE=0
          tags: openclaw-dockerfile-smoke:local
            OPENCLAW_EXTENSIONS=matrix
          tags: |
            openclaw-dockerfile-smoke:local
            openclaw-ext-smoke:local
          load: true
          push: false
          provenance: false
@@ -112,28 +217,18 @@ jobs:
        run: |
          docker run --rm --entrypoint sh openclaw-dockerfile-smoke:local -lc 'which openclaw && openclaw --version'

      - name: Run agents delete shared workspace Docker CLI smoke
        env:
          OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_IMAGE: openclaw-dockerfile-smoke:local
          OPENCLAW_AGENTS_DELETE_SHARED_WORKSPACE_E2E_SKIP_BUILD: "1"
        run: bash scripts/e2e/agents-delete-shared-workspace-docker.sh

      - name: Run Docker gateway network e2e
        env:
          OPENCLAW_GATEWAY_NETWORK_E2E_IMAGE: openclaw-dockerfile-smoke:local
          OPENCLAW_GATEWAY_NETWORK_E2E_SKIP_BUILD: "1"
        run: bash scripts/e2e/gateway-network-docker.sh

      # This smoke validates that the build-arg path preinstalls the matrix
      # runtime deps declared by the plugin and that matrix discovery stays
      # healthy in the final runtime image.
      - name: Build extension Dockerfile smoke image
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
        with:
          context: .
          file: ./Dockerfile
          build-args: |
            OPENCLAW_DOCKER_APT_UPGRADE=0
            OPENCLAW_EXTENSIONS=matrix
          tags: openclaw-ext-smoke:local
          load: true
          push: false
          provenance: false

      - name: Smoke test Dockerfile with matrix extension build arg
        run: |
          docker run --rm --entrypoint sh openclaw-ext-smoke:local -lc '
@@ -190,7 +285,6 @@ jobs:
          provenance: false

      - name: Build installer non-root image
        if: github.event_name != 'pull_request'
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
        with:
          context: ./scripts/docker
@@ -200,12 +294,19 @@ jobs:
          push: false
          provenance: false

      - name: Setup Node environment for local pack smoke
      - name: Setup Node environment for installer smoke
        uses: ./.github/actions/setup-node-env
        with:
          install-bun: "false"
          install-bun: ${{ needs.preflight.outputs.run_bun_global_install_smoke }}
          install-deps: "true"

      - name: Run Bun global install image-provider smoke
        if: needs.preflight.outputs.run_bun_global_install_smoke == 'true'
        env:
          OPENCLAW_BUN_GLOBAL_SMOKE_DIST_IMAGE: openclaw-dockerfile-smoke:local
          OPENCLAW_BUN_GLOBAL_SMOKE_HOST_BUILD: "0"
        run: bash scripts/e2e/bun-global-install-smoke.sh

      - name: Run installer docker tests
        env:
          OPENCLAW_INSTALL_URL: https://openclaw.ai/install.sh
@@ -213,9 +314,39 @@ jobs:
          OPENCLAW_NO_ONBOARD: "1"
          OPENCLAW_INSTALL_SMOKE_SKIP_CLI: "1"
          OPENCLAW_INSTALL_SMOKE_SKIP_IMAGE_BUILD: "1"
          OPENCLAW_INSTALL_NONROOT_SKIP_IMAGE_BUILD: ${{ github.event_name == 'pull_request' && '0' || '1' }}
          OPENCLAW_INSTALL_SMOKE_SKIP_NONROOT: ${{ github.event_name == 'pull_request' && '1' || '0' }}
          OPENCLAW_INSTALL_NONROOT_SKIP_IMAGE_BUILD: "1"
          OPENCLAW_INSTALL_SMOKE_SKIP_NONROOT: "0"
          OPENCLAW_INSTALL_SMOKE_SKIP_NPM_GLOBAL: "1"
          OPENCLAW_INSTALL_SMOKE_SKIP_PREVIOUS: "1"
          OPENCLAW_INSTALL_SMOKE_UPDATE_BASELINE: latest
          OPENCLAW_INSTALL_SMOKE_UPDATE_DIST_IMAGE: openclaw-dockerfile-smoke:local
          OPENCLAW_INSTALL_SMOKE_UPDATE_SKIP_LOCAL_BUILD: "1"
        run: bash scripts/test-install-sh-docker.sh

  docker-e2e-fast:
    needs: [preflight]
    if: needs.preflight.outputs.run_fast_install_smoke == 'true' || needs.preflight.outputs.run_full_install_smoke == 'true'
    runs-on: blacksmith-16vcpu-ubuntu-2404
    timeout-minutes: 8
    env:
      DOCKER_BUILD_SUMMARY: "false"
      DOCKER_BUILD_RECORD_UPLOAD: "false"
    steps:
      - name: Checkout CLI
        uses: actions/checkout@v6
        with:
          ref: ${{ inputs.ref || github.ref }}

      - name: Set up Blacksmith Docker Builder
        uses: useblacksmith/setup-docker-builder@ac083cc84672d01c60d5e8561d0a939b697de542 # v1

      - name: Setup Node environment for package smoke
        uses: ./.github/actions/setup-node-env
        with:
          install-bun: "false"
          install-deps: "true"

      - name: Run fast bundled plugin Docker E2E
        env:
          OPENCLAW_BUNDLED_CHANNEL_DEPS_E2E_IMAGE: openclaw-bundled-channel-fast:local
        run: timeout 120s pnpm test:docker:bundled-channel-deps:fast
.github/workflows/npm-telegram-beta-e2e.yml: 210 lines (new file, vendored)
@@ -0,0 +1,210 @@
name: NPM Telegram Beta E2E

on:
  workflow_dispatch:
    inputs:
      package_spec:
        description: Published OpenClaw package spec to test
        required: true
        default: openclaw@beta
        type: string
      provider_mode:
        description: QA provider mode
        required: true
        default: mock-openai
        type: choice
        options:
          - mock-openai
          - live-frontier
      scenario:
        description: Optional comma-separated Telegram scenario ids
        required: false
        type: string

permissions:
  contents: read

concurrency:
  group: npm-telegram-beta-e2e-${{ github.run_id }}
  cancel-in-progress: false

env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
  NODE_VERSION: "24.x"
  PNPM_VERSION: "10.33.0"

jobs:
  validate_dispatch_ref:
    name: Validate dispatch ref
    runs-on: blacksmith-8vcpu-ubuntu-2404
    steps:
      - name: Require main workflow ref
        env:
          WORKFLOW_REF: ${{ github.ref }}
        run: |
          set -euo pipefail
          if [[ "${WORKFLOW_REF}" != "refs/heads/main" ]]; then
            echo "NPM Telegram beta E2E must be dispatched from main so workflow logic stays controlled." >&2
            exit 1
          fi

  approve_release_manager:
    name: Approve npm Telegram beta E2E
    needs: validate_dispatch_ref
    runs-on: ubuntu-latest
    environment: npm-release
    steps:
      - name: Record approval
        env:
          PACKAGE_SPEC: ${{ inputs.package_spec }}
        run: echo "Approved npm Telegram beta E2E for ${PACKAGE_SPEC}"

  prepare_docker_e2e_image:
    name: Prepare Docker E2E image
    needs: validate_dispatch_ref
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 90
    permissions:
      contents: read
      packages: write
    outputs:
      image: ${{ steps.image.outputs.image }}
    env:
      DOCKER_BUILD_SUMMARY: "false"
      DOCKER_BUILD_RECORD_UPLOAD: "false"
    steps:
      - name: Checkout main
        uses: actions/checkout@v6
        with:
          ref: ${{ github.sha }}
          fetch-depth: 1

      - name: Resolve Docker E2E image tag
        id: image
        shell: bash
        env:
          SELECTED_SHA: ${{ github.sha }}
        run: |
          set -euo pipefail
          repository="${GITHUB_REPOSITORY,,}"
          image="ghcr.io/${repository}-docker-e2e:${SELECTED_SHA}"
          echo "image=$image" >> "$GITHUB_OUTPUT"
          echo "Docker E2E image: \`$image\`" >> "$GITHUB_STEP_SUMMARY"

      - name: Set up Blacksmith Docker Builder
        uses: useblacksmith/setup-docker-builder@ac083cc84672d01c60d5e8561d0a939b697de542 # v1

      - name: Log in to GHCR
        uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ github.token }}

      - name: Build and push Docker E2E image
        uses: useblacksmith/build-push-action@cbd1f60d194a98cb3be5523b15134501eaf0fbf3 # v2
        with:
          context: .
          file: ./scripts/e2e/Dockerfile
          target: build
          platforms: linux/amd64
          tags: ${{ steps.image.outputs.image }}
          provenance: false
          push: true

  run_npm_telegram_beta_e2e:
    name: Run published npm Telegram E2E
    needs: [approve_release_manager, prepare_docker_e2e_image]
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 60
    environment: qa-live-shared
    permissions:
      contents: read
      packages: read
    steps:
      - name: Checkout main
        uses: actions/checkout@v6
        with:
          ref: ${{ github.sha }}
          fetch-depth: 1

      - name: Log in to GHCR
        uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ github.token }}

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Validate inputs and secrets
        env:
          PACKAGE_SPEC: ${{ inputs.package_spec }}
          PROVIDER_MODE: ${{ inputs.provider_mode }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
          OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
        shell: bash
        run: |
          set -euo pipefail

          if [[ ! "${PACKAGE_SPEC}" =~ ^openclaw@(beta|latest|[0-9]{4}\.[1-9][0-9]*\.[1-9][0-9]*(-[1-9][0-9]*|-beta\.[1-9][0-9]*)?)$ ]]; then
            echo "package_spec must be openclaw@beta, openclaw@latest, or an exact OpenClaw release version; got: ${PACKAGE_SPEC}" >&2
            exit 1
          fi

          require_var() {
            local key="$1"
            if [[ -z "${!key:-}" ]]; then
              echo "Missing required ${key}." >&2
              exit 1
            fi
          }

          require_var OPENCLAW_QA_CONVEX_SITE_URL
          require_var OPENCLAW_QA_CONVEX_SECRET_CI
          if [[ "${PROVIDER_MODE}" == "live-frontier" ]]; then
            require_var OPENAI_API_KEY
          fi

      - name: Run npm Telegram beta E2E
        id: run_lane
        shell: bash
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_SKIP_DOCKER_BUILD: "1"
          OPENCLAW_DOCKER_E2E_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.image }}
          OPENCLAW_NPM_TELEGRAM_PACKAGE_SPEC: ${{ inputs.package_spec }}
          OPENCLAW_NPM_TELEGRAM_PROVIDER_MODE: ${{ inputs.provider_mode }}
          OPENCLAW_NPM_TELEGRAM_CREDENTIAL_SOURCE: convex
          OPENCLAW_NPM_TELEGRAM_CREDENTIAL_ROLE: ci
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
          OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
          INPUT_SCENARIO: ${{ inputs.scenario }}
        run: |
          set -euo pipefail

          output_dir=".artifacts/qa-e2e/npm-telegram-beta-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
          echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"
          export OPENCLAW_NPM_TELEGRAM_OUTPUT_DIR="${output_dir}"

          if [[ -n "${INPUT_SCENARIO// }" ]]; then
            export OPENCLAW_NPM_TELEGRAM_SCENARIOS="${INPUT_SCENARIO}"
          fi

          pnpm test:docker:npm-telegram-live

      - name: Upload npm Telegram E2E artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: npm-telegram-beta-e2e-${{ github.run_id }}-${{ github.run_attempt }}
          path: ${{ steps.run_lane.outputs.output_dir }}
          retention-days: 14
          if-no-files-found: warn
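The `package_spec` guard in the validate step above only admits `openclaw@beta`, `openclaw@latest`, or an exact calendar-versioned release, optionally with a `-N` or `-beta.N` suffix. A quick standalone exercise of that same pattern (the version numbers are made up for illustration):

```shell
#!/usr/bin/env bash
# Exercise the package_spec regex from the validate step above.
set -euo pipefail

spec_ok() {
  [[ "$1" =~ ^openclaw@(beta|latest|[0-9]{4}\.[1-9][0-9]*\.[1-9][0-9]*(-[1-9][0-9]*|-beta\.[1-9][0-9]*)?)$ ]]
}

spec_ok "openclaw@beta"            && echo "dist-tag accepted"
spec_ok "openclaw@2026.1.2-beta.3" && echo "exact beta version accepted"
spec_ok "openclaw@1.2.3"           || echo "non-calver version rejected"
```

Anchoring with `^...$` and whitelisting only known dist-tags or exact versions keeps an operator from smuggling an arbitrary npm spec (a git URL, a path, a scoped lookalike) into the E2E lane.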
@@ -28,6 +28,11 @@ on:
|
||||
required: false
|
||||
default: true
|
||||
type: boolean
|
||||
live_models_only:
|
||||
description: Whether to run only the Docker live model matrix when live suites are enabled
|
||||
required: false
|
||||
default: false
|
||||
type: boolean
|
||||
workflow_call:
|
||||
inputs:
|
||||
ref:
|
||||
@@ -54,6 +59,11 @@ on:
|
||||
required: false
|
||||
default: true
|
||||
type: boolean
|
||||
live_models_only:
|
||||
description: Whether to run only the Docker live model matrix when live suites are enabled
|
||||
required: false
|
||||
default: false
|
||||
type: boolean
|
||||
secrets:
|
||||
OPENAI_API_KEY:
|
||||
required: false
|
||||
@@ -141,9 +151,12 @@ on:
|
||||
required: false
|
||||
OPENCLAW_GEMINI_SETTINGS_JSON:
|
||||
required: false
|
||||
FIREWORKS_API_KEY:
|
||||
required: false
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
pull-requests: read
|
||||
|
||||
env:
|
||||
@@ -153,7 +166,7 @@ env:
|
||||
|
||||
jobs:
|
||||
validate_selected_ref:
|
||||
runs-on: blacksmith-8vcpu-ubuntu-2404
|
||||
runs-on: ubuntu-24.04
|
||||
outputs:
|
||||
selected_sha: ${{ steps.validate.outputs.selected_sha }}
|
||||
trusted_reason: ${{ steps.validate.outputs.trusted_reason }}
|
||||
@@ -209,8 +222,8 @@ jobs:
|
||||
|
||||
validate_release_live_cache:
|
||||
needs: validate_selected_ref
|
||||
if: inputs.include_live_suites
|
||||
runs-on: blacksmith-32vcpu-ubuntu-2404
|
||||
if: inputs.include_live_suites && !inputs.live_models_only
|
||||
runs-on: blacksmith-8vcpu-ubuntu-2404
|
||||
timeout-minutes: 60
|
||||
env:
|
||||
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
|
||||
@@ -275,7 +288,7 @@ jobs:
|
||||
|
||||
validate_special_e2e:
|
||||
needs: validate_selected_ref
|
||||
if: inputs.include_repo_e2e || inputs.include_live_suites
|
||||
if: inputs.include_repo_e2e || (inputs.include_live_suites && !inputs.live_models_only)
|
||||
runs-on: blacksmith-32vcpu-ubuntu-2404
|
||||
timeout-minutes: ${{ matrix.timeout_minutes }}
|
||||
strategy:
|
||||
@@ -349,8 +362,8 @@ jobs:
|
||||
run: ${{ matrix.command }}
|
||||
|
||||
validate_docker_e2e:
needs: validate_selected_ref
if: inputs.include_release_path_suites || inputs.include_openwebui
needs: [validate_selected_ref, prepare_docker_e2e_image]
if: inputs.include_release_path_suites
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: ${{ matrix.timeout_minutes }}
strategy:
@@ -362,55 +375,71 @@ jobs:
command: pnpm test:docker:onboard
timeout_minutes: 60
release_path: true
openwebui_only: false
- suite_id: docker-npm-onboard-channel-agent
label: Npm Onboard Channel Agent Docker E2E
command: pnpm test:docker:npm-onboard-channel-agent
timeout_minutes: 90
release_path: true
- suite_id: docker-gateway-network
label: Gateway Network Docker E2E
command: pnpm test:docker:gateway-network
timeout_minutes: 60
release_path: true
openwebui_only: false
- suite_id: docker-openai-web-search-minimal
label: OpenAI Web Search Minimal Docker E2E
command: pnpm test:docker:openai-web-search-minimal
timeout_minutes: 60
release_path: true
- suite_id: docker-mcp-channels
label: MCP Channels Docker E2E
command: pnpm test:docker:mcp-channels
timeout_minutes: 60
release_path: true
openwebui_only: false
- suite_id: docker-pi-bundle-mcp-tools
label: Pi Bundle MCP Tools Docker E2E
command: pnpm test:docker:pi-bundle-mcp-tools
timeout_minutes: 60
release_path: true
- suite_id: docker-cron-mcp-cleanup
label: Cron MCP Cleanup Docker E2E
command: pnpm test:docker:cron-mcp-cleanup
timeout_minutes: 60
release_path: true
- suite_id: docker-plugins
label: Plugins Docker E2E
command: pnpm test:docker:plugins
timeout_minutes: 75
release_path: true
openwebui_only: false
- suite_id: docker-plugin-update
label: Plugin Update Docker E2E
command: pnpm test:docker:plugin-update
timeout_minutes: 60
release_path: true
- suite_id: docker-config-reload
label: Config Reload Docker E2E
command: pnpm test:docker:config-reload
timeout_minutes: 60
release_path: true
- suite_id: docker-bundled-channel-deps
label: Bundled Channel Runtime Deps Docker E2E
command: pnpm test:docker:bundled-channel-deps
timeout_minutes: 75
release_path: true
openwebui_only: false
- suite_id: docker-doctor-switch
label: Doctor Install Switch Docker E2E
command: pnpm test:docker:doctor-switch
timeout_minutes: 60
release_path: true
openwebui_only: false
- suite_id: docker-qr
label: QR Import Docker E2E
command: pnpm test:docker:qr
timeout_minutes: 60
release_path: true
openwebui_only: false
- suite_id: docker-install-e2e
label: Installer Docker E2E
command: pnpm test:install:e2e
timeout_minutes: 120
release_path: true
openwebui_only: false
- suite_id: docker-openwebui
label: Open WebUI Docker E2E
command: pnpm test:docker:openwebui
timeout_minutes: 75
release_path: false
openwebui_only: true
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
@@ -455,6 +484,9 @@ jobs:
OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
OPENCLAW_DOCKER_E2E_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.image }}
OPENCLAW_SKIP_DOCKER_BUILD: "1"
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
@@ -462,6 +494,13 @@ jobs:
ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
fetch-depth: 1

- name: Log in to GHCR for shared Docker E2E image
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ github.token }}

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
@@ -497,23 +536,229 @@ jobs:
exit 1
fi
;;
docker-openwebui)
[[ -n "${OPENAI_API_KEY:-}" ]] || {
echo "OPENAI_API_KEY is required for the Open WebUI Docker smoke." >&2
exit 1
}
;;
esac

- name: Run ${{ matrix.label }}
if: |
(inputs.include_release_path_suites && matrix.release_path) ||
(inputs.include_openwebui && matrix.openwebui_only)
run: ${{ matrix.command }}

validate_docker_openwebui:
needs: [validate_selected_ref, prepare_docker_e2e_image]
if: inputs.include_openwebui
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: 75
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
OPENCLAW_DOCKER_E2E_IMAGE: ${{ needs.prepare_docker_e2e_image.outputs.image }}
OPENCLAW_SKIP_DOCKER_BUILD: "1"
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
with:
ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
fetch-depth: 1

- name: Log in to GHCR for shared Docker E2E image
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ github.token }}

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
node-version: ${{ env.NODE_VERSION }}
pnpm-version: ${{ env.PNPM_VERSION }}
install-bun: "true"

- name: Validate Open WebUI credentials
shell: bash
run: |
set -euo pipefail
[[ -n "${OPENAI_API_KEY:-}" ]] || {
echo "OPENAI_API_KEY is required for the Open WebUI Docker smoke." >&2
exit 1
}

- name: Run Open WebUI Docker E2E
run: pnpm test:docker:openwebui

prepare_docker_e2e_image:
needs: validate_selected_ref
if: inputs.include_release_path_suites || inputs.include_openwebui
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: 90
permissions:
contents: read
packages: write
outputs:
image: ${{ steps.image.outputs.image }}
env:
DOCKER_BUILD_SUMMARY: "false"
DOCKER_BUILD_RECORD_UPLOAD: "false"
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
with:
ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
fetch-depth: 1

- name: Resolve shared Docker E2E image tag
id: image
shell: bash
env:
SELECTED_SHA: ${{ needs.validate_selected_ref.outputs.selected_sha }}
run: |
set -euo pipefail
repository="${GITHUB_REPOSITORY,,}"
image="ghcr.io/${repository}-docker-e2e:${SELECTED_SHA}"
echo "image=$image" >> "$GITHUB_OUTPUT"
echo "Shared Docker E2E image: \`$image\`" >> "$GITHUB_STEP_SUMMARY"

- name: Log in to GHCR
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ github.token }}

- name: Setup Docker builder
uses: useblacksmith/setup-docker-builder@ac083cc84672d01c60d5e8561d0a939b697de542 # v1

- name: Build and push shared Docker E2E image
uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0
with:
context: .
file: ./scripts/e2e/Dockerfile
target: build
platforms: linux/amd64
cache-from: type=gha,scope=docker-e2e
cache-to: type=gha,mode=max,scope=docker-e2e
tags: ${{ steps.image.outputs.image }}
provenance: false
push: true

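The tag-resolution step above lowercases the repository slug (GHCR rejects uppercase image names) and pins the shared image to the validated SHA. A minimal standalone sketch of the same derivation, with placeholder values standing in for the `GITHUB_REPOSITORY` and selected-SHA values the workflow provides:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder inputs; in the workflow these come from the Actions runtime.
GITHUB_REPOSITORY="OpenClaw/OpenClaw"
SELECTED_SHA="0123abcd"

# ${var,,} is bash 4+ case conversion: lowercase the whole slug.
repository="${GITHUB_REPOSITORY,,}"
image="ghcr.io/${repository}-docker-e2e:${SELECTED_SHA}"
echo "$image"
```

Pinning the tag to the commit SHA means every downstream job in the same run pulls exactly the image that was built for that commit, never a stale `latest`.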
validate_live_models_docker:
name: Docker live models (${{ matrix.provider_label }})
needs: validate_selected_ref
if: inputs.include_live_suites
runs-on: ubuntu-24.04
timeout-minutes: 75
strategy:
fail-fast: false
matrix:
include:
- provider_label: Anthropic
providers: anthropic
- provider_label: Google
providers: google
- provider_label: MiniMax
providers: minimax
- provider_label: OpenAI
providers: openai
- provider_label: OpenCode
providers: opencode-go
- provider_label: OpenRouter
providers: openrouter
- provider_label: xAI
providers: xai
- provider_label: Z.ai
providers: zai
- provider_label: Fireworks
providers: fireworks
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_API_TOKEN: ${{ secrets.ANTHROPIC_API_TOKEN }}
ANTHROPIC_API_KEY_OLD: ${{ secrets.ANTHROPIC_API_KEY_OLD }}
BYTEPLUS_API_KEY: ${{ secrets.BYTEPLUS_API_KEY }}
CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
DASHSCOPE_API_KEY: ${{ secrets.DASHSCOPE_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
KIMI_API_KEY: ${{ secrets.KIMI_API_KEY }}
MODELSTUDIO_API_KEY: ${{ secrets.MODELSTUDIO_API_KEY }}
MOONSHOT_API_KEY: ${{ secrets.MOONSHOT_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
MINIMAX_API_KEY: ${{ secrets.MINIMAX_API_KEY }}
OPENCODE_API_KEY: ${{ secrets.OPENCODE_API_KEY }}
OPENCODE_ZEN_API_KEY: ${{ secrets.OPENCODE_ZEN_API_KEY }}
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
QWEN_API_KEY: ${{ secrets.QWEN_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
ZAI_API_KEY: ${{ secrets.ZAI_API_KEY }}
Z_AI_API_KEY: ${{ secrets.Z_AI_API_KEY }}
CLAUDE_CODE_OAUTH_TOKEN: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
OPENCLAW_CODEX_AUTH_JSON: ${{ secrets.OPENCLAW_CODEX_AUTH_JSON }}
OPENCLAW_CODEX_CONFIG_TOML: ${{ secrets.OPENCLAW_CODEX_CONFIG_TOML }}
OPENCLAW_CLAUDE_JSON: ${{ secrets.OPENCLAW_CLAUDE_JSON }}
OPENCLAW_CLAUDE_CREDENTIALS_JSON: ${{ secrets.OPENCLAW_CLAUDE_CREDENTIALS_JSON }}
OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
OPENCLAW_LIVE_PROVIDERS: ${{ matrix.providers }}
OPENCLAW_VITEST_MAX_WORKERS: "2"
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
with:
ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
fetch-depth: 1

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
node-version: ${{ env.NODE_VERSION }}
pnpm-version: ${{ env.PNPM_VERSION }}
install-bun: "true"

- name: Hydrate live auth/profile inputs
run: bash scripts/ci-hydrate-live-auth.sh

- name: Validate provider credential
shell: bash
run: |
set -euo pipefail

require_any() {
local label="$1"
shift
local key
for key in "$@"; do
if [[ -n "${!key:-}" ]]; then
return 0
fi
done
echo "Missing credential for ${label}: expected one of $*" >&2
exit 1
}

case "${{ matrix.providers }}" in
anthropic) require_any Anthropic ANTHROPIC_API_KEY ANTHROPIC_API_KEY_OLD ANTHROPIC_API_TOKEN ;;
google) require_any Google GEMINI_API_KEY GOOGLE_API_KEY ;;
minimax) require_any MiniMax MINIMAX_API_KEY ;;
openai) require_any OpenAI OPENAI_API_KEY ;;
opencode-go) require_any OpenCode OPENCODE_API_KEY OPENCODE_ZEN_API_KEY ;;
openrouter) require_any OpenRouter OPENROUTER_API_KEY ;;
xai) require_any xAI XAI_API_KEY ;;
zai) require_any Z.ai ZAI_API_KEY Z_AI_API_KEY ;;
fireworks) require_any Fireworks FIREWORKS_API_KEY ;;
*)
echo "Unhandled live model provider shard: ${{ matrix.providers }}" >&2
exit 1
;;
esac

- name: Run Docker live model sweep
run: pnpm test:docker:live-models

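The `require_any` helper in the step above fails the shard fast when none of a provider's accepted credential variables is set, using bash indirect expansion (`${!key:-}`) to look each name up. A standalone sketch of the same pattern, with illustrative variable names; it returns non-zero instead of exiting so it can be exercised outside a CI step:

```shell
#!/usr/bin/env bash
# Succeed if ANY of the named environment variables is non-empty;
# otherwise report which names were checked and return failure.
require_any() {
  local label="$1"
  shift
  local key
  for key in "$@"; do
    # ${!key:-} expands the variable whose NAME is stored in $key.
    if [[ -n "${!key:-}" ]]; then
      return 0
    fi
  done
  echo "Missing credential for ${label}: expected one of $*" >&2
  return 1
}

# Demo with illustrative names (not real workflow secrets).
export DEMO_PRIMARY_KEY="abc123"
require_any Demo DEMO_PRIMARY_KEY DEMO_FALLBACK_KEY && echo "credential found"
require_any Missing DEMO_ABSENT_ONE DEMO_ABSENT_TWO || echo "no credential"
```

Accepting several names per provider is what lets the matrix tolerate key rotations (for example old and new Anthropic secrets) without editing every shard.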
validate_live_provider_suites:
needs: validate_selected_ref
if: inputs.include_live_suites
if: inputs.include_live_suites && !inputs.live_models_only
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: ${{ matrix.timeout_minutes }}
strategy:
@@ -525,11 +770,6 @@ jobs:
command: pnpm test:live
timeout_minutes: 180
profile_env_only: false
- suite_id: live-models-docker
label: Docker live models
command: pnpm test:docker:live-models
timeout_minutes: 120
profile_env_only: false
- suite_id: live-gateway-docker
label: Docker live gateway
command: pnpm test:docker:live-gateway
@@ -594,6 +834,7 @@ jobs:
OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
OPENCLAW_LIVE_VIDEO_GENERATION_SKIP_PROVIDERS: ""
OPENCLAW_LIVE_VYDRA_VIDEO: "1"
OPENCLAW_VITEST_MAX_WORKERS: "2"
@@ -623,7 +864,7 @@ jobs:
fi
case "${{ matrix.suite_id }}" in
live-cli-backend-docker)
echo "OPENCLAW_LIVE_CLI_BACKEND_MODEL=codex-cli/gpt-5.4" >> "$GITHUB_ENV"
echo "OPENCLAW_LIVE_CLI_BACKEND_MODEL=codex-cli/gpt-5.5" >> "$GITHUB_ENV"
# The CLI backend Docker lane should exercise the same staged
# Codex auth path Peter uses locally so MCP cron creation and
# multimodal probes stay covered in CI. Replace the staged

4 .github/workflows/openclaw-npm-release.yml (vendored)

@@ -43,7 +43,7 @@ jobs:
# so this public workflow can stay focused on OIDC publish only.
preflight_openclaw_npm:
if: ${{ inputs.preflight_only }}
runs-on: blacksmith-32vcpu-ubuntu-2404
runs-on: ubuntu-latest
permissions:
contents: read
steps:
@@ -252,7 +252,7 @@ jobs:

validate_publish_request:
if: ${{ !inputs.preflight_only }}
runs-on: blacksmith-32vcpu-ubuntu-2404
runs-on: ubuntu-latest
permissions:
contents: read
steps:

244 .github/workflows/openclaw-release-checks.yml (vendored)

@@ -32,6 +32,9 @@ concurrency:

env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
NODE_VERSION: "24.x"
PNPM_VERSION: "10.33.0"
OPENCLAW_CI_OPENAI_MODEL: ${{ vars.OPENCLAW_CI_OPENAI_MODEL }}

jobs:
resolve_target:
@@ -121,9 +124,18 @@ jobs:
echo "- Validated SHA: \`${RELEASE_SHA}\`"
echo "- Cross-OS provider: \`${RELEASE_PROVIDER}\`"
echo "- Cross-OS mode: \`${RELEASE_MODE}\`"
echo "- This run will execute cross-OS release validation plus the non-Parallels Docker/live/openwebui coverage from the CI migration plan."
echo "- This run will execute cross-OS release validation, install smoke, QA Lab parity, Matrix, and Telegram lanes, and the non-Parallels Docker/live/openwebui coverage from the CI migration plan."
} >> "$GITHUB_STEP_SUMMARY"

install_smoke_release_checks:
needs: [resolve_target]
permissions:
contents: read
uses: ./.github/workflows/install-smoke.yml
with:
ref: ${{ needs.resolve_target.outputs.ref }}
run_bun_global_install_smoke: true

cross_os_release_checks:
needs: [resolve_target]
permissions: read-all
@@ -144,6 +156,7 @@ jobs:
needs: [resolve_target]
permissions:
contents: read
packages: write
pull-requests: read
uses: ./.github/workflows/openclaw-live-and-e2e-checks-reusable.yml
with:
@@ -196,3 +209,232 @@ jobs:
OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}

qa_lab_parity_release_checks:
name: Run QA Lab parity gate
needs: [resolve_target]
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: 30
permissions:
contents: read
env:
QA_PARITY_CONCURRENCY: "1"
OPENCLAW_QA_TRANSPORT_READY_TIMEOUT_MS: "180000"
OPENAI_API_KEY: ""
ANTHROPIC_API_KEY: ""
OPENCLAW_LIVE_OPENAI_KEY: ""
OPENCLAW_LIVE_ANTHROPIC_KEY: ""
OPENCLAW_LIVE_GEMINI_KEY: ""
OPENCLAW_LIVE_SETUP_TOKEN_VALUE: ""
OPENCLAW_BUILD_PRIVATE_QA: "1"
OPENCLAW_ENABLE_PRIVATE_QA_CLI: "1"
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
with:
ref: ${{ needs.resolve_target.outputs.ref }}
fetch-depth: 1

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
node-version: ${{ env.NODE_VERSION }}
pnpm-version: ${{ env.PNPM_VERSION }}
install-bun: "true"

- name: Build private QA runtime
run: pnpm build

- name: Run OpenAI candidate lane
run: |
pnpm openclaw qa suite \
--provider-mode mock-openai \
--parity-pack agentic \
--concurrency "${QA_PARITY_CONCURRENCY}" \
--model "${OPENCLAW_CI_OPENAI_MODEL}" \
--alt-model openai/gpt-5.4-alt \
--output-dir .artifacts/qa-e2e/gpt54

- name: Run Opus 4.6 lane
run: |
pnpm openclaw qa suite \
--provider-mode mock-openai \
--parity-pack agentic \
--concurrency "${QA_PARITY_CONCURRENCY}" \
--model anthropic/claude-opus-4-6 \
--alt-model anthropic/claude-sonnet-4-6 \
--output-dir .artifacts/qa-e2e/opus46

- name: Generate parity report
run: |
pnpm openclaw qa parity-report \
--repo-root . \
--candidate-summary .artifacts/qa-e2e/gpt54/qa-suite-summary.json \
--baseline-summary .artifacts/qa-e2e/opus46/qa-suite-summary.json \
--candidate-label "${OPENCLAW_CI_OPENAI_MODEL}" \
--baseline-label anthropic/claude-opus-4-6 \
--output-dir .artifacts/qa-e2e/parity

- name: Upload parity artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: release-qa-parity-${{ needs.resolve_target.outputs.sha }}
path: .artifacts/qa-e2e/
retention-days: 14
if-no-files-found: warn

qa_live_matrix_release_checks:
name: Run QA Lab live Matrix lane
needs: [resolve_target]
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: 60
permissions:
contents: read
pull-requests: read
environment: qa-live-shared
env:
OPENCLAW_BUILD_PRIVATE_QA: "1"
OPENCLAW_ENABLE_PRIVATE_QA_CLI: "1"
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
with:
ref: ${{ needs.resolve_target.outputs.ref }}
fetch-depth: 1

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
node-version: ${{ env.NODE_VERSION }}
pnpm-version: ${{ env.PNPM_VERSION }}
install-bun: "true"

- name: Validate required QA credential env
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
shell: bash
run: |
set -euo pipefail

if [[ -z "${OPENAI_API_KEY:-}" ]]; then
echo "Missing required OPENAI_API_KEY." >&2
exit 1
fi

- name: Build private QA runtime
run: pnpm build

- name: Run Matrix live lane
id: run_lane
shell: bash
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
run: |
set -euo pipefail

output_dir=".artifacts/qa-e2e/matrix-live-release-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"

pnpm openclaw qa matrix \
--repo-root . \
--output-dir "${output_dir}" \
--provider-mode live-frontier \
--model "${OPENCLAW_CI_OPENAI_MODEL}" \
--alt-model "${OPENCLAW_CI_OPENAI_MODEL}" \
--fast

- name: Upload Matrix QA artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: release-qa-live-matrix-${{ needs.resolve_target.outputs.sha }}
path: ${{ steps.run_lane.outputs.output_dir }}
retention-days: 14
if-no-files-found: warn

qa_live_telegram_release_checks:
name: Run QA Lab live Telegram lane
needs: [resolve_target]
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: 60
permissions:
contents: read
pull-requests: read
environment: qa-live-shared
env:
OPENCLAW_BUILD_PRIVATE_QA: "1"
OPENCLAW_ENABLE_PRIVATE_QA_CLI: "1"
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
with:
ref: ${{ needs.resolve_target.outputs.ref }}
fetch-depth: 1

- name: Setup Node environment
uses: ./.github/actions/setup-node-env
with:
node-version: ${{ env.NODE_VERSION }}
pnpm-version: ${{ env.PNPM_VERSION }}
install-bun: "true"

- name: Validate required QA credential env
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
shell: bash
run: |
set -euo pipefail

require_var() {
local key="$1"
if [[ -z "${!key:-}" ]]; then
echo "Missing required ${key}." >&2
exit 1
fi
}

require_var OPENAI_API_KEY
require_var OPENCLAW_QA_CONVEX_SITE_URL
require_var OPENCLAW_QA_CONVEX_SECRET_CI

- name: Build private QA runtime
run: pnpm build

- name: Run Telegram live lane
id: run_lane
shell: bash
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
OPENCLAW_QA_TELEGRAM_CAPTURE_CONTENT: "1"
run: |
set -euo pipefail

output_dir=".artifacts/qa-e2e/telegram-live-release-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"

pnpm openclaw qa telegram \
--repo-root . \
--output-dir "${output_dir}" \
--provider-mode live-frontier \
--model "${OPENCLAW_CI_OPENAI_MODEL}" \
--alt-model "${OPENCLAW_CI_OPENAI_MODEL}" \
--fast \
--credential-source convex \
--credential-role ci

- name: Upload Telegram QA artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: release-qa-live-telegram-${{ needs.resolve_target.outputs.sha }}
path: ${{ steps.run_lane.outputs.output_dir }}
retention-days: 14
if-no-files-found: warn

@@ -7,6 +7,7 @@ on:

permissions:
contents: read
packages: write
pull-requests: read

concurrency:
@@ -20,12 +21,13 @@ jobs:
live_and_openwebui_checks:
permissions:
contents: read
packages: write
pull-requests: read
uses: ./.github/workflows/openclaw-live-and-e2e-checks-reusable.yml
with:
ref: ${{ github.sha }}
include_repo_e2e: true
include_release_path_suites: false
include_release_path_suites: true
include_openwebui: true
include_live_suites: true
secrets:
@@ -72,3 +74,4 @@ jobs:
OPENCLAW_CLAUDE_SETTINGS_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_JSON }}
OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON: ${{ secrets.OPENCLAW_CLAUDE_SETTINGS_LOCAL_JSON }}
OPENCLAW_GEMINI_SETTINGS_JSON: ${{ secrets.OPENCLAW_GEMINI_SETTINGS_JSON }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}

10 .github/workflows/parity-gate.yml (vendored)

@@ -13,6 +13,7 @@ on:
- "src/gateway/**"
- "src/media/**"
- ".github/workflows/parity-gate.yml"
workflow_dispatch:

permissions:
contents: read
@@ -23,7 +24,7 @@ concurrency:

jobs:
parity-gate:
name: Run the GPT-5.4 / Opus 4.6 parity gate against the qa-lab mock
name: Run the OpenAI / Opus 4.6 parity gate against the qa-lab mock
if: ${{ github.event.pull_request.draft != true }}
runs-on: blacksmith-32vcpu-ubuntu-2404
timeout-minutes: 30
@@ -41,6 +42,7 @@ jobs:
# followthrough gate that expects a fast post-approval read within a 30s
# agent.wait timeout.
QA_PARITY_CONCURRENCY: "1"
OPENCLAW_CI_OPENAI_MODEL: ${{ vars.OPENCLAW_CI_OPENAI_MODEL }}
OPENCLAW_QA_TRANSPORT_READY_TIMEOUT_MS: "180000"
OPENAI_API_KEY: ""
ANTHROPIC_API_KEY: ""
@@ -74,13 +76,13 @@ jobs:
# The approval-turn sentinel still runs inside the full parity pack below.
# Keep the exact mock read-plan contract in deterministic unit tests instead
# of paying for a separate full-runtime preflight that has been flaky in CI.
- name: Run GPT-5.4 lane
- name: Run OpenAI candidate lane
run: |
pnpm openclaw qa suite \
--provider-mode mock-openai \
--parity-pack agentic \
--concurrency "${QA_PARITY_CONCURRENCY}" \
--model openai/gpt-5.4 \
--model "${OPENCLAW_CI_OPENAI_MODEL}" \
--alt-model openai/gpt-5.4-alt \
--output-dir .artifacts/qa-e2e/gpt54

@@ -100,7 +102,7 @@ jobs:
--repo-root . \
--candidate-summary .artifacts/qa-e2e/gpt54/qa-suite-summary.json \
--baseline-summary .artifacts/qa-e2e/opus46/qa-suite-summary.json \
--candidate-label openai/gpt-5.4 \
--candidate-label "${OPENCLAW_CI_OPENAI_MODEL}" \
--baseline-label anthropic/claude-opus-4-6 \
--output-dir .artifacts/qa-e2e/parity

445 .github/workflows/qa-live-transports-convex.yml (vendored, new file)

@@ -0,0 +1,445 @@
name: QA-Lab - All Lanes

on:
schedule:
- cron: "41 4 * * *"
workflow_dispatch:
inputs:
ref:
description: Ref, tag, or SHA to run
required: true
default: main
type: string
scenario:
description: Optional comma-separated Telegram scenario ids
required: false
type: string
discord_scenario:
description: Optional comma-separated Discord scenario ids
required: false
type: string

permissions:
contents: read
pull-requests: read

concurrency:
group: qa-lab-all-lanes-${{ github.event_name == 'workflow_dispatch' && inputs.ref || github.sha }}
cancel-in-progress: false

env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
NODE_VERSION: "24.x"
PNPM_VERSION: "10.33.0"
OPENCLAW_CI_OPENAI_MODEL: ${{ vars.OPENCLAW_CI_OPENAI_MODEL }}
OPENCLAW_BUILD_PRIVATE_QA: "1"
OPENCLAW_ENABLE_PRIVATE_QA_CLI: "1"

jobs:
authorize_actor:
name: Authorize workflow actor
runs-on: blacksmith-8vcpu-ubuntu-2404
steps:
- name: Require maintainer-level repository access
uses: actions/github-script@v8
with:
script: |
if (context.eventName === "schedule") {
core.info("Scheduled default-branch QA run; actor permission check is only required for manual dispatch.");
return;
}
const allowed = new Set(["admin", "maintain", "write"]);
const { owner, repo } = context.repo;
const { data } = await github.rest.repos.getCollaboratorPermissionLevel({
owner,
repo,
username: context.actor,
});
const permission = data.permission;
core.info(`Actor ${context.actor} permission: ${permission}`);
if (!allowed.has(permission)) {
core.setFailed(
`Workflow requires write/maintain/admin access. Actor "${context.actor}" has "${permission}".`,
);
}

validate_selected_ref:
name: Validate selected ref
needs: authorize_actor
runs-on: blacksmith-8vcpu-ubuntu-2404
outputs:
selected_sha: ${{ steps.validate.outputs.selected_sha }}
trusted_reason: ${{ steps.validate.outputs.trusted_reason }}
steps:
- name: Checkout selected ref
uses: actions/checkout@v6
with:
ref: ${{ github.event_name == 'workflow_dispatch' && inputs.ref || github.sha }}
fetch-depth: 0

- name: Validate selected ref
id: validate
env:
GH_TOKEN: ${{ github.token }}
INPUT_REF: ${{ github.event_name == 'workflow_dispatch' && inputs.ref || github.sha }}
shell: bash
run: |
set -euo pipefail
selected_sha="$(git rev-parse HEAD)"
trusted_reason=""

git fetch --no-tags origin +refs/heads/main:refs/remotes/origin/main

if git merge-base --is-ancestor "$selected_sha" refs/remotes/origin/main; then
trusted_reason="main-ancestor"
elif git tag --points-at "$selected_sha" | grep -Eq '^v'; then
trusted_reason="release-tag"
elif [[ "$INPUT_REF" =~ ^release/[0-9]{4}\.[0-9]+\.[0-9]+$ ]]; then
git fetch --no-tags origin "+refs/heads/${INPUT_REF}:refs/remotes/origin/${INPUT_REF}"
release_branch_sha="$(git rev-parse "refs/remotes/origin/${INPUT_REF}")"
if [[ "$selected_sha" == "$release_branch_sha" ]]; then
trusted_reason="release-branch-head"
fi
else
pr_head_count="$(
gh api \
-H "Accept: application/vnd.github+json" \
"repos/${GITHUB_REPOSITORY}/commits/${selected_sha}/pulls" \
--jq '[.[] | select(.state == "open" and .head.repo.full_name == "'"${GITHUB_REPOSITORY}"'" and .head.sha == "'"${selected_sha}"'")] | length'
)"
if [[ "$pr_head_count" != "0" ]]; then
trusted_reason="open-pr-head"
fi
fi

if [[ -z "$trusted_reason" ]]; then
echo "Ref '${INPUT_REF}' resolved to $selected_sha, which is not trusted for this secret-bearing QA run." >&2
echo "Allowed refs must be on main, point to a release tag, match a release branch head, or match an open PR head in ${GITHUB_REPOSITORY}." >&2
exit 1
fi

echo "selected_sha=$selected_sha" >> "$GITHUB_OUTPUT"
echo "trusted_reason=$trusted_reason" >> "$GITHUB_OUTPUT"
{
echo "Validated ref: \`${INPUT_REF}\`"
echo "Resolved SHA: \`$selected_sha\`"
echo "Trust reason: \`$trusted_reason\`"
} >> "$GITHUB_STEP_SUMMARY"

  run_mock_parity:
    name: Run QA Lab parity gate
    needs: [validate_selected_ref]
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 30
    env:
      QA_PARITY_CONCURRENCY: "1"
      OPENCLAW_QA_TRANSPORT_READY_TIMEOUT_MS: "180000"
      OPENAI_API_KEY: ""
      ANTHROPIC_API_KEY: ""
      OPENCLAW_LIVE_OPENAI_KEY: ""
      OPENCLAW_LIVE_ANTHROPIC_KEY: ""
      OPENCLAW_LIVE_GEMINI_KEY: ""
      OPENCLAW_LIVE_SETUP_TOKEN_VALUE: ""
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
        with:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Build private QA runtime
        run: pnpm build

      - name: Run OpenAI candidate lane
        run: |
          pnpm openclaw qa suite \
            --provider-mode mock-openai \
            --parity-pack agentic \
            --concurrency "${QA_PARITY_CONCURRENCY}" \
            --model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --alt-model openai/gpt-5.4-alt \
            --output-dir .artifacts/qa-e2e/gpt54

      - name: Run Opus 4.6 lane
        run: |
          pnpm openclaw qa suite \
            --provider-mode mock-openai \
            --parity-pack agentic \
            --concurrency "${QA_PARITY_CONCURRENCY}" \
            --model anthropic/claude-opus-4-6 \
            --alt-model anthropic/claude-sonnet-4-6 \
            --output-dir .artifacts/qa-e2e/opus46

      - name: Generate parity report
        run: |
          pnpm openclaw qa parity-report \
            --repo-root . \
            --candidate-summary .artifacts/qa-e2e/gpt54/qa-suite-summary.json \
            --baseline-summary .artifacts/qa-e2e/opus46/qa-suite-summary.json \
            --candidate-label "${OPENCLAW_CI_OPENAI_MODEL}" \
            --baseline-label anthropic/claude-opus-4-6 \
            --output-dir .artifacts/qa-e2e/parity

      - name: Upload parity artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: qa-parity-${{ github.run_id }}-${{ github.run_attempt }}
          path: .artifacts/qa-e2e/
          retention-days: 14
          if-no-files-found: warn

  run_live_matrix:
    name: Run Matrix live QA lane
    needs: [authorize_actor, validate_selected_ref]
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 60
    environment: qa-live-shared
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
        with:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Validate required QA credential env
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        shell: bash
        run: |
          set -euo pipefail

          if [[ -z "${OPENAI_API_KEY:-}" ]]; then
            echo "Missing required OPENAI_API_KEY." >&2
            exit 1
          fi

      - name: Build private QA runtime
        run: pnpm build

      - name: Run Matrix live lane
        id: run_lane
        shell: bash
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
        run: |
          set -euo pipefail

          output_dir=".artifacts/qa-e2e/matrix-live-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
          echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"

          pnpm openclaw qa matrix \
            --repo-root . \
            --output-dir "${output_dir}" \
            --provider-mode live-frontier \
            --model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --alt-model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --fast

      - name: Upload Matrix QA artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: qa-live-matrix-${{ github.run_id }}-${{ github.run_attempt }}
          path: ${{ steps.run_lane.outputs.output_dir }}
          retention-days: 14
          if-no-files-found: warn

  run_live_telegram:
    name: Run Telegram live QA lane with Convex leases
    needs: [authorize_actor, validate_selected_ref]
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 60
    environment: qa-live-shared
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
        with:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Validate required QA credential env
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
          OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
        shell: bash
        run: |
          set -euo pipefail

          require_var() {
            local key="$1"
            if [[ -z "${!key:-}" ]]; then
              echo "Missing required ${key}." >&2
              exit 1
            fi
          }

          require_var OPENAI_API_KEY
          require_var OPENCLAW_QA_CONVEX_SITE_URL
          require_var OPENCLAW_QA_CONVEX_SECRET_CI

      - name: Build private QA runtime
        run: pnpm build

      - name: Run Telegram live lane
        id: run_lane
        shell: bash
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
          OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
          OPENCLAW_QA_TELEGRAM_CAPTURE_CONTENT: "1"
          INPUT_SCENARIO: ${{ github.event_name == 'workflow_dispatch' && inputs.scenario || '' }}
        run: |
          set -euo pipefail

          output_dir=".artifacts/qa-e2e/telegram-live-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
          scenario_args=()

          if [[ -n "${INPUT_SCENARIO// }" ]]; then
            IFS=',' read -r -a raw_scenarios <<<"${INPUT_SCENARIO}"
            for raw in "${raw_scenarios[@]}"; do
              scenario="$(printf '%s' "${raw}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
              if [[ -n "${scenario}" ]]; then
                scenario_args+=(--scenario "${scenario}")
              fi
            done
          fi

          echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"

          pnpm openclaw qa telegram \
            --repo-root . \
            --output-dir "${output_dir}" \
            --provider-mode live-frontier \
            --model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --alt-model "${OPENCLAW_CI_OPENAI_MODEL}" \
            --fast \
            --credential-source convex \
            --credential-role ci \
            "${scenario_args[@]}"

      - name: Upload Telegram QA artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: qa-live-telegram-${{ github.run_id }}-${{ github.run_attempt }}
          path: ${{ steps.run_lane.outputs.output_dir }}
          retention-days: 14
          if-no-files-found: warn

  run_live_discord:
    name: Run Discord live QA lane with Convex leases
    needs: [authorize_actor, validate_selected_ref]
    runs-on: blacksmith-32vcpu-ubuntu-2404
    timeout-minutes: 60
    environment: qa-live-shared
    steps:
      - name: Checkout selected ref
        uses: actions/checkout@v6
        with:
          ref: ${{ needs.validate_selected_ref.outputs.selected_sha }}
          fetch-depth: 1

      - name: Setup Node environment
        uses: ./.github/actions/setup-node-env
        with:
          node-version: ${{ env.NODE_VERSION }}
          pnpm-version: ${{ env.PNPM_VERSION }}
          install-bun: "true"

      - name: Validate required QA credential env
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
          OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
        shell: bash
        run: |
          set -euo pipefail

          require_var() {
            local key="$1"
            if [[ -z "${!key:-}" ]]; then
              echo "Missing required ${key}." >&2
              exit 1
            fi
          }

          require_var OPENAI_API_KEY
          require_var OPENCLAW_QA_CONVEX_SITE_URL
          require_var OPENCLAW_QA_CONVEX_SECRET_CI

      - name: Build private QA runtime
        run: pnpm build

      - name: Run Discord live lane
        id: run_lane
        shell: bash
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENCLAW_QA_CONVEX_SITE_URL: ${{ secrets.OPENCLAW_QA_CONVEX_SITE_URL }}
          OPENCLAW_QA_CONVEX_SECRET_CI: ${{ secrets.OPENCLAW_QA_CONVEX_SECRET_CI }}
          OPENCLAW_QA_REDACT_PUBLIC_METADATA: "1"
          OPENCLAW_QA_DISCORD_CAPTURE_CONTENT: "1"
          INPUT_SCENARIO: ${{ github.event_name == 'workflow_dispatch' && inputs.discord_scenario || '' }}
        run: |
          set -euo pipefail

          output_dir=".artifacts/qa-e2e/discord-live-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
          scenario_args=()

          if [[ -n "${INPUT_SCENARIO// }" ]]; then
            IFS=',' read -r -a raw_scenarios <<<"${INPUT_SCENARIO}"
            for raw in "${raw_scenarios[@]}"; do
              scenario="$(printf '%s' "${raw}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
              if [[ -n "${scenario}" ]]; then
                scenario_args+=(--scenario "${scenario}")
              fi
            done
          fi

          echo "output_dir=${output_dir}" >> "$GITHUB_OUTPUT"

          pnpm openclaw qa discord \
            --repo-root . \
            --output-dir "${output_dir}" \
            --provider-mode live-frontier \
            --model openai/gpt-5.4 \
            --alt-model openai/gpt-5.4 \
            --fast \
            --credential-source convex \
            --credential-role ci \
            "${scenario_args[@]}"

      - name: Upload Discord QA artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: qa-live-discord-${{ github.run_id }}-${{ github.run_attempt }}
          path: ${{ steps.run_lane.outputs.output_dir }}
          retention-days: 14
          if-no-files-found: warn
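The Telegram and Discord lanes above both expand a comma-separated scenario input into repeated `--scenario` flags, trimming whitespace and dropping empty entries. A standalone sketch of that parsing, with a made-up `INPUT_SCENARIO` value for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the scenario-list parsing used by the live lanes.
# INPUT_SCENARIO is a hypothetical example value, not a real scenario list.
set -euo pipefail

INPUT_SCENARIO="onboarding, smoke , ,gateway"
scenario_args=()

if [[ -n "${INPUT_SCENARIO// }" ]]; then
  IFS=',' read -r -a raw_scenarios <<<"${INPUT_SCENARIO}"
  for raw in "${raw_scenarios[@]}"; do
    # Trim leading/trailing whitespace; skip entries that are empty after trimming.
    scenario="$(printf '%s' "${raw}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
    if [[ -n "${scenario}" ]]; then
      scenario_args+=(--scenario "${scenario}")
    fi
  done
fi

# Three scenarios survive: onboarding, smoke, gateway.
printf '%s\n' "${scenario_args[@]}"
```

The empty entry between the commas is dropped, so the CLI never sees a bare `--scenario` with no value.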
.github/workflows/test-performance-agent.yml (new file, vendored, 278 lines)
@@ -0,0 +1,278 @@
name: Test Performance Agent

on:
  workflow_run: # zizmor: ignore[dangerous-triggers] main-only test optimization after trusted CI; job gates repository, event, branch, actor, conclusion, current main SHA, and daily cadence before using write token
    workflows:
      - CI
    types:
      - completed
  workflow_dispatch:

permissions:
  actions: read
  contents: write

concurrency:
  group: test-performance-agent-main
  cancel-in-progress: false

env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
  TEST_PERF_BEFORE: .artifacts/test-perf/baseline-before.json
  TEST_PERF_AFTER: .artifacts/test-perf/after-agent.json
  TEST_PERF_COMPARE: .artifacts/test-perf/agent-compare.json

jobs:
  optimize-tests:
    if: >
      github.repository == 'openclaw/openclaw' &&
      (github.event_name == 'workflow_dispatch' ||
       (github.event.workflow_run.conclusion == 'success' &&
        github.event.workflow_run.event == 'push' &&
        github.event.workflow_run.head_branch == 'main' &&
        !endsWith(github.event.workflow_run.actor.login, '[bot]')))
    runs-on: ubuntu-24.04
    timeout-minutes: 240
    steps:
      - name: Checkout
        uses: actions/checkout@v6
        with:
          ref: main
          fetch-depth: 0
          persist-credentials: false
          submodules: false

      - name: Gate trusted main activity and daily cadence
        id: gate
        env:
          EVENT_NAME: ${{ github.event_name }}
          GH_TOKEN: ${{ github.token }}
          WORKFLOW_HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
        run: |
          set -euo pipefail

          if [ "$EVENT_NAME" != "workflow_run" ]; then
            echo "run_agent=true" >> "$GITHUB_OUTPUT"
            echo "base_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
            exit 0
          fi

          for attempt in 1 2 3 4 5; do
            if git fetch --no-tags origin main; then
              break
            fi
            if [ "$attempt" = "5" ]; then
              echo "Failed to fetch main after retries." >&2
              exit 1
            fi
            echo "Fetch attempt ${attempt} failed; retrying."
            sleep $((attempt * 2))
          done

          remote_main="$(git rev-parse origin/main)"
          if [ "$remote_main" != "$WORKFLOW_HEAD_SHA" ]; then
            echo "CI run is superseded by ${remote_main}; skipping test performance agent for ${WORKFLOW_HEAD_SHA}."
            echo "run_agent=false" >> "$GITHUB_OUTPUT"
            exit 0
          fi

          day_start="$(date -u +%Y-%m-%dT00:00:00Z)"
          runs_json="$RUNNER_TEMP/test-performance-agent-runs.json"
          gh api --method GET "repos/${GITHUB_REPOSITORY}/actions/workflows/test-performance-agent.yml/runs" \
            -f branch=main \
            -f event=workflow_run \
            -f per_page=50 > "$runs_json"

          prior_runs="$(
            jq -r \
              --argjson current_run_id "$GITHUB_RUN_ID" \
              --arg day_start "$day_start" \
              '.workflow_runs[]
                | select(.database_id != $current_run_id)
                | select(.created_at >= $day_start)
                | select(.status != "cancelled")
                | select((.conclusion // "") != "skipped")
                | [.database_id, .status, (.conclusion // ""), .created_at, .head_sha]
                | @tsv' "$runs_json"
          )"

          if [ -n "$prior_runs" ]; then
            echo "Test performance agent already ran or is running today; skipping."
            printf '%s\n' "$prior_runs"
            echo "run_agent=false" >> "$GITHUB_OUTPUT"
            exit 0
          fi

          echo "run_agent=true" >> "$GITHUB_OUTPUT"
          echo "base_sha=${remote_main}" >> "$GITHUB_OUTPUT"

      - name: Setup Node environment
        if: steps.gate.outputs.run_agent == 'true'
        uses: ./.github/actions/setup-node-env
        with:
          install-bun: "false"

      - name: Ensure test performance agent key exists
        if: steps.gate.outputs.run_agent == 'true'
        env:
          OPENAI_API_KEY: ${{ secrets.OPENCLAW_TEST_PERF_AGENT_OPENAI_API_KEY || secrets.OPENAI_API_KEY }}
        run: |
          set -euo pipefail
          if [ -z "${OPENAI_API_KEY:-}" ]; then
            echo "Missing OPENCLAW_TEST_PERF_AGENT_OPENAI_API_KEY or OPENAI_API_KEY secret." >&2
            exit 1
          fi

      - name: Build baseline full-suite performance report
        if: steps.gate.outputs.run_agent == 'true'
        run: pnpm test:perf:groups --full-suite --allow-failures --output "$TEST_PERF_BEFORE" --limit 20 --top-files 40

      - name: Run Codex test performance agent
        if: steps.gate.outputs.run_agent == 'true'
        uses: openai/codex-action@v1
        with:
          openai-api-key: ${{ secrets.OPENCLAW_TEST_PERF_AGENT_OPENAI_API_KEY || secrets.OPENAI_API_KEY }}
          prompt-file: .github/codex/prompts/test-performance-agent.md
          model: ${{ vars.OPENCLAW_CI_OPENAI_MODEL_BARE }}
          effort: high
          sandbox: workspace-write
          safety-strategy: drop-sudo
          codex-args: '["--full-auto"]'

      - name: Enforce focused test performance patch
        if: steps.gate.outputs.run_agent == 'true'
        id: patch
        run: |
          set -euo pipefail

          untracked="$(git ls-files --others --exclude-standard)"
          if [ -n "$untracked" ]; then
            echo "Test performance agent created untracked files; forbidden:"
            printf '%s\n' "$untracked"
            exit 1
          fi

          added_deleted_or_renamed="$(git diff --name-status --diff-filter=ADR)"
          if [ -n "$added_deleted_or_renamed" ]; then
            echo "Test performance agent added, deleted, or renamed tracked files; forbidden:"
            printf '%s\n' "$added_deleted_or_renamed"
            exit 1
          fi

          bad_paths="$(
            git diff --name-only | while IFS= read -r path; do
              case "$path" in
                apps/*|extensions/*|packages/*|scripts/*|src/*|Swabble/*|test/*|ui/*) ;;
                *) printf '%s\n' "$path" ;;
              esac
            done
          )"
          if [ -n "$bad_paths" ]; then
            echo "Test performance agent touched forbidden paths:"
            printf '%s\n' "$bad_paths"
            exit 1
          fi

          if git diff --quiet; then
            echo "has_changes=false" >> "$GITHUB_OUTPUT"
          else
            echo "has_changes=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Restore Node 24 path
        if: steps.gate.outputs.run_agent == 'true' && steps.patch.outputs.has_changes == 'true'
        run: | # zizmor: ignore[github-env] NODE_BIN is set by the trusted local setup-node-env action in this same job
          set -euo pipefail
          export PATH="${NODE_BIN}:${PATH}"
          echo "${NODE_BIN}" >> "$GITHUB_PATH"
          node -v
          corepack enable
          pnpm -v

      - name: Run full-suite performance report after agent changes
        if: steps.gate.outputs.run_agent == 'true' && steps.patch.outputs.has_changes == 'true'
        run: pnpm test:perf:groups --full-suite --output "$TEST_PERF_AFTER" --limit 20 --top-files 40

      - name: Compare test performance reports
        if: steps.gate.outputs.run_agent == 'true' && steps.patch.outputs.has_changes == 'true'
        run: pnpm test:perf:groups:compare "$TEST_PERF_BEFORE" "$TEST_PERF_AFTER" --output "$TEST_PERF_COMPARE" --limit 20 --top-files 40

      - name: Enforce coverage-preserving test count
        if: steps.gate.outputs.run_agent == 'true' && steps.patch.outputs.has_changes == 'true'
        run: |
          set -euo pipefail
          node <<'NODE'
          const fs = require("node:fs");
          const before = JSON.parse(fs.readFileSync(process.env.TEST_PERF_BEFORE, "utf8"));
          const after = JSON.parse(fs.readFileSync(process.env.TEST_PERF_AFTER, "utf8"));

          if (before.failed) {
            console.log("Baseline had failing configs; skipping total test-count comparison against partial report.");
            process.exit(0);
          }

          const beforeTests = before.totals?.testCount ?? 0;
          const afterTests = after.totals?.testCount ?? 0;
          if (afterTests < beforeTests) {
            console.error(`Test count decreased from ${beforeTests} to ${afterTests}; refusing coverage-reducing patch.`);
            process.exit(1);
          }
          console.log(`Test count preserved: ${beforeTests} -> ${afterTests}.`);
          NODE

      - name: Check changed lanes
        if: steps.gate.outputs.run_agent == 'true' && steps.patch.outputs.has_changes == 'true'
        run: pnpm check:changed

      - name: Commit test performance updates
        if: steps.gate.outputs.run_agent == 'true' && steps.patch.outputs.has_changes == 'true'
        env:
          GITHUB_TOKEN: ${{ github.token }}
          TARGET_BRANCH: main
        run: |
          set -euo pipefail

          if git diff --quiet; then
            echo "No test performance changes."
            exit 0
          fi

          git config user.name "openclaw-test-performance-agent[bot]"
          git config user.email "openclaw-test-performance-agent[bot]@users.noreply.github.com"
          git add apps extensions packages scripts src Swabble test ui
          git commit --no-verify -m "test: optimize slow tests"

          for attempt in 1 2 3 4 5; do
            if ! git fetch --no-tags origin "${TARGET_BRANCH}"; then
              echo "Fetch attempt ${attempt} failed; retrying."
              sleep $((attempt * 2))
              continue
            fi
            if git push "https://x-access-token:${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git" HEAD:"${TARGET_BRANCH}"; then
              exit 0
            fi
            remote_main="$(git rev-parse "origin/${TARGET_BRANCH}")"
            if [ "$remote_main" != "$(git rev-parse HEAD^)" ]; then
              echo "main advanced; rebasing test performance update onto ${remote_main}."
              if ! git rebase "origin/${TARGET_BRANCH}"; then
                echo "Test performance update no longer applies cleanly; skipping stale update."
                git rebase --abort || true
                exit 0
              fi
              pnpm check:changed
            fi
            echo "Test performance update attempt ${attempt} failed; retrying."
            sleep $((attempt * 2))
          done

          echo "Failed to push test performance updates after retries." >&2
          exit 1

      - name: Upload test performance artifacts
        if: steps.gate.outputs.run_agent == 'true' && always()
        uses: actions/upload-artifact@v7
        with:
          name: test-performance-agent-${{ github.run_id }}
          path: .artifacts/test-perf/
          if-no-files-found: ignore
          retention-days: 14
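The "Enforce focused test performance patch" step above whitelists a fixed set of top-level directories; any changed path outside them fails the job. A minimal sketch of that path filter, fed from a hypothetical changed-file list instead of a real `git diff`:

```shell
#!/usr/bin/env bash
# Sketch of the allowed-path filter from the focused-patch guard.
# changed_paths is a hypothetical stand-in for `git diff --name-only` output.
set -euo pipefail

changed_paths=$'src/agents/loop.ts\ntest/agents/loop.test.ts\n.github/workflows/ci.yml'

bad_paths="$(
  printf '%s\n' "$changed_paths" | while IFS= read -r path; do
    case "$path" in
      # Whitelisted top-level directories pass through silently.
      apps/*|extensions/*|packages/*|scripts/*|src/*|Swabble/*|test/*|ui/*) ;;
      # Anything else is collected as a violation.
      *) printf '%s\n' "$path" ;;
    esac
  done
)"

# Only the workflow file falls outside the whitelist here.
printf 'violations: %s\n' "$bad_paths"
```

In the real step a non-empty `bad_paths` aborts the run, so the agent can only ever commit inside the whitelisted trees.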
.gitignore (vendored, 4 lines changed)
@@ -152,3 +152,7 @@ test/fixtures/openclaw-vitest-unit-report.json
analysis/
.artifacts/qa-e2e/
extensions/qa-lab/web/dist/

# Generated bundled plugin runtime dependency manifests
extensions/**/.openclaw-runtime-deps.json
extensions/**/.openclaw-runtime-deps-stamp.json

@@ -39,7 +39,12 @@
    "details",
    "summary",
    "p",
    "div",
    "strong",
    "span",
    "iframe",
    "h2",
    "h3",
    "picture",
    "source",
    "Tooltip",

@@ -11,24 +11,53 @@
    "eslint-plugin-unicorn/prefer-array-find": "error",
    "eslint/no-array-constructor": "error",
    "eslint/no-await-in-loop": "off",
    "eslint/no-constructor-return": "error",
    "eslint/no-div-regex": "error",
    "eslint/no-extra-label": "error",
    "eslint/no-empty-pattern": "error",
    "eslint/no-lone-blocks": "error",
    "eslint/no-multi-str": "error",
    "eslint/no-new": "error",
    "eslint/no-object-constructor": "error",
    "eslint/no-proto": "error",
    "eslint/no-regex-spaces": "error",
    "eslint/no-return-assign": "error",
    "eslint/no-sequences": "error",
    "eslint/no-self-compare": "error",
    "eslint/no-shadow": "off",
    "eslint/no-var": "error",
    "eslint/no-useless-call": "error",
    "eslint/no-useless-computed-key": "error",
    "eslint/no-useless-concat": "error",
    "eslint/no-useless-constructor": "error",
    "eslint/no-warning-comments": "error",
    "eslint/no-unmodified-loop-condition": "error",
    "eslint/no-new-wrappers": "error",
    "eslint/no-else-return": "error",
    "eslint/no-case-declarations": "error",
    "eslint/prefer-exponentiation-operator": "error",
    "eslint/prefer-numeric-literals": "error",
    "eslint/radix": "error",
    "eslint/unicode-bom": "error",
    "eslint/yoda": "error",
    "import/no-absolute-path": "error",
    "import/no-empty-named-blocks": "error",
    "import/no-self-import": "error",
    "node/no-exports-assign": "error",
    "eslint-plugin-unicorn/prefer-set-size": "error",
    "oxc/no-accumulating-spread": "error",
    "oxc/no-async-endpoint-handlers": "error",
    "oxc/no-map-spread": "error",
    "promise/no-new-statics": "error",
    "typescript/adjacent-overload-signatures": "error",
    "typescript/ban-tslint-comment": "error",
    "typescript/consistent-return": "error",
    "typescript/no-empty-object-type": ["error", { "allowInterfaces": "with-single-extends" }],
    "typescript/no-explicit-any": "error",
    "typescript/no-extraneous-class": "error",
    "typescript/no-meaningless-void-operator": "error",
    "typescript/no-non-null-asserted-nullish-coalescing": "error",
    "typescript/no-unnecessary-qualifier": "error",
    "typescript/no-unnecessary-type-assertion": "error",
    "typescript/no-unnecessary-type-arguments": "error",
    "typescript/no-unnecessary-type-constraint": "error",
@@ -36,15 +65,52 @@
    "typescript/no-unnecessary-type-parameters": "error",
    "typescript/no-unsafe-type-assertion": "off",
    "typescript/no-useless-default-assignment": "error",
    "typescript/switch-exhaustiveness-check": [
      "error",
      { "considerDefaultExhaustiveForUnions": true }
    ],
    "typescript/prefer-return-this-type": "error",
    "typescript/prefer-find": "error",
    "typescript/prefer-function-type": "error",
    "typescript/prefer-includes": "error",
    "typescript/prefer-reduce-type-parameter": "error",
    "typescript/prefer-ts-expect-error": "error",
    "unicorn/consistent-date-clone": "error",
    "unicorn/consistent-empty-array-spread": "error",
    "unicorn/consistent-function-scoping": "off",
    "unicorn/no-console-spaces": "error",
    "unicorn/no-length-as-slice-end": "error",
    "unicorn/no-instanceof-array": "error",
    "unicorn/no-negation-in-equality-check": "error",
    "unicorn/no-new-buffer": "error",
    "unicorn/no-typeof-undefined": "error",
    "unicorn/no-unnecessary-array-flat-depth": "error",
    "unicorn/no-unnecessary-array-splice-count": "error",
    "unicorn/no-unnecessary-slice-end": "error",
    "unicorn/no-useless-error-capture-stack-trace": "error",
    "unicorn/no-useless-promise-resolve-reject": "error",
    "unicorn/prefer-date-now": "error",
    "unicorn/prefer-dom-node-text-content": "error",
    "unicorn/prefer-keyboard-event-key": "error",
    "unicorn/prefer-array-some": "error",
    "unicorn/prefer-math-min-max": "error",
    "unicorn/prefer-node-protocol": "error",
    "unicorn/prefer-number-properties": "error",
    "unicorn/prefer-negative-index": "error",
    "unicorn/prefer-optional-catch-binding": "error",
    "unicorn/prefer-prototype-methods": "error",
    "unicorn/prefer-regexp-test": "error",
    "unicorn/prefer-set-size": "error",
    "unicorn/require-post-message-target-origin": "error"
    "unicorn/prefer-string-slice": "error",
    "unicorn/require-array-join-separator": "error",
    "unicorn/require-number-to-fixed-digits-argument": "error",
    "unicorn/require-post-message-target-origin": "error",
    "unicorn/throw-new-error": "error",
    "vitest/no-import-node-test": "error",
    "vitest/consistent-vitest-vi": "error",
    "vitest/prefer-called-once": "error",
    "vitest/prefer-called-times": "error",
    "vitest/prefer-expect-type-of": "error"
  },
  "ignorePatterns": [
    "assets/",

AGENTS.md (274 lines changed)
@@ -1,201 +1,165 @@
# AGENTS.MD
|
||||
|
||||
Telegraph style. Root rules only. Read scoped `AGENTS.md` before touching a subtree.
|
||||
Telegraph style. Root rules only. Read scoped `AGENTS.md` before subtree work.
|
||||
|
||||
## Start
|
||||
|
||||
- Repo: `https://github.com/openclaw/openclaw`
|
||||
- Replies: repo-root file refs only, e.g. `extensions/telegram/src/index.ts:80`. No absolute paths, no `~/`.
|
||||
- CODEOWNERS: maintenance/refactors/tests are ok. For larger behavior, product, security, or ownership-sensitive changes, get a listed owner request/review first.
|
||||
- First pass: run docs list (`pnpm docs:list`; ignore if unavailable), then read only relevant docs/guides.
|
||||
- Missing deps: run `pnpm install`, rerun once, then report first actionable error.
|
||||
- Use "plugin/plugins" in docs/UI/changelog. `extensions/` remains internal workspace layout.
|
||||
- Add channel/plugin/app/doc surface: update `.github/labeler.yml` and matching GitHub labels.
|
||||
- New `AGENTS.md`: add sibling `CLAUDE.md` symlink to it.
|
||||
- Replies: repo-root refs only: `extensions/telegram/src/index.ts:80`. No absolute paths, no `~/`.
|
||||
- Run docs list first: `pnpm docs:list` if available; read relevant docs only.
|
||||
- High-confidence answers only when fixing/triaging: verify source, tests, shipped/current behavior, and dependency contracts before deciding.
|
||||
- Dependency-backed behavior: read upstream dependency docs/source/types first. Do not assume APIs, defaults, errors, timing, or runtime behavior.
|
||||
- Missing deps: `pnpm install`, retry once, then report first actionable error.
|
||||
- CODEOWNERS: maint/refactor/tests ok. Larger behavior/product/security/ownership: owner ask/review.
|
||||
- Wording: product/docs/UI/changelog say "plugin/plugins"; `extensions/` is internal.
|
||||
- New channel/plugin/app/doc surface: update `.github/labeler.yml` + GH labels.
|
||||
- New `AGENTS.md`: add sibling `CLAUDE.md` symlink.
|
||||
|
||||
## Repo Map
|
||||
## Map
|
||||
|
||||
- Core TS: `src/`, `ui/`, `packages/`
|
||||
- Bundled plugins: `extensions/`
|
||||
- Plugin SDK/public contract: `src/plugin-sdk/*`
|
||||
- Core channel internals: `src/channels/*`
|
||||
- Plugin loader/registry/contracts: `src/plugins/*`
|
||||
- Gateway protocol: `src/gateway/protocol/*`
|
||||
- Docs: `docs/`
|
||||
- Apps: `apps/`, `Swabble/`
|
||||
- Installers served from `openclaw.ai`: sibling `../openclaw.ai`
|
||||
|
||||
Scoped guides:
|
||||
|
||||
- `extensions/AGENTS.md`: bundled plugin rules
|
||||
- `src/plugin-sdk/AGENTS.md`: public SDK rules
|
||||
- `src/channels/AGENTS.md`: channel core rules
|
||||
- `src/plugins/AGENTS.md`: plugin loader/registry rules
|
||||
- `src/gateway/AGENTS.md`, `src/gateway/protocol/AGENTS.md`: gateway/protocol rules
|
||||
- `src/agents/AGENTS.md`: agent import/test perf rules
|
||||
- `test/helpers/AGENTS.md`, `test/helpers/channels/AGENTS.md`: shared test helpers
|
||||
- `docs/AGENTS.md`, `ui/AGENTS.md`, `scripts/AGENTS.md`: docs/UI/scripts
|
||||
- Core TS: `src/`, `ui/`, `packages/`; plugins: `extensions/`; SDK: `src/plugin-sdk/*`; channels: `src/channels/*`; loader: `src/plugins/*`; protocol: `src/gateway/protocol/*`; docs/apps: `docs/`, `apps/`, `Swabble/`.
|
||||
- Installers: sibling `../openclaw.ai`.
|
||||
- Scoped guides exist in: `extensions/`, `src/{plugin-sdk,channels,plugins,gateway,gateway/protocol,agents}/`, `test/helpers*/`, `docs/`, `ui/`, `scripts/`.
|
||||
|
||||
## Architecture

- Core must stay extension-agnostic. No core special cases for bundled plugin/provider/channel ids when manifest/registry/capability contracts can express it.
- Extensions cross into core only via `openclaw/plugin-sdk/*`, manifest metadata, injected runtime helpers, and documented local barrels (`api.ts`, `runtime-api.ts`).
- Extension production code must not import core `src/**`, `src/plugin-sdk-internal/**`, another extension's `src/**`, or relative paths outside its package.
- Core code/tests must not deep-import plugin internals (`extensions/*/src/**`, `onboard.js`). Use plugin `api.ts` / public SDK facade / generic contract.
- Extension-owned behavior stays in the extension: legacy repair, detection, onboarding, auth/provider defaults, provider tools/settings.
- Legacy config repair: prefer doctor/fix paths over startup/load-time core migrations.
- If a core test asserts extension-specific behavior, move it to the owning extension or a generic contract test.
- New seams: backwards-compatible, documented, versioned. Third-party plugins exist.
- Channels: `src/channels/**` is implementation. Plugin authors get SDK seams, not channel internals.
- Providers: core owns the generic inference loop; provider plugins own provider-specific auth/catalog/runtime hooks.
- Gateway protocol changes are contract changes: additive first; incompatible changes need versioning/docs/client follow-through.
- Config contract: keep exported types, schema/help, generated metadata, baselines, and docs aligned. Retired public keys stay retired; compatibility belongs in raw migration/doctor paths.
- Plugin architecture direction: manifest-first control plane; targeted runtime loaders; no hidden paths around declared contracts; broad mutable registries are transitional.
- Prompt-cache rule: deterministic ordering for maps/sets/registries/plugin lists/files/network results before model/tool payloads. Preserve old transcript bytes when possible.
## Commands

- Runtime: Node 22+. Keep Node and Bun paths working.
- Install: `pnpm install` (Bun supported; keep lockfiles/patches aligned if touched).
- Dev CLI: `pnpm openclaw ...` or `pnpm dev`.
- Build: `pnpm build`
- Smart local gate: `pnpm check:changed` (scoped typecheck/lint/guards + relevant tests)
- Explain smart gate: `pnpm changed:lanes --json`
- Pre-commit view: `pnpm check:changed --staged`
- Normal full prod sweep: `pnpm check` (prod typecheck/lint/guards, no tests)
- Full tests: `pnpm test`
- Changed tests only: `pnpm test:changed`
- Local serial loop: `pnpm test:serial`
- Extension tests: `pnpm test:extensions` or `pnpm test extensions` = all extension shards; `pnpm test extensions/<id>` = one extension lane. Heavy channels/OpenAI have dedicated shards.
- Shard timing artifact: `.artifacts/vitest-shard-timings.json`; auto-used for balanced shard ordering. Disable with `OPENCLAW_TEST_PROJECTS_TIMINGS=0`.
- Targeted tests: `pnpm test <path-or-filter> [vitest args...]`; do not call raw `vitest`.
- Coverage: `pnpm test:coverage`
- Format check/fix: `pnpm format:check` / `pnpm format`
- Typecheck:
  - `pnpm tsgo`: fastest core prod graph
  - `pnpm tsgo:prod`: core + extensions prod graphs; used by `pnpm check`
  - `pnpm check:test-types` / `pnpm tsgo:test`: all test graphs
  - `pnpm tsgo:all`: all prod + test project refs
  - Debug slices exist; do not present them as the normal user flow.
  - Profile: `pnpm tsgo:profile [core-test|extensions-test|--all]`
- Type policy: use `tsgo`; do not add `tsc --noEmit`, `typecheck`, or `check:types` lanes. `tsc` only for declaration/package-boundary emit gaps.
- Lint:
  - `pnpm lint`: core/extensions/scripts shards
  - `pnpm lint:core`, `pnpm lint:extensions`, `pnpm lint:scripts`
  - `pnpm lint:apps`: Swift/app surface, separate from TS lint
  - `pnpm lint:all`: legacy comparison lane
- Local heavy-check behavior: `OPENCLAW_LOCAL_CHECK=1` default; `OPENCLAW_LOCAL_CHECK_MODE=throttled|full`; `OPENCLAW_LOCAL_CHECK=0` for CI/shared runs.
- Local validation is local-first. Do not default to Blacksmith/Testbox for routine OpenClaw iteration; it burns warm caches and startup time. Use repo `pnpm` lanes first, then reach for remote CI/Testbox only for parity-only failures, secrets/services, or when explicitly requested.
## GitHub / CI

- Triage: list first, hydrate few. Use bounded `gh --json --jq`; avoid repeated full comment scans.
- Search/dedupe: prefer `gh search issues 'repo:openclaw/openclaw is:open <terms>' --json number,title,state,updatedAt --limit 20`.
- PR shortlist: `gh pr list ...`; then `gh pr view <n> --json number,title,body,closingIssuesReferences,files,statusCheckRollup,reviewDecision`.
- After landing a PR: search for duplicate open issues/PRs. Before closing: comment why + canonical link.
- GH comments with markdown backticks, `$`, or shell snippets: avoid inline double-quoted `--body`; use single quotes or `--body-file`.
- A PR review answer must explicitly cover: what bug/behavior we are trying to fix; PR/issue URL(s) and the affected endpoint/surface; whether this is the best possible fix, with high-certainty evidence from code, tests, CI, and shipped/current behavior.
- CI polling: exact SHA, needed fields only. Example: `gh api repos/<owner>/<repo>/actions/runs/<id> --jq '{status,conclusion,head_sha,updated_at,name,path}'`.
- Post-land wait: minimal. Exact landed SHA only. If superseded on `main`, same-branch `cancel-in-progress` cancellations are expected; stop once local touched-surface proof exists. Never wait for newer unrelated `main` unless asked.
- Wait matrix:
  - never: `Auto response`, `Labeler`, `Docs Sync Publish Repo`, `Docs Agent`, `Test Performance Agent`, `Stale`.
  - conditional: `CI` exact SHA only; `Docs` only for docs tasks with no local docs proof; `Workflow Sanity` only for workflow/composite/CI-policy edits; `Plugin NPM Release` only for plugin package/release metadata.
  - release/manual only: `Docker Release`, `OpenClaw NPM Release`, `macOS Release`, `OpenClaw Release Checks`, `Cross-OS Release Checks`, `NPM Telegram Beta E2E`.
  - explicit/surface only: `QA-Lab - All Lanes`, `Scheduled Live And E2E`, `Install Smoke`, `CodeQL`, `Sandbox Common Smoke`, `Parity gate`, `Blacksmith Testbox`, `Control UI Locale Refresh`.
- `/landpr`: do not idle on `auto-response` or `check-docs`. Treat docs as local proof unless `check-docs` already failed with an actionable, relevant error.
- Poll every 30-60s. Fetch jobs/logs/artifacts only after failure/completion or concrete need.
## Gates

- Pre-commit hook: staged format/lint, then `pnpm check:changed --staged`; docs/markdown-only changes skip the changed-scope check; `FAST_COMMIT=1` skips the changed-scope check only.
- Changed lanes:
  - core prod => core prod typecheck + core tests
  - core tests => core test typecheck/tests only
  - extension prod => extension prod typecheck + extension tests
  - extension tests => extension test typecheck/tests only
  - public SDK/plugin contract => extension prod/test validation too
  - unknown root/config => all lanes
- Local loop: prefer `pnpm check:changed`; use `pnpm test:changed` for tests only; use `pnpm check` for a full prod TS/lint sweep without tests.
- Landing on `main`: verify the touched surface near landing; the default bar is `pnpm check` + `pnpm test` when feasible.
- Hard build gate: run/pass `pnpm build` before push if build output, packaging, lazy/module boundaries, or published surfaces can change.
- Do not land related failing format/lint/type/build/tests. If failures are unrelated on latest `origin/main`, say so and give scoped proof.
- CI architecture gate: `check-additional`; local equivalent `pnpm check:architecture`.
- Config docs drift: `pnpm config:docs:gen/check`
- Plugin SDK API drift: `pnpm plugin-sdk:api:gen/check`
- Generated docs baselines: tracked `docs/.generated/*.sha256`; full JSON ignored.
## Code Style

- TypeScript ESM. Strict types. Avoid `any`; prefer real types/`unknown`/narrow adapters.
- No `@ts-nocheck`. No lint suppressions unless intentional and explained.
- External boundaries: prefer `zod` or existing schema helpers.
- Runtime branching: prefer discriminated unions / closed codes over freeform strings.
- Avoid magic sentinels like `?? 0` or an empty object/string when semantics change.
- Dynamic import: do not mix static and dynamic imports of the same module in a prod path. Use a dedicated `*.runtime.ts` lazy boundary. After lazy-boundary edits, run `pnpm build` and check for `[INEFFECTIVE_DYNAMIC_IMPORT]`.
- Cycles: keep `pnpm check:import-cycles` and architecture/madge cycle checks green.
- Classes: no prototype mixins/mutations. Use explicit inheritance/composition. Tests prefer per-instance stubs.
- Comments: brief, only for non-obvious logic.
- File size: split around ~700 LOC when it improves clarity/testability.
- Product naming: **OpenClaw** for product/docs; `openclaw` for CLI/package/path/config.
- Written English: American spelling.
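The "discriminated unions over freeform strings" and "no magic sentinels" bullets can be sketched together; the types and names below are illustrative, not actual OpenClaw code:

```typescript
// Illustrative only: a closed result type instead of freeform status strings,
// and an explicit "disabled" state instead of a sentinel like count === 0.
type LoadResult =
  | { kind: "loaded"; pluginCount: number }
  | { kind: "disabled" } // explicit state, not a magic zero
  | { kind: "error"; code: "manifest-invalid" | "io-error" };

function describeLoad(r: LoadResult): string {
  // Exhaustive switch over the discriminant: adding a new `kind`
  // becomes a compile error here instead of a silent fallthrough.
  switch (r.kind) {
    case "loaded":
      return `loaded ${r.pluginCount} plugin(s)`;
    case "disabled":
      return "plugin loading disabled";
    case "error":
      return `load failed: ${r.code}`;
  }
}
```

Compared with passing `"disabled"`/`"error:io"` strings around, the union keeps every branch type-checked and makes "absent" and "zero" distinct states.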
## Tests

- Vitest. Tests colocated as `*.test.ts`; e2e as `*.e2e.test.ts`.
- Example models in tests: `sonnet-4.6`, `gpt-5.4`.
- Clean up timers/env/globals/mocks/sockets/temp dirs/module state; `--isolate=false` must stay safe.
- Hot tests: avoid per-test `vi.resetModules()` + fresh heavy imports; prefer static or `beforeAll` imports and reset state directly.
- Measure first: `pnpm test:perf:imports <file>` for import drag; `pnpm test:perf:hotspots --limit N` for suite targets.
- Keep tests at seam depth: unit-test pure helpers/contracts; one integration smoke per boundary, not per branch.
- Mock expensive runtime seams directly: scanners, manifests, package registries, filesystem crawls, provider SDKs, network/process launch.
- Prefer injected deps over module mocks; if mocking modules, mock narrow local `*.runtime.ts` seams, not broad barrels.
- Share fixtures/builders; do not recreate temp dirs, package manifests, or plugin workspaces in every case unless state isolation needs it.
- Delete duplicate assertions when another test owns the boundary; assert only the behavior that can regress here.
- Avoid broad `importOriginal()` / broad `openclaw/plugin-sdk/*` partial mocks in hot tests. Add a narrow local `*.runtime.ts` seam and mock that.
- Use existing deps/callback/runtime injection seams before module mocks.
- Import-dominated test time is a boundary smell; shrink the import surface before adding cases.
- Replacing slow integration coverage: extract the production composition into a named helper and test that helper.
- Do not modify baseline/inventory/ignore/snapshot/expected-failure files to silence checks without explicit approval.
- Do not set test workers above 16. For memory pressure: `OPENCLAW_VITEST_MAX_WORKERS=1 pnpm test`.
- Live: `OPENCLAW_LIVE_TEST=1 pnpm test:live`; full logs with `OPENCLAW_LIVE_TEST_QUIET=0`.
- Full testing guide: `docs/help/testing.md`.
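A hedged sketch of the "prefer injected deps over module mocks" bullet; the scanner shape is hypothetical, not the real plugin-loader API:

```typescript
// Hypothetical seam: the expensive filesystem scan is an injected function,
// so a test can pass a per-instance stub instead of vi.mock-ing a module
// or creating temp dirs.
type ScanPlugins = (dir: string) => Promise<string[]>;

async function countPlugins(dir: string, scan: ScanPlugins): Promise<number> {
  const ids = await scan(dir);
  return new Set(ids).size; // dedupe defensively
}

// In a test: no module mock, no disk access, no shared state to clean up.
const stubScan: ScanPlugins = async () => ["discord", "telegram", "discord"];
```

The same shape works for registries, provider SDKs, and process launches: the production call site passes the real implementation, tests pass a stub, and `--isolate=false` stays safe because nothing module-level was patched.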
## Docs / Changelog

- Update docs when behavior/API changes. Use docs list/read_when hints.
- Docs links: see `docs/AGENTS.md`.
- Changelog: user-facing only. Pure test/internal changes usually need no entry.
- Changelog placement: append to the active version's `### Changes`/`### Fixes`; at most one contributor mention, prefer `Thanks @user`.
## Git

- Use `scripts/committer "<msg>" <file...>`; stage only intended files. It formats staged files; still run the gates.
- Commits: conventional-ish, concise/action-oriented. Group related changes.
- No manual stash/autostash unless explicitly requested. No branch/worktree changes unless requested.
- No merge commits on `main`; rebase on latest `origin/main` before push.
- User says "commit": commit your changes only. "commit all": commit everything in grouped chunks. "push": may `git pull --rebase` first.
- Do not delete/rename unexpected files; ask if it blocks. Otherwise ignore unrelated WIP.
- If a bulk PR close/reopen affects >5 PRs, ask with the exact count/scope.
- PR/issue workflows: use `$openclaw-pr-maintainer`.
- `/landpr`: use `~/.codex/prompts/landpr.md`.
## Security / Release

- Never commit real phone numbers, videos, credentials, or live config.
- Secrets: channel/provider credentials under `~/.openclaw/credentials/`; model auth profiles under `~/.openclaw/agents/<agentId>/agent/auth-profiles.json`.
- Env keys: check `~/.profile`.
- Dependency patches/overrides/vendor changes require explicit approval. `pnpm.patchedDependencies` must use exact versions.
- Carbon pins are owner-only: do not change `@buape/carbon` versions unless Shadow (`@thewilloftheshadow`, verified by `gh`) asks.
- Releases/publish/version bumps require explicit approval.
- Release docs: `docs/reference/RELEASING.md`; use `$openclaw-release-maintainer`.
- GHSA/advisories: use `$openclaw-ghsa-maintainer`.
- Beta tag/version must match, e.g. `vYYYY.M.D-beta.N` => npm `YYYY.M.D-beta.N --tag beta`.
## Apps / Platform

- Before simulator/emulator testing, check for connected real iOS/Android devices first.
- "restart iOS/Android apps" = rebuild/reinstall/relaunch, not kill/launch.
- SwiftUI: prefer Observation (`@Observable`, `@Bindable`) over new `ObservableObject`.
- mac gateway: use the app or `openclaw gateway restart/status --deep`; avoid ad-hoc tmux gateway sessions. Rebuild the mac app locally, not over SSH.
- mac logs: `./scripts/clawlog.sh`.
- Version bump touches: `package.json`, `apps/android/app/build.gradle.kts`, `apps/ios/version.json` then `pnpm ios:version:sync`, `apps/macos/.../Info.plist`, `docs/install/updating.md`. Appcast only for a Sparkle release.
- iOS Team ID: `security find-identity -p codesigning -v`; fallback `defaults read com.apple.dt.Xcode IDEProvisioningTeamIdentifiers`.
- Mobile LAN pairing: plaintext `ws://` is loopback-only by default. Trusted private-network `ws://` needs `OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1`; Tailscale/public use `wss://` or a tunnel.
- A2UI hash `src/canvas-host/a2ui/.bundle.hash`: generated; ignore unless running `pnpm canvas:a2ui:bundle`; commit separately.
## External Ops

- Remote install docs: `docs/install/exe-dev.md`, `docs/install/fly.md`, `docs/install/hetzner.md`.
- Parallels smoke: `$openclaw-parallels-smoke`; Discord roundtrip: `parallels-discord-roundtrip`.

## Misc Footguns

- Rebrand/migration/config warnings: run `openclaw doctor`.
- Never edit `node_modules`.
- Local-only `.agents` ignores: use `.git/info/exclude`, not the repo `.gitignore`.
- CLI progress: use `src/cli/progress.ts`; status tables: `src/terminal/table.ts`.
- Connection/provider additions: update all UI surfaces + docs + status/config forms.
- Provider-facing tool schemas: prefer flat string enum helpers over `Type.Union([Type.Literal(...)])`; some providers reject the generated `anyOf`. Do not treat this as a repo-wide protocol/schema ban.
- External messaging surfaces: no token-delta channel messages. Follow `docs/concepts/streaming.md`; preview/block streaming uses message edits/chunks and must preserve final/fallback delivery.
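The tool-schema footgun above can be illustrated without TypeBox itself. A sketch using plain JSON-schema objects (the helper name is hypothetical):

```typescript
// Some providers reject the `anyOf`-of-const schemas that union-of-literal
// helpers generate. A flat string enum expresses the same closed set of
// values in the shape those providers accept.
type StringEnumSchema = { type: "string"; enum: string[]; description?: string };

function stringEnum(values: string[], description?: string): StringEnumSchema {
  return description
    ? { type: "string", enum: values, description }
    : { type: "string", enum: values };
}

// Preferred shape for a provider-facing tool parameter:
const format = stringEnum(["json", "text", "markdown"], "Output format");

// vs. the shape some providers reject:
// { anyOf: [{ const: "json" }, { const: "text" }, { const: "markdown" }] }
```

Both schemas validate the same inputs; only the flat-enum form is safe across the provider matrix, which is why the rule targets provider-facing schemas and not the protocol as a whole.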
CHANGELOG.md: 3496 lines changed (diff suppressed because it is too large).
@@ -29,9 +29,9 @@ ARG OPENCLAW_NODE_BOOKWORM_SLIM_DIGEST="sha256:e8e2e91b1378f83c5b2dd15f0247f3411

FROM ${OPENCLAW_NODE_BOOKWORM_IMAGE} AS ext-deps
ARG OPENCLAW_EXTENSIONS
ARG OPENCLAW_BUNDLED_PLUGIN_DIR
COPY ${OPENCLAW_BUNDLED_PLUGIN_DIR} /tmp/${OPENCLAW_BUNDLED_PLUGIN_DIR}
# Copy package.json for opted-in extensions so pnpm resolves their deps.
RUN mkdir -p /out && \
RUN --mount=type=bind,source=${OPENCLAW_BUNDLED_PLUGIN_DIR},target=/tmp/${OPENCLAW_BUNDLED_PLUGIN_DIR},readonly \
    mkdir -p /out && \
    for ext in $OPENCLAW_EXTENSIONS; do \
      if [ -f "/tmp/${OPENCLAW_BUNDLED_PLUGIN_DIR}/$ext/package.json" ]; then \
        mkdir -p "/out/$ext" && \
VISION.md: 35 lines changed.
@@ -53,12 +53,24 @@ We prioritize secure defaults, but also expose clear knobs for trusted high-powe

OpenClaw has an extensive plugin API.
Core stays lean; optional capability should usually ship as plugins.
We are generally slimming down core while expanding what plugins can do.
If a useful feature cannot be built as a plugin yet, we welcome PRs and design discussions that extend the plugin API instead of adding one-off core behavior.

There are two broad plugin styles:

- Code plugins run OpenClaw plugin code and are appropriate for deeper runtime extension.
- Bundle-style plugins package stable external surfaces such as skills, MCP servers, and related configuration.

Prefer bundle-style plugins when they can express the capability.
They have a smaller, more stable interface and better security boundaries.
Use code plugins when the capability needs runtime hooks, providers, channels, tools, or other in-process extension points.

The preferred plugin path is npm package distribution plus local extension loading for development.
If you build a plugin, host and maintain it in your own repository.
The bar for adding optional plugins to core is intentionally high.
Plugin docs: [`docs/tools/plugin.md`](docs/tools/plugin.md)
Community plugin listing + PR bar: https://docs.openclaw.ai/plugins/community
Plugin discovery, official publisher status, provenance, and security review live in [ClawHub](https://clawhub.ai/).
OpenClaw docs should document core extension points; plugin promotion belongs in ClawHub, preferably under vetted org publishers for official plugins.

Memory is a special plugin slot where only one memory plugin can be active at a time.
Today we ship multiple memory options; over time we plan to converge on one recommended default path.

@@ -66,21 +78,16 @@ Today we ship multiple memory options; over time we plan to converge on one reco

### Skills

We still ship some bundled skills for baseline UX.
New skills should be published through [ClawHub](https://clawhub.ai/) first, not added to core by default.
Official or bundled promotion should require a clear product, security, or maintainer-ownership reason.

### MCP Support

OpenClaw supports MCP through `mcporter`: https://github.com/steipete/mcporter
OpenClaw supports MCP as both a server and a runtime integration surface.
MCP details live in [`docs/cli/mcp.md`](docs/cli/mcp.md).

This keeps MCP integration flexible and decoupled from core runtime:

- add or change MCP servers without restarting the gateway
- keep core tool/context surface lean
- reduce MCP churn impact on core stability and security

For now, we prefer this bridge model over building first-class MCP runtime into core.
If there is an MCP server or feature `mcporter` does not support yet, please open an issue there.
The project goal is pragmatic MCP support without duplicating existing agent, tool, ACPX, plugin, or ClawHub paths.

### Setup

@@ -98,11 +105,11 @@ It is widely known, fast to iterate in, and easy to read, modify, and extend.

## What We Will Not Merge (For Now)

- New core skills when they can live on [ClawHub](https://clawhub.ai/)
- Full-doc translation sets for all docs (deferred; we plan AI-generated translations later)
- Commercial service integrations that do not clearly fit the model-provider category
- Wrapper channels around already supported channels without a clear capability or security gap
- First-class MCP runtime in core when `mcporter` already provides the integration path
- MCP work that duplicates existing MCP, ACPX, plugin, or ClawHub paths without a clear product or security gap
- Agent-hierarchy frameworks (manager-of-managers / nested planner trees) as a default architecture
- Heavy orchestration layers that duplicate existing agent and tool infrastructure
appcast.xml: 289 lines changed.
@@ -2,6 +2,207 @@
|
||||
<rss xmlns:sparkle="http://www.andymatuschak.org/xml-namespaces/sparkle" version="2.0">
|
||||
<channel>
|
||||
<title>OpenClaw</title>
|
||||
<item>
|
||||
<title>2026.4.22</title>
|
||||
<pubDate>Thu, 23 Apr 2026 15:18:00 +0000</pubDate>
|
||||
<link>https://raw.githubusercontent.com/openclaw/openclaw/main/appcast.xml</link>
|
||||
<sparkle:version>2026042290</sparkle:version>
|
||||
<sparkle:shortVersionString>2026.4.22</sparkle:shortVersionString>
|
||||
<sparkle:minimumSystemVersion>15.0</sparkle:minimumSystemVersion>
|
||||
<description><![CDATA[<h2>OpenClaw 2026.4.22</h2>
|
||||
<h3>Changes</h3>
|
||||
<ul>
|
||||
<li>Providers/xAI: add image generation, text-to-speech, and speech-to-text support, including <code>grok-imagine-image</code> / <code>grok-imagine-image-pro</code>, reference-image edits, six live xAI voices, MP3/WAV/PCM/G.711 TTS formats, <code>grok-stt</code> audio transcription, and xAI realtime transcription for Voice Call streaming. (#68694) Thanks @KateWilkins.</li>
|
||||
<li>Providers/STT: add Voice Call streaming transcription for Deepgram, ElevenLabs, and Mistral, alongside the existing OpenAI and xAI realtime STT paths; ElevenLabs also gains Scribe v2 batch audio transcription for inbound media.</li>
|
||||
<li>TUI: add local embedded mode for running terminal chats without a Gateway while keeping plugin approval gates enforced. (#66767) Thanks @fuller-stack-dev.</li>
|
||||
<li>Onboarding: auto-install missing provider and channel plugins during setup so first-run configuration can complete without manual plugin recovery.</li>
|
||||
<li>OpenAI/Responses: use OpenAI's native <code>web_search</code> tool automatically for direct OpenAI Responses models when web search is enabled and no managed search provider is pinned; explicit providers such as Brave keep the managed <code>web_search</code> tool.</li>
|
||||
<li>Models/commands: add <code>/models add <provider> <modelId></code> so you can register a model from chat and use it without restarting the gateway; keep <code>/models</code> as a simple provider browser while adding clearer add guidance and copy-friendly command examples. (#70211) Thanks @Takhoffman.</li>
<li>WhatsApp: add configurable native reply quoting with <code>replyToMode</code> for WhatsApp conversations. Thanks @mcaxtr.</li>
<li>WhatsApp/groups+direct: forward per-group and per-direct <code>systemPrompt</code> config into inbound context <code>GroupSystemPrompt</code> so configured per-chat behavioral instructions are injected on every turn. Supports <code>"*"</code> wildcard fallback and account-scoped overrides under <code>channels.whatsapp.accounts.&lt;id&gt;.{groups,direct}</code>; account maps fully replace root maps (no deep merge), matching the existing <code>requireMention</code> pattern. Closes #7011. (#59553) Thanks @Bluetegu.</li>
<li>Agents/sessions: add mailbox-style <code>sessions_list</code> filters for label, agent, and search plus visibility-scoped derived title and last-message previews. (#69839) Thanks @dangoZhang.</li>
<li>Control UI/settings+chat: add a browser-local personal identity for the operator (name plus local-safe avatar), route user identity rendering through the shared chat/avatar path used by assistant and agent surfaces, and tighten Quick Settings, agent fallback chips, and narrow-screen chat layouts so personalization no longer wastes space or clips controls. (#70362) Thanks @BunsDev.</li>
<li>Gateway/diagnostics: enable payload-free stability recording by default and add a support-ready diagnostics export with sanitized logs, status, health, config, and stability snapshots for bug reports. (#70324) Thanks @gumadeiras.</li>
<li>Providers/Tencent: add the bundled Tencent Cloud provider plugin with TokenHub onboarding, docs, <code>hy3-preview</code> model catalog entries, and tiered Hy3 pricing metadata. (#68460) Thanks @JuniperSling.</li>
<li>Providers/Amazon Bedrock Mantle: add Claude Opus 4.7 through Mantle's Anthropic Messages route with provider-owned bearer-auth streaming, so the model is actually callable without treating AWS bearer tokens like Anthropic API keys. Thanks @wirjo.</li>
<li>Providers/GPT-5: move the GPT-5 prompt overlay into the shared provider runtime so compatible GPT-5 models receive the same behavior and heartbeat guidance through OpenAI, OpenRouter, OpenCode, Codex, and other GPT providers; add <code>agents.defaults.promptOverlays.gpt5.personality</code> as the global friendly-style toggle while keeping the OpenAI plugin setting as a fallback.</li>
<li>Providers/OpenAI Codex: remove the Codex CLI auth import path from onboarding and provider discovery so OpenClaw no longer copies <code>~/.codex</code> OAuth material into agent auth stores; use browser login or device pairing instead. (#70390) Thanks @pashpashpash.</li>
<li>CLI/Claude: default <code>claude-cli</code> runs to warm stdio sessions, including custom configs that omit transport fields, and resume from the stored Claude session after Gateway restarts or idle exits. (#69679) Thanks @obviyus.</li>
<li>Pi/models: update the bundled pi packages to <code>0.68.1</code> and let the OpenCode Go catalog come from pi instead of plugin-maintained model aliases, adding the refreshed <code>opencode-go/kimi-k2.6</code>, Qwen, GLM, MiMo, and MiniMax entries.</li>
<li>Tokenjuice: add bundled native OpenClaw support for tokenjuice as an opt-in plugin that compacts noisy <code>exec</code> and <code>bash</code> tool results in Pi embedded runs. (#69946) Thanks @vincentkoc.</li>
<li>ACPX: add an explicit <code>openClawToolsMcpBridge</code> option that injects a core OpenClaw MCP server for selected built-in tools, starting with <code>cron</code>.</li>
<li>CLI/doctor plugins: lazy-load doctor plugin paths and prefer installed plugin <code>dist/*</code> runtime entries over source-adjacent JavaScript fallbacks, reducing the measured <code>doctor --non-interactive</code> runtime by about 74% while keeping cold doctor startup on built plugin artifacts. (#69840) Thanks @gumadeiras.</li>
<li>CLI/debugging: add an opt-in temporary debug timing helper for local CLI performance investigations, with readable stderr output, JSONL capture, and docs for removing probes before landing fixes. (#70469) Thanks @shakkernerd.</li>
<li>Docs/i18n: add Thai translation support for the docs site.</li>
<li>Providers/OpenAI-compatible: mark known local backends such as vLLM, SGLang, llama.cpp, LM Studio, LocalAI, Jan, TabbyAPI, and text-generation-webui as streaming-usage compatible, so their token accounting no longer degrades to unknown/stale totals. (#68711) Thanks @gaineyllc.</li>
<li>Providers/OpenAI-compatible: recover streamed token usage from llama.cpp-style <code>timings.prompt_n</code> / <code>timings.predicted_n</code> metadata and sanitize usage counts before accumulation, fixing unknown or stale totals when compatible servers do not emit an OpenAI-shaped <code>usage</code> object. (#41056) Thanks @xaeon2026.</li>
<li>Plugins/startup: prefer native Jiti loading for built bundled plugin dist modules on supported runtimes, cutting measured bundled plugin load time by 82-90% while keeping source TypeScript on the transform path. (#69925) Thanks @aauren.</li>
<li>Plugin SDK/STT: share realtime transcription WebSocket transport and multipart batch transcription form helpers across bundled STT providers, reducing provider plugin boilerplate while preserving proxy capture, reconnects, audio queueing, close flushing, upload filename normalization, and ready handshakes.</li>
<li>Plugin SDK/Pi embedded runs: add a bundled-plugin embedded extension factory seam so native plugins can extend Pi embedded runs with async runtime hooks such as <code>tool_result</code> handling instead of falling back to the older synchronous persistence path. (#69946) Thanks @vincentkoc.</li>
<li>Codex harness/hooks: route native Codex app-server turns through <code>before_prompt_build</code> and emit <code>before_compaction</code> / <code>after_compaction</code> for native compaction items so prompt and compaction hooks stop drifting from Pi. Thanks @vincentkoc.</li>
<li>Codex harness/plugins: add a bundled-plugin Codex app-server extension seam for async <code>tool_result</code> middleware, fire <code>after_tool_call</code> for Codex tool runs, and route mirrored Codex transcript writes through <code>before_message_write</code> so tool integrations stop diverging from Pi. Thanks @vincentkoc.</li>
<li>Codex harness/hooks: fire <code>llm_input</code>, <code>llm_output</code>, and <code>agent_end</code> for native Codex app-server turns so lifecycle hooks stop drifting from Pi. Thanks @vincentkoc.</li>
<li>QA/Telegram: record per-scenario reply RTT in the live Telegram QA report and summary, starting with the canary response. (#70550) Thanks @obviyus.</li>
<li>Status: add an explicit <code>Runner:</code> field to <code>/status</code> so sessions now report whether they are running on embedded Pi, a CLI-backed provider, or an ACP harness agent/backend such as <code>codex (acp/acpx)</code> or <code>gemini (acp/acpx)</code>. (#70595)</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>Thinking defaults/status: raise the implicit default thinking level for reasoning-capable models from legacy <code>off</code>/<code>low</code> fallback behavior to a safe provider-supported <code>medium</code> equivalent when no explicit config default is set, preserve configured-model reasoning metadata when runtime catalog loading is empty, and make <code>/status</code> report the same resolved default as runtime.</li>
<li>Gateway/model pricing: fetch OpenRouter and LiteLLM pricing asynchronously at startup and extend catalog fetch timeouts to 30 seconds, reducing noisy timeout warnings during slow upstream responses.</li>
<li>Agents/sessions: keep daily reset and idle-maintenance bookkeeping from bumping session activity or pruning freshly active routes, so active conversations no longer look newer or disappear for maintenance-only updates.</li>
<li>Plugins/install: add newly installed plugin ids to an existing <code>plugins.allow</code> list before enabling them, so allowlisted configs load installed plugins after restart.</li>
<li>Status: show <code>Fast</code> in <code>/status</code> when fast mode is enabled, including config/default-derived fast mode, and omit it when disabled.</li>
<li>OpenAI/image generation: detect Azure OpenAI-style image endpoints, use Azure <code>api-key</code> auth plus deployment-scoped image URLs, honor <code>AZURE_OPENAI_API_VERSION</code>, and document the Azure setup path so image generation and edits work against Azure-hosted OpenAI resources. (#70570) Thanks @zhanggpcsu.</li>
<li>Telegram/forum topics: cache recovered forum metadata with bounded expiry so supergroup updates no longer need repeated <code>getChat</code> lookups before topic routing.</li>
<li>Onboarding/WeCom: show the official WeCom channel plugin with its native Enterprise WeChat display name and blurb in the external channel catalog.</li>
<li>Models/auth: merge provider-owned default-model additions from <code>openclaw models auth login</code> instead of replacing <code>agents.defaults.models</code>, so re-authenticating an OAuth provider such as OpenAI Codex no longer wipes other providers' aliases and per-model params. Migrations that must rename keys (Anthropic -> Claude CLI) opt in with <code>replaceDefaultModels</code>. Fixes #69414. (#70435) Thanks @neeravmakwana.</li>
<li>Media understanding/audio: prefer configured or key-backed STT providers before auto-detected local Whisper CLIs, so installed local transcription tools no longer shadow API providers such as Groq/OpenAI in <code>tools.media.audio</code> auto mode. Fixes #68727.</li>
<li>Providers/OpenAI: lock the auth picker wording for OpenAI API key, Codex browser login, and Codex device pairing so the setup choices no longer imply a mixed Codex/API-key auth path. (#67848) Thanks @tmlxrd.</li>
<li>Agents/BTW: route <code>/btw</code> side questions through provider stream registration with the session workspace, so Ollama provider URL construction and workspace-scoped hooks apply correctly. Fixes #68336. (#70413) Thanks @suboss87.</li>
<li>Agents/sessions: make session transcript write locks non-reentrant by default, so same-process transcript writers contend unless a helper explicitly opts into nested lock ownership.</li>
<li>ACPX/probe: expose an optional <code>probeAgent</code> plugin config field so the embedded ACP runtime health probe can target a configured agent (for example <code>opencode</code> or <code>claude</code>) instead of hardcoding <code>codex</code>, and stop marking the entire ACP runtime backend unavailable when the default probe agent is simply not installed or not authenticated. (#68409) Thanks @lyfuci.</li>
<li>Memory search: use sqlite-vec KNN for vector recall while preserving full post-filter result limits in multi-model indexes. Fixes #69666. (#69680) Thanks @aalekh-sarvam.</li>
<li>Providers/OpenAI Codex: stop stale per-agent <code>openai-codex:default</code> OAuth profiles from shadowing a newer main-agent identity-scoped profile, and let <code>openclaw doctor</code> offer the matching cleanup. (#70393) Thanks @pashpashpash.</li>
<li>ACPX: route OpenClaw ACP bridge commands through the MCP-free runtime path even when the command is wrapped with <code>env</code>, has bridge flags, or is resumed from persisted session state, so documented <code>acpx openclaw</code> setups no longer fail on per-session MCP injection. (#68741) Thanks @alexlomt.</li>
<li>Codex harness: route Codex-tagged MCP tool approval elicitations through OpenClaw plugin approvals, including current empty-schema app-server requests, while leaving generic user-input prompts fail-closed. (#68807) Thanks @kesslerio.</li>
<li>WhatsApp/outbound: hold an in-memory active-delivery claim while a live outbound send is in flight, so a concurrent reconnect drain no longer re-drives the same pending queue entry and duplicates cron sends 7-12x after the 30-minute inbound-silence watchdog fires mid-delivery. Crash-replay of fresh queue entries left behind by a dead process is preserved because the claim is intentionally process-local. Fixes #70386. (#70428) Thanks @neeravmakwana.</li>
<li>Matrix/commands: keep Matrix DM allowlist state out of room control-command authorization, so trusted DM senders do not accidentally gain room-command access.</li>
<li>Providers/SDK retry: cap long <code>Retry-After</code> sleeps in Stainless-based Anthropic/OpenAI model SDKs so 60s+ retry windows surface immediately for OpenClaw failover instead of blocking the run. (#68474) Thanks @jetd1.</li>
<li>Agents/TTS: preserve spoken text in TTS tool results while defusing reply directives in transcript content, so future turns remember voice replies without treating spoken <code>MEDIA:</code> or voice tags as delivery metadata. (#68869) Thanks @zqchris.</li>
<li>Providers/OpenAI: harden Voice Call realtime transcription against OpenAI Realtime session-update drift, forward language and prompt hints, and add live coverage for realtime STT.</li>
<li>Agents/Pi embedded runs: suppress the "⚠️ Agent couldn't generate a response" warning when the assistant already delivered user-visible content through a messaging tool and the turn ended cleanly (<code>stopReason=stop</code>). Real failure modes (tool errors, provider <code>stopReason=error</code>, interrupted tool use) still surface the existing "verify before retrying" warning. Fixes #70396. (#70425) Thanks @neeravmakwana.</li>
<li>Gateway/Linux: wrap gateway-managed supervisor, PTY, MCP stdio, and browser child processes in a tiny <code>/bin/sh</code> shim that raises the child's own <code>oom_score_adj</code> on Linux, so under cgroup memory pressure the kernel prefers transient workers over the long-lived gateway. Opt out with <code>OPENCLAW_CHILD_OOM_SCORE_ADJ=0</code>. Fixes #70404. (#70419) Thanks @neeravmakwana.</li>
<li>Providers/Moonshot: stop strict-sanitizing Kimi's native tool_call IDs (shaped like <code>functions.&lt;name&gt;:&lt;index&gt;</code>) on the OpenAI-compatible transport, so multi-turn agentic flows through Kimi K2.6 no longer break after 2-3 tool-calling rounds when the serving layer fails to match mangled IDs against the original tool definitions. Adds a <code>sanitizeToolCallIds</code> opt-out to the shared <code>openai-compatible</code> replay family helper and wires Moonshot to it. Fixes #62319. (#70030) Thanks @LeoDu0314.</li>
<li>Dependencies/security: override transitive <code>uuid</code> to <code>14.0.0</code>, clearing the runtime advisory across dependencies.</li>
<li>Codex harness: ignore dynamic tool descriptions when deciding whether to reuse a native app-server thread while still fingerprinting tool schemas, so channel-specific copy changes no longer reset otherwise compatible Codex conversations. (#69976) Thanks @chen-zhang-cs-code.</li>
<li>Codex harness: expose the Codex app-server model catalog in <code>models list/status</code>, avoid startup hangs from app-server discovery timeouts, and accept current Codex turn-completion notifications so Docker live gateway turns finish reliably.</li>
<li>Codex harness: drop invalid legacy app-server <code>serviceTier</code> values such as <code>"priority"</code> before native thread and turn requests, while keeping supported Codex tiers limited to <code>"fast"</code> and <code>"flex"</code>. Fixes #64815.</li>
<li>Codex harness: show bounded, sanitized permission target samples in app-server approval prompts, so native permission requests keep their specific hosts, roots, and paths visible without leaking home usernames or URL credentials. (#70340) Thanks @Lucenx9.</li>
<li>Docs/Codex harness: narrow native compaction docs to the current start/completion signals, without promising a readable summary or kept-entry audit list yet. (#69612) Thanks @91wan.</li>
<li>Providers/Amazon Bedrock: use known context-window metadata for discovered models while keeping the unknown-model fallback conservative, so compaction and overflow handling improve for newer Bedrock models without overstating unlisted model limits. Thanks @wirjo.</li>
<li>Providers/Amazon Bedrock Mantle: refresh IAM-backed bearer tokens at runtime instead of baking discovery-time tokens into provider config, so long-lived Mantle sessions keep working after the initial token ages out. Thanks @wirjo.</li>
<li>Config/includes: write through single-file top-level includes for isolated OpenClaw-owned mutations, so <code>plugins install</code> and <code>plugins update</code> update an included <code>plugins.json5</code> file instead of flattening modular <code>$include</code> configs. Fixes #41050 and #66048.</li>
<li>Config/reload: plan gateway reloads from source-authored config instead of runtime-materialized snapshots, so plugin update writes no longer trigger false restarts from derived provider/plugin config paths. Fixes #68732.</li>
<li>Plugins/update: skip npm plugin reinstall/config rewrites when the installed version and recorded artifact identity already match the registry target, let bare npm package names resolve back to tracked install records, and point already-installed <code>plugins install</code> attempts at <code>plugins update</code> / <code>--force</code> instead of a hook-pack fallback. Fixes #46955, #67957, and #68073.</li>
<li>Agents/MCP: keep <code>mcp.servers</code> and bundle MCP tools available in Pi embedded <code>coding</code> and <code>messaging</code> sessions while preserving <code>minimal</code> profile and <code>tools.deny: ["bundle-mcp"]</code> opt-out behavior. Fixes #68875 and #68818.</li>
<li>Plugins/startup: tolerate transient bundled-channel catalog/metadata drift while auto-enabling configured plugins, so CLI and gateway startup no longer crash when a channel id is known but its display metadata is unavailable.</li>
<li>CLI/Claude: report CLI-backed reply runs as streaming while Claude/Codex CLI turns are still in flight, so WebChat keeps visible response state until the backend finishes. Fixes #70125.</li>
<li>Slack/streaming: fall back to normal Slack replies for Slack Connect streams rejected before the SDK flushes its local buffer, so short replies no longer disappear or report success before Slack acknowledges delivery. Fixes #70295. (#70370) Thanks @mvanhorn.</li>
<li>Codex harness: rotate the shared app-server websocket client when the configured bearer token changes, so auth-token refreshes reconnect with the new <code>Authorization</code> header instead of reusing a stale socket. (#70328) Thanks @Lucenx9.</li>
<li>Channels/sandbox: derive runtime policy keys for external direct messages that share the main conversation, so sandbox/tool policy no longer treats channel-originated DMs as local main-session runs.</li>
<li>Config/models: merge provider-scoped model allowlist updates and protect model/provider map writes from accidental full replacement, adding <code>config set --merge</code> for additive updates and <code>--replace</code> for intentional clobbers. Fixes #65920, #68392, and #68653.</li>
<li>Agents/Pi auth: preserve AWS SDK-authenticated Bedrock runs for IMDS and task-role setups, clear stale refresh timers on sentinel fallback, and log unexpected runtime-auth prep failures instead of silently leaving the provider unauthenticated. Thanks @wirjo.</li>
<li>Config/gateway: restore last-known-good config on critical clobber signatures such as missing metadata, missing <code>gateway.mode</code>, or sharp size drops, preventing gateway crash loops when a valid backup exists. Fixes #70336.</li>
<li>Config/gateway: recover configs accidentally prefixed with non-JSON output during gateway startup or <code>openclaw doctor --fix</code>, preserving the clobbered file as a backup while leaving normal config reads read-only.</li>
<li>Agents/GitHub Copilot: normalize connection-bound Responses item IDs in the Copilot provider wrapper so replayed histories no longer fail after the upstream connection changes. (#69362) Thanks @Menci.</li>
<li>Pi embedded runs: pass real built-in tools into Pi session creation and then narrow active tool names after custom tool registration, so the runner and compaction paths compile cleanly and keep OpenClaw-managed custom tool allowlists without feeding string arrays into <code>createAgentSession</code>. Thanks @vincentkoc.</li>
<li>Agents/OpenAI websocket: route native OpenAI websocket metadata and session-header decisions through the shared endpoint classifier so local mocks and custom <code>models.providers.openai.baseUrl</code> endpoints stay out of the native OpenAI path consistently across embedded-runner and websocket transport code. Thanks @vincentkoc.</li>
<li>Cron/MCP: retire bundled MCP runtimes through one shared cleanup path for isolated cron run ends, persistent cron session rollover, and direct cron <code>deleteAfterRun</code> fallback cleanup. Fixes #69145, #68623, and #68827.</li>
<li>MCP/gateway: tear down stdio MCP process trees on transport close and dispose bundled MCP runtimes during session delete/reset, preventing orphaned wrapper/server processes from accumulating. Fixes #68809 and #69465.</li>
<li>Agents/MCP: retire bundled MCP runtimes after completed one-shot subagent cleanup and nested <code>sessions_send</code> steps, while keeping persistent subagent sessions warm.</li>
<li>Config: render validation warnings with real line breaks instead of a literal <code>\n</code> sequence in CLI/audit output. Fixes #70140.</li>
<li>Cron/doctor: repair malformed persisted cron job IDs through <code>openclaw doctor</code>, including legacy <code>jobId</code>, non-string <code>id</code>, and missing <code>id</code> rows, so <code>cron list</code> no longer needs display-layer coercion for corrupt store data. Fixes #70128.</li>
<li>Discord: normalize prefixed channel targets only at the thread-binding API boundary, so <code>sessions_spawn({ runtime: "acp", thread: true })</code> can create child threads from Discord channels without breaking current-channel ACP bindings. (#68034) Thanks @Zetarcos.</li>
<li>Discord: harden inbound thread metadata handling against partial Carbon channel getters, so non-command thread messages and queued jobs no longer crash when <code>name</code>, <code>parentId</code>, <code>parent</code>, or <code>ownerId</code> requires fetched raw data.</li>
<li>Discord: let <code>message</code> tool reactions resolve <code>user:&lt;id&gt;</code> DM targets and preserve <code>channels.discord.guilds.&lt;guild&gt;.channels.&lt;channel&gt;.requireMention: false</code> during reply-stage activation fallback. Fixes #70165 and #69441.</li>
<li>Plugins/startup: pre-normalize and cache Jiti alias maps before creating plugin loaders, so module-scoped loader filenames do not reintroduce per-plugin alias-normalization startup cost. Fixes #70186.</li>
<li>ACP/Codex: run the bundled Codex ACP harness with an isolated <code>CODEX_HOME</code> and avoid writing incomplete ChatGPT auth bridge files, so Codex ACP sessions no longer clobber the user's real Codex CLI auth. Fixes #70234. Thanks @Lonobers88.</li>
<li>Gateway/client: keep long-running RPCs such as ACP <code>agent.wait</code> calls in charge of their own timeout instead of closing the websocket on a missed app-level tick while work is still pending.</li>
<li>Telegram/webhooks: lower the grammY webhook callback timeout to 5s so Telegram gets an early 200 response instead of retrying long-running updates as read timeouts. (#70146) Thanks @friday-james.</li>
<li>Telegram/polling: rebuild the polling HTTP transport after <code>getUpdates</code> 409 conflicts, so retries use a fresh TCP connection instead of looping on a Telegram-terminated keep-alive socket. (#69873) Thanks @hclsys.</li>
<li>Media delivery: strip persisted base64 audio payloads from webchat history, resolve stored <code>media://inbound/*</code> attachments before local-root checks, suppress duplicate Telegram voice/audio sends when TTS emits the same media twice, and support custom image-model IDs that already include their provider prefix.</li>
<li>Slack/files: resolve <code>downloadFile</code> bot tokens from the runtime config when callers provide <code>cfg</code> without an explicit token or prebuilt client, preserving cfg-only file downloads outside the action runtime path. (#70160) Thanks @martingarramon.</li>
<li>Slack/HTTP: dispatch registered Request URL webhooks through the same handler registry used by Slack monitor setup, so HTTP-mode Slack events no longer 404 after successful route registration. (#70275) Thanks @FroeMic.</li>
<li>Slack/runtime bindings: route focused Slack thread replies through their bound ACP session instead of preparing replies against the default agent shell. Fixes #67739. Thanks @Frankla20.</li>
<li>CLI/Claude: keep stored Claude CLI sessions through OAuth refresh-token rotation by keying auth epochs on stable account identity instead of mutable OAuth token material. (#70452) Thanks @obviyus.</li>
<li>CLI/Claude: verify stored Claude CLI session ids have a readable project transcript before resuming, clearing phantom bindings with <code>reason=transcript-missing</code> instead of silently starting fresh under <code>--resume</code>. Fixes #70177.</li>
<li>CLI sessions: persist CLI session clearing through the atomic session-store merge path, so expired Claude/Codex CLI bindings are actually removed before retrying without the stale session id. (#70298) Thanks @HFConsultant.</li>
<li>ACP/sessions_spawn: honor explicit <code>model</code> overrides for ACP child sessions instead of silently falling back to the target agent default model. (#70210) Thanks @felix-miao.</li>
<li>Diffs/viewer: re-read remote viewer access policy from live runtime config on each request, so toggling <code>plugins.entries.diffs.config.security.allowRemoteViewer</code> closes proxied viewer access immediately instead of waiting for a restart. Thanks @vincentkoc.</li>
<li>Diffs/tooling: re-read <code>viewerBaseUrl</code>, presentation defaults, and viewer access policy from live runtime config, and fail closed when the live <code>diffs</code> plugin entry disappears instead of reviving startup viewer settings. Thanks @vincentkoc.</li>
<li>Memory/LanceDB: stop resurrecting removed live <code>memory-lancedb</code> hook config from startup snapshots, so deleting or disabling the plugin entry shuts off auto-recall and auto-capture without a restart. Thanks @vincentkoc.</li>
<li>Memory/LanceDB: keep auto-recall and auto-capture hooks wired when those settings start disabled, so turning them on in live config starts recall and capture without waiting for a restart. Thanks @vincentkoc.</li>
<li>Skill Workshop: keep the tool plus <code>before_prompt_build</code> / <code>agent_end</code> hooks wired while the plugin is disabled at startup, so turning the plugin back on in live config starts guidance and capture without waiting for a restart. Thanks @vincentkoc.</li>
<li>Active Memory: stop reviving removed live <code>active-memory</code> config from startup snapshots, so removing the plugin entry turns the hook off immediately instead of waiting for a restart. Thanks @vincentkoc.</li>
<li>GitHub Copilot: re-read plugin discovery config from the live runtime snapshot, so toggling <code>plugins.entries.github-copilot.config.discovery.enabled</code> takes effect without a restart. Thanks @vincentkoc.</li>
<li>Ollama: re-read plugin discovery config from the live runtime snapshot, so toggling <code>plugins.entries.ollama.config.discovery.enabled</code> takes effect without a restart. Thanks @vincentkoc.</li>
<li>OpenAI: re-read the plugin prompt-overlay personality from live runtime config, so GPT-5 system prompt contributions update without a restart when <code>plugins.entries.openai.config.personality</code> changes. Thanks @vincentkoc.</li>
<li>Amazon Bedrock: re-read live discovery and guardrail plugin config, so toggling <code>plugins.entries.amazon-bedrock.config.discovery</code> or <code>plugins.entries.amazon-bedrock.config.guardrail</code> takes effect without a restart. Thanks @vincentkoc.</li>
<li>Codex: re-read the plugin discovery config from the live runtime snapshot, so toggling <code>plugins.entries.codex.config.discovery</code> takes effect without a restart. Thanks @vincentkoc.</li>
<li>Agents/subagents: drop bare <code>NO_REPLY</code> from the parent turn when the session still has pending spawned children, so direct-conversation surfaces such as Telegram DMs no longer rewrite the sentinel into visible fallback chatter while waiting for the child completion event. (#69942) Thanks @neeravmakwana.</li>
<li>Plugins/install: keep bundled plugin dependencies off npm install while repairing them when plugins activate from a packaged install, including Feishu/Lark, Browser, and direct bundled channel setup-entry loads.</li>
<li>CLI/channels: skip and cache bundled channel plugin, setup, and secrets load failures during read-only discovery, so one broken unused bundled channel cannot crash <code>openclaw status</code> or bootstrap secret scans.</li>
<li>Memory/LanceDB: retry initialization after a failed LanceDB load and report unsupported Intel macOS native runtime clearly instead of caching the failure or repeatedly attempting an install that cannot work.</li>
<li>CLI/Claude: hash only static extra system prompt parts when deciding whether to reuse a CLI session, so per-message inbound metadata no longer resets Claude CLI conversations on every turn. (#70122) Thanks @zijunl.</li>
<li>Hooks/Slack: standardize shared message hook routing fields (<code>threadId</code> / <code>replyToId</code>) and stop Slack outbound delivery from re-running <code>message_sending</code> inside the channel adapter, so plugins like thread-ownership make one outbound routing decision per reply. Thanks @vincentkoc.</li>
<li>Auto-reply/media: share one run-scoped reply media context between streamed block delivery and final payload filtering, so a local <code>MEDIA:</code> attachment is staged once and duplicate media sends are suppressed reliably. (#68111) Thanks @ayeshakhalid192007-dev.</li>
<li>Plugins/gateway hooks: expose startup config, workspace dir, and a live cron getter on the typed <code>gateway_start</code> hook, and move memory-core managed dreaming off the internal <code>gateway:startup</code> bridge so cron reconciliation stays on the public plugin hook path. Thanks @vincentkoc.</li>
<li>Plugins/config: read plugin trust decisions from the source config snapshot when a resolved runtime snapshot is active, so <code>plugins.allow</code> remains enforced and <code>doctor</code>/gateway startup no longer warn that the allowlist is empty when it is configured. Fixes #70161. Also fixes #70141.</li>
<li>Agents/openai-completions: enable malformed streamed tool-call argument repair for self-hosted OpenAI-compatible backends such as Kimi/SGLang, so fragmented tool-call arguments no longer reach tools as empty or unusable objects. Fixes #69672. (#70294) Thanks @MonkeyLeeT.</li>
<li>Gateway/restart: preserve group and channel chat context when resuming an agent turn after a Gateway restart, so continuation replies keep the same prompt, routing, and tool-status behavior as the original conversation.</li>
<li>Gateway/pairing: shared-secret loopback CLI clients now silently auto-approve <code>metadata-upgrade</code> pairing (platform / device family refresh) instead of being disconnected with <code>1008 pairing required</code>. This matches the scope-upgrade and role-upgrade behavior added in #69431 and unblocks non-interactive CLI automation when a paired-device record has a stale platform string (e.g. device key replicated across hosts, install migrated between OSes, or platform-string format changed between OpenClaw versions). Browser / Control-UI clients keep the existing approval-required flow for metadata changes.</li>
<li>Gateway/pairing: treat any forwarded-header evidence (<code>Forwarded</code>, <code>X-Forwarded-*</code>, or <code>X-Real-IP</code>) as proxied WebSocket traffic before pairing locality checks, so reverse-proxy topologies cannot use the loopback shared-secret helper auto-pairing path.</li>
<li>Agents/OpenAI: treat exact <code>NO_REPLY</code> assistant output as a deliberate silent reply in embedded runs, so GPT-5.4 turns with signed reasoning plus a silent final no longer surface a false incomplete-turn error.</li>
<li>Auto-reply/streaming: preserve streamed reply directives through chunk boundaries and phase-aware <code>final_answer</code> delivery, so split <code>MEDIA:&lt;path&gt;</code> lines, voice tags, and reply targets reach channel delivery instead of leaking as text or being dropped. (#70243) Thanks @zqchris.</li>
<li>Anthropic/Claude Opus 4.7: normalize Opus 4.7 and <code>claude-cli</code> Opus 4.7 variants to a 1M context window in resolved runtime metadata and active-agent status/context reporting, so they no longer inherit the stale 200k fallback. Thanks @BunsDev.</li>
<li>Gateway/pairing webchat: render <code>/pair qr</code> replies as structured media instead of raw markdown text, preserve inline reply threading and silent-control handling on media replies, avoid persisting sensitive QR images into transcript history, and keep local webchat media embedding behind internal-only trust markers. (#70047) Thanks @BunsDev.</li>
<li>Codex harness: default app-server runs to unchained local execution, so OpenAI heartbeats can use network and shell tools without stalling behind native Codex approvals or the workspace-write sandbox.</li>
<li>Codex harness: fail closed for unknown native app-server approval methods instead of routing unsupported future approval shapes through OpenClaw approval grants. (#70356) Thanks @Lucenx9.</li>
<li>Codex harness: apply the GPT-5 behavior and heartbeat prompt overlay to native Codex app-server runs, so <code>codex/gpt-5.x</code> sessions get the same follow-through, tool-use, and proactive heartbeat guidance as OpenAI GPT-5 runs.</li>
<li>Codex harness: add an explicit Guardian mode for Codex app-server approvals, plus a Docker live probe for approved and ask-back Guardian decisions, while keeping default app-server runs unchained for unattended local heartbeats. The legacy <code>OPENCLAW_CODEX_APP_SERVER_GUARDIAN</code> shortcut is removed; use plugin config <code>appServer.mode: "guardian"</code> or <code>OPENCLAW_CODEX_APP_SERVER_MODE=guardian</code>. Thanks @pashpashpash.</li>
<li>OpenAI/Responses: keep embedded OpenAI Responses runs on HTTP when <code>models.providers.openai.baseUrl</code> points at a local mock or other non-public endpoint, so mocked/custom endpoints no longer drift onto the hardcoded public websocket transport. (#69815) Thanks @vincentkoc.</li>
<li>Channels/config: require resolved runtime config on channel send/action/client helpers and block runtime helper <code>loadConfig()</code> calls, so SecretRefs are resolved at startup/boundaries instead of being re-read during sends.</li>
<li>Discord: pass resolved runtime config through guild and moderation action helpers, so thread-originated Discord commands can run channel, member, role, and guild actions without falling back to runtime config reads. (#70215) Thanks @szponeczek.</li>
<li>CLI/channels: preserve bundled setup promotion metadata when a loaded partial channel plugin omits it, so adding a non-default account still moves legacy single-account fields such as Telegram <code>streaming</code> into <code>accounts.default</code>.</li>
<li>Telegram: keep the sent-message ownership cache isolated per configured session store, so own-message reaction filtering remains correct with custom <code>session.store</code> paths.</li>
<li>Security/update: fail closed when exact pinned npm plugin or hook-pack updates detect integrity drift, and expose aborted plugin drift details in <code>openclaw update --json</code>.</li>
<li>Ollama: forward OpenClaw thinking control to native <code>/api/chat</code> requests as top-level <code>think</code>, so <code>/think off</code> and <code>openclaw agent --thinking off</code> suppress thinking on models such as qwen3 instead of idling until the watchdog fires. Fixes #69902. (#69967) Thanks @WZH8898.</li>
<li>Memory-core/dreaming: suppress the startup-only managed dreaming cron unavailable warning when the cron service is still attaching, while preserving the runtime warning if cron genuinely remains unavailable. Fixes #69939. (#69941) Thanks @Sanjays2402.</li>
<li>Mattermost: suppress reasoning-only payloads even when they arrive as blockquoted <code>> Reasoning:</code> text, preventing <code>/reasoning on</code> from leaking thinking into channel posts. (#69927) Thanks @lawrence3699.</li>
<li>Discord: read <code>channel.parentId</code> through a safe accessor in the slash-command, reaction, and model-picker paths so partial <code>GuildThreadChannel</code> prototype getters no longer throw <code>Cannot access rawData on partial Channel</code> when commands like <code>/new</code> run from inside a thread. Fixes #69861. (#69908) Thanks @neeravmakwana.</li>
<li>Discord: use safe channel name and parent accessors across voice command authorization, so <code>/vc</code> commands from partial Discord thread channels no longer crash on Carbon rawData getters. (#70199) Thanks @hanamizuki.</li>
<li>Discord: make auto-thread parent transcript inheritance opt-in via <code>channels.discord.thread.inheritParent</code>, keeping newly created Discord thread sessions isolated by default while preserving explicit inheritance for configured accounts. Fixes #69907. (#69986) Thanks @Blahdude.</li>
<li>Browser/Chrome MCP: reset cached existing-session control sessions when a <code>navigate_page</code> call times out, so one stuck navigation no longer poisons the browser profile until a gateway restart. (#69733) Thanks @ayeshakhalid192007-dev.</li>
<li>Browser/Chrome MCP: propagate click timeouts and abort signals to existing-session actions so a stuck click fails fast and reconnects instead of poisoning the browser tool until gateway restart. (#63524) Thanks @dongseok0.</li>
<li>Amazon Bedrock/prompt caching: resolve opaque application inference profile targets before injecting Bedrock cache points, require every routed target to support explicit cache points, and retry transient profile lookups instead of caching a false negative for the rest of the process. (#69953) Thanks @anirudhmarc and @vincentkoc.</li>
<li>Gateway/channel health: base stale-socket recovery on provider-proven transport activity instead of inbound app-event freshness, preventing quiet Slack, Discord, Telegram, Matrix, and local-style channels from being restarted solely because no user traffic arrived. (#69833) Thanks @bek91.</li>
<li>OpenCode Go: canonicalize stale bundled <code>opencode-go</code> base URLs from <code>/go</code> or <code>/go/v1</code> to <code>/zen/go</code> or <code>/zen/go/v1</code>, so older generated model metadata stops hitting the 404 HTML endpoint. (#69898)</li>
<li>CLI/channels: honor <code>channels.&lt;id&gt;.enabled=false</code> as a hard read-only presence opt-out, so env vars, manifest env vars, or stale persisted auth state no longer make disabled channel plugins appear in status, doctor, or setup-only discovery.</li>
<li>Channels/preview streaming: centralize draft-preview finalization so Slack, Discord, Mattermost, and Matrix no longer flush temporary preview messages for media/error finals, and preserve first-reply threading for normal fallback delivery.</li>
<li>Discord: keep slash command follow-up chunks ephemeral when the command is configured for ephemeral replies, so long <code>/status</code> output no longer leaks fallback model or runtime details into the public channel. (#69869) Thanks @gumadeiras.</li>
<li>Gateway/session history: re-check current auth and <code>chat.history</code> scope before later SSE keepalives and transcript updates, so active session-history streams close before delivering post-revocation events.</li>
<li>Plugins/discovery: reject package plugin source entries that escape the package directory before explicit runtime entries or inferred built JavaScript peers can be used. (#69868) Thanks @gumadeiras.</li>
<li>CLI/channels: resolve channel presence through a shared policy that keeps ambient env vars and stale persisted auth from surfacing disabled bundled plugins in status, doctor, security audit, and cron delivery validation unless the channel or plugin is effectively enabled or explicitly configured. (#69862) Thanks @gumadeiras.</li>
<li>Doctor/plugins: hydrate legacy partial interactive handler state before plugin reload clears dedupe caches, so <code>openclaw doctor</code> and post-update doctor runs no longer crash with <code>Cannot read properties of undefined (reading 'clear')</code>. (#70135) Thanks @ngutman.</li>
<li>Control UI/config: preserve intentionally empty raw config snapshots when clearing pending updates so reset restores the original bytes instead of synthesizing JSON for blank config files. (#68178) Thanks @BunsDev.</li>
<li>memory-core/dreaming: surface a <code>Dreaming status: blocked</code> line in <code>openclaw memory status</code> when dreaming is enabled but the heartbeat that drives the managed cron is not firing for the default agent, and add a Troubleshooting section to the dreaming docs covering the two common causes (per-agent <code>heartbeat</code> blocks excluding <code>main</code>, and <code>heartbeat.every</code> set to <code>0</code>/empty/invalid), so the silent failure described in #69843 becomes legible on the status surface.</li>
<li>Cron/run-log: report generic <code>message</code> tool sends under the resolved delivery channel when they match the cron target, while preserving account-specific mismatch checks for delivery traces. (#69940) Thanks @davehappyminion.</li>
<li>Doctor/channels: merge configured-channel doctor hooks across read-only, loaded, setup, and runtime plugin discovery so partial adapters no longer hide runtime-only compatibility repair or allowlist warnings, preserve disabled-channel opt-outs, and ignore malformed hook values before they can mask valid fallbacks. (#69919) Thanks @gumadeiras.</li>
<li>Models/CLI: show bundled provider-owned static catalog rows in <code>models list --all</code> before auth is configured, including Kimi K2.6 rows for Moonshot, OpenRouter, and Vercel AI Gateway, while keeping local-only and workspace plugin catalog paths isolated. (#69909) Thanks @shakkernerd.</li>
<li>Models/CLI: clarify that <code>models list --provider</code> expects provider ids and reject display labels before loading model discovery. (#70504) Thanks @shakkernerd.</li>
<li>Configure: skip generic CLI startup bootstrap for <code>openclaw configure</code> and bound hint-only gateway probes so the onboarding TUI reaches its first prompt faster when the Gateway is unavailable. (#69984) Thanks @obviyus.</li>
<li>Agents/harness: surface selected plugin harness failures directly instead of replaying the same turn through embedded PI, preventing misleading secondary PI auth errors and avoiding duplicate side effects.</li>
<li>OpenAI Codex: add a ChatGPT device-code auth option beside browser OAuth, so headless or callback-hostile setups can sign in without relying on the localhost browser callback. (#69557) Thanks @vincentkoc.</li>
<li>CLI sessions: keep provider-owned CLI sessions through implicit daily expiry while preserving explicit reset behavior, and retain Claude CLI binding metadata across gateway agent requests. (#70106) Thanks @obviyus.</li>
<li>Config: accept <code>truncateAfterCompaction</code>. (#68395) Thanks @MonkeyLeeT.</li>
<li>CLI/Claude: keep Claude CLI session bindings stable across OAuth access-token refreshes, so gateway restarts continue the same Claude conversation instead of minting a fresh one. (#70132) Thanks @obviyus.</li>
<li>QQBot: add <code>INTERACTION</code> intent (<code>1 &lt;&lt; 26</code>) to the gateway constants and include it in the <code>FULL_INTENTS</code> mask so interaction events are received. (#70143) Thanks @cxyhhhhh.</li>
<li>Gateway/restart: preserve one-shot continuation instructions across gateway restarts so agents can resume and reply back to the original chat after reboot. (#63406) Thanks @VACInc.</li>
<li>Gateway/restart: write restart sentinel files atomically so interrupted writes cannot leave a truncated sentinel behind. (#70225) Thanks @obviyus.</li>
<li>Pairing: remove stale pending requests for a device when that paired device is deleted, so an old repair approval cannot recreate the removed device from leftover state.</li>
<li>Security/dotenv: block workspace <code>.env</code> overrides for Matrix, Mattermost, IRC, and Synology endpoint settings so cloned workspaces cannot redirect bundled connector traffic through local endpoint config. (#70240) Thanks @drobison00.</li>
<li>Telegram: require the same <code>/models</code> authorization for group model-picker callbacks, so unauthorized participants can no longer browse or change the session model through inline buttons. (#70235) Thanks @drobison00.</li>
<li>Agents/Pi: keep the filtered tool-name allowlist active for embedded OpenAI/OpenAI Codex GPT-5 runs and compaction sessions, so bundled and client tools still execute after the Pi <code>0.68.1</code> session-tool allowlist change instead of stopping at plan-only replies with no tool call. (#70281) Thanks @jalehman.</li>
<li>Agents/Pi: honor explicit <code>strict-agentic</code> execution contracts for incomplete-turn retry guards across providers, so manually opted-in local or compatible models get the same retry behavior without relying on OpenAI model inference. (#66750) Thanks @ziomancer.</li>
<li>OpenShell/sandbox: pin verified file reads to an already-opened descriptor, walk the ancestor chain for symlinked parents on platforms without fd-path readlink, and re-check file identity so parent symlink swaps cannot redirect in-sandbox reads to host files outside the allowed mount root. (#69798) Thanks @drobison00.</li>
<li>Gateway/Control UI: require authenticated Control UI read access before serving <code>/__openclaw/control-ui-config.json</code> when <code>gateway.auth</code> is enabled, so unauthenticated callers can no longer read bootstrap metadata. (#70247) Thanks @drobison00.</li>
<li>Gateway/restart: default session-scoped restart sentinels to a one-shot agent continuation, so chat-initiated Gateway restarts acknowledge successful boot automatically. (#70269) Thanks @obviyus.</li>
<li>Build/npm publish: fail postpublish verification when root <code>dist/*</code> files import bundled plugin runtime dependencies without mirroring them in the root package manifest, so Slack-style plugin deps cannot silently ship on the wrong module-resolution path again. (#60112) Thanks @medns.</li>
</ul>
<p><a href="https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md">View full changelog</a></p>
]]></description>
<enclosure url="https://github.com/openclaw/openclaw/releases/download/v2026.4.22/OpenClaw-2026.4.22.zip" length="47883836" type="application/octet-stream" sparkle:edSignature="kzJ2j2sWX4H+ZIc4dXEFORYr9tk3w1txpjCJ38cdSFz6yWHU0M6Sx9zN0DB7JGIpv1QC+D+jFbWBkl4SJqW2AA=="/>
</item>
<item>
<title>2026.4.20</title>
<pubDate>Tue, 21 Apr 2026 19:53:52 +0000</pubDate>
]]></description>
<enclosure url="https://github.com/openclaw/openclaw/releases/download/v2026.4.15/OpenClaw-2026.4.15.zip" length="47501638" type="application/octet-stream" sparkle:edSignature="JUG3cicpJqCQDvp7VYoN6qBuN4Kn4s0+QQFjlMR69OZlwViLdiStPIHa+1vpuoR4miYhJc9knSDVCFzSfQuYCQ=="/>
</item>
<item>
<title>2026.4.14</title>
<pubDate>Tue, 14 Apr 2026 14:08:09 +0000</pubDate>
<link>https://raw.githubusercontent.com/openclaw/openclaw/main/appcast.xml</link>
<sparkle:version>2026041490</sparkle:version>
<sparkle:shortVersionString>2026.4.14</sparkle:shortVersionString>
<sparkle:minimumSystemVersion>15.0</sparkle:minimumSystemVersion>
<description><![CDATA[<h2>OpenClaw 2026.4.14</h2>
<h3>Changes</h3>
<ul>
<li>OpenAI Codex/models: add forward-compat support for <code>gpt-5.4-pro</code>, including Codex pricing/limits and list/status visibility before the upstream catalog catches up. (#66453) Thanks @jepson-liu.</li>
<li>Telegram/forum topics: surface human topic names in agent context, prompt metadata, and plugin hook metadata by learning names from Telegram forum service messages. (#65973) Thanks @ptahdunbar.</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>Agents/Ollama: forward the configured embedded-run timeout into the global undici stream timeout tuning so slow local Ollama runs no longer inherit the default stream cutoff instead of the operator-set run timeout. (#63175) Thanks @mindcraftreader and @vincentkoc.</li>
<li>Models/Codex: include <code>apiKey</code> in the codex provider catalog output so the Pi ModelRegistry validator no longer rejects the entry and silently drops all custom models from every provider in <code>models.json</code>. (#66180) Thanks @hoyyeva.</li>
<li>Tools/image+pdf: normalize configured provider/model refs before media-tool registry lookup so image and PDF tool runs stop rejecting valid Ollama vision models as unknown just because the tool path skipped the usual model-ref normalization step. (#59943) Thanks @yqli2420 and @vincentkoc.</li>
<li>Slack/interactions: apply the configured global <code>allowFrom</code> owner allowlist to channel block-action and modal interactive events, require an expected sender id for cross-verification, and reject ambiguous channel types so interactive triggers can no longer bypass the documented allowlist intent in channels without a <code>users</code> list. Open-by-default behavior is preserved when no allowlists are configured. (#66028) Thanks @eleqtrizit.</li>
<li>Media-understanding/attachments: fail closed when a local attachment path cannot be canonically resolved via <code>realpath</code>, so a <code>realpath</code> error can no longer downgrade the canonical-roots allowlist check to a non-canonical comparison; attachments that also have a URL still fall back to the network fetch path. (#66022) Thanks @eleqtrizit.</li>
<li>Agents/gateway-tool: reject <code>config.patch</code> and <code>config.apply</code> calls from the model-facing gateway tool when they would newly enable any flag enumerated by <code>openclaw security audit</code> (for example <code>dangerouslyDisableDeviceAuth</code>, <code>allowInsecureAuth</code>, <code>dangerouslyAllowHostHeaderOriginFallback</code>, <code>hooks.gmail.allowUnsafeExternalContent</code>, <code>tools.exec.applyPatch.workspaceOnly: false</code>); already-enabled flags pass through unchanged so non-dangerous edits in the same patch still apply, and direct authenticated operator RPC behavior is unchanged. (#62006) Thanks @eleqtrizit.</li>
<li>Google image generation: strip a trailing <code>/openai</code> suffix from configured Google base URLs only when calling the native Gemini image API so Gemini image requests stop 404ing without breaking explicit OpenAI-compatible Google endpoints. (#66445) Thanks @dapzthelegend.</li>
<li>Telegram/forum topics: persist learned topic names to the Telegram session sidecar store so agent context can keep using human topic names after a restart instead of relearning from future service metadata. (#66107) Thanks @obviyus.</li>
<li>Doctor/systemd: keep <code>openclaw doctor --repair</code> and service reinstall from re-embedding dotenv-backed secrets in user systemd units, while preserving newer inline overrides over stale state-dir <code>.env</code> values. (#66249) Thanks @tmimmanuel.</li>
<li>Ollama/OpenAI-compat: send <code>stream_options.include_usage</code> for Ollama streaming completions so local Ollama runs report real usage instead of falling back to bogus prompt-token counts that trigger premature compaction. (#64568) Thanks @xchunzhao and @vincentkoc.</li>
<li>Doctor/plugins: cache external <code>preferOver</code> catalog lookups within each plugin auto-enable pass so large <code>agents.list</code> configs no longer peg CPU and repeatedly reread plugin catalogs during doctor/plugins resolution. (#66246) Thanks @yfge.</li>
<li>GitHub Copilot/thinking: allow <code>github-copilot/gpt-5.4</code> to use <code>xhigh</code> reasoning so Copilot GPT-5.4 matches the rest of the GPT-5.4 family. (#50168) Thanks @jakepresent and @vincentkoc.</li>
<li>Memory/embeddings: preserve non-OpenAI provider prefixes when normalizing OpenAI-compatible embedding model refs so proxy-backed memory providers stop failing with <code>Unknown memory embedding provider</code>. (#66452) Thanks @jlapenna.</li>
<li>Agents/local models: clarify low-context preflight hints for self-hosted models, point config-backed caps at the relevant OpenClaw setting, and stop suggesting larger models when <code>agents.defaults.contextTokens</code> is the real limit. (#66236) Thanks @ImLukeF.</li>
<li>Browser/SSRF: restore hostname navigation under the default browser SSRF policy while keeping explicit strict mode reachable from config, and keep managed loopback CDP <code>/json/new</code> fallback requests on the local CDP control policy so browser follow-up fixes stop regressing normal navigation or self-blocking local CDP control. (#66386) Thanks @obviyus.</li>
<li>Models/Codex: canonicalize the legacy <code>openai-codex/gpt-5.4-codex</code> runtime alias to <code>openai-codex/gpt-5.4</code> while still honoring alias-specific and canonical per-model overrides. (#43060) Thanks @Sapientropic and @vincentkoc.</li>
<li>Browser/SSRF: preserve explicit strict browser navigation mode for legacy <code>browser.ssrfPolicy.allowPrivateNetwork: false</code> configs by normalizing the legacy alias to the canonical strict marker instead of silently widening those installs to the default non-strict hostname-navigation path.</li>
<li>Onboarding/custom providers: use <code>max_tokens=16</code> for OpenAI-compatible verification probes so stricter custom endpoints stop rejecting onboarding checks that only need a tiny completion. (#66450) Thanks @WuKongAI-CMU.</li>
<li>Agents/subagents: emit the subagent registry lazy-runtime stub on the stable dist path that both source and bundled runtime imports resolve, so the follow-up dist fix no longer still fails with <code>ERR_MODULE_NOT_FOUND</code> at runtime. (#66420) Thanks @obviyus.</li>
<li>Media-understanding/proxy env: auto-upgrade provider HTTP helper requests to trusted env-proxy mode only when <code>HTTP_PROXY</code>/<code>HTTPS_PROXY</code> is active and the target is not bypassed by <code>NO_PROXY</code>, so remote media-understanding and transcription requests stop failing local DNS pre-resolution in proxy-only environments without widening SSRF bypasses. (#52162) Thanks @mjamiv and @vincentkoc.</li>
<li>Telegram/media downloads: let Telegram media fetches trust an operator-configured explicit proxy for target DNS resolution after hostname-policy checks, so proxy-backed installs stop failing <code>could not download media</code> on Bot API file downloads after the DNS-pinning regression. (#66245) Thanks @dawei41468 and @vincentkoc.</li>
<li>Browser: keep loopback CDP readiness checks reachable under strict SSRF defaults so OpenClaw can reconnect to locally started managed Chrome. (#66354) Thanks @hxy91819.</li>
<li>Agents/context engine: compact engine-owned sessions from the first tool-loop delta and preserve ingest fallback when <code>afterTurn</code> is absent, so long-running tool loops can stay bounded without dropping engine state. (#63555) Thanks @Bikkies.</li>
<li>OpenAI Codex/auth: keep malformed Codex CLI auth-file diagnostics on the debug logger instead of stdout so interactive command output stays clean while auth read failures remain traceable. (#66451) Thanks @SimbaKingjoe.</li>
<li>Discord/native commands: return the real status card for native <code>/status</code> interactions instead of falling through to the synthetic <code>✅ Done.</code> ack when the generic dispatcher produces no visible reply. (#54629) Thanks @tkozzer and @vincentkoc.</li>
<li>Hooks/Ollama: let LLM-backed session-memory slug generation honor an explicit <code>agents.defaults.timeoutSeconds</code> override instead of always aborting after 15 seconds, so slow local Ollama runs stop silently dropping back to generic filenames. (#66237) Thanks @dmak and @vincentkoc.</li>
<li>Media/transcription: remap <code>.aac</code> filenames to <code>.m4a</code> for OpenAI-compatible audio uploads so AAC voice notes stop failing MIME-sensitive transcription endpoints. (#66446) Thanks @ben-z.</li>
<li>UI/chat: replace marked.js with markdown-it so maliciously crafted markdown can no longer freeze the Control UI via ReDoS. (#46707) Thanks @zhangfnf.</li>
<li>Auto-reply/send policy: keep <code>sendPolicy: "deny"</code> from blocking inbound message processing, so the agent still runs its turn while all outbound delivery is suppressed for observer-style setups. (#65461, #53328) Thanks @omarshahine.</li>
<li>BlueBubbles: lazy-refresh the Private API server-info cache on send when reply threading or message effects are requested but status is unknown, so sends no longer silently degrade to plain messages when the 10-minute cache expires. (#65447, #43764) Thanks @omarshahine.</li>
<li>Heartbeat/security: force owner downgrade for untrusted <code>hook:wake</code> system events [AI-assisted]. (#66031) Thanks @pgondhi987.</li>
<li>Browser/security: enforce SSRF policy on snapshot, screenshot, and tab routes [AI]. (#66040) Thanks @pgondhi987.</li>
<li>Microsoft Teams/security: enforce sender allowlist checks on SSO signin invokes [AI]. (#66033) Thanks @pgondhi987.</li>
<li>Config/security: redact <code>sourceConfig</code> and <code>runtimeConfig</code> alias fields in <code>redactConfigSnapshot</code> [AI]. (#66030) Thanks @pgondhi987.</li>
<li>Agents/context engines: run opt-in turn maintenance as idle-aware background work so the next foreground turn no longer waits on proactive maintenance. (#65233) Thanks @100yenadmin.</li>
<li>Plugins/status: report the registered context-engine IDs in <code>plugins inspect</code> instead of the owning plugin ID, so non-matching engine IDs and multi-engine plugins are classified correctly. (#58766) Thanks @zhuisDEV.</li>
<li>Context engines: reject resolved plugin engines whose reported <code>info.id</code> does not match their registered slot id, so malformed engines fail fast before id-based runtime branches can misbehave. (#63222) Thanks @fuller-stack-dev.</li>
<li>WhatsApp: patch installed Baileys media encryption writes during OpenClaw postinstall so the default npm/install.sh delivery path waits for encrypted media files to finish flushing before readback, avoiding transient <code>ENOENT</code> crashes on image sends. (#65896) Thanks @frankekn.</li>
<li>Gateway/update: unify service entrypoint resolution around the canonical bundled gateway entrypoint so update, reinstall, and doctor repair stop drifting between stale <code>dist/entry.js</code> and current <code>dist/index.js</code> paths. (#65984) Thanks @mbelinky.</li>
<li>Heartbeat/Telegram topics: keep isolated heartbeat replies on the bound forum topic when <code>target=last</code>, instead of dropping them into the group root chat. (#66035) Thanks @mbelinky.</li>
<li>Browser/CDP: let managed local Chrome readiness, status probes, and managed loopback CDP control bypass browser SSRF policy for their own loopback control plane, so OpenClaw no longer misclassifies a healthy child browser as "not reachable after start". (#65695, #66043) Thanks @mbelinky.</li>
<li>Gateway/sessions: stop heartbeat, cron-event, and exec-event turns from overwriting shared-session routing and origin metadata, preventing synthetic <code>heartbeat</code> targets from poisoning later cron or user delivery. (#66073, #63733, #35300) Thanks @mbelinky.</li>
<li>Browser/CDP: let local attach-only <code>manual-cdp</code> profiles reuse the local loopback CDP control plane under strict default policy and remote-class probe timeouts, so tabs/snapshot stop falsely reporting a live local browser session as not running. (#65611, #66080) Thanks @mbelinky.</li>
<li>Cron/scheduler: stop inventing short retries when cron next-run calculation returns no valid future slot, and keep a maintenance wake armed so enabled unscheduled jobs recover without entering a refire loop. (#66019, #66083) Thanks @mbelinky.</li>
<li>Cron/scheduler: preserve the active error-backoff floor when maintenance repair recomputes a missing cron next-run, so recurring errored jobs do not resume early after a transient next-run resolution failure. (#66019, #66083, #66113) Thanks @mbelinky.</li>
<li>Outbound/delivery-queue: persist the originating outbound <code>session</code> context on queued delivery entries and replay it during recovery, so write-ahead-queued sends keep their original outbound media policy context after restart instead of evaluating against a missing session. (#66025) Thanks @eleqtrizit.</li>
<li>Memory/Ollama: restore the built-in <code>ollama</code> embedding adapter in memory-core so explicit <code>memorySearch.provider: "ollama"</code> works again, and include endpoint-aware cache keys so different Ollama hosts do not reuse each other's embeddings. (#63429, #66078, #66163) Thanks @nnish16 and @vincentkoc.</li>
<li>Auto-reply/queue: split collect-mode followup drains into contiguous groups by per-message authorization context (sender id, owner status, exec/bash-elevated overrides), so queued items from different senders or exec configs no longer execute under the last queued run's owner-only and exec-approval context. (#66024) Thanks @eleqtrizit.</li>
<li>Dreaming/memory-core: require a live queued Dreaming cron event before the heartbeat hook runs the sweep, so managed Dreaming no longer replays on later heartbeats after the scheduled run was already consumed. (#66139) Thanks @mbelinky.</li>
<li>Control UI/Dreaming: stop Imported Insights and Memory Palace from calling optional <code>memory-wiki</code> gateway methods when the plugin is off, and refresh config before wiki reloads so the Dreaming tab stops showing misleading unknown-method failures. (#66140) Thanks @mbelinky.</li>
<li>Agents/tools: only mark streamed unknown-tool retries as counted when a streamed message actually classifies an unavailable tool, and keep incomplete streamed tool names from resetting the retry streak before the final assistant message arrives. (#66145) Thanks @dutifulbob.</li>
<li>Memory/active-memory: move recalled memory onto the hidden untrusted prompt-prefix path instead of system prompt injection, label the visible Active Memory status line fields, and include the resolved recall provider/model in gateway debug logs so trace/debug output matches what the model actually saw. (#66144) Thanks @Takhoffman.</li>
<li>Memory/QMD: stop treating legacy lowercase <code>memory.md</code> as a second default root collection, so QMD recall no longer searches phantom <code>memory-alt-*</code> collections and builtin/QMD root-memory fallback stays aligned. (#66141) Thanks @mbelinky.</li>
|
||||
<li>Agents/subagents: ship <code>dist/agents/subagent-registry.runtime.js</code> in npm builds so <code>runtime: "subagent"</code> runs stop stalling in <code>queued</code> after the registry import fails. (#66189) Thanks @yqli2420 and @vincentkoc.</li>
|
||||
<li>Agents/OpenAI: map <code>minimal</code> thinking to OpenAI's supported <code>low</code> reasoning effort for GPT-5.4 requests, so embedded runs stop failing request validation. Thanks @steipete.</li>
|
||||
<li>Voice-call/media-stream: resolve the source IP from trusted forwarding headers for per-IP pending-connection limits when <code>webhookSecurity.trustForwardingHeaders</code> and <code>trustedProxyIPs</code> are configured, and reserve <code>maxConnections</code> capacity for in-flight WebSocket upgrades so concurrent handshakes can no longer momentarily exceed the operator-set cap. (#66027) Thanks @eleqtrizit.</li>
|
||||
<li>Feishu/allowlist: canonicalize allowlist entries by explicit <code>user</code>/<code>chat</code> kind, strip repeated <code>feishu:</code>/<code>lark:</code> provider prefixes, and stop folding opaque Feishu IDs to lowercase, so allowlist matching no longer crosses user/chat namespaces or widens to case-insensitive ID matches the operator did not intend. (#66021) Thanks @eleqtrizit.</li>
|
||||
<li>Telegram/status commands: let read-only status slash commands bypass busy topic turns, while keeping <code>/export-session</code> on the normal lane so it cannot interleave with an in-flight session mutation. (#66226) Thanks @VACInc and @vincentkoc.</li>
|
||||
<li>TTS/reply media: persist OpenClaw temp voice outputs into managed outbound media and allow them through reply-media normalization, so voice-note replies stop silently dropping. (#63511) Thanks @jetd1.</li>
|
||||
<li>Agents/tools: treat Windows drive-letter paths (<code>C:\\...</code>) as absolute when resolving sandbox and read-tool paths so workspace root is not prepended under POSIX path rules. (#54039) Thanks @ly85206559 and @vincentkoc.</li>
|
||||
<li>Agents/OpenAI: recover embedded GPT-style runs when reasoning-only or empty turns need bounded continuation, with replay-safe retry gating and incomplete-turn fallback when no visible answer arrives. (#66167) thanks @jalehman</li>
|
||||
<li>Outbound/relay-status: suppress internal relay-status placeholder payloads (<code>No channel reply.</code>, <code>Replied in-thread.</code>, <code>Replied in #...</code>, wiki-update status variants ending in <code>No channel reply.</code>) before channel delivery so internal housekeeping text does not leak to users.</li>
|
||||
<li>Slack/doctor: add a dedicated doctor-contract sidecar so config warmup paths such as <code>openclaw cron</code> no longer fall back to Slack's broader contract surface, which could trigger Slack-related config-read crashes on affected setups. (#63192) Thanks @shhtheonlyperson.</li>
|
||||
<li>Hooks/session-memory: pass the resolved agent workspace into gateway <code>/new</code> and <code>/reset</code> session-memory hooks so reset snapshots stay scoped to the right agent workspace instead of leaking into the default workspace. (#64735) Thanks @suboss87 and @vincentkoc.</li>
|
||||
<li>CLI/approvals: raise the default <code>openclaw approvals get</code> gateway timeout and report config-load timeouts explicitly, so slow hosts stop showing a misleading <code>Config unavailable.</code> note when the approvals snapshot succeeds but the follow-up config RPC needs more time. (#66239) Thanks @neeravmakwana.</li>
|
||||
<li>Media/store: honor configured agent media limits when saving generated media and persisting outbound reply media, so the store no longer hard-stops those flows at 5 MB before the configured limit applies. (#66229) Thanks @neeravmakwana and @vincentkoc.</li>
|
||||
</ul>
|
||||
<p><a href="https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md">View full changelog</a></p>
|
||||
]]></description>
|
||||
<enclosure url="https://github.com/openclaw/openclaw/releases/download/v2026.4.14/OpenClaw-2026.4.14.zip" length="47490719" type="application/octet-stream" sparkle:edSignature="KW4gq3qjhKPSQebRVL/mSgttTOhLVKtnWz7pNCZt29oEZ96yU14OnxxSsmtNHmDi4m7G7gfVOfndp80XKFQlCw=="/>
|
||||
</item>
|
||||
</channel>
|
||||
</rss>
|
||||
</rss>
|
||||
|
||||
@@ -65,8 +65,8 @@ android {
applicationId = "ai.openclaw.app"
minSdk = 31
targetSdk = 36
versionCode = 2026042200
versionName = "2026.4.22"
versionCode = 2026042400
versionName = "2026.4.24"
ndk {
// Support all major ABIs — native libs are tiny (~47 KB per ABI)
abiFilters += listOf("armeabi-v7a", "arm64-v8a", "x86", "x86_64")

@@ -34,7 +34,7 @@ fun parseAssistantLaunchIntent(intent: Intent?): AssistantLaunchRequest? {
AssistantLaunchRequest(
source = "app_action",
prompt = prompt,
autoSend = prompt != null,
autoSend = false,
)
}

@@ -63,8 +63,6 @@ internal fun isPrivateLanGatewayHost(
}
if (host.isEmpty()) return false
if (isLoopbackGatewayHost(host, allowEmulatorBridgeAlias = allowEmulatorBridgeAlias)) return true
if (host.endsWith(".local")) return true
if (!host.contains('.') && !host.contains(':')) return true

parseIpv4Address(host)?.let { ipv4 ->
val first = ipv4[0].toInt() and 0xff

@@ -7,7 +7,7 @@ import ai.openclaw.app.gateway.GatewayClientInfo
import ai.openclaw.app.gateway.GatewayConnectOptions
import ai.openclaw.app.gateway.GatewayEndpoint
import ai.openclaw.app.gateway.GatewayTlsParams
import ai.openclaw.app.gateway.isPrivateLanGatewayHost
import ai.openclaw.app.gateway.isLoopbackGatewayHost
import ai.openclaw.app.LocationMode
import ai.openclaw.app.VoiceWakeMode

@@ -34,7 +34,7 @@ class ConnectionManager(
val stableId = endpoint.stableId
val stored = storedFingerprint?.trim().takeIf { !it.isNullOrEmpty() }
val isManual = stableId.startsWith("manual|")
val cleartextAllowedHost = isPrivateLanGatewayHost(endpoint.host)
val cleartextAllowedHost = isLoopbackGatewayHost(endpoint.host)

if (isManual) {
if (!manualTlsEnabled && cleartextAllowedHost) return null

@@ -1,6 +1,6 @@
package ai.openclaw.app.ui

import ai.openclaw.app.gateway.isPrivateLanGatewayHost
import ai.openclaw.app.gateway.isLoopbackGatewayHost
import java.util.Base64
import java.util.Locale
import java.net.URI
@@ -56,9 +56,9 @@ internal data class GatewayScannedSetupCodeResult(

private val gatewaySetupJson = Json { ignoreUnknownKeys = true }
private const val remoteGatewaySecurityRule =
"Tailscale and public mobile nodes require wss:// or Tailscale Serve. ws:// is allowed for private LAN, localhost, and the Android emulator."
"Tailscale and public mobile nodes require wss:// or Tailscale Serve. ws:// is allowed only for localhost and the Android emulator."
private const val remoteGatewaySecurityFix =
"Use a private LAN host/address, or enable Tailscale Serve / expose a wss:// gateway URL."
"Use localhost/the Android emulator, or enable Tailscale Serve / expose a wss:// gateway URL."

internal fun resolveGatewayConnectConfig(
useSetupCode: Boolean,
@@ -143,7 +143,7 @@ internal fun parseGatewayEndpoint(rawInput: String): GatewayEndpointConfig? {
"wss", "https" -> true
else -> true
}
if (!tls && !isPrivateLanGatewayHost(host)) {
if (!tls && !isLoopbackGatewayHost(host)) {
return GatewayEndpointParseResult(error = GatewayEndpointValidationError.INSECURE_REMOTE_URL)
}
val defaultPort =

@@ -49,7 +49,7 @@ internal fun buildGatewayDiagnosticsReport(
Please:
- pick one route only: same machine, same LAN, Tailscale, or public URL
- classify this as pairing/auth, TLS trust, wrong advertised route, wrong address/port, or gateway down
- remember: Tailscale/public mobile routes require wss:// or Tailscale Serve; private LAN ws:// is still allowed
- remember: Tailscale/public mobile routes require wss:// or Tailscale Serve; ws:// is loopback-only
- quote the exact app status/error below
- tell me whether `openclaw devices list` should show a pending pairing request
- if more signal is needed, ask for `openclaw qr --json`, `openclaw devices list`, and `openclaw nodes status`

@@ -4,7 +4,6 @@ import android.content.Intent
import org.junit.Assert.assertEquals
import org.junit.Assert.assertFalse
import org.junit.Assert.assertNull
import org.junit.Assert.assertTrue
import org.junit.Test
import org.junit.runner.RunWith
import org.robolectric.RobolectricTestRunner
@@ -33,7 +32,7 @@ class AssistantLaunchTest {
requireNotNull(parsed)
assertEquals("app_action", parsed.source)
assertEquals("summarize my unread texts", parsed.prompt)
assertTrue(parsed.autoSend)
assertFalse(parsed.autoSend)
}

@Test

@@ -108,7 +108,7 @@ class ConnectionManagerTest {
}

@Test
fun resolveTlsParamsForEndpoint_manualPrivateLanCanStayCleartextWhenToggleIsOff() {
fun resolveTlsParamsForEndpoint_manualPrivateLanForcesTlsWhenToggleIsOff() {
val endpoint = GatewayEndpoint.manual(host = "192.168.1.20", port = 18789)

val params =
@@ -118,7 +118,9 @@ class ConnectionManagerTest {
manualTlsEnabled = false,
)

assertNull(params)
assertEquals(true, params?.required)
assertNull(params?.expectedFingerprint)
assertEquals(false, params?.allowTOFU)
}

@Test
@@ -146,7 +148,7 @@ class ConnectionManagerTest {
}

@Test
fun resolveTlsParamsForEndpoint_discoveryPrivateLanWithoutHintsCanStayCleartext() {
fun resolveTlsParamsForEndpoint_discoveryPrivateLanWithoutHintsStillRequiresTls() {
val endpoint =
GatewayEndpoint(
stableId = "_openclaw-gw._tcp.|local.|Test",
@@ -164,7 +166,9 @@ class ConnectionManagerTest {
manualTlsEnabled = false,
)

assertNull(params)
assertEquals(true, params?.required)
assertNull(params?.expectedFingerprint)
assertEquals(false, params?.allowTOFU)
}

@Test
@@ -240,9 +244,9 @@ class ConnectionManagerTest {
}

@Test
fun isPrivateLanGatewayHost_acceptsLanHostsButRejectsTailnetHosts() {
fun isPrivateLanGatewayHost_acceptsLanIpsButRejectsMdnsAndTailnetHosts() {
assertTrue(isPrivateLanGatewayHost("192.168.1.20"))
assertTrue(isPrivateLanGatewayHost("gateway.local"))
assertFalse(isPrivateLanGatewayHost("gateway.local"))
assertFalse(isPrivateLanGatewayHost("100.64.0.9"))
assertFalse(isPrivateLanGatewayHost("gateway.tailnet.ts.net"))
}

@@ -99,33 +99,16 @@ class GatewayConfigResolverTest {
}

@Test
fun parseGatewayEndpointAllowsPrivateLanCleartextWsUrls() {
fun parseGatewayEndpointRejectsPrivateLanCleartextWsUrls() {
val parsed = parseGatewayEndpoint("ws://192.168.1.20:18789")

assertEquals(
GatewayEndpointConfig(
host = "192.168.1.20",
port = 18789,
tls = false,
displayUrl = "http://192.168.1.20:18789",
),
parsed,
)
assertNull(parsed)
}

@Test
fun parseGatewayEndpointAllowsMdnsCleartextWsUrls() {
fun parseGatewayEndpointRejectsMdnsCleartextWsUrls() {
val parsed = parseGatewayEndpoint("ws://gateway.local:18789")

assertEquals(
GatewayEndpointConfig(
host = "gateway.local",
port = 18789,
tls = false,
displayUrl = "http://gateway.local:18789",
),
parsed,
)
assertNull(parsed)
}

@Test
@@ -163,13 +146,9 @@ class GatewayConfigResolverTest {
}

@Test
fun parseGatewayEndpointAllowsLinkLocalIpv6ZoneCleartextWsUrls() {
fun parseGatewayEndpointRejectsLinkLocalIpv6ZoneCleartextWsUrls() {
val parsed = parseGatewayEndpoint("ws://[fe80::1%25eth0]")

assertEquals("fe80::1%25eth0", parsed?.host)
assertEquals(18789, parsed?.port)
assertEquals(false, parsed?.tls)
assertEquals("http://[fe80::1%25eth0]:18789", parsed?.displayUrl)
assertNull(parsed)
}

@Test
@@ -290,19 +269,10 @@ class GatewayConfigResolverTest {
}

@Test
fun parseGatewayEndpointResultAcceptsLanCleartextGateway() {
fun parseGatewayEndpointResultFlagsInsecureLanCleartextGateway() {
val parsed = parseGatewayEndpointResult("ws://192.168.1.20:18789")

assertEquals(
GatewayEndpointConfig(
host = "192.168.1.20",
port = 18789,
tls = false,
displayUrl = "http://192.168.1.20:18789",
),
parsed.config,
)
assertNull(parsed.error)
assertNull(parsed.config)
assertEquals(GatewayEndpointValidationError.INSECURE_REMOTE_URL, parsed.error)
}

@Test
@@ -443,7 +413,7 @@ class GatewayConfigResolverTest {
}

@Test
fun resolveGatewayConnectConfigAllowsPrivateLanManualCleartextEndpoint() {
fun resolveGatewayConnectConfigRejectsPrivateLanManualCleartextEndpoint() {
val resolved =
resolveGatewayConnectConfig(
useSetupCode = false,
@@ -459,9 +429,7 @@ class GatewayConfigResolverTest {
fallbackPassword = "",
)

assertEquals("192.168.31.100", resolved?.host)
assertEquals(18789, resolved?.port)
assertEquals(false, resolved?.tls)
assertNull(resolved)
}

@Test

@@ -1,5 +1,13 @@
# OpenClaw iOS Changelog

## 2026.4.24 - 2026-04-24

Maintenance update for the current OpenClaw development release.

## 2026.4.23 - 2026-04-23

Maintenance update for the current OpenClaw development release.

## 2026.4.22 - 2026-04-22

Maintenance update for the current OpenClaw development release.

@@ -2,8 +2,8 @@
// Source of truth: apps/ios/version.json
// Generated by scripts/ios-sync-versioning.ts.

OPENCLAW_IOS_VERSION = 2026.4.22
OPENCLAW_MARKETING_VERSION = 2026.4.22
OPENCLAW_IOS_VERSION = 2026.4.24
OPENCLAW_MARKETING_VERSION = 2026.4.24
OPENCLAW_BUILD_VERSION = 1

#include? "../build/Version.xcconfig"

@@ -1,3 +1,3 @@
{
"version": "2026.4.22"
"version": "2026.4.24"
}

@@ -1,6 +1,24 @@
import Foundation
import OpenClawProtocol

func whatsappLoginWaitRequestTimeoutMs(
startedAt: Date,
timeoutMs: Int,
didRunFinalWait: inout Bool,
now: Date = Date()) -> Int?
{
let elapsedMs = Int(now.timeIntervalSince(startedAt) * 1000)
let remainingMs = max(timeoutMs - elapsedMs, 0)
if remainingMs > 0 {
return remainingMs
}
if didRunFinalWait {
return nil
}
didRunFinalWait = true
return 1
}

extension ChannelsStore {
func start() {
guard !self.isPreview else { return }
@@ -77,18 +95,28 @@ extension ChannelsStore {
guard !self.whatsappBusy else { return }
self.whatsappBusy = true
defer { self.whatsappBusy = false }
let startedAt = Date()
var didRunFinalWait = false
do {
let params: [String: AnyCodable] = [
"timeoutMs": AnyCodable(timeoutMs),
]
let result: WhatsAppLoginWaitResult = try await GatewayConnection.shared.requestDecoded(
method: .webLoginWait,
params: params,
timeoutMs: Double(timeoutMs) + 5000)
self.whatsappLoginMessage = result.message
self.whatsappLoginConnected = result.connected
if result.connected {
self.whatsappLoginQrDataUrl = nil
while let remainingMs = whatsappLoginWaitRequestTimeoutMs(
startedAt: startedAt,
timeoutMs: timeoutMs,
didRunFinalWait: &didRunFinalWait)
{
var params: [String: AnyCodable] = [
"timeoutMs": AnyCodable(remainingMs),
]
if let currentQrDataUrl = self.whatsappLoginQrDataUrl {
params["currentQrDataUrl"] = AnyCodable(currentQrDataUrl)
}
let result: WhatsAppLoginWaitResult = try await GatewayConnection.shared.requestDecoded(
method: .webLoginWait,
params: params,
timeoutMs: Double(remainingMs) + 5000)
self.applyWhatsAppLoginWaitResult(result)
if result.connected || result.qrDataUrl == nil || didRunFinalWait {
break
}
}
} catch {
self.whatsappLoginMessage = error.localizedDescription
@@ -151,9 +179,10 @@ private struct WhatsAppLoginStartResult: Codable {
let connected: Bool?
}

private struct WhatsAppLoginWaitResult: Codable {
struct WhatsAppLoginWaitResult: Codable {
let connected: Bool
let message: String
let qrDataUrl: String?
}

private struct ChannelLogoutResult: Codable {

@@ -290,6 +290,16 @@ final class ChannelsStore {
return self.snapshot?.channelOrder ?? []
}

func applyWhatsAppLoginWaitResult(_ result: WhatsAppLoginWaitResult) {
self.whatsappLoginMessage = result.message
self.whatsappLoginConnected = result.connected
if let qrDataUrl = result.qrDataUrl {
self.whatsappLoginQrDataUrl = qrDataUrl
} else if result.connected {
self.whatsappLoginQrDataUrl = nil
}
}

init(isPreview: Bool = ProcessInfo.processInfo.isPreview) {
self.isPreview = isPreview
}

@@ -9,8 +9,14 @@ enum ExecAllowlistMatcher {
for entry in entries {
switch ExecApprovalHelpers.validateAllowlistPattern(entry.pattern) {
case let .valid(pattern):
let target = resolvedPath ?? rawExecutable
if self.matches(pattern: pattern, target: target) { return entry }
if ExecApprovalHelpers.patternHasPathSelector(pattern) {
let target = resolvedPath ?? rawExecutable
if self.matches(pattern: pattern, target: target) { return entry }
} else if pattern != "*",
!ExecApprovalHelpers.patternHasPathSelector(rawExecutable),
self.matchesExecutableBasename(pattern: pattern, resolution: resolution) {
return entry
}
case .invalid:
continue
}
@@ -34,6 +40,20 @@ enum ExecAllowlistMatcher {
return matches
}

private static func matchesExecutableBasename(
pattern: String,
resolution: ExecCommandResolution) -> Bool
{
var candidates = Set<String>()
if !resolution.executableName.isEmpty {
candidates.insert(resolution.executableName)
}
if let resolvedPath = resolution.resolvedPath, !resolvedPath.isEmpty {
candidates.insert(URL(fileURLWithPath: resolvedPath).lastPathComponent)
}
return candidates.contains { self.matches(pattern: pattern, target: $0) }
}

private static func matches(pattern: String, target: String) -> Bool {
let trimmed = pattern.trimmingCharacters(in: .whitespacesAndNewlines)
guard !trimmed.isEmpty else { return false }

@@ -616,6 +616,17 @@ enum ExecApprovalsStore {
let trimmedResolved = entry.lastResolvedPath?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
let normalizedResolved = trimmedResolved.isEmpty ? nil : trimmedResolved

if !ExecApprovalHelpers.patternHasPathSelector(trimmedPattern),
!trimmedResolved.isEmpty,
case let .valid(migratedPattern) = ExecApprovalHelpers.validateAllowlistPattern(trimmedResolved) {
return ExecAllowlistEntry(
id: entry.id,
pattern: migratedPattern,
lastUsedAt: entry.lastUsedAt,
lastUsedCommand: entry.lastUsedCommand,
lastResolvedPath: normalizedResolved)
}

switch ExecApprovalHelpers.validateAllowlistPattern(trimmedPattern) {
case let .valid(pattern):
return ExecAllowlistEntry(
@@ -724,11 +735,10 @@ enum ExecApprovalHelpers {
static func validateAllowlistPattern(_ pattern: String?) -> ExecAllowlistPatternValidation {
let trimmed = pattern?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
guard !trimmed.isEmpty else { return .invalid(.empty) }
guard self.containsPathComponent(trimmed) else { return .invalid(.missingPathComponent) }
return .valid(trimmed)
}

static func isPathPattern(_ pattern: String?) -> Bool {
static func isValidAllowlistPattern(_ pattern: String?) -> Bool {
switch self.validateAllowlistPattern(pattern) {
case .valid:
true
@@ -737,6 +747,11 @@ enum ExecApprovalHelpers {
}
}

static func isPathPattern(_ pattern: String?) -> Bool {
let trimmed = pattern?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
return self.patternHasPathSelector(trimmed)
}

static func parseDecision(_ raw: String?) -> ExecApprovalDecision? {
let trimmed = raw?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
guard !trimmed.isEmpty else { return nil }
@@ -759,7 +774,7 @@ enum ExecApprovalHelpers {
return pattern.isEmpty ? nil : pattern
}

private static func containsPathComponent(_ pattern: String) -> Bool {
static func patternHasPathSelector(_ pattern: String) -> Bool {
pattern.contains("/") || pattern.contains("~") || pattern.contains("\\")
}
}

@@ -70,6 +70,7 @@ actor GatewayConnection {
case wizardStatus = "wizard.status"
case talkConfig = "talk.config"
case talkMode = "talk.mode"
case talkSpeak = "talk.speak"
case webLoginStart = "web.login.start"
case webLoginWait = "web.login.wait"
case channelsLogout = "channels.logout"

@@ -15,9 +15,9 @@
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>2026.4.22</string>
<string>2026.4.24</string>
<key>CFBundleVersion</key>
<string>2026042200</string>
<string>2026042400</string>
<key>CFBundleIconFile</key>
<string>OpenClaw</string>
<key>CFBundleURLTypes</key>

@@ -105,7 +105,7 @@ struct SystemRunSettingsView: View {
.foregroundStyle(.secondary)
} else {
HStack(spacing: 8) {
TextField("Add allowlist path pattern (case-insensitive globs)", text: self.$newPattern)
TextField("Add command name or path glob", text: self.$newPattern)
.textFieldStyle(.roundedBorder)
Button("Add") {
if self.model.addEntry(self.newPattern) == nil {
@@ -113,10 +113,10 @@ struct SystemRunSettingsView: View {
}
}
.buttonStyle(.bordered)
.disabled(!self.model.isPathPattern(self.newPattern))
.disabled(!self.model.isValidPattern(self.newPattern))
}

Text("Path patterns only. Basename entries like \"echo\" are ignored.")
Text("Bare names match PATH-resolved commands. Use a path glob for a specific binary.")
.font(.footnote)
.foregroundStyle(.secondary)
if let validationMessage = self.model.allowlistValidationMessage {
@@ -424,8 +424,8 @@ final class ExecApprovalsSettingsModel {
self.entries.first(where: { $0.id == id })
}

func isPathPattern(_ pattern: String) -> Bool {
ExecApprovalHelpers.isPathPattern(pattern)
func isValidPattern(_ pattern: String) -> Bool {
ExecApprovalHelpers.isValidAllowlistPattern(pattern)
}

func refreshSkillBins(force: Bool = false) async {

@@ -2,6 +2,7 @@ import AVFoundation
import Foundation
import OpenClawChatUI
import OpenClawKit
import OpenClawProtocol
import OSLog
import Speech

@@ -475,7 +476,16 @@ actor TalkModeRuntime {
self.ttsLogger
.error(
"talk TTS failed: \(error.localizedDescription, privacy: .public); " +
"falling back to system voice")
"retrying gateway talk.speak")
do {
try await self.playGatewayTalkSpeak(input: input)
return
} catch {
self.ttsLogger
.error(
"talk gateway TTS failed: \(error.localizedDescription, privacy: .public); " +
"falling back to system voice")
}
do {
try await self.playSystemVoice(input: input)
} catch {
@@ -720,6 +730,42 @@ actor TalkModeRuntime {
return await self.playMP3(stream: stream)
}

private func playGatewayTalkSpeak(input: TalkPlaybackInput) async throws {
let params = Self.makeTalkSpeakParams(
text: input.cleanedText,
voiceId: input.voiceId,
modelId: self.currentModelId ?? self.defaultModelId,
outputFormat: self.defaultOutputFormat,
directive: input.directive)
let result: TalkSpeakResult = try await GatewayConnection.shared.requestDecoded(
method: .talkSpeak,
params: params,
timeoutMs: max(30000, input.synthTimeoutSeconds * 1000 + 5000))
guard let audioData = Data(base64Encoded: result.audiobase64), !audioData.isEmpty else {
throw NSError(domain: "TalkSpeak", code: 1, userInfo: [
NSLocalizedDescriptionKey: "gateway talk.speak returned empty audio",
])
}
_ = await self.stopPCM()
_ = await self.stopMP3()
if self.interruptOnSpeech {
guard await self.prepareForPlayback(generation: input.generation) else { return }
}
await MainActor.run { TalkModeController.shared.updatePhase(.speaking) }
self.phase = .speaking
let playback = await self.playTalkAudio(data: audioData)
self.ttsLogger
.info(
"talk gateway audio provider=\(result.provider, privacy: .public) " +
"format=\(result.outputformat ?? "unknown", privacy: .public) " +
"finished=\(playback.finished, privacy: .public)")
if !playback.finished, playback.interruptedAt == nil {
throw NSError(domain: "TalkSpeak", code: 2, userInfo: [
NSLocalizedDescriptionKey: "gateway talk.speak audio playback failed",
])
}
}

private func playSystemVoice(input: TalkPlaybackInput) async throws {
self.ttsLogger.info("talk system voice start chars=\(input.cleanedText.count, privacy: .public)")
if self.interruptOnSpeech {
@@ -847,6 +893,54 @@ actor TalkModeRuntime {
}

extension TalkModeRuntime {
static func makeTalkSpeakParams(
text: String,
voiceId: String?,
modelId: String?,
outputFormat: String?,
directive: TalkDirective?) -> [String: AnyCodable]
{
var params: [String: AnyCodable] = ["text": AnyCodable(text)]

func addString(_ key: String, _ value: String?) {
let trimmed = value?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
guard !trimmed.isEmpty else { return }
params[key] = AnyCodable(trimmed)
}

addString("voiceId", voiceId)
addString("modelId", directive?.modelId ?? modelId)
addString("outputFormat", directive?.outputFormat ?? outputFormat)
if let speed = directive?.speed {
params["speed"] = AnyCodable(speed)
}
if let rateWPM = directive?.rateWPM {
params["rateWpm"] = AnyCodable(rateWPM)
}
if let stability = directive?.stability {
params["stability"] = AnyCodable(stability)
}
if let similarity = directive?.similarity {
params["similarity"] = AnyCodable(similarity)
}
if let style = directive?.style {
params["style"] = AnyCodable(style)
}
if let speakerBoost = directive?.speakerBoost {
params["speakerBoost"] = AnyCodable(speakerBoost)
}
if let seed = directive?.seed {
params["seed"] = AnyCodable(seed)
}
addString("normalize", directive?.normalize)
addString("language", directive?.language)
if let latencyTier = directive?.latencyTier {
params["latencyTier"] = AnyCodable(latencyTier)
}

return params
}

// MARK: - Audio playback (MainActor helpers)

@MainActor

@@ -464,6 +464,7 @@ public struct SendParams: Codable, Sendable {
public let channel: String?
public let accountid: String?
public let agentid: String?
public let replytoid: String?
public let threadid: String?
public let sessionkey: String?
public let idempotencykey: String
@@ -477,6 +478,7 @@ public struct SendParams: Codable, Sendable {
channel: String?,
accountid: String?,
agentid: String?,
replytoid: String?,
threadid: String?,
sessionkey: String?,
idempotencykey: String)
@@ -489,6 +491,7 @@ public struct SendParams: Codable, Sendable {
self.channel = channel
self.accountid = accountid
self.agentid = agentid
self.replytoid = replytoid
self.threadid = threadid
self.sessionkey = sessionkey
self.idempotencykey = idempotencykey
@@ -503,6 +506,7 @@ public struct SendParams: Codable, Sendable {
case channel
case accountid = "accountId"
case agentid = "agentId"
case replytoid = "replyToId"
case threadid = "threadId"
case sessionkey = "sessionKey"
case idempotencykey = "idempotencyKey"
@@ -590,6 +594,7 @@ public struct AgentParams: Codable, Sendable {
public let timeout: Int?
public let besteffortdeliver: Bool?
public let lane: String?
public let cleanupbundlemcponrunend: Bool?
public let extrasystemprompt: String?
public let bootstrapcontextmode: AnyCodable?
public let bootstrapcontextrunkind: AnyCodable?
@@ -621,6 +626,7 @@ public struct AgentParams: Codable, Sendable {
timeout: Int?,
besteffortdeliver: Bool?,
lane: String?,
cleanupbundlemcponrunend: Bool?,
extrasystemprompt: String?,
bootstrapcontextmode: AnyCodable?,
bootstrapcontextrunkind: AnyCodable?,
@@ -651,6 +657,7 @@ public struct AgentParams: Codable, Sendable {
self.timeout = timeout
self.besteffortdeliver = besteffortdeliver
self.lane = lane
self.cleanupbundlemcponrunend = cleanupbundlemcponrunend
self.extrasystemprompt = extrasystemprompt
self.bootstrapcontextmode = bootstrapcontextmode
self.bootstrapcontextrunkind = bootstrapcontextrunkind
@@ -683,6 +690,7 @@ public struct AgentParams: Codable, Sendable {
case timeout
case besteffortdeliver = "bestEffortDeliver"
case lane
case cleanupbundlemcponrunend = "cleanupBundleMcpOnRunEnd"
case extrasystemprompt = "extraSystemPrompt"
case bootstrapcontextmode = "bootstrapContextMode"
case bootstrapcontextrunkind = "bootstrapContextRunKind"
@@ -2317,6 +2325,62 @@ public struct TalkConfigResult: Codable, Sendable {
}
}

public struct TalkRealtimeSessionParams: Codable, Sendable {
public let sessionkey: String?
public let provider: String?
public let model: String?
public let voice: String?

public init(
sessionkey: String?,
provider: String?,
model: String?,
voice: String?)
{
self.sessionkey = sessionkey
self.provider = provider
self.model = model
self.voice = voice
}

private enum CodingKeys: String, CodingKey {
case sessionkey = "sessionKey"
case provider
case model
case voice
}
}

public struct TalkRealtimeSessionResult: Codable, Sendable {
public let provider: String
public let clientsecret: String
public let model: String?
public let voice: String?
public let expiresat: Double?

public init(
provider: String,
clientsecret: String,
model: String?,
voice: String?,
expiresat: Double?)
{
self.provider = provider
self.clientsecret = clientsecret
self.model = model
self.voice = voice
self.expiresat = expiresat
}

private enum CodingKeys: String, CodingKey {
case provider
case clientsecret = "clientSecret"
case model
case voice
case expiresat = "expiresAt"
}
}

public struct TalkSpeakParams: Codable, Sendable {
public let text: String
public let voiceid: String?
@@ -2546,18 +2610,22 @@ public struct WebLoginStartParams: Codable, Sendable {
public struct WebLoginWaitParams: Codable, Sendable {
public let timeoutms: Int?
public let accountid: String?
public let currentqrdataurl: String?

public init(
timeoutms: Int?,
accountid: String?)
accountid: String?,
currentqrdataurl: String?)
{
self.timeoutms = timeoutms
self.accountid = accountid
self.currentqrdataurl = currentqrdataurl
|
||||
}
|
||||
|
||||
private enum CodingKeys: String, CodingKey {
|
||||
case timeoutms = "timeoutMs"
|
||||
case accountid = "accountId"
|
||||
case currentqrdataurl = "currentQrDataUrl"
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -156,4 +156,63 @@ struct ChannelsSettingsSmokeTests {
        let view = ChannelsSettings(store: store)
        _ = view.body
    }

    @Test func `whatsapp login wait result keeps latest qr until connected`() {
        let store = makeChannelsStore(channels: [:])
        store.whatsappLoginQrDataUrl = "data:image/png;base64,initial"

        store.applyWhatsAppLoginWaitResult(
            WhatsAppLoginWaitResult(
                connected: false,
                message: "QR refreshed. Scan the latest code in WhatsApp → Linked Devices.",
                qrDataUrl: "data:image/png;base64,rotated"))

        #expect(store.whatsappLoginQrDataUrl == "data:image/png;base64,rotated")
        #expect(store.whatsappLoginConnected == false)

        store.applyWhatsAppLoginWaitResult(
            WhatsAppLoginWaitResult(
                connected: false,
                message: "Still waiting for the QR scan. Let me know when you’ve scanned it.",
                qrDataUrl: nil))

        #expect(store.whatsappLoginQrDataUrl == "data:image/png;base64,rotated")

        store.applyWhatsAppLoginWaitResult(
            WhatsAppLoginWaitResult(
                connected: true,
                message: "✅ Linked! WhatsApp is ready.",
                qrDataUrl: nil))

        #expect(store.whatsappLoginQrDataUrl == nil)
        #expect(store.whatsappLoginConnected == true)
    }

    @Test func `whatsapp login wait budget allows one final poll`() {
        let startedAt = Date(timeIntervalSince1970: 1_700_000_000)
        var didRunFinalWait = false

        #expect(
            whatsappLoginWaitRequestTimeoutMs(
                startedAt: startedAt,
                timeoutMs: 1_000,
                didRunFinalWait: &didRunFinalWait,
                now: Date(timeInterval: 0.25, since: startedAt)) == 750)
        #expect(didRunFinalWait == false)

        #expect(
            whatsappLoginWaitRequestTimeoutMs(
                startedAt: startedAt,
                timeoutMs: 1_000,
                didRunFinalWait: &didRunFinalWait,
                now: Date(timeInterval: 1.25, since: startedAt)) == 1)
        #expect(didRunFinalWait == true)

        #expect(
            whatsappLoginWaitRequestTimeoutMs(
                startedAt: startedAt,
                timeoutMs: 1_000,
                didRunFinalWait: &didRunFinalWait,
                now: Date(timeInterval: 1.5, since: startedAt)) == nil)
    }
}

@@ -66,22 +66,34 @@ struct ExecAllowlistTests {
        #expect(match?.pattern == entry.pattern)
    }

    @Test func `match ignores basename pattern`() {
    @Test func `match accepts basename pattern for PATH resolved executable`() {
        let entry = ExecAllowlistEntry(pattern: "rg")
        let resolution = Self.homebrewRGResolution()
        let match = ExecAllowlistMatcher.match(entries: [entry], resolution: resolution)
        #expect(match == nil)
        #expect(match?.pattern == entry.pattern)
    }

    @Test func `match ignores basename for relative executable`() {
    @Test func `match accepts basename glob for PATH resolved executable`() {
        let entry = ExecAllowlistEntry(pattern: "r?")
        let resolution = Self.homebrewRGResolution()
        let match = ExecAllowlistMatcher.match(entries: [entry], resolution: resolution)
        #expect(match?.pattern == entry.pattern)
    }

    @Test func `match ignores basename for path selected executable`() {
        let entry = ExecAllowlistEntry(pattern: "echo")
        let resolution = ExecCommandResolution(
        let relativeResolution = ExecCommandResolution(
            rawExecutable: "./echo",
            resolvedPath: "/tmp/oc-basename/echo",
            executableName: "echo",
            cwd: "/tmp/oc-basename")
        let match = ExecAllowlistMatcher.match(entries: [entry], resolution: resolution)
        #expect(match == nil)
        let absoluteResolution = ExecCommandResolution(
            rawExecutable: "/tmp/oc-basename/echo",
            resolvedPath: "/tmp/oc-basename/echo",
            executableName: "echo",
            cwd: "/tmp/oc-basename")
        #expect(ExecAllowlistMatcher.match(entries: [entry], resolution: relativeResolution) == nil)
        #expect(ExecAllowlistMatcher.match(entries: [entry], resolution: absoluteResolution) == nil)
    }

    @Test func `match is case insensitive`() {

@@ -33,18 +33,13 @@ struct ExecApprovalHelpersTests {
        #expect(ExecApprovalHelpers.isPathPattern("/usr/bin/rg"))
        #expect(ExecApprovalHelpers.isPathPattern(" ~/bin/rg "))
        #expect(!ExecApprovalHelpers.isPathPattern("rg"))
        #expect(ExecApprovalHelpers.isValidAllowlistPattern("rg"))

        if case let .invalid(reason) = ExecApprovalHelpers.validateAllowlistPattern(" ") {
            #expect(reason == .empty)
        } else {
            Issue.record("Expected empty pattern rejection")
        }

        if case let .invalid(reason) = ExecApprovalHelpers.validateAllowlistPattern("echo") {
            #expect(reason == .missingPathComponent)
        } else {
            Issue.record("Expected basename pattern rejection")
        }
    }

    @Test func `requires ask matches policy`() {

@@ -31,7 +31,7 @@ struct ExecApprovalsStoreRefactorTests {
    }

    @Test
    func `update allowlist reports rejected basename pattern`() async throws {
    func `update allowlist accepts basename pattern`() async throws {
        try await self.withTempStateDir { _ in
            let rejected = ExecApprovalsStore.updateAllowlist(
                agentId: "main",
@@ -39,12 +39,10 @@ struct ExecApprovalsStoreRefactorTests {
                    ExecAllowlistEntry(pattern: "echo"),
                    ExecAllowlistEntry(pattern: "/bin/echo"),
                ])
            #expect(rejected.count == 1)
            #expect(rejected.first?.reason == .missingPathComponent)
            #expect(rejected.first?.pattern == "echo")
            #expect(rejected.isEmpty)

            let resolved = ExecApprovalsStore.resolve(agentId: "main")
            #expect(resolved.allowlist.map(\.pattern) == ["/bin/echo"])
            #expect(resolved.allowlist.map(\.pattern) == ["echo", "/bin/echo"])
        }
    }

@@ -1,3 +1,4 @@
import OpenClawKit
import Speech
import Testing
@testable import OpenClaw
@@ -16,23 +17,19 @@ struct TalkModeRuntimeSpeechTests {
        let elevenLabsPlan = TalkModeRuntime.playbackPlan(
            provider: "elevenlabs",
            apiKey: "key",
            voiceId: "voice"
        )
            voiceId: "voice")
        let missingKeyPlan = TalkModeRuntime.playbackPlan(
            provider: "elevenlabs",
            apiKey: nil,
            voiceId: "voice"
        )
            voiceId: "voice")
        let missingVoicePlan = TalkModeRuntime.playbackPlan(
            provider: "elevenlabs",
            apiKey: "key",
            voiceId: nil
        )
            voiceId: nil)
        let blankKeyPlan = TalkModeRuntime.playbackPlan(
            provider: "elevenlabs",
            apiKey: "",
            voiceId: "voice"
        )
            voiceId: "voice")
        let mlxPlan = TalkModeRuntime.playbackPlan(provider: "mlx", apiKey: nil, voiceId: nil)
        let systemPlan = TalkModeRuntime.playbackPlan(provider: "system", apiKey: nil, voiceId: nil)

@@ -43,4 +40,40 @@ struct TalkModeRuntimeSpeechTests {
        #expect(mlxPlan == .mlxThenSystemVoice)
        #expect(systemPlan == .systemVoiceOnly)
    }

    @Test func `talk speak params carry resolved voice and directive overrides`() {
        let params = TalkModeRuntime.makeTalkSpeakParams(
            text: "hello",
            voiceId: "voice-123",
            modelId: "eleven_v3",
            outputFormat: "mp3_44100_128",
            directive: TalkDirective(
                modelId: "eleven_turbo_v2_5",
                speed: 1.1,
                rateWPM: 180,
                stability: 0.4,
                similarity: 0.7,
                style: 0.2,
                speakerBoost: true,
                seed: 42,
                normalize: "auto",
                language: "en",
                outputFormat: "mp3_44100_128",
                latencyTier: 3))

        #expect(params["text"]?.value as? String == "hello")
        #expect(params["voiceId"]?.value as? String == "voice-123")
        #expect(params["modelId"]?.value as? String == "eleven_turbo_v2_5")
        #expect(params["outputFormat"]?.value as? String == "mp3_44100_128")
        #expect(params["speed"]?.value as? Double == 1.1)
        #expect(params["rateWpm"]?.value as? Int == 180)
        #expect(params["stability"]?.value as? Double == 0.4)
        #expect(params["similarity"]?.value as? Double == 0.7)
        #expect(params["style"]?.value as? Double == 0.2)
        #expect(params["speakerBoost"]?.value as? Bool == true)
        #expect(params["seed"]?.value as? Int == 42)
        #expect(params["normalize"]?.value as? String == "auto")
        #expect(params["language"]?.value as? String == "en")
        #expect(params["latencyTier"]?.value as? Int == 3)
    }
}

@@ -412,8 +412,13 @@
      "title": "Sessions",
      "detailKeys": [
        "kinds",
        "label",
        "agentId",
        "search",
        "limit",
        "activeMinutes",
        "includeDerivedTitles",
        "includeLastMessage",
        "messageLimit"
      ]
    },

@@ -1,4 +1,4 @@
88e22624ea8967e9e817212ff4aa62451001f8d4b2c8d872e5a77f38c66c5c3f  config-baseline.json
0f117e9214be948d351dfaf7d0cfaf7e6d76e47896881b840fdad17ee4b53a24  config-baseline.core.json
35d132fe176bd2bf9f0e46b29de91baba63ec4db3317cc5b294a982b46d16ba9  config-baseline.channel.json
5f0d160144cf751187cbc0219f8351307e8e82aafdb20ea0307a444f3e64b93c  config-baseline.plugin.json
b6d1e53947fcdfbff1b99f8ec79d3814d243385a1750b7fb40b40bb30f2e2975  config-baseline.json
98c83ce8af9ec4703726d7d673add95279be008a801b1d298982cbd9c1785747  config-baseline.core.json
d72032762ab46b99480b57deb81130a0ab5b1401189cfbaf4f7fef4a063a7f6c  config-baseline.channel.json
86f615b7d267b03888af0af7ccb3f8232a6b636f8a741d522ff425e46729ba81  config-baseline.plugin.json

@@ -1,2 +1,2 @@
452cf5257df597bb0062c4478aca3afdbda6909fbaaf9ade214c27e8885935b1  plugin-sdk-api-baseline.json
30117bdbd814978ad04be54e80b72385cabb0c726de1abcbad319c9d0b3ed101  plugin-sdk-api-baseline.jsonl
56ccee3ef8ff3b0ba7e2e765ae631b59254464585d5fef9db7e905f2c4c34ded  plugin-sdk-api-baseline.json
39184cf8afaec691f0352d1a113e30a7099b87c0748237a3c7307e903ba24eee  plugin-sdk-api-baseline.jsonl

docs/.i18n/glossary.th.json (new file, 78 lines)
@@ -0,0 +1,78 @@
[
  {
    "source": "ACP",
    "target": "ACP"
  },
  {
    "source": "Active Memory",
    "target": "Active Memory"
  },
  {
    "source": "ClawHub",
    "target": "ClawHub"
  },
  {
    "source": "CLI",
    "target": "CLI"
  },
  {
    "source": "Compaction",
    "target": "Compaction"
  },
  {
    "source": "Cron",
    "target": "Cron"
  },
  {
    "source": "Dreaming",
    "target": "Dreaming"
  },
  {
    "source": "Gateway",
    "target": "Gateway"
  },
  {
    "source": "Heartbeat",
    "target": "Heartbeat"
  },
  {
    "source": "Mintlify",
    "target": "Mintlify"
  },
  {
    "source": "Node",
    "target": "Node"
  },
  {
    "source": "OpenClaw",
    "target": "OpenClaw"
  },
  {
    "source": "Pi",
    "target": "Pi"
  },
  {
    "source": "Plugin",
    "target": "Plugin"
  },
  {
    "source": "Skills",
    "target": "Skills"
  },
  {
    "source": "Tailscale",
    "target": "Tailscale"
  },
  {
    "source": "TaskFlow",
    "target": "TaskFlow"
  },
  {
    "source": "TUI",
    "target": "TUI"
  },
  {
    "source": "Webhook",
    "target": "Webhook"
  }
]
@@ -3,6 +3,18 @@
    "source": "OpenClaw",
    "target": "OpenClaw"
  },
  {
    "source": "OpenAI",
    "target": "OpenAI"
  },
  {
    "source": "OpenAI provider",
    "target": "OpenAI provider"
  },
  {
    "source": "Status",
    "target": "Status"
  },
  {
    "source": "Gateway",
    "target": "Gateway 网关"
@@ -11,6 +23,30 @@
    "source": "Pi",
    "target": "Pi"
  },
  {
    "source": "Agent runtimes",
    "target": "Agent Runtimes"
  },
  {
    "source": "Agent Runtimes",
    "target": "Agent Runtimes"
  },
  {
    "source": "Codex harness",
    "target": "Codex harness"
  },
  {
    "source": "Agent harness plugins",
    "target": "Agent harness plugins"
  },
  {
    "source": "Agent loop",
    "target": "Agent loop"
  },
  {
    "source": "Models",
    "target": "Models"
  },
  {
    "source": "Skills",
    "target": "Skills"
@@ -75,6 +111,10 @@
    "source": "BytePlus (International)",
    "target": "BytePlus(国际版)"
  },
  {
    "source": "Amazon Bedrock Mantle",
    "target": "Amazon Bedrock Mantle"
  },
  {
    "source": "Anthropic (API + Claude CLI)",
    "target": "Anthropic(API + Claude CLI)"
@@ -319,14 +359,26 @@
    "source": "env var",
    "target": "环境变量"
  },
  {
    "source": "Google Meet Plugin",
    "target": "Google Meet 插件"
  },
  {
    "source": "Plugin SDK",
    "target": "插件 SDK"
  },
  {
    "source": "Building plugins",
    "target": "构建插件"
  },
  {
    "source": "Plugin SDK Overview",
    "target": "插件 SDK 概览"
  },
  {
    "source": "Plugin SDK overview",
    "target": "插件 SDK 概览"
  },
  {
    "source": "SDK Overview",
    "target": "SDK 概览"
@@ -335,6 +387,22 @@
    "source": "Plugin Entry Points",
    "target": "插件入口点"
  },
  {
    "source": "Plugin entry points",
    "target": "插件入口点"
  },
  {
    "source": "Plugin hooks",
    "target": "插件钩子"
  },
  {
    "source": "Internal hooks",
    "target": "内部钩子"
  },
  {
    "source": "Plugin architecture internals",
    "target": "插件架构内部机制"
  },
  {
    "source": "Entry Points",
    "target": "入口点"
@@ -379,6 +447,26 @@
    "source": "Testing",
    "target": "测试"
  },
  {
    "source": "Async Exec Duplicate Completion Investigation",
    "target": "Async Exec Duplicate Completion Investigation"
  },
  {
    "source": "QA Refactor",
    "target": "QA 重构"
  },
  {
    "source": "Rich Output Protocol",
    "target": "富输出协议"
  },
  {
    "source": "Tencent Cloud (TokenHub)",
    "target": "腾讯云(TokenHub)"
  },
  {
    "source": "Codex Harness Context Engine Port",
    "target": "Codex Harness Context Engine Port"
  },
  {
    "source": "/gateway/configuration#strict-validation",
    "target": "/gateway/configuration#strict-validation"

@@ -5,8 +5,8 @@ This directory owns docs authoring, Mintlify link rules, and docs i18n policy.
## Mintlify Rules

- Docs are hosted on Mintlify (`https://docs.openclaw.ai`).
- Internal doc links in `docs/**/*.md` must stay root-relative with no `.md` or `.mdx` suffix (example: `[Config](/configuration)`).
- Section cross-references should use anchors on root-relative paths (example: `[Hooks](/configuration#hooks)`).
- Internal doc links in `docs/**/*.md` must stay root-relative with no `.md` or `.mdx` suffix (example: `[Config](/gateway/configuration)`).
- Section cross-references should use anchors on root-relative paths (example: `[Hooks](/gateway/configuration-reference#hooks)`).
- Doc headings should avoid em dashes and apostrophes because Mintlify anchor generation is brittle there.
- README and other GitHub-rendered docs should keep absolute docs URLs so links work outside Mintlify.
- Docs content must stay generic: no personal device names, hostnames, or local paths; use placeholders like `user@gateway-host`.

@@ -1,13 +1,11 @@
---
title: "Auth Credential Semantics"
summary: "Canonical credential eligibility and resolution semantics for auth profiles"
title: "Auth credential semantics"
read_when:
  - Working on auth profile resolution or credential routing
  - Debugging model auth failures or profile order
---

# Auth Credential Semantics

This document defines the canonical credential eligibility and resolution semantics used across:

- `resolveAuthProfileOrder`
@@ -78,3 +76,8 @@ For script compatibility, probe errors keep this first line unchanged:
`Auth profile credentials are missing or expired.`

Human-friendly detail and stable reason codes may be added on subsequent lines.

## Related

- [Secrets management](/gateway/secrets)
- [Auth storage](/concepts/oauth)

@@ -1,8 +1,11 @@
---
summary: "Redirect to /gateway/authentication"
title: "Auth Monitoring"
title: "Auth monitoring"
---

# Auth Monitoring

This page moved to [Authentication](/gateway/authentication). See [Authentication](/gateway/authentication) for auth monitoring documentation.

## Related

- [Automation troubleshooting](/automation/troubleshooting)
- [Hooks](/automation/hooks)

@@ -3,6 +3,10 @@ summary: "Redirect to Task Flow"
title: "ClawFlow"
---

# ClawFlow

ClawFlow was renamed to [Task Flow](/automation/taskflow). See [Task Flow](/automation/taskflow) for the current documentation.

## Related

- [Task flow](/automation/taskflow)
- [Standing orders](/automation/standing-orders)
- [Hooks](/automation/hooks)

@@ -4,11 +4,9 @@ read_when:
  - Scheduling background jobs or wakeups
  - Wiring external triggers (webhooks, Gmail) into OpenClaw
  - Deciding between heartbeat and cron for scheduled tasks
title: "Scheduled Tasks"
title: "Scheduled tasks"
---

# Scheduled Tasks (Cron)

Cron is the Gateway's built-in scheduler. It persists jobs, wakes the agent at the right time, and can deliver output back to a chat channel or webhook endpoint.

## Quick start
@@ -90,6 +88,8 @@ This fires ~5–6 times per month instead of 0–1 times per month. OpenClaw use

For isolated jobs, runtime teardown now includes best-effort browser cleanup for that cron session. Cleanup failures are ignored so the actual cron result still wins.

Isolated cron runs also dispose any bundled MCP runtime instances created for the job through the shared runtime-cleanup path. This matches how main-session and custom-session MCP clients are torn down, so isolated cron jobs do not leak stdio child processes or long-lived MCP connections across runs.

When isolated cron runs orchestrate subagents, delivery also prefers the final
descendant output over stale parent interim text. If descendants are still
running, OpenClaw suppresses that partial parent update instead of announcing it.
@@ -233,7 +233,7 @@ Run an isolated agent turn:
curl -X POST http://127.0.0.1:18789/hooks/agent \
  -H 'Authorization: Bearer SECRET' \
  -H 'Content-Type: application/json' \
  -d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.4-mini"}'
  -d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.4"}'
```

Fields: `message` (required), `name`, `agentId`, `wakeMode`, `deliver`, `channel`, `to`, `model`, `thinking`, `timeoutSeconds`.

@@ -1,8 +1,11 @@
---
summary: "Redirect to /automation"
title: "Cron vs Heartbeat"
title: "Cron vs heartbeat"
---

# Cron vs Heartbeat

This page moved to [Automation & Tasks](/automation). See [Automation & Tasks](/automation) for the decision guide comparing cron and heartbeat.

## Related

- [Scheduled tasks](/automation/cron-jobs)
- [Background tasks](/automation/tasks)

@@ -3,6 +3,9 @@ summary: "Redirect to /automation/cron-jobs"
title: "Gmail PubSub"
---

# Gmail PubSub

This page moved to [Scheduled Tasks](/automation/cron-jobs#gmail-pubsub-integration). See [Scheduled Tasks](/automation/cron-jobs#gmail-pubsub-integration) for Gmail PubSub documentation.

## Related

- [Webhook](/automation/webhook)
- [Automation troubleshooting](/automation/troubleshooting)

@@ -6,8 +6,6 @@ read_when:
title: "Hooks"
---

# Hooks

Hooks are small scripts that run when something happens inside the Gateway. They can be discovered from directories and inspected with `openclaw hooks`. The Gateway loads internal hooks only after you enable hooks or configure at least one hook entry, hook pack, legacy handler, or extra hook directory.

There are two kinds of hooks in OpenClaw:
@@ -108,7 +106,7 @@ const handler = async (event) => {
export default handler;
```

Each event includes: `type`, `action`, `sessionKey`, `timestamp`, `messages` (push to send to user), and `context` (event-specific data).
Each event includes: `type`, `action`, `sessionKey`, `timestamp`, `messages` (push to send to user), and `context` (event-specific data). Agent and tool plugin hook contexts can also include `trace`, a read-only W3C-compatible diagnostic trace context that plugins may pass into structured logs for OTEL correlation.

### Event context highlights

@@ -207,9 +205,12 @@ Runs `BOOT.md` from the active workspace when the gateway starts.

## Plugin hooks

Plugins can register hooks through the Plugin SDK for deeper integration: intercepting tool calls, modifying prompts, controlling message flow, and more. The Plugin SDK exposes 28 hooks covering model resolution, agent lifecycle, message flow, tool execution, subagent coordination, and gateway lifecycle.
Plugins can register typed hooks through the Plugin SDK for deeper integration:
intercepting tool calls, modifying prompts, controlling message flow, and more.
Use plugin hooks when you need `before_tool_call`, `before_agent_reply`,
`before_install`, or other in-process lifecycle hooks.

For the complete plugin hook reference including `before_tool_call`, `before_agent_reply`, `before_install`, and all other plugin hooks, see [Plugin Architecture](/plugins/architecture#provider-runtime-hooks).
For the complete plugin hook reference, see [Plugin hooks](/plugins/hooks).

## Configuration

@@ -317,5 +318,5 @@ Check for missing binaries (PATH), environment variables, config values, or OS c

- [CLI Reference: hooks](/cli/hooks)
- [Webhooks](/automation/cron-jobs#webhooks)
- [Plugin Architecture](/plugins/architecture#provider-runtime-hooks) — full plugin hook reference
- [Plugin hooks](/plugins/hooks) — in-process plugin lifecycle hooks
- [Configuration](/gateway/configuration-reference#hooks)

@@ -4,11 +4,9 @@ read_when:
- Deciding how to automate work with OpenClaw
- Choosing between heartbeat, cron, hooks, and standing orders
- Looking for the right automation entry point
title: "Automation & Tasks"
title: "Automation & tasks"
---

# Automation & Tasks

OpenClaw runs work in the background through tasks, scheduled jobs, event hooks, and standing instructions. This page helps you choose the right mechanism and understand how they fit together.

## Quick decision guide

@@ -42,7 +40,7 @@ flowchart TD
| Audit what ran and when | Background Tasks | `openclaw tasks list` and `openclaw tasks audit` |
| Multi-step research then summarize | Task Flow | Durable orchestration with revision tracking |
| Run a script on session reset | Hooks | Event-driven, fires on lifecycle events |
| Execute code on every tool call | Hooks | Hooks can filter by event type |
| Execute code on every tool call | Plugin hooks | In-process hooks can intercept tool calls |
| Always check compliance before replying | Standing Orders | Injected into every session automatically |

### Scheduled Tasks (Cron) vs Heartbeat

@@ -85,7 +83,11 @@ See [Standing Orders](/automation/standing-orders).

### Hooks

Hooks are event-driven scripts triggered by agent lifecycle events (`/new`, `/reset`, `/stop`), session compaction, gateway startup, message flow, and tool calls. Hooks are automatically discovered from directories and can be managed with `openclaw hooks`.
Internal hooks are event-driven scripts triggered by agent lifecycle events
(`/new`, `/reset`, `/stop`), session compaction, gateway startup, and message
flow. They are automatically discovered from directories and can be managed
with `openclaw hooks`. For in-process tool-call interception, use
[Plugin hooks](/plugins/hooks).

See [Hooks](/automation/hooks).

@@ -99,7 +101,7 @@ See [Heartbeat](/gateway/heartbeat).

- **Cron** handles precise schedules (daily reports, weekly reviews) and one-shot reminders. All cron executions create task records.
- **Heartbeat** handles routine monitoring (inbox, calendar, notifications) in one batched turn every 30 minutes.
- **Hooks** react to specific events (tool calls, session resets, compaction) with custom scripts.
- **Hooks** react to specific events (session resets, compaction, message flow) with custom scripts. Plugin hooks cover tool calls.
- **Standing orders** give the agent persistent context and authority boundaries.
- **Task Flow** coordinates multi-step flows above individual tasks.
- **Tasks** automatically track all detached work so you can inspect and audit it.
@@ -110,6 +112,7 @@ See [Heartbeat](/gateway/heartbeat).
- [Background Tasks](/automation/tasks) — task ledger for all detached work
- [Task Flow](/automation/taskflow) — durable multi-step flow orchestration
- [Hooks](/automation/hooks) — event-driven lifecycle scripts
- [Plugin hooks](/plugins/hooks) — in-process tool, prompt, message, and lifecycle hooks
- [Standing Orders](/automation/standing-orders) — persistent agent instructions
- [Heartbeat](/gateway/heartbeat) — periodic main-session turns
- [Configuration Reference](/gateway/configuration-reference) — all config keys

@@ -1,8 +1,12 @@
---
summary: "Redirect to /tools/message"
summary: "Redirect to /cli/message"
title: "Polls"
---

# Polls

This page moved to [Message tool](/cli/message). See [Message tool](/cli/message) for poll documentation.

## Related

- [Webhook](/automation/webhook)
- [Scheduled tasks](/automation/cron-jobs)
- [Background tasks](/automation/tasks)

@@ -4,11 +4,9 @@ read_when:
- Setting up autonomous agent workflows that run without per-task prompting
- Defining what the agent can do independently vs. what needs human approval
- Structuring multi-program agents with clear boundaries and escalation rules
title: "Standing Orders"
title: "Standing orders"
---

# Standing Orders

Standing orders grant your agent **permanent operating authority** for defined programs. Instead of giving individual task instructions each time, you define programs with clear scope, triggers, and escalation rules — and the agent executes autonomously within those boundaries.

This is the difference between telling your assistant "send the weekly report" every Friday vs. granting standing authority: "You own the weekly report. Compile it every Friday, send it, and only escalate if something looks wrong."
@@ -29,7 +27,7 @@ This is the difference between telling your assistant "send the weekly report" e
- You only get involved for exceptions and approvals
- The agent fills idle time productively

## How They Work
## How they work

Standing orders are defined in your [agent workspace](/concepts/agent-workspace) files. The recommended approach is to include them directly in `AGENTS.md` (which is auto-injected every session) so the agent always has them in context. For larger configurations, you can also place them in a dedicated file like `standing-orders.md` and reference it from `AGENTS.md`.

@@ -200,8 +198,6 @@ This pattern prevents the most common agent failure mode: acknowledging a task w
For agents managing multiple concerns, organize standing orders as separate programs with clear boundaries:

```markdown
# Standing Orders

## Program 1: [Domain A] (Weekly)

...

@@ -4,11 +4,9 @@ read_when:
- You want to understand how Task Flow relates to background tasks
- You encounter Task Flow or openclaw tasks flow in release notes or docs
- You want to inspect or manage durable flow state
title: "Task Flow"
title: "Task flow"
---

# Task Flow

Task Flow is the flow orchestration substrate that sits above [background tasks](/automation/tasks). It manages durable multi-step flows with their own state, revision tracking, and sync semantics while individual tasks remain the unit of detached work.

## When to use Task Flow

@@ -22,6 +20,78 @@ Use Task Flow when work spans multiple sequential or branching steps and you nee
| Observe externally created tasks | Task Flow (mirrored) |
| One-shot reminder | Cron job |

## Reliable scheduled workflow pattern

For recurring workflows such as market intelligence briefings, treat the schedule, orchestration, and reliability checks as separate layers:

1. Use [Scheduled Tasks](/automation/cron-jobs) for timing.
2. Use a persistent cron session when the workflow should build on prior context.
3. Use [Lobster](/tools/lobster) for deterministic steps, approval gates, and resume tokens.
4. Use Task Flow to track the multi-step run across child tasks, waits, retries, and gateway restarts.

Example cron shape:

```bash
openclaw cron add \
  --name "Market intelligence brief" \
  --cron "0 7 * * 1-5" \
  --tz "America/New_York" \
  --session session:market-intel \
  --message "Run the market-intel Lobster workflow. Verify source freshness before summarizing." \
  --announce \
  --channel slack \
  --to "channel:C1234567890"
```

Use `session:<id>` instead of `isolated` when the recurring workflow needs deliberate history, previous run summaries, or standing context. Use `isolated` when each run should start fresh and all required state is explicit in the workflow.

Inside the workflow, put reliability checks before the LLM summary step:

```yaml
name: market-intel-brief
steps:
  - id: preflight
    command: market-intel check --json
  - id: collect
    command: market-intel collect --json
    stdin: $preflight.json
  - id: summarize
    command: market-intel summarize --json
    stdin: $collect.json
  - id: approve
    command: market-intel deliver --preview
    stdin: $summarize.json
    approval: required
  - id: deliver
    command: market-intel deliver --execute
    stdin: $summarize.json
    condition: $approve.approved
```

Recommended preflight checks:

- Browser availability and profile choice, for example `openclaw` for managed state or `user` when a signed-in Chrome session is required. See [Browser](/tools/browser).
- API credentials and quota for each source.
- Network reachability for required endpoints.
- Required tools enabled for the agent, such as `lobster`, `browser`, and `llm-task`.
- Failure destination configured for cron so preflight failures are visible. See [Scheduled Tasks](/automation/cron-jobs#delivery-and-output).

Recommended data provenance fields for every collected item:

```json
{
  "sourceUrl": "https://example.com/report",
  "retrievedAt": "2026-04-24T12:00:00Z",
  "asOf": "2026-04-24",
  "title": "Example report",
  "content": "..."
}
```

Have the workflow reject or mark stale items before summarization. The LLM step should receive only structured JSON and should be asked to preserve `sourceUrl`, `retrievedAt`, and `asOf` in its output. Use [LLM Task](/tools/llm-task) when you need a schema-validated model step inside the workflow.

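A staleness gate like the one described above can be sketched as a small filter over the provenance fields. This is an illustrative sketch, not an OpenClaw API: the `filter_fresh` helper and the 3-day default cutoff are assumptions.

```python
from datetime import datetime, timedelta, timezone

def filter_fresh(items, max_age_days=3, now=None):
    """Keep only items whose `asOf` date falls inside the cutoff window.

    Hypothetical helper: the name and 3-day default are assumptions,
    not part of OpenClaw. Provenance fields travel with each kept item.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = (now - timedelta(days=max_age_days)).date()
    fresh = []
    for item in items:
        as_of = datetime.strptime(item["asOf"], "%Y-%m-%d").date()
        if as_of >= cutoff:
            fresh.append(item)
    return fresh
```

Running this between the collect and summarize steps keeps stale sources out of the model's context entirely, rather than asking the LLM to ignore them.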
For reusable team or community workflows, package the CLI, `.lobster` files, and any setup notes as a skill or plugin and publish it through [ClawHub](/tools/clawhub). Keep workflow-specific guardrails in that package unless the plugin API is missing a needed generic capability.

## Sync modes

### Managed mode
@@ -77,6 +147,6 @@ Flows coordinate tasks, not replace them. A single flow may drive multiple backg

## Related

- [Background Tasks](/automation/tasks) — the detached work ledger that flows coordinate
- [CLI: tasks](/cli/index#tasks) — CLI command reference for `openclaw tasks flow`
- [CLI: tasks](/cli/tasks) — CLI command reference for `openclaw tasks flow`
- [Automation Overview](/automation) — all automation mechanisms at a glance
- [Cron Jobs](/automation/cron-jobs) — scheduled jobs that may feed into flows

@@ -4,11 +4,9 @@ read_when:
- Inspecting background work in progress or recently completed
- Debugging delivery failures for detached agent runs
- Understanding how background runs relate to sessions, cron, and heartbeat
title: "Background Tasks"
title: "Background tasks"
---

# Background Tasks

> **Looking for scheduling?** See [Automation & Tasks](/automation) for choosing the right mechanism. This page covers **tracking** background work, not scheduling it.

Background tasks track work that runs **outside your main conversation session**:
@@ -325,4 +323,4 @@ A task's `runId` links to the agent run doing the work. Agent lifecycle events (
- [Task Flow](/automation/taskflow) — flow orchestration above tasks
- [Scheduled Tasks](/automation/cron-jobs) — scheduling background work
- [Heartbeat](/gateway/heartbeat) — periodic main-session turns
- [CLI: Tasks](/cli/index#tasks) — CLI command reference
- [CLI: Tasks](/cli/tasks) — CLI command reference

@@ -1,8 +1,12 @@
---
summary: "Redirect to /automation/cron-jobs"
title: "Automation Troubleshooting"
title: "Automation troubleshooting"
---

# Automation Troubleshooting

This page moved to [Scheduled Tasks](/automation/cron-jobs#troubleshooting). See [Scheduled Tasks](/automation/cron-jobs#troubleshooting) for troubleshooting documentation.

## Related

- [Hooks](/automation/hooks)
- [Background tasks](/automation/tasks)
- [Gateway troubleshooting](/gateway/troubleshooting)

@@ -3,6 +3,10 @@ summary: "Redirect to /automation/cron-jobs"
title: "Webhooks"
---

# Webhooks

This page moved to [Scheduled Tasks](/automation/cron-jobs#webhooks). See [Scheduled Tasks](/automation/cron-jobs#webhooks) for webhook documentation.

## Related

- [Poll](/automation/poll)
- [Gmail PubSub](/automation/gmail-pubsub)
- [Hooks](/automation/hooks)

@@ -3,7 +3,7 @@ summary: "Brave Search API setup for web_search"
read_when:
- You want to use Brave Search for web_search
- You need a BRAVE_API_KEY or plan details
title: "Brave Search (legacy path)"
title: "Brave search (legacy path)"
---

# Brave Search API
@@ -101,3 +101,7 @@ await web_search({
- Results are cached for 15 minutes by default (configurable via `cacheTtlMinutes`).

See [Web tools](/tools/web) for the full web_search configuration.

## Related

- [Brave search](/tools/brave-search)

@@ -7,8 +7,6 @@ read_when:
title: "BlueBubbles"
---

# BlueBubbles (macOS REST)

Status: bundled plugin that talks to the BlueBubbles macOS server over HTTP. **Recommended for iMessage integration** due to its richer API and easier setup compared to the legacy imsg channel.

## Bundled plugin
@@ -393,6 +391,8 @@ Use full IDs for durable automations and storage:

See [Configuration](/gateway/configuration) for template variables.

<a id="coalescing-split-send-dms-command--url-in-one-composition"></a>

## Coalescing split-send DMs (command + URL in one composition)

When a user types a command and a URL together in iMessage — e.g. `Dump https://example.com/article` — Apple splits the send into **two separate webhook deliveries**:

@@ -4,11 +4,9 @@ read_when:
- Configuring broadcast groups
- Debugging multi-agent replies in WhatsApp
status: experimental
title: "Broadcast Groups"
title: "Broadcast groups"
---

# Broadcast Groups

**Status:** Experimental
**Version:** Added in 2026.1.9

@@ -435,8 +433,10 @@ Planned features:
- [ ] Dynamic agent selection (choose agents based on message content)
- [ ] Agent priorities (some agents respond before others)

## See Also
## Related

- [Multi-Agent Configuration](/tools/multi-agent-sandbox-tools)
- [Routing Configuration](/channels/channel-routing)
- [Session Management](/concepts/session)
- [Groups](/channels/groups)
- [Channel routing](/channels/channel-routing)
- [Pairing](/channels/pairing)
- [Multi-agent sandbox tools](/tools/multi-agent-sandbox-tools)
- [Session management](/concepts/session)

@@ -2,7 +2,7 @@
summary: "Routing rules per channel (WhatsApp, Telegram, Discord, Slack) and shared context"
read_when:
- Changing channel routing or inbox behavior
title: "Channel Routing"
title: "Channel routing"
---

# Channels & routing
@@ -13,7 +13,7 @@ host configuration.

## Key terms

- **Channel**: `telegram`, `whatsapp`, `discord`, `irc`, `googlechat`, `slack`, `signal`, `imessage`, `line`, plus extension channels. `webchat` is the internal WebChat UI channel and is not a configurable outbound channel.
- **Channel**: `telegram`, `whatsapp`, `discord`, `irc`, `googlechat`, `slack`, `signal`, `imessage`, `line`, plus plugin channels. `webchat` is the internal WebChat UI channel and is not a configurable outbound channel.
- **AccountId**: per‑channel account instance (when supported).
- Optional channel default account: `channels.<channel>.defaultAccount` chooses
  which account is used when an outbound path does not specify `accountId`.
@@ -23,10 +23,14 @@ host configuration.

## Session key shapes (examples)

Direct messages collapse to the agent’s **main** session:
Direct messages collapse to the agent’s **main** session by default:

- `agent:<agentId>:<mainKey>` (default: `agent:main:main`)

Even when direct-message conversation history is shared with main, sandbox and
tool policy use a derived per-account direct-chat runtime key for external DMs
so channel-originated messages are not treated like local main-session runs.

Groups and channels remain isolated per channel:

- Groups: `agent:<agentId>:<channel>:group:<id>`
@@ -137,3 +141,9 @@ Inbound replies include:
- Quoted context is appended to `Body` as a `[Replying to ...]` block.

This is consistent across channels.

## Related

- [Groups](/channels/groups)
- [Broadcast groups](/channels/broadcast-groups)
- [Pairing](/channels/pairing)

@@ -5,9 +5,7 @@ read_when:
title: "Discord"
---

# Discord (Bot API)

Status: ready for DMs and guild channels via the official Discord gateway.
Ready for DMs and guild channels via the official Discord gateway.

<CardGroup cols={3}>
  <Card title="Pairing" icon="link" href="/channels/pairing">
@@ -307,7 +305,7 @@ By default, components are single use. Set `components.reusable=true` to allow b

To restrict who can click a button, set `allowedUsers` on that button (Discord user IDs, tags, or `*`). When configured, unmatched users receive an ephemeral denial.

The `/model` and `/models` slash commands open an interactive model picker with provider and model dropdowns plus a Submit step. Unless `commands.modelsWrite=false`, `/models add` also supports adding a new provider/model entry from chat, and newly added models show up without restarting the gateway. The picker reply is ephemeral and only the invoking user can use it.
The `/model` and `/models` slash commands open an interactive model picker with provider, model, and compatible runtime dropdowns plus a Submit step. `/models add` is deprecated and now returns a deprecation message instead of registering models from chat. The picker reply is ephemeral and only the invoking user can use it.

File attachments:

@@ -495,60 +493,6 @@ Use `bindings[].match.roles` to route Discord guild members to different agents
}
```

## Developer Portal setup

<AccordionGroup>
<Accordion title="Create app and bot">

1. Discord Developer Portal -> **Applications** -> **New Application**
2. **Bot** -> **Add Bot**
3. Copy bot token

</Accordion>

<Accordion title="Privileged intents">
In **Bot -> Privileged Gateway Intents**, enable:

- Message Content Intent
- Server Members Intent (recommended)

Presence intent is optional and only required if you want to receive presence updates. Setting bot presence (`setPresence`) does not require enabling presence updates for members.

</Accordion>

<Accordion title="OAuth scopes and baseline permissions">
OAuth URL generator:

- scopes: `bot`, `applications.commands`

Typical baseline permissions:

**General Permissions**
- View Channels
**Text Permissions**
- Send Messages
- Read Message History
- Embed Links
- Attach Files
- Add Reactions (optional)

This is the baseline set for normal text channels. If you plan to post in Discord threads, including forum or media channel workflows that create or continue a thread, also enable **Send Messages in Threads**.
Avoid `Administrator` unless explicitly needed.

</Accordion>

<Accordion title="Copy IDs">
Enable Discord Developer Mode, then copy:

- server ID
- channel ID
- user ID

Prefer numeric IDs in OpenClaw config for reliable audits and probes.

</Accordion>
</AccordionGroup>

## Native commands and command auth

- `commands.native` defaults to `"auto"` and is enabled for Discord.
@@ -591,30 +535,9 @@ Default slash command settings:
</Accordion>

<Accordion title="Live stream preview">
OpenClaw can stream draft replies by sending a temporary message and editing it as text arrives.
OpenClaw can stream draft replies by sending a temporary message and editing it as text arrives. `channels.discord.streaming` takes `off` (default) | `partial` | `block` | `progress`. `progress` maps to `partial` on Discord; `streamMode` is a legacy alias and is auto-migrated.

- `channels.discord.streaming` controls preview streaming (`off` | `partial` | `block` | `progress`, default: `off`).
- Default stays `off` because Discord preview edits can hit rate limits quickly, especially when multiple bots or gateways share the same account or guild traffic.
- `progress` is accepted for cross-channel consistency and maps to `partial` on Discord.
- `channels.discord.streamMode` is a legacy alias and is auto-migrated.
- `partial` edits a single preview message as tokens arrive.
- `block` emits draft-sized chunks (use `draftChunk` to tune size and breakpoints).
- Media, error, and explicit-reply finals cancel pending preview edits without flushing a temporary draft before normal delivery.
- `streaming.preview.toolProgress` controls whether tool/progress updates reuse the same draft preview message (default: `true`). Set `false` to keep separate tool/progress messages.

Example:

```json5
{
  channels: {
    discord: {
      streaming: "partial",
    },
  },
}
```

`block` mode chunking defaults (clamped to `channels.discord.textChunkLimit`):
Default stays `off` because Discord preview edits hit rate limits quickly when multiple bots or gateways share an account.

```json5
{
@@ -631,10 +554,12 @@ Default slash command settings:
}
```

Preview streaming is text-only; media replies fall back to normal delivery.
- `partial` edits a single preview message as tokens arrive.
- `block` emits draft-sized chunks (use `draftChunk` to tune size and breakpoints, clamped to `textChunkLimit`).
- Media, error, and explicit-reply finals cancel pending preview edits.
- `streaming.preview.toolProgress` (default `true`) controls whether tool/progress updates reuse the preview message.

Note: preview streaming is separate from block streaming. When block streaming is explicitly
enabled for Discord, OpenClaw skips the preview stream to avoid double streaming.
Preview streaming is text-only; media replies fall back to normal delivery. When `block` streaming is explicitly enabled, OpenClaw skips the preview stream to avoid double-streaming.

</Accordion>

@@ -652,13 +577,12 @@ Default slash command settings:

Thread behavior:

- Discord threads are routed as channel sessions
- parent thread metadata can be used for parent-session linkage
- thread config inherits parent channel config unless a thread-specific entry exists
- Discord threads route as channel sessions and inherit parent channel config unless overridden.
- `channels.discord.thread.inheritParent` (default `false`) opts new auto-threads into seeding from the parent transcript. Per-account overrides live under `channels.discord.accounts.<id>.thread.inheritParent`.
- Message-tool reactions can resolve `user:<id>` DM targets.
- `guilds.<guild>.channels.<channel>.requireMention: false` is preserved during reply-stage activation fallback.

Channel topics are injected as **untrusted** context (not as system prompt).
Reply and quoted-message context currently stays as received.
Discord allowlists primarily gate who can trigger the agent, not a full supplemental-context redaction boundary.
Channel topics are injected as **untrusted** context. Allowlists gate who can trigger the agent, not a full supplemental-context redaction boundary.

</Accordion>

@@ -766,13 +690,9 @@ Default slash command settings:

Notes:

- `/acp spawn codex --bind here` binds the current Discord channel or thread in place and keeps future messages routed to the same ACP session.
- That can still mean "start a fresh Codex ACP session", but it does not create a new Discord thread by itself. The existing channel stays the chat surface.
- Codex may still run in its own `cwd` or backend workspace on disk. That workspace is runtime state, not a Discord thread.
- Thread messages can inherit the parent channel ACP binding.
- In a bound channel or thread, `/new` and `/reset` reset the same ACP session in place.
- Temporary thread bindings still work and can override target resolution while active.
- `spawnAcpSessions` is only required when OpenClaw needs to create/bind a child thread via `--thread auto|here`. It is not required for `/acp spawn ... --bind here` in the current channel.
- `/acp spawn codex --bind here` binds the current channel or thread in place and keeps future messages on the same ACP session. Thread messages inherit the parent channel binding.
- In a bound channel or thread, `/new` and `/reset` reset the same ACP session in place. Temporary thread bindings can override target resolution while active.
- `spawnAcpSessions` is only required when OpenClaw needs to create/bind a child thread via `--thread auto|here`.

See [ACP Agents](/tools/acp-agents) for binding behavior details.

@@ -977,25 +897,9 @@ Default slash command settings:
should only include a manual `/approve` command when the tool result says
chat approvals are unavailable or manual approval is the only path.

Gateway auth for this handler uses the same shared credential resolution contract as other Gateway clients:
Gateway auth and approval resolution follow the shared Gateway client contract (`plugin:` IDs resolve through `plugin.approval.resolve`; other IDs through `exec.approval.resolve`). Approvals expire after 30 minutes by default.

- env-first local auth (`OPENCLAW_GATEWAY_TOKEN` / `OPENCLAW_GATEWAY_PASSWORD` then `gateway.auth.*`)
- in local mode, `gateway.remote.*` can be used as fallback only when `gateway.auth.*` is unset; configured-but-unresolved local SecretRefs fail closed
- remote-mode support via `gateway.remote.*` when applicable
- URL overrides are override-safe: CLI overrides do not reuse implicit credentials, and env overrides use env credentials only

Approval resolution behavior:

- IDs prefixed with `plugin:` resolve through `plugin.approval.resolve`.
- Other IDs resolve through `exec.approval.resolve`.
- Discord does not do an extra exec-to-plugin fallback hop here; the id
  prefix decides which gateway method it calls.

Exec approvals expire after 30 minutes by default. If approvals fail with
unknown approval IDs, verify approver resolution, feature enablement, and
that the delivered approval id kind matches the pending request.

Related docs: [Exec approvals](/tools/exec-approvals)
See [Exec approvals](/tools/exec-approvals).

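The prefix-based resolution described above can be sketched as a tiny dispatcher. The function name is a hypothetical illustration; only the two gateway method names come from this page.

```python
def resolve_method(approval_id):
    """Pick the gateway method by approval-ID prefix (illustrative sketch)."""
    if approval_id.startswith("plugin:"):
        return "plugin.approval.resolve"
    # No exec-to-plugin fallback hop: the prefix alone decides.
    return "exec.approval.resolve"
```

The point of the sketch is that there is exactly one decision, made from the ID prefix, with no second resolution attempt if the first method reports an unknown approval.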
</Accordion>
</AccordionGroup>
@@ -1048,9 +952,11 @@ Example:
}
```

## Voice channels
## Voice

OpenClaw can join Discord voice channels for realtime, continuous conversations. This is separate from voice message attachments.
Discord has two distinct voice surfaces: realtime **voice channels** (continuous conversations) and **voice message attachments** (the waveform preview format). The gateway supports both.

### Voice channels

Requirements:

@@ -1058,7 +964,7 @@ Requirements:
- Configure `channels.discord.voice`.
- The bot needs Connect + Speak permissions in the target voice channel.

Use the Discord-only native command `/vc join|leave|status` to control sessions. The command uses the account default agent and follows the same allowlist and group policy rules as other Discord commands.
Use `/vc join|leave|status` to control sessions. The command uses the account default agent and follows the same allowlist and group policy rules as other Discord commands.

Auto-join example:

@@ -1096,17 +1002,13 @@ Notes:
- OpenClaw also watches receive decrypt failures and auto-recovers by leaving/rejoining the voice channel after repeated failures in a short window.
- If receive logs repeatedly show `DecryptionFailed(UnencryptedWhenPassthroughDisabled)`, this may be the upstream `@discordjs/voice` receive bug tracked in [discord.js #11419](https://github.com/discordjs/discord.js/issues/11419).

## Voice messages
### Voice messages

Discord voice messages show a waveform preview and require OGG/Opus audio plus metadata. OpenClaw generates the waveform automatically, but it needs `ffmpeg` and `ffprobe` available on the gateway host to inspect and convert audio files.

Requirements and constraints:
Discord voice messages show a waveform preview and require OGG/Opus audio. OpenClaw generates the waveform automatically, but needs `ffmpeg` and `ffprobe` on the gateway host to inspect and convert.

- Provide a **local file path** (URLs are rejected).
- Omit text content (Discord does not allow text + voice message in the same payload).
- Any audio format is accepted; OpenClaw converts to OGG/Opus when needed.

Example:
- Omit text content (Discord rejects text + voice message in the same payload).
- Any audio format is accepted; OpenClaw converts to OGG/Opus as needed.

```bash
message(action="send", channel="discord", target="channel:123", path="/path/to/audio.mp3", asVoice=true)
@@ -1230,13 +1132,11 @@ openclaw logs --follow
</Accordion>
</AccordionGroup>

## Configuration reference pointers
## Configuration reference

Primary reference:
Primary reference: [Configuration reference - Discord](/gateway/config-channels#discord).

- [Configuration reference - Discord](/gateway/configuration-reference#discord)

High-signal Discord fields:
<Accordion title="High-signal Discord fields">

- startup/auth: `enabled`, `token`, `accounts.*`, `allowBots`
- policy: `groupPolicy`, `dm.*`, `guilds.*`, `guilds.*.channels.*`
@@ -1246,13 +1146,14 @@ High-signal Discord fields:
- reply/history: `replyToMode`, `historyLimit`, `dmHistoryLimit`, `dms.*.historyLimit`
- delivery: `textChunkLimit`, `chunkMode`, `maxLinesPerMessage`
- streaming: `streaming` (legacy alias: `streamMode`), `streaming.preview.toolProgress`, `draftChunk`, `blockStreaming`, `blockStreamingCoalesce`
- media/retry: `mediaMaxMb`, `retry`
  - `mediaMaxMb` caps outbound Discord uploads (default: `100MB`)
- media/retry: `mediaMaxMb` (caps outbound Discord uploads, default `100MB`), `retry`
- actions: `actions.*`
- presence: `activity`, `status`, `activityType`, `activityUrl`
- UI: `ui.components.accentColor`
- features: `threadBindings`, top-level `bindings[]` (`type: "acp"`), `pluralkit`, `execApprovals`, `intents`, `agentComponents`, `heartbeat`, `responsePrefix`

</Accordion>

## Safety and operations

- Treat bot tokens as secrets (`DISCORD_BOT_TOKEN` preferred in supervised environments).
@@ -1261,10 +1162,23 @@ High-signal Discord fields:

## Related

- [Pairing](/channels/pairing)
- [Groups](/channels/groups)
- [Channel routing](/channels/channel-routing)
- [Security](/gateway/security)
- [Multi-agent routing](/concepts/multi-agent)
- [Troubleshooting](/channels/troubleshooting)
- [Slash commands](/tools/slash-commands)
<CardGroup cols={2}>
  <Card title="Pairing" icon="link" href="/channels/pairing">
    Pair a Discord user to the gateway.
  </Card>
  <Card title="Groups" icon="users" href="/channels/groups">
    Group chat and allowlist behavior.
  </Card>
  <Card title="Channel routing" icon="route" href="/channels/channel-routing">
    Route inbound messages to agents.
  </Card>
  <Card title="Security" icon="shield" href="/gateway/security">
    Threat model and hardening.
  </Card>
  <Card title="Multi-agent routing" icon="sitemap" href="/concepts/multi-agent">
    Map guilds and channels to agents.
  </Card>
  <Card title="Slash commands" icon="terminal" href="/tools/slash-commands">
    Native command behavior.
  </Card>
</CardGroup>

@@ -16,7 +16,7 @@ Feishu/Lark is an all-in-one collaboration platform where teams chat, share docu

## Quick start

> **Requires OpenClaw 2026.4.10 or above.** Run `openclaw --version` to check. Upgrade with `openclaw update`.
> **Requires OpenClaw 2026.4.24 or above.** Run `openclaw --version` to check. Upgrade with `openclaw update`.

<Steps>
  <Step title="Run the channel setup wizard">
@@ -135,6 +135,8 @@ Default: `allowlist`

---

<a id="get-groupuser-ids"></a>

## Get group/user IDs

### Group IDs (`chat_id`, format: `oc_xxx`)

@@ -5,8 +5,6 @@ read_when:
title: "Google Chat"
---

# Google Chat (Chat API)

Status: ready for DMs + spaces via Google Chat API webhooks (HTTP only).

## Quick setup (beginner)
@@ -35,7 +33,7 @@ Status: ready for DMs + spaces via Google Chat API webhooks (HTTP only).
- Under **Connection settings**, select **HTTP endpoint URL**.
- Under **Triggers**, select **Use a common HTTP endpoint URL for all triggers** and set it to your gateway's public URL followed by `/googlechat`.
  - _Tip: Run `openclaw status` to find your gateway's public URL._
- Under **Visibility**, check **Make this Chat app available to specific people and groups in <Your Domain>**.
- Under **Visibility**, check **Make this Chat app available to specific people and groups in `<Your Domain>`**.
  - Enter your email address (e.g. `user@example.com`) in the text box.
  - Click **Save** at the bottom.
6. **Enable the app status**:
