diff --git a/docs/concepts/qa-e2e-automation.md b/docs/concepts/qa-e2e-automation.md
index 56757fde9ee..3565c7f50be 100644
--- a/docs/concepts/qa-e2e-automation.md
+++ b/docs/concepts/qa-e2e-automation.md
@@ -278,7 +278,7 @@ Optional:
 
 - `OPENCLAW_QA_TELEGRAM_CAPTURE_CONTENT=1` keeps message bodies in observed-message artifacts (default redacts).
 
-Scenarios (`extensions/qa-lab/src/live-transports/telegram/telegram-live.runtime.ts:44`):
+Scenarios (`extensions/qa-lab/src/live-transports/telegram/telegram-live.runtime.ts`):
 
 - `telegram-canary`
 - `telegram-mention-gating`
@@ -287,10 +287,17 @@ Scenarios (`extensions/qa-lab/src/live-transports/telegram/telegram-live.runtime
 - `telegram-commands-command`
 - `telegram-tools-compact-command`
 - `telegram-whoami-command`
+- `telegram-status-command`
+- `telegram-other-bot-command-gating`
 - `telegram-context-command`
+- `telegram-current-session-status-tool`
+- `telegram-reply-chain-exact-marker`
+- `telegram-stream-final-single-message`
 - `telegram-long-final-reuses-preview`
 - `telegram-long-final-three-chunks`
 
+The implicit default set always covers canary, mention gating, native command replies, command addressing, and bot-to-bot group replies. `mock-openai` defaults also include deterministic reply-chain and final-message streaming checks. `telegram-current-session-status-tool` remains opt-in because it is only stable when threaded directly after canary, not after arbitrary native command replies. Use `pnpm openclaw qa telegram --list-scenarios --provider-mode mock-openai` to print the current default/optional split with regression refs.
+
 Output artifacts:
 
 - `telegram-qa-report.md`
diff --git a/docs/help/testing.md b/docs/help/testing.md
index 9c5df8fe205..6c43e7a004f 100644
--- a/docs/help/testing.md
+++ b/docs/help/testing.md
@@ -311,6 +311,7 @@ gh workflow run package-acceptance.yml --ref main \
   - Runs the Telegram live QA lane against a real private group using the driver and SUT bot tokens from env.
   - Requires `OPENCLAW_QA_TELEGRAM_GROUP_ID`, `OPENCLAW_QA_TELEGRAM_DRIVER_BOT_TOKEN`, and `OPENCLAW_QA_TELEGRAM_SUT_BOT_TOKEN`. The group id must be the numeric Telegram chat id.
   - Supports `--credential-source convex` for shared pooled credentials. Use env mode by default, or set `OPENCLAW_QA_CREDENTIAL_SOURCE=convex` to opt into pooled leases.
+  - Defaults cover canary, mention gating, command addressing, `/status`, bot-to-bot mentioned replies, and core native command replies. `mock-openai` defaults also cover deterministic reply-chain and Telegram final-message streaming regressions. Use `--list-scenarios` for optional probes such as `session_status`.
   - Exits non-zero when any scenario fails. Use `--allow-failures` when you want artifacts without a failing exit code.
   - Requires two distinct bots in the same private group, with the SUT bot exposing a Telegram username.