docs(qa): document telegram e2e defaults

Ayaan Zaidi
2026-05-08 15:47:35 +05:30
parent 5cd4996205
commit 5e27993cbe
2 changed files with 9 additions and 1 deletion


@@ -278,7 +278,7 @@ Optional:
- `OPENCLAW_QA_TELEGRAM_CAPTURE_CONTENT=1` keeps message bodies in observed-message artifacts (default redacts).
-Scenarios (`extensions/qa-lab/src/live-transports/telegram/telegram-live.runtime.ts:44`):
+Scenarios (`extensions/qa-lab/src/live-transports/telegram/telegram-live.runtime.ts`):
- `telegram-canary`
- `telegram-mention-gating`
@@ -287,10 +287,17 @@ Scenarios (`extensions/qa-lab/src/live-transports/telegram/telegram-live.runtime
- `telegram-commands-command`
- `telegram-tools-compact-command`
- `telegram-whoami-command`
- `telegram-status-command`
- `telegram-other-bot-command-gating`
- `telegram-context-command`
- `telegram-current-session-status-tool`
- `telegram-reply-chain-exact-marker`
- `telegram-stream-final-single-message`
- `telegram-long-final-reuses-preview`
- `telegram-long-final-three-chunks`
The implicit default set always covers canary, mention gating, native command replies, command addressing, and bot-to-bot group replies. `mock-openai` defaults also include deterministic reply-chain and final-message streaming checks. `telegram-current-session-status-tool` remains opt-in because it is only stable when threaded directly after canary, not after arbitrary native command replies. Use `pnpm openclaw qa telegram --list-scenarios --provider-mode mock-openai` to print the current default/optional split with regression refs.
Output artifacts:
- `telegram-qa-report.md`
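The default/opt-in split described above can be sketched as a small helper. This is illustrative only: `is_default_scenario` is a hypothetical name, and the authoritative split always comes from `pnpm openclaw qa telegram --list-scenarios --provider-mode mock-openai`.

```shell
# Sketch of the documented default/opt-in split for mock-openai runs.
# Per the docs above, only telegram-current-session-status-tool is opt-in
# (it is stable only when threaded directly after canary); the other listed
# scenarios fall into the implicit default set.
is_default_scenario() {
  case "$1" in
    telegram-current-session-status-tool) return 1 ;;  # documented opt-in probe
    *) return 0 ;;
  esac
}
```

Treat the runtime's `--list-scenarios` output, not this sketch, as the source of truth when the scenario set changes.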


@@ -311,6 +311,7 @@ gh workflow run package-acceptance.yml --ref main \
- Runs the Telegram live QA lane against a real private group using the driver and SUT bot tokens from env.
- Requires `OPENCLAW_QA_TELEGRAM_GROUP_ID`, `OPENCLAW_QA_TELEGRAM_DRIVER_BOT_TOKEN`, and `OPENCLAW_QA_TELEGRAM_SUT_BOT_TOKEN`. The group id must be the numeric Telegram chat id.
- Supports `--credential-source convex` for shared pooled credentials. Use env mode by default, or set `OPENCLAW_QA_CREDENTIAL_SOURCE=convex` to opt into pooled leases.
- Defaults cover canary, mention gating, command addressing, `/status`, bot-to-bot mentioned replies, and core native command replies. `mock-openai` defaults also cover deterministic reply-chain and Telegram final-message streaming regressions. Use `--list-scenarios` for optional probes such as `session_status`.
- Exits non-zero when any scenario fails. Use `--allow-failures` when you want artifacts without a failing exit code.
- Requires two distinct bots in the same private group, with the SUT bot exposing a Telegram username.
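A minimal preflight sketch before launching the live lane, assuming only the env var names documented above (`check_qa_env` itself is a hypothetical helper, not part of the CLI):

```shell
# Report any required Telegram QA env vars that are unset or empty.
# The variable names are taken from the docs above; everything else here
# is illustrative.
check_qa_env() {
  missing=""
  for var in OPENCLAW_QA_TELEGRAM_GROUP_ID \
             OPENCLAW_QA_TELEGRAM_DRIVER_BOT_TOKEN \
             OPENCLAW_QA_TELEGRAM_SUT_BOT_TOKEN; do
    eval "val=\${$var:-}"
    [ -n "$val" ] || missing="$missing $var"
  done
  echo "$missing"
}
```

Running `check_qa_env` before the workflow lists any variables still to export; an empty result means the documented required set is present (it does not validate that the group id is the numeric chat id or that the tokens are live).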