* fix(qqbot): allow extension fields in channel config schema
Use passthrough() on QQBotConfigSchema, QQBotAccountSchema, and
QQBotStreamingSchema so third-party builds that share the qqbot
channel id can add custom fields without triggering
"must NOT have additional properties" validation errors.
The `tts` and `stt` sub-schemas remain strict to preserve typo
detection for those sensitive fields.
* Update extensions/qqbot/openclaw.plugin.json
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* chore(qqbot): update changelog for config schema passthrough
---------
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Stop injecting CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST into Claude CLI runs and strip inherited/backend overrides before spawn.

Also repairs the Zalo setup allowlist prompt wiring needed by the current main check gate.

Thanks @Alex-Alaniz.
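A hedged sketch of the spawn-time stripping described above; the env var name comes from the message, while the `sanitizeSpawnEnv` helper and its blocklist shape are illustrative, not the actual implementation:

```typescript
// Remove host-managed overrides from the environment handed to a child CLI,
// so the spawned process cannot inherit them from the parent or a backend.
function sanitizeSpawnEnv(
  env: Record<string, string | undefined>,
): Record<string, string | undefined> {
  const blocked = new Set(["CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST"]);
  const out: Record<string, string | undefined> = {};
  for (const [key, value] of Object.entries(env)) {
    if (blocked.has(key)) continue; // drop inherited/backend override
    out[key] = value; // everything else passes through unchanged
  }
  return out;
}

const cleaned = sanitizeSpawnEnv({
  CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST: "1",
  PATH: "/usr/bin",
});
```

Building a fresh object (rather than `delete`-ing keys on `process.env`) keeps the parent environment untouched for other callers.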
* feat(models): allow private network via models.providers.*.request
Add optional request.allowPrivateNetwork for operator-controlled self-hosted
OpenAI-compatible bases (LAN/overlay/split DNS). Plumbs the flag into
resolveProviderRequestPolicyConfig for streaming provider HTTP and OpenAI
responses WebSocket so SSRF policy can allow private-resolved model URLs
when explicitly enabled.
Updates zod schema, config help/labels, and unit tests for sanitize/merge.
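The SSRF policy decision described above can be sketched as follows; the real logic lives in resolveProviderRequestPolicyConfig, and the helper names and range list here are illustrative assumptions:

```typescript
// Classify an IPv4 address as private (RFC 1918, loopback, link-local).
function isPrivateIPv4(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((n) => Number.isNaN(n) || n < 0 || n > 255)) {
    return false;
  }
  const [a, b] = parts;
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    a === 127 ||                         // loopback
    (a === 169 && b === 254)             // link-local
  );
}

// SSRF policy: block private-resolved model URLs unless the operator
// explicitly opted in via models.providers.*.request.allowPrivateNetwork.
function allowResolvedAddress(ip: string, allowPrivateNetwork: boolean): boolean {
  if (!isPrivateIPv4(ip)) return true;
  return allowPrivateNetwork;
}
```

Checking the resolved address (not the hostname) is what makes the opt-in meaningful for LAN/overlay/split-DNS bases, where a public-looking name can resolve privately.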
* agents thread provider request into websocket stream
* fix(config): scope allowPrivateNetwork to model requests
* fix(agents): refresh websocket manager on request changes
* fix(agents): scope runtime private-network overrides to models
* fix: allow private network provider request opt-in (#63671) (thanks @qas)
---------
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
* refactor(sandbox): remove socat proxy and fix chromium keyring deadlock
* fix(sandbox): address review feedback by reinstating cdp isolation and stability flags
* fix(sandbox): increase entrypoint cdp timeout to 20s to honor autoStartTimeoutMs
* fix(sandbox): align implementation with PR description (keyring bypass, fail-fast, watchdog)
* fix
* fix(sandbox): remove bash CDP watchdog to eliminate dual-timeout race
* fix(sandbox): apply final fail-fast and lifecycle bindings
* fix(sandbox): restore noVNC and CDP port offset
* fix(sandbox): add max-time to curl to prevent HTTP hang
* fix(sandbox): align timeout with host and restore env flags
* fix(sandbox): pass auto-start timeout to container and restore wait -n
* fix(sandbox): update hash input type to include autoStartTimeoutMs
* fix(sandbox): implement production-grade lifecycle and timeout management
- Add strict integer validation for port and timeout environment variables
- Implement robust two-stage trap cleanup (SIGTERM with SIGKILL fallback) to prevent zombie processes
- Refactor CDP readiness probe to use absolute millisecond-precision deadlines
- Add early fail-fast detection if Chromium crashes during the startup phase
- Track all daemon PIDs explicitly for reliable teardown via wait -n
* fix(sandbox): allow renderer process limit to be 0 for chromium default
* fix(sandbox): add autoStartTimeoutMs to SandboxBrowserHashInput type
* test(sandbox): cover browser timeout cleanup
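Two of the lifecycle bullets above (strict integer validation of env values, and absolute millisecond deadlines for the CDP readiness probe) can be sketched as follows; the actual entrypoint is a shell script, so this TypeScript is purely illustrative and the names are hypothetical:

```typescript
// Reject anything that is not a plain base-10 non-negative integer, so a
// typo like "20s" or an empty value fails fast instead of coercing to NaN.
function parseStrictInt(raw: string | undefined, name: string): number {
  if (raw === undefined || !/^\d+$/.test(raw)) {
    throw new Error(`${name} must be a non-negative integer, got: ${raw}`);
  }
  return Number(raw);
}

// Remaining budget against an absolute deadline; each probe iteration asks
// how much time is left rather than re-deriving it from a loop counter,
// which avoids drift when individual probes take variable time.
function remainingMs(deadlineMs: number, nowMs: number): number {
  return Math.max(0, deadlineMs - nowMs);
}

const timeoutMs = parseStrictInt("20000", "AUTO_START_TIMEOUT_MS");
const deadlineMs = 1_000 + timeoutMs; // e.g. probe started at t=1000ms
```

An absolute deadline also composes cleanly with fail-fast checks: if the browser process exits early, the probe can abort immediately instead of burning the rest of the budget.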
---------
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
Regeneration-Prompt: |
Investigate the unrelated failures in `src/infra/git-commit.test.ts` that started blocking other prep and gate flows. The real-checkout assertions were failing whenever the current branch ref lived only in `.git/packed-refs`, because `resolveCommitHash()` only followed loose ref files under `refs/heads/*` even though worktrees and packed refs are common in this repo. Keep the existing safety checks that reject traversal from crafted HEAD contents, but fall back to reading an exact ref match from `packed-refs` in the common git dir when the loose ref is missing. Add a deterministic regression test that simulates a worktree checkout with `commondir` and only a packed branch ref so the test no longer depends on the local repository state.
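The packed-refs fallback described in the prompt can be sketched as a pure parser; the function name and shape below follow the description rather than the actual source:

```typescript
// packed-refs lines look like:
//   # pack-refs with: peeled fully-peeled sorted
//   3f8a...e21 refs/heads/main
//   ^d4c1...90   (peeled tag line, to be ignored)
function resolvePackedRef(packedRefs: string, ref: string): string | null {
  for (const line of packedRefs.split("\n")) {
    if (line.startsWith("#") || line.startsWith("^")) continue; // header / peeled lines
    const sp = line.indexOf(" ");
    if (sp === -1) continue;
    const hash = line.slice(0, sp);
    const name = line.slice(sp + 1).trim();
    // Only an exact ref match counts -- no prefix or path matching -- so
    // crafted HEAD contents cannot traverse outside the expected ref names.
    if (name === ref && /^[0-9a-f]{40}$/.test(hash)) return hash;
  }
  return null;
}
```

Reading `packed-refs` from the common git dir (resolved via the worktree's `commondir` file) is what makes this work for worktree checkouts, where the loose `refs/heads/*` file may not exist at all.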