Commit Graph

7025 Commits

Author SHA1 Message Date
pashpashpash
34fb96622e Support MCP hooks in the Codex harness (#71707)
* codex harness mcp hook parity

* tighten codex hook parity floor

* prove security-style mcp hook blocking

* bound native hook relay key handling

* clarify permission relay defers to provider

* harden native hook relay approvals

* fix(agents): bound native hook relay JSON work budget

---------

Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-04-25 21:35:47 +01:00
Peter Steinberger
e2fd3dcee9 fix(google): emit opus voice-note tts 2026-04-25 21:33:33 +01:00
Tars
d5b6667823 fix(minimax): enable portal music and video generation 2026-04-25 21:30:10 +01:00
Peter Steinberger
6a7b76e119 fix(acp): guard sessions_spawn runtime targets 2026-04-25 21:23:24 +01:00
Peter Steinberger
8fb24ac3ce test: cover Google telephony TTS private network opt-in 2026-04-25 21:02:06 +01:00
Rohan Shiralkar
cab66c5556 fix(google): honor models.providers.google.request.allowPrivateNetwork in TTS
Image generation and media understanding both thread the
sanitized models.providers.google.request config (including
allowPrivateNetwork) into resolveGoogleGenerativeAiHttpRequestConfig.
Speech synthesis omitted that arg, so TTS always saw
allowPrivateNetwork: false regardless of config — silently falling
back to a different speech provider when the configured Google TTS
endpoint resolved to a private/internal IP (proxies, custom backends,
test mocks).

Mirror the image-generation-provider pattern: thread request through
synthesizeGoogleTtsPcm at both call sites (synthesize and
synthesizeTelephony).

Follow-up to #67216.
2026-04-25 21:02:06 +01:00
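The bug pattern this commit describes (an omitted optional config argument silently producing the restrictive default) can be sketched as follows. All names here are illustrative stand-ins, not the actual identifiers in the codebase:

```typescript
// Illustrative sketch: a resolver whose optional `request` argument carries
// the provider config. A call site that omits it always gets the default.
interface RequestConfig {
  allowPrivateNetwork?: boolean;
}

function resolveHttpConfig(request?: RequestConfig): { allowPrivateNetwork: boolean } {
  // Missing argument silently collapses to the restrictive default.
  return { allowPrivateNetwork: request?.allowPrivateNetwork ?? false };
}

// Before the fix: the speech-synthesis path called the resolver with no
// argument, so allowPrivateNetwork was always false regardless of config.
const withoutRequest = resolveHttpConfig();

// After the fix: thread the sanitized provider request config through,
// as the image-generation path already did.
const providerRequest: RequestConfig = { allowPrivateNetwork: true };
const withRequest = resolveHttpConfig(providerRequest);
```

The fix is purely about passing the already-sanitized config at both call sites, not about changing the resolver itself.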
InvalidPanda ツ
b64bfc5d9a fix(github-copilot): preserve reasoning IDs for Copilot Codex models (#71684)
* fix(github-copilot): preserve all reasoning IDs and add gpt-5.3-codex support

The existing guard (8fd15ed0e5) only skipped rewriting reasoning item IDs
when encrypted_content was a non-null string. When gpt-5.3-codex is used
via GitHub Copilot, the model falls through to the forward-compat catch-all
with reasoning:false, so encrypted_content is never requested and arrives
as null — bypassing the guard and causing a rewrite. Copilot validates
reasoning item IDs server-side regardless of whether the client includes
encrypted_content, so the rewritten id triggers the 400 error.

Two changes:

1. connection-bound-ids.ts: skip ALL reasoning items unconditionally.
   Reasoning items always reference server-side state bound to their
   original ID; rewriting any of them breaks Copilot's lookup.

2. models.ts + index.ts: extend the forward-compat cloning logic to
   cover gpt-5.3-codex (adds it to the template-target set and to
   CODEX_TEMPLATE_MODEL_IDS so it can also serve as a template source
   for gpt-5.4). Adds gpt-5.3-codex to COPILOT_XHIGH_MODEL_IDS for
   the thinking profile.

Thanks @InvalidPandaa.

* docs(github-copilot): clarify gpt-5.3-codex is a no-op template for itself

https://claude.ai/code/session_01EAFmq4WyKkiUkVAqRXp4Bm

* fix(github-copilot): remove dead reasoning prefix branch in deriveReplacementId

https://claude.ai/code/session_01EAFmq4WyKkiUkVAqRXp4Bm

* fix(github-copilot): align reasoning id replay tests

* test(plugin-sdk): use cjs sidecar for require fast path

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-04-25 20:52:07 +01:00
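The guard change in connection-bound-ids.ts described above can be sketched as below. The shapes and names (`rewriteIds`, `Item`) are hypothetical, chosen only to show the unconditional skip:

```typescript
// Hypothetical sketch: rewrite connection-bound item ids, but skip every
// reasoning item regardless of encrypted_content, because the server
// validates reasoning ids against its own state bound to the original id.
interface Item {
  type: string;
  id: string;
  encrypted_content?: string | null;
}

function rewriteIds(items: Item[], rewrite: (id: string) => string): Item[] {
  return items.map((item) => {
    // Previously only items with a non-null encrypted_content string were
    // skipped; a null value (reasoning:false path) slipped through and the
    // rewritten id then failed server-side validation with a 400.
    if (item.type === "reasoning") return item;
    return { ...item, id: rewrite(item.id) };
  });
}
```

Non-reasoning items still get their ids rebound as before; only the reasoning branch is exempted.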
Vincent Koc
5c3eecfea7 fix(codex): require approvals for image-understanding turns (#71703) 2026-04-25 12:45:33 -07:00
Vincent Koc
346a72ddb9 fix(codex): require authorized inbound claims for bound turns (#71702)
* fix(codex): require authorized inbound claims for bound turns

* fix(codex): consume unauthorized bound turns
2026-04-25 12:42:23 -07:00
Vincent Koc
84f183b7ad test(diagnostics-otel): cover trace self-parent guard 2026-04-25 12:42:07 -07:00
Vincent Koc
1915b29a3c fix(slack): stop block-based sender rehydration on assistant message edits (#71700)
* fix(slack): stop block-based sender rehydration on message edits

* docs(changelog): note Slack sender attribution fix
2026-04-25 12:34:55 -07:00
Vincent Koc
5fe06f3cdc test(diagnostics-otel): cover untrusted trace parent rejection 2026-04-25 12:34:16 -07:00
Peter Steinberger
b732f21a86 fix: clarify voice-call setup diagnostics 2026-04-25 20:24:36 +01:00
Vincent Koc
44648440a5 fix(diagnostics-otel): stabilize genai token metric model attr 2026-04-25 12:22:55 -07:00
Peter Steinberger
75d64cd4b8 feat: expose generic image background option 2026-04-25 20:21:46 +01:00
Vincent Koc
5815ca93d9 fix(diagnostics-otel): honor genai usage semconv opt-in 2026-04-25 12:13:50 -07:00
Peter Steinberger
d9486c683b fix: stabilize macos npm update smoke 2026-04-25 20:09:32 +01:00
mushuiyu_xydt
0e1ef93e84 fix(minimax): use dedicated image generation endpoint (#61155)
* fix(minimax): use dedicated image generation endpoint

MiniMax image generation uses a dedicated API endpoint
(api.minimax.io/v1/image_generation) that is separate from the
text/chat API endpoint (api.minimax.io/anthropic).

Previously, the resolveMinimaxImageBaseUrl function would extract
the origin from the provider's configured baseUrl. If a user had
configured their baseUrl to the chat endpoint (e.g.,
api.minimax.chat/anthropic), the image generation would incorrectly
use that endpoint, resulting in "invalid api key" errors.

This fix always uses the dedicated image generation endpoint,
ignoring the provider's baseUrl configuration for image generation.

Fixes #61149

* fix(minimax): support CN endpoint for image generation

Respect MINIMAX_API_HOST environment variable to determine whether
to use the global (api.minimax.io) or CN (api.minimaxi.com) endpoint
for image generation.

This ensures that CN users who configure MINIMAX_API_HOST to use
api.minimaxi.com will continue to use the CN endpoint for image
generation, while global users continue to use api.minimax.io.

The original bug was caused by the code extracting the origin from
the provider's configured baseUrl, which could be set to incorrect
endpoints like api.minimax.chat. This fix uses the dedicated image
generation endpoints instead.

Fixes #61149

* fix(minimax): infer CN endpoint from provider config when env is unset

When MINIMAX_API_HOST is not set, fall back to checking the provider's
configured baseUrl to determine whether to use the CN or global image
endpoint. This ensures CN users who went through onboarding (which sets
models.providers.minimax.baseUrl to https://api.minimaxi.com/anthropic)
are correctly routed to the CN image endpoint.

The isMinimaxCnHost check ensures we only use the baseUrl origin for
CN detection - invalid endpoints like api.minimax.chat would not match
minimaxi.com and would correctly fall through to the global default.

Fixes #61149

* test(minimax): cover dedicated image endpoints

* fix(logging): handle context assembly diagnostics

* Revert "fix(logging): handle context assembly diagnostics"

This reverts commit f51d2f7d67f8193268dd37553ac77e80a0423390.

* test(minimax): isolate image endpoint env

* docs(changelog): credit minimax image fix

---------

Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-04-25 20:07:52 +01:00
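The three-step resolution order the commit messages above describe (env var, then provider baseUrl host, then global default) can be sketched like this. Function names here are illustrative, not the actual codebase symbols:

```typescript
// Hypothetical sketch of the endpoint-selection order: MINIMAX_API_HOST
// first, then CN detection from the configured baseUrl, else global.
const GLOBAL_IMAGE_ENDPOINT = "https://api.minimax.io/v1/image_generation";
const CN_IMAGE_ENDPOINT = "https://api.minimaxi.com/v1/image_generation";

// Accepts a bare host ("api.minimaxi.com") or a full URL.
function isCnHost(hostOrUrl: string | undefined): boolean {
  if (!hostOrUrl) return false;
  try {
    const host = hostOrUrl.includes("://") ? new URL(hostOrUrl).hostname : hostOrUrl;
    return host === "api.minimaxi.com" || host.endsWith(".minimaxi.com");
  } catch {
    return false; // unparseable input falls through to the global default
  }
}

function resolveImageEndpoint(env: { MINIMAX_API_HOST?: string }, baseUrl?: string): string {
  // 1. An explicit MINIMAX_API_HOST wins.
  if (env.MINIMAX_API_HOST) {
    return isCnHost(env.MINIMAX_API_HOST) ? CN_IMAGE_ENDPOINT : GLOBAL_IMAGE_ENDPOINT;
  }
  // 2. Otherwise infer CN from the provider's configured baseUrl host.
  if (isCnHost(baseUrl)) return CN_IMAGE_ENDPOINT;
  // 3. Anything else (including api.minimax.chat) uses the global default.
  return GLOBAL_IMAGE_ENDPOINT;
}
```

Note how a misconfigured chat host such as api.minimax.chat never matches the CN check and so correctly routes to the global image endpoint, which was the original failure mode.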
Vincent Koc
5671fdca87 feat(diagnostics-otel): add genai usage span identity 2026-04-25 12:03:10 -07:00
Peter Steinberger
5eab16e086 fix: improve google meet setup diagnostics 2026-04-25 20:01:24 +01:00
Vincent Koc
cd7a8f870b feat(diagnostics-otel): add genai usage span attrs 2026-04-25 11:56:13 -07:00
91wan
bb2b68b34e fix(acp): pass Codex ACP model thinking overrides
Fix ACP Codex model/thinking override propagation.

Thanks @91wan.

2026-04-25 19:56:03 +01:00
Vincent Koc
dc19069d71 feat(diagnostics-otel): add genai operation duration metric 2026-04-25 11:48:10 -07:00
Peter Steinberger
4c0e9a4b2e fix(plugins): honor inferred agent model defaults 2026-04-25 19:40:32 +01:00
Troy Hitch
63241bf1e0 fix(bonjour): suppress ciao cancellation across plugin runtime copies
Fix the bundled Bonjour gateway discovery crash-loop caused by ciao probe cancellation rejections after the Bonjour plugin migration.

The plugin entry now wires the existing rejection handler into the advertiser, and the unhandled-rejection handler registry is anchored on globalThis so staged plugin SDK module copies register into the same process-level handler set used by the host.

Verification:
- pnpm test:serial extensions/bonjour/src/advertiser.test.ts src/infra/unhandled-rejections.fatal-detection.test.ts
- OPENCLAW_LOCAL_CHECK_MODE=throttled pnpm check:changed completed partially: the conflict-marker check and the core/core-test/extensions/extension-test typechecks passed; the local lint lane hit a self-lock and was stopped.
2026-04-25 11:38:30 -07:00
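Anchoring the handler registry on globalThis, as the commit above describes, is what lets staged copies of the same SDK module share one process-level handler set. A minimal sketch of that pattern, with an illustrative registry key rather than the real one:

```typescript
// Hypothetical sketch: Symbol.for uses the process-wide symbol registry, so
// every loaded copy of this module resolves the same key and therefore the
// same Set, instead of each copy creating a private module-level registry.
const REGISTRY_KEY = Symbol.for("example.unhandledRejectionHandlers");

type Handler = (reason: unknown) => void;

function getRegistry(): Set<Handler> {
  const g = globalThis as Record<symbol, unknown>;
  if (!g[REGISTRY_KEY]) g[REGISTRY_KEY] = new Set<Handler>();
  return g[REGISTRY_KEY] as Set<Handler>;
}
```

With a plain module-level `const registry = new Set()` instead, each staged copy of the module would see only its own handlers, which is the failure mode the commit fixes.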
Peter Steinberger
e473577eaa test(voice): harden live STT transcript checks 2026-04-25 19:36:01 +01:00
Vincent Koc
7bbd47349e feat(diagnostics-otel): add genai token usage metric 2026-04-25 11:31:45 -07:00
Peter Steinberger
73706ca244 test: stabilize QA session memory ranking 2026-04-25 19:30:28 +01:00
Peter Steinberger
de0097a23c fix: support transparent OpenAI image generation 2026-04-25 19:28:56 +01:00
Peter Steinberger
31456e3326 fix(providers): handle proxied DeepSeek V4 replay 2026-04-25 19:23:15 +01:00
Vincent Koc
b8a41739d5 feat(diagnostics-otel): export memory diagnostics 2026-04-25 11:22:19 -07:00
Peter Steinberger
1380dc170e fix(browser): avoid restart hint for external profiles 2026-04-25 19:18:06 +01:00
Vincent Koc
d6ef1fcf24 feat(diagnostics-otel): export tool loop events 2026-04-25 11:11:56 -07:00
Chris Zhang
c3bfd328ad feat(litellm): add image generation provider (#70246)
* feat(litellm): add image generation provider

Registers litellm as an image-generation provider so model refs like
litellm/gpt-image-2 route through the LiteLLM proxy, and
agents.defaults.imageGenerationModel.fallbacks entries of the form
litellm/... resolve without "No image-generation provider registered
for litellm" errors.

Implementation uses the OpenAI-compatible /images/generations and
/images/edits endpoints that LiteLLM proxies for. BaseUrl resolves from
models.providers.litellm.baseUrl (default http://localhost:4000). Private
network is auto-allowed when baseUrl is a loopback/RFC1918 address, which
covers the common self-hosted LiteLLM proxy case without needing
OPENCLAW_PROVIDER_ALLOW_PRIVATE_NETWORK. Public baseUrls keep normal SSRF
defaults.

Default model is gpt-image-2 (matching upstream 4.21+ OpenAI default).
Advertises the same 2K/4K sizes OpenAI now exposes, plus legacy
256/512/1024 for dall-e-3. Supports both generate and edit.

Local patch. LiteLLM has no upstream image-generation support yet; revisit
if upstream adds one.

* ci: rerun after upstream main hot-fix

* fix(litellm): harden image generation provider

---------

Co-authored-by: Chris Zhang <chris@ChrisdeMac-mini.local>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-04-25 19:06:51 +01:00
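The loopback/RFC 1918 auto-allow described above can be sketched as a small host classifier. The name `isPrivateHost` and the exact check are illustrative, not the codebase's implementation:

```typescript
// Hypothetical sketch: auto-allow private-network requests when the proxy
// baseUrl points at a loopback or RFC 1918 address (the common self-hosted
// LiteLLM case); public hosts keep the normal SSRF defaults.
function isPrivateHost(baseUrl: string): boolean {
  const host = new URL(baseUrl).hostname;
  // Loopback: localhost, 127.0.0.0/8, and IPv6 ::1 (bracketed in URL hostnames).
  if (host === "localhost" || host === "[::1]" || host.startsWith("127.")) return true;
  // RFC 1918 ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
  const m = host.match(/^(\d+)\.(\d+)\.\d+\.\d+$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  return a === 10 || (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168);
}
```

Under this scheme the default `http://localhost:4000` proxy works without OPENCLAW_PROVIDER_ALLOW_PRIVATE_NETWORK, while a public baseUrl is still subject to SSRF blocking.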
Vincent Koc
ff172f46a5 feat(diagnostics-otel): add context assembly spans 2026-04-25 11:03:46 -07:00
Peter Steinberger
afd6b5d6fc fix(opencode-go): route DeepSeek V4 through OpenAI transport 2026-04-25 18:58:08 +01:00
Peter Steinberger
9ffe764416 fix(whatsapp): send voice note text separately 2026-04-25 18:55:03 +01:00
Peter Steinberger
617e1dd6bf fix(browser): honor remote CDP open timeouts 2026-04-25 18:52:57 +01:00
Vincent Koc
44114328b4 feat(diagnostics): surface provider request id hashes 2026-04-25 10:46:10 -07:00
Vincent Koc
a7de722f4f fix(diagnostics-otel): align GenAI semconv attrs 2026-04-25 10:33:13 -07:00
Peter Steinberger
88df8fe09d fix(browser): clarify Browserless CDP attach handling 2026-04-25 18:26:57 +01:00
Vincent Koc
dcdf97685b fix(diagnostics): trust internal trace parents (#71574)
* fix(diagnostics): trust internal trace parents

* fix(diagnostics): harden trusted trace metadata

* fix(tooling): honor explicit oxlint threads

* fix(agents): use stable nonmutating sort helpers

* chore(plugin-sdk): refresh api baseline

* fix(diagnostics): gate internal event subscriptions

* fix(diagnostics): isolate listener event copies

* chore(plugin-sdk): refresh internal diagnostics baseline

* chore(plugin-sdk): refresh diagnostics event baseline

* fix(diagnostics): keep event state module local

* fix(diagnostics): harden internal subscription capability

* fix(diagnostics): freeze listener metadata
2026-04-25 10:18:52 -07:00
Peter Steinberger
67506ac2a9 fix(xai): support video reference images 2026-04-25 18:14:51 +01:00
Peter Steinberger
390be8138f fix: add OpenCode Go DeepSeek V4 models 2026-04-25 18:11:59 +01:00
Peter Steinberger
6b3e4b88d6 test: update QA parity fixtures for GPT-5.5 2026-04-25 18:05:28 +01:00
Peter Steinberger
8bead989da fix(telegram): frame audio transcripts as untrusted 2026-04-25 17:45:40 +01:00
Vincent Koc
ab1d1a5c9e fix(browser): configure Chrome MCP existing-session launch (#71560) 2026-04-25 05:46:39 -07:00
Peter Steinberger
dd78b7f773 fix: harden OpenCode ACP bind dispatch 2026-04-25 13:38:58 +01:00
skylee-01
f7b71abf48 fix(agents): pass Claude system prompt via file 2026-04-25 17:59:25 +05:30
Peter Steinberger
b26367e22f test: add Crestodian QA lab setup scenario 2026-04-25 13:15:11 +01:00