Mirror of https://github.com/openclaw/openclaw.git (synced 2026-05-06 06:50:43 +00:00)
fix(onboard): cap compat probe max_tokens (#66450)
* fix(onboard): cap compat probe max_tokens
* docs(changelog): fix onboarding entry
* Update CHANGELOG.md
@@ -23,6 +23,7 @@ Docs: https://docs.openclaw.ai
 - Browser/SSRF: restore hostname navigation under the default browser SSRF policy while keeping explicit strict mode reachable from config, and keep managed loopback CDP `/json/new` fallback requests on the local CDP control policy so browser follow-up fixes stop regressing normal navigation or self-blocking local CDP control. (#66386) Thanks @obviyus.
 - Models/Codex: canonicalize the legacy `openai-codex/gpt-5.4-codex` runtime alias to `openai-codex/gpt-5.4` while still honoring alias-specific and canonical per-model overrides. (#43060) Thanks @Sapientropic and @vincentkoc.
 - Browser/SSRF: preserve explicit strict browser navigation mode for legacy `browser.ssrfPolicy.allowPrivateNetwork: false` configs by normalizing the legacy alias to the canonical strict marker instead of silently widening those installs to the default non-strict hostname-navigation path.
+- Onboarding/custom providers: use `max_tokens=16` for OpenAI-compatible verification probes so stricter custom endpoints stop rejecting onboarding checks that only need a tiny completion. (#66450) Thanks @WuKongAI-CMU.
 - Agents/subagents: emit the subagent registry lazy-runtime stub on the stable dist path that both source and bundled runtime imports resolve, so the follow-up dist fix no longer fails with `ERR_MODULE_NOT_FOUND` at runtime. (#66420) Thanks @obviyus.
 - Media-understanding/proxy env: auto-upgrade provider HTTP helper requests to trusted env-proxy mode only when `HTTP_PROXY`/`HTTPS_PROXY` is active and the target is not bypassed by `NO_PROXY`, so remote media-understanding and transcription requests stop failing local DNS pre-resolution in proxy-only environments without widening SSRF bypasses. (#52162) Thanks @mjamiv and @vincentkoc.
 - Telegram/media downloads: let Telegram media fetches trust an operator-configured explicit proxy for target DNS resolution after hostname-policy checks, so proxy-backed installs stop failing `could not download media` on Bot API file downloads after the DNS-pinning regression. (#66245) Thanks @dawei41468 and @vincentkoc.
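The capped probe can be sketched as a small request-body builder. This is a minimal illustration of the changelog entry above, not the project's actual code: `buildCompatProbeBody` and its shape are hypothetical names for this sketch.

```typescript
// Sketch of an OpenAI-compatible verification probe body. The helper name
// and interface are illustrative, not part of the openclaw codebase.
interface CompatProbeBody {
  model: string;
  messages: { role: string; content: string }[];
  max_tokens: number;
  stream: boolean;
}

function buildCompatProbeBody(modelId: string): CompatProbeBody {
  return {
    model: modelId,
    messages: [{ role: "user", content: "Hi" }],
    // Some stricter OpenAI-compatible endpoints reject max_tokens below 16,
    // so the probe asks for 16 tokens instead of 1.
    max_tokens: 16,
    stream: false,
  };
}

const body = buildCompatProbeBody("deepseek-v3-0324");
console.log(body.max_tokens); // 16
```

The probe only needs any non-empty completion to confirm the endpoint accepts chat requests, so the smallest value the endpoint tolerates keeps the onboarding check cheap.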
@@ -202,7 +202,7 @@ describe("promptCustomApiConfig", () => {

     const firstCall = fetchMock.mock.calls[0]?.[1] as { body?: string } | undefined;
     expect(firstCall?.body).toBeDefined();
-    expect(JSON.parse(firstCall?.body ?? "{}")).toMatchObject({ max_tokens: 1 });
+    expect(JSON.parse(firstCall?.body ?? "{}")).toMatchObject({ max_tokens: 16 });
   });

   it("uses azure responses-specific headers and body for openai verification probes", async () => {
@@ -259,7 +259,7 @@ describe("promptCustomApiConfig", () => {
     expect(body).toEqual({
       model: "deepseek-v3-0324",
       messages: [{ role: "user", content: "Hi" }],
-      max_tokens: 1,
+      max_tokens: 16,
       stream: false,
     });
   });
@@ -400,7 +400,8 @@ async function requestOpenAiVerification(params: {
     body: {
       model: params.modelId,
       messages: [{ role: "user", content: "Hi" }],
-      max_tokens: 1,
+      // Recent OpenAI-family endpoints reject probes below 16 tokens.
+      max_tokens: 16,
       stream: false,
     },
   });
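The test assertions in the hunks above can be reproduced as a standalone sketch: record the request a probe would pass to a fetch-like stub, then check the capped `max_tokens` in the serialized body. `fetchStub` and `sendProbe` are illustrative stand-ins, not the project's real `fetchMock` harness or `requestOpenAiVerification`.

```typescript
// Standalone sketch of the updated test: capture the request body a probe
// sends through a fetch-like stub and assert on the capped max_tokens.
type RecordedCall = [string, { body?: string }];

const calls: RecordedCall[] = [];

function fetchStub(url: string, init: { body?: string }): void {
  calls.push([url, init]);
}

// Illustrative stand-in for the real verification request helper.
function sendProbe(modelId: string): void {
  fetchStub("https://example.invalid/v1/chat/completions", {
    body: JSON.stringify({
      model: modelId,
      messages: [{ role: "user", content: "Hi" }],
      max_tokens: 16, // capped: some endpoints reject values below 16
      stream: false,
    }),
  });
}

sendProbe("deepseek-v3-0324");
const firstCall = calls[0]?.[1] as { body?: string } | undefined;
const parsed = JSON.parse(firstCall?.body ?? "{}");
console.log(parsed.max_tokens); // 16
```

Parsing the serialized body rather than inspecting the object directly mirrors the real test, which asserts on what actually went over the wire.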