mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 06:50:43 +00:00
fix(agents): collapse local model timeout knobs
@@ -34,7 +34,7 @@ Docs: https://docs.openclaw.ai
- Agents/LM Studio: promote standalone bracketed local-model tool requests into registered tool calls and hide unsupported bracket blocks from visible replies, so MemPalace MCP lookups do not print raw `[tool]` JSON scaffolding in chat. Fixes #66178. Thanks @detroit357.
- Local models: warn when an assistant reply looks like a tool call but the provider emitted plain text instead of a structured tool invocation, making fake/non-executed tool calls visible in logs. Fixes #51332. Thanks @emilclaw.
- Local models: classify terminated, reset, closed, timeout, and aborted model-call failures and attach a process memory snapshot to the diagnostic event, making LM Studio/Ollama RAM-pressure failures easier to prove from stability bundles. Refs #65551. Thanks @BigWiLLi111.
- Local models: pass configured provider request timeouts through OpenAI SDK transports so long-running local or custom OpenAI-compatible streams are not capped by the SDK's 10-minute default. Fixes #63663. Thanks @aidiffuser.
- Local models: pass configured provider request timeouts through OpenAI SDK transports and the model idle watchdog so long-running local or custom OpenAI-compatible streams use one timeout knob instead of hitting the SDK's 10-minute default or the 120s idle default. Fixes #63663. Thanks @aidiffuser.
- LM Studio: trust configured LM Studio loopback, LAN, and tailnet endpoints for guarded model requests by default, preserving explicit private-network opt-outs. Refs #60994. Thanks @tnowakow.
- Docker/setup: route Docker onboarding defaults for host-side LM Studio and Ollama through `host.docker.internal` and add the Linux host-gateway mapping to the bundled Compose file, so containerized gateways can reach local providers without using container loopback. Fixes #68684; supersedes #68702. Thanks @safrano9999 and @skolez.
- Agents/LM Studio: strip prior-turn Gemma 4 reasoning from OpenAI-compatible replay while preserving active tool-call continuation reasoning. Fixes #68704. Thanks @chip-snomo and @Kailigithub.
@@ -1868,7 +1868,7 @@ Docs: https://docs.openclaw.ai
- Providers/Ollama: allow Ollama models using the native `api: "ollama"` path to optionally display thinking output when `/think` is set to a non-off level. (#62712) Thanks @hoyyeva.
- Codex CLI: pass OpenClaw's system prompt through Codex's `model_instructions_file` config override so fresh Codex CLI sessions receive the same prompt guidance as Claude CLI sessions.
- Auth/profiles: persist explicit auth-profile upserts directly and skip external CLI sync for local writes so profile changes are saved without stale external credential state.
- Agents/timeouts: make the LLM idle timeout inherit `agents.defaults.timeoutSeconds` when configured, disable the unconfigured idle watchdog for cron runs, and point idle-timeout errors at `agents.defaults.llm.idleTimeoutSeconds`. Thanks @drvoss.
- Agents/timeouts: make the LLM idle timeout inherit `agents.defaults.timeoutSeconds` when configured, disable the unconfigured idle watchdog for cron runs, and improve idle-timeout recovery guidance. Thanks @drvoss.
- Agents/failover: classify Z.ai vendor code `1311` as billing and `1113` as auth, including long wrapped `1311` payloads, so these errors stop falling through to generic failover handling. (#49552) Thanks @1bcMax.
- QQBot/media-tags: support HTML entity-encoded angle brackets (`<`/`>`), URL slashes in attributes, and self-closing media tags so upstream `<qqimg>` payloads are correctly parsed and normalized. (#60493) Thanks @ylc0919.
- Memory/dreaming: harden grounded backfill inputs, diary writes, status payloads, and diary action classification by preserving source-day labels, rejecting missing or symlinked targets cleanly, normalizing diary headings in gateway backfills, and tightening claim splitting plus diary source metadata. Thanks @mbelinky.
@@ -1,4 +1,4 @@
decbeacc65183b4b2cf7a064c8d8c7846c45fc56c5dd72392dce1ea3117a3808 config-baseline.json
d8c18c4bd1091dbc74865e1b1fb1bf5c78db12736373a2b4e5a866932b116f86 config-baseline.core.json
454c34daa3f5f66a97f6a701968756a77a110fe611e013b0245fe6a9ef274997 config-baseline.json
56edd542252c0ec8b3005dcddcf083a568d5e7700f7675c509c2963e36a4597c config-baseline.core.json
07963db49502132f26db396c56b36e018b110e6c55a68b3cb012d3ec96f43901 config-baseline.channel.json
f14d1d609ce93893f3bbd6c533251d30328f4deed5cf06da7cb2c9208147dc7a config-baseline.plugin.json
@@ -162,8 +162,8 @@ surfaces, while Codex native hooks remain a separate lower-level Codex mechanism

- `agent.wait` default: 30s (just the wait). `timeoutMs` param overrides.
- Agent runtime: `agents.defaults.timeoutSeconds` default 172800s (48 hours); enforced in `runEmbeddedPiAgent` abort timer.
- LLM idle timeout: `agents.defaults.llm.idleTimeoutSeconds` aborts a model request when no response chunks arrive before the idle window. Set it explicitly for slow local models or reasoning/tool-call providers; set it to 0 to disable. If it is not set, OpenClaw uses `agents.defaults.timeoutSeconds` when configured, otherwise 120s. Cron-triggered runs with no explicit LLM or agent timeout disable the idle watchdog and rely on the cron outer timeout.
- Provider HTTP request timeout: `models.providers.<id>.timeoutSeconds` applies only to that provider's model HTTP fetches, including connect, headers, body, and total guarded-fetch abort handling. Use this for slow local/self-hosted providers such as Ollama before raising the whole agent runtime timeout.
- Model idle timeout: OpenClaw aborts a model request when no response chunks arrive before the idle window. `models.providers.<id>.timeoutSeconds` extends this idle watchdog for slow local/self-hosted providers; otherwise OpenClaw uses `agents.defaults.timeoutSeconds` when configured, capped at 120s by default. Cron-triggered runs with no explicit model or agent timeout disable the idle watchdog and rely on the cron outer timeout.
- Provider HTTP request timeout: `models.providers.<id>.timeoutSeconds` applies to that provider's model HTTP fetches, including connect, headers, body, SDK request timeout, total guarded-fetch abort handling, and model stream idle watchdog. Use this for slow local/self-hosted providers such as Ollama before raising the whole agent runtime timeout.
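The two knobs above map onto config roughly like this (a minimal sketch; the `ollama` provider id and the numeric values are illustrative, not shipped defaults):

```json
{
  "agents": {
    "defaults": {
      "timeoutSeconds": 172800
    }
  },
  "models": {
    "providers": {
      "ollama": {
        "timeoutSeconds": 900
      }
    }
  }
}
```

Per the updated docs, raising `models.providers.ollama.timeoutSeconds` here extends both the HTTP request timeout and the stream idle watchdog for that provider only, without touching the agent-wide runtime timeout.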
## Where things can end early
@@ -237,7 +237,7 @@ describe("timeout-triggered compaction", () => {
expect(result.payloads?.[0]?.text).toContain("timed out");
});

it("points idle-timeout errors at the LLM idle timeout config key", async () => {
it("points idle-timeout errors at the provider timeout config key", async () => {
mockedRunEmbeddedAttempt.mockResolvedValueOnce(
makeAttemptResult({
timedOut: true,
@@ -252,7 +252,7 @@ describe("timeout-triggered compaction", () => {

expect(mockedCompactDirect).not.toHaveBeenCalled();
expect(result.payloads?.[0]?.isError).toBe(true);
expect(result.payloads?.[0]?.text).toContain("agents.defaults.llm.idleTimeoutSeconds");
expect(result.payloads?.[0]?.text).toContain("models.providers.<id>.timeoutSeconds");
expect(result.payloads?.[0]?.text).not.toContain("agents.defaults.timeoutSeconds");
});
@@ -1898,8 +1898,8 @@ export async function runEmbeddedPiAgent(
// callers do not lose the turn as an orphaned user message.
if (timedOut && !timedOutDuringCompaction && !payloadsWithToolMedia?.length) {
const timeoutText = idleTimedOut
? "The model did not produce a response before the LLM idle timeout. " +
"Please try again, or increase `agents.defaults.llm.idleTimeoutSeconds` in your config (set to 0 to disable)."
? "The model did not produce a response before the model idle timeout. " +
"Please try again, or increase `models.providers.<id>.timeoutSeconds` for slow local or self-hosted providers."
: "Request timed out before a response was generated. " +
"Please try again, or increase `agents.defaults.timeoutSeconds` in your config.";
const replayInvalid = resolveReplayInvalidForAttempt(null);
@@ -1835,6 +1835,7 @@ export async function runEmbeddedAttempt(
cfg: params.config,
trigger: params.trigger,
runTimeoutMs: params.timeoutMs !== configuredRunTimeoutMs ? params.timeoutMs : undefined,
modelRequestTimeoutMs: (params.model as { requestTimeoutMs?: number }).requestTimeoutMs,
});
if (idleTimeoutMs > 0) {
activeSession.agent.streamFn = streamWithIdleTimeout(
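The `streamWithIdleTimeout` wrapper installed above can be pictured as an async-iterable adapter that aborts when no chunk arrives within the idle window. A simplified standalone sketch of that idea (not the actual OpenClaw implementation; the name `withIdleTimeout` and the error text are illustrative):

```typescript
// Sketch: wrap a stream of chunks and fail fast when the source goes idle.
// The timer is cleared before yielding, so a slow consumer never trips it.
async function* withIdleTimeout<T>(
  source: AsyncIterable<T>,
  idleTimeoutMs: number,
): AsyncGenerator<T> {
  const iterator = source[Symbol.asyncIterator]();
  while (true) {
    let timer!: ReturnType<typeof setTimeout>;
    // Rejects if no chunk arrives within the idle window.
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new Error(`idle timeout: no chunk within ${idleTimeoutMs}ms`)),
        idleTimeoutMs,
      );
    });
    let result: IteratorResult<T>;
    try {
      result = await Promise.race([iterator.next(), timeout]);
    } finally {
      clearTimeout(timer);
    }
    if (result.done) return;
    yield result.value;
  }
}
```

Only the wait for the next chunk is raced against the timer; time spent by the consumer between chunks does not count toward the idle window.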
@@ -11,45 +11,11 @@ describe("resolveLlmIdleTimeoutMs", () => {
expect(resolveLlmIdleTimeoutMs()).toBe(DEFAULT_LLM_IDLE_TIMEOUT_MS);
});

it("returns default when llm config is missing", () => {
it("returns default when agent defaults are missing", () => {
const cfg = { agents: {} } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(DEFAULT_LLM_IDLE_TIMEOUT_MS);
});

it("returns default when idleTimeoutSeconds is not set", () => {
const cfg = { agents: { defaults: { llm: {} } } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(DEFAULT_LLM_IDLE_TIMEOUT_MS);
});

it("returns 0 when idleTimeoutSeconds is 0 (disabled)", () => {
const cfg = { agents: { defaults: { llm: { idleTimeoutSeconds: 0 } } } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(0);
});

it("returns configured value in milliseconds", () => {
const cfg = { agents: { defaults: { llm: { idleTimeoutSeconds: 30 } } } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(30_000);
});

it("caps at max safe timeout", () => {
const cfg = {
agents: { defaults: { llm: { idleTimeoutSeconds: 10_000_000 } } },
} as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(2_147_000_000);
});

it("ignores negative values", () => {
const cfg = { agents: { defaults: { llm: { idleTimeoutSeconds: -10 } } } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(DEFAULT_LLM_IDLE_TIMEOUT_MS);
});

it("ignores non-finite values", () => {
const cfg = {
agents: { defaults: { llm: { idleTimeoutSeconds: Infinity } } },
} as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(DEFAULT_LLM_IDLE_TIMEOUT_MS);
});

it("caps agents.defaults.timeoutSeconds fallback at the default idle watchdog", () => {
const cfg = { agents: { defaults: { timeoutSeconds: 300 } } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(DEFAULT_LLM_IDLE_TIMEOUT_MS);
@@ -72,31 +38,46 @@ describe("resolveLlmIdleTimeoutMs", () => {
expect(resolveLlmIdleTimeoutMs({ runTimeoutMs: 2_147_000_000 })).toBe(0);
});

it("prefers llm.idleTimeoutSeconds over agents.defaults.timeoutSeconds", () => {
const cfg = {
agents: { defaults: { timeoutSeconds: 300, llm: { idleTimeoutSeconds: 120 } } },
} as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(120_000);
it("uses the provider request timeout as the model idle watchdog", () => {
expect(resolveLlmIdleTimeoutMs({ modelRequestTimeoutMs: 300_000 })).toBe(300_000);
});

it("prefers llm.idleTimeoutSeconds over an explicit run timeout override", () => {
const cfg = {
agents: { defaults: { llm: { idleTimeoutSeconds: 120 } } },
} as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg, runTimeoutMs: 900_000 })).toBe(120_000);
it("caps provider request timeout at the max safe timeout", () => {
expect(resolveLlmIdleTimeoutMs({ modelRequestTimeoutMs: 10_000_000_000 })).toBe(2_147_000_000);
});

it("keeps idleTimeoutSeconds=0 disabled even when timeoutSeconds is set", () => {
it("ignores invalid provider request timeout values", () => {
expect(resolveLlmIdleTimeoutMs({ modelRequestTimeoutMs: -1 })).toBe(
DEFAULT_LLM_IDLE_TIMEOUT_MS,
);
expect(resolveLlmIdleTimeoutMs({ modelRequestTimeoutMs: Infinity })).toBe(
DEFAULT_LLM_IDLE_TIMEOUT_MS,
);
});

it("bounds provider request timeout by agents.defaults.timeoutSeconds when shorter", () => {
const cfg = {
agents: { defaults: { timeoutSeconds: 300, llm: { idleTimeoutSeconds: 0 } } },
agents: { defaults: { timeoutSeconds: 45 } },
} as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg })).toBe(0);
expect(resolveLlmIdleTimeoutMs({ cfg, modelRequestTimeoutMs: 300_000 })).toBe(45_000);
});

it("bounds provider request timeout by explicit run timeout when shorter", () => {
expect(resolveLlmIdleTimeoutMs({ modelRequestTimeoutMs: 300_000, runTimeoutMs: 45_000 })).toBe(
45_000,
);
});

it("uses provider request timeout for cron model calls", () => {
expect(resolveLlmIdleTimeoutMs({ trigger: "cron", modelRequestTimeoutMs: 300_000 })).toBe(
300_000,
);
});

it("disables the default idle timeout for cron when no timeout is configured", () => {
expect(resolveLlmIdleTimeoutMs({ trigger: "cron" })).toBe(0);

const cfg = { agents: { defaults: { llm: {} } } } as OpenClawConfig;
const cfg = { agents: { defaults: {} } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg, trigger: "cron" })).toBe(0);
});
@@ -104,11 +85,6 @@ describe("resolveLlmIdleTimeoutMs", () => {
const cfg = { agents: { defaults: { timeoutSeconds: 300 } } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg, trigger: "cron" })).toBe(DEFAULT_LLM_IDLE_TIMEOUT_MS);
});

it("keeps an explicit cron idle timeout when configured", () => {
const cfg = { agents: { defaults: { llm: { idleTimeoutSeconds: 45 } } } } as OpenClawConfig;
expect(resolveLlmIdleTimeoutMs({ cfg, trigger: "cron" })).toBe(45_000);
});
});

describe("streamWithIdleTimeout", () => {
@@ -23,34 +23,49 @@ export function resolveLlmIdleTimeoutMs(params?: {
cfg?: OpenClawConfig;
trigger?: EmbeddedRunTrigger;
runTimeoutMs?: number;
modelRequestTimeoutMs?: number;
}): number {
const clampTimeoutMs = (valueMs: number) => Math.min(Math.floor(valueMs), MAX_SAFE_TIMEOUT_MS);
const clampImplicitTimeoutMs = (valueMs: number) =>
clampTimeoutMs(Math.min(valueMs, DEFAULT_LLM_IDLE_TIMEOUT_MS));
const raw = params?.cfg?.agents?.defaults?.llm?.idleTimeoutSeconds;
// 0 means explicitly disabled (no timeout).
if (raw === 0) {
return 0;
}
if (typeof raw === "number" && Number.isFinite(raw) && raw > 0) {
return clampTimeoutMs(raw * 1000);
}

const runTimeoutMs = params?.runTimeoutMs;
if (typeof runTimeoutMs === "number" && Number.isFinite(runTimeoutMs) && runTimeoutMs > 0) {
if (runTimeoutMs >= MAX_SAFE_TIMEOUT_MS) {
return 0;
}
return clampImplicitTimeoutMs(runTimeoutMs);
}

const agentTimeoutSeconds = params?.cfg?.agents?.defaults?.timeoutSeconds;
if (
const agentTimeoutMs =
typeof agentTimeoutSeconds === "number" &&
Number.isFinite(agentTimeoutSeconds) &&
agentTimeoutSeconds > 0
? agentTimeoutSeconds * 1000
: undefined;
const timeoutBounds = [runTimeoutMs, agentTimeoutMs].filter(
(value): value is number =>
typeof value === "number" &&
Number.isFinite(value) &&
value > 0 &&
value < MAX_SAFE_TIMEOUT_MS,
);

const modelRequestTimeoutMs = params?.modelRequestTimeoutMs;
if (
typeof modelRequestTimeoutMs === "number" &&
Number.isFinite(modelRequestTimeoutMs) &&
modelRequestTimeoutMs > 0
) {
return clampImplicitTimeoutMs(agentTimeoutSeconds * 1000);
return clampTimeoutMs(Math.min(modelRequestTimeoutMs, ...timeoutBounds));
}

if (typeof runTimeoutMs === "number" && Number.isFinite(runTimeoutMs) && runTimeoutMs > 0) {
return clampImplicitTimeoutMs(runTimeoutMs);
}

if (agentTimeoutMs !== undefined) {
return clampImplicitTimeoutMs(agentTimeoutMs);
}

if (params?.trigger === "cron") {
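Stripped of config plumbing, the precedence this hunk establishes can be sketched as a standalone function. This is a simplified sketch mirroring the diff above, not the exact source: the explicit `agents.defaults.llm.idleTimeoutSeconds` branch is omitted (that knob is being collapsed), and `resolveIdleMs` plus its flattened options object are illustrative names.

```typescript
const DEFAULT_IDLE_MS = 120_000; // 120s default idle watchdog
const MAX_SAFE_TIMEOUT_MS = 2_147_000_000;

// Provider request timeout drives the idle watchdog, bounded by any shorter
// run/agent timeout; implicit fallbacks are capped at the 120s default; cron
// runs with nothing configured disable the watchdog entirely.
function resolveIdleMs(opts: {
  modelRequestTimeoutMs?: number;
  runTimeoutMs?: number;
  agentTimeoutMs?: number;
  cron?: boolean;
}): number {
  const valid = (v?: number): v is number =>
    typeof v === "number" && Number.isFinite(v) && v > 0;
  const clamp = (v: number) => Math.min(Math.floor(v), MAX_SAFE_TIMEOUT_MS);
  const clampImplicit = (v: number) => clamp(Math.min(v, DEFAULT_IDLE_MS));

  // Shorter run/agent timeouts bound the provider request timeout.
  const bounds = [opts.runTimeoutMs, opts.agentTimeoutMs].filter(
    (v): v is number => valid(v) && v < MAX_SAFE_TIMEOUT_MS,
  );
  if (valid(opts.modelRequestTimeoutMs)) {
    return clamp(Math.min(opts.modelRequestTimeoutMs, ...bounds));
  }
  if (valid(opts.runTimeoutMs)) {
    return opts.runTimeoutMs >= MAX_SAFE_TIMEOUT_MS ? 0 : clampImplicit(opts.runTimeoutMs);
  }
  if (valid(opts.agentTimeoutMs)) {
    return clampImplicit(opts.agentTimeoutMs);
  }
  return opts.cron ? 0 : DEFAULT_IDLE_MS;
}
```

The asymmetry is deliberate: an explicit provider timeout is honored in full (only clamped and bounded), while implicit fallbacks never raise the watchdog above the 120s default.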
@@ -386,28 +386,6 @@ describe("config cli", () => {
]);
});

it("writes agents.defaults.llm.idleTimeoutSeconds without disturbing sibling defaults", async () => {
const resolved: OpenClawConfig = {
agents: {
defaults: {
model: "openai/gpt-5.4",
timeoutSeconds: 300,
},
},
};
setSnapshot(resolved, resolved);

await runConfigCommand(["config", "set", "agents.defaults.llm.idleTimeoutSeconds", "900"]);

expect(mockWriteConfigFile).toHaveBeenCalledTimes(1);
const written = mockWriteConfigFile.mock.calls[0]?.[0];
expect(written.agents?.defaults?.model).toBe("openai/gpt-5.4");
expect(written.agents?.defaults?.timeoutSeconds).toBe(300);
expect(written.agents?.defaults?.llm).toEqual({
idleTimeoutSeconds: 900,
});
});

it("drops gateway.auth.password when switching mode to token", async () => {
const resolved: OpenClawConfig = {
gateway: {
@@ -1,6 +1,5 @@
import { describe, expect, it } from "vitest";
import { SENSITIVE_URL_HINT_TAG } from "../shared/net/redact-sensitive-url.js";
import { DEFAULT_LLM_IDLE_TIMEOUT_SECONDS } from "./agent-timeout-defaults.js";
import { computeBaseConfigSchemaResponse } from "./schema-base.js";
import { GENERATED_BASE_CONFIG_SCHEMA } from "./schema.base.generated.js";
@@ -63,33 +62,4 @@ describe("generated base config schema", () => {
expect(uiHints["agents.defaults.videoGenerationModel.fallbacks"]).toBeDefined();
expect(uiHints["agents.defaults.mediaGenerationAutoProviderFallback"]).toBeDefined();
});

it("keeps the LLM idle timeout schema help aligned with the runtime default", () => {
const idleTimeoutDescription = (
GENERATED_BASE_CONFIG_SCHEMA.schema as {
properties?: {
agents?: {
properties?: {
defaults?: {
properties?: {
llm?: {
properties?: {
idleTimeoutSeconds?: {
description?: string;
};
};
};
};
};
};
};
};
}
).properties?.agents?.properties?.defaults?.properties?.llm?.properties?.idleTimeoutSeconds
?.description;

expect(idleTimeoutDescription).toContain(
`Default: ${DEFAULT_LLM_IDLE_TIMEOUT_SECONDS} seconds.`,
);
});
});
@@ -4817,19 +4817,6 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
},
additionalProperties: false,
},
llm: {
type: "object",
properties: {
idleTimeoutSeconds: {
description:
"Idle timeout for LLM streaming responses in seconds. If no token is received within this time, the request is aborted. Set to 0 to disable. Default: 120 seconds.",
type: "integer",
minimum: 0,
maximum: 9007199254740991,
},
},
additionalProperties: false,
},
compaction: {
type: "object",
properties: {
@@ -277,8 +277,6 @@ export type AgentDefaultsConfig = {
cliBackends?: Record<string, CliBackendConfig>;
/** Opt-in: prune old tool results from the LLM context to reduce token usage. */
contextPruning?: AgentContextPruningConfig;
/** LLM timeout configuration. */
llm?: AgentLlmConfig;
/** Compaction tuning and pre-compaction memory flush behavior. */
compaction?: AgentCompactionConfig;
/** Embedded Pi runner hardening and compatibility controls. */
@@ -507,16 +505,3 @@ export type AgentCompactionMemoryFlushConfig = {
/** System prompt appended for the memory flush turn. */
systemPrompt?: string;
};

/**
 * LLM timeout configuration.
 */
export type AgentLlmConfig = {
/**
 * Idle timeout for LLM streaming responses in seconds.
 * If no token is received within this time, the request is aborted.
 * Set to 0 to disable (never timeout).
 * If unset, OpenClaw uses the default LLM idle timeout.
 */
idleTimeoutSeconds?: number;
};
@@ -1,5 +1,4 @@
import { z } from "zod";
import { DEFAULT_LLM_IDLE_TIMEOUT_SECONDS } from "./agent-timeout-defaults.js";
import { isValidNonNegativeByteSizeString } from "./byte-size.js";
import {
HeartbeatSchema,
@@ -162,19 +161,6 @@ export const AgentDefaultsSchema = z
})
.strict()
.optional(),
llm: z
.object({
idleTimeoutSeconds: z
.number()
.int()
.nonnegative()
.optional()
.describe(
`Idle timeout for LLM streaming responses in seconds. If no token is received within this time, the request is aborted. Set to 0 to disable. Default: ${DEFAULT_LLM_IDLE_TIMEOUT_SECONDS} seconds.`,
),
})
.strict()
.optional(),
compaction: z
.object({
mode: z.union([z.literal("default"), z.literal("safeguard")]).optional(),