mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 12:10:42 +00:00
feat: trigger compaction for oversized transcripts
This commit is contained in:
@@ -4,6 +4,10 @@ Docs: https://docs.openclaw.ai

## Unreleased

### Changes

- Agents/compaction: add an opt-in `agents.defaults.compaction.maxActiveTranscriptBytes` preflight trigger that runs normal local compaction when the active JSONL grows too large. It requires transcript rotation so that successful compaction moves future turns onto a smaller successor file rather than splitting raw history by bytes. Thanks @vincentkoc.

### Fixes

- Cron: classify isolated runs as errors when the final output narrates known execution-denial markers such as `SYSTEM_RUN_DENIED`, `INVALID_REQUEST`, or approval-binding refusal phrases, so blocked commands no longer appear green in cron history. Fixes #67172; carries forward #67186. Thanks @oc-gh-dr, @hclsys, and @1yihui.
@@ -1,4 +1,4 @@
-29181dbaa26242ced515ba4c2b363853a24b5b2623b33ecfede252c2a984b7c6 config-baseline.json
-2edac1da06bbb3709375bf82ae68890c67634f5ad3200a98a1d008b22c335e79 config-baseline.core.json
+0c3eaaee031f0adec2fcfc8a3a6a0d80dfc19d4d1c10b0ff4249b30e04b3c47d config-baseline.json
+420269ce22f17382cb253c80a232329e943296be101cda313506341ae39cc674 config-baseline.core.json
 07963db49502132f26db396c56b36e018b110e6c55a68b3cb012d3ec96f43901 config-baseline.channel.json
 74b74cb18ac37c0acaa765f398f1f9edbcee4c43567f02d45c89598a1e13afb4 config-baseline.plugin.json
@@ -124,6 +124,16 @@ active successor transcript from the compaction summary, preserved state, and
unsummarized tail, then keeps the previous JSONL as the archived checkpoint
source.

When `agents.defaults.compaction.maxActiveTranscriptBytes` is set, OpenClaw can
trigger normal local compaction before a run if the active JSONL reaches that
size. This is useful for long-running sessions where provider-side context
management may keep model context healthy while the local transcript keeps
growing. It does not split raw JSONL bytes; it only asks the normal compaction
pipeline to create a semantic summary. Combine it with
`truncateAfterCompaction: true` to move future turns onto the smaller successor
transcript; without transcript rotation, the byte guard remains inactive because
the active file would not shrink.
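The "inactive without rotation" rule above can be sketched as a small predicate. This is an illustrative stand-in, not OpenClaw's actual code, and the names (`byteGuardActive`, `CompactionConfig`) are assumptions: the byte threshold only arms when rotation is enabled, and `0`/unset disables it.

```typescript
// Hypothetical sketch of the byte-guard gating described above.
type CompactionConfig = {
  truncateAfterCompaction?: boolean;
  maxActiveTranscriptBytes?: number; // threshold already normalized to bytes
};

function byteGuardActive(cfg: CompactionConfig, activeBytes: number): boolean {
  // Without rotation the active file would never shrink, so the guard stays off.
  if (cfg.truncateAfterCompaction !== true) return false;
  const max = cfg.maxActiveTranscriptBytes;
  // Unset or 0 disables the guard entirely.
  if (typeof max !== "number" || max <= 0) return false;
  return activeBytes >= max;
}
```

Either disabling rotation or leaving the threshold unset yields the same result: the preflight byte check is simply skipped.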

## Using a different model

By default, compaction uses your agent's primary model. You can use a more
@@ -554,6 +554,8 @@ Periodic heartbeat runs.
    qualityGuard: { enabled: true, maxRetries: 1 },
    postCompactionSections: ["Session Startup", "Red Lines"], // [] disables reinjection
    model: "openrouter/anthropic/claude-sonnet-4-6", // optional compaction-only model override
    truncateAfterCompaction: true, // rotate to a smaller successor JSONL after compaction
    maxActiveTranscriptBytes: "20mb", // optional preflight local compaction trigger
    notifyUser: true, // send brief notices when compaction starts and completes (default: false)
    memoryFlush: {
      enabled: true,
@@ -576,6 +578,7 @@ Periodic heartbeat runs.
- `qualityGuard`: retry-on-malformed-output checks for safeguard summaries. Enabled by default in safeguard mode; set `enabled: false` to skip the audit.
- `postCompactionSections`: optional AGENTS.md H2/H3 section names to re-inject after compaction. Defaults to `["Session Startup", "Red Lines"]`; set `[]` to disable reinjection. When unset or explicitly set to that default pair, older `Every Session`/`Safety` headings are also accepted as a legacy fallback.
- `model`: optional `provider/model-id` override for compaction summarization only. Use this when the main session should keep one model but compaction summaries should run on another; when unset, compaction uses the session's primary model.
- `maxActiveTranscriptBytes`: optional byte threshold (`number` or strings like `"20mb"`) that triggers normal local compaction before a run when the active JSONL grows past the threshold. Requires `truncateAfterCompaction` so successful compaction can rotate to a smaller successor transcript. Disabled when unset or `0`.
- `notifyUser`: when `true`, sends brief notices to the user when compaction starts and when it completes (for example, "Compacting context..." and "Compaction complete"). Disabled by default to keep compaction silent.
- `memoryFlush`: silent agentic turn before auto-compaction to store durable memories. Skipped when workspace is read-only.
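Several of these settings accept either a number of bytes or a size string like `"20mb"`. As a rough illustration of how such strings could be normalized to bytes — a hedged stand-in for the repo's `parseNonNegativeByteSize` helper, whose actual units and rounding rules may differ:

```typescript
// Illustrative byte-size normalization (assumes binary units: 1kb = 1024 bytes).
// This is NOT the real parser from src/config/byte-size.ts, just a sketch.
const UNIT_BYTES: Record<string, number> = { b: 1, kb: 1024, mb: 1024 ** 2, gb: 1024 ** 3 };

function parseByteSize(value: number | string | undefined): number | undefined {
  // Plain numbers pass through when they are non-negative integers.
  if (typeof value === "number") {
    return Number.isInteger(value) && value >= 0 ? value : undefined;
  }
  if (typeof value !== "string") return undefined;
  // Accept "<digits>[.<digits>]<unit>" with b/kb/mb/gb, case-insensitive.
  const match = /^(\d+(?:\.\d+)?)\s*(b|kb|mb|gb)$/i.exec(value.trim());
  if (!match) return undefined;
  return Math.floor(Number(match[1]) * UNIT_BYTES[match[2].toLowerCase()]);
}
```

Under these assumptions, `"20mb"` would resolve to 20 × 1024 × 1024 bytes, and unrecognized strings resolve to `undefined`, which the caller treats as "disabled".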

@@ -259,6 +259,13 @@ Where:

These are Pi runtime semantics (OpenClaw consumes the events, but Pi decides when to compact).

OpenClaw can also trigger a preflight local compaction before opening the next
run when `agents.defaults.compaction.maxActiveTranscriptBytes` is set and the
active transcript file reaches that size. This is a file-size guard for local
reopen cost, not raw archival: OpenClaw still runs normal semantic compaction,
and it requires `truncateAfterCompaction` so the compacted summary can become a
new successor transcript.
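The token-budget trigger and the transcript-byte trigger are independent, and either one is enough to request a preflight compaction. A sketch of the combined decision (the function and parameter names here are illustrative, not OpenClaw exports):

```typescript
// Hypothetical sketch: either trigger alone requests preflight compaction.
function shouldCompactPreflight(params: {
  tokenCount?: number;
  tokenThreshold: number;
  activeTranscriptBytes?: number;
  maxActiveTranscriptBytes?: number; // undefined when rotation is off or the guard is unset
}): boolean {
  // Token-budget trigger: the session's token count crossed the soft threshold.
  const byTokens =
    typeof params.tokenCount === "number" && params.tokenCount >= params.tokenThreshold;
  // Byte trigger: the active JSONL on disk grew past the configured size.
  const byBytes =
    typeof params.maxActiveTranscriptBytes === "number" &&
    typeof params.activeTranscriptBytes === "number" &&
    params.activeTranscriptBytes >= params.maxActiveTranscriptBytes;
  return byTokens || byBytes;
}
```

Keeping the checks separate mirrors the behavior described above: a healthy model context does not block the file-size guard, and vice versa.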

---

## Compaction settings (`reserveTokens`, `keepRecentTokens`)

@@ -285,6 +292,11 @@ OpenClaw also enforces a safety floor for embedded runs:
  and keeps Pi's recent-tail cut point. Without an explicit keep budget,
  manual compaction remains a hard checkpoint and rebuilt context starts from
  the new summary.
- Set `agents.defaults.compaction.maxActiveTranscriptBytes` to a byte value or
  string such as `"20mb"` to run local compaction before a turn when the active
  transcript gets large. This guard is active only when
  `truncateAfterCompaction` is also enabled. Leave it unset or set `0` to
  disable.
- When `agents.defaults.compaction.truncateAfterCompaction` is enabled,
  OpenClaw rotates the active transcript to a compacted successor JSONL after
  compaction. The old full transcript remains archived and linked from the
@@ -364,6 +364,116 @@ describe("runMemoryFlushIfNeeded", () => {
  });
});

it("triggers preflight compaction when the active transcript exceeds the configured byte threshold", async () => {
  const sessionFile = path.join(rootDir, "large-session.jsonl");
  await fs.writeFile(
    sessionFile,
    `${JSON.stringify({ message: { role: "user", content: "x".repeat(256) } })}\n`,
    "utf8",
  );
  const sessionEntry: SessionEntry = {
    sessionId: "session",
    sessionFile,
    updatedAt: Date.now(),
    totalTokens: 10,
    totalTokensFresh: true,
    compactionCount: 0,
  };
  const sessionStore = { main: sessionEntry };
  const replyOperation = {
    abortSignal: new AbortController().signal,
    setPhase: vi.fn(),
    updateSessionId: vi.fn(),
  };

  const entry = await runPreflightCompactionIfNeeded({
    cfg: {
      agents: {
        defaults: {
          compaction: {
            truncateAfterCompaction: true,
            maxActiveTranscriptBytes: "10b",
          },
        },
      },
    },
    followupRun: createTestFollowupRun({
      sessionId: "session",
      sessionFile,
      sessionKey: "main",
    }),
    defaultModel: "anthropic/claude-opus-4-6",
    agentCfgContextTokens: 100_000,
    sessionEntry,
    sessionStore,
    sessionKey: "main",
    storePath: path.join(rootDir, "sessions.json"),
    isHeartbeat: false,
    replyOperation: replyOperation as never,
  });

  expect(entry?.compactionCount).toBe(1);
  expect(replyOperation.setPhase).toHaveBeenCalledWith("preflight_compacting");
  const compactCall = compactEmbeddedPiSessionMock.mock.calls[0]?.[0] as {
    currentTokenCount?: number;
    sessionFile?: string;
    sessionId?: string;
    trigger?: string;
  };
  expect(compactCall).toEqual(
    expect.objectContaining({
      sessionId: "session",
      trigger: "budget",
      currentTokenCount: 10,
    }),
  );
  expect(compactCall.sessionFile).toContain("large-session.jsonl");
});

it("keeps the active transcript byte threshold inactive unless transcript rotation is enabled", async () => {
  const sessionFile = path.join(rootDir, "large-session-no-rotation.jsonl");
  await fs.writeFile(
    sessionFile,
    `${JSON.stringify({ message: { role: "user", content: "x".repeat(256) } })}\n`,
    "utf8",
  );
  const sessionEntry: SessionEntry = {
    sessionId: "session",
    sessionFile,
    updatedAt: Date.now(),
    totalTokens: 10,
    totalTokensFresh: true,
    compactionCount: 0,
  };

  const entry = await runPreflightCompactionIfNeeded({
    cfg: {
      agents: {
        defaults: {
          compaction: {
            maxActiveTranscriptBytes: "10b",
          },
        },
      },
    },
    followupRun: createTestFollowupRun({
      sessionId: "session",
      sessionFile,
      sessionKey: "main",
    }),
    defaultModel: "anthropic/claude-opus-4-6",
    agentCfgContextTokens: 100_000,
    sessionEntry,
    sessionStore: { main: sessionEntry },
    sessionKey: "main",
    isHeartbeat: false,
    replyOperation: createReplyOperation(),
  });

  expect(entry).toBe(sessionEntry);
  expect(compactEmbeddedPiSessionMock).not.toHaveBeenCalled();
});

it("uses configured prompts and stored bootstrap warning signatures", async () => {
  const sessionEntry: SessionEntry = {
    sessionId: "session",
@@ -35,6 +35,7 @@ import {
} from "./agent-runner-utils.js";
import {
  hasAlreadyFlushedForCurrentCompaction,
  resolveMaxActiveTranscriptBytes,
  resolveMemoryFlushContextWindowTokens,
  shouldRunMemoryFlush,
  shouldRunPreflightCompaction,
@@ -400,8 +401,25 @@ export async function runPreflightCompactionIfNeeded(params: {
    typeof persistedTotalTokens === "number" &&
    Number.isFinite(persistedTotalTokens) &&
    persistedTotalTokens > 0;
  const maxActiveTranscriptBytes = resolveMaxActiveTranscriptBytes(params.cfg);
  const shouldCheckActiveTranscriptBytes = typeof maxActiveTranscriptBytes === "number";
  const transcriptSizeSnapshot = shouldCheckActiveTranscriptBytes
    ? await readSessionLogSnapshot({
        sessionId: entry.sessionId,
        sessionEntry: entry,
        sessionKey: params.sessionKey ?? params.followupRun.run.sessionKey,
        opts: { storePath: params.storePath },
        includeByteSize: true,
        includeUsage: false,
      })
    : undefined;
  const activeTranscriptBytes = transcriptSizeSnapshot?.byteSize;
  const shouldCompactByTranscriptBytes =
    typeof activeTranscriptBytes === "number" &&
    typeof maxActiveTranscriptBytes === "number" &&
    activeTranscriptBytes >= maxActiveTranscriptBytes;
  const shouldUseTranscriptFallback = entry.totalTokensFresh === false || !hasPersistedTotalTokens;
- if (!shouldUseTranscriptFallback) {
+ if (!shouldUseTranscriptFallback && !shouldCompactByTranscriptBytes) {
    return entry ?? params.sessionEntry;
  }
  const promptTokenEstimate = estimatePromptTokensForMemoryFlush(
@@ -434,24 +452,31 @@ export async function runPreflightCompactionIfNeeded(params: {
      `isHeartbeat=${params.isHeartbeat} isCli=${isCli} ` +
      `persistedFresh=${entry?.totalTokensFresh === true} ` +
      `transcriptPromptTokens=${transcriptPromptTokens ?? "undefined"} ` +
-     `promptTokensEst=${promptTokenEstimate ?? "undefined"}`,
+     `promptTokensEst=${promptTokenEstimate ?? "undefined"} ` +
+     `activeTranscriptBytes=${activeTranscriptBytes ?? "undefined"} ` +
+     `maxActiveTranscriptBytes=${maxActiveTranscriptBytes ?? "undefined"} ` +
+     `sizeTrigger=${shouldCompactByTranscriptBytes}`,
  );

- const shouldCompact = shouldRunPreflightCompaction({
+ const shouldCompactByTokens = shouldRunPreflightCompaction({
    entry,
    tokenCount: tokenCountForCompaction,
    contextWindowTokens,
    reserveTokensFloor,
    softThresholdTokens,
  });
+ const shouldCompact = shouldCompactByTokens || shouldCompactByTranscriptBytes;
  if (!shouldCompact) {
    return entry ?? params.sessionEntry;
  }

+ const compactionTrigger = shouldCompactByTranscriptBytes ? "transcript_bytes" : "tokens";
  logVerbose(
    `preflightCompaction triggered: sessionKey=${params.sessionKey} ` +
    `tokenCount=${tokenCountForCompaction ?? freshPersistedTokens ?? "undefined"} ` +
-   `threshold=${threshold}`,
+   `threshold=${threshold} trigger=${compactionTrigger} ` +
+   `activeTranscriptBytes=${activeTranscriptBytes ?? "undefined"} ` +
+   `maxActiveTranscriptBytes=${maxActiveTranscriptBytes ?? "undefined"}`,
  );

  params.replyOperation.setPhase("preflight_compacting");
@@ -486,7 +511,7 @@ export async function runPreflightCompactionIfNeeded(params: {
    thinkLevel: params.followupRun.run.thinkLevel,
    bashElevated: params.followupRun.run.bashElevated,
    trigger: "budget",
-   currentTokenCount: tokenCountForCompaction,
+   currentTokenCount: tokenCountForCompaction ?? freshPersistedTokens,
    senderIsOwner: params.followupRun.run.senderIsOwner,
    ownerNumbers: params.followupRun.run.ownerNumbers,
    abortSignal: params.replyOperation.abortSignal,
@@ -1,6 +1,7 @@
import crypto from "node:crypto";
import { resolveContextTokensForModel } from "../../agents/context.js";
import { DEFAULT_CONTEXT_TOKENS } from "../../agents/defaults.js";
import { parseNonNegativeByteSize } from "../../config/byte-size.js";
import { resolveFreshSessionTotalTokens, type SessionEntry } from "../../config/sessions.js";
import type { OpenClawConfig } from "../../config/types.openclaw.js";
@@ -21,6 +22,15 @@ export function resolveMemoryFlushContextWindowTokens(params: {
  );
}

export function resolveMaxActiveTranscriptBytes(cfg?: OpenClawConfig): number | undefined {
  const compaction = cfg?.agents?.defaults?.compaction;
  if (compaction?.truncateAfterCompaction !== true) {
    return undefined;
  }
  const parsed = parseNonNegativeByteSize(compaction.maxActiveTranscriptBytes);
  return typeof parsed === "number" && parsed > 0 ? parsed : undefined;
}

function resolvePositiveTokenCount(value: number | undefined): number | undefined {
  return typeof value === "number" && Number.isFinite(value) && value > 0
    ? Math.floor(value)
@@ -32,6 +32,7 @@ describe("config compaction settings", () => {
      prompt: "Write notes.",
      systemPrompt: "Flush memory now.",
    },
    maxActiveTranscriptBytes: "20mb",
  });

  expect(compaction?.reserveTokensFloor).toBe(12_345);
@@ -46,6 +47,7 @@ describe("config compaction settings", () => {
  expect(compaction?.memoryFlush?.softThresholdTokens).toBe(1234);
  expect(compaction?.memoryFlush?.prompt).toBe("Write notes.");
  expect(compaction?.memoryFlush?.systemPrompt).toBe("Flush memory now.");
  expect(compaction?.maxActiveTranscriptBytes).toBe("20mb");
});

it("preserves pi compaction override values", () => {
@@ -151,6 +151,7 @@ describe("config schema regressions", () => {
      defaults: {
        compaction: {
          truncateAfterCompaction: true,
          maxActiveTranscriptBytes: "20mb",
        },
      },
    },
@@ -5001,6 +5001,21 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
          description:
            "When enabled, rotates the active session JSONL file after compaction so future turns load only the summary and unsummarized tail while the previous full transcript remains archived. Prevents unbounded active transcript growth in long-running sessions. Default: false.",
        },
        maxActiveTranscriptBytes: {
          anyOf: [
            {
              type: "integer",
              minimum: 0,
              maximum: 9007199254740991,
            },
            {
              type: "string",
            },
          ],
          title: "Compaction Active Transcript Size Threshold",
          description:
            'Triggers normal local compaction when the active session transcript reaches this size (bytes or strings like "20mb"). Requires truncateAfterCompaction so successful compaction can rotate to a smaller successor transcript; set to 0 or leave unset to disable. This never splits raw transcript bytes.',
        },
        notifyUser: {
          type: "boolean",
          title: "Compaction Notify User",
@@ -26867,6 +26882,11 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
    help: "When enabled, rotates the active session JSONL file after compaction so future turns load only the summary and unsummarized tail while the previous full transcript remains archived. Prevents unbounded active transcript growth in long-running sessions. Default: false.",
    tags: ["advanced"],
  },
  "agents.defaults.compaction.maxActiveTranscriptBytes": {
    label: "Compaction Active Transcript Size Threshold",
    help: 'Triggers normal local compaction when the active session transcript reaches this size (bytes or strings like "20mb"). Requires truncateAfterCompaction so successful compaction can rotate to a smaller successor transcript; set to 0 or leave unset to disable. This never splits raw transcript bytes.',
    tags: ["performance"],
  },
  "agents.defaults.compaction.notifyUser": {
    label: "Compaction Notify User",
    help: "When enabled, sends brief compaction notices to the user when compaction starts and when it completes (for example, '🧹 Compacting context...' and '🧹 Compaction complete'). Disabled by default to keep compaction silent and non-intrusive.",
@@ -389,6 +389,7 @@ const TARGET_KEYS = [
  "agents.defaults.compaction.timeoutSeconds",
  "agents.defaults.compaction.model",
  "agents.defaults.compaction.truncateAfterCompaction",
  "agents.defaults.compaction.maxActiveTranscriptBytes",
  "agents.defaults.compaction.memoryFlush",
  "agents.defaults.compaction.memoryFlush.enabled",
  "agents.defaults.compaction.memoryFlush.softThresholdTokens",
@@ -811,6 +812,10 @@ describe("config help copy quality", () => {
  const compactionModel = FIELD_HELP["agents.defaults.compaction.model"];
  expect(/provider\/model|different model|primary agent model/i.test(compactionModel)).toBe(true);

  const transcriptBytes = FIELD_HELP["agents.defaults.compaction.maxActiveTranscriptBytes"];
  expect(/transcript|bytes|compaction/i.test(transcriptBytes)).toBe(true);
  expect(/never splits raw transcript bytes/i.test(transcriptBytes)).toBe(true);

  const flush = FIELD_HELP["agents.defaults.compaction.memoryFlush.enabled"];
  expect(/pre-compaction|memory flush|token/i.test(flush)).toBe(true);
});
@@ -1267,6 +1267,8 @@ export const FIELD_HELP: Record<string, string> = {
    "Optional provider/model override used only for compaction summarization. Set this when you want compaction to run on a different model than the session default, and leave it unset to keep using the primary agent model.",
  "agents.defaults.compaction.truncateAfterCompaction":
    "When enabled, rotates the active session JSONL file after compaction so future turns load only the summary and unsummarized tail while the previous full transcript remains archived. Prevents unbounded active transcript growth in long-running sessions. Default: false.",
  "agents.defaults.compaction.maxActiveTranscriptBytes":
    'Triggers normal local compaction when the active session transcript reaches this size (bytes or strings like "20mb"). Requires truncateAfterCompaction so successful compaction can rotate to a smaller successor transcript; set to 0 or leave unset to disable. This never splits raw transcript bytes.',
  "agents.defaults.compaction.notifyUser":
    "When enabled, sends brief compaction notices to the user when compaction starts and when it completes (for example, '🧹 Compacting context...' and '🧹 Compaction complete'). Disabled by default to keep compaction silent and non-intrusive.",
  "agents.defaults.compaction.memoryFlush":
@@ -595,6 +595,8 @@ export const FIELD_LABELS: Record<string, string> = {
  "agents.defaults.compaction.timeoutSeconds": "Compaction Timeout (Seconds)",
  "agents.defaults.compaction.model": "Compaction Model Override",
  "agents.defaults.compaction.truncateAfterCompaction": "Rotate Transcript After Compaction",
  "agents.defaults.compaction.maxActiveTranscriptBytes":
    "Compaction Active Transcript Size Threshold",
  "agents.defaults.compaction.notifyUser": "Compaction Notify User",
  "agents.defaults.compaction.memoryFlush": "Compaction Memory Flush",
  "agents.defaults.compaction.memoryFlush.enabled": "Compaction Memory Flush Enabled",
@@ -477,6 +477,14 @@ export type AgentCompactionConfig = {
   * Default: false (existing behavior preserved).
   */
  truncateAfterCompaction?: boolean;
  /**
   * Trigger a normal local compaction when the active session JSONL reaches
   * this size (bytes, or byte-size string like "20mb"). Set to 0/unset to
   * disable. Requires truncateAfterCompaction so successful compaction can
   * rotate to a smaller successor transcript. This does not split raw
   * transcript bytes.
   */
  maxActiveTranscriptBytes?: number | string;
  /**
   * Send brief compaction notices to the user when compaction starts and completes.
   * Default: false (silent by default).
@@ -96,9 +96,11 @@ describe("agent defaults schema", () => {
  const result = AgentDefaultsSchema.parse({
    compaction: {
      truncateAfterCompaction: true,
      maxActiveTranscriptBytes: "20mb",
    },
  })!;
  expect(result.compaction?.truncateAfterCompaction).toBe(true);
  expect(result.compaction?.maxActiveTranscriptBytes).toBe("20mb");
});

it("accepts focused contextLimits on defaults and agent entries", () => {
@@ -20,6 +20,11 @@ import {

export const SilentReplyPolicySchema = z.union([z.literal("allow"), z.literal("disallow")]);

const NonNegativeByteSizeSchema = z.union([
  z.number().int().nonnegative(),
  z.string().refine(isValidNonNegativeByteSizeString, "Expected byte size string like 2mb"),
]);

export const SilentReplyPolicyConfigSchema = z
  .object({
    direct: SilentReplyPolicySchema.optional(),
@@ -199,20 +204,14 @@ export const AgentDefaultsSchema = z
        .object({
          enabled: z.boolean().optional(),
          softThresholdTokens: z.number().int().nonnegative().optional(),
-         forceFlushTranscriptBytes: z
-           .union([
-             z.number().int().nonnegative(),
-             z
-               .string()
-               .refine(isValidNonNegativeByteSizeString, "Expected byte size string like 2mb"),
-           ])
-           .optional(),
+         forceFlushTranscriptBytes: NonNegativeByteSizeSchema.optional(),
          prompt: z.string().optional(),
          systemPrompt: z.string().optional(),
        })
        .strict()
        .optional(),
      truncateAfterCompaction: z.boolean().optional(),
      maxActiveTranscriptBytes: NonNegativeByteSizeSchema.optional(),
      notifyUser: z.boolean().optional(),
    })
    .strict()