mirror of
https://github.com/openclaw/openclaw.git
synced 2026-04-12 01:31:08 +00:00
Feat: Add Active Memory recall plugin (#63286)
* Refine plugin debug plumbing
* Tighten plugin debug handling
* Reduce active memory overhead
* Abort active memory sidecar on timeout
* Rename active memory blocking subagent wording
* Fix active memory cache and recall selection
* Preserve active memory session scope
* Sanitize recalled context before retrieval
* Add active memory changelog entry
* Harden active memory debug and transcript handling
* Add active memory policy config
* Raise active memory timeout default
* Keep usage footer on primary reply
* Clear stale active memory status lines
* Match legacy active memory status prefixes
* Preserve numeric active memory bullets
* Reuse canonical session keys for active memory
* Let active memory subagent decide relevance
* Refine active memory plugin summary flow
* Fix active memory main-session DM detection
* Trim active memory summaries at word boundaries
* Add active memory prompt styles
* Fix active memory stale status cleanup
* Rename active memory subagent wording
* Add active memory prompt and thinking overrides
* Remove active memory legacy status compat
* Resolve active memory session id status
* Add active memory session toggle
* Add active memory global toggle
* Fix active memory toggle state handling
* Harden active memory transcript persistence
* Fix active memory chat type gating
* Scope active memory transcripts by agent
* Show plugin debug before replies
@@ -6,6 +6,7 @@ Docs: https://docs.openclaw.ai
### Changes

- Memory/Active Memory: add a new optional Active Memory plugin that gives OpenClaw a dedicated memory sub-agent right before the main reply, so ongoing chats can automatically pull in relevant preferences, context, and past details without making users remember to manually say "remember this" or "search memory" first. Includes configurable message/recent/full context modes, live `/verbose` inspection, advanced prompt/thinking overrides for tuning, and opt-in transcript persistence for debugging.
- macOS/Talk: add an experimental local MLX speech provider for Talk Mode, with explicit provider selection, local utterance playback, interruption handling, and system-voice fallback. (#63539) Thanks @ImLukeF.
- Docs i18n: chunk raw doc translation, reject truncated tagged outputs, avoid ambiguous body-only wrapper unwrapping, and recover from terminated Pi translation sessions without changing the default `openai/gpt-5.4` path. (#62969, #63808) Thanks @hxy91819.
608 docs/concepts/active-memory.md Normal file
@@ -0,0 +1,608 @@
---
title: "Active Memory"
summary: "A plugin-owned blocking memory sub-agent that injects relevant memory into interactive chat sessions"
read_when:
  - You want to understand what active memory is for
  - You want to turn active memory on for a conversational agent
  - You want to tune active memory behavior without enabling it everywhere
---

# Active Memory

Active memory is an optional plugin-owned blocking memory sub-agent that runs
before the main reply for eligible conversational sessions.

It exists because most memory systems are capable but reactive. They rely on
the main agent to decide when to search memory, or on the user to say things
like "remember this" or "search memory." By then, the moment where memory would
have made the reply feel natural has already passed.

Active memory gives the system one bounded chance to surface relevant memory
before the main reply is generated.

## Paste This Into Your Agent

Paste this into your agent if you want it to enable Active Memory with a
self-contained, safe-default setup:

```json5
{
  plugins: {
    entries: {
      "active-memory": {
        enabled: true,
        config: {
          enabled: true,
          agents: ["main"],
          allowedChatTypes: ["direct"],
          modelFallbackPolicy: "default-remote",
          queryMode: "recent",
          promptStyle: "balanced",
          timeoutMs: 15000,
          maxSummaryChars: 220,
          persistTranscripts: false,
          logging: true,
        },
      },
    },
  },
}
```

This turns the plugin on for the `main` agent, keeps it limited to direct-message
style sessions by default, lets it inherit the current session model first, and
still allows the built-in remote fallback if no explicit or inherited model is
available.

After that, restart the gateway:

```bash
node scripts/run-node.mjs gateway --profile dev
```

To inspect it live in a conversation:

```text
/verbose on
```

## Turn active memory on

The safest setup is:

1. enable the plugin
2. target one conversational agent
3. keep logging on only while tuning

Start with this in `openclaw.json`:

```json5
{
  plugins: {
    entries: {
      "active-memory": {
        enabled: true,
        config: {
          agents: ["main"],
          allowedChatTypes: ["direct"],
          modelFallbackPolicy: "default-remote",
          queryMode: "recent",
          promptStyle: "balanced",
          timeoutMs: 15000,
          maxSummaryChars: 220,
          persistTranscripts: false,
          logging: true,
        },
      },
    },
  },
}
```

Then restart the gateway:

```bash
node scripts/run-node.mjs gateway --profile dev
```

What this means:

- `plugins.entries.active-memory.enabled: true` turns the plugin on
- `config.agents: ["main"]` opts only the `main` agent into active memory
- `config.allowedChatTypes: ["direct"]` keeps active memory on for direct-message style sessions only by default
- if `config.model` is unset, active memory inherits the current session model first
- `config.modelFallbackPolicy: "default-remote"` keeps the built-in remote fallback as the default when no explicit or inherited model is available
- `config.promptStyle: "balanced"` uses the default general-purpose prompt style for `recent` mode
- active memory still runs only on eligible interactive persistent chat sessions

## How to see it

Active memory injects hidden system context for the model. It does not expose
raw `<active_memory_plugin>...</active_memory_plugin>` tags to the client.

## Session toggle

Use the plugin command when you want to pause or resume active memory for the
current chat session without editing config:

```text
/active-memory status
/active-memory off
/active-memory on
```

This is session-scoped. It does not change
`plugins.entries.active-memory.enabled`, agent targeting, or other global
configuration.

If you want the command to write config and pause or resume active memory for
all sessions, use the explicit global form:

```text
/active-memory status --global
/active-memory off --global
/active-memory on --global
```

The global form writes `plugins.entries.active-memory.config.enabled`. It leaves
`plugins.entries.active-memory.enabled` on so the command remains available to
turn active memory back on later.

If you want to see what active memory is doing in a live session, turn verbose
mode on for that session:

```text
/verbose on
```

With verbose enabled, OpenClaw can show:

- an active memory status line such as `Active Memory: ok 842ms recent 34 chars`
- a readable debug summary such as `Active Memory Debug: Lemon pepper wings with blue cheese.`

Those lines are derived from the same active memory pass that feeds the hidden
system context, but they are formatted for humans instead of exposing raw prompt
markup.

By default, the blocking memory sub-agent transcript is temporary and deleted
after the run completes.

Example flow:

```text
/verbose on
what wings should i order?
```

Expected visible reply shape:

```text
...normal assistant reply...

🧩 Active Memory: ok 842ms recent 34 chars
🔎 Active Memory Debug: Lemon pepper wings with blue cheese.
```

## When it runs

Active memory uses two gates:

1. **Config opt-in**
   The plugin must be enabled, and the current agent id must appear in
   `plugins.entries.active-memory.config.agents`.
2. **Strict runtime eligibility**
   Even when enabled and targeted, active memory only runs for eligible
   interactive persistent chat sessions.

The actual rule is:

```text
plugin enabled
+
agent id targeted
+
allowed chat type
+
eligible interactive persistent chat session
=
active memory runs
```

If any of those fail, active memory does not run.

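Conceptually, the gates combine as a single AND. The following is a minimal sketch of that combination, with hypothetical names; the real eligibility check is internal to the plugin runtime:

```typescript
// Hypothetical shape of the gate inputs; field names are illustrative.
interface ActiveMemoryGateInput {
  pluginEnabled: boolean;               // plugins.entries.active-memory.enabled
  targetedAgents: string[];             // config.agents
  allowedChatTypes: string[];           // config.allowedChatTypes
  agentId: string;                      // current agent id
  chatType: string;                     // "direct" | "group" | "channel"
  isInteractivePersistentChat: boolean; // strict runtime eligibility
}

// Every gate must pass; failing any single gate skips active memory entirely.
function shouldRunActiveMemory(input: ActiveMemoryGateInput): boolean {
  return (
    input.pluginEnabled &&
    input.targetedAgents.includes(input.agentId) &&
    input.allowedChatTypes.includes(input.chatType) &&
    input.isInteractivePersistentChat
  );
}
```

Note that the gates only gate: passing all of them means the sub-agent runs, not that any memory is necessarily injected.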
## Session types

`config.allowedChatTypes` controls which kinds of conversations may run Active
Memory at all.

The default is:

```json5
allowedChatTypes: ["direct"]
```

That means Active Memory runs by default in direct-message style sessions, but
not in group or channel sessions unless you opt them in explicitly.

Examples:

```json5
allowedChatTypes: ["direct"]
```

```json5
allowedChatTypes: ["direct", "group"]
```

```json5
allowedChatTypes: ["direct", "group", "channel"]
```

## Where it runs

Active memory is a conversational enrichment feature, not a platform-wide
inference feature.

| Surface                                                             | Runs active memory?                                     |
| ------------------------------------------------------------------- | ------------------------------------------------------- |
| Control UI / web chat persistent sessions                           | Yes, if the plugin is enabled and the agent is targeted |
| Other interactive channel sessions on the same persistent chat path | Yes, if the plugin is enabled and the agent is targeted |
| Headless one-shot runs                                              | No                                                      |
| Heartbeat/background runs                                           | No                                                      |
| Generic internal `agent-command` paths                              | No                                                      |
| Sub-agent/internal helper execution                                 | No                                                      |

## Why use it

Use active memory when:

- the session is persistent and user-facing
- the agent has meaningful long-term memory to search
- continuity and personalization matter more than raw prompt determinism

It works especially well for:

- stable preferences
- recurring habits
- long-term user context that should surface naturally

It is a poor fit for:

- automation
- internal workers
- one-shot API tasks
- places where hidden personalization would be surprising

## How it works

The runtime shape is:

```mermaid
flowchart LR
    U["User Message"] --> Q["Build Memory Query"]
    Q --> R["Active Memory Blocking Memory Sub-Agent"]
    R -->|NONE or empty| M["Main Reply"]
    R -->|relevant summary| I["Append Hidden active_memory_plugin System Context"]
    I --> M
```

The blocking memory sub-agent can use only:

- `memory_search`
- `memory_get`

If the connection between the conversation and stored memory is weak, it should
return `NONE`.

## Query modes

`config.queryMode` controls how much conversation the blocking memory sub-agent
sees. The three modes are described in the `message`, `recent`, and `full`
sections below.

## Prompt styles

`config.promptStyle` controls how eager or strict the blocking memory sub-agent
is when deciding whether to return memory.

Available styles:

- `balanced`: general-purpose default for `recent` mode
- `strict`: least eager; best when you want very little bleed from nearby context
- `contextual`: most continuity-friendly; best when conversation history should matter more
- `recall-heavy`: more willing to surface memory on softer but still plausible matches
- `precision-heavy`: aggressively prefers `NONE` unless the match is obvious
- `preference-only`: optimized for favorites, habits, routines, taste, and recurring personal facts

Default mapping when `config.promptStyle` is unset:

```text
message -> strict
recent  -> balanced
full    -> contextual
```

If you set `config.promptStyle` explicitly, that override wins.

Example:

```json5
promptStyle: "preference-only"
```

## Model fallback policy

If `config.model` is unset, Active Memory tries to resolve a model in this order:

```text
explicit plugin model
-> current session model
-> agent primary model
-> optional built-in remote fallback
```

`config.modelFallbackPolicy` controls the last step.

Default:

```json5
modelFallbackPolicy: "default-remote"
```

Other option:

```json5
modelFallbackPolicy: "resolved-only"
```

Use `resolved-only` if you want Active Memory to skip recall instead of falling
back to the built-in remote default when no explicit or inherited model is
available.

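For example, pinning a model and choosing `resolved-only` keeps recall from ever silently switching providers. A sketch; the model ref below is illustrative and must be one your gateway can actually resolve:

```json5
{
  plugins: {
    entries: {
      "active-memory": {
        enabled: true,
        config: {
          agents: ["main"],
          // Illustrative model ref; substitute your own provider/model.
          model: "openai/gpt-5.4",
          // Skip recall entirely if this model cannot be resolved.
          modelFallbackPolicy: "resolved-only",
        },
      },
    },
  },
}
```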
## Advanced escape hatches

These options are intentionally not part of the recommended setup.

`config.thinking` can override the blocking memory sub-agent thinking level:

```json5
thinking: "medium"
```

Default:

```json5
thinking: "off"
```

Do not enable this by default. Active Memory runs in the reply path, so extra
thinking time directly increases user-visible latency.

`config.promptAppend` adds extra operator instructions after the default Active
Memory prompt and before the conversation context:

```json5
promptAppend: "Prefer stable long-term preferences over one-off events."
```

`config.promptOverride` replaces the default Active Memory prompt. OpenClaw
still appends the conversation context afterward:

```json5
promptOverride: "You are a memory search agent. Return NONE or one compact user fact."
```

Prompt customization is not recommended unless you are deliberately testing a
different recall contract. The default prompt is tuned to return either `NONE`
or compact user-fact context for the main model.

### `message`

Only the latest user message is sent.

```text
Latest user message only
```

Use this when:

- you want the fastest behavior
- you want the strongest bias toward stable preference recall
- follow-up turns do not need conversational context

Recommended timeout:

- start around `3000` to `5000` ms

### `recent`

The latest user message plus a small recent conversational tail is sent.

```text
Recent conversation tail:
user: ...
assistant: ...
user: ...

Latest user message:
...
```

Use this when:

- you want a better balance of speed and conversational grounding
- follow-up questions often depend on the last few turns

Recommended timeout:

- start around `15000` ms

### `full`

The full conversation is sent to the blocking memory sub-agent.

```text
Full conversation context:
user: ...
assistant: ...
user: ...
...
```

Use this when:

- the strongest recall quality matters more than latency
- the conversation contains important setup far back in the thread

Recommended timeout:

- increase it substantially compared with `message` or `recent`
- start around `15000` ms or higher depending on thread size

In general, timeout should increase with context size:

```text
message < recent < full
```

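As config fragments, the mode/timeout pairings above can be sketched like this; the numbers are illustrative starting points, not shipped defaults:

```json5
// Fastest, preference-focused recall:
{ queryMode: "message", timeoutMs: 4000 }

// Balanced default:
{ queryMode: "recent", timeoutMs: 15000 }

// Strongest recall; scale the timeout with thread size:
{ queryMode: "full", timeoutMs: 30000 }
```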
## Transcript persistence

Each active memory blocking memory sub-agent run creates a real `session.jsonl`
transcript for the duration of the call.

By default, that transcript is temporary:

- it is written to a temp directory
- it is used only for the blocking memory sub-agent run
- it is deleted immediately after the run finishes

If you want to keep those blocking memory sub-agent transcripts on disk for
debugging or inspection, turn persistence on explicitly:

```json5
{
  plugins: {
    entries: {
      "active-memory": {
        enabled: true,
        config: {
          agents: ["main"],
          persistTranscripts: true,
          transcriptDir: "active-memory",
        },
      },
    },
  },
}
```

When enabled, active memory stores transcripts in a separate directory under the
target agent's sessions folder, not in the main user conversation transcript
path.

The default layout is conceptually:

```text
agents/<agent>/sessions/active-memory/<blocking-memory-sub-agent-session-id>.jsonl
```

You can change the relative subdirectory with `config.transcriptDir`.

Use this carefully:

- blocking memory sub-agent transcripts can accumulate quickly on busy sessions
- `full` query mode can duplicate a lot of conversation context
- these transcripts contain hidden prompt context and recalled memories

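Because persisted transcripts accumulate, you may want a periodic cleanup. A sketch, assuming the documented default layout; the helper name and the 7-day retention window are illustrative choices, not part of the plugin:

```shell
# Hypothetical cleanup helper for persisted Active Memory transcripts.
# Assumes the documented layout under the agent sessions folder.
prune_active_memory_transcripts() {
  dir="$1"
  # Nothing to do if persistence was never enabled for this agent.
  [ -d "$dir" ] || return 0
  # Delete transcripts older than 7 days (illustrative retention window).
  find "$dir" -name '*.jsonl' -mtime +7 -print -delete
}

prune_active_memory_transcripts "agents/main/sessions/active-memory"
```

Remember these files contain recalled memories and hidden prompt context, so treat them like any other sensitive log when copying or archiving.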
## Configuration

All active memory configuration lives under:

```text
plugins.entries.active-memory
```

The most important fields are:

| Key                         | Type                                                                                                 | Meaning                                                                                                |
| --------------------------- | ---------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ |
| `enabled`                   | `boolean`                                                                                            | Enables the plugin itself                                                                              |
| `config.agents`             | `string[]`                                                                                           | Agent ids that may use active memory                                                                   |
| `config.model`              | `string`                                                                                             | Optional blocking memory sub-agent model ref; when unset, active memory uses the current session model |
| `config.queryMode`          | `"message" \| "recent" \| "full"`                                                                    | Controls how much conversation the blocking memory sub-agent sees                                      |
| `config.promptStyle`        | `"balanced" \| "strict" \| "contextual" \| "recall-heavy" \| "precision-heavy" \| "preference-only"` | Controls how eager or strict the blocking memory sub-agent is when deciding whether to return memory   |
| `config.thinking`           | `"off" \| "minimal" \| "low" \| "medium" \| "high" \| "xhigh" \| "adaptive"`                         | Advanced thinking override for the blocking memory sub-agent; default `off` for speed                  |
| `config.promptOverride`     | `string`                                                                                             | Advanced full prompt replacement; not recommended for normal use                                       |
| `config.promptAppend`       | `string`                                                                                             | Advanced extra instructions appended to the default or overridden prompt                               |
| `config.timeoutMs`          | `number`                                                                                             | Hard timeout for the blocking memory sub-agent                                                         |
| `config.maxSummaryChars`    | `number`                                                                                             | Maximum total characters allowed in the active-memory summary                                          |
| `config.logging`            | `boolean`                                                                                            | Emits active memory logs while tuning                                                                  |
| `config.persistTranscripts` | `boolean`                                                                                            | Keeps blocking memory sub-agent transcripts on disk instead of deleting temp files                     |
| `config.transcriptDir`      | `string`                                                                                             | Relative blocking memory sub-agent transcript directory under the agent sessions folder                |

Useful tuning fields:

| Key                           | Type     | Meaning                                                       |
| ----------------------------- | -------- | ------------------------------------------------------------- |
| `config.recentUserTurns`      | `number` | Prior user turns to include when `queryMode` is `recent`      |
| `config.recentAssistantTurns` | `number` | Prior assistant turns to include when `queryMode` is `recent` |
| `config.recentUserChars`      | `number` | Max chars per recent user turn                                |
| `config.recentAssistantChars` | `number` | Max chars per recent assistant turn                           |
| `config.cacheTtlMs`           | `number` | How long the result of an identical query is reused           |

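Combining the tuning fields, a tighter `recent` configuration might look like this sketch; all values are illustrative (they sit within the plugin's documented schema bounds), not recommended defaults:

```json5
{
  plugins: {
    entries: {
      "active-memory": {
        config: {
          queryMode: "recent",
          // Keep the conversational tail small and cheap.
          recentUserTurns: 2,
          recentAssistantTurns: 1,
          recentUserChars: 300,
          recentAssistantChars: 200,
          // Reuse results for repeated identical queries for 30s.
          cacheTtlMs: 30000,
        },
      },
    },
  },
}
```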
## Recommended setup

Start with `recent`.

```json5
{
  plugins: {
    entries: {
      "active-memory": {
        enabled: true,
        config: {
          agents: ["main"],
          queryMode: "recent",
          promptStyle: "balanced",
          timeoutMs: 15000,
          maxSummaryChars: 220,
          logging: true,
        },
      },
    },
  },
}
```

If you want to inspect live behavior while tuning, use `/verbose on` in the
session instead of looking for a separate active-memory debug command.

Then move to:

- `message` if you want lower latency
- `full` if you decide extra context is worth the slower blocking memory sub-agent

## Debugging

If active memory is not showing up where you expect:

1. Confirm the plugin is enabled under `plugins.entries.active-memory.enabled`.
2. Confirm the current agent id is listed in `config.agents`.
3. Confirm you are testing through an interactive persistent chat session.
4. Turn on `config.logging: true` and watch the gateway logs.
5. Verify memory search itself works with `openclaw memory status --deep`.

If memory hits are noisy, tighten:

- `maxSummaryChars`

If active memory is too slow:

- switch to a cheaper `queryMode` (`full` -> `recent` -> `message`)
- lower `timeoutMs`
- reduce recent turn counts
- reduce per-turn char caps

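Put together, a latency-focused variant of the recommended setup might look like this sketch; the values are illustrative starting points, not tested defaults:

```json5
{
  plugins: {
    entries: {
      "active-memory": {
        config: {
          // Cheapest mode: only the latest user message is sent.
          queryMode: "message",
          promptStyle: "strict",
          timeoutMs: 4000,
          maxSummaryChars: 160,
        },
      },
    },
  },
}
```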
## Related pages

- [Memory Search](/concepts/memory-search)
- [Memory configuration reference](/reference/memory-config)
- [Plugin SDK setup](/plugins/sdk-setup)

@@ -138,5 +138,6 @@ earlier conversations. This is opt-in via

## Further reading

- [Active Memory](/concepts/active-memory) -- sub-agent memory for interactive chat sessions
- [Memory](/concepts/memory) -- file layout, backends, tools
- [Memory configuration reference](/reference/memory-config) -- all config knobs

@@ -17,10 +17,22 @@ conceptual overviews, see:
- [Builtin Engine](/concepts/memory-builtin) -- default SQLite backend
- [QMD Engine](/concepts/memory-qmd) -- local-first sidecar
- [Memory Search](/concepts/memory-search) -- search pipeline and tuning
- [Active Memory](/concepts/active-memory) -- enabling the memory sub-agent for interactive sessions

All memory search settings live under `agents.defaults.memorySearch` in
`openclaw.json` unless noted otherwise.

If you are looking for the **active memory** feature toggle and sub-agent config,
that lives under `plugins.entries.active-memory` instead of `memorySearch`.

Active memory uses a two-gate model:

1. the plugin must be enabled and target the current agent id
2. the request must be an eligible interactive persistent chat session

See [Active Memory](/concepts/active-memory) for the activation model,
plugin-owned config, transcript persistence, and safe rollout pattern.

---

## Provider selection

1448 extensions/active-memory/index.test.ts Normal file
File diff suppressed because it is too large
1559 extensions/active-memory/index.ts Normal file
File diff suppressed because it is too large
120 extensions/active-memory/openclaw.plugin.json Normal file
@@ -0,0 +1,120 @@
{
  "id": "active-memory",
  "name": "Active Memory",
  "description": "Runs a bounded blocking memory sub-agent before eligible conversational replies and injects relevant memory into prompt context.",
  "configSchema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {
      "enabled": { "type": "boolean" },
      "agents": {
        "type": "array",
        "items": { "type": "string" }
      },
      "model": { "type": "string" },
      "modelFallbackPolicy": {
        "type": "string",
        "enum": ["default-remote", "resolved-only"]
      },
      "allowedChatTypes": {
        "type": "array",
        "items": {
          "type": "string",
          "enum": ["direct", "group", "channel"]
        }
      },
      "thinking": {
        "type": "string",
        "enum": ["off", "minimal", "low", "medium", "high", "xhigh", "adaptive"]
      },
      "timeoutMs": { "type": "integer", "minimum": 250 },
      "queryMode": {
        "type": "string",
        "enum": ["message", "recent", "full"]
      },
      "promptStyle": {
        "type": "string",
        "enum": [
          "balanced",
          "strict",
          "contextual",
          "recall-heavy",
          "precision-heavy",
          "preference-only"
        ]
      },
      "promptOverride": { "type": "string" },
      "promptAppend": { "type": "string" },
      "maxSummaryChars": { "type": "integer", "minimum": 40, "maximum": 1000 },
      "recentUserTurns": { "type": "integer", "minimum": 0, "maximum": 4 },
      "recentAssistantTurns": { "type": "integer", "minimum": 0, "maximum": 3 },
      "recentUserChars": { "type": "integer", "minimum": 40, "maximum": 1000 },
      "recentAssistantChars": { "type": "integer", "minimum": 40, "maximum": 1000 },
      "logging": { "type": "boolean" },
      "persistTranscripts": { "type": "boolean" },
      "transcriptDir": { "type": "string" },
      "cacheTtlMs": { "type": "integer", "minimum": 1000, "maximum": 120000 }
    }
  },
  "uiHints": {
    "enabled": {
      "label": "Active Memory Recall",
      "help": "Globally enable or pause Active Memory recall while keeping the plugin command available."
    },
    "agents": {
      "label": "Target Agents",
      "help": "Explicit agent ids that may use active memory."
    },
    "model": {
      "label": "Memory Model",
      "help": "Provider/model used for the blocking memory sub-agent."
    },
    "modelFallbackPolicy": {
      "label": "Model Fallback Policy",
      "help": "Choose whether Active Memory falls back to the built-in remote default model when no explicit or inherited model is available."
    },
    "allowedChatTypes": {
      "label": "Allowed Chat Types",
      "help": "Choose which session types may run Active Memory. Defaults to direct-message style sessions only."
    },
    "timeoutMs": {
      "label": "Timeout (ms)"
    },
    "queryMode": {
      "label": "Query Mode",
      "help": "Choose whether the blocking memory sub-agent sees only the latest user message, a small recent tail, or the full conversation."
    },
    "promptStyle": {
      "label": "Prompt Style",
      "help": "Choose how eager or strict the blocking memory sub-agent should be when deciding whether to return memory."
    },
    "thinking": {
      "label": "Thinking Override",
      "help": "Advanced: optional thinking level for the blocking memory sub-agent. Defaults to off for speed."
    },
    "promptOverride": {
      "label": "Prompt Override",
      "help": "Advanced: replace the default Active Memory sub-agent instructions. Conversation context is still appended."
    },
    "promptAppend": {
      "label": "Prompt Append",
      "help": "Advanced: append extra operator instructions after the default Active Memory sub-agent instructions."
    },
    "maxSummaryChars": {
      "label": "Max Summary Characters",
      "help": "Maximum total characters allowed in the active-memory summary."
    },
    "logging": {
      "label": "Enable Logging",
      "help": "Emit active memory timing and result logs."
    },
    "persistTranscripts": {
      "label": "Persist Transcripts",
      "help": "Keep blocking memory sub-agent session transcripts on disk in a separate plugin-owned directory."
    },
    "transcriptDir": {
      "label": "Transcript Directory",
      "help": "Relative directory under the agent sessions folder used when transcript persistence is enabled."
    }
  }
}
@@ -1,13 +1,18 @@
 import { describe, expect, it } from "vitest";
 import { slackApprovalNativeRuntime } from "./approval-handler.runtime.js";

+type SlackPayload = {
+  text: string;
+  blocks?: unknown;
+};
+
 function findSlackActionsBlock(blocks: Array<{ type?: string; elements?: unknown[] }>) {
   return blocks.find((block) => block.type === "actions");
 }

 describe("slackApprovalNativeRuntime", () => {
   it("renders only the allowed pending actions", async () => {
-    const payload = await slackApprovalNativeRuntime.presentation.buildPendingPayload({
+    const payload = (await slackApprovalNativeRuntime.presentation.buildPendingPayload({
       cfg: {} as never,
       accountId: "default",
       context: {
@@ -44,7 +49,7 @@ describe("slackApprovalNativeRuntime", () => {
         },
       ],
     } as never,
-    });
+    })) as SlackPayload;

     expect(payload.text).toContain("*Exec approval required*");
     const actionsBlock = findSlackActionsBlock(
@@ -101,8 +106,11 @@ describe("slackApprovalNativeRuntime", () => {
     if (result.kind !== "update") {
       throw new Error("expected Slack resolved update payload");
     }
-    expect(result.payload.text).toContain("*Exec approval: Allowed once*");
-    expect(result.payload.text).toContain("Resolved by <@U123APPROVER>.");
-    expect(result.payload.blocks.some((block) => block.type === "actions")).toBe(false);
+    const payload = result.payload as SlackPayload;
+    expect(payload.text).toContain("*Exec approval: Allowed once*");
+    expect(payload.text).toContain("Resolved by <@U123APPROVER>.");
+    expect(
+      (payload.blocks as Array<{ type?: string }>).some((block) => block.type === "actions"),
+    ).toBe(false);
   });
 });
@@ -1,9 +1,14 @@
 import { describe, expect, it, vi } from "vitest";
 import { telegramApprovalNativeRuntime } from "./approval-handler.runtime.js";

+type TelegramPayload = {
+  text: string;
+  buttons?: Array<Array<{ text: string }>>;
+};
+
 describe("telegramApprovalNativeRuntime", () => {
   it("renders only the allowed pending buttons", async () => {
-    const payload = await telegramApprovalNativeRuntime.presentation.buildPendingPayload({
+    const payload = (await telegramApprovalNativeRuntime.presentation.buildPendingPayload({
       cfg: {} as never,
       accountId: "default",
       context: {
@@ -38,7 +43,7 @@ describe("telegramApprovalNativeRuntime", () => {
|
||||
},
|
||||
],
|
||||
} as never,
|
||||
});
|
||||
})) as TelegramPayload;
|
||||
|
||||
expect(payload.text).toContain("/approve req-1 allow-once");
|
||||
expect(payload.text).not.toContain("allow-always");
|
||||
|
||||
@@ -27,10 +27,7 @@ vi.mock("./pi-embedded-runner/runs.js", () => ({
}));

vi.mock("./model-selection.js", () => ({
  normalizeStoredOverrideModel: (params: {
    providerOverride?: string;
    modelOverride?: string;
  }) => {
  normalizeStoredOverrideModel: (params: { providerOverride?: string; modelOverride?: string }) => {
    const providerOverride = params.providerOverride?.trim();
    const modelOverride = params.modelOverride?.trim();
    if (!providerOverride || !modelOverride) {

@@ -1,3 +1,6 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { describe, expect, it } from "vitest";
import type { OpenClawConfig } from "../config/config.js";
import {
@@ -5,6 +8,7 @@ import {
  DEFAULT_BOOTSTRAP_MAX_CHARS,
  DEFAULT_BOOTSTRAP_PROMPT_TRUNCATION_WARNING_MODE,
  DEFAULT_BOOTSTRAP_TOTAL_MAX_CHARS,
  ensureSessionHeader,
  resolveBootstrapMaxChars,
  resolveBootstrapPromptTruncationWarningMode,
  resolveBootstrapTotalMaxChars,
@@ -25,6 +29,22 @@ const createLargeBootstrapFiles = (): WorkspaceBootstrapFile[] => [
  makeFile({ name: "SOUL.md", path: "/tmp/SOUL.md", content: "b".repeat(10_000) }),
  makeFile({ name: "USER.md", path: "/tmp/USER.md", content: "c".repeat(10_000) }),
];

describe("ensureSessionHeader", () => {
  it("creates transcript files with restrictive permissions", async () => {
    const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-session-header-"));
    try {
      const sessionFile = path.join(tempDir, "nested", "session.jsonl");
      await ensureSessionHeader({ sessionFile, sessionId: "session-1", cwd: tempDir });

      expect((await fs.stat(path.dirname(sessionFile))).mode & 0o777).toBe(0o700);
      expect((await fs.stat(sessionFile)).mode & 0o777).toBe(0o600);
    } finally {
      await fs.rm(tempDir, { recursive: true, force: true });
    }
  });
});

describe("buildBootstrapContextFiles", () => {
  it("keeps missing markers", () => {
    const files = [makeFile({ missing: true, content: undefined })];

@@ -184,7 +184,7 @@ export async function ensureSessionHeader(params: {
  } catch {
    // create
  }
  await fs.mkdir(path.dirname(file), { recursive: true });
  await fs.mkdir(path.dirname(file), { recursive: true, mode: 0o700 });
  const sessionVersion = 2;
  const entry = {
    type: "session",
@@ -193,7 +193,10 @@ export async function ensureSessionHeader(params: {
    timestamp: new Date().toISOString(),
    cwd: params.cwd,
  };
  await fs.writeFile(file, `${JSON.stringify(entry)}\n`, "utf-8");
  await fs.writeFile(file, `${JSON.stringify(entry)}\n`, {
    encoding: "utf-8",
    mode: 0o600,
  });
}

export function buildBootstrapContextFiles(

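The permission hardening in this hunk can be reproduced in isolation. The sketch below is not OpenClaw's actual helper (`writeSessionHeaderSync` is a hypothetical stand-in, using the synchronous `node:fs` API for brevity where the patch uses `fs/promises`); it assumes a POSIX system where mode bits apply:

```typescript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

// Same recipe as the patched ensureSessionHeader: owner-only
// directory (0o700) and owner-only transcript file (0o600).
function writeSessionHeaderSync(file: string, entry: object): void {
  fs.mkdirSync(path.dirname(file), { recursive: true, mode: 0o700 });
  fs.writeFileSync(file, `${JSON.stringify(entry)}\n`, { encoding: "utf-8", mode: 0o600 });
}

const dir = fs.mkdtempSync(path.join(os.tmpdir(), "perm-demo-"));
const file = path.join(dir, "nested", "session.jsonl");
writeSessionHeaderSync(file, { type: "session", version: 2 });
const dirMode = fs.statSync(path.dirname(file)).mode & 0o777;
const fileMode = fs.statSync(file).mode & 0o777;
console.log(dirMode.toString(8), fileMode.toString(8));
fs.rmSync(dir, { recursive: true, force: true });
```

Note that `0o700`/`0o600` request only owner bits, so a typical umask leaves them intact, which is what the new test asserts with `mode & 0o777`.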
@@ -832,7 +832,6 @@ export function buildBuiltinChatCommands(): ChatCommandDefinition[] {
  registerAlias(commands, "reasoning", "/reason");
  registerAlias(commands, "elevated", "/elev");
  registerAlias(commands, "steer", "/tell");

  assertCommandRegistry(commands);
  return commands;
}

@@ -7,7 +7,9 @@ import {
  abortEmbeddedPiRun,
  isEmbeddedPiRunActive,
} from "../../agents/pi-embedded-runner/runs.js";
import * as sessionTypesModule from "../../config/sessions.js";
import type { SessionEntry } from "../../config/sessions.js";
import { loadSessionStore, saveSessionStore } from "../../config/sessions.js";
import {
  clearMemoryPluginState,
  registerMemoryFlushPlanResolver,
@@ -482,6 +484,285 @@ describe("runReplyAgent block streaming", () => {
  });
});

describe("runReplyAgent Active Memory inline debug", () => {
  it("appends inline Active Memory debug payload when verbose is enabled", async () => {
    const tmp = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-active-memory-inline-"));
    const storePath = path.join(tmp, "sessions.json");
    const sessionKey = "main";
    const sessionEntry: SessionEntry = {
      sessionId: "session",
      updatedAt: Date.now(),
    };

    await fs.writeFile(
      storePath,
      JSON.stringify(
        {
          [sessionKey]: sessionEntry,
        },
        null,
        2,
      ),
      "utf-8",
    );

    runEmbeddedPiAgentMock.mockImplementationOnce(async () => {
      const latest = loadSessionStore(storePath, { skipCache: true });
      latest[sessionKey] = {
        ...latest[sessionKey],
        pluginDebugEntries: [
          {
            pluginId: "active-memory",
            lines: [
              "🧩 Active Memory: ok 842ms recent 34 chars",
              "🔎 Active Memory Debug: Lemon pepper wings with blue cheese.",
            ],
          },
        ],
      };
      await saveSessionStore(storePath, latest);
      return {
        payloads: [{ text: "Normal reply" }],
        meta: {},
      };
    });

    const typing = createMockTypingController();
    const sessionCtx = {
      Provider: "telegram",
      OriginatingTo: "chat:1",
      AccountId: "primary",
      MessageSid: "msg",
    } as unknown as TemplateContext;
    const resolvedQueue = { mode: "interrupt" } as unknown as QueueSettings;
    const followupRun = {
      prompt: "hello",
      summaryLine: "hello",
      enqueuedAt: Date.now(),
      run: {
        agentId: "main",
        sessionId: "session",
        sessionKey,
        messageProvider: "telegram",
        sessionFile: "/tmp/session.jsonl",
        workspaceDir: "/tmp",
        config: {},
        skillsSnapshot: {},
        provider: "anthropic",
        model: "claude",
        thinkLevel: "low",
        verboseLevel: "on",
        elevatedLevel: "off",
        bashElevated: {
          enabled: false,
          allowed: false,
          defaultLevel: "off",
        },
        timeoutMs: 1_000,
        blockReplyBreak: "message_end",
      },
    } as unknown as FollowupRun;

    const result = await runReplyAgent({
      commandBody: "hello",
      followupRun,
      queueKey: sessionKey,
      resolvedQueue,
      shouldSteer: false,
      shouldFollowup: false,
      isActive: false,
      isStreaming: false,
      typing,
      sessionCtx,
      sessionEntry,
      sessionStore: { [sessionKey]: sessionEntry },
      sessionKey,
      storePath,
      defaultModel: "anthropic/claude-opus-4-6",
      resolvedVerboseLevel: "on",
      isNewSession: false,
      blockStreamingEnabled: false,
      resolvedBlockStreamingBreak: "message_end",
      shouldInjectGroupIntro: false,
      typingMode: "instant",
    });

    expect(Array.isArray(result)).toBe(true);
    expect((result as { text?: string }[]).map((payload) => payload.text)).toEqual([
      "🧩 Active Memory: ok 842ms recent 34 chars\n🔎 Active Memory Debug: Lemon pepper wings with blue cheese.",
      "Normal reply",
    ]);
  });

  it("does not reload the session store when verbose is disabled", async () => {
    const tmp = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-active-memory-inline-"));
    const storePath = path.join(tmp, "sessions.json");
    const sessionKey = "main";
    const sessionEntry: SessionEntry = {
      sessionId: "session",
      updatedAt: Date.now(),
    };

    await fs.writeFile(
      storePath,
      JSON.stringify(
        {
          [sessionKey]: sessionEntry,
        },
        null,
        2,
      ),
      "utf-8",
    );

    const loadSessionStoreSpy = vi.spyOn(sessionTypesModule, "loadSessionStore");
    runEmbeddedPiAgentMock.mockResolvedValueOnce({
      payloads: [{ text: "Normal reply" }],
      meta: {},
    });

    const typing = createMockTypingController();
    const sessionCtx = {
      Provider: "telegram",
      OriginatingTo: "chat:1",
      AccountId: "primary",
      MessageSid: "msg",
    } as unknown as TemplateContext;
    const resolvedQueue = { mode: "interrupt" } as unknown as QueueSettings;
    const followupRun = {
      prompt: "hello",
      summaryLine: "hello",
      enqueuedAt: Date.now(),
      run: {
        agentId: "main",
        sessionId: "session",
        sessionKey,
        messageProvider: "telegram",
        sessionFile: "/tmp/session.jsonl",
        workspaceDir: "/tmp",
        config: {},
        skillsSnapshot: {},
        provider: "anthropic",
        model: "claude",
        thinkLevel: "low",
        verboseLevel: "off",
        elevatedLevel: "off",
        bashElevated: {
          enabled: false,
          allowed: false,
          defaultLevel: "off",
        },
        timeoutMs: 1_000,
        blockReplyBreak: "message_end",
      },
    } as unknown as FollowupRun;

    const result = await runReplyAgent({
      commandBody: "hello",
      followupRun,
      queueKey: sessionKey,
      resolvedQueue,
      shouldSteer: false,
      shouldFollowup: false,
      isActive: false,
      isStreaming: false,
      typing,
      sessionCtx,
      sessionEntry,
      sessionStore: { [sessionKey]: sessionEntry },
      sessionKey,
      storePath,
      defaultModel: "anthropic/claude-opus-4-6",
      resolvedVerboseLevel: "off",
      isNewSession: false,
      blockStreamingEnabled: false,
      resolvedBlockStreamingBreak: "message_end",
      shouldInjectGroupIntro: false,
      typingMode: "instant",
    });

    expect(loadSessionStoreSpy).not.toHaveBeenCalledWith(storePath, { skipCache: true });
    expect(result).toMatchObject({ text: "Normal reply" });
  });
});

describe("runReplyAgent claude-cli routing", () => {
  function createRun() {
    const typing = createMockTypingController();
    const sessionCtx = {
      Provider: "webchat",
      OriginatingTo: "session:1",
      AccountId: "primary",
      MessageSid: "msg",
    } as unknown as TemplateContext;
    const resolvedQueue = { mode: "interrupt" } as unknown as QueueSettings;
    const followupRun = {
      prompt: "hello",
      summaryLine: "hello",
      enqueuedAt: Date.now(),
      run: {
        sessionId: "session",
        sessionKey: "main",
        messageProvider: "webchat",
        sessionFile: "/tmp/session.jsonl",
        workspaceDir: "/tmp",
        config: { agents: { defaults: { cliBackends: { "claude-cli": {} } } } },
        skillsSnapshot: {},
        provider: "claude-cli",
        model: "opus-4.5",
        thinkLevel: "low",
        verboseLevel: "off",
        elevatedLevel: "off",
        bashElevated: {
          enabled: false,
          allowed: false,
          defaultLevel: "off",
        },
        timeoutMs: 1_000,
        blockReplyBreak: "message_end",
      },
    } as unknown as FollowupRun;

    return runReplyAgent({
      commandBody: "hello",
      followupRun,
      queueKey: "main",
      resolvedQueue,
      shouldSteer: false,
      shouldFollowup: false,
      isActive: false,
      isStreaming: false,
      typing,
      sessionCtx,
      defaultModel: "claude-cli/opus-4.5",
      resolvedVerboseLevel: "off",
      isNewSession: false,
      blockStreamingEnabled: false,
      resolvedBlockStreamingBreak: "message_end",
      shouldInjectGroupIntro: false,
      typingMode: "instant",
    });
  }

  it("uses the CLI runner for claude-cli provider", async () => {
    runCliAgentMock.mockResolvedValueOnce({
      payloads: [{ text: "ok" }],
      meta: {
        agentMeta: {
          provider: "claude-cli",
          model: "opus-4.5",
        },
      },
    });

    const result = await createRun();

    expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
    expect(runCliAgentMock).toHaveBeenCalledTimes(1);
    expect(result).toMatchObject({ text: "ok" });
  });
});

describe("runReplyAgent messaging tool suppression", () => {
  function createRun(
    messageProvider = "slack",

@@ -4,7 +4,12 @@ import { resolveModelAuthMode } from "../../agents/model-auth.js";
import { isCliProvider } from "../../agents/model-selection.js";
import { queueEmbeddedPiMessage } from "../../agents/pi-embedded.js";
import { hasNonzeroUsage } from "../../agents/usage.js";
import { type SessionEntry, updateSessionStoreEntry } from "../../config/sessions.js";
import {
  loadSessionStore,
  resolveSessionPluginDebugLines,
  type SessionEntry,
  updateSessionStoreEntry,
} from "../../config/sessions.js";
import type { TypingMode } from "../../config/types.js";
import { emitAgentEvent } from "../../infra/agent-events.js";
import { emitDiagnosticEvent, isDiagnosticsEnabled } from "../../infra/diagnostic-events.js";
@@ -65,6 +70,39 @@ import type { TypingController } from "./typing.js";

const BLOCK_REPLY_SEND_TIMEOUT_MS = 15_000;

function buildInlinePluginStatusPayload(entry: SessionEntry | undefined): ReplyPayload | undefined {
  const lines = resolveSessionPluginDebugLines(entry);
  if (lines.length === 0) {
    return undefined;
  }
  return { text: lines.join("\n") };
}

function refreshSessionEntryFromStore(params: {
  storePath?: string;
  sessionKey?: string;
  fallbackEntry?: SessionEntry;
  activeSessionStore?: Record<string, SessionEntry>;
}): SessionEntry | undefined {
  const { storePath, sessionKey, fallbackEntry, activeSessionStore } = params;
  if (!storePath || !sessionKey) {
    return fallbackEntry;
  }
  try {
    const latestStore = loadSessionStore(storePath, { skipCache: true });
    const latestEntry = latestStore?.[sessionKey];
    if (!latestEntry) {
      return fallbackEntry;
    }
    if (activeSessionStore) {
      activeSessionStore[sessionKey] = latestEntry;
    }
    return latestEntry;
  } catch {
    return fallbackEntry;
  }
}

export async function runReplyAgent(params: {
  commandBody: string;
  followupRun: FollowupRun;
@@ -652,6 +690,15 @@ export async function runReplyAgent(params: {
    }
  }

  if (verboseEnabled) {
    activeSessionEntry = refreshSessionEntryFromStore({
      storePath,
      sessionKey,
      fallbackEntry: activeSessionEntry,
      activeSessionStore,
    });
  }

  // If verbose is enabled, prepend operational run notices.
  let finalPayloads = guardedReplyPayloads;
  const verboseNotices: ReplyPayload[] = [];
@@ -758,8 +805,15 @@ export async function runReplyAgent(params: {
      verboseNotices.push({ text: `🧹 Auto-compaction complete${suffix}.` });
    }
  }
  if (verboseNotices.length > 0) {
    finalPayloads = [...verboseNotices, ...finalPayloads];
  const prefixPayloads = [...verboseNotices];
  if (verboseEnabled) {
    const pluginStatusPayload = buildInlinePluginStatusPayload(activeSessionEntry);
    if (pluginStatusPayload) {
      prefixPayloads.push(pluginStatusPayload);
    }
  }
  if (prefixPayloads.length > 0) {
    finalPayloads = [...prefixPayloads, ...finalPayloads];
  }
  if (responseUsageLine) {
    finalPayloads = appendUsageLine(finalPayloads, responseUsageLine);

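The reload-with-fallback shape of `refreshSessionEntryFromStore` can be sketched generically. Everything here is a stand-in (`fakeDisk`, `loadStoreSkippingCache`, `refreshEntry` are hypothetical names; the real code re-reads `sessions.json` from disk via `loadSessionStore(storePath, { skipCache: true })`):

```typescript
type Entry = { sessionId: string; updatedAt: number };
type Store = Record<string, Entry>;

// Stand-in for the cache-skipping store load in the real code.
const fakeDisk = new Map<string, Store>();
function loadStoreSkippingCache(storePath: string): Store | undefined {
  return fakeDisk.get(storePath);
}

// Mirrors the pattern: prefer the freshly loaded entry, fall back to the
// stale in-memory one, and sync the active store when a newer entry exists.
function refreshEntry(params: {
  storePath?: string;
  sessionKey?: string;
  fallbackEntry?: Entry;
  activeStore?: Store;
}): Entry | undefined {
  const { storePath, sessionKey, fallbackEntry, activeStore } = params;
  if (!storePath || !sessionKey) {
    return fallbackEntry;
  }
  try {
    const latest = loadStoreSkippingCache(storePath)?.[sessionKey];
    if (!latest) {
      return fallbackEntry;
    }
    if (activeStore) {
      activeStore[sessionKey] = latest;
    }
    return latest;
  } catch {
    return fallbackEntry;
  }
}

// A plugin wrote a newer entry "to disk" behind the in-memory store's back.
const stale: Entry = { sessionId: "session", updatedAt: 1 };
const active: Store = { main: stale };
fakeDisk.set("/tmp/sessions.json", { main: { sessionId: "session", updatedAt: 2 } });

const refreshed = refreshEntry({
  storePath: "/tmp/sessions.json",
  sessionKey: "main",
  fallbackEntry: stale,
  activeStore: active,
});
console.log(refreshed?.updatedAt, active.main.updatedAt);
```

This is why the second new test above can assert that the store is never reloaded with `{ skipCache: true }` when verbose is off: the refresh path is gated entirely behind `verboseEnabled`.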
@@ -123,6 +123,68 @@ describe("buildStatusMessage", () => {
    expect(normalized).toContain("Reasoning: on");
  });

  it("shows plugin status lines only when verbose is enabled", () => {
    const visible = normalizeTestText(
      buildStatusMessage({
        agent: {
          model: "anthropic/pi:opus",
        },
        sessionEntry: {
          sessionId: "abc",
          updatedAt: 0,
          verboseLevel: "on",
          pluginDebugEntries: [
            { pluginId: "active-memory", lines: ["🧩 Active Memory: timeout 15s recent"] },
          ],
        },
        sessionKey: "agent:main:main",
        queue: { mode: "collect", depth: 0 },
      }),
    );
    const hidden = normalizeTestText(
      buildStatusMessage({
        agent: {
          model: "anthropic/pi:opus",
        },
        sessionEntry: {
          sessionId: "abc",
          updatedAt: 0,
          verboseLevel: "off",
          pluginDebugEntries: [
            { pluginId: "active-memory", lines: ["🧩 Active Memory: timeout 15s recent"] },
          ],
        },
        sessionKey: "agent:main:main",
        queue: { mode: "collect", depth: 0 },
      }),
    );

    expect(visible).toContain("Active Memory: timeout 15s recent");
    expect(hidden).not.toContain("Active Memory: timeout 15s recent");
  });

  it("shows structured plugin debug lines in verbose status", () => {
    const visible = normalizeTestText(
      buildStatusMessage({
        agent: {
          model: "anthropic/pi:opus",
        },
        sessionEntry: {
          sessionId: "abc",
          updatedAt: 0,
          verboseLevel: "on",
          pluginDebugEntries: [
            { pluginId: "active-memory", lines: ["🧩 Active Memory: ok 842ms recent 34 chars"] },
          ],
        },
        sessionKey: "agent:main:main",
        queue: { mode: "collect", depth: 0 },
      }),
    );

    expect(visible).toContain("Active Memory: ok 842ms recent 34 chars");
  });

  it("shows fast mode when enabled", () => {
    const text = buildStatusMessage({
      agent: {

@@ -17,6 +17,7 @@ import { resolveChannelModelOverride } from "../channels/model-overrides.js";
import type { OpenClawConfig } from "../config/config.js";
import {
  resolveMainSessionKey,
  resolveSessionPluginDebugLines,
  resolveSessionFilePath,
  resolveSessionFilePathOptions,
  type SessionEntry,
@@ -673,6 +674,8 @@ export function buildStatusMessage(args: StatusArgs): string {
  const queueDetails = formatQueueDetails(args.queue);
  const verboseLabel =
    verboseLevel === "full" ? "verbose:full" : verboseLevel === "on" ? "verbose" : null;
  const pluginDebugLines = verboseLevel !== "off" ? resolveSessionPluginDebugLines(entry) : [];
  const pluginStatusLine = pluginDebugLines.length > 0 ? pluginDebugLines.join(" · ") : null;
  const elevatedLabel =
    elevatedLevel && elevatedLevel !== "off"
      ? elevatedLevel === "on"
@@ -833,6 +836,7 @@ export function buildStatusMessage(args: StatusArgs): string {
    args.subagentsLine,
    args.taskLine,
    `⚙️ ${optionsLine}`,
    pluginStatusLine ? `🧩 ${pluginStatusLine}` : null,
    voiceLine,
    activationLine,
  ]

@@ -103,6 +103,11 @@ export type SessionCompactionCheckpoint = {
  postCompaction: SessionCompactionTranscriptReference;
};

export type SessionPluginDebugEntry = {
  pluginId: string;
  lines: string[];
};

export type SessionEntry = {
  /**
   * Last delivered heartbeat payload (used to suppress duplicate heartbeat notifications).
@@ -238,9 +243,28 @@ export type SessionEntry = {
  lastThreadId?: string | number;
  skillsSnapshot?: SessionSkillSnapshot;
  systemPromptReport?: SessionSystemPromptReport;
  /**
   * Generic plugin-owned runtime debug entries shown in verbose status surfaces.
   * Each plugin owns and may overwrite only its own entry between turns.
   */
  pluginDebugEntries?: SessionPluginDebugEntry[];
  acp?: SessionAcpMeta;
};

export function resolveSessionPluginDebugLines(
  entry: Pick<SessionEntry, "pluginDebugEntries"> | undefined,
): string[] {
  return Array.isArray(entry?.pluginDebugEntries)
    ? entry.pluginDebugEntries.flatMap((pluginEntry) =>
        Array.isArray(pluginEntry?.lines)
          ? pluginEntry.lines.filter(
              (line): line is string => typeof line === "string" && line.trim().length > 0,
            )
          : [],
      )
    : [];
}

export function normalizeSessionRuntimeModelFields(entry: SessionEntry): SessionEntry {
  const normalizedModel = normalizeOptionalString(entry.model);
  const normalizedProvider = normalizeOptionalString(entry.modelProvider);
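The new `resolveSessionPluginDebugLines` helper is defensive by design: entries written by plugins may be missing or malformed, so it flattens per-plugin lines while dropping blanks and non-strings. A standalone sketch (slightly restructured so it type-checks in isolation, with an inlined entry type):

```typescript
type SessionPluginDebugEntry = { pluginId: string; lines: string[] };

// Flatten each plugin's debug lines; tolerate missing arrays and drop
// anything that is not a non-empty string.
function resolveSessionPluginDebugLines(
  entry: { pluginDebugEntries?: SessionPluginDebugEntry[] } | undefined,
): string[] {
  const entries = entry?.pluginDebugEntries;
  if (!Array.isArray(entries)) {
    return [];
  }
  return entries.flatMap((pluginEntry) =>
    Array.isArray(pluginEntry?.lines)
      ? pluginEntry.lines.filter(
          (line): line is string => typeof line === "string" && line.trim().length > 0,
        )
      : [],
  );
}

const lines = resolveSessionPluginDebugLines({
  pluginDebugEntries: [
    { pluginId: "active-memory", lines: ["🧩 Active Memory: ok 842ms", "", "   "] },
    { pluginId: "malformed", lines: undefined as unknown as string[] },
  ],
});
console.log(lines); // [ "🧩 Active Memory: ok 842ms" ]
```

The same helper feeds both surfaces touched by this commit: the `/status` `🧩` line (joined with " · ") and the inline verbose debug payload (joined with newlines).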