mirror of
https://github.com/openclaw/openclaw.git
synced 2026-04-12 01:31:08 +00:00
docs: cover 2026.4.7 changelog gaps
@@ -1003,6 +1003,8 @@ Core examples:
- moderation: `timeout`, `kick`, `ban`
- presence: `setPresence`

The `event-create` action accepts an optional `image` parameter (URL or local file path) to set the scheduled event cover image.

Action gates live under `channels.discord.actions.*`.

Default gate behavior:
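As a sketch of gating the actions above, assuming boolean per-action keys under `channels.discord.actions.*` (the exact key shape is an assumption, not confirmed by this page):

```json5
{
  channels: {
    discord: {
      actions: {
        // Hypothetical per-action gates; key names mirror the action ids above.
        ban: false,           // disable the `ban` moderation action
        "event-create": true, // allow scheduled event creation
      },
    },
  },
}
```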
@@ -42,7 +42,7 @@ These phases are internal implementation details, not separate user-configured
Light phase ingests recent daily memory signals and recall traces, dedupes them,
and stages candidate lines.

- Reads from short-term recall state and recent daily memory files.
- Reads from short-term recall state, recent daily memory files, and redacted session transcripts when available.
- Writes a managed `## Light Sleep` block when storage includes inline output.
- Records reinforcement signals for later deep ranking.
- Never writes to `MEMORY.md`.
@@ -66,6 +66,13 @@ REM phase extracts patterns and reflective signals.
- Records REM reinforcement signals used by deep ranking.
- Never writes to `MEMORY.md`.

## Session transcript ingestion

Dreaming can ingest redacted session transcripts into the dreaming corpus. When
transcripts are available, they are fed into the light phase alongside daily
memory signals and recall traces. Personal and sensitive content is redacted
before ingestion.

## Dream Diary

Dreaming also keeps a narrative **Dream Diary** in `DREAMS.md`.
@@ -1156,6 +1156,20 @@ Optional CLI backends for text-only fallback runs (no tool calls). Useful as a b
- Sessions supported when `sessionArg` is set.
- Image pass-through supported when `imageArg` accepts file paths.

### `agents.defaults.systemPromptOverride`

Replace the entire OpenClaw-assembled system prompt with a fixed string. Set at the default level (`agents.defaults.systemPromptOverride`) or per agent (`agents.list[].systemPromptOverride`). Per-agent values take precedence; an empty or whitespace-only value is ignored. Useful for controlled prompt experiments.

```json5
{
  agents: {
    defaults: {
      systemPromptOverride: "You are a helpful assistant.",
    },
  },
}
```
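Per-agent values take precedence over the default, so a minimal sketch of a per-agent override might look like this (the agent entry's `id` field is an illustrative assumption; only `agents.list[].systemPromptOverride` itself is documented here):

```json5
{
  agents: {
    defaults: {
      systemPromptOverride: "You are a helpful assistant.",
    },
    list: [
      {
        id: "support", // hypothetical agent entry for illustration
        // A non-empty per-agent value wins over the default above;
        // an empty or whitespace-only string would be ignored.
        systemPromptOverride: "You are a terse support agent.",
      },
    ],
  },
}
```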
### `agents.defaults.heartbeat`

Periodic heartbeat runs.

@@ -1168,6 +1182,7 @@ Periodic heartbeat runs.

```json5
every: "30m", // 0m disables
model: "openai/gpt-5.4-mini",
includeReasoning: false,
includeSystemPromptSection: true, // default: true; false omits the Heartbeat section from the system prompt
lightContext: false, // default: false; true keeps only HEARTBEAT.md from workspace bootstrap files
isolatedSession: false, // default: false; true runs each heartbeat in a fresh session (no conversation history)
session: "main",
```

@@ -1184,6 +1199,7 @@ Periodic heartbeat runs.

- `every`: duration string (ms/s/m/h). Default: `30m` (API-key auth) or `1h` (OAuth auth). Set to `0m` to disable.
- `includeSystemPromptSection`: when false, omits the Heartbeat section from the system prompt and skips `HEARTBEAT.md` injection into bootstrap context. Default: `true`.
- `suppressToolErrorWarnings`: when true, suppresses tool error warning payloads during heartbeat runs.
- `directPolicy`: direct/DM delivery policy. `allow` (default) permits direct-target delivery. `block` suppresses direct-target delivery and emits `reason=dm-blocked`.
- `lightContext`: when true, heartbeat runs use lightweight bootstrap context and keep only `HEARTBEAT.md` from workspace bootstrap files.
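Putting the documented keys together, a minimal heartbeat sketch that blocks DM delivery and trims bootstrap context (the key names come from the reference above; the specific values are illustrative):

```json5
{
  agents: {
    defaults: {
      heartbeat: {
        every: "15m",          // any ms/s/m/h duration; "0m" disables heartbeats
        directPolicy: "block", // suppress direct-target delivery (emits reason=dm-blocked)
        lightContext: true,    // keep only HEARTBEAT.md from workspace bootstrap files
        isolatedSession: true, // run each heartbeat in a fresh session, no history
      },
    },
  },
}
```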
@@ -387,13 @@ -387,13 @@ AI CLI backend such as `codex-cli`.

### Exclusive slots

| Method | What it registers |
| ------------------------------------------ | ------------------------------------- |
| `api.registerContextEngine(id, factory)` | Context engine (one active at a time) |
| `api.registerMemoryCapability(capability)` | Unified memory capability |
| `api.registerMemoryPromptSection(builder)` | Memory prompt section builder |
| `api.registerMemoryFlushPlan(resolver)` | Memory flush plan resolver |
| `api.registerMemoryRuntime(runtime)` | Memory runtime adapter |

| Method | What it registers |
| ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `api.registerContextEngine(id, factory)` | Context engine (one active at a time). The `assemble()` callback receives `availableTools` and `citationsMode` so the engine can tailor prompt additions. |
| `api.registerMemoryCapability(capability)` | Unified memory capability |
| `api.registerMemoryPromptSection(builder)` | Memory prompt section builder |
| `api.registerMemoryFlushPlan(resolver)` | Memory flush plan resolver |
| `api.registerMemoryRuntime(runtime)` | Memory runtime adapter |
### Memory embedding adapters

@@ -98,6 +98,9 @@ Gemini CLI JSON usage notes:
| Video understanding | Yes |
| Web search (Grounding) | Yes |
| Thinking/reasoning | Yes (Gemini 3.1+) |
| Gemma 4 models | Yes |

Gemma 4 models (for example `gemma-4-26b-a4b-it`) support thinking mode. OpenClaw rewrites `thinkingBudget` to a supported Google `thinkingLevel` for Gemma 4. Setting thinking to `off` keeps thinking disabled rather than mapping it to `MINIMAL`.

## Direct Gemini cache reuse
@@ -119,7 +119,8 @@ openclaw models set ollama/gemma4

When you set `OLLAMA_API_KEY` (or an auth profile) and **do not** define `models.providers.ollama`, OpenClaw discovers models from the local Ollama instance at `http://127.0.0.1:11434`:

- Queries `/api/tags`
- Uses best-effort `/api/show` lookups to read `contextWindow` when available
- Uses best-effort `/api/show` lookups to read `contextWindow` and detect capabilities (including vision) when available
- Models with a `vision` capability reported by `/api/show` are marked as image-capable (`input: ["text", "image"]`), so OpenClaw auto-injects images into the prompt for those models
- Marks `reasoning` with a model-name heuristic (`r1`, `reasoning`, `think`)
- Sets `maxTokens` to the default Ollama max-token cap used by OpenClaw
- Sets all costs to `0`
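Since discovery only runs when `models.providers.ollama` is absent, defining that block explicitly opts out of it. A sketch of such an override (this page documents only the trigger condition; the field names inside the provider block are assumptions for illustration):

```json5
{
  models: {
    providers: {
      // Presence of this block means OpenClaw does NOT auto-discover
      // models from the local Ollama instance.
      ollama: {
        baseUrl: "http://127.0.0.1:11434", // default local Ollama endpoint noted above
        // explicit model definitions would go here (shape not shown on this page)
      },
    },
  },
}
```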