mirror of https://github.com/openclaw/openclaw.git
synced 2026-03-12 07:20:45 +00:00
chore: Run pnpm format:fix.
@@ -8,6 +8,7 @@ read_when: "Changing onboarding wizard steps or config schema endpoints"

Purpose: shared onboarding + config surfaces across CLI, macOS app, and Web UI.

## Components

- Wizard engine (shared session + prompts + onboarding state).
- CLI onboarding uses the same wizard flow as the UI clients.
- Gateway RPC exposes wizard + config schema endpoints.
@@ -15,6 +16,7 @@ Purpose: shared onboarding + config surfaces across CLI, macOS app, and Web UI.

- Web UI renders config forms from JSON Schema + UI hints.

## Gateway RPC

- `wizard.start` params: `{ mode?: "local"|"remote", workspace?: string }`
- `wizard.next` params: `{ sessionId, answer?: { stepId, value? } }`
- `wizard.cancel` params: `{ sessionId }`
@@ -22,13 +24,16 @@ Purpose: shared onboarding + config surfaces across CLI, macOS app, and Web UI.

- `config.schema` params: `{}`

Responses (shape):

- Wizard: `{ sessionId, done, step?, status?, error? }`
- Config schema: `{ schema, uiHints, version, generatedAt }`
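Putting the method params and response shapes together, a client-side driver loop might look like the sketch below. The `Rpc` transport, `WizardStep` fields, and `ask` callback are assumptions for illustration, not the gateway's actual API.

```typescript
// Hypothetical shapes inferred from the documented params/responses;
// exact field types are assumptions.
type WizardStep = { stepId: string; prompt: string; choices?: string[] };
type WizardResponse = {
  sessionId: string;
  done: boolean;
  step?: WizardStep;
  status?: string;
  error?: string;
};

// `rpc` stands in for whatever transport the client uses (assumption).
type Rpc = (method: string, params: unknown) => Promise<WizardResponse>;

// Drive the wizard to completion, answering each step via `ask`.
async function runWizard(
  rpc: Rpc,
  ask: (step: WizardStep) => Promise<unknown>,
): Promise<WizardResponse> {
  let res = await rpc("wizard.start", { mode: "local" });
  while (!res.done) {
    if (!res.step) throw new Error(res.error ?? "wizard stalled without a step");
    const value = await ask(res.step);
    res = await rpc("wizard.next", {
      sessionId: res.sessionId,
      answer: { stepId: res.step.stepId, value },
    });
  }
  return res;
}
```

The same loop should work unchanged for CLI and Web UI clients, since both ride the shared wizard engine.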
## UI Hints

- `uiHints` keyed by path; optional metadata (label/help/group/order/advanced/sensitive/placeholder).
- Sensitive fields render as password inputs; no redaction layer.
- Unsupported schema nodes fall back to the raw JSON editor.
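A minimal sketch of how a form renderer could apply these rules. The `UiHint` shape mirrors the metadata list above; the widget names and the schema-type switch are illustrative assumptions, not the shipped renderer.

```typescript
// Hypothetical uiHints map keyed by config path (the doc only says
// "keyed by path; optional metadata").
type UiHint = {
  label?: string;
  help?: string;
  group?: string;
  order?: number;
  advanced?: boolean;
  sensitive?: boolean;
  placeholder?: string;
};

type UiHints = Record<string, UiHint>;

// Pick the input widget for a schema node, mirroring the rules above:
// sensitive -> password input, unsupported node types -> raw JSON editor.
function widgetFor(path: string, schemaType: unknown, hints: UiHints): string {
  const hint = hints[path] ?? {};
  if (hint.sensitive) return "password";
  switch (schemaType) {
    case "string":
      return "text";
    case "number":
    case "integer":
      return "number";
    case "boolean":
      return "checkbox";
    default:
      return "json-editor"; // fallback for unsupported schema nodes
  }
}
```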
## Notes

- This doc is the single place to track protocol refactors for onboarding/config.
@@ -8,9 +8,11 @@ last_updated: "2026-01-05"

# Cron Add Hardening & Schema Alignment

## Context

Recent gateway logs show repeated `cron.add` failures with invalid parameters (missing `sessionTarget`, `wakeMode`, and `payload`, and malformed `schedule`). This indicates that at least one client (likely the agent tool-call path) is sending wrapped or partially specified job payloads. Separately, there is drift between the cron provider enums in the TypeScript types, gateway schema, CLI flags, and UI form types, plus a UI mismatch for `cron.status` (the UI expects `jobCount` while the gateway returns `jobs`).

## Goals

- Stop `cron.add` INVALID_REQUEST spam by normalizing common wrapper payloads and inferring missing `kind` fields.
- Align cron provider lists across the gateway schema, cron types, CLI docs, and UI forms.
- Make the agent cron tool schema explicit so the LLM produces correct job payloads.
@@ -18,11 +20,13 @@ Recent gateway logs show repeated `cron.add` failures with invalid parameters (m

- Add tests to cover normalization and tool behavior.

## Non-goals

- Change cron scheduling semantics or job execution behavior.
- Add new schedule kinds or cron expression parsing.
- Overhaul the UI/UX for cron beyond the necessary field fixes.

## Findings (current gaps)

- `CronPayloadSchema` in the gateway excludes `signal` + `imessage`, while the TS types include them.
- The Control UI `CronStatus` expects `jobCount`, but the gateway returns `jobs`.
- The agent cron tool schema allows arbitrary `job` objects, enabling malformed inputs.
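The "normalize common wrapper payloads" goal could look roughly like this. The wrapper shape (`{ job: {...} }`) matches the findings above; the bare-string-schedule heuristic and field names are illustrative assumptions, not the gateway's actual schema.

```typescript
// Sketch of unwrap-and-normalize for cron.add params. Shapes are assumed.
type CronJob = {
  kind?: string;
  schedule?: unknown;
  payload?: unknown;
  sessionTarget?: string;
  wakeMode?: string;
};

function normalizeCronAdd(params: Record<string, unknown>): CronJob {
  // Some clients wrap the job as { job: {...} }; unwrap it.
  const raw =
    typeof params.job === "object" && params.job !== null
      ? (params.job as CronJob)
      : (params as CronJob);
  const job: CronJob = { ...raw };
  // Assumed heuristic: a bare string schedule is a cron expression; wrap it
  // into the structured shape so schema validation can succeed.
  if (typeof job.schedule === "string") {
    job.schedule = { kind: "cron", expr: job.schedule };
  }
  return job;
}
```

Normalization like this belongs in front of schema validation, so well-formed payloads pass through untouched and only the known wrapper variants are rewritten.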
@@ -53,5 +57,6 @@ See [Cron jobs](/automation/cron-jobs) for the normalized shape and examples.

- Manual Control UI smoke: add a cron job per provider + verify the status job count.

## Open Questions

- Should `cron.add` accept explicit `state` from clients (currently disallowed by schema)?
- Should we allow `webchat` as an explicit delivery provider (currently filtered out in delivery resolution)?
@@ -3,6 +3,7 @@ summary: "Telegram allowlist hardening: prefix + whitespace normalization"

read_when:
  - Reviewing historical Telegram allowlist changes
---

# Telegram Allowlist Hardening

**Date**: 2026-01-05

@@ -25,7 +26,7 @@ All of these are accepted for the same ID:

- `telegram:123456`
- `TG:123456`
- ` tg:123456 `
- `tg:123456`
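A minimal sketch of the normalization these examples imply: trim whitespace, lowercase the prefix, and treat `tg:` as an alias of `telegram:`. The canonical form (`telegram:<id>`) is an assumption taken from the first example.

```typescript
// Normalize an allowlist entry so all four documented spellings of the
// same ID compare equal. Canonicalizing to "telegram:" is assumed.
function normalizeAllowlistEntry(entry: string): string {
  const trimmed = entry.trim();
  const match = /^([A-Za-z]+):(.*)$/.exec(trimmed);
  if (!match) return trimmed;
  let prefix = match[1].toLowerCase();
  if (prefix === "tg") prefix = "telegram";
  return `${prefix}:${match[2]}`;
}
```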
## Why it matters
@@ -3,10 +3,12 @@ summary: "Exploration: model config, auth profiles, and fallback behavior"

read_when:
  - Exploring future model selection + auth profile ideas
---

# Model Config (Exploration)

This document captures **ideas** for future model configuration. It is not a shipping spec. For current behavior, see:

- [Models](/concepts/models)
- [Model failover](/concepts/model-failover)
- [OAuth + profiles](/concepts/oauth)

@@ -14,6 +16,7 @@ shipping spec. For current behavior, see:

## Motivation

Operators want:

- Multiple auth profiles per provider (personal vs work).
- Simple `/model` selection with predictable fallbacks.
- Clear separation between text models and image-capable models.
@@ -15,12 +15,14 @@ This doc proposes an **offline-first** memory architecture that keeps Markdown a

## Why change?

The current setup (one file per day) is excellent for:

- “append-only” journaling
- human editing
- git-backed durability + auditability
- low-friction capture (“just write it down”)

It’s weak for:

- high-recall retrieval (“what did we decide about X?”, “last time we tried Y?”)
- entity-centric answers (“tell me about Alice / The Castle / warelay”) without rereading many files
- opinion/preference stability (and evidence when it changes)
@@ -38,12 +40,14 @@ It’s weak for:

Two pieces to blend:

1. **Letta/MemGPT-style control loop**

- keep a small “core” always in context (persona + key user facts)
- everything else is out-of-context and retrieved via tools
- memory writes are explicit tool calls (append/replace/insert), persisted, then re-injected next turn

2. **Hindsight-style memory substrate**

- separate what’s observed vs what’s believed vs what’s summarized
- support retain/recall/reflect
- confidence-bearing opinions that can evolve with evidence
@@ -74,6 +78,7 @@ Suggested workspace layout:

```

Notes:

- **Daily log stays daily log**. No need to turn it into JSON.
- The `bank/` files are **curated**, produced by reflection jobs, and can still be edited by hand.
- `memory.md` remains “small + core-ish”: the things you want Clawd to see every session.
@@ -87,6 +92,7 @@ Add a derived index under the workspace (not necessarily git tracked):

```

Back it with:

- SQLite schema for facts + entity links + opinion metadata
- SQLite **FTS5** for lexical recall (fast, tiny, offline)
- optional embeddings table for semantic recall (still offline)
@@ -100,6 +106,7 @@ The index is always **rebuildable from Markdown**.

Hindsight’s key insight that matters here: store **narrative, self-contained facts**, not tiny snippets.

Practical rule for `memory/YYYY-MM-DD.md`:

- at end of day (or during), add a `## Retain` section with 2–5 bullets that are:
  - narrative (cross-turn context preserved)
  - self-contained (standalone makes sense later)
@@ -115,6 +122,7 @@ Example:

```

Minimal parsing:

- Type prefix: `W` (world), `B` (experience/biographical), `O` (opinion), `S` (observation/summary; usually generated)
- Entities: `@Peter`, `@warelay`, etc. (slugs map to `bank/entities/*.md`)
- Opinion confidence: `O(c=0.0..1.0)` optional
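The parsing rules above are small enough to sketch directly. The exact bullet format (e.g. `- O(c=0.8) @Peter prefers ...`) is an assumption built from the three conventions listed; treat the shapes as illustrative.

```typescript
// Parse one Retain bullet into a structured fact, per the conventions:
// optional W/B/O/S type prefix, optional O(c=…) confidence, @entity slugs.
type Fact = {
  kind: "world" | "experience" | "opinion" | "observation";
  confidence?: number; // only meaningful for opinions
  entities: string[]; // slugs from @mentions
  text: string;
};

const KIND: Record<string, Fact["kind"]> = {
  W: "world",
  B: "experience",
  O: "opinion",
  S: "observation",
};

function parseRetainLine(line: string): Fact {
  let rest = line.replace(/^-\s*/, "").trim();
  let kind: Fact["kind"] = "world"; // assumed default when no prefix given
  let confidence: number | undefined;
  const m = /^([WBOS])(?:\(c=([0-9.]+)\))?\s+/.exec(rest);
  if (m) {
    kind = KIND[m[1]];
    if (m[2] !== undefined) confidence = Number(m[2]);
    rest = rest.slice(m[0].length);
  }
  const entities = [...rest.matchAll(/@([\w-]+)/g)].map((e) => e[1]);
  return { kind, confidence, entities, text: rest };
}
```

If authors skip the prefixes, the reflect job can run this same parser over its own inferred annotations instead.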
@@ -124,12 +132,14 @@ If you don’t want authors to think about it: the reflect job can infer these b

### Recall: queries over the derived index

Recall should support:

- **lexical**: “find exact terms / names / commands” (FTS5)
- **entity**: “tell me about X” (entity pages + entity-linked facts)
- **temporal**: “what happened around Nov 27” / “since last week”
- **opinion**: “what does Peter prefer?” (with confidence + evidence)

Return format should be agent-friendly and cite sources:

- `kind` (`world|experience|opinion|observation`)
- `timestamp` (source day, or extracted time range if present)
- `entities` (`["Peter","warelay"]`)
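The lexical/entity/temporal/opinion axes can all compose into one SQL query over the derived index. The schema assumed below (a `facts` table, an FTS5 shadow table `facts_fts`, and an `entity_links` join table) is illustrative, not a shipped schema.

```typescript
// Build a parameterized SQLite query for a recall request. Table and
// column names are assumptions matching the doc's "SQLite + FTS5" plan.
type RecallQuery = {
  lexical?: string; // FTS5 MATCH expression
  entity?: string; // entity slug
  since?: string; // ISO day, inclusive
  kind?: string; // world|experience|opinion|observation
};

function buildRecallSql(q: RecallQuery): { sql: string; params: unknown[] } {
  const where: string[] = [];
  const params: unknown[] = [];
  let from = "facts f";
  if (q.lexical) {
    from = "facts_fts JOIN facts f ON f.id = facts_fts.rowid";
    where.push("facts_fts MATCH ?");
    params.push(q.lexical);
  }
  if (q.entity) {
    where.push("f.id IN (SELECT fact_id FROM entity_links WHERE slug = ?)");
    params.push(q.entity);
  }
  if (q.since) {
    where.push("f.day >= ?");
    params.push(q.since);
  }
  if (q.kind) {
    where.push("f.kind = ?");
    params.push(q.kind);
  }
  const sql =
    `SELECT f.kind, f.day, f.text FROM ${from}` +
    (where.length ? ` WHERE ${where.join(" AND ")}` : "") +
    ` ORDER BY f.day DESC`;
  return { sql, params };
}
```

Returning `kind`, `day`, and `text` per row maps directly onto the agent-friendly return format above; source citation falls out of the `day` column.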
@@ -139,11 +149,13 @@ Return format should be agent-friendly and cite sources:

### Reflect: produce stable pages + update beliefs

Reflection is a scheduled job (daily or heartbeat `ultrathink`) that:

- updates `bank/entities/*.md` from recent facts (entity summaries)
- updates `bank/opinions.md` confidence based on reinforcement/contradiction
- optionally proposes edits to `memory.md` (“core-ish” durable facts)

Opinion evolution (simple, explainable):

- each opinion has:
  - statement
  - confidence `c ∈ [0,1]`
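One simple, explainable update rule that fits the reinforcement/contradiction framing: move confidence a fixed fraction toward 1 or 0. The doc does not fix a formula, so the step size and shape here are assumptions.

```typescript
// Assumed confidence-update rule: exponential approach toward 1 on
// reinforcement, toward 0 on contradiction. Keeps c in [0,1] and gives
// repeated evidence diminishing effect.
type Opinion = { statement: string; confidence: number; evidence: string[] };

function updateOpinion(
  op: Opinion,
  event: { kind: "reinforce" | "contradict"; source: string },
): Opinion {
  const step = 0.3; // assumption; tune per how fast beliefs should move
  const confidence =
    event.kind === "reinforce"
      ? op.confidence + step * (1 - op.confidence)
      : op.confidence - step * op.confidence;
  return { ...op, confidence, evidence: [...op.evidence, event.source] };
}
```

Keeping the evidence list alongside the number is what makes the evolution explainable: every confidence change cites the daily-log entry that caused it.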
@@ -158,6 +170,7 @@ Opinion evolution (simple, explainable):

Recommendation: **deep integration in OpenClaw**, but keep a separable core library.

### Why integrate into OpenClaw?

- OpenClaw already knows:
  - the workspace path (`agents.defaults.workspace`)
  - the session model + heartbeats

@@ -167,6 +180,7 @@ Recommendation: **deep integration in OpenClaw**, but keep a separable core libr

- `openclaw memory reflect --since 7d`

### Why still split a library?

- keep memory logic testable without gateway/runtime
- reuse from other contexts (local scripts, future desktop app, etc.)
@@ -178,6 +192,7 @@ The memory tooling is intended to be a small CLI + library layer, but this is ex

If “S-Collide” refers to **SuCo (Subspace Collision)**: it’s an ANN retrieval approach that targets strong recall/latency tradeoffs by using learned/structured collisions in subspaces (paper: arXiv 2411.14754, 2024).

Pragmatic take for `~/.openclaw/workspace`:

- **don’t start** with SuCo.
- start with SQLite FTS + (optional) simple embeddings; you’ll get most of the UX wins immediately.
- consider SuCo/HNSW/ScaNN-class solutions only once:
@@ -186,12 +201,14 @@ Pragmatic take for `~/.openclaw/workspace`:

- recall quality is meaningfully bottlenecked by lexical search

Offline-friendly alternatives (in increasing complexity):

- SQLite FTS5 + metadata filters (zero ML)
- embeddings + brute force (works surprisingly far if the chunk count is low)
- HNSW index (common, robust; needs a library binding)
- SuCo (research-grade; attractive if there’s a solid implementation you can embed)
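The "embeddings + brute force" tier is small enough to sketch in full: cosine similarity over an in-memory set of chunk vectors. The embedding step itself is out of scope here (a local model via Ollama or similar, per the open question below).

```typescript
// Exhaustive nearest-neighbor scan over chunk embeddings. Fine for the
// low chunk counts the doc expects before reaching for HNSW/SuCo.
type Chunk = { id: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query vector.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

Because the index is rebuildable from Markdown, this tier can be swapped for an HNSW binding later without any migration story.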
Open question:

- what’s the **best** offline embedding model for “personal assistant memory” on your machines (laptop + desktop)?
- if you already have Ollama: embed with a local model; otherwise ship a small embedding model in the toolchain.