docs: remove docs/refactor/ directory

Delete all 7 refactor design docs and the zh-CN translations.
Remove the zh-CN nav group from docs.json.

These were orphaned from English nav and accessible only by
direct URL. Internal design docs do not belong on the public
docs site.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
Vincent Koc
2026-03-18 00:45:29 -07:00
parent fbd88e2c8f
commit bde4c7995f
13 changed files with 0 additions and 2908 deletions


@@ -1949,16 +1949,6 @@
"zh-CN/experiments/research/memory",
"zh-CN/experiments/proposals/model-config"
]
},
{
"group": "重构方案",
"pages": [
"zh-CN/refactor/clawnet",
"zh-CN/refactor/exec-host",
"zh-CN/refactor/outbound-session-mirroring",
"zh-CN/refactor/plugin-sdk",
"zh-CN/refactor/strict-config"
]
}
]
},


@@ -1,417 +0,0 @@
---
summary: "Clawnet refactor: unify network protocol, roles, auth, approvals, identity"
read_when:
- Planning a unified network protocol for nodes + operator clients
- Reworking approvals, pairing, TLS, and presence across devices
title: "Clawnet Refactor"
---
# Clawnet refactor (protocol + auth unification)
## Hi
Hi Peter — great direction; this unlocks simpler UX + stronger security.
## Purpose
Single, rigorous document for:
- Current state: protocols, flows, trust boundaries.
- Pain points: approvals, multihop routing, UI duplication.
- Proposed new state: one protocol, scoped roles, unified auth/pairing, TLS pinning.
- Identity model: stable IDs + cute slugs.
- Migration plan, risks, open questions.
## Goals (from discussion)
- One protocol for all clients (mac app, CLI, iOS, Android, headless node).
- Every network participant authenticated + paired.
- Role clarity: nodes vs operators.
- Central approvals routed to where the user is.
- TLS encryption + optional pinning for all remote traffic.
- Minimal code duplication.
- Single machine should appear once (no UI/node duplicate entry).
## Non-goals (explicit)
- Remove capability separation (still need least-privilege).
- Expose full gateway control plane without scope checks.
- Make auth depend on human labels (slugs remain non-security).
---
# Current state (as-is)
## Two protocols
### 1) Gateway WebSocket (control plane)
- Full API surface: config, channels, models, sessions, agent runs, logs, nodes, etc.
- Default bind: loopback. Remote access via SSH/Tailscale.
- Auth: token/password via `connect`.
- No TLS pinning (relies on loopback/tunnel).
- Code:
- `src/gateway/server/ws-connection/message-handler.ts`
- `src/gateway/client.ts`
- `docs/gateway/protocol.md`
### 2) Bridge (node transport)
- Narrow allowlist surface, node identity + pairing.
- JSONL over TCP; optional TLS + cert fingerprint pinning.
- TLS advertises fingerprint in discovery TXT.
- Code:
- `src/infra/bridge/server/connection.ts`
- `src/gateway/server-bridge.ts`
- `src/node-host/bridge-client.ts`
- `docs/gateway/bridge-protocol.md`
## Control plane clients today
- CLI → Gateway WS via `callGateway` (`src/gateway/call.ts`).
- macOS app UI → Gateway WS (`GatewayConnection`).
- Web Control UI → Gateway WS.
- ACP → Gateway WS.
- Browser control uses its own HTTP control server.
## Nodes today
- macOS app in node mode connects to Gateway bridge (`MacNodeBridgeSession`).
- iOS/Android apps connect to Gateway bridge.
- Pairing + per-node token stored on gateway.
## Current approval flow (exec)
- Agent uses `system.run` via Gateway.
- Gateway invokes node over bridge.
- Node runtime decides approval.
- UI prompt shown by mac app (when node == mac app).
- Node returns `invoke-res` to Gateway.
- Multi-hop; UI is tied to the node host.
## Presence + identity today
- Gateway presence entries from WS clients.
- Node presence entries from bridge.
- mac app can show two entries for same machine (UI + node).
- Node identity stored in pairing store; UI identity separate.
---
# Problems / pain points
- Two protocol stacks to maintain (WS + Bridge).
- Approvals on remote nodes: prompt appears on node host, not where user is.
- TLS pinning only exists for bridge; WS depends on SSH/Tailscale.
- Identity duplication: same machine shows as multiple instances.
- Ambiguous roles: UI + node + CLI capabilities not clearly separated.
---
# Proposed new state (Clawnet)
## One protocol, two roles
Single WS protocol with role + scope.
- **Role: node** (capability host)
- **Role: operator** (control plane)
- Optional **scope** for operator:
- `operator.read` (status + viewing)
- `operator.write` (agent run, sends)
- `operator.admin` (config, channels, models)
### Role behaviors
**Node**
- Can register capabilities (`caps`, `commands`, permissions).
- Can receive `invoke` commands (`system.run`, `camera.*`, `canvas.*`, `screen.record`, etc).
- Can send events: `voice.transcript`, `agent.request`, `chat.subscribe`.
- Cannot call config/models/channels/sessions/agent control plane APIs.
**Operator**
- Full control plane API, gated by scope.
- Receives all approvals.
- Does not directly execute OS actions; routes to nodes.
### Key rule
Role is per-connection, not per-device. A device may open both roles over separate connections.
---
# Unified authentication + pairing
## Client identity
Every client provides:
- `deviceId` (stable, derived from device key).
- `displayName` (human name).
- `role` + `scope` + `caps` + `commands`.
## Pairing flow (unified)
- Client connects unauthenticated.
- Gateway creates a **pairing request** for that `deviceId`.
- Operator receives prompt; approves/denies.
- Gateway issues credentials bound to:
- device public key
- role(s)
- scope(s)
- capabilities/commands
- Client persists token, reconnects authenticated.
## Device-bound auth (avoid bearer-token replay)
Preferred: device keypairs.
- Device generates keypair once.
- `deviceId = fingerprint(publicKey)`.
- Gateway sends nonce; device signs; gateway verifies.
- Tokens are issued to a public key (proof-of-possession), not a string.
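The nonce handshake above can be sketched with Node's built-in Ed25519 support; `fingerprint` is a hypothetical helper name, not existing code:

```typescript
import {
  generateKeyPairSync,
  createHash,
  randomBytes,
  sign,
  verify,
  KeyObject,
} from "node:crypto";

// Device side: one keypair for the lifetime of the install.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// deviceId = fingerprint(publicKey): hash of the DER-encoded SPKI key.
function fingerprint(pub: KeyObject): string {
  const der = pub.export({ type: "spki", format: "der" });
  return createHash("sha256").update(der).digest("hex");
}

// Gateway side: issue a nonce; device signs it to prove key possession.
const nonce = randomBytes(32);
const signature = sign(null, nonce, privateKey); // device signs
const ok = verify(null, nonce, publicKey, signature); // gateway verifies
```

A token issued after this exchange is bound to `fingerprint(publicKey)`, so replaying the token string alone proves nothing without the private key.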
Alternatives:
- mTLS (client certs): strongest, more ops complexity.
- Short-lived bearer tokens only as a temporary phase (rotate + revoke early).
## Silent approval (SSH heuristic)
Define it precisely to avoid a weak link. Prefer one:
- **Local-only**: auto-pair when client connects via loopback/Unix socket.
- **Challenge via SSH**: gateway issues nonce; client proves SSH access by fetching it.
- **Physical presence window**: after a local approval on the gateway host UI, allow auto-pairing for a short window (e.g. 10 minutes).
Always log + record auto-approvals.
---
# TLS everywhere (dev + prod)
## Reuse existing bridge TLS
Use current TLS runtime + fingerprint pinning:
- `src/infra/bridge/server/tls.ts`
- fingerprint verification logic in `src/node-host/bridge-client.ts`
## Apply to WS
- WS server supports TLS with same cert/key + fingerprint.
- WS clients can pin fingerprint (optional).
- Discovery advertises TLS + fingerprint for all endpoints.
- Discovery is locator hints only; never a trust anchor.
## Why
- Reduce reliance on SSH/Tailscale for confidentiality.
- Make remote mobile connections safe by default.
---
# Approvals redesign (centralized)
## Current
Approval happens on node host (mac app node runtime). Prompt appears where node runs.
## Proposed
Approval is **gateway-hosted**, with UI delivered to operator clients.
### New flow
1. Gateway receives `system.run` intent (agent).
2. Gateway creates approval record: `approval.requested`.
3. Operator UI(s) show prompt.
4. Approval decision sent to gateway: `approval.resolve`.
5. Gateway invokes node command if approved.
6. Node executes, returns `invoke-res`.
### Approval semantics (hardening)
- Broadcast to all operators; only the active UI shows a modal (others get a toast).
- First resolution wins; gateway rejects subsequent resolves as already settled.
- Default timeout: deny after N seconds (e.g. 60s), log reason.
- Resolution requires `operator.approvals` scope.
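A minimal sketch of the first-resolution-wins rule, assuming a hypothetical gateway-side `ApprovalRecord`; the class name and timeout handling are illustrative:

```typescript
type Resolution = "approved" | "denied" | "timeout";

// Hypothetical approval record: the first resolve wins, later resolves
// are rejected as already settled, and an unanswered request denies by
// timing out (the plan suggests ~60s).
class ApprovalRecord {
  private settled: Resolution | null = null;
  private timer: ReturnType<typeof setTimeout>;

  constructor(timeoutMs: number, private onSettle: (r: Resolution) => void) {
    this.timer = setTimeout(() => this.settle("timeout"), timeoutMs);
  }

  // Returns true for the winning resolution, false if already settled.
  resolve(r: "approved" | "denied"): boolean {
    return this.settle(r);
  }

  private settle(r: Resolution): boolean {
    if (this.settled !== null) return false; // already settled: reject
    this.settled = r;
    clearTimeout(this.timer);
    this.onSettle(r);
    return true;
  }
}
```

Because settlement is atomic inside the gateway, two operators racing to answer the same prompt cannot both win.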
## Benefits
- Prompt appears where user is (mac/phone).
- Consistent approvals for remote nodes.
- Node runtime stays headless; no UI dependency.
---
# Role clarity examples
## iPhone app
- **Node role** for: mic, camera, voice chat, location, push-to-talk.
- Optional **operator.read** for status and chat view.
- Optional **operator.write/admin** only when explicitly enabled.
## macOS app
- Operator role by default (control UI).
- Node role when “Mac node” enabled (system.run, screen, camera).
- Same deviceId for both connections → merged UI entry.
## CLI
- Operator role always.
- Scope derived by subcommand:
- `status`, `logs` → read
- `agent`, `message` → write
- `config`, `channels` → admin
- approvals + pairing → `operator.approvals` / `operator.pairing`
---
# Identity + slugs
## Stable ID
Required for auth; never changes.
Preferred:
- Keypair fingerprint (public key hash).
## Cute slug (lobster-themed)
Human label only.
- Example: `scarlet-claw`, `saltwave`, `mantis-pinch`.
- Stored in gateway registry, editable.
- Collision handling: `-2`, `-3`.
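The collision rule could look like this; `assignSlug` is a hypothetical helper:

```typescript
// Illustrative collision handling: append -2, -3, … until the slug is free.
function assignSlug(base: string, taken: Set<string>): string {
  if (!taken.has(base)) return base;
  let n = 2;
  while (taken.has(`${base}-${n}`)) n++;
  return `${base}-${n}`;
}
```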
## UI grouping
Same `deviceId` across roles → single “Instance” row:
- Badge: `operator`, `node`.
- Shows capabilities + last seen.
---
# Migration strategy
## Phase 0: Document + align
- Publish this doc.
- Inventory all protocol calls + approval flows.
## Phase 1: Add roles/scopes to WS
- Extend `connect` params with `role`, `scope`, `deviceId`.
- Add allowlist gating for node role.
## Phase 2: Bridge compatibility
- Keep bridge running.
- Add WS node support in parallel.
- Gate features behind config flag.
## Phase 3: Central approvals
- Add approval request + resolve events in WS.
- Update mac app UI to prompt + respond.
- Node runtime stops prompting UI.
## Phase 4: TLS unification
- Add TLS config for WS using bridge TLS runtime.
- Add pinning to clients.
## Phase 5: Deprecate bridge
- Migrate iOS/Android/mac node to WS.
- Keep bridge as fallback; remove once stable.
## Phase 6: Devicebound auth
- Require keybased identity for all nonlocal connections.
- Add revocation + rotation UI.
---
# Security notes
- Role/allowlist enforced at gateway boundary.
- No client gets “full” API without operator scope.
- Pairing required for _all_ connections.
- TLS + pinning reduces MITM risk for mobile.
- SSH silent approval is a convenience; still recorded + revocable.
- Discovery is never a trust anchor.
- Capability claims are verified against server allowlists by platform/type.
# Streaming + large payloads (node media)
WS control plane is fine for small messages, but nodes also do:
- camera clips
- screen recordings
- audio streams
Options:
1. WS binary frames + chunking + backpressure rules.
2. Separate streaming endpoint (still TLS + auth).
3. Keep bridge longer for media-heavy commands, migrate last.
Pick one before implementation to avoid drift.
# Capability + command policy
- Node-reported caps/commands are treated as **claims**.
- Gateway enforces per-platform allowlists.
- Any new command requires operator approval or explicit allowlist change.
- Audit changes with timestamps.
# Audit + rate limiting
- Log: pairing requests, approvals/denials, token issuance/rotation/revocation.
- Rate-limit pairing spam and approval prompts.
# Protocol hygiene
- Explicit protocol version + error codes.
- Reconnect rules + heartbeat policy.
- Presence TTL and last-seen semantics.
---
# Open questions
1. Single device running both roles: token model
- Recommend separate tokens per role (node vs operator).
- Same deviceId; different scopes; clearer revocation.
2. Operator scope granularity
- read/write/admin + approvals + pairing (minimum viable).
- Consider per-feature scopes later.
3. Token rotation + revocation UX
- Auto-rotate on role change.
- UI to revoke by deviceId + role.
4. Discovery
- Extend current Bonjour TXT to include WS TLS fingerprint + role hints.
- Treat as locator hints only.
5. Cross-network approval
- Broadcast to all operator clients; active UI shows modal.
- First response wins; gateway enforces atomicity.
---
# Summary (TL;DR)
- Today: WS control plane + Bridge node transport.
- Pain: approvals + duplication + two stacks.
- Proposal: one WS protocol with explicit roles + scopes, unified pairing + TLS pinning, gateway-hosted approvals, stable device IDs + cute slugs.
- Outcome: simpler UX, stronger security, less duplication, better mobile routing.


@@ -1,299 +0,0 @@
---
summary: "Refactor clusters with highest LOC reduction potential"
read_when:
- You want to reduce total LOC without changing behavior
- You are choosing the next dedupe or extraction pass
title: "Refactor Cluster Backlog"
---
# Refactor Cluster Backlog
Ranked by likely LOC reduction, safety, and breadth.
## 1. Channel plugin config and security scaffolding
Highest-value cluster.
Repeated shapes across many channel plugins:
- `config.listAccountIds`
- `config.resolveAccount`
- `config.defaultAccountId`
- `config.setAccountEnabled`
- `config.deleteAccount`
- `config.describeAccount`
- `security.resolveDmPolicy`
Strong examples:
- `extensions/telegram/src/channel.ts`
- `extensions/googlechat/src/channel.ts`
- `extensions/slack/src/channel.ts`
- `extensions/discord/src/channel.ts`
- `extensions/matrix/src/channel.ts`
- `extensions/irc/src/channel.ts`
- `extensions/signal/src/channel.ts`
- `extensions/mattermost/src/channel.ts`
Likely extraction shape:
- `buildChannelConfigAdapter(...)`
- `buildMultiAccountConfigAdapter(...)`
- `buildDmSecurityAdapter(...)`
Expected savings:
- ~250-450 LOC
Risk:
- Medium. Each channel has slightly different `isConfigured`, warnings, and normalization.
## 2. Extension runtime singleton boilerplate
Very safe.
Nearly every extension has the same runtime holder:
- `let runtime: PluginRuntime | null = null`
- `setXRuntime`
- `getXRuntime`
Strong examples:
- `extensions/telegram/src/runtime.ts`
- `extensions/matrix/src/runtime.ts`
- `extensions/slack/src/runtime.ts`
- `extensions/discord/src/runtime.ts`
- `extensions/whatsapp/src/runtime.ts`
- `extensions/imessage/src/runtime.ts`
- `extensions/twitch/src/runtime.ts`
Special-case variants:
- `extensions/bluebubbles/src/runtime.ts`
- `extensions/line/src/runtime.ts`
- `extensions/synology-chat/src/runtime.ts`
Likely extraction shape:
- `createPluginRuntimeStore<T>(errorMessage)`
Expected savings:
- ~180-260 LOC
Risk:
- Low
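A sketch of what `createPluginRuntimeStore<T>(errorMessage)` might look like, inferred from the set/get pattern above; the exact shape is an assumption:

```typescript
// Hypothetical factory replacing the per-extension runtime-holder
// boilerplate (`let runtime = null`, `setXRuntime`, `getXRuntime`).
function createPluginRuntimeStore<T>(errorMessage: string) {
  let runtime: T | null = null;
  return {
    set(next: T): void {
      runtime = next;
    },
    get(): T {
      if (runtime === null) throw new Error(errorMessage);
      return runtime;
    },
  };
}
```

Each extension would then re-export `store.set`/`store.get` under its existing names (e.g. `setTelegramRuntime`), keeping call sites unchanged.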
## 3. Setup prompt and config-patch steps
Large surface area.
Many setup files repeat:
- resolve account id
- prompt allowlist entries
- merge allowFrom
- set DM policy
- prompt secrets
- patch top-level vs account-scoped config
Strong examples:
- `extensions/bluebubbles/src/setup-surface.ts`
- `extensions/googlechat/src/setup-surface.ts`
- `extensions/msteams/src/setup-surface.ts`
- `extensions/zalo/src/setup-surface.ts`
- `extensions/zalouser/src/setup-surface.ts`
- `extensions/nextcloud-talk/src/setup-surface.ts`
- `extensions/matrix/src/setup-surface.ts`
- `extensions/irc/src/setup-surface.ts`
Existing helper surface:
- `src/channels/plugins/setup-wizard-helpers.ts`
Likely extraction shape:
- `promptAllowFromList(...)`
- `buildDmPolicyAdapter(...)`
- `applyScopedAccountPatch(...)`
- `promptSecretFields(...)`
Expected savings:
- ~300-600 LOC
Risk:
- Medium. Easy to over-generalize; keep helpers narrow and composable.
## 4. Multi-account config-schema fragments
Repeated schema fragments across extensions.
Common patterns:
- `const allowFromEntry = z.union([z.string(), z.number()])`
- account schema plus:
- `accounts: z.object({}).catchall(accountSchema).optional()`
- `defaultAccount: z.string().optional()`
- repeated DM/group fields
- repeated markdown/tool policy fields
Strong examples:
- `extensions/bluebubbles/src/config-schema.ts`
- `extensions/zalo/src/config-schema.ts`
- `extensions/zalouser/src/config-schema.ts`
- `extensions/matrix/src/config-schema.ts`
- `extensions/nostr/src/config-schema.ts`
Likely extraction shape:
- `AllowFromEntrySchema`
- `buildMultiAccountChannelSchema(accountSchema)`
- `buildCommonDmGroupFields(...)`
Expected savings:
- ~120-220 LOC
Risk:
- Low to medium. Some schemas are simple, some are special.
## 5. Webhook and monitor lifecycle startup
Good medium-value cluster.
Repeated `startAccount` / monitor setup patterns:
- resolve account
- compute webhook path
- log startup
- start monitor
- wait for abort
- cleanup
- status sink updates
Strong examples:
- `extensions/googlechat/src/channel.ts`
- `extensions/bluebubbles/src/channel.ts`
- `extensions/zalo/src/channel.ts`
- `extensions/telegram/src/channel.ts`
- `extensions/nextcloud-talk/src/channel.ts`
Existing helper surface:
- `src/plugin-sdk/channel-lifecycle.ts`
Likely extraction shape:
- helper for account monitor lifecycle
- helper for webhook-backed account startup
Expected savings:
- ~150-300 LOC
Risk:
- Medium to high. Transport details diverge quickly.
## 6. Small exact-clone cleanup
Low-risk cleanup bucket.
Examples:
- duplicated gateway argv detection:
- `src/infra/gateway-lock.ts`
- `src/cli/daemon-cli/lifecycle.ts`
- duplicated port diagnostics rendering:
- `src/cli/daemon-cli/restart-health.ts`
- duplicated session-key construction:
- `src/web/auto-reply/monitor/broadcast.ts`
Expected savings:
- ~30-60 LOC
Risk:
- Low
## Test clusters
### LINE webhook event fixtures
Strong examples:
- `src/line/bot-handlers.test.ts`
Likely extraction:
- `makeLineEvent(...)`
- `runLineEvent(...)`
- `makeLineAccount(...)`
Expected savings:
- ~120-180 LOC
### Telegram native command auth matrix
Strong examples:
- `src/telegram/bot-native-commands.group-auth.test.ts`
- `src/telegram/bot-native-commands.plugin-auth.test.ts`
Likely extraction:
- forum context builder
- denied-message assertion helper
- table-driven auth cases
Expected savings:
- ~80-140 LOC
### Zalo lifecycle setup
Strong examples:
- `extensions/zalo/src/monitor.lifecycle.test.ts`
Likely extraction:
- shared monitor setup harness
Expected savings:
- ~50-90 LOC
### Brave llm-context unsupported-option tests
Strong examples:
- `src/agents/tools/web-tools.enabled-defaults.test.ts`
Likely extraction:
- `it.each(...)` matrix
Expected savings:
- ~30-50 LOC
## Suggested order
1. Runtime singleton boilerplate
2. Small exact-clone cleanup
3. Config and security builder extraction
4. Test-helper extraction
5. Onboarding step extraction
6. Monitor lifecycle helper extraction


@@ -1,316 +0,0 @@
---
summary: "Refactor plan: exec host routing, node approvals, and headless runner"
read_when:
- Designing exec host routing or exec approvals
- Implementing node runner + UI IPC
- Adding exec host security modes and slash commands
title: "Exec Host Refactor"
---
# Exec host refactor plan
## Goals
- Add `exec.host` + `exec.security` to route execution across **sandbox**, **gateway**, and **node**.
- Keep defaults **safe**: no cross-host execution unless explicitly enabled.
- Split execution into a **headless runner service** with optional UI (macOS app) via local IPC.
- Provide **per-agent** policy, allowlist, ask mode, and node binding.
- Support **ask modes** that work _with_ or _without_ allowlists.
- Cross-platform: Unix socket + token auth (macOS/Linux/Windows parity).
## Non-goals
- No legacy allowlist migration or legacy schema support.
- No PTY/streaming for node exec (aggregated output only).
- No new network layer beyond the existing Bridge + Gateway.
## Decisions (locked)
- **Config keys:** `exec.host` + `exec.security` (per-agent override allowed).
- **Elevation:** keep `/elevated` as an alias for gateway full access.
- **Ask default:** `on-miss`.
- **Approvals store:** `~/.openclaw/exec-approvals.json` (JSON, no legacy migration).
- **Runner:** headless system service; UI app hosts a Unix socket for approvals.
- **Node identity:** use existing `nodeId`.
- **Socket auth:** Unix socket + token (cross-platform); split later if needed.
- **Node host state:** `~/.openclaw/node.json` (node id + pairing token).
- **macOS exec host:** run `system.run` inside the macOS app; node host service forwards requests over local IPC.
- **No XPC helper:** stick to Unix socket + token + peer checks.
## Key concepts
### Host
- `sandbox`: Docker exec (current behavior).
- `gateway`: exec on gateway host.
- `node`: exec on node runner via Bridge (`system.run`).
### Security mode
- `deny`: always block.
- `allowlist`: allow only matches.
- `full`: allow everything (equivalent to elevated).
### Ask mode
- `off`: never ask.
- `on-miss`: ask only when allowlist does not match.
- `always`: ask every time.
Ask is **independent** of allowlist; allowlist can be used with `always` or `on-miss`.
### Policy resolution (per exec)
1. Resolve `exec.host` (tool param → agent override → global default).
2. Resolve `exec.security` and `exec.ask` (same precedence).
3. If host is `sandbox`, proceed with local sandbox exec.
4. If host is `gateway` or `node`, apply security + ask policy on that host.
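Steps 1 and 2 amount to a nullish-coalescing walk over the three layers; `resolveExecHost` is illustrative, not existing code:

```typescript
type ExecHost = "sandbox" | "gateway" | "node";

interface ExecPolicyLayer {
  host?: ExecHost;
}

// Illustrative precedence: tool param → agent override → global default,
// falling back to the safe default of sandbox execution.
function resolveExecHost(
  toolParam: ExecPolicyLayer,
  agentOverride: ExecPolicyLayer,
  globalDefault: ExecPolicyLayer,
): ExecHost {
  return toolParam.host ?? agentOverride.host ?? globalDefault.host ?? "sandbox";
}
```

The same walk applies to `exec.security` and `exec.ask`, with `deny` and `on-miss` as the final fallbacks.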
## Default safety
- Default `exec.host = sandbox`.
- Default `exec.security = deny` for `gateway` and `node`.
- Default `exec.ask = on-miss` (only relevant if security allows).
- If no node binding is set, **the agent may target any node**, but only if policy allows it.
## Config surface
### Tool parameters
- `exec.host` (optional): `sandbox | gateway | node`.
- `exec.security` (optional): `deny | allowlist | full`.
- `exec.ask` (optional): `off | on-miss | always`.
- `exec.node` (optional): node id/name to use when `host=node`.
### Config keys (global)
- `tools.exec.host`
- `tools.exec.security`
- `tools.exec.ask`
- `tools.exec.node` (default node binding)
### Config keys (per agent)
- `agents.list[].tools.exec.host`
- `agents.list[].tools.exec.security`
- `agents.list[].tools.exec.ask`
- `agents.list[].tools.exec.node`
### Alias
- `/elevated on` = set `tools.exec.host=gateway`, `tools.exec.security=full` for the agent session.
- `/elevated off` = restore previous exec settings for the agent session.
## Approvals store (JSON)
Path: `~/.openclaw/exec-approvals.json`
Purpose:
- Local policy + allowlists for the **execution host** (gateway or node runner).
- Ask fallback when no UI is available.
- IPC credentials for UI clients.
Proposed schema (v1):
```json
{
"version": 1,
"socket": {
"path": "~/.openclaw/exec-approvals.sock",
"token": "base64-opaque-token"
},
"defaults": {
"security": "deny",
"ask": "on-miss",
"askFallback": "deny"
},
"agents": {
"agent-id-1": {
"security": "allowlist",
"ask": "on-miss",
"allowlist": [
{
"pattern": "~/Projects/**/bin/rg",
"lastUsedAt": 0,
"lastUsedCommand": "rg -n TODO",
"lastResolvedPath": "/Users/user/Projects/.../bin/rg"
}
]
}
}
}
```
Notes:
- No legacy allowlist formats.
- `askFallback` applies only when `ask` is required and no UI is reachable.
- File permissions: `0600`.
## Runner service (headless)
### Role
- Enforce `exec.security` + `exec.ask` locally.
- Execute system commands and return output.
- Emit Bridge events for exec lifecycle (optional but recommended).
### Service lifecycle
- Launchd/daemon on macOS; system service on Linux/Windows.
- Approvals JSON is local to the execution host.
- UI hosts a local Unix socket; runners connect on demand.
## UI integration (macOS app)
### IPC
- Unix socket at `~/.openclaw/exec-approvals.sock` (0600).
- Token stored in `exec-approvals.json` (0600).
- Peer checks: same-UID only.
- Challenge/response: nonce + HMAC(token, request-hash) to prevent replay.
- Short TTL (e.g., 10s) + max payload + rate limit.
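A sketch of the token + HMAC + TTL check, assuming SHA-256 and hex encoding; `proveRequest` and `verifyRequest` are hypothetical names:

```typescript
import { createHmac, createHash, timingSafeEqual } from "node:crypto";

const TTL_MS = 10_000; // short TTL per the plan (e.g., 10s)

// Client side: MAC over nonce + hash of the request body.
function proveRequest(token: string, nonce: string, body: string): string {
  const requestHash = createHash("sha256").update(body).digest("hex");
  return createHmac("sha256", token).update(nonce + requestHash).digest("hex");
}

// App side: recompute and compare in constant time; reject stale requests.
function verifyRequest(
  token: string,
  nonce: string,
  body: string,
  mac: string,
  sentAtMs: number,
  nowMs: number,
): boolean {
  if (nowMs - sentAtMs > TTL_MS) return false; // expired: replay window closed
  const expected = Buffer.from(proveRequest(token, nonce, body), "hex");
  const actual = Buffer.from(mac, "hex");
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```

Binding the MAC to the request hash means a captured frame cannot be replayed with a different command, and the TTL bounds how long a captured frame stays useful at all.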
### Ask flow (macOS app exec host)
1. Node service receives `system.run` from gateway.
2. Node service connects to the local socket and sends the prompt/exec request.
3. App validates peer + token + HMAC + TTL, then shows dialog if needed.
4. App executes the command in UI context and returns output.
5. Node service returns output to gateway.
If UI missing:
- Apply `askFallback` (`deny|allowlist|full`).
### Diagram (ASCII)
```
Agent -> Gateway -> Bridge -> Node Service (TS)
| IPC (UDS + token + HMAC + TTL)
v
Mac App (UI + TCC + system.run)
```
## Node identity + binding
- Use existing `nodeId` from Bridge pairing.
- Binding model:
- `tools.exec.node` restricts the agent to a specific node.
- If unset, agent can pick any node (policy still enforces defaults).
- Node selection resolution:
- `nodeId` exact match
- `displayName` (normalized)
- `remoteIp`
- `nodeId` prefix (>= 6 chars)
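The selection order above can be sketched as a fall-through; `resolveNode` and `NodeInfo` are illustrative:

```typescript
interface NodeInfo {
  nodeId: string;
  displayName: string;
  remoteIp: string;
}

const normalize = (s: string) => s.trim().toLowerCase();

// Illustrative resolution in the documented order; the plan errors out
// unless the binding or explicit node param makes the match unambiguous.
function resolveNode(query: string, nodes: NodeInfo[]): NodeInfo | undefined {
  return (
    nodes.find((n) => n.nodeId === query) ??
    nodes.find((n) => normalize(n.displayName) === normalize(query)) ??
    nodes.find((n) => n.remoteIp === query) ??
    (query.length >= 6
      ? nodes.find((n) => n.nodeId.startsWith(query))
      : undefined)
  );
}
```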
## Eventing
### Who sees events
- System events are **per session** and shown to the agent on the next prompt.
- Stored in the gateway in-memory queue (`enqueueSystemEvent`).
### Event text
- `Exec started (node=<id>, id=<runId>)`
- `Exec finished (node=<id>, id=<runId>, code=<code>)` + optional output tail
- `Exec denied (node=<id>, id=<runId>, <reason>)`
### Transport
Option A (recommended):
- Runner sends Bridge `event` frames `exec.started` / `exec.finished`.
- Gateway `handleBridgeEvent` maps these into `enqueueSystemEvent`.
Option B:
- Gateway `exec` tool handles lifecycle directly (synchronous only).
## Exec flows
### Sandbox host
- Existing `exec` behavior (Docker or host when unsandboxed).
- PTY supported in non-sandbox mode only.
### Gateway host
- Gateway process executes on its own machine.
- Enforces local `exec-approvals.json` (security/ask/allowlist).
### Node host
- Gateway calls `node.invoke` with `system.run`.
- Runner enforces local approvals.
- Runner returns aggregated stdout/stderr.
- Optional Bridge events for start/finish/deny.
## Output caps
- Cap combined stdout+stderr at **200k**; keep **tail 20k** for events.
- Truncate with a clear suffix (e.g., `"… (truncated)"`).
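A minimal sketch of both caps, assuming head-truncation for stored output and a plain tail for event text (the plan leaves the cut direction open):

```typescript
const MAX_OUTPUT = 200_000; // combined stdout+stderr cap
const EVENT_TAIL = 20_000; // tail kept for system-event text

// Illustrative truncation with the clear suffix from the plan.
function capOutput(combined: string): string {
  if (combined.length <= MAX_OUTPUT) return combined;
  return combined.slice(0, MAX_OUTPUT) + "… (truncated)";
}

// Illustrative tail for event text (last 20k characters).
function eventTail(combined: string): string {
  return combined.length <= EVENT_TAIL ? combined : combined.slice(-EVENT_TAIL);
}
```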
## Slash commands
- `/exec host=<sandbox|gateway|node> security=<deny|allowlist|full> ask=<off|on-miss|always> node=<id>`
- Per-agent, per-session overrides; non-persistent unless saved via config.
- `/elevated on|off|ask|full` remains a shortcut for `host=gateway security=full` (with `full` skipping approvals).
## Cross-platform story
- The runner service is the portable execution target.
- UI is optional; if missing, `askFallback` applies.
- Windows/Linux support the same approvals JSON + socket protocol.
## Implementation phases
### Phase 1: config + exec routing
- Add config schema for `exec.host`, `exec.security`, `exec.ask`, `exec.node`.
- Update tool plumbing to respect `exec.host`.
- Add `/exec` slash command and keep `/elevated` alias.
### Phase 2: approvals store + gateway enforcement
- Implement `exec-approvals.json` reader/writer.
- Enforce allowlist + ask modes for `gateway` host.
- Add output caps.
### Phase 3: node runner enforcement
- Update node runner to enforce allowlist + ask.
- Add Unix socket prompt bridge to macOS app UI.
- Wire `askFallback`.
### Phase 4: events
- Add node → gateway Bridge events for exec lifecycle.
- Map to `enqueueSystemEvent` for agent prompts.
### Phase 5: UI polish
- Mac app: allowlist editor, per-agent switcher, ask policy UI.
- Node binding controls (optional).
## Testing plan
- Unit tests: allowlist matching (glob + case-insensitive).
- Unit tests: policy resolution precedence (tool param → agent override → global).
- Integration tests: node runner deny/allow/ask flows.
- Bridge event tests: node event → system event routing.
## Open risks
- UI unavailability: ensure `askFallback` is respected.
- Long-running commands: rely on timeout + output caps.
- Multi-node ambiguity: error unless node binding or explicit node param.
## Related docs
- [Exec tool](/tools/exec)
- [Exec approvals](/tools/exec-approvals)
- [Nodes](/nodes)
- [Elevated mode](/tools/elevated)


@@ -1,260 +0,0 @@
---
summary: "Design for an opt-in Firecrawl extension that adds search/scrape value without hardwiring Firecrawl into core defaults"
read_when:
- Designing Firecrawl integration work
- Evaluating web_search/web_fetch plugin extension surfaces
- Deciding whether Firecrawl belongs in core or as an extension
title: "Firecrawl Extension Design"
---
# Firecrawl Extension Design
## Goal
Ship Firecrawl as an **opt-in extension** that adds:
- explicit Firecrawl tools for agents,
- optional Firecrawl-backed `web_search` integration,
- self-hosted support,
- stronger security defaults than the current core fallback path,
without pushing Firecrawl into the default setup/onboarding path.
## Why this shape
Recent Firecrawl issues/PRs cluster into three buckets:
1. **Release/schema drift**
- Several releases rejected `tools.web.fetch.firecrawl` even though docs and runtime code supported it.
2. **Security hardening**
- Current `fetchFirecrawlContent()` still posts to the Firecrawl endpoint with raw `fetch()`, while the main web-fetch path uses the SSRF guard.
3. **Product pressure**
- Users want Firecrawl-native search/scrape flows, especially for self-hosted/private setups.
- Maintainers explicitly rejected wiring Firecrawl deeply into core defaults, setup flow, and browser behavior.
That combination argues for an extension, not more Firecrawl-specific logic in the default core path.
## Design principles
- **Opt-in, vendor-scoped**: no auto-enable, no setup hijack, no default tool-profile widening.
- **Extension owns Firecrawl-specific config**: prefer plugin config over growing `tools.web.*` again.
- **Useful on day one**: works even if core `web_search` / `web_fetch` extension surfaces stay unchanged.
- **Security-first**: endpoint fetches use the same guarded networking posture as other web tools.
- **Self-hosted-friendly**: config + env fallback, explicit base URL, no hosted-only assumptions.
## Proposed extension
Plugin id: `firecrawl`
### MVP capabilities
Register explicit tools:
- `firecrawl_search`
- `firecrawl_scrape`
Optional later:
- `firecrawl_crawl`
- `firecrawl_map`
Do **not** add Firecrawl browser automation in the first version. That was the part of PR #32543 that pulled Firecrawl too far into core behavior and raised the most maintainership concern.
## Config shape
Use plugin-scoped config:
```json5
{
plugins: {
entries: {
firecrawl: {
enabled: true,
config: {
apiKey: "FIRECRAWL_API_KEY",
baseUrl: "https://api.firecrawl.dev",
timeoutSeconds: 60,
maxAgeMs: 172800000,
proxy: "auto",
storeInCache: true,
onlyMainContent: true,
search: {
enabled: true,
defaultLimit: 5,
sources: ["web"],
categories: [],
scrapeResults: false,
},
scrape: {
formats: ["markdown"],
fallbackForWebFetchLikeUse: false,
},
},
},
},
},
}
```
### Credential resolution
Precedence:
1. `plugins.entries.firecrawl.config.apiKey`
2. `FIRECRAWL_API_KEY`
Base URL precedence:
1. `plugins.entries.firecrawl.config.baseUrl`
2. `FIRECRAWL_BASE_URL`
3. `https://api.firecrawl.dev`
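Both precedence chains reduce to nullish-coalescing; `resolveFirecrawl` is a hypothetical helper:

```typescript
interface FirecrawlPluginConfig {
  apiKey?: string;
  baseUrl?: string;
}

// Illustrative precedence: plugin config → environment → hosted default.
function resolveFirecrawl(cfg: FirecrawlPluginConfig, env: NodeJS.ProcessEnv) {
  return {
    apiKey: cfg.apiKey ?? env.FIRECRAWL_API_KEY,
    baseUrl:
      cfg.baseUrl ?? env.FIRECRAWL_BASE_URL ?? "https://api.firecrawl.dev",
  };
}
```

Keeping the default last means self-hosted users only have to set one value (config or env) to redirect every call away from the hosted endpoint.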
### Compatibility bridge
For the first release, the extension may also **read** existing core config at `tools.web.fetch.firecrawl.*` as a fallback source so existing users do not need to migrate immediately.
Write path stays plugin-local. Do not keep expanding core Firecrawl config surfaces.
## Tool design
### `firecrawl_search`
Inputs:
- `query`
- `limit`
- `sources`
- `categories`
- `scrapeResults`
- `timeoutSeconds`
Behavior:
- Calls Firecrawl `v2/search`
- Returns normalized OpenClaw-friendly result objects:
- `title`
- `url`
- `snippet`
- `source`
- optional `content`
- Wraps result content as untrusted external content
- Cache key includes query + relevant provider params
Why explicit tool first:
- Works today without changing `tools.web.search.provider`
- Avoids current schema/loader constraints
- Gives users Firecrawl value immediately
### `firecrawl_scrape`
Inputs:
- `url`
- `formats`
- `onlyMainContent`
- `maxAgeMs`
- `proxy`
- `storeInCache`
- `timeoutSeconds`
Behavior:
- Calls Firecrawl `v2/scrape`
- Returns markdown/text plus metadata:
- `title`
- `finalUrl`
- `status`
- `warning`
- Wraps extracted content the same way `web_fetch` does
- Shares cache semantics with web tool expectations where practical
Why explicit scrape tool:
- Sidesteps the unresolved `Readability -> Firecrawl -> basic HTML cleanup` ordering bug in core `web_fetch`
- Gives users a deterministic “always use Firecrawl” path for JS-heavy/bot-protected sites
## What the extension should not do
- No auto-adding `browser`, `web_search`, or `web_fetch` to `tools.alsoAllow`
- No default onboarding step in `openclaw setup`
- No Firecrawl-specific browser session lifecycle in core
- No change to built-in `web_fetch` fallback semantics in the extension MVP
## Phase plan
### Phase 1: extension-only, no core schema changes
Implement:
- `extensions/firecrawl/`
- plugin config schema
- `firecrawl_search`
- `firecrawl_scrape`
- tests for config resolution, endpoint selection, caching, error handling, and SSRF guard usage
This phase is enough to ship real user value.
### Phase 2: optional `web_search` provider integration
Support `tools.web.search.provider = "firecrawl"` only after fixing two core constraints:
1. `src/plugins/web-search-providers.ts` must load configured/installed web-search-provider plugins instead of a hardcoded bundled list.
2. `src/config/types.tools.ts` and `src/config/zod-schema.agent-runtime.ts` must stop hardcoding the provider enum in a way that blocks plugin-registered ids.
Recommended shape:
- keep built-in providers documented,
- allow any registered plugin provider id at runtime,
- validate provider-specific config via the provider plugin or a generic provider bag.
### Phase 3: optional `web_fetch` provider capability
Do this only if maintainers want vendor-specific fetch backends to participate in `web_fetch`.
Needed core addition:
- `registerWebFetchProvider` or equivalent fetch-backend extension surface
Without that capability, the extension should keep `firecrawl_scrape` as an explicit tool rather than trying to patch built-in `web_fetch`.
## Security requirements
The extension must treat Firecrawl as a **trusted operator-configured endpoint**, but still harden transport:
- Use SSRF-guarded fetch for the Firecrawl endpoint call, not raw `fetch()`
- Preserve self-hosted/private-network compatibility using the same trusted-web-tools endpoint policy used elsewhere
- Never log the API key
- Keep endpoint/base URL resolution explicit and predictable
- Treat Firecrawl-returned content as untrusted external content
This mirrors the intent behind the SSRF hardening PRs without assuming Firecrawl is a hostile multi-tenant surface.
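A minimal sketch of the guarded-fetch idea: resolve the target address first and refuse private/link-local ranges unless the endpoint is explicitly trusted (e.g. a self-hosted Firecrawl). Core's real SSRF guard is more thorough; names here are illustrative:

```ts
function isPrivateIPv4(ip: string): boolean {
  const [a, b] = ip.split(".").map(Number);
  return (
    a === 10 ||
    a === 127 ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168) ||
    (a === 169 && b === 254) // link-local / cloud metadata range
  );
}

function assertSafeTarget(ip: string, trusted: boolean): void {
  // trusted = operator explicitly configured this endpoint (self-hosted case)
  if (!trusted && isPrivateIPv4(ip)) {
    throw new Error(`refusing private address ${ip}`);
  }
}
```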
## Why not a skill
The repo already closed a Firecrawl skill PR in favor of ClawHub distribution. That is fine for optional user-installed prompt workflows, but it does not solve:
- deterministic tool availability,
- provider-grade config/credential handling,
- self-hosted endpoint support,
- caching,
- stable typed outputs,
- security review on network behavior.
This belongs as an extension, not a prompt-only skill.
## Success criteria
- Users can install/enable one extension and get reliable Firecrawl search/scrape without touching core defaults.
- Self-hosted Firecrawl works with config/env fallback.
- Extension endpoint fetches use guarded networking.
- No new Firecrawl-specific core onboarding/default behavior.
- Core can later adopt plugin-native `web_search` / `web_fetch` extension surfaces without redesigning the extension.
## Recommended implementation order
1. Build `firecrawl_scrape`
2. Build `firecrawl_search`
3. Add docs and examples
4. If desired, generalize `web_search` provider loading so the extension can back `web_search`
5. Only then consider a true `web_fetch` provider capability
@@ -1,89 +0,0 @@
---
title: "Outbound Session Mirroring Refactor (Issue #1520)"
description: Track outbound session mirroring refactor notes, decisions, tests, and open items.
summary: "Refactor notes for mirroring outbound sends into target channel sessions"
read_when:
- Working on outbound transcript/session mirroring behavior
- Debugging sessionKey derivation for send/message tool paths
---
# Outbound Session Mirroring Refactor (Issue #1520)
## Status
- In progress.
- Core + plugin channel routing updated for outbound mirroring.
- Gateway send now derives target session when sessionKey is omitted.
## Context
Outbound sends were mirrored into the _current_ agent session (tool session key) rather than the target channel session. Inbound routing uses channel/peer session keys, so outbound responses landed in the wrong session and first-contact targets often lacked session entries.
## Goals
- Mirror outbound messages into the target channel session key.
- Create session entries on outbound when missing.
- Keep thread/topic scoping aligned with inbound session keys.
- Cover core channels plus bundled extensions.
## Implementation Summary
- New outbound session routing helper:
- `src/infra/outbound/outbound-session.ts`
- `resolveOutboundSessionRoute` builds target sessionKey using `buildAgentSessionKey` (dmScope + identityLinks).
- `ensureOutboundSessionEntry` writes minimal `MsgContext` via `recordSessionMetaFromInbound`.
- `runMessageAction` (send) derives target sessionKey and passes it to `executeSendAction` for mirroring.
- `message-tool` no longer mirrors directly; it only resolves agentId from the current session key.
- Plugin send path mirrors via `appendAssistantMessageToSessionTranscript` using the derived sessionKey.
- Gateway send derives a target session key when none is provided (default agent), and ensures a session entry.
## Thread/Topic Handling
- Slack: replyTo/threadId -> `resolveThreadSessionKeys` (suffix).
- Discord: threadId/replyTo -> `resolveThreadSessionKeys` with `useSuffix=false` to match inbound (thread channel id already scopes session).
- Telegram: topic IDs map to `chatId:topic:<id>` via `buildTelegramGroupPeerId`.
## Extensions Covered
- Matrix, MS Teams, Mattermost, BlueBubbles, Nextcloud Talk, Zalo, Zalo Personal, Nostr, Tlon.
- Notes:
- Mattermost targets now strip `@` for DM session key routing.
- Zalo Personal uses DM peer kind for 1:1 targets (group only when `group:` is present).
- BlueBubbles group targets strip `chat_*` prefixes to match inbound session keys.
- Slack auto-thread mirroring matches channel ids case-insensitively.
- Gateway send lowercases provided session keys before mirroring.
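The normalization rules in the notes above can be sketched as follows: session keys are canonicalized to lowercase, and thread replies get a suffix only for channels where the base id does not already scope the thread (Slack yes, Discord no). Helper names are illustrative, not core's real API:

```ts
function outboundSessionKey(base: string, threadId?: string, useSuffix = true): string {
  // Suffix only when the base id does not already scope the thread.
  const key = threadId && useSuffix ? `${base}:thread:${threadId}` : base;
  // Canonicalize to lowercase before mirroring.
  return key.toLowerCase();
}

console.log(outboundSessionKey("slack:C123", "171.2", true)); // slack:c123:thread:171.2
console.log(outboundSessionKey("discord:Thread9", "9", false)); // discord:thread9
```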
## Decisions
- **Gateway send session derivation**: if `sessionKey` is provided, use it. If omitted, derive a sessionKey from target + default agent and mirror there.
- **Session entry creation**: always use `recordSessionMetaFromInbound` with `Provider/From/To/ChatType/AccountId/Originating*` aligned to inbound formats.
- **Target normalization**: outbound routing uses resolved targets (post `resolveChannelTarget`) when available.
- **Session key casing**: canonicalize session keys to lowercase on write and during migrations.
## Tests Added/Updated
- `src/infra/outbound/outbound.test.ts`
- Slack thread session key.
- Telegram topic session key.
- dmScope identityLinks with Discord.
- `src/agents/tools/message-tool.test.ts`
- Derives agentId from session key (no sessionKey passed through).
- `src/gateway/server-methods/send.test.ts`
- Derives session key when omitted and creates session entry.
## Open Items / Follow-ups
- Voice-call plugin uses custom `voice:<phone>` session keys. Outbound mapping is not standardized here; if message-tool should support voice-call sends, add explicit mapping.
- Confirm if any external plugin uses non-standard `From/To` formats beyond the bundled set.
## Files Touched
- `src/infra/outbound/outbound-session.ts`
- `src/infra/outbound/outbound-send-service.ts`
- `src/infra/outbound/message-action-runner.ts`
- `src/agents/tools/message-tool.ts`
- `src/gateway/server-methods/send.ts`
- Tests in:
- `src/infra/outbound/outbound.test.ts`
- `src/agents/tools/message-tool.test.ts`
- `src/gateway/server-methods/send.test.ts`
@@ -1,264 +0,0 @@
---
summary: "Plan: one clean plugin SDK + runtime for all messaging connectors"
read_when:
- Defining or refactoring the plugin architecture
- Migrating channel connectors to the plugin SDK/runtime
title: "Plugin SDK Refactor"
---
# Plugin SDK + Runtime Refactor Plan
Goal: every messaging connector is a plugin (bundled or external) using one stable API.
No plugin imports from `src/**` directly. All dependencies go through the SDK or runtime.
## Why now
- Current connectors mix patterns: direct core imports, dist-only bridges, and custom helpers.
- This makes upgrades brittle and blocks a clean external plugin surface.
## Target architecture (two layers)
### 1) Plugin SDK (compile-time, stable, publishable)
Scope: types, helpers, and config utilities. No runtime state, no side effects.
Contents (examples):
- Types: `ChannelPlugin`, adapters, `ChannelMeta`, `ChannelCapabilities`, `ChannelDirectoryEntry`.
- Config helpers: `buildChannelConfigSchema`, `setAccountEnabledInConfigSection`, `deleteAccountFromConfigSection`,
`applyAccountNameToChannelSection`.
- Pairing helpers: `PAIRING_APPROVED_MESSAGE`, `formatPairingApproveHint`.
- Setup entry points: host-owned `setup` + `setupWizard`; avoid broad public onboarding helpers.
- Tool param helpers: `createActionGate`, `readStringParam`, `readNumberParam`, `readReactionParams`, `jsonResult`.
- Docs link helper: `formatDocsLink`.
Delivery:
- Publish as `openclaw/plugin-sdk` (or export from core under `openclaw/plugin-sdk`).
- Semver with explicit stability guarantees.
### 2) Plugin Runtime (execution surface, injected)
Scope: everything that touches core runtime behavior.
Accessed via `OpenClawPluginApi.runtime` so plugins never import `src/**`.
Proposed surface (minimal but complete):
```ts
export type PluginRuntime = {
channel: {
text: {
chunkMarkdownText(text: string, limit: number): string[];
resolveTextChunkLimit(cfg: OpenClawConfig, channel: string, accountId?: string): number;
hasControlCommand(text: string, cfg: OpenClawConfig): boolean;
};
reply: {
dispatchReplyWithBufferedBlockDispatcher(params: {
ctx: unknown;
cfg: unknown;
dispatcherOptions: {
deliver: (payload: {
text?: string;
mediaUrls?: string[];
mediaUrl?: string;
}) => void | Promise<void>;
onError?: (err: unknown, info: { kind: string }) => void;
};
}): Promise<void>;
createReplyDispatcherWithTyping?: unknown; // adapter for Teams-style flows
};
routing: {
resolveAgentRoute(params: {
cfg: unknown;
channel: string;
accountId: string;
peer: { kind: RoutePeerKind; id: string };
}): { sessionKey: string; accountId: string };
};
pairing: {
buildPairingReply(params: { channel: string; idLine: string; code: string }): string;
readAllowFromStore(channel: string): Promise<string[]>;
upsertPairingRequest(params: {
channel: string;
id: string;
meta?: { name?: string };
}): Promise<{ code: string; created: boolean }>;
};
media: {
fetchRemoteMedia(params: { url: string }): Promise<{ buffer: Buffer; contentType?: string }>;
saveMediaBuffer(
buffer: Uint8Array,
contentType: string | undefined,
direction: "inbound" | "outbound",
maxBytes: number,
): Promise<{ path: string; contentType?: string }>;
};
mentions: {
buildMentionRegexes(cfg: OpenClawConfig, agentId?: string): RegExp[];
matchesMentionPatterns(text: string, regexes: RegExp[]): boolean;
};
groups: {
resolveGroupPolicy(
cfg: OpenClawConfig,
channel: string,
accountId: string,
groupId: string,
): {
allowlistEnabled: boolean;
allowed: boolean;
groupConfig?: unknown;
defaultConfig?: unknown;
};
resolveRequireMention(
cfg: OpenClawConfig,
channel: string,
accountId: string,
groupId: string,
override?: boolean,
): boolean;
};
debounce: {
createInboundDebouncer<T>(opts: {
debounceMs: number;
buildKey: (v: T) => string | null;
shouldDebounce: (v: T) => boolean;
onFlush: (entries: T[]) => Promise<void>;
onError?: (err: unknown) => void;
}): { push: (v: T) => void; flush: () => Promise<void> };
resolveInboundDebounceMs(cfg: OpenClawConfig, channel: string): number;
};
commands: {
resolveCommandAuthorizedFromAuthorizers(params: {
useAccessGroups: boolean;
authorizers: Array<{ configured: boolean; allowed: boolean }>;
}): boolean;
};
};
logging: {
shouldLogVerbose(): boolean;
getChildLogger(name: string): PluginLogger;
};
state: {
resolveStateDir(cfg: OpenClawConfig): string;
};
};
```
Notes:
- Runtime is the only way to access core behavior.
- SDK is intentionally small and stable.
- Each runtime method maps to an existing core implementation (no duplication).
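To make the "no `src/**` imports" rule concrete, here is a hedged sketch of a connector that depends only on the injected runtime surface; the runtime object is a stub implementing just the one method this example touches:

```ts
// Narrow slice of the proposed PluginRuntime surface.
type MinimalRuntime = {
  channel: { text: { chunkMarkdownText(text: string, limit: number): string[] } };
};

function deliverLongReply(runtime: MinimalRuntime, text: string, limit: number): string[] {
  // Plugin code calls the runtime contract, never core internals.
  return runtime.channel.text.chunkMarkdownText(text, limit);
}

// Test stub standing in for the real core-backed runtime.
const stub: MinimalRuntime = {
  channel: {
    text: {
      chunkMarkdownText: (t, l) => {
        const out: string[] = [];
        for (let i = 0; i < t.length; i += l) out.push(t.slice(i, i + l));
        return out;
      },
    },
  },
};
console.log(deliverLongReply(stub, "abcdef", 4)); // [ 'abcd', 'ef' ]
```

Because the plugin only sees the contract, the core implementation behind each method can change freely.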
## Migration plan (phased, safe)
### Phase 0: scaffolding
- Introduce `openclaw/plugin-sdk`.
- Add `api.runtime` to `OpenClawPluginApi` with the surface above.
- Maintain existing imports during a transition window (deprecation warnings).
### Phase 1: bridge cleanup (low risk)
- Replace per-extension `core-bridge.ts` with `api.runtime`.
- Migrate BlueBubbles, Zalo, Zalo Personal first (already close).
- Remove duplicated bridge code.
### Phase 2: light direct-import plugins
- Migrate Matrix to SDK + runtime.
- Validate onboarding, directory, group mention logic.
### Phase 3: heavy direct-import plugins
- Migrate MS Teams (largest set of runtime helpers).
- Ensure reply/typing semantics match current behavior.
### Phase 4: iMessage pluginization
- Move iMessage into `extensions/imessage`.
- Replace direct core calls with `api.runtime`.
- Keep config keys, CLI behavior, and docs intact.
### Phase 5: enforcement
- Add lint rule / CI check: no `extensions/**` imports from `src/**`.
- Add plugin SDK/version compatibility checks (runtime + SDK semver).
## Compatibility and versioning
- SDK: semver, published, documented changes.
- Runtime: versioned per core release. Add `api.runtime.version`.
- Plugins declare a required runtime range (e.g., `openclawRuntime: ">=2026.2.0"`).
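A minimal sketch of checking a declared runtime range like `">=2026.2.0"`. Real enforcement would use a semver library; this covers only the `>=` form for illustration:

```ts
function satisfiesMinimum(version: string, range: string): boolean {
  const min = range.replace(/^>=/, "").trim();
  const toNums = (v: string) => v.split(".").map(Number);
  const [a, b] = [toNums(version), toNums(min)];
  // Compare component by component, treating missing parts as 0.
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0;
    const y = b[i] ?? 0;
    if (x !== y) return x > y;
  }
  return true; // equal versions satisfy >=
}

console.log(satisfiesMinimum("2026.3.1", ">=2026.2.0")); // true
console.log(satisfiesMinimum("2026.1.9", ">=2026.2.0")); // false
```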
## Testing strategy
- Adapter-level unit tests (runtime functions exercised with real core implementation).
- Golden tests per plugin: ensure no behavior drift (routing, pairing, allowlist, mention gating).
- A single end-to-end plugin sample used in CI (install + run + smoke).
## Open questions
- Where to host SDK types: separate package or core export?
- Runtime type distribution: in SDK (types only) or in core?
- How to expose docs links for bundled vs external plugins?
- Do we allow limited direct core imports for in-repo plugins during transition?
## Success criteria
- All channel connectors are plugins using SDK + runtime.
- No `extensions/**` imports from `src/**`.
- New connector templates depend only on SDK + runtime.
- External plugins can be developed and updated without core source access.
Related docs: [Plugins](/tools/plugin), [Channels](/channels/index), [Configuration](/gateway/configuration).
## Capability plan alignment
The plugin SDK refactor now aligns with the public capability model documented
in [Plugins](/tools/plugin#public-capability-model).
Key decisions:
- Capabilities are the public plugin model. Registration is explicit and typed.
- Legacy hook-only plugins remain supported without migration.
- Plugin shapes (plain-capability, hybrid-capability, hook-only, non-capability)
are classified from actual registration behavior.
- `openclaw plugins inspect` provides canonical deep introspection for any
loaded plugin, showing shape, capabilities, hooks, tools, and diagnostics.
- Export boundary: export capabilities, not implementation convenience. Trim
non-contract helper exports.
Required test matrix for the capability model:
- hook-only legacy plugin fixture
- plain capability plugin fixture
- hybrid capability plugin fixture
- real-world legacy hook-style plugin fixture
- `before_agent_start` still works
- typed hooks remain additive
- capability usage and plugin shape are inspectable
## Implemented channel-owned capabilities
Recent refactor work widened the channel plugin contract so core can stop owning
channel-specific UX and routing behavior:
- `messaging.buildCrossContextComponents`: channel-owned cross-context UI markers
(for example Discord components v2 containers)
- `messaging.enableInteractiveReplies`: channel-owned reply normalization toggles
(for example Slack interactive replies)
- `messaging.resolveOutboundSessionRoute`: channel-owned outbound session routing
- `status.formatCapabilitiesProbe` / `status.buildCapabilitiesDiagnostics`: channel-owned
`/channels capabilities` probe display and extra audits/scopes
- `threading.resolveAutoThreadId`: channel-owned same-conversation auto-threading
- `threading.resolveReplyTransport`: channel-owned reply-vs-thread delivery mapping
- `actions.requiresTrustedRequesterSender`: channel-owned privileged action trust gates
- `execApprovals.*`: channel-owned exec approval surface state, forwarding suppression,
pending payload UX, and pre-delivery hooks
- `lifecycle.onAccountConfigChanged` / `lifecycle.onAccountRemoved`: channel-owned cleanup on
config mutation/removal
- `allowlist.supportsScope`: channel-owned allowlist scope advertisement
These capabilities should be preferred over new `channel === "discord"` /
`telegram` branches in shared core flows.
@@ -1,93 +0,0 @@
---
summary: "Strict config validation + doctor-only migrations"
read_when:
- Designing or implementing config validation behavior
- Working on config migrations or doctor workflows
- Handling plugin config schemas or plugin load gating
title: "Strict Config Validation"
---
# Strict config validation (doctor-only migrations)
## Goals
- **Reject unknown config keys everywhere** (root + nested), except root `$schema` metadata.
- **Reject plugin config without a schema**; don't load that plugin.
- **Remove legacy auto-migration on load**; migrations run via doctor only.
- **Auto-run doctor (dry-run) on startup**; if invalid, block non-diagnostic commands.
## Non-goals
- Backward compatibility on load (legacy keys do not auto-migrate).
- Silent drops of unrecognized keys.
## Strict validation rules
- Config must match the schema exactly at every level.
- Unknown keys are validation errors (no passthrough at root or nested), except root `$schema` when it is a string.
- `plugins.entries.<id>.config` must be validated by the plugin's schema.
- If a plugin lacks a schema, **reject plugin load** and surface a clear error.
- Unknown `channels.<id>` keys are errors unless a plugin manifest declares the channel id.
- Plugin manifests (`openclaw.plugin.json`) are required for all plugins.
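The rules above boil down to a recursive unknown-key walk. The real implementation would use strict zod objects; this hand-rolled sketch just shows the rule, including the root-level `$schema` exception:

```ts
// `true` = leaf key allowed; nested object = allowed subtree.
type Allowed = { [key: string]: Allowed | true };

function findUnknownKeys(
  value: unknown,
  allowed: Allowed,
  path: string[] = [],
): string[] {
  if (typeof value !== "object" || value === null || Array.isArray(value)) return [];
  const errors: string[] = [];
  for (const [key, child] of Object.entries(value as Record<string, unknown>)) {
    // Root-only exception: string-valued $schema metadata is tolerated.
    if (path.length === 0 && key === "$schema" && typeof child === "string") continue;
    const rule = allowed[key];
    if (rule === undefined) {
      errors.push([...path, key].join("."));
    } else if (rule !== true) {
      errors.push(...findUnknownKeys(child, rule, [...path, key]));
    }
  }
  return errors;
}

const schema: Allowed = { gateway: { port: true } };
console.log(findUnknownKeys({ $schema: "x", gateway: { port: 1, oops: 2 } }, schema));
// → [ 'gateway.oops' ]
```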
## Plugin schema enforcement
- Each plugin provides a strict JSON Schema for its config (inline in the manifest).
- Plugin load flow:
1. Resolve plugin manifest + schema (`openclaw.plugin.json`).
2. Validate config against the schema.
3. If missing schema or invalid config: block plugin load, record error.
- Error message includes:
- Plugin id
- Reason (missing schema / invalid config)
- Path(s) that failed validation
- Disabled plugins keep their config, but Doctor + logs surface a warning.
## Doctor flow
- Doctor runs **every time** config is loaded (dry-run by default).
- If config invalid:
- Print a summary + actionable errors.
- Instruct: `openclaw doctor --fix`.
- `openclaw doctor --fix`:
- Applies migrations.
- Removes unknown keys.
- Writes updated config.
## Command gating (when config is invalid)
Allowed (diagnostic-only):
- `openclaw doctor`
- `openclaw logs`
- `openclaw health`
- `openclaw help`
- `openclaw status`
- `openclaw gateway status`
Everything else must hard-fail with: “Config invalid. Run `openclaw doctor --fix`.”
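A sketch of the gate, with the command list taken straight from this doc (the gating function name is illustrative):

```ts
const DIAGNOSTIC_COMMANDS = new Set([
  "doctor",
  "logs",
  "health",
  "help",
  "status",
  "gateway status",
]);

function gateCommand(command: string, configValid: boolean): void {
  // Diagnostic commands always run; everything else hard-fails on bad config.
  if (!configValid && !DIAGNOSTIC_COMMANDS.has(command)) {
    throw new Error("Config invalid. Run `openclaw doctor --fix`.");
  }
}

gateCommand("logs", false); // allowed even with invalid config
```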
## Error UX format
- Single summary header.
- Grouped sections:
- Unknown keys (full paths)
- Legacy keys / migrations needed
- Plugin load failures (plugin id + reason + path)
## Implementation touchpoints
- `src/config/zod-schema.ts`: remove root passthrough; strict objects everywhere.
- `src/config/zod-schema.providers.ts`: ensure strict channel schemas.
- `src/config/validation.ts`: fail on unknown keys; do not apply legacy migrations.
- `src/config/io.ts`: remove legacy auto-migrations; always run doctor dry-run.
- `src/config/legacy*.ts`: move usage to doctor only.
- `src/plugins/*`: add schema registry + gating.
- CLI command gating in `src/cli`.
## Tests
- Unknown key rejection (root + nested).
- Plugin missing schema → plugin load blocked with clear error.
- Invalid config → gateway startup blocked except diagnostic commands.
- Doctor dry-run auto; `doctor --fix` writes corrected config.
@@ -1,424 +0,0 @@
---
read_when:
- Planning a unified network protocol for nodes + operator clients
- Reworking approvals, pairing, TLS, and presence across devices
summary: "Clawnet refactor: unify network protocol, roles, auth, approvals, identity"
title: "Clawnet Refactor"
x-i18n:
generated_at: "2026-02-03T07:55:03Z"
model: claude-opus-4-5
provider: pi
source_hash: 719b219c3b326479658fe6101c80d5273fc56eb3baf50be8535e0d1d2bb7987f
source_path: refactor/clawnet.md
workflow: 15
---
# Clawnet refactor (protocol + auth unification)
## Hi
Hi Peter — great direction; this unlocks simpler UX + stronger security.
## Purpose
Single, rigorous document for:
- Current state: protocols, flows, trust boundaries.
- Pain points: approvals, multihop routing, UI duplication.
- Proposed new state: one protocol, scoped roles, unified auth/pairing, TLS pinning.
- Identity model: stable IDs + cute slugs.
- Migration plan, risks, open questions.
## Goals (from discussion)
- One protocol for all clients (mac app, CLI, iOS, Android, headless node).
- Every network participant authenticated + paired.
- Role clarity: nodes vs operators.
- Central approvals routed to where the user is.
- TLS encryption + optional pinning for all remote traffic.
- Minimal code duplication.
- Single machine should appear once (no UI/node duplicate entry).
## Non-goals (explicit)
- Removing capability separation (still need least-privilege).
- Exposing the full Gateway control plane without scope checks.
- Making auth depend on human labels (slugs remain non-security).
---
# Current state (as-is)
## Two protocols
### 1) Gateway WebSocket (control plane)
- Full API surface: config, channels, models, sessions, agent runs, logs, nodes, etc.
- Default bind: loopback. Remote access via SSH/Tailscale.
- Auth: token/password via `connect`.
- No TLS pinning (relies on loopback/tunnels).
- Code:
  - `src/gateway/server/ws-connection/message-handler.ts`
  - `src/gateway/client.ts`
  - `docs/gateway/protocol.md`
### 2) Bridge (node transport)
- Narrow allowlisted surface, node identity + pairing.
- JSONL over TCP, optional TLS + cert fingerprint pinning.
- TLS advertises the fingerprint in discovery TXT.
- Code:
  - `src/infra/bridge/server/connection.ts`
  - `src/gateway/server-bridge.ts`
  - `src/node-host/bridge-client.ts`
  - `docs/gateway/bridge-protocol.md`
## Current control-plane clients
- CLI → Gateway WS via `callGateway` (`src/gateway/call.ts`).
- macOS app UI → Gateway WS (`GatewayConnection`).
- Web control UI → Gateway WS.
- ACP → Gateway WS.
- Browser control uses its own HTTP control server.
## Current nodes
- The macOS app in node mode connects to the Gateway bridge (`MacNodeBridgeSession`).
- iOS/Android apps connect to the Gateway bridge.
- Pairing + per-node tokens are stored on the Gateway.
## Current approval flow (exec)
- Agents use `system.run` via the Gateway.
- The Gateway invokes the node over the bridge.
- The node runtime decides on approvals.
- UI prompts are shown by the mac app (when node == mac app).
- The node returns `invoke-res` to the Gateway.
- Multihop: the UI is tied to the node host.
## Current presence + identity
- Gateway presence entries from WS clients.
- Node presence entries from the bridge.
- The mac app can show two entries for the same machine (UI + node).
- Node identity lives in the pairing store; UI identity is separate.
---
# Problems / pain points
- Two protocol stacks to maintain (WS + Bridge).
- Approvals on remote nodes: prompts appear on the node host, not where the user is.
- TLS pinning exists only for the bridge; WS relies on SSH/Tailscale.
- Identity duplication: the same machine shows up as multiple instances.
- Role ambiguity: UI + node + CLI capabilities are not cleanly separated.
---
# Proposed new state (Clawnet)
## One protocol, two roles
A single WS protocol with roles + scopes.
- **Role: node** (capability host)
- **Role: operator** (control plane)
- Optional **scopes** for operators:
  - `operator.read` (status + viewing)
  - `operator.write` (agent runs, sends)
  - `operator.admin` (config, channels, models)
### Role behavior
**Node**
- Can register capabilities (`caps`, `commands`, permissions).
- Can receive `invoke` commands (`system.run`, `camera.*`, `canvas.*`, `screen.record`, etc.).
- Can emit events: `voice.transcript`, `agent.request`, `chat.subscribe`.
- Cannot call config/model/channel/session/agent control-plane APIs.
**Operator**
- Full control-plane API (limited by scope).
- Receives all approvals.
- Does not execute OS actions directly; routes to nodes.
### Key rule
Roles are per connection, not per device. One device can open two connections with separate roles.
---
# Unified auth + pairing
## Client identity
Every client presents:
- `deviceId` (stable, derived from a device key).
- `displayName` (human name).
- `role` + `scope` + `caps` + `commands`.
## Pairing flow (unified)
- The client connects unauthenticated.
- The Gateway creates a **pairing request** for that `deviceId`.
- An operator gets a prompt; approve/deny.
- The Gateway issues a credential bound to:
  - the device public key
  - role
  - scopes
  - caps/commands
- The client persists the token and reconnects authenticated.
## Device-bound auth (avoid bearer-token replay)
Preferred: device keypairs.
- The device generates a keypair once.
- `deviceId = fingerprint(publicKey)`.
- The Gateway sends a nonce; the device signs it; the Gateway verifies.
- Tokens are issued to the public key (proof of possession), not to a string.
Alternatives:
- mTLS (client certs): strongest, higher ops complexity.
- Short-lived bearer tokens only as a transitional phase (early rotation + revocation).
## Silent approval (SSH heuristic)
Define this precisely to avoid a weak link. Pick one:
- **Local-only**: auto-pair when the client connects via loopback/Unix socket.
- **Challenge via SSH**: the Gateway issues a nonce; the client proves SSH access by fetching it.
- **Physical-presence window**: after a local approval in the Gateway host UI, allow auto-pairing for a short window (e.g. 10 minutes).
Always log + record auto-approvals.
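A minimal sketch of the nonce challenge described above, using Node's built-in ed25519 support. Identifiers are illustrative; the real handshake would run over the wire:

```ts
import { createHash, generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// Device side: one-time keypair; deviceId = fingerprint(publicKey).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const spki = publicKey.export({ type: "spki", format: "der" });
const deviceId = createHash("sha256").update(spki).digest("hex").slice(0, 16);

// Gateway side: issue a nonce, then verify the device's signature over it.
const nonce = randomBytes(32);
const signature = sign(null, nonce, privateKey); // ed25519 takes no digest arg
const ok = verify(null, nonce, publicKey, signature);
console.log(deviceId, ok); // stable fingerprint prefix, true
```

Tokens issued after this exchange bind to the public key, so a stolen token alone cannot authenticate.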
---
# TLS everywhere (dev + prod)
## Reuse the existing bridge TLS
Use the current TLS runtime + fingerprint pinning:
- `src/infra/bridge/server/tls.ts`
- the fingerprint-verification logic in `src/node-host/bridge-client.ts`
## Apply it to WS
- The WS server serves TLS with the same cert/key + fingerprint support.
- WS clients can pin the fingerprint (optional).
- Discovery advertises TLS + fingerprints for all endpoints.
- Discovery is only a locator hint; never a trust anchor.
## Why
- Reduces confidentiality reliance on SSH/Tailscale.
- Makes remote mobile connections secure by default.
---
# Approvals redesign (centralized)
## Current
Approvals happen on the node host (mac app node runtime). Prompts appear wherever the node runs.
## Proposed
Approvals are **Gateway-hosted**; the UI is delivered to operator clients.
### New flow
1. The Gateway receives a `system.run` intent (agent).
2. The Gateway creates an approval record: `approval.requested`.
3. Operator UIs show the prompt.
4. The approval decision goes to the Gateway: `approval.resolve`.
5. If approved, the Gateway invokes the node command.
6. The node executes and returns `invoke-res`.
### Approval semantics (hardened)
- Broadcast to all operators; only the active UI shows a modal (others show a toast).
- First resolver wins; the Gateway rejects later resolutions as already settled.
- Default timeout: deny after N seconds (e.g. 60), with the reason logged.
- Resolving requires the `operator.approvals` scope.
## Benefits
- Prompts appear where the user is (mac/phone).
- Consistent approvals for remote nodes.
- Node runtimes stay headless; no UI dependency.
---
# Role clarity examples
## iPhone app
- **Node role** for: mic, camera, voice chat, location, push-to-talk.
- Optional **operator.read** for status and chat views.
- Optional **operator.write/admin** only when explicitly enabled.
## macOS app
- Operator role by default (control UI).
- Node role when "Mac node" is enabled (system.run, screen, camera).
- Both connections share the same deviceId → one merged UI entry.
## CLI
- Always operator role.
- Scopes derived per subcommand:
  - `status`, `logs` → read
  - `agent`, `message` → write
  - `config`, `channels` → admin
  - approvals + pairing → `operator.approvals` / `operator.pairing`
---
# Identity + slugs
## Stable IDs
Required for auth; never change.
Preferred:
- keypair fingerprint (hash of the public key).
## Cute slugs (lobster-themed)
Human labels only.
- Examples: `scarlet-claw`, `saltwave`, `mantis-pinch`.
- Stored in a Gateway registry, editable.
- Collision handling: `-2`, `-3`.
## UI grouping
Same `deviceId` across roles → a single "instance" row:
- badges: `operator`, `node`
- show capabilities + last seen.
---
# Migration strategy
## Phase 0: document + align
- Publish this doc.
- Inventory all protocol calls + approval flows.
## Phase 1: add roles/scopes to WS
- Extend the `connect` params with `role`, `scope`, `deviceId`.
- Add allowlist restrictions for the node role.
## Phase 2: bridge compatibility
- Keep the bridge running.
- Add WS node support in parallel.
- Gate features via config flags.
## Phase 3: central approvals
- Add approval request + resolve events to WS.
- Update the mac app UI to prompt + respond.
- Node runtimes stop prompting UI.
## Phase 4: TLS unification
- Add TLS config for WS using the bridge TLS runtime.
- Add pinning to clients.
## Phase 5: deprecate the bridge
- Migrate iOS/Android/mac nodes to WS.
- Keep the bridge as a fallback; remove once stable.
## Phase 6: device-bound auth
- Require key-based identity for all non-local connections.
- Add revocation + rotation UI.
---
# Security notes
- Roles/allowlists are enforced at the Gateway boundary.
- No client gets the "full" API without operator scopes.
- *All* connections require pairing.
- TLS + pinning reduce MITM risk for mobile devices.
- SSH silent approval is a convenience; still logged + revocable.
- Discovery is never a trust anchor.
- Capability claims are validated against per-platform/type server allowlists.
# Streaming + large payloads (node media)
The WS control plane is fine for small messages, but nodes also do:
- camera clips
- screen recordings
- audio streams
Options:
1. WS binary frames + chunking + backpressure rules.
2. A separate streaming endpoint (still TLS + auth).
3. Keep the bridge longer for media-heavy commands; migrate them last.
Pick one before implementing to avoid drift.
# Capability + command policy
- Node-reported caps/commands are treated as **claims**.
- The Gateway enforces per-platform allowlists.
- Any new command requires operator approval or an explicit allowlist change.
- Audit changes with timestamps.
# Audit + rate limits
- Log: pairing requests, approvals/denials, token issuance/rotation/revocation.
- Rate-limit pairing spam and approval prompts.
# Protocol hygiene
- Explicit protocol versions + error codes.
- Reconnect rules + heartbeat policy.
- Presence TTLs and last-seen semantics.
---
# Open questions
1. Single device running both roles: token model
   - Suggest separate tokens per role (node vs operator).
   - Same deviceId, different scopes; cleaner revocation.
2. Operator scope granularity
   - read/write/admin + approvals + pairing (minimum viable).
   - Consider per-feature scopes later.
3. Token rotation + revocation UX
   - Auto-rotate on role change.
   - UI to revoke by deviceId + role.
4. Discovery
   - Extend the current Bonjour TXT with WS TLS fingerprints + role hints.
   - Treat it as a locator hint only.
5. Cross-network approvals
   - Broadcast to all operator clients; the active UI shows a modal.
   - First responder wins; the Gateway enforces atomicity.
---
# Summary (TL;DR)
- Current: WS control plane + Bridge node transport.
- Pain: approvals + duplication + two stacks.
- Proposal: one WS protocol with explicit roles + scopes, unified pairing + TLS pinning, Gateway-hosted approvals, stable device IDs + cute slugs.
- Result: simpler UX, stronger security, less duplication, better mobile routing.
@@ -1,323 +0,0 @@
---
read_when:
- Designing exec host routing or exec approvals
- Implementing the node runner + UI IPC
- Adding exec host security modes and slash commands
summary: "Refactor plan: exec host routing, node approvals, and headless runner"
title: "Exec Host Refactor"
x-i18n:
generated_at: "2026-02-03T07:54:43Z"
model: claude-opus-4-5
provider: pi
source_hash: 53a9059cbeb1f3f1dbb48c2b5345f88ca92372654fef26f8481e651609e45e3a
source_path: refactor/exec-host.md
workflow: 15
---
# Exec host refactor plan
## Goals
- Add `exec.host` + `exec.security` to route execution between **sandbox**, **gateway**, and **node**.
- Stay **secure by default**: no cross-host execution unless explicitly enabled.
- Split execution into a **headless runner service** with an optional UI (macOS app) attached over local IPC.
- Provide **per-agent** policies, allowlists, ask modes, and node binding.
- Support **ask mode** both *with* and *without* allowlists.
- Cross-platform: Unix socket + token auth (consistent across macOS/Linux/Windows).
## Non-goals
- No legacy allowlist migration or legacy schema support.
- No PTY/streaming for node exec (aggregated output only).
- No new network layer beyond the existing Bridge + Gateway.
## Decisions (locked)
- **Config keys:** `exec.host` + `exec.security` (per-agent overrides allowed).
- **Elevation:** keep `/elevated` as an alias for Gateway full access.
- **Ask default:** `on-miss`.
- **Approval store:** `~/.openclaw/exec-approvals.json` (JSON, no legacy migration).
- **Runner:** headless system service; the UI app hosts a Unix socket for approvals.
- **Node identity:** reuse the existing `nodeId`.
- **Socket auth:** Unix socket + token, cross-platform; split later if needed.
- **Node host state:** `~/.openclaw/node.json` (node id + pairing token).
- **macOS exec host:** run `system.run` inside the macOS app; the node host service forwards requests over local IPC.
- **No XPC helper**: stick with Unix socket + token + peer checks.
## Key concepts
### Hosts
- `sandbox`: Docker exec (current behavior).
- `gateway`: execute on the Gateway host.
- `node`: execute on a node runner over the Bridge (`system.run`).
### Security modes
- `deny`: always block.
- `allowlist`: only allow matches.
- `full`: allow everything (equivalent to elevated).
### Ask modes
- `off`: never ask.
- `on-miss`: ask only when the allowlist does not match.
- `always`: ask every time.
Ask is **independent of** allowlists; an allowlist can be combined with `always` or `on-miss`.
### Policy resolution (per exec)
1. Resolve `exec.host` (tool param → agent override → global default).
2. Resolve `exec.security` and `exec.ask` (same precedence).
3. If the host is `sandbox`, proceed with local sandbox execution.
4. If the host is `gateway` or `node`, apply the security + ask policy on that host.
## Secure defaults
- `exec.host = sandbox` by default.
- `exec.security = deny` by default for `gateway` and `node`.
- `exec.ask = on-miss` by default (only relevant when security allows).
- With no node binding set, **agents may target any node**, but only where policy allows.
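The tool-param → agent-override → global-default precedence above can be sketched as a first-defined-wins resolver. Names are illustrative, not the real config API:

```ts
type ExecHost = "sandbox" | "gateway" | "node";

function resolveExecHost(
  toolParam: ExecHost | undefined,
  agentOverride: ExecHost | undefined,
  globalDefault: ExecHost | undefined,
): ExecHost {
  // First defined value wins; fall back to the secure default.
  return toolParam ?? agentOverride ?? globalDefault ?? "sandbox";
}

console.log(resolveExecHost(undefined, "node", "gateway")); // node
console.log(resolveExecHost(undefined, undefined, undefined)); // sandbox
```

The same shape applies to `exec.security` and `exec.ask`, each with its own secure fallback.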
## Config surface
### Tool params
- `exec.host` (optional): `sandbox | gateway | node`.
- `exec.security` (optional): `deny | allowlist | full`.
- `exec.ask` (optional): `off | on-miss | always`.
- `exec.node` (optional): node id/name used when `host=node`.
### Config keys (global)
- `tools.exec.host`
- `tools.exec.security`
- `tools.exec.ask`
- `tools.exec.node` (default node binding)
### Config keys (per agent)
- `agents.list[].tools.exec.host`
- `agents.list[].tools.exec.security`
- `agents.list[].tools.exec.ask`
- `agents.list[].tools.exec.node`
### Aliases
- `/elevated on` = set `tools.exec.host=gateway` + `tools.exec.security=full` for the agent session.
- `/elevated off` = restore the previous exec settings for the agent session.
## Approval store (JSON)
Path: `~/.openclaw/exec-approvals.json`
Used for:
- local policy + allowlists on the **exec host** (Gateway or node runner).
- ask fallback when no UI is available.
- IPC credentials for UI clients.
Proposed schema (v1):
```json
{
"version": 1,
"socket": {
"path": "~/.openclaw/exec-approvals.sock",
"token": "base64-opaque-token"
},
"defaults": {
"security": "deny",
"ask": "on-miss",
"askFallback": "deny"
},
"agents": {
"agent-id-1": {
"security": "allowlist",
"ask": "on-miss",
"allowlist": [
{
"pattern": "~/Projects/**/bin/rg",
"lastUsedAt": 0,
"lastUsedCommand": "rg -n TODO",
"lastResolvedPath": "/Users/user/Projects/.../bin/rg"
}
]
}
}
}
```
Notes:
- No legacy allowlist format.
- `askFallback` applies only when `ask` is required and the UI is unreachable.
- File permissions: `0600`.
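Allowlist patterns like `~/Projects/**/bin/rg` imply case-insensitive glob matching. A hand-rolled sketch of the semantics (real matching would likely use a glob library, and this simplified `**` handling does not cover zero-segment matches):

```ts
function globToRegExp(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex specials
    .replace(/\*\*/g, "\0") // placeholder for **
    .replace(/\*/g, "[^/]*") // * stays within a path segment
    .replace(/\0/g, ".*"); // ** crosses segments
  return new RegExp(`^${escaped}$`, "i");
}

function allowlistMatches(patterns: string[], resolvedPath: string): boolean {
  return patterns.some((p) => globToRegExp(p).test(resolvedPath));
}

console.log(allowlistMatches(["~/Projects/**/bin/rg"], "~/Projects/x/y/bin/rg")); // true
```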
## Runner service (headless)
### Role
- Enforce `exec.security` + `exec.ask` locally.
- Execute system commands and return output.
- Emit Bridge events for the exec lifecycle (optional but recommended).
### Service lifecycle
- Launchd/daemon on macOS; system services on Linux/Windows.
- The approvals JSON is local to the exec host.
- The UI hosts the local Unix socket; the runner connects on demand.
## UI integration (macOS app)
### IPC
- Unix socket at `~/.openclaw/exec-approvals.sock` (0600).
- Token stored in `exec-approvals.json` (0600).
- Peer check: same UID only.
- Challenge/response: nonce + HMAC(token, request-hash) to prevent replay.
- Short TTL (e.g. 10s) + max payload size + rate limiting.
### Ask flow (macOS app exec host)
1. The node service receives `system.run` from the Gateway.
2. The node service connects to the local socket and sends the prompt/exec request.
3. The app verifies peer + token + HMAC + TTL, then shows a dialog if needed.
4. The app executes the command in the UI context and returns output.
5. The node service returns the output to the Gateway.
If the UI is missing:
- apply `askFallback` (`deny|allowlist|full`).
### Diagram (SCI)
```
Agent -> Gateway -> Bridge -> Node Service (TS)
| IPC (UDS + token + HMAC + TTL)
v
Mac App (UI + TCC + system.run)
```
## Node identity + binding
- Reuse the existing `nodeId` from Bridge pairing.
- Binding model:
  - `tools.exec.node` pins an agent to a specific node.
  - If unset, the agent may pick any node (policy still enforces defaults).
- Node selection resolution:
  - exact `nodeId` match
  - `displayName` (normalized)
  - `remoteIp`
  - `nodeId` prefix (>= 6 chars)
## Events
### Who sees events
- System events are **per session** and shown to the agent at the next prompt.
- Stored in the Gateway in-memory queue (`enqueueSystemEvent`).
### Event text
- `Exec started (node=<id>, id=<runId>)`
- `Exec finished (node=<id>, id=<runId>, code=<code>)` + optional output tail
- `Exec denied (node=<id>, id=<runId>, <reason>)`
### Transport
Option A (recommended):
- The runner sends Bridge `event`s: `exec.started` / `exec.finished`.
- The Gateway's `handleBridgeEvent` maps these to `enqueueSystemEvent`.
Option B:
- The Gateway `exec` tool handles the lifecycle directly (sync only).
## Exec flows
### Sandbox host
- Existing `exec` behavior (Docker, or the host when unsandboxed).
- PTY supported only in non-sandbox mode.
### Gateway host
- The Gateway process executes on its own machine.
- Enforces the local `exec-approvals.json` (security/ask/allowlist).
### Node host
- The Gateway calls `node.invoke` with `system.run`.
- The runner enforces local approvals.
- The runner returns aggregated stdout/stderr.
- Optional Bridge events for start/finish/deny.
## Output caps
- Combined stdout+stderr is capped at **200k**; keep the **final 20k** as a tail for events.
- Truncate with a clear suffix (e.g. `"… (truncated)"`).
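A minimal sketch of the 200k cap + 20k event tail. The limits come from this doc; the function name is illustrative:

```ts
const MAX_OUTPUT = 200_000;
const TAIL_FOR_EVENTS = 20_000;

function capOutput(combined: string): { capped: string; tail: string } {
  const capped =
    combined.length <= MAX_OUTPUT
      ? combined
      : combined.slice(0, MAX_OUTPUT) + "… (truncated)";
  // Events carry only the tail of the raw output.
  const tail = combined.slice(-TAIL_FOR_EVENTS);
  return { capped, tail };
}

const { capped, tail } = capOutput("x".repeat(250_000));
console.log(capped.endsWith("… (truncated)"), tail.length); // true 20000
```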
## 斜杠命令
- `/exec host=<sandbox|gateway|node> security=<deny|allowlist|full> ask=<off|on-miss|always> node=<id>`
- 每智能体、每会话覆盖;除非通过配置保存,否则非持久。
- `/elevated on|off|ask|full` 仍然是 `host=gateway security=full` 的快捷方式(`full` 跳过批准)。
## 跨平台方案
- 运行器服务是可移植的执行目标。
- UI 是可选的;如果缺失,应用 `askFallback`
- Windows/Linux 支持相同的批准 JSON + socket 协议。
## 实现阶段
### 阶段 1配置 + exec 路由
-`exec.host``exec.security``exec.ask``exec.node` 添加配置 schema。
- 更新工具管道以遵守 `exec.host`
- 添加 `/exec` 斜杠命令并保留 `/elevated` 别名。
### 阶段 2批准存储 + Gateway 网关强制执行
- 实现 `exec-approvals.json` 读取器/写入器。
-`gateway` 主机强制执行允许列表 + 询问模式。
- 添加输出上限。
### 阶段 3节点运行器强制执行
- 更新节点运行器以强制执行允许列表 + 询问。
- 添加 Unix socket 提示桥接到 macOS 应用 UI。
- 连接 `askFallback`
### 阶段 4事件
- 为 exec 生命周期添加节点 → Gateway 网关 Bridge 事件。
- 映射到 `enqueueSystemEvent` 用于智能体提示。
### 阶段 5UI 完善
- Mac 应用:允许列表编辑器、每智能体切换器、询问策略 UI。
- 节点绑定控制(可选)。
## 测试计划
- 单元测试允许列表匹配glob + 不区分大小写)。
- 单元测试:策略解析优先级(工具参数 → 智能体覆盖 → 全局)。
- 集成测试:节点运行器拒绝/允许/询问流程。
- Bridge 事件测试:节点事件 → 系统事件路由。
## 开放风险
- UI 不可用:确保遵守 `askFallback`
- 长时间运行的命令:依赖超时 + 输出上限。
- 多节点歧义:除非有节点绑定或显式节点参数,否则报错。
## Related docs

- [Exec tool](/tools/exec)
- [Exec approvals](/tools/exec-approvals)
- [Nodes](/nodes)
- [Elevated mode](/tools/elevated)


@@ -1,92 +0,0 @@
---
description: Track outbound session mirroring refactor notes, decisions, tests, and open items.
title: Outbound Session Mirroring Refactor (Issue #1520)
x-i18n:
generated_at: "2026-02-03T07:53:51Z"
model: claude-opus-4-5
provider: pi
source_hash: b88a72f36f7b6d8a71fde9d014c0a87e9a8b8b0d449b67119cf3b6f414fa2b81
source_path: refactor/outbound-session-mirroring.md
workflow: 15
---
# Outbound session mirroring refactor (Issue #1520)

## Status

- In progress.
- Core + plugin channel routing updated to support outbound mirroring.
- Gateway send now derives the target session when sessionKey is omitted.

## Background

Outbound sends were mirrored into the *current* agent session (the tool's session key) rather than the target channel session. Inbound routing uses channel/peer session keys, so outbound replies landed in the wrong session, and first-contact targets usually had no session entry at all.
## Goals

- Mirror outbound messages to the target channel's session key.
- Create the session entry for outbound sends when it is missing.
- Keep thread/topic scoping aligned with inbound session keys.
- Cover core channels plus the bundled extensions.
## Implementation summary

- New outbound session routing helpers:
  - `src/infra/outbound/outbound-session.ts`
  - `resolveOutboundSessionRoute` builds the target sessionKey via `buildAgentSessionKey` (dmScope + identityLinks).
  - `ensureOutboundSessionEntry` writes a minimal `MsgContext` via `recordSessionMetaFromInbound`.
- `runMessageAction` (send) derives the target sessionKey and passes it to `executeSendAction` for mirroring.
- `message-tool` no longer mirrors directly; it only resolves the agentId from the current session key.
- Plugin send paths mirror via `appendAssistantMessageToSessionTranscript` using the derived sessionKey.
- Gateway send derives the target session key (default agent) when none is provided, and ensures the session entry.
## Thread/topic handling

- Slack: replyTo/threadId -> `resolveThreadSessionKeys` (suffix).
- Discord: threadId/replyTo -> `resolveThreadSessionKeys` with `useSuffix=false` to match inbound (the thread channel id already scopes the session).
- Telegram: topic IDs map to `chatId:topic:<id>` via `buildTelegramGroupPeerId`.
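A sketch of those per-channel rules. The Telegram mapping follows the `chatId:topic:<id>` format named above; the exact Slack suffix format here is an assumption for illustration:

```typescript
// Slack appends a thread suffix to the base session key; Discord reuses the
// base key unchanged because the thread channel id already scopes the session.
function threadSessionKey(base: string, channel: "slack" | "discord", threadId: string): string {
  return channel === "slack" ? `${base}:thread:${threadId}` : base;
}

// Telegram topics scope the peer id as `chatId:topic:<id>`.
function buildTelegramGroupPeerId(chatId: string, topicId?: number): string {
  return topicId != null ? `${chatId}:topic:${topicId}` : chatId;
}
```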
## Extensions covered

- Matrix, MS Teams, Mattermost, BlueBubbles, Nextcloud Talk, Zalo, Zalo Personal, Nostr, Tlon.
- Notes:
  - Mattermost targets now strip the `@` for DM session-key routing.
  - Zalo Personal uses the DM peer type for 1:1 targets (group only when a `group:` prefix is present).
  - BlueBubbles group targets strip the `chat_*` prefix to match inbound session keys.
  - Slack auto-thread mirroring matches channel ids case-insensitively.
  - Gateway send lowercases a provided session key before mirroring.
## Decisions

- **Gateway send session derivation**: if a `sessionKey` is provided, use it. If omitted, derive the sessionKey from the target + default agent and mirror there.
- **Session entry creation**: always use `recordSessionMetaFromInbound` with `Provider/From/To/ChatType/AccountId/Originating*` aligned to the inbound format.
- **Target normalization**: outbound routing uses the resolved target when available (after `resolveChannelTarget`).
- **Session-key casing**: normalize session keys to lowercase on write and during migration.
## Tests added/updated

- `src/infra/outbound/outbound-session.test.ts`
  - Slack thread session keys.
  - Telegram topic session keys.
  - dmScope identityLinks with Discord.
- `src/agents/tools/message-tool.test.ts`
  - agentId derived from the session key (no sessionKey passed through).
- `src/gateway/server-methods/send.test.ts`
  - Derives the session key when omitted and creates the session entry.
## Pending items / follow-ups

- The voice-call plugin uses custom `voice:<phone>` session keys. Outbound mapping is not standardized there; add an explicit mapping if message-tool should support voice-call sends.
- Confirm whether any external plugins use non-standard `From/To` formats beyond the built-in set.
## Files touched

- `src/infra/outbound/outbound-session.ts`
- `src/infra/outbound/outbound-send-service.ts`
- `src/infra/outbound/message-action-runner.ts`
- `src/agents/tools/message-tool.ts`
- `src/gateway/server-methods/send.ts`
- Tests:
  - `src/infra/outbound/outbound-session.test.ts`
  - `src/agents/tools/message-tool.test.ts`
  - `src/gateway/server-methods/send.test.ts`


@@ -1,221 +0,0 @@
---
read_when:
  - Defining or refactoring the plugin architecture
  - Migrating channel connectors to the plugin SDK/runtime
summary: "Plan: one unified plugin SDK + runtime for all messaging connectors"
title: Plugin SDK Refactor
x-i18n:
generated_at: "2026-02-01T21:36:45Z"
model: claude-opus-4-5
provider: pi
source_hash: d1964e2e47a19ee1d42ddaaa9cf1293c80bb0be463b049dc8468962f35bb6cb0
source_path: refactor/plugin-sdk.md
workflow: 15
---
# Plugin SDK + runtime refactor plan

Goal: every messaging connector is a plugin (built-in or external) built on one stable API.

Plugins import nothing from `src/**` directly. All dependencies arrive via the SDK or the runtime.

## Why now

- Connectors currently mix several patterns: direct core imports, dist-only bridges, and bespoke helpers.
- That makes upgrades fragile and blocks a clean external plugin interface.
## Target architecture (two layers)

### 1) Plugin SDK (compile-time, stable, publishable)

Scope: types, helpers, and config utilities. No runtime state, no side effects.

Contents (examples):

- Types: `ChannelPlugin`, adapters, `ChannelMeta`, `ChannelCapabilities`, `ChannelDirectoryEntry`.
- Config helpers: `buildChannelConfigSchema`, `setAccountEnabledInConfigSection`, `deleteAccountFromConfigSection`,
  `applyAccountNameToChannelSection`.
- Pairing helpers: `PAIRING_APPROVED_MESSAGE`, `formatPairingApproveHint`.
- Onboarding helpers: `promptChannelAccessConfig`, `addWildcardAllowFrom`, onboarding types.
- Tool-param helpers: `createActionGate`, `readStringParam`, `readNumberParam`, `readReactionParams`, `jsonResult`.
- Docs-link helper: `formatDocsLink`.

Delivery:

- Published as `openclaw/plugin-sdk` (or exported from core as `openclaw/plugin-sdk`).
- Semver with explicit stability guarantees.

### 2) Plugin runtime (execution layer, injected)

Scope: everything that touches core runtime behavior.

Accessed via `OpenClawPluginApi.runtime`, so plugins never import `src/**`.

Proposed surface (minimal but complete):
```ts
export type PluginRuntime = {
channel: {
text: {
chunkMarkdownText(text: string, limit: number): string[];
resolveTextChunkLimit(cfg: OpenClawConfig, channel: string, accountId?: string): number;
hasControlCommand(text: string, cfg: OpenClawConfig): boolean;
};
reply: {
dispatchReplyWithBufferedBlockDispatcher(params: {
ctx: unknown;
cfg: unknown;
dispatcherOptions: {
deliver: (payload: {
text?: string;
mediaUrls?: string[];
mediaUrl?: string;
}) => void | Promise<void>;
onError?: (err: unknown, info: { kind: string }) => void;
};
}): Promise<void>;
createReplyDispatcherWithTyping?: unknown; // adapter for Teams-style flows
};
routing: {
resolveAgentRoute(params: {
cfg: unknown;
channel: string;
accountId: string;
peer: { kind: RoutePeerKind; id: string };
}): { sessionKey: string; accountId: string };
};
pairing: {
buildPairingReply(params: { channel: string; idLine: string; code: string }): string;
readAllowFromStore(channel: string): Promise<string[]>;
upsertPairingRequest(params: {
channel: string;
id: string;
meta?: { name?: string };
}): Promise<{ code: string; created: boolean }>;
};
media: {
fetchRemoteMedia(params: { url: string }): Promise<{ buffer: Buffer; contentType?: string }>;
saveMediaBuffer(
buffer: Uint8Array,
contentType: string | undefined,
direction: "inbound" | "outbound",
maxBytes: number,
): Promise<{ path: string; contentType?: string }>;
};
mentions: {
buildMentionRegexes(cfg: OpenClawConfig, agentId?: string): RegExp[];
matchesMentionPatterns(text: string, regexes: RegExp[]): boolean;
};
groups: {
resolveGroupPolicy(
cfg: OpenClawConfig,
channel: string,
accountId: string,
groupId: string,
): {
allowlistEnabled: boolean;
allowed: boolean;
groupConfig?: unknown;
defaultConfig?: unknown;
};
resolveRequireMention(
cfg: OpenClawConfig,
channel: string,
accountId: string,
groupId: string,
override?: boolean,
): boolean;
};
debounce: {
createInboundDebouncer<T>(opts: {
debounceMs: number;
buildKey: (v: T) => string | null;
shouldDebounce: (v: T) => boolean;
onFlush: (entries: T[]) => Promise<void>;
onError?: (err: unknown) => void;
}): { push: (v: T) => void; flush: () => Promise<void> };
resolveInboundDebounceMs(cfg: OpenClawConfig, channel: string): number;
};
commands: {
resolveCommandAuthorizedFromAuthorizers(params: {
useAccessGroups: boolean;
authorizers: Array<{ configured: boolean; allowed: boolean }>;
}): boolean;
};
};
logging: {
shouldLogVerbose(): boolean;
getChildLogger(name: string): PluginLogger;
};
state: {
resolveStateDir(cfg: OpenClawConfig): string;
};
};
```
Notes:

- The runtime is the only way to reach core behavior.
- The SDK stays deliberately small and stable.
- Every runtime method maps onto an existing core implementation (no duplicated code).
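To make the injection model concrete, here is a hypothetical plugin-side gate built only on that runtime surface: it checks the group allowlist and, when a mention is required, the mention patterns. Only the slice of `PluginRuntime` the helper touches is typed here, and the helper name is illustrative:

```typescript
type RuntimeSlice = {
  channel: {
    mentions: {
      buildMentionRegexes(cfg: unknown, agentId?: string): RegExp[];
      matchesMentionPatterns(text: string, regexes: RegExp[]): boolean;
    };
    groups: {
      resolveGroupPolicy(
        cfg: unknown,
        channel: string,
        accountId: string,
        groupId: string,
      ): { allowlistEnabled: boolean; allowed: boolean };
      resolveRequireMention(
        cfg: unknown,
        channel: string,
        accountId: string,
        groupId: string,
        override?: boolean,
      ): boolean;
    };
  };
};

function shouldHandleGroupMessage(
  runtime: RuntimeSlice,
  cfg: unknown,
  msg: { channel: string; accountId: string; groupId: string; text: string },
): boolean {
  const { mentions, groups } = runtime.channel;
  // Group allowlist first: a gated, non-allowed group is dropped outright.
  const policy = groups.resolveGroupPolicy(cfg, msg.channel, msg.accountId, msg.groupId);
  if (policy.allowlistEnabled && !policy.allowed) return false;
  // If no mention is required, the message passes; otherwise the text must match.
  if (!groups.resolveRequireMention(cfg, msg.channel, msg.accountId, msg.groupId)) return true;
  return mentions.matchesMentionPatterns(msg.text, mentions.buildMentionRegexes(cfg));
}
```

Because the helper takes the runtime as a parameter, tests can pass a fake implementation without touching core.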
## Migration plan (phased, safe)

### Phase 0: foundations

- Introduce `openclaw/plugin-sdk`.
- Add `api.runtime` with the surface above to `OpenClawPluginApi`.
- Keep existing imports working during the transition (with deprecation warnings).

### Phase 1: bridge cleanup (low risk)

- Replace each extension's `core-bridge.ts` with `api.runtime`.
- Migrate BlueBubbles, Zalo, and Zalo Personal first (already close).
- Remove the duplicated bridge code.

### Phase 2: lightly coupled plugins

- Migrate Matrix to the SDK + runtime.
- Verify onboarding, directory, and group-mention logic.

### Phase 3: heavily coupled plugins

- Migrate Microsoft Teams (the heaviest user of runtime helpers).
- Ensure reply/typing semantics match current behavior.

### Phase 4: iMessage as a plugin

- Move iMessage into `extensions/imessage`.
- Replace direct core calls with `api.runtime`.
- Keep config keys, CLI behavior, and docs unchanged.

### Phase 5: enforcement

- Add a lint rule / CI check: no `src/**` imports from `extensions/**`.
- Add a plugin SDK/version compatibility check (runtime + SDK semver).
## Compatibility & versioning

- SDK: semver; published changes are documented.
- Runtime: versioned with core. Add `api.runtime.version`.
- Plugins declare the runtime range they need (e.g. `openclawRuntime: ">=2026.2.0"`).
## Test strategy

- Adapter-level unit tests (runtime functions verified against the real core implementations).
- Golden tests per plugin: no behavior drift (routing, pairing, allowlists, mention filtering).
- One end-to-end plugin example in CI (install + run + smoke test).
## Open questions

- Where do the SDK types live: a standalone package or a core export?
- Runtime type distribution: in the SDK (types only) or in core?
- How are docs links exposed for built-in vs external plugins?
- Do in-repo plugins get limited direct core imports during the transition?
## Success criteria

- Every channel connector is a plugin built on the SDK + runtime.
- `extensions/**` no longer imports from `src/**`.
- New connector templates depend only on the SDK + runtime.
- External plugins can be developed and updated without access to core sources.

Related docs: [Plugins](/tools/plugin), [Channels](/channels/index), [Configuration](/gateway/configuration).


@@ -1,100 +0,0 @@
---
read_when:
  - Designing or implementing config validation behavior
  - Working on config migrations or doctor workflows
  - Working on plugin config schemas or plugin load gating
summary: Strict config validation + migrations via doctor only
title: Strict Config Validation
x-i18n:
generated_at: "2026-02-03T10:08:51Z"
model: claude-opus-4-5
provider: pi
source_hash: 5bc7174a67d2234e763f21330d8fe3afebc23b2e5c728a04abcc648b453a91cc
source_path: refactor/strict-config.md
workflow: 15
---
# Strict config validation (migrations via doctor only)

## Goals

- **Reject unknown config keys everywhere** (root + nested).
- **Reject plugin configs without a schema**; do not load that plugin.
- **Remove legacy auto-migration at load time**; migrations run only via doctor.
- **Auto-run doctor (dry-run) at startup**; if the config is invalid, block non-diagnostic commands.

## Non-goals

- Load-time backwards compatibility (legacy keys are not auto-migrated).
- Silently dropping unrecognized keys.
## Strict validation rules

- The config must match the schema exactly at every level.
- Unknown keys are validation errors (no passthrough at the root or nested levels).
- `plugins.entries.<id>.config` must validate against the plugin's schema.
- If a plugin has no schema, **reject the plugin load** with a clear error.
- Unknown `channels.<id>` keys are errors unless a plugin manifest declares that channel id.
- Every plugin requires a plugin manifest (`openclaw.plugin.json`).
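A dependency-free sketch of the "unknown keys are errors, at every level" rule: walk the config against a tree of known keys and report each unknown key with its full path (the real implementation uses strict zod objects in `src/config/zod-schema.ts`; the `SchemaNode` shape here is illustrative):

```typescript
// `true` marks a leaf value; a nested SchemaNode describes an object's known keys.
type SchemaNode = { keys: Record<string, SchemaNode | true> };

function findUnknownKeys(value: unknown, schema: SchemaNode, path = ""): string[] {
  if (typeof value !== "object" || value === null) return [];
  const errors: string[] = [];
  for (const [key, child] of Object.entries(value as Record<string, unknown>)) {
    const spec = schema.keys[key];
    const childPath = path ? `${path}.${key}` : key;
    if (spec === undefined) {
      errors.push(childPath); // unknown key: a validation error, not passthrough
    } else if (spec !== true) {
      errors.push(...findUnknownKeys(child, spec, childPath)); // recurse into nested objects
    }
  }
  return errors;
}
```

Reporting the full path (`gateway.oops` rather than just `oops`) is what makes the doctor output actionable.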
## Plugin schema enforcement

- Each plugin ships a strict JSON Schema for its config (inline in the manifest).
- Plugin load flow:
  1. Parse the plugin manifest + schema (`openclaw.plugin.json`).
  2. Validate the config against the schema.
  3. On a missing schema or invalid config: block the plugin load and log the error.
- Error messages include:
  - the plugin id
  - the reason (missing schema / invalid config)
  - the path that failed validation
- Disabled plugins keep their config, but doctor + logs surface a warning.
## Doctor flow

- Doctor runs on every config load (dry-run by default).
- If the config is invalid:
  - Print a summary + actionable errors.
  - Point at: `openclaw doctor --fix`.
- `openclaw doctor --fix`:
  - Applies migrations.
  - Removes unknown keys.
  - Writes the updated config.
## Command gating (while the config is invalid)

Allowed commands (diagnostics only):

- `openclaw doctor`
- `openclaw logs`
- `openclaw health`
- `openclaw help`
- `openclaw status`
- `openclaw gateway status`

Everything else must hard-fail with: "Config invalid. Run `openclaw doctor --fix`."
## Error UX format

- One summary header.
- Grouped sections:
  - unknown keys (full paths)
  - legacy keys / migrations needed
  - plugin load failures (plugin id + reason + path)
## Implementation touchpoints

- `src/config/zod-schema.ts`: drop the root passthrough; strict objects everywhere.
- `src/config/zod-schema.providers.ts`: ensure strict channel schemas.
- `src/config/validation.ts`: fail on unknown keys; do not apply legacy migrations.
- `src/config/io.ts`: remove legacy auto-migration; always run the doctor dry-run.
- `src/config/legacy*.ts`: move usage to doctor only.
- `src/plugins/*`: add the schema registry + gating.
- CLI command gating in `src/cli`.
## Tests

- Unknown-key rejection (root + nested).
- Plugin without a schema → plugin load blocked with a clear error.
- Invalid config → Gateway startup blocked except for diagnostic commands.
- Doctor dry-run auto-runs; `doctor --fix` writes the corrected config.