docs: remove duplicate H1 where frontmatter title already sets it

Vincent Koc
2026-04-23 13:11:14 -07:00
parent 219a11d2bd
commit 4a2cd533ac
251 changed files with 1 addition and 503 deletions
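The duplicates removed here all follow one pattern: the first H1 in the body repeats the frontmatter `title`. A minimal sketch of how such files can be found (not part of this commit; the helper name, the docs layout, and the quoted-`title` frontmatter shape are assumptions):

```python
import re
from pathlib import Path

def find_duplicate_h1(root: str) -> list[str]:
    """List Markdown files whose first body H1 repeats the frontmatter title."""
    hits = []
    for path in sorted(Path(root).rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        # Frontmatter title, with or without surrounding quotes.
        title = re.search(r'^title:\s*"?([^"\n]+?)"?\s*$', text, re.M)
        # First ATX H1 in the file.
        h1 = re.search(r"^# (.+)$", text, re.M)
        # Case-insensitive compare: titles like "Agent Workspace" also
        # duplicate headings written as "# Agent workspace".
        if title and h1 and title.group(1).strip().casefold() == h1.group(1).strip().casefold():
            hits.append(str(path))
    return hits
```

A follow-up pass would then delete the matched heading plus its trailing blank line, which accounts for the roughly two deletions per file in this commit.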

View File

@@ -7,8 +7,6 @@ read_when:
   - You want to tune active memory behavior without enabling it everywhere
 ---
-# Active Memory
-
 Active memory is an optional plugin-owned blocking memory sub-agent that runs
 before the main reply for eligible conversational sessions.

View File

@@ -6,8 +6,6 @@ read_when:
 title: "Agent Workspace"
 ---
-# Agent workspace
-
 The workspace is the agent's home. It is the only working directory used for
 file tools and for workspace context. Keep it private and treat it as memory.

View File

@@ -5,8 +5,6 @@ read_when:
 title: "Agent Runtime"
 ---
-# Agent Runtime
-
 OpenClaw runs a single embedded agent runtime.
 ## Workspace (required)

View File

@@ -5,8 +5,6 @@ read_when:
 title: "Gateway Architecture"
 ---
-# Gateway architecture
-
 ## Overview
 - A single long-lived **Gateway** owns all messaging surfaces (WhatsApp via

View File

@@ -6,8 +6,6 @@ read_when:
 title: "Compaction"
 ---
-# Compaction
-
 Every model has a context window -- the maximum number of tokens it can process.
 When a conversation approaches that limit, OpenClaw **compacts** older messages
 into a summary so the chat can continue.

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Context Engine"
 ---
-# Context Engine
-
 A **context engine** controls how OpenClaw builds model context for each run.
 It decides which messages to include, how to summarize older history, and how
 to manage context across subagent boundaries.

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Context"
 ---
-# Context
-
 “Context” is **everything OpenClaw sends to the model for a run**. It is bounded by the model's **context window** (token limit).
 Beginner mental model:

View File

@@ -5,8 +5,6 @@ read_when: "You want an agent with its own identity that acts on behalf of human
 status: active
 ---
-# Delegate Architecture
-
 Goal: run OpenClaw as a **named delegate** — an agent with its own identity that acts "on behalf of" people in an organization. The agent never impersonates a human. It sends, reads, and schedules under its own account with explicit delegation permissions.
 This extends [Multi-Agent Routing](/concepts/multi-agent) from personal use into organizational deployments.

View File

@@ -7,8 +7,6 @@ read_when:
   - You want to tune consolidation without polluting MEMORY.md
 ---
-# Dreaming
-
 Dreaming is the background memory consolidation system in `memory-core`.
 It helps OpenClaw move strong short-term signals into durable memory while
 keeping the process explainable and reviewable.

View File

@@ -7,8 +7,6 @@ read_when:
   - You want one place to find the currently documented experimental flags
 ---
-# Experimental features
-
 Experimental features in OpenClaw are **opt-in preview surfaces**. They are
 behind explicit flags because they still need real-world mileage before they
 deserve a stable default or a long-lived public contract.

View File

@@ -5,8 +5,6 @@ read_when:
 title: "Features"
 ---
-# Features
-
 ## Highlights
 <Columns>

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Markdown Formatting"
 ---
-# Markdown formatting
-
 OpenClaw formats outbound Markdown by converting it into a shared intermediate
 representation (IR) before rendering channel-specific output. The IR keeps the
 source text intact while carrying style/link spans so chunking and rendering can

View File

@@ -6,8 +6,6 @@ read_when:
   - You want to configure embedding providers or hybrid search
 ---
-# Builtin Memory Engine
-
 The builtin engine is the default memory backend. It stores your memory index in
 a per-agent SQLite database and needs no extra dependencies to get started.

View File

@@ -6,8 +6,6 @@ read_when:
   - You want AI-powered recall and user modeling
 ---
-# Honcho Memory
-
 [Honcho](https://honcho.dev) adds AI-native memory to OpenClaw. It persists
 conversations to a dedicated service and builds user and agent models over time,
 giving your agent cross-session context that goes beyond workspace Markdown

View File

@@ -6,8 +6,6 @@ read_when:
   - You want advanced memory features like reranking or extra indexed paths
 ---
-# QMD Memory Engine
-
 [QMD](https://github.com/tobi/qmd) is a local-first search sidecar that runs
 alongside OpenClaw. It combines BM25, vector search, and reranking in a single
 binary, and can index content beyond your workspace memory files.

View File

@@ -7,8 +7,6 @@ read_when:
   - You want to tune search quality
 ---
-# Memory Search
-
 `memory_search` finds relevant notes from your memory files, even when the
 wording differs from the original text. It works by indexing memory into small
 chunks and searching them using embeddings, keywords, or both.

View File

@@ -6,8 +6,6 @@ read_when:
   - You want to know what memory files to write
 ---
-# Memory Overview
-
 OpenClaw remembers things by writing **plain Markdown files** in your agent's
 workspace. The model only "remembers" what gets saved to disk -- there is no
 hidden state.

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Messages"
 ---
-# Messages
-
 This page ties together how OpenClaw handles inbound messages, sessions, queueing,
 streaming, and reasoning visibility.

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Model Failover"
 ---
-# Model failover
-
 OpenClaw handles failures in two stages:
 1. **Auth profile rotation** within the current provider.

View File

@@ -6,8 +6,6 @@ read_when:
 title: "Model providers"
 ---
-# Model providers
-
 This page covers **LLM/model providers** (not chat channels like WhatsApp/Telegram).
 For model selection rules, see [/concepts/models](/concepts/models).

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Models CLI"
 ---
-# Models CLI
-
 See [/concepts/model-failover](/concepts/model-failover) for auth profile
 rotation, cooldowns, and how that interacts with fallbacks.
 Quick provider overview + examples: [/concepts/model-providers](/concepts/model-providers).

View File

@@ -5,8 +5,6 @@ read_when: "You want multiple isolated agents (workspaces + auth) in one gateway
 status: active
 ---
-# Multi-Agent Routing
-
 Goal: multiple _isolated_ agents (separate workspace + `agentDir` + sessions), plus multiple channel accounts (e.g. two WhatsApps) in one running Gateway. Inbound is routed to an agent via bindings.
 ## What is "one agent"?

View File

@@ -8,8 +8,6 @@ read_when:
 title: "OAuth"
 ---
-# OAuth
-
 OpenClaw supports “subscription auth” via OAuth for providers that offer it
 (notably **OpenAI Codex (ChatGPT OAuth)**). For Anthropic, the practical split
 is now:

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Presence"
 ---
-# Presence
-
 OpenClaw “presence” is a lightweight, best-effort view of:
 - the **Gateway** itself, and

View File

@@ -7,8 +7,6 @@ read_when:
 title: "QA E2E Automation"
 ---
-# QA E2E Automation
-
 The private QA stack is meant to exercise OpenClaw in a more realistic,
 channel-shaped way than a single unit test can.

View File

@@ -6,8 +6,6 @@ read_when:
 title: "Retry Policy"
 ---
-# Retry policy
-
 ## Goals
 - Retry per HTTP request, not per multi-step flow.

View File

@@ -6,8 +6,6 @@ read_when:
   - You want to understand Anthropic prompt cache optimization
 ---
-# Session Pruning
-
 Session pruning trims **old tool results** from the context before each LLM
 call. It reduces context bloat from accumulated tool outputs (exec results, file
 reads, search results) without rewriting normal conversation text.

View File

@@ -7,8 +7,6 @@ read_when:
 title: "Session Tools"
 ---
-# Session Tools
-
 OpenClaw gives agents tools to work across sessions, inspect status, and
 orchestrate sub-agents.

View File

@@ -6,8 +6,6 @@ read_when:
 title: "Session Management"
 ---
-# Session Management
-
 OpenClaw organizes conversations into **sessions**. Each message is routed to a
 session based on where it came from -- DMs, group chats, cron jobs, etc.

View File

@@ -7,8 +7,6 @@ read_when:
 title: "SOUL.md Personality Guide"
 ---
-# SOUL.md Personality Guide
-
 `SOUL.md` is where your agent's voice lives.
 OpenClaw injects it on normal sessions, so it has real weight. If your agent

View File

@@ -6,8 +6,6 @@ read_when:
 title: "System Prompt"
 ---
-# System Prompt
-
 OpenClaw builds a custom system prompt for every agent run. The prompt is **OpenClaw-owned** and does not use the pi-coding-agent default prompt.
 The prompt is assembled by OpenClaw and injected into each agent run.

View File

@@ -6,8 +6,6 @@ read_when:
 title: "Timezones"
 ---
-# Timezones
-
 OpenClaw standardizes timestamps so the model sees a **single reference time**.
 ## Message envelopes (local by default)

View File

@@ -5,8 +5,6 @@ read_when:
 title: "Typing Indicators"
 ---
-# Typing indicators
-
 Typing indicators are sent to the chat channel while a run is active. Use
 `agents.defaults.typingMode` to control **when** typing starts and `typingIntervalSeconds`
 to control **how often** it refreshes.

View File

@@ -6,8 +6,6 @@ read_when:
 title: "Usage Tracking"
 ---
-# Usage tracking
-
 ## What it is
 - Pulls provider usage/quota directly from their usage endpoints.