diff --git a/docs/concepts/context.md b/docs/concepts/context.md
index 46412ef7de9..c4cc9990d8d 100644
--- a/docs/concepts/context.md
+++ b/docs/concepts/context.md
@@ -1,26 +1,26 @@
---
summary: "Context: what the model sees, how it is built, and how to inspect it"
read_when:
- - You want to understand what “context” means in OpenClaw
- - You are debugging why the model “knows” something (or forgot it)
+ - You want to understand what "context" means in OpenClaw
+ - You are debugging why the model "knows" something (or forgot it)
- You want to reduce context overhead (/context, /status, /compact)
title: "Context"
---
-“Context” is **everything OpenClaw sends to the model for a run**. It is bounded by the model’s **context window** (token limit).
+"Context" is **everything OpenClaw sends to the model for a run**. It is bounded by the model's **context window** (token limit).
Beginner mental model:
- **System prompt** (OpenClaw-built): rules, tools, skills list, time/runtime, and injected workspace files.
-- **Conversation history**: your messages + the assistant’s messages for this session.
+- **Conversation history**: your messages + the assistant's messages for this session.
- **Tool calls/results + attachments**: command output, file reads, images/audio, etc.
-Context is _not the same thing_ as “memory”: memory can be stored on disk and reloaded later; context is what’s inside the model’s current window.
+Context is _not the same thing_ as "memory": memory can be stored on disk and reloaded later; context is what's inside the model's current window.
## Quick start (inspect context)
-- `/status` → quick “how full is my window?” view + session settings.
-- `/context list` → what’s injected + rough sizes (per file + totals).
+- `/status` → quick "how full is my window?" view + session settings.
+- `/context list` → what's injected + rough sizes (per file + totals).
- `/context detail` → deeper breakdown: per-file, per-tool schema sizes, per-skill entry sizes, and system prompt size.
- `/usage tokens` → append per-reply usage footer to normal replies.
- `/compact` → summarize older history into a compact entry to free window space.
@@ -29,7 +29,7 @@ See also: [Slash commands](/tools/slash-commands), [Token use & costs](/referenc
## Example output
-Values vary by model, provider, tool policy, and what’s in your workspace.
+Values vary by model, provider, tool policy, and what's in your workspace.
### `/context list`
@@ -83,7 +83,7 @@ Everything the model receives counts, including:
- Tool calls + tool results.
- Attachments/transcripts (images/audio/files).
- Compaction summaries and pruning artifacts.
-- Provider “wrappers” or hidden headers (not visible, still counted).
+- Provider "wrappers" or hidden headers (not visible, still counted).
## How OpenClaw builds the system prompt
@@ -118,14 +118,14 @@ When truncation occurs, the runtime can inject an in-prompt warning block under
The system prompt includes a compact **skills list** (name + description + location). This list has real overhead.
-Skill instructions are _not_ included by default. The model is expected to `read` the skill’s `SKILL.md` **only when needed**.
+Skill instructions are _not_ included by default. The model is expected to `read` the skill's `SKILL.md` **only when needed**.
## Tools: there are two costs
Tools affect context in two ways:
-1. **Tool list text** in the system prompt (what you see as “Tooling”).
-2. **Tool schemas** (JSON). These are sent to the model so it can call tools. They count toward context even though you don’t see them as plain text.
+1. **Tool list text** in the system prompt (what you see as "Tooling").
+2. **Tool schemas** (JSON). These are sent to the model so it can call tools. They count toward context even though you don't see them as plain text.
`/context detail` breaks down the biggest tool schemas so you can see what dominates.
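+
+For intuition, a single tool schema is a JSON function description along these lines (an illustrative sketch using the common function-calling shape, not OpenClaw's exact schema; the tool name and fields here are hypothetical):
+
+```json
+{
+  "name": "read",
+  "description": "Read a file from the workspace.",
+  "parameters": {
+    "type": "object",
+    "properties": {
+      "path": { "type": "string", "description": "File path to read" }
+    },
+    "required": ["path"]
+  }
+}
+```
+
+Every enabled tool ships a block like this on every run, which is why trimming the enabled tool set shrinks context even when the "Tooling" text looks short.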
@@ -137,7 +137,7 @@ Slash commands are handled by the Gateway. There are a few different behaviors:
- **Directives**: `/think`, `/verbose`, `/trace`, `/reasoning`, `/elevated`, `/model`, `/queue` are stripped before the model sees the message.
- Directive-only messages persist session settings.
- Inline directives in a normal message act as per-message hints.
-- **Inline shortcuts** (allowlisted senders only): certain `/...` tokens inside a normal message can run immediately (example: “hey /status”), and are stripped before the model sees the remaining text.
+- **Inline shortcuts** (allowlisted senders only): certain `/...` tokens inside a normal message can run immediately (example: "hey /status"), and are stripped before the model sees the remaining text.
Details: [Slash commands](/tools/slash-commands).
@@ -147,7 +147,7 @@ What persists across messages depends on the mechanism:
- **Normal history** persists in the session transcript until compacted/pruned by policy.
- **Compaction** persists a summary into the transcript and keeps recent messages intact.
-- **Pruning** drops old tool results from the _in-memory_ prompt to free context-window space, but does not rewrite the session transcript — the full history is still inspectable on disk.
+- **Pruning** drops old tool results from the _in-memory_ prompt to free context-window space, but does not rewrite the session transcript - the full history is still inspectable on disk.
Docs: [Session](/concepts/session), [Compaction](/concepts/compaction), [Session pruning](/concepts/session-pruning).
@@ -165,13 +165,23 @@ pluggable interface, lifecycle hooks, and configuration.
`/context` prefers the latest **run-built** system prompt report when available:
- `System prompt (run)` = captured from the last embedded (tool-capable) run and persisted in the session store.
-- `System prompt (estimate)` = computed on the fly when no run report exists (or when running via a CLI backend that doesn’t generate the report).
+- `System prompt (estimate)` = computed on the fly when no run report exists (or when running via a CLI backend that doesn't generate the report).
Either way, it reports sizes and top contributors; it does **not** dump the full system prompt or tool schemas.
## Related
-- [Context Engine](/concepts/context-engine) — custom context injection via plugins
-- [Compaction](/concepts/compaction) — summarizing long conversations
-- [System Prompt](/concepts/system-prompt) — how the system prompt is built
-- [Agent Loop](/concepts/agent-loop) — the full agent execution cycle
+<CardGroup cols={2}>
+  <Card title="Context Engine" href="/concepts/context-engine">
+    Custom context injection via plugins.
+  </Card>
+  <Card title="Compaction" href="/concepts/compaction">
+    Summarizing long conversations to keep them inside the model window.
+  </Card>
+  <Card title="System Prompt" href="/concepts/system-prompt">
+    How the system prompt is built and what it injects each turn.
+  </Card>
+  <Card title="Agent Loop" href="/concepts/agent-loop">
+    The full agent execution cycle from inbound message to final reply.
+  </Card>
+</CardGroup>
diff --git a/docs/concepts/soul.md b/docs/concepts/soul.md
index 059caaecdcf..089098a168d 100644
--- a/docs/concepts/soul.md
+++ b/docs/concepts/soul.md
@@ -101,8 +101,16 @@ surfaces, make sure the tone still fits the room.
Sharp is good. Annoying is not.
-## Related docs
+## Related
-- [Agent workspace](/concepts/agent-workspace)
-- [System prompt](/concepts/system-prompt)
-- [SOUL.md template](/reference/templates/SOUL)
+<CardGroup cols={2}>
+  <Card title="Agent workspace" href="/concepts/agent-workspace">
+    Workspace files OpenClaw injects into the system prompt.
+  </Card>
+  <Card title="System prompt" href="/concepts/system-prompt">
+    How `SOUL.md` is composed into the per-turn system prompt.
+  </Card>
+  <Card title="SOUL.md template" href="/reference/templates/SOUL">
+    Starter template for a personality file.
+  </Card>
+</CardGroup>
diff --git a/docs/tools/apply-patch.md b/docs/tools/apply-patch.md
index 36068c6e2ff..382c90ba72d 100644
--- a/docs/tools/apply-patch.md
+++ b/docs/tools/apply-patch.md
@@ -51,6 +51,14 @@ The tool accepts a single `input` string that wraps one or more file operations:
## Related
-- [Diffs](/tools/diffs)
-- [Exec tool](/tools/exec)
-- [Code execution](/tools/code-execution)
+<CardGroup cols={2}>
+  <Card title="Diffs" href="/tools/diffs">
+    Read-only diff viewer for change presentation.
+  </Card>
+  <Card title="Exec tool" href="/tools/exec">
+    Shell command execution from the agent.
+  </Card>
+  <Card title="Code execution" href="/tools/code-execution">
+    Sandboxed remote Python analysis with xAI.
+  </Card>
+</CardGroup>
diff --git a/docs/tools/perplexity-search.md b/docs/tools/perplexity-search.md
index 1f7ec5f3f9b..9920d2267d3 100644
--- a/docs/tools/perplexity-search.md
+++ b/docs/tools/perplexity-search.md
@@ -6,8 +6,6 @@ read_when:
title: "Perplexity search"
---
-# Perplexity Search API
-
OpenClaw supports Perplexity Search API as a `web_search` provider.
It returns structured results with `title`, `url`, and `snippet` fields.
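+
+A hypothetical response shape, based on the fields listed above (values are placeholders):
+
+```json
+[
+  {
+    "title": "Example result",
+    "url": "https://example.com/page",
+    "snippet": "Short excerpt of the matching page."
+  }
+]
+```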
@@ -104,7 +102,7 @@ Search query.
-Number of results to return (1–10).
+Number of results to return (1-10).
@@ -116,7 +114,7 @@ ISO 639-1 language code (e.g. `en`, `de`, `fr`).
-Time filter — `day` is 24 hours.
+Time filter - `day` is 24 hours.
@@ -206,7 +204,17 @@ await web_search({
## Related
-- [Web Search overview](/tools/web) -- all providers and auto-detection
-- [Perplexity Search API docs](https://docs.perplexity.ai/docs/search/quickstart) -- official Perplexity documentation
-- [Brave Search](/tools/brave-search) -- structured results with country/language filters
-- [Exa Search](/tools/exa-search) -- neural search with content extraction
+<CardGroup cols={2}>
+  <Card title="Web Search overview" href="/tools/web">
+    All providers and auto-detection rules.
+  </Card>
+  <Card title="Brave Search" href="/tools/brave-search">
+    Structured results with country and language filters.
+  </Card>
+  <Card title="Exa Search" href="/tools/exa-search">
+    Neural search with content extraction.
+  </Card>
+  <Card title="Perplexity Search API docs" href="https://docs.perplexity.ai/docs/search/quickstart">
+    Official Perplexity Search API quickstart and reference.
+  </Card>
+</CardGroup>