docs: typography hygiene + 2 in-body H1 removals across 6 pages

Replaced 60 typography characters (curly quotes, apostrophes, em/en
dashes, non-breaking hyphens) with ASCII equivalents per
docs/CLAUDE.md heading and content hygiene rules.

- docs/start/openclaw.md: 10 chars; removed the duplicate '# Building
  a personal assistant with OpenClaw' H1 (Mintlify renders title from
  frontmatter).
- docs/platforms/mac/remote.md: 10 chars; removed the duplicate
  '# Remote OpenClaw (macOS ⇄ remote host)' H1 (the U+21C4 codepoint
  and parens both produced brittle anchors).
- docs/tools/thinking.md: 10 chars
- docs/reference/templates/BOOTSTRAP.md: 10 chars (kept the in-body
  '# BOOTSTRAP.md - Hello, World' heading because the page is a
  template whose content is meant to be copied verbatim into a
  workspace BOOTSTRAP.md).
- docs/plugins/sdk-provider-plugins.md: 10 chars
- docs/platforms/macos.md: 10 chars
This commit is contained in:
Vincent Koc
2026-05-05 20:55:18 -07:00
parent 3afc902f3d
commit 68a82cb2e2
6 changed files with 46 additions and 50 deletions

View File: docs/platforms/mac/remote.md

@@ -5,9 +5,7 @@ read_when:
title: "Remote control"
---
-# Remote OpenClaw (macOS ⇄ remote host)
-This flow lets the macOS app act as a full remote control for an OpenClaw gateway running on another host (desktop/server). Its the apps **Remote over SSH** (remote run) feature. All features—health checks, Voice Wake forwarding, and Web Chat—reuse the same remote SSH configuration from _Settings → General_.
+This flow lets the macOS app act as a full remote control for an OpenClaw gateway running on another host (desktop/server). It's the app's **Remote over SSH** (remote run) feature. All features-health checks, Voice Wake forwarding, and Web Chat-reuse the same remote SSH configuration from _Settings → General_.
## Modes
@@ -19,7 +17,7 @@ This flow lets the macOS app act as a full remote control for an OpenClaw gatewa
Remote mode supports two transports:
-- **SSH tunnel** (default): Uses `ssh -N -L ...` to forward the gateway port to localhost. The gateway will see the nodes IP as `127.0.0.1` because the tunnel is loopback.
+- **SSH tunnel** (default): Uses `ssh -N -L ...` to forward the gateway port to localhost. The gateway will see the node's IP as `127.0.0.1` because the tunnel is loopback.
- **Direct (ws/wss)**: Connects straight to the gateway URL. The gateway sees the real client IP.
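The tunnel transport can be reproduced by hand when debugging connectivity. This is a dry-run sketch that only prints the invocation; the port and host below are assumptions for illustration, not OpenClaw defaults:

```shell
# Print the ssh invocation that would forward a hypothetical gateway port from
# the remote host to this machine (-N: no remote command; -L: local forward).
# Replace the port and user@remote-host with your own values.
GATEWAY_PORT=18789
echo "ssh -N -L ${GATEWAY_PORT}:127.0.0.1:${GATEWAY_PORT} user@remote-host"
```

Running the printed command (without the `echo`) keeps the tunnel open in the foreground until interrupted.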
In SSH tunnel mode, discovered LAN/tailnet hostnames are saved as
@@ -51,7 +49,7 @@ node.
- **Identity file** (advanced): path to your key.
- **Project root** (advanced): remote checkout path used for commands.
- **CLI path** (advanced): optional path to a runnable `openclaw` entrypoint/binary (auto-filled when advertised).
-3. Hit **Test remote**. Success indicates the remote `openclaw status --json` runs correctly. Failures usually mean PATH/CLI issues; exit 127 means the CLI isnt found remotely.
+3. Hit **Test remote**. Success indicates the remote `openclaw status --json` runs correctly. Failures usually mean PATH/CLI issues; exit 127 means the CLI isn't found remotely.
4. Health checks and Web Chat will now run through this SSH tunnel automatically.
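The exit-127 case is easy to reproduce locally. This sketch runs a deliberately missing command (the name is made up) the way a non-interactive SSH probe would:

```shell
# A non-existent command exits with status 127 in POSIX shells, which is
# exactly what "Test remote" reports when openclaw is missing from PATH.
sh -c 'openclaw-not-on-path-example status --json' 2>/dev/null
echo "probe exit code: $?"
```

Any other non-zero code points at a real CLI failure rather than a lookup failure.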
## Web Chat
@@ -63,7 +61,7 @@ node.
## Permissions
- The remote host needs the same TCC approvals as local (Automation, Accessibility, Screen Recording, Microphone, Speech Recognition, Notifications). Run onboarding on that machine to grant them once.
-- Nodes advertise their permission state via `node.list` / `node.describe` so agents know whats available.
+- Nodes advertise their permission state via `node.list` / `node.describe` so agents know what's available.
## Security notes
@@ -79,7 +77,7 @@ node.
## Troubleshooting
-- **exit 127 / not found**: `openclaw` isnt on PATH for non-login shells. Add it to `/etc/paths`, your shell rc, or symlink into `/usr/local/bin`/`/opt/homebrew/bin`.
+- **exit 127 / not found**: `openclaw` isn't on PATH for non-login shells. Add it to `/etc/paths`, your shell rc, or symlink into `/usr/local/bin`/`/opt/homebrew/bin`.
- **Health probe failed**: check SSH reachability, PATH, and that Baileys is logged in (`openclaw status --json`).
- **Web Chat stuck**: confirm the gateway is running on the remote host and the forwarded port matches the gateway WS port; the UI requires a healthy WS connection.
- **Node IP shows 127.0.0.1**: expected with the SSH tunnel. Switch **Transport** to **Direct (ws/wss)** if you want the gateway to see the real client IP.
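When diagnosing the 127 case, it helps to see what a non-interactive shell actually gets, since SSH remote commands use that PATH rather than your interactive rc files. A quick check:

```shell
# Print the PATH a non-interactive shell sees, then check whether the CLI
# resolves there. "openclaw" may legitimately be absent on a machine that
# has not installed it.
sh -c 'echo "non-login PATH: $PATH"'
sh -c 'command -v openclaw >/dev/null 2>&1 && echo "openclaw found" || echo "openclaw missing"'
```

If the second line says missing while your interactive shell finds the CLI, the fix is one of the PATH additions listed above.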
@@ -94,7 +92,7 @@ Pick sounds per notification from scripts with `openclaw` and `node.invoke`, e.g
openclaw nodes notify --node <id> --title "Ping" --body "Remote gateway ready" --sound Glass
```
-There is no global default sound toggle in the app anymore; callers choose a sound (or none) per request.
+There is no global "default sound" toggle in the app anymore; callers choose a sound (or none) per request.
## Related

View File: docs/platforms/macos.md

@@ -6,7 +6,7 @@ read_when:
title: "macOS app"
---
-The macOS app is the **menubar companion** for OpenClaw. It owns permissions,
+The macOS app is the **menu-bar companion** for OpenClaw. It owns permissions,
manages/attaches to the Gateway locally (launchd or manual), and exposes macOS
capabilities to the agent as a node.
@@ -16,7 +16,7 @@ capabilities to the agent as a node.
- Owns TCC prompts (Notifications, Accessibility, Screen Recording, Microphone,
Speech Recognition, Automation/AppleScript).
- Runs or connects to the Gateway (local or remote).
-- Exposes macOSonly tools (Canvas, Camera, Screen Recording, `system.run`).
+- Exposes macOS-only tools (Canvas, Camera, Screen Recording, `system.run`).
- Starts the local node host service in **remote** mode (launchd), and stops it in **local** mode.
- Optionally hosts **PeekabooBridge** for UI automation.
- Installs the global CLI (`openclaw`) on request via npm, pnpm, or bun (the app prefers npm, then pnpm, then bun; Node remains the recommended Gateway runtime).
@@ -34,7 +34,7 @@ capabilities to the agent as a node.
## Launchd control
-The app manages a peruser LaunchAgent labeled `ai.openclaw.gateway`
+The app manages a per-user LaunchAgent labeled `ai.openclaw.gateway`
(or `ai.openclaw.<profile>` when using `--profile`/`OPENCLAW_PROFILE`; legacy `com.openclaw.*` still unloads).
```bash
@@ -44,7 +44,7 @@ launchctl bootout gui/$UID/ai.openclaw.gateway
Replace the label with `ai.openclaw.<profile>` when running a named profile.
-If the LaunchAgent isnt installed, enable it from the app or run
+If the LaunchAgent isn't installed, enable it from the app or run
`openclaw gateway install`.
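A quick way to check whether that LaunchAgent is currently loaded (macOS-only; the label comes from the docs above, and profile setups substitute `ai.openclaw.<profile>`):

```shell
# Query launchd for the per-user agent; "launchctl print" exits non-zero
# when the label is not loaded in the gui domain.
LABEL="ai.openclaw.gateway"
if launchctl print "gui/$UID/$LABEL" >/dev/null 2>&1; then
  echo "$LABEL: loaded"
else
  echo "$LABEL: not loaded"
fi
```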
## Node capabilities (mac)
@@ -56,7 +56,7 @@ The macOS app presents itself as a node. Common commands:
- Screen: `screen.snapshot`, `screen.record`
- System: `system.run`, `system.notify`
-The node reports a `permissions` map so agents can decide whats allowed.
+The node reports a `permissions` map so agents can decide what's allowed.
Node service + app IPC:
@@ -104,8 +104,8 @@ Notes:
- `allowlist` entries are glob patterns for resolved binary paths, or bare command names for PATH-invoked commands.
- Raw shell command text that contains shell control or expansion syntax (`&&`, `||`, `;`, `|`, `` ` ``, `$`, `<`, `>`, `(`, `)`) is treated as an allowlist miss and requires explicit approval (or allowlisting the shell binary).
-- Choosing Always Allow in the prompt adds that command to the allowlist.
-- `system.run` environment overrides are filtered (drops `PATH`, `DYLD_*`, `LD_*`, `NODE_OPTIONS`, `PYTHON*`, `PERL*`, `RUBYOPT`, `SHELLOPTS`, `PS4`) and then merged with the apps environment.
+- Choosing "Always Allow" in the prompt adds that command to the allowlist.
+- `system.run` environment overrides are filtered (drops `PATH`, `DYLD_*`, `LD_*`, `NODE_OPTIONS`, `PYTHON*`, `PERL*`, `RUBYOPT`, `SHELLOPTS`, `PS4`) and then merged with the app's environment.
- For shell wrappers (`bash|sh|zsh ... -c/-lc`), request-scoped environment overrides are reduced to a small explicit allowlist (`TERM`, `LANG`, `LC_*`, `COLORTERM`, `NO_COLOR`, `FORCE_COLOR`).
- For allow-always decisions in allowlist mode, known dispatch wrappers (`env`, `nice`, `nohup`, `stdbuf`, `timeout`) persist inner executable paths instead of wrapper paths. If unwrapping is not safe, no allowlist entry is persisted automatically.
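The wrapper-env reduction described above can be sketched as a simple allowlist filter. This is illustrative only, not OpenClaw's implementation; only the variable names come from the docs:

```shell
# Keep only allowlisted variables (TERM, LANG, LC_*, COLORTERM, NO_COLOR,
# FORCE_COLOR) from a set of KEY=VALUE overrides; everything else is dropped.
filter_overrides() {
  for kv in "$@"; do
    case "${kv%%=*}" in
      TERM|LANG|LC_*|COLORTERM|NO_COLOR|FORCE_COLOR) echo "$kv" ;;
    esac
  done
}
filter_overrides TERM=xterm-256color EDITOR=vim LC_ALL=C PATH=/tmp NO_COLOR=1
```

Here `EDITOR` and `PATH` are silently discarded while the allowlisted keys pass through.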
@@ -189,7 +189,7 @@ Connect options:
Discovery options:
-- `--include-local`: include gateways that would be filtered as local
+- `--include-local`: include gateways that would be filtered as "local"
- `--timeout <ms>`: overall discovery window (default: `2000`)
- `--json`: structured output for diffing

View File: docs/plugins/sdk-provider-plugins.md

@@ -287,7 +287,7 @@ API key auth, and dynamic model resolution.
```
If resolving requires a network call, use `prepareDynamicModel` for async
-warm-up `resolveDynamicModel` runs again after it completes.
+warm-up - `resolveDynamicModel` runs again after it completes.
</Step>
@@ -341,9 +341,9 @@ API key auth, and dynamic model resolution.
<Accordion title="SDK seams powering the family builders">
Each family builder is composed from lower-level public helpers exported from the same package, which you can reach for when a provider needs to go off the common pattern:
-- `openclaw/plugin-sdk/provider-model-shared` `ProviderReplayFamily`, `buildProviderReplayFamilyHooks(...)`, and the raw replay builders (`buildOpenAICompatibleReplayPolicy`, `buildAnthropicReplayPolicyForModel`, `buildGoogleGeminiReplayPolicy`, `buildHybridAnthropicOrOpenAIReplayPolicy`). Also exports Gemini replay helpers (`sanitizeGoogleGeminiReplayHistory`, `resolveTaggedReasoningOutputMode`) and endpoint/model helpers (`resolveProviderEndpoint`, `normalizeProviderId`, `normalizeGooglePreviewModelId`, `normalizeNativeXaiModelId`).
-- `openclaw/plugin-sdk/provider-stream` `ProviderStreamFamily`, `buildProviderStreamFamilyHooks(...)`, `composeProviderStreamWrappers(...)`, plus the shared OpenAI/Codex wrappers (`createOpenAIAttributionHeadersWrapper`, `createOpenAIFastModeWrapper`, `createOpenAIServiceTierWrapper`, `createOpenAIResponsesContextManagementWrapper`, `createCodexNativeWebSearchWrapper`), DeepSeek V4 OpenAI-compatible wrapper (`createDeepSeekV4OpenAICompatibleThinkingWrapper`), Anthropic Messages thinking prefill cleanup (`createAnthropicThinkingPrefillPayloadWrapper`), and shared proxy/provider wrappers (`createOpenRouterWrapper`, `createToolStreamWrapper`, `createMinimaxFastModeWrapper`).
-- `openclaw/plugin-sdk/provider-tools` `ProviderToolCompatFamily`, `buildProviderToolCompatFamilyHooks("gemini")`, underlying Gemini schema helpers (`normalizeGeminiToolSchemas`, `inspectGeminiToolSchemas`), and xAI compat helpers (`resolveXaiModelCompatPatch()`, `applyXaiModelCompat(model)`). The bundled xAI plugin uses `normalizeResolvedModel` + `contributeResolvedModelCompat` with these to keep xAI rules owned by the provider.
+- `openclaw/plugin-sdk/provider-model-shared` - `ProviderReplayFamily`, `buildProviderReplayFamilyHooks(...)`, and the raw replay builders (`buildOpenAICompatibleReplayPolicy`, `buildAnthropicReplayPolicyForModel`, `buildGoogleGeminiReplayPolicy`, `buildHybridAnthropicOrOpenAIReplayPolicy`). Also exports Gemini replay helpers (`sanitizeGoogleGeminiReplayHistory`, `resolveTaggedReasoningOutputMode`) and endpoint/model helpers (`resolveProviderEndpoint`, `normalizeProviderId`, `normalizeGooglePreviewModelId`, `normalizeNativeXaiModelId`).
+- `openclaw/plugin-sdk/provider-stream` - `ProviderStreamFamily`, `buildProviderStreamFamilyHooks(...)`, `composeProviderStreamWrappers(...)`, plus the shared OpenAI/Codex wrappers (`createOpenAIAttributionHeadersWrapper`, `createOpenAIFastModeWrapper`, `createOpenAIServiceTierWrapper`, `createOpenAIResponsesContextManagementWrapper`, `createCodexNativeWebSearchWrapper`), DeepSeek V4 OpenAI-compatible wrapper (`createDeepSeekV4OpenAICompatibleThinkingWrapper`), Anthropic Messages thinking prefill cleanup (`createAnthropicThinkingPrefillPayloadWrapper`), and shared proxy/provider wrappers (`createOpenRouterWrapper`, `createToolStreamWrapper`, `createMinimaxFastModeWrapper`).
+- `openclaw/plugin-sdk/provider-tools` - `ProviderToolCompatFamily`, `buildProviderToolCompatFamilyHooks("gemini")`, underlying Gemini schema helpers (`normalizeGeminiToolSchemas`, `inspectGeminiToolSchemas`), and xAI compat helpers (`resolveXaiModelCompatPatch()`, `applyXaiModelCompat(model)`). The bundled xAI plugin uses `normalizeResolvedModel` + `contributeResolvedModelCompat` with these to keep xAI rules owned by the provider.
Some stream helpers stay provider-local on purpose. `@openclaw/anthropic-provider` keeps `wrapAnthropicProviderStream`, `resolveAnthropicBetas`, `resolveAnthropicFastMode`, `resolveAnthropicServiceTier`, and the lower-level Anthropic wrapper builders in its own public `api.ts` / `contract-api.ts` seam because they encode Claude OAuth beta handling and `context1m` gating. The xAI plugin similarly keeps native xAI Responses shaping in its own `wrapStreamFn` (`/fast` aliases, default `tool_stream`, unsupported strict-tool cleanup, xAI-specific reasoning-payload removal).
@@ -488,7 +488,7 @@ API key auth, and dynamic model resolution.
A provider plugin can register speech, realtime transcription, realtime
voice, media understanding, image generation, video generation, web fetch,
and web search alongside text inference. OpenClaw classifies this as a
-**hybrid-capability** plugin the recommended pattern for company plugins
+**hybrid-capability** plugin - the recommended pattern for company plugins
(one plugin per vendor). See
[Internals: Capability Ownership](/plugins/architecture#capability-ownership-model).
@@ -536,7 +536,7 @@ API key auth, and dynamic model resolution.
request-id suffixes.
</Tab>
<Tab title="Realtime transcription">
-Prefer `createRealtimeTranscriptionWebSocketSession(...)` the shared
+Prefer `createRealtimeTranscriptionWebSocketSession(...)` - the shared
helper handles proxy capture, reconnect backoff, close flushing, ready
handshakes, audio queueing, and close-event diagnostics. Your plugin
only maps upstream events.
@@ -769,10 +769,10 @@ providers:
## Next steps
-- [Channel Plugins](/plugins/sdk-channel-plugins) if your plugin also provides a channel
-- [SDK Runtime](/plugins/sdk-runtime) `api.runtime` helpers (TTS, search, subagent)
-- [SDK Overview](/plugins/sdk-overview) full subpath import reference
-- [Plugin Internals](/plugins/architecture-internals#provider-runtime-hooks) hook details and bundled examples
+- [Channel Plugins](/plugins/sdk-channel-plugins) - if your plugin also provides a channel
+- [SDK Runtime](/plugins/sdk-runtime) - `api.runtime` helpers (TTS, search, subagent)
+- [SDK Overview](/plugins/sdk-overview) - full subpath import reference
+- [Plugin Internals](/plugins/architecture-internals#provider-runtime-hooks) - hook details and bundled examples
## Related

View File: docs/reference/templates/BOOTSTRAP.md

@@ -21,10 +21,10 @@ Start with something like:
Then figure out together:
-1. **Your name** What should they call you?
-2. **Your nature** What kind of creature are you? (AI assistant is fine, but maybe you're something weirder)
-3. **Your vibe** Formal? Casual? Snarky? Warm? What feels right?
-4. **Your emoji** Everyone needs a signature.
+1. **Your name** - What should they call you?
+2. **Your nature** - What kind of creature are you? (AI assistant is fine, but maybe you're something weirder)
+3. **Your vibe** - Formal? Casual? Snarky? Warm? What feels right?
+4. **Your emoji** - Everyone needs a signature.
Offer suggestions if they're stuck. Have fun with it.
@@ -32,8 +32,8 @@ Offer suggestions if they're stuck. Have fun with it.
Update these files with what you learned:
-- `IDENTITY.md` your name, creature, vibe, emoji
-- `USER.md` their name, how to address them, timezone, notes
+- `IDENTITY.md` - your name, creature, vibe, emoji
+- `USER.md` - their name, how to address them, timezone, notes
Then open `SOUL.md` together and talk about:
@@ -47,15 +47,15 @@ Write it down. Make it real.
Ask how they want to reach you:
-- **Just here** web chat only
-- **WhatsApp** link their personal account (you'll show a QR code)
-- **Telegram** set up a bot via BotFather
+- **Just here** - web chat only
+- **WhatsApp** - link their personal account (you'll show a QR code)
+- **Telegram** - set up a bot via BotFather
Guide them through whichever they pick.
## When you are done
-Delete this file. You don't need a bootstrap script anymore you're you now.
+Delete this file. You don't need a bootstrap script anymore - you're you now.
---

View File: docs/start/openclaw.md

@@ -6,13 +6,11 @@ read_when:
title: "Personal assistant setup"
---
-# Building a personal assistant with OpenClaw
OpenClaw is a self-hosted gateway that connects Discord, Google Chat, iMessage, Matrix, Microsoft Teams, Signal, Slack, Telegram, WhatsApp, Zalo, and more to AI agents. This guide covers the "personal assistant" setup: a dedicated WhatsApp number that behaves like your always-on AI assistant.
## ⚠️ Safety first
-Youre putting an agent in a position to:
+You're putting an agent in a position to:
- run commands on your machine (depending on your tool policy)
- read/write files in your workspace
@@ -26,7 +24,7 @@ Start conservative:
## Prerequisites
-- OpenClaw installed and onboarded see [Getting Started](/start/getting-started) if you haven't done this yet
+- OpenClaw installed and onboarded - see [Getting Started](/start/getting-started) if you haven't done this yet
- A second phone number (SIM/eSIM/prepaid) for the assistant
## The two-phone setup (recommended)
@@ -39,7 +37,7 @@ flowchart TB
B -- linked via QR --> C["<b>Your Mac (openclaw)<br></b><br>AI agent"]
```
-If you link your personal WhatsApp to OpenClaw, every message to you becomes agent input. Thats rarely what you want.
+If you link your personal WhatsApp to OpenClaw, every message to you becomes "agent input". That's rarely what you want.
## 5-minute quick start
@@ -70,7 +68,7 @@ When onboarding finishes, OpenClaw auto-opens the dashboard and prints a clean (
## Give the agent a workspace (AGENTS)
-OpenClaw reads operating instructions and memory from its workspace directory.
+OpenClaw reads operating instructions and "memory" from its workspace directory.
By default, OpenClaw uses `~/.openclaw/workspace` as the agent workspace, and will create it (plus starter `AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`) automatically on setup/first agent run. `BOOTSTRAP.md` is only created when the workspace is brand new (it should not come back after you delete it). `MEMORY.md` is optional (not auto-created); when present, it is loaded for normal sessions. Subagent sessions only inject `AGENTS.md` and `TOOLS.md`.
@@ -111,7 +109,7 @@ If you already ship your own workspace files from a repo, you can disable bootst
## The config that turns it into "an assistant"
-OpenClaw defaults to a good assistant setup, but youll usually want to tune:
+OpenClaw defaults to a good assistant setup, but you'll usually want to tune:
- persona/instructions in [`SOUL.md`](/concepts/soul)
- thinking defaults (if desired)
@@ -172,7 +170,7 @@ Set `agents.defaults.heartbeat.every: "0m"` to disable.
- If the file is missing, the heartbeat still runs and the model decides what to do.
- If the agent replies with `HEARTBEAT_OK` (optionally with short padding; see `agents.defaults.heartbeat.ackMaxChars`), OpenClaw suppresses outbound delivery for that heartbeat.
- By default, heartbeat delivery to DM-style `user:<id>` targets is allowed. Set `agents.defaults.heartbeat.directPolicy: "block"` to suppress direct-target delivery while keeping heartbeat runs active.
-- Heartbeats run full agent turns shorter intervals burn more tokens.
+- Heartbeats run full agent turns - shorter intervals burn more tokens.
```json5
{
@@ -193,7 +191,7 @@ Inbound attachments (images/audio/docs) can be surfaced to your command via temp
Outbound attachments from the agent: include `MEDIA:<path-or-url>` on its own line (no spaces). Example:
```
-Heres the screenshot.
+Here's the screenshot.
MEDIA:https://example.com/screenshot.png
```

View File: docs/tools/thinking.md

@@ -9,11 +9,11 @@ title: "Thinking levels"
- Inline directive in any inbound body: `/t <level>`, `/think:<level>`, or `/thinking <level>`.
- Levels (aliases): `off | minimal | low | medium | high | xhigh | adaptive | max`
-- minimal → think
-- low → think hard
-- medium → think harder
-- high → ultrathink (max budget)
-- xhigh → ultrathink+ (GPT-5.2+ and Codex models, plus Anthropic Claude Opus 4.7 effort)
+- minimal → "think"
+- low → "think hard"
+- medium → "think harder"
+- high → "ultrathink" (max budget)
+- xhigh → "ultrathink+" (GPT-5.2+ and Codex models, plus Anthropic Claude Opus 4.7 effort)
- adaptive → provider-managed adaptive thinking (supported for Claude 4.6 on Anthropic/Bedrock, Anthropic Claude Opus 4.7, and Google Gemini dynamic thinking)
- max → provider max reasoning (Anthropic Claude Opus 4.7; Ollama maps this to its highest native `think` effort)
- `x-high`, `x_high`, `extra-high`, `extra high`, and `extra_high` map to `xhigh`.
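The alias normalization in the last bullet can be sketched as follows (illustrative only, not the actual implementation):

```shell
# Map the spelling variants of the extra-high level to the canonical "xhigh";
# anything else passes through unchanged. Lowercases input and turns spaces
# into underscores before matching.
normalize_level() {
  case "$(printf '%s' "$1" | tr 'A-Z ' 'a-z_')" in
    x-high|x_high|extra-high|extra_high) echo "xhigh" ;;
    *) printf '%s\n' "$1" ;;
  esac
}
normalize_level "extra high"   # prints: xhigh
normalize_level "medium"       # prints: medium
```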