From fa2a32d0c5080c4d7eaf6ba9daa955b4cb8ee667 Mon Sep 17 00:00:00 2001
From: Vincent Koc
Date: Tue, 5 May 2026 22:44:17 -0700
Subject: [PATCH] docs: typography hygiene across 6 pages (cli/gateway/platforms)

---
 docs/cli/models.md                     | 4 ++--
 docs/gateway/local-models.md           | 4 ++--
 docs/gateway/logging.md                | 4 ++--
 docs/gateway/openai-http-api.md        | 4 ++--
 docs/gateway/openresponses-http-api.md | 4 ++--
 docs/platforms/android.md              | 4 ++--
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/docs/cli/models.md b/docs/cli/models.md
index ed03c2a4e4d..ed7bf29d069 100644
--- a/docs/cli/models.md
+++ b/docs/cli/models.md
@@ -36,7 +36,7 @@ In `--json` output, `auth.providers` is the env/config/store-aware provider
 overview, while `auth.oauth` is auth-store profile health only. Add `--probe` to
 run live auth probes against each configured provider profile. Probes are real
 requests (may consume tokens and trigger rate limits).
-Use `--agent ` to inspect a configured agent’s model/auth state. When omitted,
+Use `--agent ` to inspect a configured agent's model/auth state. When omitted,
 the command uses `OPENCLAW_AGENT_DIR`/`PI_CODING_AGENT_DIR` if set, otherwise the
 configured default agent. Probe rows can come from auth profiles, env
 credentials, or `models.json`.
@@ -176,7 +176,7 @@ provider you choose.
 printing token, API-key, or OAuth secret material. Use `--provider ` to filter
 to one provider, such as `openai-codex`, and `--json` for scripting.
 
-`models auth login` runs a provider plugin’s auth flow (OAuth/API key). Use
+`models auth login` runs a provider plugin's auth flow (OAuth/API key). Use
 `openclaw plugins list` to see which providers are installed. Use `openclaw
 models auth --agent ` to write auth results to a specific configured agent store.
 The parent `--agent` flag is honored by
diff --git a/docs/gateway/local-models.md b/docs/gateway/local-models.md
index d5bcf482a12..4b1f8508b1d 100644
--- a/docs/gateway/local-models.md
+++ b/docs/gateway/local-models.md
@@ -70,7 +70,7 @@ Best current local stack. Load a large model in LM Studio (for example, a full-s
 **Setup checklist**
 
 - Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
-- In LM Studio, download the **largest model build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
+- In LM Studio, download the **largest model build available** (avoid "small"/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
 - Replace `my-local-model` with the actual model ID shown in LM Studio.
 - Keep the model loaded; cold-load adds startup latency.
 - Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
@@ -317,7 +317,7 @@ If the model loads cleanly but full agent turns misbehave, work top-down — con
 ## Troubleshooting
 
 - Gateway can reach the proxy? `curl http://127.0.0.1:1234/v1/models`.
-- LM Studio model unloaded? Reload; cold start is a common “hanging” cause.
+- LM Studio model unloaded? Reload; cold start is a common "hanging" cause.
 - Local server says `terminated`, `ECONNRESET`, or closes the stream mid-turn?
   OpenClaw records a low-cardinality `model.call.error.failureKind` plus the
   OpenClaw process RSS/heap snapshot in diagnostics. For LM Studio/Ollama
diff --git a/docs/gateway/logging.md b/docs/gateway/logging.md
index c70a5b7252b..1f8d5024f6a 100644
--- a/docs/gateway/logging.md
+++ b/docs/gateway/logging.md
@@ -10,7 +10,7 @@ title: "Gateway logging"
 
 For a user-facing overview (CLI + Control UI + config), see [/logging](/logging).
 
-OpenClaw has two log “surfaces”:
+OpenClaw has two log "surfaces":
 
 - **Console output** (what you see in the terminal / Debug UI).
 - **File logs** (JSON lines) written by the gateway logger.
@@ -90,7 +90,7 @@ does not make them emit raw secrets.
 
 The gateway prints WebSocket protocol logs in two modes:
 
-- **Normal mode (no `--verbose`)**: only “interesting” RPC results are printed:
+- **Normal mode (no `--verbose`)**: only "interesting" RPC results are printed:
   - errors (`ok=false`)
   - slow calls (default threshold: `>= 50ms`)
   - parse errors
diff --git a/docs/gateway/openai-http-api.md b/docs/gateway/openai-http-api.md
index 86ec14def17..4b20c93c1c3 100644
--- a/docs/gateway/openai-http-api.md
+++ b/docs/gateway/openai-http-api.md
@@ -5,14 +5,14 @@ read_when:
 title: "OpenAI chat completions"
 ---
 
-OpenClaw’s Gateway can serve a small OpenAI-compatible Chat Completions endpoint.
+OpenClaw's Gateway can serve a small OpenAI-compatible Chat Completions endpoint.
 
 This endpoint is **disabled by default**. Enable it in config first.
 
 - `POST /v1/chat/completions`
 - Same port as the Gateway (WS + HTTP multiplex): `http://:/v1/chat/completions`
 
-When the Gateway’s OpenAI-compatible HTTP surface is enabled, it also serves:
+When the Gateway's OpenAI-compatible HTTP surface is enabled, it also serves:
 
 - `GET /v1/models`
 - `GET /v1/models/{id}`
diff --git a/docs/gateway/openresponses-http-api.md b/docs/gateway/openresponses-http-api.md
index 2c7876abd89..76204fd0b47 100644
--- a/docs/gateway/openresponses-http-api.md
+++ b/docs/gateway/openresponses-http-api.md
@@ -6,7 +6,7 @@ read_when:
 title: "OpenResponses API"
 ---
 
-OpenClaw’s Gateway can serve an OpenResponses-compatible `POST /v1/responses` endpoint.
+OpenClaw's Gateway can serve an OpenResponses-compatible `POST /v1/responses` endpoint.
 
 This endpoint is **disabled by default**. Enable it in config first.
 
@@ -95,7 +95,7 @@ Supported:
 Roles: `system`, `developer`, `user`, `assistant`.
 
 - `system` and `developer` are appended to the system prompt.
-- The most recent `user` or `function_call_output` item becomes the “current message.”
+- The most recent `user` or `function_call_output` item becomes the "current message."
 - Earlier user/assistant messages are included as history for context.
 
 ### `function_call_output` (turn-based tools)
diff --git a/docs/platforms/android.md b/docs/platforms/android.md
index 259dd9a7bc5..203f8981d41 100644
--- a/docs/platforms/android.md
+++ b/docs/platforms/android.md
@@ -37,7 +37,7 @@ For Tailscale or public hosts, Android requires a secure endpoint:
 
 ### Prerequisites
 
-- You can run the Gateway on the “master” machine.
+- You can run the Gateway on the "master" machine.
 - Android device/emulator can reach the gateway WebSocket:
   - Same LAN with mDNS/NSD, **or**
   - Same Tailscale tailnet using Wide-Area Bonjour / unicast DNS-SD (see below), **or**
@@ -84,7 +84,7 @@ service endpoint instead of TXT-only hints.
 
 #### Tailnet (Vienna ⇄ London) discovery via unicast DNS-SD
 
-Android NSD/mDNS discovery won’t cross networks. If your Android node and the gateway are on different networks but connected via Tailscale, use Wide-Area Bonjour / unicast DNS-SD instead.
+Android NSD/mDNS discovery won't cross networks. If your Android node and the gateway are on different networks but connected via Tailscale, use Wide-Area Bonjour / unicast DNS-SD instead.
 
 Discovery alone is not sufficient for tailnet/public Android pairing. The discovered route still needs a secure endpoint (`wss://` or Tailscale Serve):