docs: typography hygiene across 6 pages (cli/gateway/platforms)

Vincent Koc
2026-05-05 22:44:17 -07:00
parent 5f783d7ddd
commit fa2a32d0c5
6 changed files with 12 additions and 12 deletions

View File

@@ -36,7 +36,7 @@ In `--json` output, `auth.providers` is the env/config/store-aware provider
overview, while `auth.oauth` is auth-store profile health only.
Add `--probe` to run live auth probes against each configured provider profile.
Probes are real requests (may consume tokens and trigger rate limits).
-Use `--agent <id>` to inspect a configured agents model/auth state. When omitted,
+Use `--agent <id>` to inspect a configured agent's model/auth state. When omitted,
the command uses `OPENCLAW_AGENT_DIR`/`PI_CODING_AGENT_DIR` if set, otherwise the
configured default agent.
Probe rows can come from auth profiles, env credentials, or `models.json`.
@@ -176,7 +176,7 @@ provider you choose.
printing token, API-key, or OAuth secret material. Use `--provider <id>` to
filter to one provider, such as `openai-codex`, and `--json` for scripting.
-`models auth login` runs a provider plugins auth flow (OAuth/API key). Use
+`models auth login` runs a provider plugin's auth flow (OAuth/API key). Use
`openclaw plugins list` to see which providers are installed.
Use `openclaw models auth --agent <id> <subcommand>` to write auth results to a
specific configured agent store. The parent `--agent` flag is honored by
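
Putting the pieces above together, a minimal sketch of the login flow (the agent id `main` is a placeholder for illustration, not a documented default):

```bash
# Run the provider plugin's auth flow (OAuth or API key), writing the
# result to the auth store of a specific configured agent.
# "main" is a hypothetical agent id used only for illustration.
openclaw models auth --agent main login

# List installed plugins to see which provider auth flows are available.
openclaw plugins list
```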

View File

@@ -70,7 +70,7 @@ Best current local stack. Load a large model in LM Studio (for example, a full-s
**Setup checklist**
- Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
-- In LM Studio, download the **largest model build available** (avoid small/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
+- In LM Studio, download the **largest model build available** (avoid "small"/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
- Replace `my-local-model` with the actual model ID shown in LM Studio.
- Keep the model loaded; cold-load adds startup latency.
- Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
@@ -317,7 +317,7 @@ If the model loads cleanly but full agent turns misbehave, work top-down — con
## Troubleshooting
- Gateway can reach the proxy? `curl http://127.0.0.1:1234/v1/models`.
-- LM Studio model unloaded? Reload; cold start is a common hanging cause.
+- LM Studio model unloaded? Reload; cold start is a common "hanging" cause.
- Local server says `terminated`, `ECONNRESET`, or closes the stream mid-turn?
OpenClaw records a low-cardinality `model.call.error.failureKind` plus the
OpenClaw process RSS/heap snapshot in diagnostics. For LM Studio/Ollama

View File

@@ -10,7 +10,7 @@ title: "Gateway logging"
For a user-facing overview (CLI + Control UI + config), see [/logging](/logging).
-OpenClaw has two log surfaces:
+OpenClaw has two log "surfaces":
- **Console output** (what you see in the terminal / Debug UI).
- **File logs** (JSON lines) written by the gateway logger.
@@ -90,7 +90,7 @@ does not make them emit raw secrets.
The gateway prints WebSocket protocol logs in two modes:
-- **Normal mode (no `--verbose`)**: only interesting RPC results are printed:
+- **Normal mode (no `--verbose`)**: only "interesting" RPC results are printed:
- errors (`ok=false`)
- slow calls (default threshold: `>= 50ms`)
- parse errors

View File

@@ -5,14 +5,14 @@ read_when:
title: "OpenAI chat completions"
---
-OpenClaws Gateway can serve a small OpenAI-compatible Chat Completions endpoint.
+OpenClaw's Gateway can serve a small OpenAI-compatible Chat Completions endpoint.
This endpoint is **disabled by default**. Enable it in config first.
- `POST /v1/chat/completions`
- Same port as the Gateway (WS + HTTP multiplex): `http://<gateway-host>:<port>/v1/chat/completions`
-When the Gateways OpenAI-compatible HTTP surface is enabled, it also serves:
+When the Gateway's OpenAI-compatible HTTP surface is enabled, it also serves:
- `GET /v1/models`
- `GET /v1/models/{id}`
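
A minimal smoke test of this surface might look like the following sketch — the port `8080`, the model id `my-local-model`, and the absence of any auth header are illustrative assumptions, not defaults confirmed by this page:

```bash
# Chat Completions request against the Gateway's OpenAI-compatible endpoint.
# Port 8080 and the model id are placeholders; substitute your gateway's values.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-local-model", "messages": [{"role": "user", "content": "Hello"}]}'

# Enumerate the models the same surface exposes.
curl http://127.0.0.1:8080/v1/models
```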

View File

@@ -6,7 +6,7 @@ read_when:
title: "OpenResponses API"
---
-OpenClaws Gateway can serve an OpenResponses-compatible `POST /v1/responses` endpoint.
+OpenClaw's Gateway can serve an OpenResponses-compatible `POST /v1/responses` endpoint.
This endpoint is **disabled by default**. Enable it in config first.
@@ -95,7 +95,7 @@ Supported:
Roles: `system`, `developer`, `user`, `assistant`.
- `system` and `developer` are appended to the system prompt.
-- The most recent `user` or `function_call_output` item becomes the current message.
+- The most recent `user` or `function_call_output` item becomes the "current message."
- Earlier user/assistant messages are included as history for context.
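
As a sketch of the role mapping above (port, model id, and exact payload shape are assumptions): the final `user` item below becomes the current message, the `system` item is appended to the system prompt, and the earlier user/assistant pair is included as history.

```bash
# Illustrative OpenResponses request; values are placeholders.
curl http://127.0.0.1:8080/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
        "model": "my-local-model",
        "input": [
          {"role": "system", "content": "Answer briefly."},
          {"role": "user", "content": "What port does this use?"},
          {"role": "assistant", "content": "The same port as the Gateway."},
          {"role": "user", "content": "And which endpoint serves responses?"}
        ]
      }'
```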
### `function_call_output` (turn-based tools)

View File

@@ -37,7 +37,7 @@ For Tailscale or public hosts, Android requires a secure endpoint:
### Prerequisites
-- You can run the Gateway on the master machine.
+- You can run the Gateway on the "master" machine.
- Android device/emulator can reach the gateway WebSocket:
- Same LAN with mDNS/NSD, **or**
- Same Tailscale tailnet using Wide-Area Bonjour / unicast DNS-SD (see below), **or**
@@ -84,7 +84,7 @@ service endpoint instead of TXT-only hints.
#### Tailnet (Vienna ⇄ London) discovery via unicast DNS-SD
-Android NSD/mDNS discovery wont cross networks. If your Android node and the gateway are on different networks but connected via Tailscale, use Wide-Area Bonjour / unicast DNS-SD instead.
+Android NSD/mDNS discovery won't cross networks. If your Android node and the gateway are on different networks but connected via Tailscale, use Wide-Area Bonjour / unicast DNS-SD instead.
Discovery alone is not sufficient for tailnet/public Android pairing. The discovered route still needs a secure endpoint (`wss://` or Tailscale Serve):
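
For the Tailscale Serve route, a minimal sketch (the gateway port `8080` is a placeholder for your actual gateway port):

```bash
# Publish the local gateway port over the tailnet behind Tailscale's TLS,
# yielding an https:// endpoint the Android node can pair against as wss://.
tailscale serve --bg 8080
```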