mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 17:20:45 +00:00
docs: typography hygiene across 6 pages (cli/gateway/platforms)
This commit is contained in:
@@ -70,7 +70,7 @@ Best current local stack. Load a large model in LM Studio (for example, a full-s

**Setup checklist**

- Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
-- In LM Studio, download the **largest model build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
+- In LM Studio, download the **largest model build available** (avoid "small"/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
- Replace `my-local-model` with the actual model ID shown in LM Studio.
- Keep the model loaded; cold-load adds startup latency.
- Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
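The server check in the list above can be scripted. This is a minimal sketch, assuming LM Studio's default port 1234; the helper names are ours, not OpenClaw's.

```python
# Sketch: verify the local LM Studio server is reachable and list model IDs.
import json
import urllib.request

def model_ids(payload: dict) -> list:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in payload.get("data", [])]

def list_local_models(base: str = "http://127.0.0.1:1234/v1") -> list:
    """Query the server's model list (raises if the server is down)."""
    with urllib.request.urlopen(base + "/models", timeout=5) as resp:
        return model_ids(json.load(resp))

# Usage (with the server running): list_local_models()
```

The ID printed here is what should replace `my-local-model` in your config.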
@@ -317,7 +317,7 @@ If the model loads cleanly but full agent turns misbehave, work top-down — con

## Troubleshooting

- Gateway can reach the proxy? `curl http://127.0.0.1:1234/v1/models`.
-- LM Studio model unloaded? Reload; cold start is a common “hanging” cause.
+- LM Studio model unloaded? Reload; cold start is a common "hanging" cause.
- Local server says `terminated`, `ECONNRESET`, or closes the stream mid-turn?
  OpenClaw records a low-cardinality `model.call.error.failureKind` plus the
  OpenClaw process RSS/heap snapshot in diagnostics. For LM Studio/Ollama
@@ -10,7 +10,7 @@ title: "Gateway logging"

For a user-facing overview (CLI + Control UI + config), see [/logging](/logging).

-OpenClaw has two log “surfaces”:
+OpenClaw has two log "surfaces":

- **Console output** (what you see in the terminal / Debug UI).
- **File logs** (JSON lines) written by the gateway logger.
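JSON-lines files like these are easy to post-process. A minimal sketch — the path and any field names are hypothetical; check your gateway's actual log location and schema.

```python
# Sketch: load a JSON-lines (one JSON object per line) log file into dicts.
import json

def read_jsonl(path: str) -> list:
    """Parse one JSON object per non-empty line; blank lines are skipped."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```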
@@ -90,7 +90,7 @@ does not make them emit raw secrets.

The gateway prints WebSocket protocol logs in two modes:

-- **Normal mode (no `--verbose`)**: only “interesting” RPC results are printed:
+- **Normal mode (no `--verbose`)**: only "interesting" RPC results are printed:
  - errors (`ok=false`)
  - slow calls (default threshold: `>= 50ms`)
  - parse errors
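The normal-mode filter above amounts to a simple predicate. An illustrative sketch — the 50 ms default mirrors the doc, but the function itself is not OpenClaw's code:

```python
# Sketch of the normal-mode "interesting result" filter described above.
def is_interesting(ok: bool, duration_ms: float, parse_error: bool = False,
                   threshold_ms: float = 50.0) -> bool:
    """True if a normal-mode gateway would print this RPC result:
    it errored, failed to parse, or was slow."""
    return (not ok) or parse_error or duration_ms >= threshold_ms
```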
@@ -5,14 +5,14 @@ read_when:
title: "OpenAI chat completions"
---

-OpenClaw’s Gateway can serve a small OpenAI-compatible Chat Completions endpoint.
+OpenClaw's Gateway can serve a small OpenAI-compatible Chat Completions endpoint.

This endpoint is **disabled by default**. Enable it in config first.

- `POST /v1/chat/completions`
- Same port as the Gateway (WS + HTTP multiplex): `http://<gateway-host>:<port>/v1/chat/completions`

-When the Gateway’s OpenAI-compatible HTTP surface is enabled, it also serves:
+When the Gateway's OpenAI-compatible HTTP surface is enabled, it also serves:

- `GET /v1/models`
- `GET /v1/models/{id}`
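A client for the `POST /v1/chat/completions` route above can be sketched with the standard library. Host, port, and model name are placeholders, and the endpoint must be enabled in config first:

```python
# Sketch: build a Chat Completions request for the Gateway's HTTP surface.
import json
import urllib.request

def build_chat_request(base: str, model: str, prompt: str) -> urllib.request.Request:
    """Return a POST request carrying an OpenAI-style messages payload."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        base + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Send with: urllib.request.urlopen(build_chat_request("http://<gateway-host>:<port>", ...))
```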
@@ -6,7 +6,7 @@ read_when:
title: "OpenResponses API"
---

-OpenClaw’s Gateway can serve an OpenResponses-compatible `POST /v1/responses` endpoint.
+OpenClaw's Gateway can serve an OpenResponses-compatible `POST /v1/responses` endpoint.

This endpoint is **disabled by default**. Enable it in config first.
@@ -95,7 +95,7 @@ Supported:
Roles: `system`, `developer`, `user`, `assistant`.

- `system` and `developer` are appended to the system prompt.
-- The most recent `user` or `function_call_output` item becomes the “current message.”
+- The most recent `user` or `function_call_output` item becomes the "current message."
- Earlier user/assistant messages are included as history for context.
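The role-mapping rules above can be sketched as a small function. This is illustrative only, not OpenClaw's implementation, and the item shapes are assumed OpenResponses-style dicts:

```python
# Illustrative sketch of the role mapping described above.
def split_conversation(items: list):
    """Return (system_prompt, current_message, history) per the rules above."""
    # system and developer items are appended to the system prompt
    system = "\n".join(i["content"] for i in items
                       if i.get("role") in ("system", "developer"))
    convo = [i for i in items
             if i.get("role") in ("user", "assistant")
             or i.get("type") == "function_call_output"]
    # the most recent user or function_call_output item is the current message
    current = None
    for i in reversed(convo):
        if i.get("role") == "user" or i.get("type") == "function_call_output":
            current = i
            break
    # everything earlier is included as history for context
    history = [i for i in convo if i is not current]
    return system, current, history
```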
### `function_call_output` (turn-based tools)