docs: typography hygiene across 6 pages

Replaced 66 typography characters (curly quotes, apostrophes, em/en
dashes, non-breaking hyphens) with ASCII equivalents per
docs/CLAUDE.md heading and content hygiene rules.

- docs/channels/mattermost.md: 12 chars
- docs/tools/plugin.md: 11 chars
- docs/providers/xai.md: 11 chars
- docs/plugins/building-plugins.md: 11 chars
- docs/concepts/streaming.md: 11 chars
- docs/concepts/model-providers.md: 11 chars

Author: Vincent Koc
Date: 2026-05-05 20:44:58 -07:00
parent ebb8bed78f
commit 4ee234f8ee
6 changed files with 65 additions and 65 deletions

docs/concepts/model-providers.md

@@ -82,7 +82,7 @@ Provider-owned runner behavior lives on explicit provider hooks such as replay p
## Built-in providers (pi-ai catalog)
-OpenClaw ships with the pi‑ai catalog. These providers require **no** `models.providers` config; just set auth + pick a model.
+OpenClaw ships with the pi-ai catalog. These providers require **no** `models.providers` config; just set auth + pick a model.
### OpenAI
@@ -295,11 +295,11 @@ See [/providers/kilocode](/providers/kilocode) for setup details.
| ----------------------- | -------------------------------- | ------------------------------------------------------------ | --------------------------------------------- |
| BytePlus | `byteplus` / `byteplus-plan` | `BYTEPLUS_API_KEY` | `byteplus-plan/ark-code-latest` |
| Cerebras | `cerebras` | `CEREBRAS_API_KEY` | `cerebras/zai-glm-4.7` |
-| Cloudflare AI Gateway   | `cloudflare-ai-gateway`          | `CLOUDFLARE_AI_GATEWAY_API_KEY`                              | —                                             |
+| Cloudflare AI Gateway   | `cloudflare-ai-gateway`          | `CLOUDFLARE_AI_GATEWAY_API_KEY`                              | -                                             |
| DeepInfra | `deepinfra` | `DEEPINFRA_API_KEY` | `deepinfra/deepseek-ai/DeepSeek-V3.2` |
| DeepSeek | `deepseek` | `DEEPSEEK_API_KEY` | `deepseek/deepseek-v4-flash` |
-| GitHub Copilot          | `github-copilot`                 | `COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`         | —                                             |
-| Groq                    | `groq`                           | `GROQ_API_KEY`                                               | —                                             |
+| GitHub Copilot          | `github-copilot`                 | `COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`         | -                                             |
+| Groq                    | `groq`                           | `GROQ_API_KEY`                                               | -                                             |
| Hugging Face Inference | `huggingface` | `HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN` | `huggingface/deepseek-ai/DeepSeek-R1` |
| Kilo Gateway | `kilocode` | `KILOCODE_API_KEY` | `kilocode/kilo/auto` |
| Kimi Coding | `kimi` | `KIMI_API_KEY` or `KIMICODE_API_KEY` | `kimi/kimi-code` |
@@ -312,7 +312,7 @@ See [/providers/kilocode](/providers/kilocode) for setup details.
| Qwen Cloud | `qwen` | `QWEN_API_KEY` / `MODELSTUDIO_API_KEY` / `DASHSCOPE_API_KEY` | `qwen/qwen3.5-plus` |
| StepFun | `stepfun` / `stepfun-plan` | `STEPFUN_API_KEY` | `stepfun/step-3.5-flash` |
| Together | `together` | `TOGETHER_API_KEY` | `together/moonshotai/Kimi-K2.5` |
-| Venice                  | `venice`                         | `VENICE_API_KEY`                                             | —                                             |
+| Venice                  | `venice`                         | `VENICE_API_KEY`                                             | -                                             |
| Vercel AI Gateway | `vercel-ai-gateway` | `AI_GATEWAY_API_KEY` | `vercel-ai-gateway/anthropic/claude-opus-4.6` |
| Volcano Engine (Doubao) | `volcengine` / `volcengine-plan` | `VOLCANO_ENGINE_API_KEY` | `volcengine-plan/ark-code-latest` |
| xAI | `xai` | `XAI_API_KEY` | `xai/grok-4.3` |
@@ -343,7 +343,7 @@ See [/providers/kilocode](/providers/kilocode) for setup details.
## Providers via `models.providers` (custom/base URL)
-Use `models.providers` (or `models.json`) to add **custom** providers or OpenAI/Anthropic‑compatible proxies.
+Use `models.providers` (or `models.json`) to add **custom** providers or OpenAI/Anthropic-compatible proxies.
Many of the bundled provider plugins below already publish a default catalog. Use explicit `models.providers.<id>` entries only when you want to override the default base URL, headers, or model list.
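
For illustration, a custom entry under `models.providers` might look like the sketch below. The provider id, base URL, and inner key names (`baseUrl`, `api`, `models`) are assumptions for this example; check the configuration reference for the real schema:

```json5
{
  models: {
    providers: {
      // "my-proxy" is a hypothetical provider id pointing at a local
      // OpenAI-compatible endpoint; key names here are illustrative.
      "my-proxy": {
        baseUrl: "http://localhost:4000/v1",
        api: "openai",
        models: ["my-proxy/llama-3.1-70b"],
      },
    },
  },
}
```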
@@ -635,7 +635,7 @@ See [/providers/sglang](/providers/sglang) for details.
### Local proxies (LM Studio, vLLM, LiteLLM, etc.)
-Example (OpenAI‑compatible):
+Example (OpenAI-compatible):
```json5
{
@@ -708,7 +708,7 @@ See also: [Configuration](/gateway/configuration) for full configuration example
## Related
-- [Configuration reference](/gateway/config-agents#agent-defaults) — model config keys
-- [Model failover](/concepts/model-failover) — fallback chains and retry behavior
-- [Models](/concepts/models) — model configuration and aliases
-- [Providers](/providers) — per-provider setup guides
+- [Configuration reference](/gateway/config-agents#agent-defaults) - model config keys
+- [Model failover](/concepts/model-failover) - fallback chains and retry behavior
+- [Models](/concepts/models) - model configuration and aliases
+- [Providers](/providers) - per-provider setup guides

docs/concepts/streaming.md

@@ -69,17 +69,17 @@ streaming and the provider also includes it in the completed reply.
Block chunking is implemented by `EmbeddedBlockChunker`:
-- **Low bound:** don’t emit until buffer >= `minChars` (unless forced).
+- **Low bound:** don't emit until buffer >= `minChars` (unless forced).
- **High bound:** prefer splits before `maxChars`; if forced, split at `maxChars`.
- **Break preference:** `paragraph` → `newline` → `sentence` → `whitespace` → hard break.
- **Code fences:** never split inside fences; when forced at `maxChars`, close + reopen the fence to keep Markdown valid.
-`maxChars` is clamped to the channel `textChunkLimit`, so you can’t exceed per-channel caps.
+`maxChars` is clamped to the channel `textChunkLimit`, so you can't exceed per-channel caps.
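
The bounds and break preference above can be sketched roughly as follows. This is a simplified illustration, not OpenClaw's actual `EmbeddedBlockChunker` code, and it ignores the code-fence close/reopen handling:

```python
def pick_split(buffer: str, min_chars: int, max_chars: int, forced: bool = False):
    """Return (chunk, remainder); chunk is None while we keep buffering."""
    # Low bound: don't emit until the buffer reaches min_chars (unless forced).
    if len(buffer) < min_chars and not forced:
        return None, buffer
    if len(buffer) <= max_chars:
        # Under the high bound: this sketch flushes everything only when forced.
        return (buffer, "") if forced else (None, buffer)
    # High bound: prefer a natural break before max_chars, walking the
    # preference order paragraph -> newline -> sentence -> whitespace.
    window = buffer[:max_chars]
    for sep in ("\n\n", "\n", ". ", " "):
        cut = window.rfind(sep)
        if cut >= min_chars:
            cut += len(sep)
            return buffer[:cut], buffer[cut:]
    # No acceptable break point: hard split at max_chars.
    return buffer[:max_chars], buffer[max_chars:]
```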
## Coalescing (merge streamed blocks)
When block streaming is enabled, OpenClaw can **merge consecutive block chunks**
-before sending them out. This reduces “single-line spam” while still providing
+before sending them out. This reduces "single-line spam" while still providing
progressive output.
- Coalescing waits for **idle gaps** (`idleMs`) before flushing.
@@ -98,7 +98,7 @@ block replies (after the first block). This makes multi-bubble responses feel
more natural.
- Config: `agents.defaults.humanDelay` (override per agent via `agents.list[].humanDelay`).
-- Modes: `off` (default), `natural` (800–2500ms), `custom` (`minMs`/`maxMs`).
+- Modes: `off` (default), `natural` (800-2500ms), `custom` (`minMs`/`maxMs`).
- Applies only to **block replies**, not final replies or tool summaries.
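
As a sketch, a custom delay might be configured like this. The nesting under `agents.defaults.humanDelay` and the `minMs`/`maxMs` names come from the bullets above; the `mode` field name is an assumption:

```json5
{
  agents: {
    defaults: {
      // "mode" as a key name is an assumption; the modes themselves
      // (off/natural/custom) and minMs/maxMs are documented above.
      humanDelay: { mode: "custom", minMs: 500, maxMs: 1500 },
    },
  },
}
```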
## "Stream chunks or everything"
@@ -193,7 +193,7 @@ Matrix:
### Tool-progress preview updates
-Preview streaming can also include **tool-progress** updates — short status lines like "searching the web", "reading file", or "calling tool" — that appear in the same preview message while tools are running, ahead of the final reply. This keeps multi-step tool turns visually alive rather than silent between the first thinking preview and the final answer.
+Preview streaming can also include **tool-progress** updates - short status lines like "searching the web", "reading file", or "calling tool" - that appear in the same preview message while tools are running, ahead of the final reply. This keeps multi-step tool turns visually alive rather than silent between the first thinking preview and the final answer.
Supported surfaces:
@@ -243,7 +243,7 @@ Use the same shape under another compact progress channel key, for example `chan
## Related
- [Message lifecycle refactor](/concepts/message-lifecycle-refactor) - target shared preview, edit, stream, and finalization design
-- [Progress drafts](/concepts/progress-drafts) — visible work-in-progress messages that update during long turns
-- [Messages](/concepts/messages) — message lifecycle and delivery
-- [Retry](/concepts/retry) — retry behavior on delivery failure
-- [Channels](/channels) — per-channel streaming support
+- [Progress drafts](/concepts/progress-drafts) - visible work-in-progress messages that update during long turns
+- [Messages](/concepts/messages) - message lifecycle and delivery
+- [Retry](/concepts/retry) - retry behavior on delivery failure
+- [Channels](/channels) - per-channel streaming support