docs: audit and fix 5 pages (typography hygiene + dup H1)

This commit is contained in:
Vincent Koc
2026-05-05 22:03:50 -07:00
parent 06c490f818
commit f531eff629
5 changed files with 8 additions and 10 deletions


@@ -8,7 +8,7 @@ title: "Tests"
- Full testing kit (suites, live, Docker): [Testing](/help/testing)
- Update and plugin package validation: [Testing updates and plugins](/help/testing-updates-plugins)
-- `pnpm test:force`: Kills any lingering gateway process holding the default control port, then runs the full Vitest suite with an isolated gateway port so server tests dont collide with a running instance. Use this when a prior gateway run left port 18789 occupied.
+- `pnpm test:force`: Kills any lingering gateway process holding the default control port, then runs the full Vitest suite with an isolated gateway port so server tests don't collide with a running instance. Use this when a prior gateway run left port 18789 occupied.
- `pnpm test:coverage`: Runs the unit suite with V8 coverage (via `vitest.unit.config.ts`). This is a default-unit-lane coverage gate, not whole-repo all-file coverage. Thresholds are 70% lines/functions/statements and 55% branches. Because `coverage.all` is false and the default lane scopes coverage includes to non-fast unit tests with sibling source files, the gate measures source owned by this lane instead of every transitive import it happens to load.
- `pnpm test:coverage:changed`: Runs unit coverage only for files changed since `origin/main`.
- `pnpm test:changed`: cheap smart changed test run. It runs precise targets from direct test edits, sibling `*.test.ts` files, explicit source mappings, and the local import graph. Broad/config/package changes are skipped unless they map to precise tests.
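For context, the coverage gate the `pnpm test:coverage` bullet describes might be configured roughly like this. This is a hypothetical sketch of a `vitest.unit.config.ts`, not the repository's actual file; the include paths and file name are assumptions, only the thresholds (70% lines/functions/statements, 55% branches) and `coverage.all: false` come from the docs above:

```typescript
// Hypothetical sketch of a unit-lane coverage gate in Vitest.
// Only the thresholds and `all: false` are taken from the docs;
// everything else is illustrative.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      // `all: false` means only source actually loaded by this lane
      // is measured, not every file in the repo.
      all: false,
      thresholds: {
        lines: 70,
        functions: 70,
        statements: 70,
        branches: 55,
      },
    },
  },
});
```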
@@ -72,7 +72,7 @@ Usage:
- `source ~/.profile && pnpm tsx scripts/bench-model.ts --runs 10`
- Optional env: `MINIMAX_API_KEY`, `MINIMAX_BASE_URL`, `MINIMAX_MODEL`, `ANTHROPIC_API_KEY`
-- Default prompt: Reply with a single word: ok. No punctuation or extra text.
+- Default prompt: "Reply with a single word: ok. No punctuation or extra text."
Last run (2025-12-31, 20 runs):


@@ -6,8 +6,6 @@ read_when:
title: "Token use and costs"
---
-# Token use & costs
OpenClaw tracks **tokens**, not characters. Tokens are model-specific, but most
OpenAI-style models average ~4 characters per token for English text.
@@ -63,7 +61,7 @@ For a practical breakdown (per injected file, tools, skills, and system prompt s
Use these in chat:
-- `/status`**emojirich status card** with the session model, context usage,
+- `/status`**emoji-rich status card** with the session model, context usage,
last response input/output tokens, and **estimated cost** (API key only).
- `/usage off|tokens|full` → appends a **per-response usage footer** to every reply.
- Persists per session (stored as `responseUsage`).
@@ -149,7 +147,7 @@ per agent with `agents.list[].params.cacheRetention`.
For a full knob-by-knob guide, see [Prompt Caching](/reference/prompt-caching).
For Anthropic API pricing, cache reads are significantly cheaper than input
-tokens, while cache writes are billed at a higher multiplier. See Anthropics
+tokens, while cache writes are billed at a higher multiplier. See Anthropic's
prompt caching pricing for the latest rates and TTL multipliers:
[https://docs.anthropic.com/docs/build-with-claude/prompt-caching](https://docs.anthropic.com/docs/build-with-claude/prompt-caching)
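The per-agent retention knob this hunk references might look something like the following. This is an illustrative shape only; of the field names, just the `agents.list[].params.cacheRetention` path is taken from the docs above, and the agent id and retention value are invented:

```typescript
// Illustrative config shape; only agents.list[].params.cacheRetention
// comes from the docs above, the rest is assumed.
const config = {
  agents: {
    list: [
      {
        id: 'main', // hypothetical agent id
        params: {
          // Longer cache retention trades a higher write multiplier
          // for cheaper cache reads on subsequent turns.
          cacheRetention: '1h',
        },
      },
    ],
  },
};
```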