mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 12:00:44 +00:00
docs: audit and fix 5 pages (typography hygiene + dup H1)
@@ -64,7 +64,7 @@ Authoritative advertised **discovery** inventory lives in
## Current pipeline
- `pnpm protocol:gen`
-- writes JSON Schema (draft‑07) to `dist/protocol.schema.json`
+- writes JSON Schema (draft-07) to `dist/protocol.schema.json`
- `pnpm protocol:gen:swift`
- generates Swift gateway models
- `pnpm protocol:check`
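As a quick sketch of what a check step can look for, the generated schema can be spot-checked from the shell; the file body below is a stand-in written for illustration, not the real generator output:

```shell
# Stand-in for the file that `pnpm protocol:gen` would write (illustrative only)
mkdir -p dist
cat > dist/protocol.schema.json <<'EOF'
{"$schema": "http://json-schema.org/draft-07/schema#", "title": "Protocol"}
EOF

# Spot-check that the schema declares draft-07; prints the match count
grep -c 'draft-07' dist/protocol.schema.json
```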
@@ -20,12 +20,12 @@ title: "Usage tracking"
## Where it shows up
-- `/status` in chats: emoji‑rich status card with session tokens + estimated cost (API key only). Provider usage shows for the **current model provider** when available as a normalized `X% left` window.
+- `/status` in chats: emoji-rich status card with session tokens + estimated cost (API key only). Provider usage shows for the **current model provider** when available as a normalized `X% left` window.
- `/usage off|tokens|full` in chats: per-response usage footer (OAuth shows tokens only).
- `/usage cost` in chats: local cost summary aggregated from OpenClaw session logs.
- CLI: `openclaw status --usage` prints a full per-provider breakdown.
- CLI: `openclaw channels list` prints the same usage snapshot alongside provider config (use `--no-usage` to skip).
-- macOS menu bar: “Usage” section under Context (only if available).
+- macOS menu bar: "Usage" section under Context (only if available).
## Providers + credentials
@@ -14,7 +14,7 @@ title: "Hetzner"
Run a persistent OpenClaw Gateway on a Hetzner VPS using Docker, with durable state, baked-in binaries, and safe restart behavior.
-If you want “OpenClaw 24/7 for ~$5”, this is the simplest reliable setup.
+If you want "OpenClaw 24/7 for ~$5", this is the simplest reliable setup.
Hetzner pricing changes; pick the smallest Debian/Ubuntu VPS and scale up if you hit OOMs.
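A minimal sketch of how "durable state" and "safe restart behavior" map onto Docker flags; the image name, volume path, and container name here are illustrative assumptions, not documented values, so the command is shown as a string for review rather than executed:

```shell
# Sketch only: `-v` gives the gateway a durable state directory on the host,
# `--restart unless-stopped` brings it back after reboots or crashes.
# Image name, paths, and names are placeholders.
cmd='docker run -d --name openclaw-gateway --restart unless-stopped -v /srv/openclaw:/data openclaw/gateway:latest'
echo "$cmd"
```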
Security model reminder:
@@ -8,7 +8,7 @@ title: "Tests"
- Full testing kit (suites, live, Docker): [Testing](/help/testing)
- Update and plugin package validation: [Testing updates and plugins](/help/testing-updates-plugins)
-- `pnpm test:force`: Kills any lingering gateway process holding the default control port, then runs the full Vitest suite with an isolated gateway port so server tests don’t collide with a running instance. Use this when a prior gateway run left port 18789 occupied.
+- `pnpm test:force`: Kills any lingering gateway process holding the default control port, then runs the full Vitest suite with an isolated gateway port so server tests don't collide with a running instance. Use this when a prior gateway run left port 18789 occupied.
- `pnpm test:coverage`: Runs the unit suite with V8 coverage (via `vitest.unit.config.ts`). This is a default-unit-lane coverage gate, not whole-repo all-file coverage. Thresholds are 70% lines/functions/statements and 55% branches. Because `coverage.all` is false and the default lane scopes coverage includes to non-fast unit tests with sibling source files, the gate measures source owned by this lane instead of every transitive import it happens to load.
- `pnpm test:coverage:changed`: Runs unit coverage only for files changed since `origin/main`.
- `pnpm test:changed`: cheap, smart run of only the tests affected by changes. It runs precise targets from direct test edits, sibling `*.test.ts` files, explicit source mappings, and the local import graph. Broad/config/package changes are skipped unless they map to precise tests.
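The port-clearing step behind `pnpm test:force` can be sketched in plain shell; this is a rough approximation of the idea, not the script's actual mechanism:

```shell
PORT=18789
# Find any PID still listening on the default gateway control port.
# May be empty; `|| true` keeps going when nothing is found (or lsof is absent).
PID=$(lsof -ti tcp:"$PORT" 2>/dev/null || true)
if [ -n "$PID" ]; then
  kill "$PID"
fi
echo "port $PORT clear"
# Then run the suite, e.g.:
# pnpm vitest run
```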
@@ -72,7 +72,7 @@ Usage:
- `source ~/.profile && pnpm tsx scripts/bench-model.ts --runs 10`
- Optional env: `MINIMAX_API_KEY`, `MINIMAX_BASE_URL`, `MINIMAX_MODEL`, `ANTHROPIC_API_KEY`
-- Default prompt: “Reply with a single word: ok. No punctuation or extra text.”
+- Default prompt: "Reply with a single word: ok. No punctuation or extra text."
Last run (2025-12-31, 20 runs):
@@ -6,8 +6,6 @@ read_when:
title: "Token use and costs"
---
# Token use & costs
OpenClaw tracks **tokens**, not characters. Tokens are model-specific, but most
OpenAI-style models average ~4 characters per token for English text.
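The ~4-characters-per-token heuristic makes quick envelope estimates easy. A throwaway shell sketch (heuristic only; real tokenizers vary by model):

```shell
text="Reply with a single word: ok."
chars=${#text}
# Rough heuristic: ~4 characters per token for English text (ceiling division)
tokens=$(( (chars + 3) / 4 ))
echo "$chars chars ~ $tokens tokens"
# → 29 chars ~ 8 tokens
```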
@@ -63,7 +61,7 @@ For a practical breakdown (per injected file, tools, skills, and system prompt s
Use these in chat:
-- `/status` → **emoji‑rich status card** with the session model, context usage,
+- `/status` → **emoji-rich status card** with the session model, context usage,
last response input/output tokens, and **estimated cost** (API key only).
- `/usage off|tokens|full` → appends a **per-response usage footer** to every reply.
- Persists per session (stored as `responseUsage`).
@@ -149,7 +147,7 @@ per agent with `agents.list[].params.cacheRetention`.
For a full knob-by-knob guide, see [Prompt Caching](/reference/prompt-caching).
For Anthropic API pricing, cache reads are significantly cheaper than input
-tokens, while cache writes are billed at a higher multiplier. See Anthropic’s
+tokens, while cache writes are billed at a higher multiplier. See Anthropic's
prompt caching pricing for the latest rates and TTL multipliers:
[https://docs.anthropic.com/docs/build-with-claude/prompt-caching](https://docs.anthropic.com/docs/build-with-claude/prompt-caching)
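As a worked example of how such multipliers compound, with placeholder numbers rather than quoted rates (take real base prices and multipliers from the pricing page linked above):

```shell
# Placeholder arithmetic, not quoted rates: base input $3.00/MTok,
# cache write billed at 1.25x base, cache read at 0.10x base.
awk 'BEGIN {
  base = 3.00
  printf "write: $%.2f/MTok  read: $%.2f/MTok\n", base * 1.25, base * 0.10
}'
# → write: $3.75/MTok  read: $0.30/MTok
```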