diff --git a/docs/providers/inferrs.md b/docs/providers/inferrs.md
index 11817c3a65d..982fa9616c1 100644
--- a/docs/providers/inferrs.md
+++ b/docs/providers/inferrs.md
@@ -7,12 +7,19 @@ read_when:
title: "Inferrs"
---
-[inferrs](https://github.com/ericcurtin/inferrs) can serve local models behind an
-OpenAI-compatible `/v1` API. OpenClaw works with `inferrs` through the generic
-`openai-completions` path.
+[inferrs](https://github.com/ericcurtin/inferrs) can serve local models behind an OpenAI-compatible `/v1` API. OpenClaw works with `inferrs` through the generic `openai-completions` path.
-`inferrs` is currently best treated as a custom self-hosted OpenAI-compatible
-backend, not a dedicated OpenClaw provider plugin.
+| Property | Value |
+| ------------------ | ------------------------------------------------------------------ |
+| Provider id | `inferrs` (custom; configure under `models.providers.inferrs`) |
+| Plugin | none — `inferrs` is not a bundled OpenClaw provider plugin |
+| Auth env var | Optional. Any value works if your inferrs server has no auth |
+| API | OpenAI-compatible (`openai-completions`) |
+| Suggested base URL | `http://127.0.0.1:8080/v1` (or wherever your inferrs server lives) |
+
+`inferrs` is currently best treated as a custom self-hosted OpenAI-compatible backend, not a dedicated OpenClaw provider plugin. You configure it through `models.providers.inferrs` rather than an onboarding choice flag. If you need a true bundled plugin with auto-discovery, see [SGLang](/providers/sglang) or [vLLM](/providers/vllm).
+
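+As a minimal sketch, wiring `inferrs` in from the shell could look like this. The `baseUrl` and `apiKey` key names are assumptions based on the `models.providers.*` pattern used elsewhere in these docs; verify them against the OpenClaw config schema:
+
+```bash
+# Point the custom provider at a local inferrs server
+openclaw config set models.providers.inferrs.baseUrl "http://127.0.0.1:8080/v1"
+# Placeholder value: an inferrs server without auth accepts any key
+openclaw config set models.providers.inferrs.apiKey "inferrs-local"
+```
+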
## Getting started
diff --git a/docs/providers/mistral.md b/docs/providers/mistral.md
index de0f156b9f9..d73499009d3 100644
--- a/docs/providers/mistral.md
+++ b/docs/providers/mistral.md
@@ -7,13 +7,21 @@ read_when:
title: "Mistral"
---
-OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and
-audio transcription via Voxtral in media understanding.
-Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).
+OpenClaw includes a bundled Mistral plugin that registers four contracts: chat completions, media understanding (Voxtral batch transcription), realtime STT for Voice Call (Voxtral Realtime), and memory embeddings (`mistral-embed`).
-- Provider: `mistral`
-- Auth: `MISTRAL_API_KEY`
-- API: Mistral Chat Completions (`https://api.mistral.ai/v1`)
+| Property | Value |
+| ---------------- | ------------------------------------------- |
+| Provider id | `mistral` |
+| Plugin | bundled, `enabledByDefault: true` |
+| Auth env var | `MISTRAL_API_KEY` |
+| Onboarding flag | `--auth-choice mistral-api-key` |
+| Direct CLI flag  | `--mistral-api-key <key>`                   |
+| API | OpenAI-compatible (`openai-completions`) |
+| Base URL | `https://api.mistral.ai/v1` |
+| Default model | `mistral/mistral-large-latest` |
+| Embedding model | `mistral-embed` |
+| Voxtral batch | `voxtral-mini-latest` (audio transcription) |
+| Voxtral realtime | `voxtral-mini-transcribe-realtime-2602` |
## Getting started
@@ -157,10 +165,10 @@ matching `sampleRate` only if your upstream stream is already raw PCM.
- - Mistral auth uses `MISTRAL_API_KEY`.
- - Provider base URL defaults to `https://api.mistral.ai/v1`.
+ - Mistral auth uses `MISTRAL_API_KEY` (Bearer header).
+ - Provider base URL defaults to `https://api.mistral.ai/v1` and accepts the standard OpenAI-compatible chat-completions request shape.
- Onboarding default model is `mistral/mistral-large-latest`.
- - Z.AI uses Bearer auth with your API key.
+ - Override the base URL under `models.providers.mistral.baseUrl` only when Mistral explicitly publishes a regional endpoint you need.
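+
+As a sketch, such an override would be set from the shell like this (the URL below is a hypothetical placeholder, not a published Mistral endpoint):
+
+```bash
+openclaw config set models.providers.mistral.baseUrl "https://api.mistral.example/v1"
+```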
diff --git a/docs/providers/tencent.md b/docs/providers/tencent.md
index 6ef772af9f6..ae97e8eed93 100644
--- a/docs/providers/tencent.md
+++ b/docs/providers/tencent.md
@@ -6,20 +6,19 @@ read_when:
- You need the TokenHub API key setup
---
-# Tencent Cloud TokenHub
+Tencent Cloud ships as a bundled provider plugin in OpenClaw. It gives access to Tencent Hy3 preview through the TokenHub endpoint (`tencent-tokenhub`) using an OpenAI-compatible API.
-Tencent Cloud ships as a **bundled provider plugin** in OpenClaw. It gives access to Tencent Hy3 preview through the TokenHub endpoint (`tencent-tokenhub`).
-
-The provider uses an OpenAI-compatible API.
-
-| Property | Value |
-| ------------- | ------------------------------------------ |
-| Provider | `tencent-tokenhub` |
-| Default model | `tencent-tokenhub/hy3-preview` |
-| Auth | `TOKENHUB_API_KEY` |
-| API | OpenAI-compatible chat completions |
-| Base URL | `https://tokenhub.tencentmaas.com/v1` |
-| Global URL | `https://tokenhub-intl.tencentmaas.com/v1` |
+| Property | Value |
+| ---------------- | ----------------------------------------------------- |
+| Provider id | `tencent-tokenhub` |
+| Plugin | bundled, `enabledByDefault: true` |
+| Auth env var | `TOKENHUB_API_KEY` |
+| Onboarding flag | `--auth-choice tokenhub-api-key` |
+| Direct CLI flag  | `--tokenhub-api-key <key>`                            |
+| API | OpenAI-compatible (`openai-completions`) |
+| Default base URL | `https://tokenhub.tencentmaas.com/v1` |
+| Global base URL | `https://tokenhub-intl.tencentmaas.com/v1` (override) |
+| Default model | `tencent-tokenhub/hy3-preview` |
## Quick start
@@ -28,9 +27,24 @@ The provider uses an OpenAI-compatible API.
Create an API key in Tencent Cloud TokenHub. If you choose a limited access scope for the key, include **Hy3 preview** in the allowed models.
- ```bash
- openclaw onboard --auth-choice tokenhub-api-key
- ```
+
+```bash Onboarding
+openclaw onboard --auth-choice tokenhub-api-key
+```
+
+```bash Direct flag
+openclaw onboard --non-interactive \
+ --auth-choice tokenhub-api-key \
+ --tokenhub-api-key "$TOKENHUB_API_KEY"
+```
+
+```bash Env only
+export TOKENHUB_API_KEY=...
+```
+
```bash
@@ -59,38 +73,58 @@ openclaw onboard --non-interactive \
Hy3 preview is Tencent Hunyuan's large MoE language model for reasoning, long-context instruction following, code, and agent workflows. Tencent's OpenAI-compatible examples use `hy3-preview` as the model id and support standard chat-completions tool calling plus `reasoning_effort`.
-The model id is `hy3-preview`. Do not confuse it with Tencent's `HY-3D-*` models, which are 3D generation APIs and are not the OpenClaw chat model configured by this provider.
+The model id is `hy3-preview`. Do not confuse it with Tencent's `HY-3D-*` models, which are 3D generation APIs and are not the OpenClaw chat model configured by this provider.
-## Endpoint override
+## Tiered pricing
-OpenClaw defaults to Tencent Cloud's `https://tokenhub.tencentmaas.com/v1` endpoint. Tencent also documents an international TokenHub endpoint:
+The bundled catalog ships tiered cost metadata that scales with input window length, so cost estimates are populated without manual overrides.
-```bash
-openclaw config set models.providers.tencent-tokenhub.baseUrl "https://tokenhub-intl.tencentmaas.com/v1"
-```
+| Input tokens range | Input ($/M tokens) | Output ($/M tokens) | Cache read ($/M tokens) |
+| ------------------ | ------------------ | ------------------- | ----------------------- |
+| 0 - 16,000         | 0.176              | 0.587               | 0.059                   |
+| 16,000 - 32,000    | 0.235              | 0.939               | 0.088                   |
+| 32,000+            | 0.293              | 1.173               | 0.117                   |
-Only override the endpoint when your TokenHub account or region requires it.
+Rates are in USD per million tokens, as advertised by Tencent. For example, a 20,000-token prompt falls in the middle tier, so its input costs 20,000 / 1,000,000 × $0.235 ≈ $0.0047. Override pricing under `models.providers.tencent-tokenhub` only if your account is billed at different rates.
-## Notes
+## Advanced configuration
-- TokenHub model refs use `tencent-tokenhub/`.
-- The bundled catalog currently includes `hy3-preview`.
-- The plugin marks Hy3 preview as reasoning-capable and streaming-usage capable.
-- The plugin ships with tiered Hy3 pricing metadata, so cost estimates are populated without manual pricing overrides.
-- Override pricing, context, or endpoint metadata in `models.providers` only when needed.
+
+### Endpoint override
+
+OpenClaw defaults to Tencent Cloud's `https://tokenhub.tencentmaas.com/v1` endpoint. Tencent also documents an international TokenHub endpoint:
-## Environment note
+```bash
+openclaw config set models.providers.tencent-tokenhub.baseUrl "https://tokenhub-intl.tencentmaas.com/v1"
+```
-If the Gateway runs as a daemon (launchd/systemd), make sure `TOKENHUB_API_KEY`
-is available to that process (for example, in `~/.openclaw/.env` or via
-`env.shellEnv`).
+Only override the endpoint when your TokenHub account or region requires it.
-## Related documentation
+
-- [OpenClaw Configuration](/gateway/configuration)
-- [Model Providers](/concepts/model-providers)
-- [Tencent TokenHub product page](https://cloud.tencent.com/product/tokenhub)
-- [Tencent TokenHub text generation](https://cloud.tencent.com/document/product/1823/130079)
-- [Tencent TokenHub Cline setup for Hy3 preview](https://cloud.tencent.com/document/product/1823/130932)
-- [Tencent Hy3 preview model card](https://huggingface.co/tencent/Hy3-preview)
+
+### Environment
+
+If the Gateway runs as a managed service (launchd, systemd, Docker), `TOKENHUB_API_KEY` must be visible to that process; keys exported only in `~/.profile` are not. Set the key in `~/.openclaw/.env` or via `env.shellEnv` so managed gateway processes can read it persistently.
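+
+A minimal `~/.openclaw/.env` sketch (the key value is a placeholder):
+
+```bash
+# ~/.openclaw/.env (read by the managed Gateway process)
+TOKENHUB_API_KEY=your-tokenhub-key
+```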
+
+## Related
+
+- [Model Providers](/concepts/model-providers): choosing providers, model refs, and failover behavior.
+- [OpenClaw Configuration](/gateway/configuration): full config schema, including provider settings.
+- [Tencent TokenHub product page](https://cloud.tencent.com/product/tokenhub)
+- [Tencent Hy3 preview model card](https://huggingface.co/tencent/Hy3-preview): Hy3 preview details and benchmarks.
+