diff --git a/docs/providers/deepseek.md b/docs/providers/deepseek.md
index 2bb3365a3b5..9f1af7f8f4b 100644
--- a/docs/providers/deepseek.md
+++ b/docs/providers/deepseek.md
@@ -9,37 +9,55 @@ read_when:
[DeepSeek](https://www.deepseek.com) provides powerful AI models with an OpenAI-compatible API.
-- Provider: `deepseek`
-- Auth: `DEEPSEEK_API_KEY`
-- API: OpenAI-compatible
-- Base URL: `https://api.deepseek.com`
+| Property | Value |
+| -------- | -------------------------- |
+| Provider | `deepseek` |
+| Auth | `DEEPSEEK_API_KEY` |
+| API | OpenAI-compatible |
+| Base URL | `https://api.deepseek.com` |
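+
+Because the API is OpenAI-compatible, a raw request is just the standard chat
+completions shape pointed at the DeepSeek base URL (a sketch; the key and
+payload are placeholders):
+
+```bash
+curl https://api.deepseek.com/chat/completions \
+  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hello"}]}'
+```
+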
-## Quick start
+## Getting started
-Set the API key (recommended: store it for the Gateway):
+
+1. Create an API key at [platform.deepseek.com](https://platform.deepseek.com/api_keys).
+
+2. Run onboarding:
+
+   ```bash
+   openclaw onboard --auth-choice deepseek-api-key
+   ```
-```bash
-openclaw onboard --auth-choice deepseek-api-key
-```
+
+   This will prompt for your API key and set `deepseek/deepseek-chat` as the default model.
-This will prompt for your API key and set `deepseek/deepseek-chat` as the default model.
+
+3. Verify the provider is available:
+
+   ```bash
+   openclaw models list --provider deepseek
+   ```
+
-## Non-interactive example
+### Non-interactive example
+
+For scripted or headless installations, pass all flags directly:
-```bash
-openclaw onboard --non-interactive \
- --mode local \
- --auth-choice deepseek-api-key \
- --deepseek-api-key "$DEEPSEEK_API_KEY" \
- --skip-health \
- --accept-risk
-```
+
+```bash
+openclaw onboard --non-interactive \
+  --mode local \
+  --auth-choice deepseek-api-key \
+  --deepseek-api-key "$DEEPSEEK_API_KEY" \
+  --skip-health \
+  --accept-risk
+```
-## Environment note
+
+### Environment note
+
If the Gateway runs as a daemon (launchd/systemd), make sure `DEEPSEEK_API_KEY`
is available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
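+
+For example, a minimal `~/.openclaw/.env` entry (assuming the Gateway loads
+dotenv-style `KEY=value` lines from that file):
+
+```bash
+# ~/.openclaw/.env (read by the Gateway daemon at startup)
+DEEPSEEK_API_KEY=sk-...
+```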
+
## Built-in catalog
@@ -48,6 +66,30 @@ is available to that process (for example, in `~/.openclaw/.env` or via
| `deepseek/deepseek-chat` | DeepSeek Chat | text | 131,072 | 8,192 | Default model; DeepSeek V3.2 non-thinking surface |
| `deepseek/deepseek-reasoner` | DeepSeek Reasoner | text | 131,072 | 65,536 | Reasoning-enabled V3.2 surface |
+
Both bundled models currently advertise streaming usage compatibility in source.
+
-Get your API key at [platform.deepseek.com](https://platform.deepseek.com/api_keys).
+## Config example
+
+```json5
+{
+ env: { DEEPSEEK_API_KEY: "sk-..." },
+ agents: {
+ defaults: {
+ model: { primary: "deepseek/deepseek-chat" },
+ },
+ },
+}
+```
+
+## Related
+
+- [Model providers](/concepts/model-providers): choosing providers, model refs, and failover behavior.
+- Full config reference for agents, models, and providers.
diff --git a/docs/providers/nvidia.md b/docs/providers/nvidia.md
index dae9663b8ee..042d9610938 100644
--- a/docs/providers/nvidia.md
+++ b/docs/providers/nvidia.md
@@ -8,21 +8,35 @@ title: "NVIDIA"
# NVIDIA
-NVIDIA provides an OpenAI-compatible API at `https://integrate.api.nvidia.com/v1` for open models for free. Authenticate with an API key from [build.nvidia.com](https://build.nvidia.com/settings/api-keys).
+NVIDIA provides a free, OpenAI-compatible API for open models at
+`https://integrate.api.nvidia.com/v1`. Authenticate with an API key from
+[build.nvidia.com](https://build.nvidia.com/settings/api-keys).
-## CLI setup
+## Getting started
-Export the key once, then run onboarding and set an NVIDIA model:
+
+1. Create an API key at [build.nvidia.com](https://build.nvidia.com/settings/api-keys).
+
+2. Export the key and run onboarding:
+
+   ```bash
+   export NVIDIA_API_KEY="nvapi-..."
+   openclaw onboard --auth-choice skip
+   ```
+
+3. Set an NVIDIA model as the default:
+
+   ```bash
+   openclaw models set nvidia/nvidia/nemotron-3-super-120b-a12b
+   ```
+
-```bash
-export NVIDIA_API_KEY="nvapi-..."
-openclaw onboard --auth-choice skip
-openclaw models set nvidia/nvidia/nemotron-3-super-120b-a12b
-```
+
+If you pass `--token` instead of the env var, the value lands in shell history and
+`ps` output. Prefer the `NVIDIA_API_KEY` environment variable when possible.
+
-If you still pass `--token`, remember it lands in shell history and `ps` output; prefer the env var when possible.
-
-## Config snippet
+## Config example
```json5
{
@@ -43,7 +57,7 @@ If you still pass `--token`, remember it lands in shell history and `ps` output;
}
```
-## Model IDs
+## Built-in catalog
| Model ref | Name | Context | Max output |
| ------------------------------------------ | ---------------------------- | ------- | ---------- |
@@ -52,8 +66,38 @@ If you still pass `--token`, remember it lands in shell history and `ps` output;
| `nvidia/minimaxai/minimax-m2.5` | Minimax M2.5 | 196,608 | 8,192 |
| `nvidia/z-ai/glm5` | GLM 5 | 202,752 | 8,192 |
-## Notes
+## Advanced notes
-- OpenAI-compatible `/v1` endpoint; use an API key from [build.nvidia.com](https://build.nvidia.com/).
-- Provider auto-enables when `NVIDIA_API_KEY` is set.
-- The bundled catalog is static; costs default to `0` in source.
+
+- The provider auto-enables when the `NVIDIA_API_KEY` environment variable is set.
+  No explicit provider config is required beyond the key.
+- The bundled catalog is static. Costs default to `0` in source since NVIDIA
+  currently offers free API access for the listed models.
+- NVIDIA uses the standard `/v1` completions endpoint, so any OpenAI-compatible
+  tooling should work out of the box with the NVIDIA base URL, as sketched below.
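+
+For example, a minimal curl sketch against the chat completions route (the
+upstream model id here is the catalog ref with the `nvidia/` provider prefix
+dropped; treat it as illustrative):
+
+```bash
+curl https://integrate.api.nvidia.com/v1/chat/completions \
+  -H "Authorization: Bearer $NVIDIA_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "nvidia/nemotron-3-super-120b-a12b",
+    "messages": [{"role": "user", "content": "Hello"}]
+  }'
+```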
+
+NVIDIA models are currently free to use. Check
+[build.nvidia.com](https://build.nvidia.com/) for the latest availability and
+rate-limit details.
+
+## Related
+
+- [Model providers](/concepts/model-providers): choosing providers, model refs, and failover behavior.
+- Full config reference for agents, models, and providers.
diff --git a/docs/providers/opencode-go.md b/docs/providers/opencode-go.md
index 4552e916beb..fa2de5f7a22 100644
--- a/docs/providers/opencode-go.md
+++ b/docs/providers/opencode-go.md
@@ -12,21 +12,60 @@ OpenCode Go is the Go catalog within [OpenCode](/providers/opencode).
It uses the same `OPENCODE_API_KEY` as the Zen catalog, but keeps the runtime
provider id `opencode-go` so upstream per-model routing stays correct.
+| Property | Value |
+| ---------------- | ------------------------------- |
+| Runtime provider | `opencode-go` |
+| Auth | `OPENCODE_API_KEY` |
+| Parent setup | [OpenCode](/providers/opencode) |
+
## Supported models
-- `opencode-go/kimi-k2.5`
-- `opencode-go/glm-5`
-- `opencode-go/minimax-m2.5`
+| Model ref | Name |
+| -------------------------- | ------------ |
+| `opencode-go/kimi-k2.5` | Kimi K2.5 |
+| `opencode-go/glm-5` | GLM 5 |
+| `opencode-go/minimax-m2.5` | MiniMax M2.5 |
-## CLI setup
+## Getting started
-```bash
-openclaw onboard --auth-choice opencode-go
-# or non-interactive
-openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
-```
+
+### Interactive
+
+1. Run onboarding and enter your key when prompted:
+
+   ```bash
+   openclaw onboard --auth-choice opencode-go
+   ```
+
+2. Set the default model:
+
+   ```bash
+   openclaw config set agents.defaults.model.primary "opencode-go/kimi-k2.5"
+   ```
+
+3. Verify the catalog:
+
+   ```bash
+   openclaw models list --provider opencode-go
+   ```
-## Config snippet
+
+### Non-interactive
+
+1. Pass the API key directly:
+
+   ```bash
+   openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
+   ```
+
+2. Verify the catalog:
+
+   ```bash
+   openclaw models list --provider opencode-go
+   ```
+
+## Config example
```json5
{
@@ -35,11 +74,37 @@ openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
}
```
-## Routing behavior
+## Advanced notes
-OpenClaw handles per-model routing automatically when the model ref uses `opencode-go/...`.
+
+- OpenClaw handles per-model routing automatically when the model ref uses
+  `opencode-go/...`. No additional provider config is required.
-## Notes
+- Runtime refs stay explicit: `opencode/...` for Zen, `opencode-go/...` for Go.
+  This keeps upstream per-model routing correct across both catalogs (see the
+  sketch below).
-- Use [OpenCode](/providers/opencode) for the shared onboarding and catalog overview.
-- Runtime refs stay explicit: `opencode/...` for Zen, `opencode-go/...` for Go.
+- The same `OPENCODE_API_KEY` is used by both the Zen and Go catalogs. Entering
+  the key during setup stores credentials for both runtime providers.
+
+See [OpenCode](/providers/opencode) for the shared onboarding overview and the full
+Zen + Go catalog reference.
+
+## Related
+
+- [OpenCode](/providers/opencode): shared onboarding, catalog overview, and advanced notes.
+- [Model providers](/concepts/model-providers): choosing providers, model refs, and failover behavior.
diff --git a/docs/providers/openrouter.md b/docs/providers/openrouter.md
index e931edec486..a934497024a 100644
--- a/docs/providers/openrouter.md
+++ b/docs/providers/openrouter.md
@@ -11,13 +11,28 @@ title: "OpenRouter"
OpenRouter provides a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.
-## CLI setup
+## Getting started
-```bash
-openclaw onboard --auth-choice openrouter-api-key
-```
+
+1. Create an API key at [openrouter.ai/keys](https://openrouter.ai/keys).
+
+2. Run onboarding:
+
+   ```bash
+   openclaw onboard --auth-choice openrouter-api-key
+   ```
+
+3. Onboarding defaults to `openrouter/auto`. Pick a concrete model later:
-## Config snippet
+
+   ```bash
+   openclaw models set openrouter/<provider>/<model>
+   ```
+
+## Config example
```json5
{
@@ -30,30 +45,71 @@ openclaw onboard --auth-choice openrouter-api-key
}
```
-## Notes
+## Model references
-- Model refs are `openrouter//`.
-- Onboarding defaults to `openrouter/auto`. Switch to a concrete model later with
- `openclaw models set openrouter//`.
-- For more model/provider options, see [/concepts/model-providers](/concepts/model-providers).
-- OpenRouter uses a Bearer token with your API key under the hood.
-- On real OpenRouter requests (`https://openrouter.ai/api/v1`), OpenClaw also
- adds OpenRouter's documented app-attribution headers:
- `HTTP-Referer: https://openclaw.ai`, `X-OpenRouter-Title: OpenClaw`, and
- `X-OpenRouter-Categories: cli-agent`.
-- On verified OpenRouter routes, Anthropic model refs also keep the
- OpenRouter-specific Anthropic `cache_control` markers that OpenClaw uses for
- better prompt-cache reuse on system/developer prompt blocks.
-- If you repoint the OpenRouter provider at some other proxy/base URL, OpenClaw
- does not inject those OpenRouter-specific headers or Anthropic cache markers.
-- OpenRouter still runs through the proxy-style OpenAI-compatible path, so
- native OpenAI-only request shaping such as `serviceTier`, Responses `store`,
- OpenAI reasoning-compat payloads, and prompt-cache hints is not forwarded.
-- Gemini-backed OpenRouter refs stay on the proxy-Gemini path: OpenClaw keeps
- Gemini thought-signature sanitation there, but does not enable native Gemini
- replay validation or bootstrap rewrites.
-- On supported non-`auto` routes, OpenClaw maps the selected thinking level to
- OpenRouter proxy reasoning payloads. Unsupported model hints and
- `openrouter/auto` skip that reasoning injection.
-- If you pass OpenRouter provider routing under model params, OpenClaw forwards
- it as OpenRouter routing metadata before the shared stream wrappers run.
+
+Model refs follow the pattern `openrouter/<provider>/<model>`. For the full list of
+available providers and models, see [/concepts/model-providers](/concepts/model-providers).
+
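+For example, with a concrete (illustrative) ref:
+
+```bash
+openclaw models set openrouter/anthropic/claude-sonnet-4
+```
+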
+## Authentication and headers
+
+OpenRouter uses a Bearer token with your API key under the hood.
+
+On real OpenRouter requests (`https://openrouter.ai/api/v1`), OpenClaw also adds
+OpenRouter's documented app-attribution headers:
+
+| Header | Value |
+| ------------------------- | --------------------- |
+| `HTTP-Referer` | `https://openclaw.ai` |
+| `X-OpenRouter-Title` | `OpenClaw` |
+| `X-OpenRouter-Categories` | `cli-agent` |
+
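+For illustration, a raw request carrying those headers looks roughly like this
+(the `OPENROUTER_API_KEY` variable name and the model ref are placeholders;
+OpenClaw adds the attribution headers for you):
+
+```bash
+curl https://openrouter.ai/api/v1/chat/completions \
+  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
+  -H "HTTP-Referer: https://openclaw.ai" \
+  -H "X-OpenRouter-Title: OpenClaw" \
+  -H "X-OpenRouter-Categories: cli-agent" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "<provider>/<model>", "messages": [{"role": "user", "content": "Hello"}]}'
+```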
+
+If you repoint the OpenRouter provider at some other proxy or base URL, OpenClaw
+does **not** inject those OpenRouter-specific headers or Anthropic cache markers.
+
+
+## Advanced notes
+
+- On verified OpenRouter routes, Anthropic model refs keep the
+  OpenRouter-specific Anthropic `cache_control` markers that OpenClaw uses for
+  better prompt-cache reuse on system/developer prompt blocks.
+- On supported non-`auto` routes, OpenClaw maps the selected thinking level to
+  OpenRouter proxy reasoning payloads. Unsupported model hints and
+  `openrouter/auto` skip that reasoning injection.
+- OpenRouter still runs through the proxy-style OpenAI-compatible path, so
+  native OpenAI-only request shaping such as `serviceTier`, Responses `store`,
+  OpenAI reasoning-compat payloads, and prompt-cache hints is not forwarded.
+- Gemini-backed OpenRouter refs stay on the proxy-Gemini path: OpenClaw keeps
+  Gemini thought-signature sanitation there, but does not enable native Gemini
+  replay validation or bootstrap rewrites.
+- If you pass OpenRouter provider routing under model params, OpenClaw forwards
+  it as OpenRouter routing metadata before the shared stream wrappers run; a
+  config sketch follows.
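+
+A hedged sketch of that routing pass-through (the `params` block and the
+`provider` routing shape are illustrative, not a confirmed schema):
+
+```json5
+{
+  agents: {
+    defaults: {
+      model: {
+        primary: "openrouter/anthropic/claude-sonnet-4",
+        // hypothetical params block carrying OpenRouter provider routing
+        params: { provider: { order: ["anthropic"] } },
+      },
+    },
+  },
+}
+```
+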
+## Related
+
+- [Model providers](/concepts/model-providers): choosing providers, model refs, and failover behavior.
+- Full config reference for agents, models, and providers.