docs(providers): improve openrouter, nvidia, deepseek, opencode-go with Mintlify components

Vincent Koc
2026-04-12 11:37:09 +01:00
parent 7de76ac6e3
commit 571c4db5d4
4 changed files with 290 additions and 83 deletions


@@ -9,37 +9,55 @@ read_when:
[DeepSeek](https://www.deepseek.com) provides powerful AI models with an OpenAI-compatible API.
| Property | Value |
| -------- | -------------------------- |
| Provider | `deepseek` |
| Auth | `DEEPSEEK_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://api.deepseek.com` |
## Getting started
<Steps>
<Step title="Get your API key">
Create an API key at [platform.deepseek.com](https://platform.deepseek.com/api_keys).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice deepseek-api-key
```
This will prompt for your API key and set `deepseek/deepseek-chat` as the default model.
</Step>
<Step title="Verify models are available">
```bash
openclaw models list --provider deepseek
```
</Step>
</Steps>
<AccordionGroup>
<Accordion title="Non-interactive setup">
For scripted or headless installations, pass all flags directly:
```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice deepseek-api-key \
  --deepseek-api-key "$DEEPSEEK_API_KEY" \
  --skip-health \
  --accept-risk
```
</Accordion>
</AccordionGroup>
<Warning>
If the Gateway runs as a daemon (launchd/systemd), make sure `DEEPSEEK_API_KEY`
is available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
</Warning>
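For example, a minimal `~/.openclaw/.env` entry (a sketch; it assumes the Gateway reads this file at startup, as described above):
```bash
# ~/.openclaw/.env, loaded by the Gateway daemon at startup
DEEPSEEK_API_KEY=sk-...
```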
## Built-in catalog
@@ -48,6 +66,30 @@ is available to that process (for example, in `~/.openclaw/.env` or via
| `deepseek/deepseek-chat` | DeepSeek Chat | text | 131,072 | 8,192 | Default model; DeepSeek V3.2 non-thinking surface |
| `deepseek/deepseek-reasoner` | DeepSeek Reasoner | text | 131,072 | 65,536 | Reasoning-enabled V3.2 surface |
<Tip>
Both bundled models are currently marked in source as supporting streaming usage.
</Tip>
## Config example
```json5
{
env: { DEEPSEEK_API_KEY: "sk-..." },
agents: {
defaults: {
model: { primary: "deepseek/deepseek-chat" },
},
},
}
```
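To move the default to the reasoning surface later, one option is the `config set` form used elsewhere in these docs (a sketch; both model refs come from the catalog above):
```bash
# Point the default at the reasoning-enabled V3.2 surface
openclaw config set agents.defaults.model.primary "deepseek/deepseek-reasoner"
```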
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config reference for agents, models, and providers.
</Card>
</CardGroup>


@@ -8,21 +8,35 @@ title: "NVIDIA"
# NVIDIA
NVIDIA provides free access to open models through an OpenAI-compatible API at
`https://integrate.api.nvidia.com/v1`. Authenticate with an API key from
[build.nvidia.com](https://build.nvidia.com/settings/api-keys).
## Getting started
<Steps>
<Step title="Get your API key">
Create an API key at [build.nvidia.com](https://build.nvidia.com/settings/api-keys).
</Step>
<Step title="Export the key and run onboarding">
```bash
export NVIDIA_API_KEY="nvapi-..."
openclaw onboard --auth-choice skip
```
</Step>
<Step title="Set an NVIDIA model">
```bash
openclaw models set nvidia/nvidia/nemotron-3-super-120b-a12b
```
</Step>
</Steps>
<Warning>
If you pass `--token` instead of the env var, the value lands in shell history and
`ps` output. Prefer the `NVIDIA_API_KEY` environment variable when possible.
</Warning>
## Config example
```json5
{
@@ -43,7 +57,7 @@ If you still pass `--token`, remember it lands in shell history and `ps` output;
}
```
## Built-in catalog
| Model ref | Name | Context | Max output |
| ------------------------------------------ | ---------------------------- | ------- | ---------- |
@@ -52,8 +66,38 @@ If you still pass `--token`, remember it lands in shell history and `ps` output;
| `nvidia/minimaxai/minimax-m2.5` | Minimax M2.5 | 196,608 | 8,192 |
| `nvidia/z-ai/glm5` | GLM 5 | 202,752 | 8,192 |
## Advanced notes
<AccordionGroup>
<Accordion title="Auto-enable behavior">
The provider auto-enables when the `NVIDIA_API_KEY` environment variable is set.
No explicit provider config is required beyond the key.
</Accordion>
<Accordion title="Catalog and pricing">
The bundled catalog is static. Costs default to `0` in source since NVIDIA
currently offers free API access for the listed models.
</Accordion>
<Accordion title="OpenAI-compatible endpoint">
NVIDIA uses the standard `/v1` completions endpoint. Any OpenAI-compatible
tooling should work out of the box with the NVIDIA base URL.
</Accordion>
</AccordionGroup>
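Because the endpoint is OpenAI-compatible, a plain `curl` makes a quick smoke test. A minimal sketch, assuming the upstream model id is the catalog ref with the leading `nvidia/` provider prefix stripped:
```bash
# Hedged example: model id assumed from the catalog ref above
curl https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer $NVIDIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nvidia/nemotron-3-super-120b-a12b",
    "messages": [{"role": "user", "content": "Say hello in five words."}]
  }'
```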
<Tip>
NVIDIA models are currently free to use. Check
[build.nvidia.com](https://build.nvidia.com/) for the latest availability and
rate-limit details.
</Tip>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config reference for agents, models, and providers.
</Card>
</CardGroup>


@@ -12,21 +12,60 @@ OpenCode Go is the Go catalog within [OpenCode](/providers/opencode).
It uses the same `OPENCODE_API_KEY` as the Zen catalog, but keeps the runtime
provider id `opencode-go` so upstream per-model routing stays correct.
| Property | Value |
| ---------------- | ------------------------------- |
| Runtime provider | `opencode-go` |
| Auth | `OPENCODE_API_KEY` |
| Parent setup | [OpenCode](/providers/opencode) |
## Supported models
| Model ref | Name |
| -------------------------- | ------------ |
| `opencode-go/kimi-k2.5` | Kimi K2.5 |
| `opencode-go/glm-5` | GLM 5 |
| `opencode-go/minimax-m2.5` | MiniMax M2.5 |
## Getting started
<Tabs>
<Tab title="Interactive">
<Steps>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice opencode-go
```
</Step>
<Step title="Set a Go model as default">
```bash
openclaw config set agents.defaults.model.primary "opencode-go/kimi-k2.5"
```
</Step>
<Step title="Verify models are available">
```bash
openclaw models list --provider opencode-go
```
</Step>
</Steps>
</Tab>
<Tab title="Non-interactive">
<Steps>
<Step title="Pass the key directly">
```bash
openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
```
</Step>
<Step title="Verify models are available">
```bash
openclaw models list --provider opencode-go
```
</Step>
</Steps>
</Tab>
</Tabs>
## Config example
```json5
{
@@ -35,11 +74,37 @@ openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
}
```
## Advanced notes
<AccordionGroup>
<Accordion title="Routing behavior">
OpenClaw handles per-model routing automatically when the model ref uses
`opencode-go/...`. No additional provider config is required.
</Accordion>
<Accordion title="Runtime ref convention">
Runtime refs stay explicit: `opencode/...` for Zen, `opencode-go/...` for Go.
This keeps upstream per-model routing correct across both catalogs.
</Accordion>
<Accordion title="Shared credentials">
The same `OPENCODE_API_KEY` is used by both the Zen and Go catalogs. Entering
the key during setup stores credentials for both runtime providers.
</Accordion>
</AccordionGroup>
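Because both catalogs share one key, you can verify the two runtime providers after a single onboarding (using the `models list` command from the steps above):
```bash
# Both should list models after OPENCODE_API_KEY is entered once
openclaw models list --provider opencode
openclaw models list --provider opencode-go
```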
<Tip>
See [OpenCode](/providers/opencode) for the shared onboarding overview and the full
Zen + Go catalog reference.
</Tip>
## Related
<CardGroup cols={2}>
<Card title="OpenCode (parent)" href="/providers/opencode" icon="server">
Shared onboarding, catalog overview, and advanced notes.
</Card>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
</CardGroup>


@@ -11,13 +11,28 @@ title: "OpenRouter"
OpenRouter provides a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.
## Getting started
<Steps>
<Step title="Get your API key">
Create an API key at [openrouter.ai/keys](https://openrouter.ai/keys).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice openrouter-api-key
```
</Step>
<Step title="(Optional) Switch to a specific model">
Onboarding defaults to `openrouter/auto`. Pick a concrete model later:
```bash
openclaw models set openrouter/<provider>/<model>
```
</Step>
</Steps>
## Config example
```json5
{
@@ -30,30 +45,71 @@ openclaw onboard --auth-choice openrouter-api-key
}
```
## Model references
<Note>
Model refs follow the pattern `openrouter/<provider>/<model>`. For the full list of
available providers and models, see [/concepts/model-providers](/concepts/model-providers).
</Note>
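For example, to pin a concrete model instead of `openrouter/auto` (an illustrative ref only; substitute any provider/model pair available on OpenRouter):
```bash
# Illustrative ref; check openrouter.ai for current model ids
openclaw models set openrouter/openai/gpt-4o
```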
## Authentication and headers
OpenRouter uses a Bearer token with your API key under the hood.
On real OpenRouter requests (`https://openrouter.ai/api/v1`), OpenClaw also adds
OpenRouter's documented app-attribution headers:
| Header | Value |
| ------------------------- | --------------------- |
| `HTTP-Referer` | `https://openclaw.ai` |
| `X-OpenRouter-Title` | `OpenClaw` |
| `X-OpenRouter-Categories` | `cli-agent` |
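For reference, a raw request with those headers looks roughly like this (a sketch; it assumes your key is exported as `OPENROUTER_API_KEY`, and uses `openrouter/auto` as the router model id):
```bash
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -H "HTTP-Referer: https://openclaw.ai" \
  -H "X-OpenRouter-Title: OpenClaw" \
  -H "X-OpenRouter-Categories: cli-agent" \
  -d '{
    "model": "openrouter/auto",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```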
<Warning>
If you repoint the OpenRouter provider at some other proxy or base URL, OpenClaw
does **not** inject those OpenRouter-specific headers or Anthropic cache markers.
</Warning>
## Advanced notes
<AccordionGroup>
<Accordion title="Anthropic cache markers">
On verified OpenRouter routes, Anthropic model refs keep the
OpenRouter-specific Anthropic `cache_control` markers that OpenClaw uses for
better prompt-cache reuse on system/developer prompt blocks.
</Accordion>
<Accordion title="Thinking / reasoning injection">
On supported non-`auto` routes, OpenClaw maps the selected thinking level to
OpenRouter proxy reasoning payloads. Unsupported model hints and
`openrouter/auto` skip that reasoning injection.
</Accordion>
<Accordion title="OpenAI-only request shaping">
OpenRouter still runs through the proxy-style OpenAI-compatible path, so
native OpenAI-only request shaping such as `serviceTier`, Responses `store`,
OpenAI reasoning-compat payloads, and prompt-cache hints is not forwarded.
</Accordion>
<Accordion title="Gemini-backed routes">
Gemini-backed OpenRouter refs stay on the proxy-Gemini path: OpenClaw keeps
Gemini thought-signature sanitation there, but does not enable native Gemini
replay validation or bootstrap rewrites.
</Accordion>
<Accordion title="Provider routing metadata">
If you pass OpenRouter provider routing under model params, OpenClaw forwards
it as OpenRouter routing metadata before the shared stream wrappers run.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config reference for agents, models, and providers.
</Card>
</CardGroup>