mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-12 04:00:44 +00:00
247 lines
8.6 KiB
Markdown
---
summary: "Use OpenRouter's unified API to access many models in OpenClaw"
read_when:
  - You want a single API key for many LLMs
  - You want to run models via OpenRouter in OpenClaw
  - You want to use OpenRouter for image generation
  - You want to use OpenRouter for video generation
title: "OpenRouter"
---

OpenRouter provides a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.

## Getting started

<Steps>
<Step title="Get your API key">
Create an API key at [openrouter.ai/keys](https://openrouter.ai/keys).
</Step>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice openrouter-api-key
```
</Step>
<Step title="(Optional) Switch to a specific model">
Onboarding defaults to `openrouter/auto`. Pick a concrete model later:

```bash
openclaw models set openrouter/<provider>/<model>
```
</Step>
</Steps>

## Config example

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      model: { primary: "openrouter/auto" },
    },
  },
}
```

## Model references

<Note>
Model refs follow the pattern `openrouter/<provider>/<model>`. For the full list of
available providers and models, see [/concepts/model-providers](/concepts/model-providers).
</Note>

Bundled fallback examples:

| Model ref                         | Notes                        |
| --------------------------------- | ---------------------------- |
| `openrouter/auto`                 | OpenRouter automatic routing |
| `openrouter/moonshotai/kimi-k2.6` | Kimi K2.6 via MoonshotAI     |
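
To pin one of these refs in config rather than via the CLI, the same `agents.defaults.model` shape from the config example applies; a minimal sketch using the bundled Kimi fallback (substitute any valid `openrouter/<provider>/<model>` ref):

```json5
{
  agents: {
    defaults: {
      model: { primary: "openrouter/moonshotai/kimi-k2.6" },
    },
  },
}
```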

## Image generation

OpenRouter can also back the `image_generate` tool. Use an OpenRouter image model under `agents.defaults.imageGenerationModel`:

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      imageGenerationModel: {
        primary: "openrouter/google/gemini-3.1-flash-image-preview",
        timeoutMs: 180_000,
      },
    },
  },
}
```

OpenClaw sends image requests to OpenRouter's chat completions image API with `modalities: ["image", "text"]`. Gemini image models receive supported `aspectRatio` and `resolution` hints through OpenRouter's `image_config`. Use `agents.defaults.imageGenerationModel.timeoutMs` for slower OpenRouter image models; the `image_generate` tool's per-call `timeoutMs` parameter still wins.
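
As a rough illustration of that request shape, a sketch of the body OpenClaw sends (only `modalities` and `image_config` are the fields named above; the prompt, hint values, and key spellings inside `image_config` are assumptions, not a captured wire payload):

```json5
{
  model: "google/gemini-3.1-flash-image-preview",
  messages: [{ role: "user", content: "A lighthouse at dusk" }], // hypothetical prompt
  modalities: ["image", "text"],
  // aspectRatio / resolution hints are forwarded for Gemini image models only;
  // values and key spelling here are illustrative
  image_config: { aspectRatio: "16:9", resolution: "1080p" },
}
```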

## Video generation

OpenRouter can also back the `video_generate` tool through its asynchronous `/videos` API. Use an OpenRouter video model under `agents.defaults.videoGenerationModel`:

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "openrouter/google/veo-3.1-fast",
      },
    },
  },
}
```

OpenClaw submits text-to-video and image-to-video jobs to OpenRouter, polls
the returned `polling_url`, and downloads the completed video from
OpenRouter's `unsigned_urls` or the documented job content endpoint.
Reference images are sent as first/last frame images by default; images
tagged with `reference_image` are sent as OpenRouter input references. The
bundled `google/veo-3.1-fast` default advertises the currently supported 4/6/8
second durations, `720P`/`1080P` resolutions, and `16:9`/`9:16` aspect
ratios. Video-to-video is not registered for OpenRouter because the upstream
video generation API currently accepts only text and image references.
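
For orientation, a hypothetical job-status payload showing the two fields OpenClaw reads (`polling_url` and `unsigned_urls`); the status field and URL shapes are illustrative, not captured from a live response:

```json5
{
  status: "completed",                                         // illustrative status field
  polling_url: "https://openrouter.ai/api/v1/videos/job-123",  // illustrative URL shape
  unsigned_urls: ["https://cdn.example.com/job-123/output.mp4"],
}
```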

## Text-to-speech

OpenRouter can also be used as a TTS provider through its OpenAI-compatible
`/audio/speech` endpoint.

```json5
{
  messages: {
    tts: {
      auto: "always",
      provider: "openrouter",
      providers: {
        openrouter: {
          model: "hexgrad/kokoro-82m",
          voice: "af_alloy",
          responseFormat: "mp3",
        },
      },
    },
  },
}
```

If `messages.tts.providers.openrouter.apiKey` is omitted, TTS reuses
`models.providers.openrouter.apiKey`, then `OPENROUTER_API_KEY`.
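
To give TTS its own key instead of relying on that fallback chain, set `apiKey` on the TTS provider entry directly (shown with a placeholder key):

```json5
{
  messages: {
    tts: {
      providers: {
        openrouter: {
          apiKey: "sk-or-...", // placeholder; skips the fallback lookup
        },
      },
    },
  },
}
```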

## Authentication and headers

OpenRouter uses a Bearer token with your API key under the hood.

On real OpenRouter requests (`https://openrouter.ai/api/v1`), OpenClaw also adds
OpenRouter's documented app-attribution headers:

| Header                    | Value                                                                                                  |
| ------------------------- | ------------------------------------------------------------------------------------------------------ |
| `HTTP-Referer`            | `https://openclaw.ai`                                                                                   |
| `X-OpenRouter-Title`      | `OpenClaw`                                                                                              |
| `X-OpenRouter-Categories` | `cli-agent,cloud-agent,programming-app,creative-writing,writing-assistant,general-chat,personal-agent` |

<Warning>
If you repoint the OpenRouter provider at some other proxy or base URL, OpenClaw
does **not** inject those OpenRouter-specific headers or Anthropic cache markers.
</Warning>

## Advanced configuration

<AccordionGroup>
<Accordion title="Response caching">
OpenRouter response caching is opt-in. Enable it per OpenRouter model with
model params:

```json5
{
  agents: {
    defaults: {
      models: {
        "openrouter/auto": {
          params: {
            responseCache: true,
            responseCacheTtlSeconds: 300,
          },
        },
      },
    },
  },
}
```

OpenClaw sends `X-OpenRouter-Cache: true` and, when configured,
`X-OpenRouter-Cache-TTL`. `responseCacheClear: true` forces a refresh for
the current request and stores the replacement response. Snake_case aliases
(`response_cache`, `response_cache_ttl_seconds`, and
`response_cache_clear`) are also accepted.

This is separate from provider prompt caching and from OpenRouter's
Anthropic `cache_control` markers. It is only applied on verified
`openrouter.ai` routes, not custom proxy base URLs.
</Accordion>
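The snake_case aliases are interchangeable with the camelCase params; this sketch is equivalent to the camelCase caching config above:

```json5
{
  agents: {
    defaults: {
      models: {
        "openrouter/auto": {
          params: {
            response_cache: true,
            response_cache_ttl_seconds: 300,
            response_cache_clear: false, // set true to force a refresh on the next request
          },
        },
      },
    },
  },
}
```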
<Accordion title="Anthropic cache markers">
On verified OpenRouter routes, Anthropic model refs keep the
OpenRouter-specific Anthropic `cache_control` markers that OpenClaw uses for
better prompt-cache reuse on system/developer prompt blocks.
</Accordion>
<Accordion title="Anthropic reasoning prefill">
On verified OpenRouter routes, Anthropic model refs with reasoning enabled
drop trailing assistant prefill turns before the request reaches OpenRouter,
matching Anthropic's requirement that reasoning conversations end with a user
turn.
</Accordion>
<Accordion title="Thinking / reasoning injection">
On supported non-`auto` routes, OpenClaw maps the selected thinking level to
OpenRouter proxy reasoning payloads. Unsupported model hints and
`openrouter/auto` skip that reasoning injection. Hunter Alpha also skips
proxy reasoning for stale configured model refs because OpenRouter could
return final answer text in reasoning fields for that retired route.
</Accordion>
<Accordion title="DeepSeek V4 reasoning replay">
On verified OpenRouter routes, `openrouter/deepseek/deepseek-v4-flash` and
`openrouter/deepseek/deepseek-v4-pro` fill missing `reasoning_content` on
replayed assistant turns so thinking/tool conversations keep DeepSeek V4's
required follow-up shape. OpenClaw sends OpenRouter-supported
`reasoning_effort` values for these routes; `xhigh` is the highest advertised
level, and stale `max` overrides are mapped to `xhigh`.
</Accordion>
<Accordion title="OpenAI-only request shaping">
OpenRouter still runs through the proxy-style OpenAI-compatible path, so
native OpenAI-only request shaping such as `serviceTier`, Responses `store`,
OpenAI reasoning-compat payloads, and prompt-cache hints is not forwarded.
</Accordion>
<Accordion title="Gemini-backed routes">
Gemini-backed OpenRouter refs stay on the proxy-Gemini path: OpenClaw keeps
Gemini thought-signature sanitation there, but does not enable native Gemini
replay validation or bootstrap rewrites.
</Accordion>
<Accordion title="Provider routing metadata">
If you pass OpenRouter provider routing under model params, OpenClaw forwards
it as OpenRouter routing metadata before the shared stream wrappers run.
</Accordion>
</AccordionGroup>
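
A sketch of what that might look like, assuming routing lives under the model's `params` (the inner `provider` object follows OpenRouter's provider-routing schema; the exact OpenClaw param key and placement here are assumptions, so check the configuration reference):

```json5
{
  agents: {
    defaults: {
      models: {
        "openrouter/moonshotai/kimi-k2.6": {
          params: {
            // Forwarded as OpenRouter routing metadata (assumed key name)
            provider: { order: ["moonshotai"], allow_fallbacks: false },
          },
        },
      },
    },
  },
}
```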

## Related

<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config reference for agents, models, and providers.
</Card>
</CardGroup>