diff --git a/docs/providers/glm.md b/docs/providers/glm.md
index 49524b83053..68dca787dee 100644
--- a/docs/providers/glm.md
+++ b/docs/providers/glm.md
@@ -11,26 +11,42 @@ title: "GLM Models"
 GLM is a **model family** (not a company) available through the Z.AI platform.
 In OpenClaw, GLM models are accessed via the `zai` provider and model IDs like
 `zai/glm-5`.
 
-## CLI setup
+## Getting started
 
-```bash
-# Generic API-key setup with endpoint auto-detection
-openclaw onboard --auth-choice zai-api-key
+
+
+  Pick the onboarding choice that matches your Z.AI plan and region:
 
-# Coding Plan Global, recommended for Coding Plan users
-openclaw onboard --auth-choice zai-coding-global
+  | Auth choice         | Best for                                           |
+  | ------------------- | -------------------------------------------------- |
+  | `zai-api-key`       | Generic API-key setup with endpoint auto-detection |
+  | `zai-coding-global` | Coding Plan users (global)                         |
+  | `zai-coding-cn`     | Coding Plan users (China region)                   |
+  | `zai-global`        | General API (global)                               |
+  | `zai-cn`            | General API (China region)                         |
 
-# Coding Plan CN (China region), recommended for Coding Plan users
-openclaw onboard --auth-choice zai-coding-cn
+  ```bash
+  # Example: generic auto-detect
+  openclaw onboard --auth-choice zai-api-key
 
-# General API
-openclaw onboard --auth-choice zai-global
+  # Example: Coding Plan global
+  openclaw onboard --auth-choice zai-coding-global
+  ```
 
-# General API CN (China region)
-openclaw onboard --auth-choice zai-cn
-```
+
+
+  ```bash
+  openclaw config set agents.defaults.model.primary "zai/glm-5.1"
+  ```
+
+
+  ```bash
+  openclaw models list --provider zai
+  ```
+
+
 
-## Config snippet
+## Config example
 
 ```json5
 {
@@ -39,30 +55,56 @@ openclaw onboard --auth-choice zai-cn
 }
 ```
 
+
+  `zai-api-key` lets OpenClaw detect the matching Z.AI endpoint from the key
+  and apply the correct base URL automatically. Use the explicit regional
+  choices when you want to force a specific Coding Plan or general API surface.
+
 
-## Current bundled GLM models
+## Bundled GLM models
 
 OpenClaw currently seeds the bundled `zai` provider with these GLM refs:
 
-- `glm-5.1`
-- `glm-5`
-- `glm-5-turbo`
-- `glm-5v-turbo`
-- `glm-4.7`
-- `glm-4.7-flash`
-- `glm-4.7-flashx`
-- `glm-4.6`
-- `glm-4.6v`
-- `glm-4.5`
-- `glm-4.5-air`
-- `glm-4.5-flash`
-- `glm-4.5v`
+| GLM 5 / 4.5 family | GLM 4.7 / 4.6 family |
+| ------------------ | -------------------- |
+| `glm-5.1`          | `glm-4.7`            |
+| `glm-5`            | `glm-4.7-flash`      |
+| `glm-5-turbo`      | `glm-4.7-flashx`     |
+| `glm-5v-turbo`     | `glm-4.6`            |
+| `glm-4.5`          | `glm-4.6v`           |
+| `glm-4.5-air`      |                      |
+| `glm-4.5-flash`    |                      |
+| `glm-4.5v`         |                      |
 
-## Notes
+
+The default bundled model ref is `zai/glm-5.1`. GLM versions and availability
+can change; check Z.AI's docs for the latest.
+
 
-- GLM versions and availability can change; check Z.AI's docs for the latest.
-- Default bundled model ref is `zai/glm-5.1`.
-- For provider details, see [/providers/zai](/providers/zai).
+## Advanced notes
+
+
+
+  When you use the `zai-api-key` auth choice, OpenClaw inspects the key format
+  to determine the correct Z.AI base URL. Explicit regional choices
+  (`zai-coding-global`, `zai-coding-cn`, `zai-global`, `zai-cn`) override
+  auto-detection and pin the endpoint directly.
+
+
+
+  GLM models are served by the `zai` runtime provider. For full provider
+  configuration, regional endpoints, and additional capabilities, see
+  [Z.AI provider docs](/providers/zai).
+
+
+
+## Related
+
+
+
+    Full Z.AI provider configuration and regional endpoints.
+
+
+    Choosing providers, model refs, and failover behavior.
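The `Config example` hunk above shows only a fragment of the JSON5 config. For reference, a minimal sketch of the full shape, pinning the bundled default GLM ref via the `agents.defaults.model.primary` path that the `openclaw config set` step writes (overall shape assumed from the Vercel AI Gateway page later in this diff):

```json5
{
  agents: {
    defaults: {
      // Same ref the `openclaw config set` step in the diff writes.
      model: { primary: "zai/glm-5.1" },
    },
  },
}
```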
+
+
diff --git a/docs/providers/opencode.md b/docs/providers/opencode.md
index c2e941126c2..fe6a970438c 100644
--- a/docs/providers/opencode.md
+++ b/docs/providers/opencode.md
@@ -10,30 +10,78 @@ title: "OpenCode"
 OpenCode exposes two hosted catalogs in OpenClaw:
 
-- `opencode/...` for the **Zen** catalog
-- `opencode-go/...` for the **Go** catalog
+| Catalog | Prefix            | Runtime provider |
+| ------- | ----------------- | ---------------- |
+| **Zen** | `opencode/...`    | `opencode`       |
+| **Go**  | `opencode-go/...` | `opencode-go`    |
 
 Both catalogs use the same OpenCode API key. OpenClaw keeps the runtime provider
 ids split so upstream per-model routing stays correct, but onboarding and docs
 treat them as one OpenCode setup.
 
-## CLI setup
+## Getting started
 
-### Zen catalog
+
+
+  **Best for:** the curated OpenCode multi-model proxy (Claude, GPT, Gemini).
 
-```bash
-openclaw onboard --auth-choice opencode-zen
-openclaw onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
-```
+
+
+  ```bash
+  openclaw onboard --auth-choice opencode-zen
+  ```
 
-### Go catalog
+  Or pass the key directly:
 
-```bash
-openclaw onboard --auth-choice opencode-go
-openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
-```
+  ```bash
+  openclaw onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
+  ```
+
+
+  ```bash
+  openclaw config set agents.defaults.model.primary "opencode/claude-opus-4-6"
+  ```
+
+
+  ```bash
+  openclaw models list --provider opencode
+  ```
+
+
 
-## Config snippet
+
+
+
+  **Best for:** the OpenCode-hosted Kimi, GLM, and MiniMax lineup.
+
+
+
+  ```bash
+  openclaw onboard --auth-choice opencode-go
+  ```
+
+  Or pass the key directly:
+
+  ```bash
+  openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
+  ```
+
+
+  ```bash
+  openclaw config set agents.defaults.model.primary "opencode-go/kimi-k2.5"
+  ```
+
+
+  ```bash
+  openclaw models list --provider opencode-go
+  ```
+
+
+
+
+
+
+## Config example
 
 ```json5
 {
@@ -46,23 +94,58 @@ openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
 
 ### Zen
 
-- Runtime provider: `opencode`
-- Example models: `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gemini-3-pro`
-- Best when you want the curated OpenCode multi-model proxy
+| Property         | Value                                                                   |
+| ---------------- | ----------------------------------------------------------------------- |
+| Runtime provider | `opencode`                                                              |
+| Example models   | `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gemini-3-pro` |
 
 ### Go
 
-- Runtime provider: `opencode-go`
-- Example models: `opencode-go/kimi-k2.5`, `opencode-go/glm-5`, `opencode-go/minimax-m2.5`
-- Best when you want the OpenCode-hosted Kimi/GLM/MiniMax lineup
+| Property         | Value                                                                    |
+| ---------------- | ------------------------------------------------------------------------ |
+| Runtime provider | `opencode-go`                                                            |
+| Example models   | `opencode-go/kimi-k2.5`, `opencode-go/glm-5`, `opencode-go/minimax-m2.5` |
 
-## Notes
+## Advanced notes
 
-- `OPENCODE_ZEN_API_KEY` is also supported.
-- Entering one OpenCode key during setup stores credentials for both runtime providers.
-- You sign in to OpenCode, add billing details, and copy your API key.
-- Billing and catalog availability are managed from the OpenCode dashboard.
-- Gemini-backed OpenCode refs stay on the proxy-Gemini path, so OpenClaw keeps
-  Gemini thought-signature sanitation there without enabling native Gemini
-  replay validation or bootstrap rewrites.
-- Non-Gemini OpenCode refs keep the minimal OpenAI-compatible replay policy.
+
+
+  `OPENCODE_ZEN_API_KEY` is also supported as an alias for `OPENCODE_API_KEY`.
+
+
+
+  Entering one OpenCode key during setup stores credentials for both runtime
+  providers. You do not need to onboard each catalog separately.
+
+
+
+  You sign in to OpenCode, add billing details, and copy your API key. Billing
+  and catalog availability are managed from the OpenCode dashboard.
+
+
+
+  Gemini-backed OpenCode refs stay on the proxy-Gemini path, so OpenClaw keeps
+  Gemini thought-signature sanitation there without enabling native Gemini
+  replay validation or bootstrap rewrites.
+
+
+
+  Non-Gemini OpenCode refs keep the minimal OpenAI-compatible replay policy.
+
+
+
+Entering one OpenCode key during setup stores credentials for both the Zen and
+Go runtime providers, so you only need to onboard once.
+
+## Related
+
+
+
+    Choosing providers, model refs, and failover behavior.
+
+
+    Full config reference for agents, models, and providers.
+
+
diff --git a/docs/providers/perplexity-provider.md b/docs/providers/perplexity-provider.md
index c93475efdd3..acfcc077dee 100644
--- a/docs/providers/perplexity-provider.md
+++ b/docs/providers/perplexity-provider.md
@@ -16,30 +16,52 @@
 This page covers the Perplexity **provider** setup. For the Perplexity **tool**
 (how the agent uses it), see [Perplexity tool](/tools/perplexity-search).
 
-- Type: web search provider (not a model provider)
-- Auth: `PERPLEXITY_API_KEY` (direct) or `OPENROUTER_API_KEY` (via OpenRouter)
-- Config path: `plugins.entries.perplexity.config.webSearch.apiKey`
+| Property    | Value                                                                  |
+| ----------- | ---------------------------------------------------------------------- |
+| Type        | Web search provider (not a model provider)                             |
+| Auth        | `PERPLEXITY_API_KEY` (direct) or `OPENROUTER_API_KEY` (via OpenRouter) |
+| Config path | `plugins.entries.perplexity.config.webSearch.apiKey`                   |
 
-## Quick start
+## Getting started
 
-1. Set the API key:
+
+
+  Run the interactive web-search configuration flow:
 
-```bash
-openclaw configure --section web
-```
+  ```bash
+  openclaw configure --section web
+  ```
 
-Or set it directly:
+  Or set the key directly:
 
-```bash
-openclaw config set plugins.entries.perplexity.config.webSearch.apiKey "pplx-xxxxxxxxxxxx"
-```
+  ```bash
+  openclaw config set plugins.entries.perplexity.config.webSearch.apiKey "pplx-xxxxxxxxxxxx"
+  ```
 
-2. The agent will automatically use Perplexity for web searches when configured.
+
+
+  The agent will automatically use Perplexity for web searches once the key is
+  configured. No additional steps are required.
+
+
 
 ## Search modes
 
 The plugin auto-selects the transport based on API key prefix:
 
+
+
+  When your key starts with `pplx-`, OpenClaw uses the native Perplexity Search
+  API. This transport returns structured results and supports domain, language,
+  and date filters (see filtering options below).
+
+
+  When your key starts with `sk-or-`, OpenClaw routes through OpenRouter using
+  the Perplexity Sonar model. This transport returns AI-synthesized answers with
+  citations.
+
+
 | Key prefix | Transport                    | Features                                         |
 | ---------- | ---------------------------- | ------------------------------------------------ |
 | `pplx-`    | Native Perplexity Search API | Structured results, domain/language/date filters |
@@ -47,16 +69,58 @@ The plugin auto-selects the transport based on API key prefix:
 
 ## Native API filtering
 
-When using the native Perplexity API (`pplx-` key), searches support:
+
+Filtering options are only available when using the native Perplexity API
+(`pplx-` key). OpenRouter/Sonar searches do not support these parameters.
+
 
-- **Country**: 2-letter country code
-- **Language**: ISO 639-1 language code
-- **Date range**: day, week, month, year
-- **Domain filters**: allowlist/denylist (max 20 domains)
-- **Content budget**: `max_tokens`, `max_tokens_per_page`
+When using the native Perplexity API, searches support the following filters:
 
-## Environment note
+| Filter         | Description                            | Example                             |
+| -------------- | -------------------------------------- | ----------------------------------- |
+| Country        | 2-letter country code                  | `us`, `de`, `jp`                    |
+| Language       | ISO 639-1 language code                | `en`, `fr`, `zh`                    |
+| Date range     | Recency window                         | `day`, `week`, `month`, `year`      |
+| Domain filters | Allowlist or denylist (max 20 domains) | `example.com`                       |
+| Content budget | Token limits per response / per page   | `max_tokens`, `max_tokens_per_page` |
 
-If the Gateway runs as a daemon (launchd/systemd), make sure
-`PERPLEXITY_API_KEY` is available to that process (for example, in
-`~/.openclaw/.env` or via `env.shellEnv`).
+## Advanced notes
+
+
+
+  If the OpenClaw Gateway runs as a daemon (launchd/systemd), make sure
+  `PERPLEXITY_API_KEY` is available to that process.
+
+
+    A key set only in `~/.profile` will not be visible to a launchd/systemd
+    daemon unless that environment is explicitly imported. Set the key in
+    `~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can
+    read it.
+
+
+
+
+  If you prefer to route Perplexity searches through OpenRouter, set an
+  `OPENROUTER_API_KEY` (prefix `sk-or-`) instead of a native Perplexity key.
+  OpenClaw will detect the prefix and switch to the Sonar transport
+  automatically.
+
+
+    The OpenRouter transport is useful if you already have an OpenRouter account
+    and want consolidated billing across multiple providers.
+
+
+
+
+## Related
+
+
+
+    How the agent invokes Perplexity searches and interprets results.
+
+
+    Full configuration reference including plugin entries.
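The prefix-based transport selection described above can be sketched as a small shell helper. This is purely illustrative (the function name and fallback branch are hypothetical; OpenClaw's real selection logic lives inside the plugin), covering only the two documented prefixes:

```bash
# Hypothetical sketch of the key-prefix transport selection described above.
# Only the two prefixes documented in the diff are handled.
select_transport() {
  case "$1" in
    pplx-*)  echo "native" ;;     # native Perplexity Search API
    sk-or-*) echo "openrouter" ;; # Perplexity Sonar via OpenRouter
    *)       echo "unknown" ;;    # no documented transport for other keys
  esac
}

select_transport "pplx-xxxxxxxxxxxx"  # prints "native"
```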
+
+
diff --git a/docs/providers/runway.md b/docs/providers/runway.md
index 3061a497f7d..0cb7ad28039 100644
--- a/docs/providers/runway.md
+++ b/docs/providers/runway.md
@@ -11,25 +11,29 @@ read_when:
 OpenClaw ships a bundled `runway` provider for hosted video generation.
 
-- Provider id: `runway`
-- Auth: `RUNWAYML_API_SECRET` (canonical) or `RUNWAY_API_KEY`
-- API: Runway task-based video generation (`GET /v1/tasks/{id}` polling)
+| Property    | Value                                                             |
+| ----------- | ----------------------------------------------------------------- |
+| Provider id | `runway`                                                          |
+| Auth        | `RUNWAYML_API_SECRET` (canonical) or `RUNWAY_API_KEY`             |
+| API         | Runway task-based video generation (`GET /v1/tasks/{id}` polling) |
 
-## Quick start
+## Getting started
 
-1. Set the API key:
-
-```bash
-openclaw onboard --auth-choice runway-api-key
-```
-
-2. Set Runway as the default video provider:
-
-```bash
-openclaw config set agents.defaults.videoGenerationModel.primary "runway/gen4.5"
-```
-
-3. Ask the agent to generate a video. Runway will be used automatically.
+
+
+  ```bash
+  openclaw onboard --auth-choice runway-api-key
+  ```
+
+
+  ```bash
+  openclaw config set agents.defaults.videoGenerationModel.primary "runway/gen4.5"
+  ```
+
+
+  Ask the agent to generate a video. Runway will be used automatically.
+
+
 
 ## Supported modes
 
@@ -39,9 +43,14 @@ openclaw config set agents.defaults.videoGenerationModel.primary "runway/gen4.5"
 | Image-to-video | `gen4.5`     | 1 local or remote image |
 | Video-to-video | `gen4_aleph` | 1 local or remote video |
 
-- Local image and video references are supported via data URIs.
-- Video-to-video currently requires `runway/gen4_aleph` specifically.
-- Text-only runs currently expose `16:9` and `9:16` aspect ratios.
+
+Local image and video references are supported via data URIs. Text-only runs
+currently expose `16:9` and `9:16` aspect ratios.
+
+
+
+Video-to-video currently requires `runway/gen4_aleph` specifically.
+
 
 ## Configuration
 
@@ -57,7 +66,28 @@ openclaw config set agents.defaults.videoGenerationModel.primary "runway/gen4.5"
 }
 ```
 
+## Advanced notes
+
+
+
+  OpenClaw recognizes both `RUNWAYML_API_SECRET` (canonical) and
+  `RUNWAY_API_KEY`. Either variable will authenticate the Runway provider.
+
+
+
+  Runway uses a task-based API. After submitting a generation request, OpenClaw
+  polls `GET /v1/tasks/{id}` until the video is ready. No additional
+  configuration is needed for the polling behavior.
+
+
+
 ## Related
 
-- [Video Generation](/tools/video-generation) -- shared tool parameters, provider selection, and async behavior
-- [Configuration Reference](/gateway/configuration-reference#agent-defaults)
+
+
+    Shared tool parameters, provider selection, and async behavior.
+
+
+    Agent default settings including video generation model.
+
+
diff --git a/docs/providers/vercel-ai-gateway.md b/docs/providers/vercel-ai-gateway.md
index f76e2b51bb5..9b844735cee 100644
--- a/docs/providers/vercel-ai-gateway.md
+++ b/docs/providers/vercel-ai-gateway.md
@@ -8,36 +8,58 @@ read_when:
 # Vercel AI Gateway
 
-The [Vercel AI Gateway](https://vercel.com/ai-gateway) provides a unified API to access hundreds of models through a single endpoint.
+The [Vercel AI Gateway](https://vercel.com/ai-gateway) provides a unified API to
+access hundreds of models through a single endpoint.
 
-- Provider: `vercel-ai-gateway`
-- Auth: `AI_GATEWAY_API_KEY`
-- API: Anthropic Messages compatible
-- OpenClaw auto-discovers the Gateway `/v1/models` catalog, so `/models vercel-ai-gateway`
-  includes current model refs such as `vercel-ai-gateway/openai/gpt-5.4`.
+| Property      | Value                            |
+| ------------- | -------------------------------- |
+| Provider      | `vercel-ai-gateway`              |
+| Auth          | `AI_GATEWAY_API_KEY`             |
+| API           | Anthropic Messages compatible    |
+| Model catalog | Auto-discovered via `/v1/models` |
 
-## Quick start
+
+OpenClaw auto-discovers the Gateway `/v1/models` catalog, so
+`/models vercel-ai-gateway` includes current model refs such as
+`vercel-ai-gateway/openai/gpt-5.4`.
+
 
-1. Set the API key (recommended: store it for the Gateway):
+## Getting started
 
-```bash
-openclaw onboard --auth-choice ai-gateway-api-key
-```
+
+
+  Run onboarding and choose the AI Gateway auth option:
 
-2. Set a default model:
+  ```bash
+  openclaw onboard --auth-choice ai-gateway-api-key
+  ```
 
-```json5
-{
-  agents: {
-    defaults: {
-      model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.6" },
-    },
-  },
-}
-```
+
+
+  Add the model to your OpenClaw config:
+
+  ```json5
+  {
+    agents: {
+      defaults: {
+        model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.6" },
+      },
+    },
+  }
+  ```
+
+
+
+  ```bash
+  openclaw models list --provider vercel-ai-gateway
+  ```
+
+
 
 ## Non-interactive example
 
+For scripted or CI setups, pass all values on the command line:
+
 ```bash
 openclaw onboard --non-interactive \
   --mode local \
@@ -45,16 +67,53 @@ openclaw onboard --non-interactive \
   --ai-gateway-api-key "$AI_GATEWAY_API_KEY"
 ```
 
-## Environment note
-
-If the Gateway runs as a daemon (launchd/systemd), make sure `AI_GATEWAY_API_KEY`
-is available to that process (for example, in `~/.openclaw/.env` or via
-`env.shellEnv`).
 ## Model ID shorthand
 
 OpenClaw accepts Vercel Claude shorthand model refs and normalizes them at
 runtime:
 
-- `vercel-ai-gateway/claude-opus-4.6` -> `vercel-ai-gateway/anthropic/claude-opus-4.6`
-- `vercel-ai-gateway/opus-4.6` -> `vercel-ai-gateway/anthropic/claude-opus-4-6`
+| Shorthand input                     | Normalized model ref                          |
+| ----------------------------------- | --------------------------------------------- |
+| `vercel-ai-gateway/claude-opus-4.6` | `vercel-ai-gateway/anthropic/claude-opus-4.6` |
+| `vercel-ai-gateway/opus-4.6`        | `vercel-ai-gateway/anthropic/claude-opus-4-6` |
+
+
+You can use either the shorthand or the fully qualified model ref in your
+configuration. OpenClaw resolves the canonical form automatically.
+
+
+## Advanced notes
+
+
+
+  If the OpenClaw Gateway runs as a daemon (launchd/systemd), make sure
+  `AI_GATEWAY_API_KEY` is available to that process.
+
+
+    A key set only in `~/.profile` will not be visible to a launchd/systemd
+    daemon unless that environment is explicitly imported. Set the key in
+    `~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can
+    read it.
+
+
+
+
+  Vercel AI Gateway routes requests to the upstream provider based on the model
+  ref prefix. For example, `vercel-ai-gateway/anthropic/claude-opus-4.6` routes
+  through Anthropic, while `vercel-ai-gateway/openai/gpt-5.4` routes through
+  OpenAI. Your single `AI_GATEWAY_API_KEY` handles authentication for all
+  upstream providers.
+
+
+
+## Related
+
+
+
+    Choosing providers, model refs, and failover behavior.
+
+
+    General troubleshooting and FAQ.
+
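The shorthand normalization in the table above can be sketched as a small shell function. This is illustrative only (the function is hypothetical, not OpenClaw's actual resolver) and reproduces just the two mappings the table documents:

```bash
# Hypothetical sketch of the shorthand normalization shown above.
normalize_ref() {
  case "$1" in
    vercel-ai-gateway/claude-*)
      # claude-* shorthand: insert the anthropic/ provider segment
      echo "vercel-ai-gateway/anthropic/${1#vercel-ai-gateway/}" ;;
    vercel-ai-gateway/opus-*)
      # bare opus-* shorthand: add the claude- prefix and dash the version,
      # matching the second row of the table
      local ver="${1#vercel-ai-gateway/opus-}"
      echo "vercel-ai-gateway/anthropic/claude-opus-${ver//./-}" ;;
    *)
      echo "$1" ;; # already fully qualified: pass through unchanged
  esac
}

normalize_ref "vercel-ai-gateway/claude-opus-4.6"
# prints "vercel-ai-gateway/anthropic/claude-opus-4.6"
```

Fully qualified refs fall through the default branch untouched, which matches the note that either form is accepted in configuration.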