---
summary: DeepSeek setup (auth + model selection)
title: DeepSeek
read_when:
  - You want to use DeepSeek with OpenClaw
  - You need the API key env var or CLI auth choice
---

DeepSeek provides powerful AI models with an OpenAI-compatible API.

| Property | Value |
| --- | --- |
| Provider | `deepseek` |
| Auth | `DEEPSEEK_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://api.deepseek.com` |

## Getting started

Create an API key at [platform.deepseek.com](https://platform.deepseek.com/api_keys), then run:

```bash
openclaw onboard --auth-choice deepseek-api-key
```
This will prompt for your API key and set `deepseek/deepseek-v4-flash` as the default model.
To list the available DeepSeek models:

```bash
openclaw models list --provider deepseek
```
To inspect the bundled static catalog without requiring a running Gateway,
use:

```bash
openclaw models list --all --provider deepseek
```
For scripted or headless installations, pass all flags directly:
```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice deepseek-api-key \
  --deepseek-api-key "$DEEPSEEK_API_KEY" \
  --skip-health \
  --accept-risk
```
If the Gateway runs as a daemon (launchd/systemd), make sure `DEEPSEEK_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).
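For example, a single line in `~/.openclaw/.env` is enough (the key value below is a placeholder):

```bash
DEEPSEEK_API_KEY=sk-...
```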

## Built-in catalog

| Model ref | Name | Input | Context | Max output | Notes |
| --- | --- | --- | --- | --- | --- |
| `deepseek/deepseek-v4-flash` | DeepSeek V4 Flash | text | 1,000,000 | 384,000 | Default model; V4 thinking-capable surface |
| `deepseek/deepseek-v4-pro` | DeepSeek V4 Pro | text | 1,000,000 | 384,000 | V4 thinking-capable surface |
| `deepseek/deepseek-chat` | DeepSeek Chat | text | 131,072 | 8,192 | DeepSeek V3.2 non-thinking surface |
| `deepseek/deepseek-reasoner` | DeepSeek Reasoner | text | 131,072 | 65,536 | Reasoning-enabled V3.2 surface |
V4 models support DeepSeek's `thinking` control. OpenClaw also replays DeepSeek `reasoning_content` on follow-up turns so thinking sessions with tool calls can continue.

## Thinking and tools

DeepSeek V4 thinking sessions have a stricter replay contract than most OpenAI-compatible providers: when a thinking-enabled assistant message includes tool calls, DeepSeek expects the prior assistant `reasoning_content` to be sent back on the follow-up request. OpenClaw handles this inside the DeepSeek plugin, so normal multi-turn tool use works with `deepseek/deepseek-v4-flash` and `deepseek/deepseek-v4-pro`.
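To make the replay contract concrete, here is a sketch of a follow-up request body in DeepSeek's OpenAI-compatible format. The model id, message content, tool name, and call id are illustrative, not taken from OpenClaw's actual wire traffic:

```json
{
  "model": "deepseek-v4-flash",
  "messages": [
    { "role": "user", "content": "What's the weather in Lisbon?" },
    {
      "role": "assistant",
      "content": "",
      "reasoning_content": "The user wants current weather, so I should call the weather tool.",
      "tool_calls": [
        {
          "id": "call_1",
          "type": "function",
          "function": { "name": "get_weather", "arguments": "{\"city\": \"Lisbon\"}" }
        }
      ]
    },
    { "role": "tool", "tool_call_id": "call_1", "content": "{\"temp_c\": 21}" }
  ]
}
```

The key point is the `reasoning_content` field on the replayed assistant tool-call message; without it, DeepSeek rejects the follow-up turn.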

If you switch an existing session from another OpenAI-compatible provider to a DeepSeek V4 model, older assistant tool-call turns may not have native DeepSeek `reasoning_content`. OpenClaw fills that missing field for DeepSeek V4 thinking requests so the provider can accept the replayed tool-call history without requiring `/new`.

When thinking is disabled in OpenClaw (including the UI **None** selection), OpenClaw sends DeepSeek `thinking: { type: "disabled" }` and strips replayed `reasoning_content` from the outgoing history. This keeps disabled-thinking sessions on the non-thinking DeepSeek path.
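A minimal sketch of the outgoing request shape in that case (the model id and message are illustrative; note the absence of any `reasoning_content` in the history):

```json
{
  "model": "deepseek-v4-flash",
  "thinking": { "type": "disabled" },
  "messages": [
    { "role": "user", "content": "Summarize this file." }
  ]
}
```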

Use deepseek/deepseek-v4-flash for the default fast path. Use deepseek/deepseek-v4-pro when you want the stronger V4 model and can accept higher cost or latency.

## Live testing

The direct live model suite includes DeepSeek V4 in the modern model set. To run only the DeepSeek V4 direct-model checks:

```bash
OPENCLAW_LIVE_PROVIDERS=deepseek \
OPENCLAW_LIVE_MODELS="deepseek/deepseek-v4-flash,deepseek/deepseek-v4-pro" \
pnpm test:live src/agents/models.profiles.live.test.ts
```

That live check verifies both V4 models can complete and that thinking/tool follow-up turns preserve the replay payload DeepSeek requires.

## Config example

```json5
{
  env: { DEEPSEEK_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "deepseek/deepseek-v4-flash" },
    },
  },
}
```
See also: choosing providers, model refs, and failover behavior, plus the full config reference for agents, models, and providers.