fix: simplify ollama onboarding (#67005)
* feat(ollama): split interactive cloud and local setup
* test(ollama): cover cloud onboarding flow
* docs(ollama): simplify provider setup docs
* docs(onboarding): update ollama wizard copy
* fix(ollama): restore web search auth helper
* fix(ollama): harden setup auth and ssrf handling
* fix(ollama): address review regressions
* fix(ollama): scope ssrf hardening to ollama
* feat(ollama): add hybrid onboarding mode
* fix(ollama): tighten cloud credential setup
* refactor(ollama): distill host-backed setup modes
* fix(ollama): preserve cloud api key in config
* fix: simplify ollama onboarding (#67005)
@@ -8,7 +8,7 @@ title: "Ollama"
 # Ollama
 
-Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. OpenClaw integrates with Ollama's native API (`/api/chat`), supports streaming and tool calling, and can auto-discover local Ollama models when you opt in with `OLLAMA_API_KEY` (or an auth profile) and do not define an explicit `models.providers.ollama` entry.
+OpenClaw integrates with Ollama's native API (`/api/chat`) for hosted cloud models and local/self-hosted Ollama servers. You can use Ollama in three modes: `Cloud + Local` through a reachable Ollama host, `Cloud only` against `https://ollama.com`, or `Local only` against a reachable Ollama host.
 
 <Warning>
 **Remote Ollama users**: Do not use the `/v1` OpenAI-compatible URL (`http://host:11434/v1`) with OpenClaw. This breaks tool calling and models may output raw tool JSON as plain text. Use the native Ollama API URL instead: `baseUrl: "http://host:11434"` (no `/v1`).
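For reference, a minimal sketch of what the warning above means in practice. The host name and model are placeholders, and `openclaw config set` is the same command the manual setup below uses:

```bash
# Point OpenClaw at the native Ollama API -- note there is no /v1 suffix.
openclaw config set models.providers.ollama.baseUrl "http://host:11434"

# Sanity-check the native chat endpoint directly (assumes llama3.3 is pulled):
curl -s http://host:11434/api/chat \
  -d '{"model": "llama3.3", "messages": [{"role": "user", "content": "hi"}], "stream": false}'
```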
@@ -20,7 +20,7 @@ Choose your preferred setup method and mode.
 <Tabs>
 <Tab title="Onboarding (recommended)">
-**Best for:** fastest path to a working Ollama setup with automatic model discovery.
+**Best for:** fastest path to a working Ollama cloud or local setup.
 
 <Steps>
 <Step title="Run onboarding">
@@ -31,13 +31,12 @@ Choose your preferred setup method and mode.
 Select **Ollama** from the provider list.
 </Step>
 <Step title="Choose your mode">
-- **Cloud + Local** — cloud-hosted models and local models together
-- **Local** — local models only
-
-If you choose **Cloud + Local** and are not signed in to ollama.com, onboarding opens a browser sign-in flow.
+- **Cloud + Local** — local Ollama host plus cloud models routed through that host
+- **Cloud only** — hosted Ollama models via `https://ollama.com`
+- **Local only** — local models only
 </Step>
 <Step title="Select a model">
-Onboarding discovers available models and suggests defaults. It auto-pulls the selected model if it is not available locally.
+`Cloud only` prompts for `OLLAMA_API_KEY` and suggests hosted cloud defaults. `Cloud + Local` and `Local only` ask for an Ollama base URL, discover available models, and auto-pull the selected local model if it is not available yet. `Cloud + Local` also checks whether that Ollama host is signed in for cloud access.
 </Step>
 <Step title="Verify the model is available">
 ```bash
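# Illustrative verification (a sketch, assuming a default local host):
ollama list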
@@ -67,13 +66,15 @@ Choose your preferred setup method and mode.
 </Tab>
 
 <Tab title="Manual setup">
-**Best for:** full control over installation, model pulls, and config.
+**Best for:** full control over cloud or local setup.
 
 <Steps>
-<Step title="Install Ollama">
-Download from [ollama.com/download](https://ollama.com/download).
+<Step title="Choose cloud or local">
+- **Cloud + Local**: install Ollama, sign in with `ollama signin`, and route cloud requests through that host
+- **Cloud only**: use `https://ollama.com` with an `OLLAMA_API_KEY`
+- **Local only**: install Ollama from [ollama.com/download](https://ollama.com/download)
 </Step>
-<Step title="Pull a local model">
+<Step title="Pull a local model (local only)">
 ```bash
 ollama pull gemma4
 # or
@@ -82,22 +83,18 @@ Choose your preferred setup method and mode.
 ollama pull llama3.3
 ```
 </Step>
-<Step title="Sign in for cloud models (optional)">
-If you want cloud models too:
-
-```bash
-ollama signin
-```
-</Step>
 <Step title="Enable Ollama for OpenClaw">
-Set any value for the API key (Ollama does not require a real key):
+For `Cloud only`, use your real `OLLAMA_API_KEY`. For host-backed setups, any placeholder value works:
 
 ```bash
-# Set environment variable
+# Cloud
+export OLLAMA_API_KEY="your-ollama-api-key"
+
+# Local-only
 export OLLAMA_API_KEY="ollama-local"
 
 # Or configure in your config file
-openclaw config set models.providers.ollama.apiKey "ollama-local"
+openclaw config set models.providers.ollama.apiKey "OLLAMA_API_KEY"
 ```
 </Step>
 <Step title="Inspect and set your model">
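Once the key is set, you can make a model the default for agents. A hedged sketch: the `ollama/` model-id prefix and these model names are assumptions, modeled on the `openai/gpt-5.4` pattern used elsewhere in these docs:

```bash
# Local default model (assumed "provider/model" id format):
openclaw config set agents.defaults.model "ollama/gemma4"

# Hosted cloud default, once cloud access is configured:
openclaw config set agents.defaults.model "ollama/kimi-k2.5:cloud"
```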
@@ -127,18 +124,23 @@ Choose your preferred setup method and mode.
 <Tabs>
 <Tab title="Cloud + Local">
-Cloud models let you run cloud-hosted models alongside your local models. Examples include `kimi-k2.5:cloud`, `minimax-m2.7:cloud`, and `glm-5.1:cloud` -- these do **not** require a local `ollama pull`.
+`Cloud + Local` uses a reachable Ollama host as the control point for both local and cloud models. This is Ollama's preferred hybrid flow.
 
-Select **Cloud + Local** mode during setup. The wizard checks whether you are signed in and opens a browser sign-in flow when needed. If authentication cannot be verified, the wizard falls back to local model defaults.
+Use **Cloud + Local** during setup. OpenClaw prompts for the Ollama base URL, discovers local models from that host, and checks whether the host is signed in for cloud access with `ollama signin`. When the host is signed in, OpenClaw also suggests hosted cloud defaults such as `kimi-k2.5:cloud`, `minimax-m2.7:cloud`, and `glm-5.1:cloud`.
 
-You can also sign in directly at [ollama.com/signin](https://ollama.com/signin).
-
-OpenClaw currently suggests these cloud defaults: `kimi-k2.5:cloud`, `minimax-m2.7:cloud`, `glm-5.1:cloud`.
+If the host is not signed in yet, OpenClaw keeps the setup local-only until you run `ollama signin`.
 </Tab>
 
+<Tab title="Cloud only">
+`Cloud only` runs against Ollama's hosted API at `https://ollama.com`.
+
+Use **Cloud only** during setup. OpenClaw prompts for `OLLAMA_API_KEY`, sets `baseUrl: "https://ollama.com"`, and seeds the hosted cloud model list. This path does **not** require a local Ollama server or `ollama signin`.
+</Tab>
+
 <Tab title="Local only">
-In local-only mode, OpenClaw discovers models from the local Ollama instance. No cloud sign-in is needed.
+In local-only mode, OpenClaw discovers models from the configured Ollama instance. This path is for local or self-hosted Ollama servers.
 
 OpenClaw currently suggests `gemma4` as the local default.
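For the hybrid flow, sign-in happens on the Ollama host itself; both commands below are the standard Ollama CLI commands referenced above:

```bash
# On the machine running Ollama: link this host to your ollama.com account
ollama signin

# Then confirm the host responds and see which models it can serve locally
ollama list
```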
@@ -182,7 +184,7 @@ If you set `models.providers.ollama` explicitly, auto-discovery is skipped and y
 <Tabs>
 <Tab title="Basic (implicit discovery)">
-The simplest way to enable Ollama is via environment variable:
+The simplest local-only enablement path is via environment variable:
 
 ```bash
 export OLLAMA_API_KEY="ollama-local"
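# Implicit discovery then enumerates models from the default local host; a
# quick reachability check (a sketch, assuming the default port 11434):
curl -s http://127.0.0.1:11434/api/tags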
@@ -195,25 +197,25 @@ If you set `models.providers.ollama` explicitly, auto-discovery is skipped and y
 </Tab>
 
 <Tab title="Explicit (manual models)">
-Use explicit config when Ollama runs on another host/port, you want to force specific context windows or model lists, or you want fully manual model definitions.
+Use explicit config when you want hosted cloud setup, Ollama runs on another host/port, you want to force specific context windows or model lists, or you want fully manual model definitions.
 
 ```json5
 {
   models: {
     providers: {
       ollama: {
-        baseUrl: "http://ollama-host:11434",
-        apiKey: "ollama-local",
+        baseUrl: "https://ollama.com",
+        apiKey: "OLLAMA_API_KEY",
         api: "ollama",
         models: [
           {
-            id: "gpt-oss:20b",
-            name: "GPT-OSS 20B",
+            id: "kimi-k2.5:cloud",
+            name: "kimi-k2.5:cloud",
             reasoning: false,
-            input: ["text"],
+            input: ["text", "image"],
             cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
-            contextWindow: 8192,
-            maxTokens: 8192 * 10
+            contextWindow: 128000,
+            maxTokens: 8192
           }
         ]
       }
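The same cloud wiring can be applied from the CLI instead of editing the config file; the values mirror the config shown above:

```bash
export OLLAMA_API_KEY="your-ollama-api-key"
openclaw config set models.providers.ollama.baseUrl "https://ollama.com"
openclaw config set models.providers.ollama.apiKey "OLLAMA_API_KEY"
```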
@@ -40,7 +40,7 @@ For a high-level overview, see [Onboarding (CLI)](/start/wizard).
 - Sets `agents.defaults.model` to `openai/gpt-5.4` when model is unset, `openai/*`, or `openai-codex/*`.
 - **xAI (Grok) API key**: prompts for `XAI_API_KEY` and configures xAI as a model provider.
 - **OpenCode**: prompts for `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`, get it at https://opencode.ai/auth) and lets you pick the Zen or Go catalog.
-- **Ollama**: prompts for the Ollama base URL, offers **Cloud + Local** or **Local** mode, discovers available models, and auto-pulls the selected local model when needed.
+- **Ollama**: offers **Cloud + Local**, **Cloud only**, or **Local only** first. `Cloud only` prompts for `OLLAMA_API_KEY` and uses `https://ollama.com`; the host-backed modes prompt for the Ollama base URL, discover available models, and auto-pull the selected local model when needed; `Cloud + Local` also checks whether that Ollama host is signed in for cloud access.
 - More detail: [Ollama](/providers/ollama)
 - **API key**: stores the key for you.
 - **Vercel AI Gateway (multi-model proxy)**: prompts for `AI_GATEWAY_API_KEY`.
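In rough terms, each wizard mode reduces to a base URL plus a key. A sketch of the local-only outcome; the exact config keys the wizard writes are not spelled out here, so treat these as assumptions:

```bash
# Cloud only    -> baseUrl https://ollama.com + a real OLLAMA_API_KEY
# Cloud + Local -> baseUrl of a signed-in Ollama host + a placeholder key
# Local only    -> local baseUrl + a placeholder key, for example:
openclaw config set models.providers.ollama.baseUrl "http://127.0.0.1:11434"
openclaw config set models.providers.ollama.apiKey "ollama-local"
```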
@@ -181,8 +181,10 @@ What you set:
 More detail: [Synthetic](/providers/synthetic).
 </Accordion>
 <Accordion title="Ollama (Cloud and local open models)">
-Prompts for base URL (default `http://127.0.0.1:11434`), then offers Cloud + Local or Local mode.
-Discovers available models and suggests defaults.
+Prompts for `Cloud + Local`, `Cloud only`, or `Local only` first.
+`Cloud only` uses `OLLAMA_API_KEY` with `https://ollama.com`.
+The host-backed modes prompt for base URL (default `http://127.0.0.1:11434`), discover available models, and suggest defaults.
+`Cloud + Local` also checks whether that Ollama host is signed in for cloud access.
 More detail: [Ollama](/providers/ollama).
 </Accordion>
 <Accordion title="Moonshot and Kimi Coding">