mirror of
https://github.com/openclaw/openclaw.git
synced 2026-03-12 07:20:45 +00:00
docs(ollama): align onboarding guidance with code
@@ -2084,8 +2084,21 @@ More context: [Models](/concepts/models).
### Can I use self-hosted models (llama.cpp, vLLM, Ollama)?
Yes. If your local server exposes an OpenAI-compatible API, you can point a
custom provider at it. Ollama is supported directly and is the easiest path for
local models.
Quickest setup:
1. Install Ollama from `https://ollama.com/download`
2. Pull a local model such as `ollama pull glm-4.7-flash`
3. If you want Ollama Cloud too, run `ollama signin`
4. Run `openclaw onboard` and choose `Ollama`
5. Pick `Local` or `Cloud + Local`
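The numbered steps above boil down to a short shell session (the model name is the one from step 2; `openclaw onboard` is interactive, so the provider and mode choices happen at its prompts):

```shell
# Steps 1-2: install Ollama from https://ollama.com/download, then pull a local model
ollama pull glm-4.7-flash

# Step 3 (optional): sign in to enable Ollama Cloud models
ollama signin

# Steps 4-5: interactive onboarding; select Ollama, then Local or Cloud + Local
openclaw onboard
```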
Notes:

- `Cloud + Local` gives you Ollama Cloud models plus your local Ollama models
- Cloud models such as `kimi-k2.5:cloud` do not need a local pull
- For manual switching, use `openclaw models list` and `openclaw models set ollama/<model>`
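If you prefer switching models manually instead of re-running onboarding, the two commands from the last note look like this (the concrete model name is illustrative; substitute any model shown by `openclaw models list`):

```shell
# Show every model OpenClaw knows about
openclaw models list

# Pin the active model to a local Ollama model, e.g. the one pulled during setup
openclaw models set ollama/glm-4.7-flash
```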
Security note: smaller or heavily quantized models are more vulnerable to prompt
injection. We strongly recommend **large models** for any bot that can use tools.