mirror of
https://github.com/openclaw/openclaw.git
synced 2026-03-12 07:20:45 +00:00
fix: rework the changes Claude made (vibe coding is not there yet) and address the issues the review bot flagged
This commit is contained in:
committed by Peter Steinberger
parent c640b5f86c
commit 8f2884b986
@@ -1,9 +1,9 @@
 ---
-summary: "Model providers (LLMs) supported by OpenClaw"
+summary: 'Model providers (LLMs) supported by OpenClaw'
 read_when:
   - You want to choose a model provider
   - You need a quick overview of supported LLM backends
-title: "Model Providers"
+title: 'Model Providers'
 ---

 # Model Providers
@@ -29,7 +29,7 @@ See [Venice AI](/providers/venice).

 ```json5
 {
-  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
+  agents: { defaults: { model: { primary: 'anthropic/claude-opus-4-6' } } },
 }
 ```

@@ -55,6 +55,7 @@ See [Venice AI](/providers/venice).
 - [Ollama (local models)](/providers/ollama)
 - [vLLM (local models)](/providers/vllm)
 - [Qianfan](/providers/qianfan)
+- [NVIDIA](/providers/nvidia)

 ## Transcription providers

docs/providers/nvidia.md (new file, 51 lines)
@@ -0,0 +1,51 @@
+---
+summary: "Use NVIDIA's OpenAI-compatible API in OpenClaw"
+read_when:
+  - You want to use NVIDIA models in OpenClaw
+  - You need NVIDIA_API_KEY setup
+title: 'NVIDIA'
+---
+
+# NVIDIA
+
+NVIDIA provides an OpenAI-compatible API at `https://integrate.api.nvidia.com/v1` for Nemotron and NeMo models. Authenticate with an API key from [NVIDIA NGC](https://catalog.ngc.nvidia.com/).
+
+## CLI setup
+
+```bash
+openclaw onboard --auth-choice apiKey --token-provider nvidia --token "$NVIDIA_API_KEY"
+```
+
+If `NVIDIA_API_KEY` is already exported, you can omit `--token`.
+
+## Config snippet
+
+```json5
+{
+  env: { NVIDIA_API_KEY: 'nvapi-...' },
+  models: {
+    providers: {
+      nvidia: {
+        baseUrl: 'https://integrate.api.nvidia.com/v1',
+        api: 'openai-completions',
+      },
+    },
+  },
+  agents: {
+    defaults: {
+      model: { primary: 'nvidia/llama-3.1-nemotron-70b-instruct' },
+    },
+  },
+}
+```
+
+## Model IDs
+
+- `nvidia/llama-3.1-nemotron-70b-instruct` (default)
+- `nvidia/llama-3.3-70b-instruct`
+- `nvidia/mistral-nemo-minitron-8b-8k-instruct`
+
+## Notes
+
+- OpenAI-compatible `/v1` endpoint; use an API key from NVIDIA NGC.
+- Provider auto-enables when `NVIDIA_API_KEY` is set; uses static defaults (131,072-token context window, 4,096 max tokens).
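Since the new provider page documents a standard OpenAI-compatible endpoint, any OpenAI-style HTTP client can talk to it directly. Below is a minimal stdlib-only Python sketch; the base URL and model IDs come from the page above, while the function names and the payload-builder helper are illustrative, not part of OpenClaw or NVIDIA's SDKs.

```python
import json
import os
import urllib.request

# Base URL documented on the provider page above.
API_BASE = "https://integrate.api.nvidia.com/v1"


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload (testable offline)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str, model: str = "nvidia/llama-3.1-nemotron-70b-instruct") -> str:
    """Send one chat turn to NVIDIA's OpenAI-compatible endpoint.

    Requires NVIDIA_API_KEY in the environment (an 'nvapi-...' key from NGC).
    """
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Keeping the payload builder separate from the network call makes the request shape checkable without an API key or network access.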