fix: I am fixing all the changes that Claude made. Vibe coding is not there yet. Anyway, I fixed the issues that the bot told me to fix

Gabriel
2026-02-08 03:08:41 +00:00
committed by Peter Steinberger
parent c640b5f86c
commit 8f2884b986
6 changed files with 57 additions and 112 deletions


@@ -289,7 +289,6 @@ Docs: https://docs.openclaw.ai
- Models: support Anthropic Opus 4.6 and OpenAI Codex gpt-5.3-codex (forward-compat fallbacks). (#9853, #10720, #9995) Thanks @TinyTb, @calvin-hpnet, @tyler6204.
- Providers: add xAI (Grok) support. (#9885) Thanks @grp06.
- Providers: add Baidu Qianfan support. (#8868) Thanks @ide-rea.
- Providers: add NVIDIA API support with models including Llama 3.1 Nemotron 70B, Llama 3.3 70B, and Mistral NeMo Minitron 8B.
- Web UI: add token usage dashboard. (#10072) Thanks @Takhoffman.
- Web UI: add RTL auto-direction support for Hebrew/Arabic text in chat composer and rendered messages. (#11498) Thanks @dirbalak.
- Memory: native Voyage AI support. (#7078) Thanks @mcinteerj.


@@ -1,107 +0,0 @@
---
summary: "NVIDIA API setup for AI model access"
read_when:
- You want to use NVIDIA's AI models
- You need NVIDIA_API_KEY setup
title: "NVIDIA API"
---
# NVIDIA API
OpenClaw can use NVIDIA's API (https://integrate.api.nvidia.com/v1) for accessing various AI models. NVIDIA provides access to state-of-the-art language models through their integration endpoint.
## API Setup
### NVIDIA (direct)
- Base URL: [https://integrate.api.nvidia.com/v1](https://integrate.api.nvidia.com/v1)
- Environment variable: `NVIDIA_API_KEY`
- Get your API key from: [NVIDIA NGC](https://catalog.ngc.nvidia.com/)
## Config example
```json5
{
  models: {
    providers: {
      nvidia: {
        apiKey: "nvapi-...",
        baseUrl: "https://integrate.api.nvidia.com/v1",
      },
    },
  },
  agents: {
    default: {
      provider: "nvidia",
      model: "nvidia/llama-3.1-nemotron-70b-instruct",
    },
  },
}
```
## Available Models
OpenClaw includes support for several NVIDIA models:
- `nvidia/llama-3.1-nemotron-70b-instruct` (default) — High-performance instruction-following model
- `nvidia/llama-3.3-70b-instruct` — Latest Llama 3.3 variant
- `nvidia/mistral-nemo-minitron-8b-8k-instruct` — Smaller, efficient model
## Environment Variable Setup
Set your NVIDIA API key as an environment variable:
```bash
export NVIDIA_API_KEY="nvapi-your-key-here"
```
Or add it to your `.env` file:
```bash
NVIDIA_API_KEY=nvapi-your-key-here
```
## Usage in Config
Minimal configuration (uses environment variable):
```json5
{
  agents: {
    default: {
      provider: "nvidia",
      model: "nvidia/llama-3.1-nemotron-70b-instruct",
    },
  },
}
```
Explicit API key configuration:
```json5
{
  models: {
    providers: {
      nvidia: {
        apiKey: "NVIDIA_API_KEY",
        baseUrl: "https://integrate.api.nvidia.com/v1",
        api: "openai-completions",
      },
    },
  },
}
```
## Professional and Personal Use
NVIDIA's API is suitable for both professional and personal applications:
- **Professional**: Enterprise-grade models for business applications, research, and development
- **Personal**: Access to powerful AI models for learning, experimentation, and personal projects
## Notes
- NVIDIA API uses OpenAI-compatible endpoints
- Models are automatically discovered if `NVIDIA_API_KEY` is set
- Default context window: 131,072 tokens
- Default max tokens: 4,096 tokens


@@ -1,9 +1,9 @@
---
summary: "Model providers (LLMs) supported by OpenClaw"
summary: 'Model providers (LLMs) supported by OpenClaw'
read_when:
- You want to choose a model provider
- You need a quick overview of supported LLM backends
title: "Model Providers"
title: 'Model Providers'
---
# Model Providers
@@ -29,7 +29,7 @@ See [Venice AI](/providers/venice).
```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
  agents: { defaults: { model: { primary: 'anthropic/claude-opus-4-6' } } },
}
```
@@ -55,6 +55,7 @@ See [Venice AI](/providers/venice).
- [Ollama (local models)](/providers/ollama)
- [vLLM (local models)](/providers/vllm)
- [Qianfan](/providers/qianfan)
- [NVIDIA](/providers/nvidia)
## Transcription providers

docs/providers/nvidia.md (new file, 51 lines added)

@@ -0,0 +1,51 @@
---
summary: "Use NVIDIA's OpenAI-compatible API in OpenClaw"
read_when:
- You want to use NVIDIA models in OpenClaw
- You need NVIDIA_API_KEY setup
title: 'NVIDIA'
---
# NVIDIA
NVIDIA provides an OpenAI-compatible API at `https://integrate.api.nvidia.com/v1` for Nemotron and NeMo models. Authenticate with an API key from [NVIDIA NGC](https://catalog.ngc.nvidia.com/).
## CLI setup
```bash
openclaw onboard --auth-choice apiKey --token-provider nvidia --token "$NVIDIA_API_KEY"
```
If `NVIDIA_API_KEY` is already exported, you can omit `--token`.
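To sanity-check the key before pointing OpenClaw at it, you can call the endpoint directly. A minimal sketch, assuming the API exposes the standard OpenAI-style `/chat/completions` route:
```bash
# Smoke test for NVIDIA's OpenAI-compatible endpoint (assumes the standard
# /chat/completions route); expects NVIDIA_API_KEY to be exported.
curl https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer $NVIDIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/llama-3.1-nemotron-70b-instruct",
        "messages": [{"role": "user", "content": "Say hello"}],
        "max_tokens": 32
      }'
```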
## Config snippet
```json5
{
  env: { NVIDIA_API_KEY: 'nvapi-...' },
  models: {
    providers: {
      nvidia: {
        baseUrl: 'https://integrate.api.nvidia.com/v1',
        api: 'openai-completions',
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: 'nvidia/llama-3.1-nemotron-70b-instruct' },
    },
  },
}
```
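If you prefer not to keep the key in the config file, exporting it in your shell is enough; the notes below describe the provider auto-enabling when `NVIDIA_API_KEY` is set:
```bash
# Keep the key out of the config; OpenClaw reads it from the environment.
export NVIDIA_API_KEY="nvapi-your-key-here"
```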
## Model IDs
- `nvidia/llama-3.1-nemotron-70b-instruct` (default)
- `nvidia/llama-3.3-70b-instruct`
- `nvidia/mistral-nemo-minitron-8b-8k-instruct`
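Because the endpoint is OpenAI-compatible, you can usually enumerate the model IDs your key has access to (a sketch, assuming the standard `/v1/models` route is exposed):
```bash
# List model IDs available to this key (assumes an OpenAI-style /v1/models route).
curl -s https://integrate.api.nvidia.com/v1/models \
  -H "Authorization: Bearer $NVIDIA_API_KEY"
```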
## Notes
- OpenAI-compatible `/v1` endpoint; use an API key from NVIDIA NGC.
- Provider auto-enables when `NVIDIA_API_KEY` is set; uses static defaults (131,072-token context window, 4,096 max tokens).


@@ -305,6 +305,7 @@ export function resolveEnvApiKey(provider: string): EnvApiKeyResult | null {
"cloudflare-ai-gateway": "CLOUDFLARE_AI_GATEWAY_API_KEY",
moonshot: "MOONSHOT_API_KEY",
minimax: "MINIMAX_API_KEY",
nvidia: "NVIDIA_API_KEY",
xiaomi: "XIAOMI_API_KEY",
synthetic: "SYNTHETIC_API_KEY",
venice: "VENICE_API_KEY",


@@ -2,7 +2,7 @@ import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { describe, expect, it } from "vitest";
import { resolveImplicitProviders, buildNvidiaProvider } from "./models-config.providers.js";
import { buildNvidiaProvider, resolveImplicitProviders } from "./models-config.providers.js";
describe("NVIDIA provider", () => {
it("should include nvidia when NVIDIA_API_KEY is configured", async () => {