diff --git a/docs/providers/nvidia.md b/docs/providers/nvidia.md
index e35feb42475..1bd8eee19be 100644
--- a/docs/providers/nvidia.md
+++ b/docs/providers/nvidia.md
@@ -88,6 +88,38 @@ openclaw onboard --auth-choice nvidia-api-key --nvidia-api-key "nvapi-..."
NVIDIA uses the standard `/v1` completions endpoint. Any OpenAI-compatible
tooling should work out of the box with the NVIDIA base URL.
+
+
+Some NVIDIA-hosted custom models can take longer than the default model idle
+watchdog allows before they emit their first response chunk. For custom NVIDIA
+provider entries, raise the provider-level timeout rather than the whole agent
+runtime timeout:
+
+```json5
+{
+  models: {
+    providers: {
+      "custom-integrate-api-nvidia-com": {
+        baseUrl: "https://integrate.api.nvidia.com/v1",
+        api: "openai-completions",
+        apiKey: "NVIDIA_API_KEY",
+        timeoutSeconds: 300,
+      },
+    },
+  },
+  agents: {
+    defaults: {
+      models: {
+        "custom-integrate-api-nvidia-com/meta/llama-3.1-70b-instruct": {
+          params: { thinking: "off" },
+        },
+      },
+    },
+  },
+}
+```
+
+
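+Because the endpoint is OpenAI-compatible, a plain `curl` request is a quick
+sanity check before wiring up a provider entry. A minimal sketch, assuming
+`NVIDIA_API_KEY` is exported in your shell (the model name below is
+illustrative; substitute the model you are targeting):
+
+```bash
+# Hypothetical smoke test against the standard /v1 chat completions endpoint.
+curl https://integrate.api.nvidia.com/v1/chat/completions \
+  -H "Authorization: Bearer $NVIDIA_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "meta/llama-3.1-70b-instruct",
+    "messages": [{"role": "user", "content": "Hello"}],
+    "max_tokens": 16
+  }'
+```
+
+A `200` response with a `choices` array confirms the base URL and key are
+correct; a slow first byte here is the same latency the `timeoutSeconds`
+setting above is meant to absorb.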