diff --git a/docs/providers/fireworks.md b/docs/providers/fireworks.md
index f5eb461c2b3..8e44e32c917 100644
--- a/docs/providers/fireworks.md
+++ b/docs/providers/fireworks.md
@@ -51,10 +51,10 @@ openclaw onboard --non-interactive \
 
 ## Built-in catalog
 
-| Model ref | Name | Input | Context | Max output | Notes |
-| ------------------------------------------------------ | --------------------------- | ---------- | ------- | ---------- | ------------------------------------------ |
-| `fireworks/accounts/fireworks/models/kimi-k2p6` | Kimi K2.6 | text,image | 262,144 | 262,144 | Latest Kimi model on Fireworks |
-| `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` | Kimi K2.5 Turbo (Fire Pass) | text,image | 256,000 | 256,000 | Default bundled starter model on Fireworks |
+| Model ref | Name | Input | Context | Max output | Notes |
+| ------------------------------------------------------ | --------------------------- | ---------- | ------- | ---------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `fireworks/accounts/fireworks/models/kimi-k2p6` | Kimi K2.6 | text,image | 262,144 | 262,144 | Latest Kimi model on Fireworks. Thinking is disabled for Fireworks K2.6 requests; route through Moonshot directly if you need Kimi thinking output. |
+| `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` | Kimi K2.5 Turbo (Fire Pass) | text,image | 256,000 | 256,000 | Default bundled starter model on Fireworks |
 
 If Fireworks publishes a newer model such as a fresh Qwen or Gemma release, you can switch to it directly by using its Fireworks model id without waiting for a bundled catalog update.
diff --git a/docs/providers/ollama.md b/docs/providers/ollama.md
index 8b49d2ba699..011d2397837 100644
--- a/docs/providers/ollama.md
+++ b/docs/providers/ollama.md
@@ -137,6 +137,8 @@ Choose your preferred setup method and mode.
 
 Use **Cloud only** during setup. OpenClaw prompts for `OLLAMA_API_KEY`, sets `baseUrl: "https://ollama.com"`, and seeds the hosted cloud model list. This path does **not** require a local Ollama server or `ollama signin`.
+The cloud model list shown during `openclaw onboard` is populated live from `https://ollama.com/api/tags`, capped at 500 entries, so the picker reflects the current hosted catalog rather than a static seed. If `ollama.com` is unreachable or returns no models at setup time, OpenClaw falls back to the previous hardcoded suggestions so onboarding still completes.
+
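The live-catalog behavior described in the ollama.md hunk (fetch from `https://ollama.com/api/tags`, cap at 500 entries, fall back to the hardcoded seed on failure or an empty result) can be sketched as below. This is a minimal illustration, not OpenClaw's actual implementation: the function and constant names are hypothetical, and the response shape assumes the public Ollama tags endpoint's `{ "models": [{ "name": ... }] }` JSON.

```typescript
// Illustrative sketch only; pickCloudModels, fetchCloudModels, and
// FALLBACK_MODELS are hypothetical names, not OpenClaw's real identifiers.

const MAX_MODELS = 500; // cap described in the docs change
const FALLBACK_MODELS = ["model-a", "model-b"]; // stand-in for the old hardcoded seed

// Pure selection logic: cap a live list, or fall back when it is empty.
function pickCloudModels(
  fetched: string[],
  fallback: string[],
  cap: number = MAX_MODELS,
): string[] {
  return fetched.length > 0 ? fetched.slice(0, cap) : fallback;
}

// Network wrapper: any failure (unreachable host, bad status, malformed JSON)
// degrades to the fallback list so onboarding still completes.
async function fetchCloudModels(): Promise<string[]> {
  try {
    const res = await fetch("https://ollama.com/api/tags");
    if (!res.ok) return FALLBACK_MODELS;
    const body = (await res.json()) as { models?: { name: string }[] };
    return pickCloudModels(
      (body.models ?? []).map((m) => m.name),
      FALLBACK_MODELS,
    );
  } catch {
    return FALLBACK_MODELS;
  }
}
```

Keeping the cap/fallback decision in a pure helper makes the behavior easy to unit-test without hitting `ollama.com`.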