docs(providers): rewrite Fireworks page with thinking-off context

Verified against extensions/fireworks/openclaw.plugin.json and the
bundled provider entry. The plugin is enabledByDefault, registers the
`fireworks-ai` alias via defineSingleProviderPluginEntry, and
dynamically clones the Fire Pass template for any custom Fireworks
model id, forcing thinking off when the id matches the Kimi pattern
(model-id.ts + thinking-policy.ts).
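
For reviewers who have not read the extension sources, a minimal
sketch of the dynamic-resolution behavior described above. It is
illustrative only: the interface, the template object, and the helper
name are assumptions, not the actual code in model-id.ts.

```ts
// Illustrative sketch only; the real resolution lives in
// extensions/fireworks/model-id.ts. Names and shapes are assumptions.
interface FireworksModel {
  id: string;          // e.g. "accounts/fireworks/models/<model>"
  input: string[];     // modalities cloned from the template
  api: "openai-completions";
}

// Assumed stand-in for the bundled Fire Pass template entry.
const FIRE_PASS_TEMPLATE: Omit<FireworksModel, "id"> = {
  input: ["text", "image"],
  api: "openai-completions",
};

// Any "fireworks/..." ref that is not in the bundled catalog is
// resolved by cloning the template and keeping the caller's id.
function resolveCustomModel(ref: string): FireworksModel {
  const id = ref.replace(/^fireworks\//, "");
  return { ...FIRE_PASS_TEMPLATE, id };
}
```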

Added: alias mention, direct CLI flag, properties summary, a dedicated
Note explaining why thinking is forced off for Kimi (the bundled
thinking policy plus the Fireworks API rejecting reasoning_* params),
and a 'Why thinking is forced off for Kimi' accordion pointing
operators at Moonshot for native reasoning. Replaced the broken
`/concepts/model-providers` Tip ordering and added a Thinking modes
card to round out the cross-links.
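
A similarly hedged sketch of the thinking policy that the Note and the
accordion describe; the exported function name and the Kimi matcher
below are assumptions, not the contents of thinking-policy.ts.

```ts
// Illustrative sketch only; the bundled policy is
// extensions/fireworks/thinking-policy.ts and may differ in detail.
type ThinkingLevel = "off" | "low" | "medium" | "high";

// Hypothetical Kimi id matcher; the real pattern lives elsewhere.
const KIMI_ID = /kimi/i;

// Advertise only "off" for Kimi ids so manual /think switches and
// provider-policy surfaces agree with what the Fireworks API accepts.
function supportedThinkingLevels(modelId: string): ThinkingLevel[] {
  return KIMI_ID.test(modelId)
    ? ["off"]
    : ["off", "low", "medium", "high"];
}
```

Under an arrangement like this, a manual /think switch on a Fireworks
Kimi id only ever sees the off level, which is the alignment the Note
and accordion document.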

Reorganized Step 1 as a CodeGroup so onboarding, direct flag, and env
fallback are visible up front instead of buried under a separate
non-interactive example block (kept the non-interactive block for full
unattended install). Verified `/concepts/model-providers`,
`/help/troubleshooting`, `/tools/thinking`, and `/providers/moonshot`
targets exist on origin/main.
Author: Vincent Koc
Date:   2026-05-05 16:40:57 -07:00
Parent: 81349cdc2a
Commit: 67657356f0


@@ -4,39 +4,61 @@ title: "Fireworks"
read_when:
- You want to use Fireworks with OpenClaw
- You need the Fireworks API key env var or default model id
+- You are debugging Kimi thinking-off behavior on Fireworks
---
-[Fireworks](https://fireworks.ai) exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw includes a bundled Fireworks provider plugin.
+[Fireworks](https://fireworks.ai) exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw includes a bundled Fireworks provider plugin that ships with two pre-cataloged Kimi models and accepts any Fireworks model or router id at runtime.
-| Property | Value |
-| ------------- | ------------------------------------------------------ |
-| Provider | `fireworks` |
-| Auth | `FIREWORKS_API_KEY` |
-| API | OpenAI-compatible chat/completions |
-| Base URL | `https://api.fireworks.ai/inference/v1` |
-| Default model | `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` |
+| Property | Value |
+| --------------- | ------------------------------------------------------ |
+| Provider id | `fireworks` (alias: `fireworks-ai`) |
+| Plugin | bundled, `enabledByDefault: true` |
+| Auth env var | `FIREWORKS_API_KEY` |
+| Onboarding flag | `--auth-choice fireworks-api-key` |
+| Direct CLI flag | `--fireworks-api-key <key>` |
+| API | OpenAI-compatible (`openai-completions`) |
+| Base URL | `https://api.fireworks.ai/inference/v1` |
+| Default model | `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` |
+| Default alias | `Kimi K2.5 Turbo` |
## Getting started
<Steps>
<Step title="Set up Fireworks auth through onboarding">
```bash
openclaw onboard --auth-choice fireworks-api-key
```
<Step title="Set the Fireworks API key">
<CodeGroup>
This stores your Fireworks key in OpenClaw config and sets the Fire Pass starter model as the default.
```bash Onboarding
openclaw onboard --auth-choice fireworks-api-key
```
```bash Direct flag
openclaw onboard --non-interactive \
--auth-choice fireworks-api-key \
--fireworks-api-key "$FIREWORKS_API_KEY"
```
```bash Env only
export FIREWORKS_API_KEY=fw-...
```
</CodeGroup>
Onboarding stores the key against the `fireworks` provider in your auth profiles and sets the **Fire Pass** Kimi K2.5 Turbo router as the default model.
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider fireworks
```
The list should include `Kimi K2.6` and `Kimi K2.5 Turbo (Fire Pass)`. If `FIREWORKS_API_KEY` is unresolved, `openclaw models status --json` reports the missing credential under `auth.unusableProfiles`.
</Step>
</Steps>
-## Non-interactive example
-For scripted or CI setups, pass all values on the command line:
+## Non-interactive setup
+For scripted or CI installs, pass everything on the command line:
```bash
openclaw onboard --non-interactive \
@@ -49,25 +71,25 @@ openclaw onboard --non-interactive \
## Built-in catalog
-| Model ref | Name | Input | Context | Max output | Notes |
-| ------------------------------------------------------ | --------------------------- | ---------- | ------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `fireworks/accounts/fireworks/models/kimi-k2p6` | Kimi K2.6 | text,image | 262,144 | 262,144 | Latest Kimi model on Fireworks. Thinking is disabled for Fireworks K2.6 requests; route through Moonshot directly if you need Kimi thinking output. |
-| `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` | Kimi K2.5 Turbo (Fire Pass) | text,image | 256,000 | 256,000 | Default bundled starter model on Fireworks |
+| Model ref | Name | Input | Context | Max output | Thinking |
+| ------------------------------------------------------ | --------------------------- | ------------ | ------- | ---------- | -------------------- |
+| `fireworks/accounts/fireworks/models/kimi-k2p6` | Kimi K2.6 | text + image | 262,144 | 262,144 | Forced off |
+| `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` | Kimi K2.5 Turbo (Fire Pass) | text + image | 256,000 | 256,000 | Forced off (default) |
<Tip>
If Fireworks publishes a newer model such as a fresh Qwen or Gemma release, you can switch to it directly by using its Fireworks model id without waiting for a bundled catalog update.
</Tip>
+<Note>
+OpenClaw pins all Fireworks Kimi models to `thinking: off` because Fireworks rejects Kimi thinking parameters in production. Routing the same model through [Moonshot](/providers/moonshot) directly preserves Kimi reasoning output. See [thinking modes](/tools/thinking) for switching between providers.
+</Note>
## Custom Fireworks model ids
-OpenClaw accepts dynamic Fireworks model ids too. Use the exact model or router id shown by Fireworks and prefix it with `fireworks/`.
+OpenClaw accepts any Fireworks model or router id at runtime. Use the exact id shown by Fireworks and prefix it with `fireworks/`. Dynamic resolution clones the Fire Pass template (text + image input, OpenAI-compatible API, default cost zero) and disables thinking automatically when the id matches the Kimi pattern.
```json5
{
agents: {
defaults: {
model: {
primary: "fireworks/accounts/fireworks/routers/kimi-k2p5-turbo",
primary: "fireworks/accounts/fireworks/models/<your-model-id>",
},
},
},
@@ -81,26 +103,41 @@ OpenClaw accepts dynamic Fireworks model ids too. Use the exact model or router
- Router model: `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo`
- Direct model: `fireworks/accounts/fireworks/models/<model-name>`
-OpenClaw strips the `fireworks/` prefix when building the API request and sends the remaining path to the Fireworks endpoint.
+OpenClaw strips the `fireworks/` prefix when constructing the API request and sends the remaining path to the Fireworks endpoint as the OpenAI-compatible `model` field.
</Accordion>
<Accordion title="Environment note">
If the Gateway runs outside your interactive shell, make sure `FIREWORKS_API_KEY` is available to that process too.
<Accordion title="Why thinking is forced off for Kimi">
Fireworks K2.6 returns a 400 if the request carries `reasoning_*` parameters even though Kimi supports thinking through Moonshot's own API. The bundled policy (`extensions/fireworks/thinking-policy.ts`) advertises only the `off` thinking level for Kimi model ids, so manual `/think` switches and provider-policy surfaces stay aligned with the runtime contract.
To use Kimi reasoning end-to-end, configure the [Moonshot provider](/providers/moonshot) and route the same model through it.
</Accordion>
<Accordion title="Environment availability for the daemon">
If the Gateway runs as a managed service (launchd, systemd, Docker), the Fireworks key must be visible to that process — not just to your interactive shell.
<Warning>
-A key sitting only in `~/.profile` will not help a launchd/systemd daemon unless that environment is imported there as well. Set the key in `~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can read it.
+A key sitting only in `~/.profile` will not help a launchd or systemd daemon unless that environment is imported there too. Set the key in `~/.openclaw/.env` or via `env.shellEnv` to make it readable from the gateway process.
</Warning>
On macOS, `openclaw gateway install` already wires `~/.openclaw/.env` into the LaunchAgent environment file. Re-run install (or `openclaw doctor --fix`) after rotating the key.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
<Card title="Model providers" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Thinking modes" href="/tools/thinking" icon="brain">
`/think` levels, provider policies, and routing reasoning-capable models.
</Card>
<Card title="Moonshot" href="/providers/moonshot" icon="moon">
Run Kimi with native thinking output through Moonshot's own API.
</Card>
<Card title="Troubleshooting" href="/help/troubleshooting" icon="wrench">
General troubleshooting and FAQ.
</Card>