Commit Graph

221 Commits

Peter Steinberger
ced267c5cb fix(moonshot): apply native thinking payload compatibility 2026-03-03 01:05:24 +00:00
Peter Steinberger
e930517154 fix(ci): resolve docs lint and test typing regressions 2026-03-03 00:55:01 +00:00
Peter Steinberger
86090b0ff2 docs(models): refresh minimax kimi glm provider docs 2026-03-03 00:40:15 +00:00
Peter Steinberger
6b85ec3022 docs: tighten subscription guidance and update MiniMax M2.5 refs 2026-03-03 00:02:37 +00:00
Peter Steinberger
eb35fb745d docs: remove provider recommendation language 2026-03-02 17:33:38 +00:00
Vincent Koc
c6e5026edf Docs: sort provider lists A-Z 2026-03-01 23:42:55 -08:00
Peter Steinberger
3049ca840f docs: replace bare provider URLs with markdown links 2026-03-02 06:01:29 +00:00
Peter Steinberger
bc0288bcfb docs: clarify adaptive thinking and openai websocket docs 2026-03-02 05:46:57 +00:00
Peter Steinberger
d1615eb35f feat(openai): add websocket warm-up with configurable toggle 2026-03-01 22:45:03 +00:00
Agent
063c4f00ea docs: clarify Anthropic context1m long-context requirements 2026-03-01 22:35:26 +00:00
Peter Steinberger
7ced38b5ef feat(agents): make openai responses websocket-first with fallback 2026-03-01 22:32:37 +00:00
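Commits d1615eb35f and 7ced38b5ef describe a websocket-first transport for the OpenAI Responses API with a configurable warm-up. The option names are not visible in this log; a minimal sketch of what the provider entry might look like, with placeholder keys:

```json5
// Hypothetical sketch: the commits confirm a websocket-first transport with
// HTTP fallback and a warm-up toggle, but not these key names.
{
  models: {
    providers: {
      openai: {
        transport: "websocket",   // assumed key: prefer websocket, fall back to HTTP
        websocketWarmup: true,    // assumed key: open the socket before the first request
      },
    },
  },
}
```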
Vincent Koc
f16ecd1dac fix(ollama): unify context window handling across discovery, merge, and OpenAI-compat transport (#29205)
* fix(ollama): inject num_ctx for OpenAI-compatible transport

* fix(ollama): discover per-model context and preserve higher limits

* fix(agents): prefer matching provider model for fallback limits

* fix(types): require numeric token limits in provider model merge

* fix(types): accept unknown payload in ollama num_ctx wrapper

* fix(types): simplify ollama settled-result extraction

* config(models): add provider flag for Ollama OpenAI num_ctx injection

* config(schema): allow provider num_ctx injection flag

* config(labels): label provider num_ctx injection flag

* config(help): document provider num_ctx injection flag

* agents(ollama): gate OpenAI num_ctx injection with provider config

* tests(ollama): cover provider num_ctx injection flag behavior

* docs(config): list provider num_ctx injection option

* docs(ollama): document OpenAI num_ctx injection toggle

* docs(config): clarify merge token-limit precedence

* config(help): note merge uses higher model token limits

* fix(ollama): cap /api/show discovery concurrency

* fix(ollama): restrict num_ctx injection to OpenAI compat

* tests(ollama): cover ipv6 and compat num_ctx gating

* fix(ollama): detect remote compat endpoints for ollama-labeled providers

* fix(ollama): cap per-model /api/show lookups to bound discovery load
2026-02-27 17:20:47 -08:00
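PR #29205 gates injection of Ollama's num_ctx option behind a provider flag so that requests sent through the OpenAI-compatible transport carry the discovered context window. The flag's actual name is not shown above; a hedged sketch with a placeholder name:

```json5
// num_ctx, the OpenAI-compat gating, and /api/show discovery are confirmed by
// the commit messages; the flag name and nesting here are placeholders.
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://localhost:11434/v1",  // OpenAI-compatible endpoint
        injectNumCtx: true,                    // assumed flag: add num_ctx to compat requests
      },
    },
  },
}
```

Per the later bullets, discovery reads each model's context length via /api/show with capped concurrency, and the merge step keeps the higher of the configured and discovered token limits.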
Vincent Koc
d17c083803 docs(ollama): clarify /v1 tool-calling guidance (#29204) 2026-02-27 15:21:13 -08:00
Peter Steinberger
dede4089a6 docs(openai): add clear server compaction toggle examples 2026-02-27 16:21:08 +00:00
Peter Steinberger
8da3a9a92d fix(agents): auto-enable OpenAI Responses server-side compaction (#16930, #22441, #25088)
Landed from contributor PRs #16930, #22441, and #25088.

Co-authored-by: liweiguang <codingpunk@gmail.com>
Co-authored-by: EdwardWu7 <wuzhiyuan7@gmail.com>
Co-authored-by: MoerAI <friendnt@g.skku.edu>
2026-02-27 16:15:50 +00:00
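Commits dede4089a6 and 8da3a9a92d auto-enable server-side compaction for OpenAI Responses and document a toggle for turning it off. Only the existence of the toggle is confirmed here; a sketch with an assumed key name:

```json5
{
  models: {
    providers: {
      openai: {
        serverCompaction: false,  // assumed key: opt out of automatic server-side compaction
      },
    },
  },
}
```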
Peter Steinberger
03d7641b0e feat(agents): default codex transport to websocket-first 2026-02-26 16:22:53 +01:00
Gustavo Madeira Santana
5239b55c0a Config: expand Kilo catalog and persist selected Kilo models (#24921)
Merged via /review-pr -> /prepare-pr -> /merge-pr.

Prepared head SHA: f5a7e1a385
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras
2026-02-23 21:17:37 -05:00
John Fawcett
13f32e2f7d feat: Add Kilo Gateway provider (#20212)
* feat: Add Kilo Gateway provider

Add support for Kilo Gateway as a model provider, similar to OpenRouter.
Kilo Gateway provides a unified API that routes requests to many models
behind a single endpoint and API key.

Changes:
- Add kilocode provider option to auth-choice and onboarding flows
- Add KILOCODE_API_KEY environment variable support
- Add kilocode/ model prefix handling in model-auth and extra-params
- Add provider documentation in docs/providers/kilocode.md
- Update model-providers.md with Kilo Gateway section
- Add design doc for the integration

* kilocode: add provider tests and normalize onboard auth-choice registration

* kilocode: register in resolveImplicitProviders so models appear in provider filter

* kilocode: update base URL from /api/openrouter/ to /api/gateway/

* docs: fix formatting in kilocode docs

* fix: address PR review — remove kilocode from cacheRetention, fix stale model refs and CLI name in docs, fix TS2742

* docs: fix stale refs in design doc — Moltbot to OpenClaw, MoltbotConfig to OpenClawConfig, remove extra-params section, fix doc path

* fix: use resolveAgentModelPrimaryValue for AgentModelConfig union type

---------

Co-authored-by: Mark IJbema <mark@kilocode.ai>
2026-02-23 23:29:27 +00:00
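PR #20212 registers Kilo Gateway as a provider: the KILOCODE_API_KEY env var, the kilocode/ model-ref prefix, and the /api/gateway/ base path are all named in the commits. A sketch of the resulting entry; the host and leaf key names are assumptions:

```json5
{
  models: {
    providers: {
      kilocode: {
        baseUrl: "https://kilocode.ai/api/gateway/",  // path from the commits, host assumed
        apiKeyEnv: "KILOCODE_API_KEY",                // assumed key for the env-var mapping
      },
    },
  },
}
```

Model refs would then use the kilocode/ prefix, for example kilocode/anthropic/claude-sonnet (illustrative ref only).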
Peter Steinberger
78e7f41d28 docs: detail per-agent prompt caching configuration 2026-02-23 18:46:40 +00:00
Vincent Koc
f03ff39754 Providers: skip context1m beta for Anthropic OAuth tokens (#24620)
* Providers: skip context1m beta for Anthropic OAuth tokens

* Tests: cover OAuth context1m beta skip behavior

* Docs: note context1m OAuth incompatibility

* Agents: add context1m-aware context token resolver

* Agents: cover context1m context-token resolver

* Commands: apply context1m-aware context tokens in session store

* Commands: apply context1m-aware context tokens in status summary

* Status: resolve context tokens with context1m model params

* Status: test context1m status context display
2026-02-23 12:29:09 -05:00
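PR #24620 restricts the Anthropic context1m (1M-token context) beta to API-key credentials, skipping the beta header when the stored credential is an OAuth token, and threads a context1m-aware token resolver through the session store and status display. A sketch of a model entry opting in; the context1m name comes from the commits, the surrounding nesting is assumed:

```json5
{
  models: {
    providers: {
      anthropic: {
        models: [
          {
            id: "claude-sonnet",          // illustrative model id
            params: { context1m: true },  // assumed placement of the beta opt-in
          },
        ],
      },
    },
  },
}
```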
Sally O'Malley
eb4ff6df81 Allow Claude model requests to route through Google Vertex AI (#23985)
* feat: add anthropic-vertex provider for Claude via GCP Vertex AI

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: sallyom <somalley@redhat.com>

* docs: add anthropic-vertex provider guide

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: sallyom <somalley@redhat.com>

* Agents: validate Anthropic Vertex project env

* Changelog: format update for Vertex entry

* Providers: rename Anthropic Vertex to Google Vertex Claude

* Providers: remove Vertex Claude provider path

* Models: normalize Vercel Claude shorthand refs

* Onboarding: default Vercel model to Claude shorthand

* Changelog: add @vincentkoc credit for #23985

* Onboarding: keep canonical Vercel default model ref

* Tests: expand Vercel model normalization coverage

---------

Signed-off-by: sallyom <somalley@redhat.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
2026-02-23 11:04:31 -05:00
Vincent Koc
d92ba4f8aa feat(provider/mistral): full Mistral support on OpenClaw 🇫🇷 (#23845)
* Onboard: add Mistral auth choice and CLI flags

* Onboard/Auth: add Mistral provider config defaults

* Auth choice: wire Mistral API-key flow

* Onboard non-interactive: support --mistral-api-key

* Media understanding: add Mistral Voxtral audio provider

* Changelog: note Mistral onboarding and media support

* Docs: add Mistral provider and onboarding/media references

* Tests: cover Mistral media registry/defaults and auth mapping

* Memory: add Mistral embeddings provider support

* Onboarding: refresh Mistral model metadata

* Docs: document Mistral embeddings and endpoints

* Memory: persist Mistral embedding client state in managers

* Memory: add regressions for mistral provider wiring

* Gateway: add live tool probe retry helper

* Gateway: cover live tool probe retry helper

* Gateway: retry malformed live tool-read probe responses

* Memory: support plain-text batch error bodies

* Tests: add Mistral Voxtral live transcription smoke

* Docs: add Mistral live audio test command

* Revert: remove Mistral live voice test and docs entry

* Onboard: re-export Mistral default model ref from models

* Changelog: credit joeVenner for Mistral work

* fix: include Mistral in auto audio key fallback

* Update CHANGELOG.md

* Update CHANGELOG.md

---------

Co-authored-by: Shakker <shakkerdroid@gmail.com>
2026-02-23 00:03:56 +00:00
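PR #23845 adds Mistral across onboarding (including a --mistral-api-key flag for the non-interactive path), Voxtral audio for media understanding, and embeddings for memory. The sketch below is a guess at the resulting provider entry; only the provider name and the CLI flag appear in the log:

```json5
{
  models: {
    providers: {
      mistral: {
        apiKeyEnv: "MISTRAL_API_KEY",  // assumed env-var name paired with --mistral-api-key
      },
    },
  },
}
```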
Peter Steinberger
290f375aa1 docs: fix Together provider env path 2026-02-22 20:23:40 +01:00
Peter Steinberger
6fef318fda docs: replace legacy chat examples in Venice provider guide 2026-02-22 20:15:07 +01:00
Vincent Koc
dcf2c6d7f1 docs: normalize Amazon Bedrock setup section labels (#22549)
* docs(channels): promote Signal option setups to onboarding sections

* docs(channels): rename Microsoft Teams minimal setup section

* docs(channels): standardize onboarding option headings for Zalo and Twitch

* docs(providers): normalize Amazon Bedrock onboarding section labels
2026-02-21 03:40:54 -05:00
Peter Steinberger
c90b09cb02 feat(agents): support Anthropic 1M context beta header 2026-02-18 03:29:48 +01:00
Sebastian
5d1bcc76cc docs(zai): document tool_stream defaults 2026-02-17 09:22:55 -05:00
Peter Steinberger
fdda261478 fix: align NVIDIA provider docs and model ids (#11606) 2026-02-14 05:48:40 +01:00
Gabriel
3feb5d1f10 fix: LINT AGAIN 2026-02-14 05:48:40 +01:00
Gabriel
f90a39e984 fix: my mistakes 2026-02-14 05:48:40 +01:00
Gabriel
ae8be6ac23 fix: linting time 2026-02-14 05:48:40 +01:00
Gabriel
8f2884b986 fix: clean up the changes Claude made (vibe coding is not there yet) and address the issues the review bot flagged 2026-02-14 05:48:40 +01:00
Sunwoo Yu
11702290ff feat(ollama): add native /api/chat provider for streaming + tool calling (#11853)
Merged via /review-pr -> /prepare-pr -> /merge-pr.

Prepared head SHA: 0a723f98e6
Co-authored-by: BrokenFinger98 <115936166+BrokenFinger98@users.noreply.github.com>
Co-authored-by: steipete <58493+steipete@users.noreply.github.com>
Reviewed-by: @steipete
2026-02-14 01:20:42 +01:00
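Commit 11702290ff moves Ollama onto its native /api/chat endpoint, which streams deltas and supports tool calling directly. For reference, a minimal request body for that endpoint; the model name and tool definition are illustrative:

```json5
// POST http://localhost:11434/api/chat  (native Ollama chat API)
{
  model: "gpt-oss:20b",          // tool-capable example model cited later in this log
  stream: true,                  // stream partial responses
  messages: [
    { role: "user", content: "List the files in the repo." },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "list_files",      // illustrative tool
        description: "List files in the working directory",
        parameters: { type: "object", properties: {} },
      },
    },
  ],
}
```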
Tonic
08b7932df0 feat(agents): first-class Hugging Face Inference provider support, Together API fix, and direct-injection auth refactor [AI-assisted] (#13472)
* initial commit

* removes assesment from docs

* resolves automated review comments

* resolves lint, type, tests, refactors, and submits

* solves : why do we have to lint the tests xD

* adds greptile fixes

* solves a type error

* solves a ci error

* refactors auths

* solves a failing test after i pulled from main lol

* solves a failing test after i pulled from main lol

* resolves token naming issue to comply with better practices when using hf / huggingface

* fixes curly lints !

* fixes failing tests for google api from main

* solve merge conflicts

* solves failing tests with a defensive check for an 'undefined' OpenRouter API key

* fix: preserve Hugging Face auth-choice intent and token behavior (#13472) (thanks @Josephrp)

* test: resolve auth-choice cherry-pick conflict cleanup (#13472)

---------

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-02-13 16:18:16 +01:00
gejifeng
e73d881c50 Onboarding: add vLLM provider support 2026-02-13 15:48:37 +01:00
Peter Steinberger
5e7842a41d feat(zai): auto-detect endpoint + default glm-5 (#14786)
* feat(zai): auto-detect endpoint + default glm-5

* test: fix Z.AI default endpoint expectation (#14786)

* test: bump embedded runner beforeAll timeout

* chore: update changelog for Z.AI GLM-5 autodetect (#14786)

* chore: resolve changelog merge conflict with main (#14786)

* chore: append changelog note for #14786 without merge conflict

* chore: sync changelog with main to resolve merge conflict
2026-02-12 19:16:04 +01:00
ryan-crabbe
a36b9be245 Feat/litellm provider (#12823)
* feat: add LiteLLM provider types, env var, credentials, and auth choice

Add litellm-api-key auth choice, LITELLM_API_KEY env var mapping,
setLitellmApiKey() credential storage, and LITELLM_DEFAULT_MODEL_REF.

* feat: add LiteLLM onboarding handler and provider config

Add applyLitellmProviderConfig which properly registers
models.providers.litellm with baseUrl, api type, and model definitions.
This fixes the critical bug from PR #6488 where the provider entry was
never created, causing model resolution to fail at runtime.

* docs: add LiteLLM provider documentation

Add setup guide covering onboarding, manual config, virtual keys,
model routing, and usage tracking. Link from provider index.

* docs: add LiteLLM to sidebar navigation in docs.json

Add providers/litellm to both English and Chinese provider page lists
so the docs page appears in the sidebar navigation.

* test: add LiteLLM non-interactive onboarding test

Wire up litellmApiKey flag inference and auth-choice handler for the
non-interactive onboarding path, and add an integration test covering
profile, model default, and credential storage.

* fix: register --litellm-api-key CLI flag and add preferred provider mapping

Wire up the missing Commander CLI option, action handler mapping, and
help text for --litellm-api-key. Add litellm-api-key to the preferred
provider map for consistency with other providers.

* fix: remove zh-CN sidebar entry for litellm (no localized page yet)

* style: format buildLitellmModelDefinition return type

* fix(onboarding): harden LiteLLM provider setup (#12823)

* refactor(onboarding): keep auth-choice provider dispatcher under size limit

---------

Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-02-11 11:46:56 +01:00
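PR #12823 registers models.providers.litellm with a baseUrl, API type, and model definitions, and maps LITELLM_API_KEY plus a --litellm-api-key flag. A sketch of the entry; field values beyond the names mentioned in the commits are assumptions:

```json5
{
  models: {
    providers: {
      litellm: {
        baseUrl: "http://localhost:4000",  // a common LiteLLM proxy address; yours may differ
        api: "openai-completions",         // assumed value; the commits only say an API type is set
        models: [
          { id: "gpt-4.1" },               // illustrative; LiteLLM routes by its own model names
        ],
      },
    },
  },
}
```

The proxy's virtual key comes from LITELLM_API_KEY or the --litellm-api-key onboarding flag.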
Riccardo Giorato
be6de9bb75 Update Together default model to together/moonshotai/Kimi-K2.5 (#13324) 2026-02-11 08:39:15 +09:00
Riccardo Giorato
661279cbfa feat: adding support for Together ai provider (#10304) 2026-02-10 08:49:34 +09:00
Seb Slight
929a3725d3 docs: canonicalize docs paths and align zh navigation (#11428)
* docs(navigation): canonicalize paths and align zh nav

* chore(docs): remove stray .DS_Store

* docs(scripts): add non-mint docs link audit

* docs(nav): fix zh source paths and preserve legacy redirects (#11428) (thanks @sebslight)

* chore(docs): satisfy lint for docs link audit script (#11428) (thanks @sebslight)
2026-02-07 15:40:35 -05:00
Peter Steinberger
88ffad1c4f Merge PR #8868: add Baidu Qianfan support (thanks @ide-rea) 2026-02-07 00:19:04 -08:00
ide-rea
43c0a7fe1c Merge branch 'openclaw:main' into qianfan 2026-02-07 14:07:52 +08:00
Seb Slight
578a6e27aa Docs: enable markdownlint autofixables except list numbering (#10476)
* docs(markdownlint): enable autofixable rules except list numbering

* docs(zalo): fix malformed bot platform link
2026-02-06 10:08:59 -05:00
Sebastian
0a1f4f666a revert(docs): undo markdownlint autofix churn 2026-02-06 10:00:08 -05:00
Sebastian
c7aec0660e docs(markdownlint): enable autofixable rules and normalize links 2026-02-06 09:55:12 -05:00
Sebastian
1bf9f237f7 docs: linting 2026-02-06 09:35:57 -05:00
ide-rea
3997316fb0 Merge branch 'main' into qianfan 2026-02-06 17:58:28 +08:00
Raphael Borg Ellul Vincenti
34a58b839c fix(ollama): add streaming config and fix OLLAMA_API_KEY env var support (#9870)
* fix(ollama): add streaming config and fix OLLAMA_API_KEY env var support

Adds configurable streaming parameter to model configuration and sets streaming
to false by default for Ollama models. This addresses the corrupted response
issue caused by upstream SDK bug badlogic/pi-mono#1205 where interleaved
content/reasoning deltas in streaming responses cause garbled output.

Changes:
- Add streaming param to AgentModelEntryConfig type
- Set streaming: false default for Ollama models
- Add OLLAMA_API_KEY to envMap (was missing, preventing env var auth)
- Document streaming configuration in Ollama provider docs
- Add tests for Ollama model configuration

Users can now configure streaming per-model and Ollama authentication
via OLLAMA_API_KEY environment variable works correctly.

Fixes #8839
Related: badlogic/pi-mono#1205

* docs(ollama): use gpt-oss:20b as primary example

Updates documentation to use gpt-oss:20b as the primary example model
since it supports tool calling. The model examples now show:

- gpt-oss:20b as the primary recommended model (tool-capable)
- llama3.3 and qwen2.5-coder:32b as additional options

This provides users with a clear, working example that supports
OpenClaw's tool calling features.

* chore: remove unused vi import from ollama test
2026-02-05 16:35:38 -08:00
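PR #9870 introduces a per-model streaming parameter, defaulting to false for Ollama to sidestep the interleaved-delta bug in badlogic/pi-mono#1205, and maps OLLAMA_API_KEY for authentication. A sketch combining the two; the surrounding shape is assumed:

```json5
// streaming and OLLAMA_API_KEY come from the commit; the nesting here is assumed.
{
  models: {
    providers: {
      ollama: {
        models: [
          { id: "gpt-oss:20b", streaming: false },  // tool-capable primary example, streaming off
          { id: "llama3.3" },
          { id: "qwen2.5-coder:32b" },
        ],
      },
    },
  },
}
```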
Gustavo Madeira Santana
4629054403 chore: apply local workspace updates (#9911)
* chore: apply local workspace updates

* fix: resolve prep findings after rebase (#9898) (thanks @gumadeiras)

* refactor: centralize model allowlist normalization (#9898) (thanks @gumadeiras)

* fix: guard model allowlist initialization (#9911)

* docs: update changelog scope for #9911

* docs: remove model names from changelog entry (#9911)

* fix: satisfy type-aware lint in model allowlist (#9911)
2026-02-05 16:54:44 -05:00
大猫子
679bb087db docs: fix incorrect model.fallback to model.fallbacks in Ollama config (#9384) (#9749)
Both the English and Chinese documentation had an incorrect configuration template that used 'fallback' instead of 'fallbacks' in the agents.defaults.model config.

Co-authored-by: damaozi <1811866786@qq.com>
2026-02-05 13:56:58 -05:00
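The fix here is a one-word rename in the documented template: agents.defaults.model takes fallbacks (plural), not fallback. A corrected sketch; the key for the primary model ref is assumed:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/gpt-oss:20b",  // assumed key name for the primary ref
        fallbacks: [                    // correct key per commit 679bb087db
          "ollama/llama3.3",
          "ollama/qwen2.5-coder:32b",
        ],
      },
    },
  },
}
```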