mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 11:10:45 +00:00
fix(openai-completions): enable local streaming usage compat (#68711) (thanks @gaineyllc)
@@ -15,6 +15,10 @@ OpenClaw can also **auto-discover** available models from SGLang when you opt
in with `SGLANG_API_KEY` (any value works if your server does not enforce auth)
and you do not define an explicit `models.providers.sglang` entry.

OpenClaw treats `sglang` as a local OpenAI-compatible provider that supports
streamed usage accounting, so status/context token counts can update from
`stream_options.include_usage` responses.

## Getting started

<Steps>
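The streamed usage accounting described above rides on the standard OpenAI-compatible `stream_options` request field. As a minimal sketch (the base URL, model name, and prompt are placeholders, not values from this repo), this is the shape of a streaming request that asks the server to include usage in the final chunk:

```python
import json

# Hypothetical request payload for a local OpenAI-compatible server
# (SGLang here, but the shape is identical for any compatible backend).
payload = {
    "model": "my-local-model",  # placeholder: use whatever model the server serves
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
    # Ask the server to append a usage object to the stream.
    "stream_options": {"include_usage": True},
}

print(json.dumps(payload, indent=2))
```

A client would POST this body to the server's `/v1/chat/completions` endpoint; servers that honor `include_usage` then report token counts in the stream rather than only on non-streaming responses.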
@@ -12,6 +12,10 @@ vLLM can serve open-source (and some custom) models via an **OpenAI-compatible**
OpenClaw can also **auto-discover** available models from vLLM when you opt in with `VLLM_API_KEY` (any value works if your server does not enforce auth) and you do not define an explicit `models.providers.vllm` entry.

OpenClaw treats `vllm` as a local OpenAI-compatible provider that supports
streamed usage accounting, so status/context token counts can update from
`stream_options.include_usage` responses.

| Property | Value |
| ---------------- | ---------------------------------------- |
| Provider ID | `vllm` |
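On the response side, per the OpenAI streaming spec, a server honoring `stream_options.include_usage` emits one extra chunk at the end of the stream carrying a `usage` object and an empty `choices` array. A hedged sketch of how a client could pick token counts out of that chunk (the chunk values below are illustrative, not captured from a real server):

```python
import json

# Illustrative final chunk from an OpenAI-compatible streaming response
# when `include_usage` is set: empty `choices`, populated `usage`.
final_chunk = json.loads(
    '{"id": "chatcmpl-1", "object": "chat.completion.chunk",'
    ' "choices": [],'
    ' "usage": {"prompt_tokens": 12, "completion_tokens": 34, "total_tokens": 46}}'
)

usage = final_chunk.get("usage")
if usage and not final_chunk["choices"]:
    # This is the point where a client would refresh its
    # status/context token counters from the stream.
    print(usage["total_tokens"])  # → 46
```

Non-usage chunks have a non-empty `choices` array and (on many servers) no `usage` key at all, so the empty-`choices` check distinguishes the accounting chunk from content deltas.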