mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 18:30:44 +00:00
docs: prune recent additions for readability
@@ -268,11 +268,9 @@ OpenClaw supports Anthropic's prompt caching feature for API-key auth.
 </Accordion>
 
-<Accordion title="Claude Opus 4.7 1M context normalization">
-
-Claude Opus 4.7 (`anthropic/claude-opus-4.7`) and its `claude-cli` variant are normalized to a 1M context window in resolved runtime metadata and active-agent status/context reporting. You do not need `params.context1m: true` for Opus 4.7; it no longer inherits the stale 200k fallback.
-
-Compaction and overflow handling use the 1M window automatically. Other Anthropic models keep their published limits.
-
+<Accordion title="Claude Opus 4.7 1M context">
+
+`anthropic/claude-opus-4.7` and its `claude-cli` variant have a 1M context
+window by default — no `params.context1m: true` needed.
 
 </Accordion>
 </AccordionGroup>
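For a model that still needs the explicit 1M opt-in, the `params.context1m` flag mentioned above is the documented knob. A minimal sketch of where it would sit; only `params.context1m` comes from the docs, the surrounding key names are illustrative assumptions, not OpenClaw's exact schema:

```json5
{
  // Opus 4.7 needs no flag: the 1M window is its default.
  model: "anthropic/claude-opus-4.7",

  // Shape the explicit opt-in would take for a model that still requires it
  // (hypothetical placement):
  // params: { context1m: true },
}
```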
@@ -174,8 +174,6 @@ If you prefer explicit config instead of auto-discovery:
 }
 ```
 
-Context-window metadata for discovered Mantle models uses known published limits when available and falls back conservatively for unlisted models, so compaction and overflow handling behave correctly for newer entries without overstating unknown models.
-
 </Accordion>
 
 <Accordion title="Relationship to Amazon Bedrock provider">
@@ -104,9 +104,11 @@ Interactive setup can prompt for an optional preferred load context length and a
 
 ### Streaming usage compatibility
 
-OpenClaw marks LM Studio as streaming-usage compatible, so token accounting no longer degrades to unknown or stale totals on streamed completions. OpenClaw also recovers token counts from llama.cpp-style `timings.prompt_n` / `timings.predicted_n` metadata when LM Studio does not emit an OpenAI-shaped `usage` object.
+LM Studio is streaming-usage compatible. When it does not emit an OpenAI-shaped
+`usage` object, OpenClaw recovers token counts from llama.cpp-style
+`timings.prompt_n` / `timings.predicted_n` metadata instead.
 
-Other OpenAI-compatible local backends covered by the same behavior:
+Same behavior applies to these OpenAI-compatible local backends:
 
 - vLLM
 - SGLang
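The token-count fallback described in that hunk can be sketched roughly as follows. This is a hedged illustration: the helper name `recoverUsage` and the input shape are assumptions for the sketch, not OpenClaw's actual internals; only the `usage` and `timings.prompt_n` / `timings.predicted_n` field names come from the docs.

```typescript
// Prefer an OpenAI-shaped `usage` object; otherwise fall back to
// llama.cpp-style `timings` metadata, as the docs describe.
interface StreamChunk {
  usage?: { prompt_tokens: number; completion_tokens: number };
  timings?: { prompt_n?: number; predicted_n?: number };
}

function recoverUsage(
  chunk: StreamChunk
): { prompt: number; completion: number } | undefined {
  if (chunk.usage) {
    // Standard OpenAI-compatible accounting.
    return {
      prompt: chunk.usage.prompt_tokens,
      completion: chunk.usage.completion_tokens,
    };
  }
  const t = chunk.timings;
  if (t && t.prompt_n !== undefined && t.predicted_n !== undefined) {
    // llama.cpp-style recovery path.
    return { prompt: t.prompt_n, completion: t.predicted_n };
  }
  return undefined; // no token accounting available for this chunk
}
```

The same two-step preference order would apply to the other listed backends (vLLM, SGLang), since they speak the same OpenAI-compatible transport.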
@@ -349,9 +349,9 @@ Config lives under `plugins.entries.moonshot.config.webSearch`:
 </Accordion>
 
 <Accordion title="Tool call id sanitization">
-Moonshot Kimi serves native tool_call ids shaped like `functions.<name>:<index>` on the OpenAI-compatible transport. OpenClaw no longer strict-sanitizes these ids for Moonshot, so multi-turn agentic flows through Kimi K2.6 keep working past 2-3 tool-calling rounds when the serving layer matches mangled ids against the original tool definitions.
+Moonshot Kimi serves tool_call ids shaped like `functions.<name>:<index>`. OpenClaw preserves them unchanged so multi-turn tool calls keep working.
 
-If a custom OpenAI-compatible provider needs the previous behavior, set `sanitizeToolCallIds: true` on the provider entry. The flag lives on the shared `openai-compatible` replay family; Moonshot is wired to the opt-out by default.
+To force strict sanitization on a custom OpenAI-compatible provider, set `sanitizeToolCallIds: true`:
 
 ```json5
 {
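A fuller hedged sketch of that opt-in: everything here except `sanitizeToolCallIds: true` (the documented flag, set on the provider entry per the removed paragraph) is a hypothetical provider definition, with assumed key names and URL:

```json5
{
  providers: {
    // Hypothetical custom OpenAI-compatible provider entry.
    "my-custom-provider": {
      baseUrl: "https://llm.example.com/v1",
      // Documented flag: restore strict tool_call id sanitization.
      sanitizeToolCallIds: true,
    },
  },
}
```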