mirror of
https://github.com/openclaw/openclaw.git
synced 2026-04-10 08:41:13 +00:00
docs: refresh google cached content refs
@@ -257,6 +257,9 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**

- Example models: `google/gemini-3.1-pro-preview`, `google/gemini-3-flash-preview`
- Compatibility: legacy OpenClaw config using `google/gemini-3.1-flash-preview` is normalized to `google/gemini-3-flash-preview`
- CLI: `openclaw onboard --auth-choice gemini-api-key`
- Direct Gemini runs also accept `agents.defaults.models["google/<model>"].params.cachedContent` (or legacy `cached_content`) to forward a provider-native `cachedContents/...` handle; Gemini cache hits surface as OpenClaw `cacheRead`
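The legacy-ID normalization above can be sketched as follows. This is an illustrative helper, not OpenClaw's actual code; the alias table only contains the one mapping the docs describe.

```typescript
// Hypothetical sketch: normalize legacy Gemini model IDs to their current
// names. The mapping mirrors the compatibility note above; names are
// illustrative, not OpenClaw internals.
const LEGACY_MODEL_ALIASES: Record<string, string> = {
  "google/gemini-3.1-flash-preview": "google/gemini-3-flash-preview",
};

function normalizeModelId(modelId: string): string {
  // IDs without a known alias pass through unchanged.
  return LEGACY_MODEL_ALIASES[modelId] ?? modelId;
}
```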
### Google Vertex and Gemini CLI
@@ -83,6 +83,36 @@ retry.

| Web search (Grounding) | Yes |
| Thinking/reasoning | Yes (Gemini 3.1+) |
## Direct Gemini cache reuse

For direct Gemini API runs (`api: "google-generative-ai"`), OpenClaw now passes a configured `cachedContent` handle through to Gemini requests.

- Configure per-model or global params with either `cachedContent` or legacy `cached_content`
- If both are present, `cachedContent` wins
- Example value: `cachedContents/prebuilt-context`
- Gemini cache-hit usage is normalized into OpenClaw `cacheRead` from upstream `cachedContentTokenCount`
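The precedence rule above (the camelCase key wins over the legacy snake_case key) can be sketched as a small resolver. The key names come from the docs; the surrounding types and function name are illustrative.

```typescript
// Hypothetical sketch of resolving the cached-content handle from model
// params; only the two key spellings documented above are considered.
interface ModelParams {
  cachedContent?: string;
  cached_content?: string; // legacy spelling
}

function resolveCachedContent(params: ModelParams): string | undefined {
  // `cachedContent` wins when both spellings are present.
  return params.cachedContent ?? params.cached_content;
}
```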
|
||||
Example:

```json5
{
  agents: {
    defaults: {
      models: {
        "google/gemini-2.5-pro": {
          params: {
            cachedContent: "cachedContents/prebuilt-context",
          },
        },
      },
    },
  },
}
```
## Image generation

The bundled `google` image-generation provider defaults to

@@ -127,6 +127,17 @@ stops injecting those OpenRouter-specific Anthropic cache markers.
If the provider does not support this cache mode, `cacheRetention` has no effect.

### Google Gemini direct API

- Direct Gemini transport (`api: "google-generative-ai"`) reports cache hits through upstream `cachedContentTokenCount`; OpenClaw maps that to `cacheRead`.
- If you already have a Gemini cached-content handle, you can pass it through as `params.cachedContent` (or legacy `params.cached_content`) on the configured model.
- This is separate from Anthropic/OpenAI prompt-prefix caching. OpenClaw is forwarding a provider-native cached-content reference, not synthesizing cache markers.
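The `cachedContentTokenCount` to `cacheRead` mapping can be sketched as below. This is an illustrative sketch, not OpenClaw's actual code; it assumes a Gemini-style usage payload in which `promptTokenCount` includes the cached tokens, so the non-cached input is the difference.

```typescript
// Hypothetical sketch of normalizing Gemini usage into a cacheRead-style
// shape. Field names on the input come from the Gemini API; the output
// shape and function name are illustrative.
interface GeminiUsage {
  promptTokenCount?: number;
  cachedContentTokenCount?: number;
}

interface NormalizedUsage {
  input: number; // prompt tokens not served from the cache
  cacheRead: number; // tokens served from cached content
}

function normalizeUsage(usage: GeminiUsage): NormalizedUsage {
  const cacheRead = usage.cachedContentTokenCount ?? 0;
  return {
    // Assumption: promptTokenCount includes cached tokens, so subtract.
    input: (usage.promptTokenCount ?? 0) - cacheRead,
    cacheRead,
  };
}
```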
## OpenClaw cache-stability guards

OpenClaw also keeps several cache-sensitive payload shapes deterministic before