| summary | read_when | title |
|---|---|---|
| Infer-first CLI for provider-backed model, image, audio, TTS, video, web, and embedding workflows | | Inference CLI |
Inference CLI
`openclaw infer` is the canonical headless surface for provider-backed inference workflows. `openclaw capability` remains supported as a compatibility alias. The tree intentionally exposes capability families, not raw gateway RPC names or raw agent tool ids.
Command tree
```
openclaw infer
  list
  inspect
  model
    run
    list
    inspect
    providers
    auth login
    auth logout
    auth status
  image
    generate
    edit
    describe
    describe-many
    providers
  audio
    transcribe
    providers
  tts
    convert
    voices
    providers
    status
    enable
    disable
    set-provider
  video
    generate
    describe
    providers
  web
    search
    fetch
    providers
  embedding
    create
    providers
```
Transport
Supported transport flags:

- `--local`
- `--gateway`
Default transport is implicit auto at the command-family level:
- Stateless execution commands default to local.
- Gateway-managed state commands default to gateway.
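The per-family default rule above can be sketched as a small resolver. This is an illustrative Python sketch, not the actual openclaw implementation: the `resolve_transport` helper and the exact set of gateway-default commands are assumptions, and the rule that an explicit flag always overrides the family default is inferred from the flags being supported everywhere.

```python
# Command families assumed to default to the gateway transport because they
# read or write gateway-managed state (the doc only confirms "tts status";
# the rest of the tts management set is an assumption).
GATEWAY_DEFAULT = {
    "tts status", "tts enable", "tts disable", "tts set-provider",
}

def resolve_transport(command: str, local_flag: bool = False,
                      gateway_flag: bool = False) -> str:
    """Pick a transport: explicit flag first, then the per-family default."""
    if local_flag:
        return "local"
    if gateway_flag:
        return "gateway"
    # Stateless execution commands default to local; gateway-managed
    # state commands default to gateway.
    return "gateway" if command in GATEWAY_DEFAULT else "local"

print(resolve_transport("image generate"))               # stateless family
print(resolve_transport("tts status"))                   # gateway-managed state
print(resolve_transport("tts status", local_flag=True))  # explicit flag wins
```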
Examples:

```
openclaw infer model run --prompt "hello" --json
openclaw infer image generate --prompt "friendly lobster" --json
openclaw infer tts status --json
openclaw infer embedding create --text "hello world" --json
```
JSON output
Capability commands normalize JSON output under a shared envelope:
```json
{
  "ok": true,
  "capability": "image.generate",
  "transport": "local",
  "provider": "openai",
  "model": "gpt-image-1",
  "attempts": [],
  "outputs": []
}
```
Top-level fields are stable: `ok`, `capability`, `transport`, `provider`, `model`, `attempts`, `outputs`, `error`.
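A consumer piping `--json` output can rely on the stable top-level fields. The following is a minimal Python sketch of reading the envelope shown above; the success branch uses only fields confirmed by this page, while the failure branch's use of `error` as the counterpart to `ok: false` is an assumption.

```python
import json

# Sample envelope as emitted by a capability command with --json.
raw = """
{
  "ok": true,
  "capability": "image.generate",
  "transport": "local",
  "provider": "openai",
  "model": "gpt-image-1",
  "attempts": [],
  "outputs": []
}
"""

envelope = json.loads(raw)

if envelope["ok"]:
    # Successful runs carry their results in "outputs".
    print(f'{envelope["capability"]} via '
          f'{envelope["provider"]}/{envelope["model"]}: '
          f'{len(envelope["outputs"])} output(s)')
else:
    # Failed runs are assumed to populate "error" instead.
    print("failed:", envelope.get("error"))
```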
Notes
- `model run` reuses the agent runtime, so provider/model overrides behave like normal agent execution.
- `tts status` defaults to gateway because it reflects gateway-managed TTS state.