mirror of
https://github.com/openclaw/openclaw.git
synced 2026-05-06 21:10:43 +00:00
fix(cli): pass instructions for local openai-codex model probes (#76470)
* fix infer model run codex instructions
* docs changelog for codex model probe fix
* fix codex model probe instructions only
* docs: note codex model probe instruction shim
* chore: rerun proof gate

---------

Co-authored-by: Le LI <leli@LedeMacBook-Air.local>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
@@ -164,7 +164,8 @@ openclaw infer model run --local --model ollama/qwen2.5vl:7b --prompt "Describe
 Notes:
 
-- Local `model run` is the narrowest CLI smoke for provider/model/auth health because it sends only the supplied prompt to the selected model.
+- Local `model run` is the narrowest CLI smoke for provider/model/auth health because, for non-Codex providers, it sends only the supplied prompt to the selected model.
+- `openai-codex/*` local probes are the narrow exception: OpenClaw adds a minimal system instruction so the Codex Responses transport can populate its required `instructions` field, without adding full agent context, tools, memory, or session transcript.
 - Local `model run --file` keeps that lean path and attaches image content directly to the single user message. Common image files such as PNG, JPEG, and WebP work when their MIME type is detected as `image/*`; unsupported or unrecognized files fail before the provider is called.
 - `model run --file` is best when you want to test the selected multimodal text model directly. Use `infer image describe` when you want OpenClaw's image-understanding provider selection and default image-model routing.
 - The selected model must support image input; text-only models may reject the request at the provider layer.