fix(cli): fail empty local model probes

Peter Steinberger
2026-04-27 23:16:29 +01:00
parent 81390c643b
commit 48e91f09d5
4 changed files with 40 additions and 13 deletions


@@ -159,6 +159,7 @@ openclaw infer model run --local --model openai/gpt-4.1 --prompt "Reply with exa
Notes:
- Local `model run` is the narrowest CLI smoke test for provider/model/auth health because it sends only the supplied prompt to the selected model.
- Local `model run` exits non-zero when the provider returns no text output, so unreachable local providers and empty completions do not look like successful probes.
- Use `model run --gateway` when you need to test Gateway routing, agent-runtime setup, or Gateway-managed provider state instead of the lean local completion path.
- `model auth login`, `model auth logout`, and `model auth status` manage saved provider auth state.
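
Because local `model run` exits non-zero on both unreachable providers and empty completions, a script can treat the command's exit status and output together as a single pass/fail probe. A minimal sketch, assuming the flags shown above; the `probe_model` helper name is ours, not part of the CLI:

```shell
#!/bin/sh
# probe_model runs the given command and fails if it exits non-zero
# OR produces no text output (belt-and-braces on top of the CLI's own
# empty-output check described above).
probe_model() {
  out="$("$@" 2>/dev/null)" || return 1  # non-zero exit -> failed probe
  [ -n "$out" ]                          # empty completion -> failed probe
}

# Hypothetical usage with the probe command from the docs above.
if probe_model openclaw infer model run --local \
    --model openai/gpt-4.1 --prompt "Reply with exactly: OK"; then
  echo "local provider probe: ok"
else
  echo "local provider probe: failed" >&2
fi
```

The helper is deliberately generic: it wraps any command, so the same check works for `model run --gateway` probes without changes.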