Mirror of https://github.com/openclaw/openclaw.git, synced 2026-05-06 06:30:42 +00:00
fix(lmstudio): allow keyless local onboarding
@@ -13,6 +13,7 @@ Docs: https://docs.openclaw.ai

 ### Fixes

 - Agents/Bedrock: stop heartbeat runs from persisting blank user transcript turns and repair existing blank user text messages before replay, preventing AWS Bedrock `ContentBlock` blank-text validation failures. Fixes #72640 and #72622. Thanks @goldzulu.
+- LM Studio: allow interactive onboarding to leave the API key blank for unauthenticated local servers, using local synthetic auth while clearing stale LM Studio auth profiles. Fixes #66937. Thanks @olamedia.
 - Process/Windows: decode command stdout and stderr from raw bytes with console-codepage awareness, while preserving valid UTF-8 output and multibyte characters split across chunks. Fixes #50519. Thanks @iready, @kevinten10, @zhangyongjie1997, @knightplat-blip, @heiqishi666, and @slepybear.
 - Agents/bootstrap: dedupe hook-injected bootstrap context files by workspace-relative path and store normalized resolved paths so duplicate relative and absolute hook paths no longer depend on the process cwd. (#59344; fixes #59319; related #56721, #56725, and #57587) Thanks @koen666.
 - Agents/bootstrap: refresh cached workspace bootstrap snapshots on long-lived main-session turns when `AGENTS.md`, `SOUL.md`, `MEMORY.md`, or `TOOLS.md` change on disk, while preserving unchanged snapshot identity through the workspace file cache. (#64871; related #43901, #26497, #28594, #30896) Thanks @aimqwest and @mikejuyoon.
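The Process/Windows decoding fix above turns on one detail: bytes must be decoded incrementally so that a multibyte UTF-8 character split across two read chunks survives the boundary. A minimal sketch of that idea using the standard `TextDecoder` streaming API (an illustration only, not OpenClaw's actual decoder, which also handles console-codepage fallback):

```typescript
// Illustration of chunk-safe UTF-8 decoding. `decodeChunks` is a hypothetical
// helper, not an OpenClaw function; the codepage-fallback half of the fix is
// not shown here.
function decodeChunks(chunks: Uint8Array[]): string {
  const decoder = new TextDecoder("utf-8");
  let out = "";
  for (const chunk of chunks) {
    // stream: true buffers an incomplete trailing sequence until the next chunk
    out += decoder.decode(chunk, { stream: true });
  }
  // the final call flushes anything still buffered
  return out + decoder.decode();
}

// "é" (0xC3 0xA9) split across two chunks still decodes cleanly:
const parts = [new Uint8Array([0x63, 0x61, 0x66, 0xc3]), new Uint8Array([0xa9])];
console.log(decodeChunks(parts)); // "café"
```

Decoding each chunk independently (without `stream: true`) would instead emit a U+FFFD replacement character at the split point, which is the corruption the bullet describes.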
@@ -30,17 +30,13 @@ lms server start --port 1234

 If you are using the app, make sure you have JIT enabled for a smooth experience. Learn more in the [LM Studio JIT and TTL guide](https://lmstudio.ai/docs/developer/core/ttl-and-auto-evict).

-3. OpenClaw requires an LM Studio token value. Set `LM_API_TOKEN`:
+3. If LM Studio authentication is enabled, set `LM_API_TOKEN`:

 ```bash
 export LM_API_TOKEN="your-lm-studio-api-token"
 ```

-If LM Studio authentication is disabled, use any non-empty token value:
-
-```bash
-export LM_API_TOKEN="placeholder-key"
-```
+If LM Studio authentication is disabled, you can leave the API key blank during interactive OpenClaw setup.

 For LM Studio auth setup details, see [LM Studio Authentication](https://lmstudio.ai/docs/developer/core/authentication).
@@ -73,7 +69,7 @@ openclaw onboard \
   --auth-choice lmstudio
 ```

-Or specify base URL or model with API key:
+Or specify the base URL, model, and optional API key:

 ```bash
 openclaw onboard \
@@ -88,13 +84,14 @@ openclaw onboard \
 `--custom-model-id` takes the model key as returned by LM Studio (e.g. `qwen/qwen3.5-9b`), without
 the `lmstudio/` provider prefix.

-Non-interactive onboarding requires `--lmstudio-api-key` (or `LM_API_TOKEN` in env).
-For unauthenticated LM Studio servers, any non-empty token value works.
+For authenticated LM Studio servers, pass `--lmstudio-api-key` or set `LM_API_TOKEN`.
+For unauthenticated LM Studio servers, omit the key; OpenClaw stores a local non-secret marker.

 `--custom-api-key` remains supported for compatibility, but `--lmstudio-api-key` is preferred for LM Studio.
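The rule described above (authenticated servers need a key; a blank key means no stored secret and a local marker) can be sketched as a small decision helper. Everything here is illustrative: `resolveKeylessAuth` and the placeholder string are invented names for this sketch, not OpenClaw identifiers:

```typescript
// Hypothetical sketch of the blank-key rule described above. The helper name
// and placeholder value are invented for illustration, not OpenClaw's API.
const LOCAL_PLACEHOLDER = "lmstudio-local";

interface KeylessAuthDecision {
  storeCredential: boolean; // should an auth profile be written?
  discoveryApiKey: string;  // key used when probing the local /models endpoint
}

function resolveKeylessAuth(rawKey: string): KeylessAuthDecision {
  const key = rawKey.trim();
  if (key === "") {
    // Blank input: no secret is stored; discovery uses a synthetic local marker.
    return { storeCredential: false, discoveryApiKey: LOCAL_PLACEHOLDER };
  }
  // Non-empty input behaves like the pre-existing authenticated path.
  return { storeCredential: true, discoveryApiKey: key };
}
```

The point of the split is that an unauthenticated local server never ends up with a fake "secret" in the credential store, while discovery still sends a non-empty value where one is needed.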

-This writes `models.providers.lmstudio`, sets the default model to
-`lmstudio/<custom-model-id>`, and writes the `lmstudio:default` auth profile.
+This writes `models.providers.lmstudio` and sets the default model to
+`lmstudio/<custom-model-id>`. When you provide an API key, setup also writes the
+`lmstudio:default` auth profile.

 Interactive setup can prompt for an optional preferred load context length and applies it across the discovered LM Studio models it saves into config.
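For orientation, the `models.providers.lmstudio` entry that keyless setup writes plausibly has the shape below, inferred from the field names this commit's tests assert on (`baseUrl`, `api`, `apiKey`, `models`); the exact placeholder value is an assumption:

```typescript
// Shape sketch of the provider entry written by keyless onboarding, inferred
// from this commit's test expectations. Values here are illustrative; the
// apiKey is the local non-secret marker, not a real secret.
const lmstudioProviderPatch = {
  baseUrl: "http://localhost:1234/v1",
  api: "openai-completions",
  apiKey: "lmstudio-local-placeholder", // assumed marker string
  models: [{ id: "qwen3-8b-instruct" }],
  // note: no `auth` field is written on the keyless path
};
```

Per the tests, the keyless path also leaves `result.profiles` empty, so no `lmstudio:default` auth profile exists to go stale.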
@@ -147,7 +144,7 @@ Same behavior applies to these OpenAI-compatible local backends:

 ### LM Studio not detected

-Make sure LM Studio is running and that you set `LM_API_TOKEN` (for unauthenticated servers, any non-empty token value works):
+Make sure LM Studio is running. If authentication is enabled, also set `LM_API_TOKEN`:

 ```bash
 # Start via desktop app, or headless:
@@ -166,7 +163,7 @@ If setup reports HTTP 401, verify your API key:

 - Check that `LM_API_TOKEN` matches the key configured in LM Studio.
 - For LM Studio auth setup details, see [LM Studio Authentication](https://lmstudio.ai/docs/developer/core/authentication).
-- If your server does not require authentication, use any non-empty token value for `LM_API_TOKEN`.
+- If your server does not require authentication, leave the key blank during setup.

 ### Just-in-time model loading
@@ -69,6 +69,7 @@ export default definePluginEntry({
       const providerSetup = await loadProviderSetup();
       return await providerSetup.promptAndConfigureLmstudioInteractive({
         config: ctx.config,
+        agentDir: ctx.agentDir,
         prompter: ctx.prompter,
         secretInputMode: ctx.secretInputMode,
         allowSecretRefPrompt: ctx.allowSecretRefPrompt,
@@ -702,6 +702,126 @@ describe("lmstudio setup", () => {
     ]);
   });

+  it("interactive setup accepts a blank API key for unauthenticated local LM Studio", async () => {
+    const { prompter, text } = createQueuedWizardPrompterHarness([
+      "http://localhost:1234/api/v1/",
+      "",
+      "",
+    ]);
+
+    const result = await promptAndConfigureLmstudioInteractive({
+      config: buildConfig(),
+      prompter,
+    });
+
+    expect(text).toHaveBeenCalledTimes(3);
+    expect(fetchLmstudioModelsMock).toHaveBeenCalledWith({
+      baseUrl: "http://localhost:1234/v1",
+      apiKey: LMSTUDIO_LOCAL_API_KEY_PLACEHOLDER,
+      timeoutMs: 5000,
+    });
+    expect(removeProviderAuthProfilesWithLockMock).toHaveBeenCalledWith({
+      provider: "lmstudio",
+      agentDir: undefined,
+    });
+    expect(result.profiles).toEqual([]);
+    expect(result.configPatch?.models?.providers?.lmstudio).toMatchObject({
+      baseUrl: "http://localhost:1234/v1",
+      api: "openai-completions",
+      apiKey: LMSTUDIO_LOCAL_API_KEY_PLACEHOLDER,
+      models: [
+        {
+          id: "qwen3-8b-instruct",
+        },
+      ],
+    });
+    expect(result.configPatch?.models?.providers?.lmstudio).not.toHaveProperty("auth");
+  });
+
+  it("interactive setup uses existing Authorization headers when the API key is blank", async () => {
+    const config = {
+      models: {
+        providers: {
+          lmstudio: {
+            baseUrl: "http://localhost:1234/v1",
+            api: "openai-completions",
+            apiKey: "stale-config-key",
+            auth: "api-key",
+            headers: {
+              Authorization: "Bearer proxy-token",
+            },
+            models: [],
+          },
+        },
+      },
+    } as OpenClawConfig;
+    const { prompter } = createQueuedWizardPrompterHarness([
+      "http://localhost:1234/api/v1/",
+      "",
+      "",
+    ]);
+
+    const result = await promptAndConfigureLmstudioInteractive({
+      config,
+      prompter,
+    });
+
+    expect(fetchLmstudioModelsMock).toHaveBeenCalledWith({
+      baseUrl: "http://localhost:1234/v1",
+      apiKey: undefined,
+      headers: {
+        Authorization: "Bearer proxy-token",
+      },
+      timeoutMs: 5000,
+    });
+    expect(removeProviderAuthProfilesWithLockMock).toHaveBeenCalledWith({
+      provider: "lmstudio",
+      agentDir: undefined,
+    });
+    expect(result.profiles).toEqual([]);
+    expect(result.configPatch?.models?.providers?.lmstudio).toMatchObject({
+      baseUrl: "http://localhost:1234/v1",
+      api: "openai-completions",
+      headers: {
+        Authorization: "Bearer proxy-token",
+      },
+      models: [
+        {
+          id: "qwen3-8b-instruct",
+        },
+      ],
+    });
+    expect(result.configPatch?.models?.providers?.lmstudio).not.toHaveProperty("apiKey");
+    expect(result.configPatch?.models?.providers?.lmstudio).not.toHaveProperty("auth");
+  });
+
+  it("interactive setup without a wizard accepts a blank API key for local LM Studio", async () => {
+    const promptText = vi
+      .fn()
+      .mockResolvedValueOnce("http://localhost:1234/api/v1/")
+      .mockResolvedValueOnce("");
+
+    const result = await promptAndConfigureLmstudioInteractive({
+      config: buildConfig(),
+      promptText,
+    });
+
+    expect(fetchLmstudioModelsMock).toHaveBeenCalledWith({
+      baseUrl: "http://localhost:1234/v1",
+      apiKey: LMSTUDIO_LOCAL_API_KEY_PLACEHOLDER,
+      timeoutMs: 5000,
+    });
+    expect(removeProviderAuthProfilesWithLockMock).toHaveBeenCalledWith({
+      provider: "lmstudio",
+      agentDir: undefined,
+    });
+    expect(result.profiles).toEqual([]);
+    expect(result.configPatch?.models?.providers?.lmstudio).toMatchObject({
+      apiKey: LMSTUDIO_LOCAL_API_KEY_PLACEHOLDER,
+    });
+    expect(result.configPatch?.models?.providers?.lmstudio).not.toHaveProperty("auth");
+  });
+
   it("interactive setup overwrites existing config apiKey during re-auth", async () => {
     const config = {
       models: {
@@ -2,6 +2,7 @@ import {
+  removeProviderAuthProfilesWithLock,
   buildApiKeyCredential,
   ensureApiKeyFromEnvOrPrompt,
   hasConfiguredSecretInput,
   normalizeOptionalSecretInput,
   type OpenClawConfig,
   type SecretInput,
@@ -363,6 +364,7 @@ async function discoverLmstudioSetupModels(params: {
 /** Interactive LM Studio setup with connectivity and model-availability checks. */
 export async function promptAndConfigureLmstudioInteractive(params: {
   config: OpenClawConfig;
+  agentDir?: string;
   prompter?: WizardPrompter;
   secretInputMode?: SecretInputMode;
   allowSecretRefPrompt?: boolean;
@@ -395,7 +397,7 @@ export async function promptAndConfigureLmstudioInteractive(params: {
         envLabel: LMSTUDIO_DEFAULT_API_KEY_ENV_VAR,
         promptMessage: `${LMSTUDIO_PROVIDER_LABEL} API key`,
         normalize: (value) => value.trim(),
-        validate: (value) => (value.trim() ? undefined : "Required"),
+        validate: () => undefined,
         prompter: params.prompter,
         secretInputMode:
           params.allowSecretRefPrompt === false
@@ -406,30 +408,38 @@ export async function promptAndConfigureLmstudioInteractive(params: {
            credentialMode = mode;
          },
        })
-      : String(
-          await promptText({
+      : (
+          (await promptText({
            message: `${LMSTUDIO_PROVIDER_LABEL} API key`,
-            placeholder: "sk-...",
-            validate: (value) => (value?.trim() ? undefined : "Required"),
-          }),
+            placeholder: "sk-... (leave blank if auth is disabled)",
+            validate: () => undefined,
+          })) ?? ""
        ).trim();
-  const credential = params.prompter
-    ? buildApiKeyCredential(
-        PROVIDER_ID,
-        credentialInput ??
-          (implicitRefMode && autoRefEnvKey ? `\${${LMSTUDIO_DEFAULT_API_KEY_ENV_VAR}}` : apiKey),
-        undefined,
-        credentialMode
-          ? { secretInputMode: credentialMode }
-          : implicitRefMode && autoRefEnvKey
-            ? { secretInputMode: "ref" }
-            : undefined,
-      )
-    : {
-        type: "api_key" as const,
-        provider: PROVIDER_ID,
-        key: apiKey,
-      };
+  const normalizedApiKey = normalizeOptionalSecretInput(apiKey);
+  const credentialSource =
+    credentialInput ??
+    (implicitRefMode && autoRefEnvKey ? `\${${LMSTUDIO_DEFAULT_API_KEY_ENV_VAR}}` : apiKey);
+  const shouldStoreCredential = params.prompter
+    ? credentialMode === "ref" || hasConfiguredSecretInput(credentialSource)
+    : normalizedApiKey !== undefined;
+  const credential = shouldStoreCredential
+    ? params.prompter
+      ? buildApiKeyCredential(
+          PROVIDER_ID,
+          credentialSource,
+          undefined,
+          credentialMode
+            ? { secretInputMode: credentialMode }
+            : implicitRefMode && autoRefEnvKey
+              ? { secretInputMode: "ref" }
+              : undefined,
+        )
+      : {
+          type: "api_key" as const,
+          provider: PROVIDER_ID,
+          key: normalizedApiKey ?? apiKey,
+        }
+    : undefined;
   const existingProvider = params.config.models?.providers?.[PROVIDER_ID];
   // Auth setup updates auth/profile/provider model fields but does not mutate
   // user-provided header overrides. Runtime request assembly is the source of truth for auth.
@@ -439,9 +449,19 @@ export async function promptAndConfigureLmstudioInteractive(params: {
     env: process.env,
     headers: persistedHeaders,
   });
+  const hasAuthorizationHeader = hasLmstudioAuthorizationHeader(resolvedHeaders);
+  const setupDiscoveryApiKey =
+    normalizedApiKey ??
+    (shouldUseLmstudioApiKeyPlaceholder({
+      hasModels: true,
+      resolvedApiKey: undefined,
+      hasAuthorizationHeader,
+    })
+      ? LMSTUDIO_LOCAL_API_KEY_PLACEHOLDER
+      : undefined);
   const setupDiscovery = await discoverLmstudioSetupModels({
     baseUrl,
-    apiKey,
+    apiKey: setupDiscoveryApiKey,
     ...(resolvedHeaders ? { headers: resolvedHeaders } : {}),
     timeoutMs: 5000,
   });
@@ -475,21 +495,29 @@ export async function promptAndConfigureLmstudioInteractive(params: {
   const defaultModel = setupDiscovery.value.defaultModel;
   const persistedApiKey =
     resolvePersistedLmstudioApiKey({
-      currentApiKey: existingProvider?.apiKey,
-      explicitAuth: resolveLmstudioProviderAuthMode(apiKey),
-      fallbackApiKey: LMSTUDIO_DEFAULT_API_KEY_ENV_VAR,
+      currentApiKey: normalizedApiKey ? existingProvider?.apiKey : undefined,
+      explicitAuth: resolveLmstudioProviderAuthMode(normalizedApiKey),
+      fallbackApiKey: normalizedApiKey ? LMSTUDIO_DEFAULT_API_KEY_ENV_VAR : undefined,
       preferFallbackApiKey: true,
       hasModels: discoveredModels.length > 0,
-      hasAuthorizationHeader: hasLmstudioAuthorizationHeader(resolvedHeaders),
-    }) ?? LMSTUDIO_DEFAULT_API_KEY_ENV_VAR;
+      hasAuthorizationHeader,
+    }) ?? (normalizedApiKey ? LMSTUDIO_DEFAULT_API_KEY_ENV_VAR : undefined);
+  if (!credential) {
+    await removeProviderAuthProfilesWithLock({
+      provider: PROVIDER_ID,
+      agentDir: params.agentDir,
+    });
+  }

   return {
-    profiles: [
-      {
-        profileId: `${PROVIDER_ID}:default`,
-        credential,
-      },
-    ],
+    profiles: credential
+      ? [
+          {
+            profileId: `${PROVIDER_ID}:default`,
+            credential,
+          },
+        ]
+      : [],
     configPatch: {
       agents: {
         defaults: {