fix(openai): keep gpt chat replies concise

commit 0a21eebf56 (parent af81ee9fee)
Author: Peter Steinberger
Date: 2026-04-05 11:16:12 +01:00
3 changed files with 10 additions and 0 deletions


@@ -68,6 +68,8 @@ Docs: https://docs.openclaw.ai
- Gateway/macOS: recover installed-but-unloaded LaunchAgents during `openclaw gateway start` and `restart`, while still preferring live unmanaged gateways during restart recovery. (#43766) Thanks @HenryC-3.
- Auth/failover: persist selected fallback overrides before retrying, shorten `auth_permanent` lockouts, and refresh websocket/shared-auth sessions only when real auth changes occur so retries and secret rotations behave predictably. (#60404, #60323, #60387)
- Cron: replay interrupted recurring jobs on the first gateway restart instead of waiting for a second restart. (#60583) Thanks @joelnishanth.
- Agents/GPT: add explicit work-item lifecycle events for embedded runs, use them to surface real progress more reliably, and stop counting tool-started turns as planning-only retries.
- Plugins/OpenAI: tune the OpenAI prompt overlay for live-chat cadence so GPT replies stay shorter, more human, and less wall-of-text by default.
- Plugins/media understanding: enable bundled Groq and Deepgram providers by default so configured transcription models work without extra plugin activation config. (#59982) Thanks @yxjsxy.
- Plugins/Kimi Coding: parse tagged tool calls and keep Anthropic-native tool payloads so Kimi coding endpoints execute tools instead of echoing raw markup. (#60051, #60391)
- Tools/web_search (Kimi): when `tools.web.search.kimi.baseUrl` is unset, inherit native Moonshot chat `baseUrl` (`.ai` / `.cn`) so China console keys authenticate on the same host as chat. Fixes #44851. (#56769) Thanks @tonga54.
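The OpenAI prompt-overlay change above is exercised by the test hunk further down, which expects a `beforePromptBuild` hook to return `{ appendSystemContext: OPENAI_FRIENDLY_PROMPT_OVERLAY }`. A minimal sketch of that shape, assuming a simplified hook signature (the real plugin API likely takes more context; only the `appendSystemContext` field and the overlay constant name come from this commit):

```typescript
// Sketch only: the hook and input/result shapes are assumptions
// inferred from the test hunk in this commit, not the actual plugin API.
const OPENAI_FRIENDLY_PROMPT_OVERLAY = [
  "This is a live chat, not a memo.",
  "Avoid walls of text, long preambles, and repetitive restatement.",
].join("\n");

interface PromptBuildInput {
  prompt: string;
  messages: unknown[];
}

interface PromptBuildResult {
  appendSystemContext: string;
}

// Append the chat-cadence overlay to the system context before each build.
async function beforePromptBuild(
  _input: PromptBuildInput,
): Promise<PromptBuildResult> {
  return { appendSystemContext: OPENAI_FRIENDLY_PROMPT_OVERLAY };
}
```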


@@ -252,6 +252,10 @@ describe("openai plugin", () => {
    expect(openaiResult).toEqual({
      appendSystemContext: OPENAI_FRIENDLY_PROMPT_OVERLAY,
    });
    expect(OPENAI_FRIENDLY_PROMPT_OVERLAY).toContain("This is a live chat, not a memo.");
    expect(OPENAI_FRIENDLY_PROMPT_OVERLAY).toContain(
      "Avoid walls of text, long preambles, and repetitive restatement.",
    );
    const codexResult = await beforePromptBuild?.(
      { prompt: "hello", messages: [] },


@@ -14,6 +14,10 @@ When the user is wrong or a plan is risky, say so kindly and directly.
Make reasonable assumptions when that unblocks progress, and state them briefly after acting.
Do not make the user do unnecessary work.
When tradeoffs matter, pause and present the best 2-3 options with a recommendation.
This is a live chat, not a memo.
Write like a thoughtful human teammate, not a policy document.
Default to short natural replies unless the user asks for depth.
Avoid walls of text, long preambles, and repetitive restatement.
Keep replies concise by default; friendly does not mean verbose.`;
export type OpenAIPromptOverlayMode = "friendly" | "off";
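The exported `OpenAIPromptOverlayMode` union suggests the overlay can be switched off. A small sketch of how such gating could look; `resolveOverlay` is a hypothetical helper invented here for illustration, not a function from this commit:

```typescript
type OpenAIPromptOverlayMode = "friendly" | "off";

// Hypothetical helper: return the overlay text only when the mode
// enables it, so "off" suppresses the appended system context.
function resolveOverlay(
  mode: OpenAIPromptOverlayMode,
  overlay: string,
): string | undefined {
  return mode === "friendly" ? overlay : undefined;
}
```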