fix: improve MiniMax coding-plan parsing (#52349) (thanks @IVY-AI-gif)

Peter Steinberger
2026-04-04 12:31:45 +01:00
parent dd9c9dac53
commit 9eb8184f36
2 changed files with 60 additions and 0 deletions


@@ -40,6 +40,7 @@ Docs: https://docs.openclaw.ai
- Node exec approvals: keep node-host `system.run` approvals bound to the prepared execution plan, so script-drift revalidation still runs after agent-side approval forwarding.
- Models/MiniMax: honor `MINIMAX_API_HOST` for implicit bundled MiniMax provider catalogs so China-hosted API-key setups pick `api.minimaxi.com/anthropic` without manual provider config. (#34524) Thanks @caiqinghua.
- Usage/MiniMax: invert remaining-style `usage_percent` fields when MiniMax reports only remaining percentage data, so usage bars stop showing nearly-full remaining quota as nearly-exhausted usage. (#60254) Thanks @jwchmodx.
- Usage/MiniMax: prefer the chat-model `model_remains` entry and derive Coding Plan window labels from MiniMax interval timestamps so MiniMax usage snapshots stop picking zero-budget media rows and misreporting 4h windows as `5h`. (#52349) Thanks @IVY-AI-gif.
- MiniMax: advertise image input on bundled `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` model definitions so image-capable flows can route through the M2.7 family correctly. (#54843) Thanks @MerlinMiao88888888.
- Agents/exec approvals: let `exec-approvals.json` agent security override stricter gateway tool defaults so approved subagents can use `security: "full"` without falling back to allowlist enforcement again. (#60310) Thanks @lml2468.
- Tasks/maintenance: mark stale cron runs and CLI tasks backed only by long-lived chat sessions as lost again so task cleanup does not keep dead work alive indefinitely. (#60310) Thanks @lml2468.
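The MiniMax entry-selection and window-label behavior described above can be sketched as follows. This is a minimal illustration inferred from the test fixtures below, not the actual openclaw source: `ModelRemain`, `pickUsageEntry`, and `windowLabel` are hypothetical names, and the chat-model match on a `MiniMax-M` prefix is an assumption.

```typescript
// Shape of one MiniMax `model_remains` record (field names from the API payload).
interface ModelRemain {
  model_name: string;
  current_interval_total_count: number;
  current_interval_usage_count: number;
  start_time?: number; // interval start, epoch ms
  end_time?: number;   // interval end, epoch ms
}

// Prefer the MiniMax chat-model row; otherwise fall back to the first
// record with a non-zero budget, so zero-budget media rows (speech,
// image, video) are never picked for the usage snapshot.
function pickUsageEntry(remains: ModelRemain[]): ModelRemain | undefined {
  return (
    remains.find((r) => r.model_name.startsWith("MiniMax-M")) ??
    remains.find((r) => r.current_interval_total_count > 0)
  );
}

// Derive the window label from the interval timestamps instead of
// assuming a fixed 5h window.
function windowLabel(entry: ModelRemain): string | undefined {
  if (entry.start_time === undefined || entry.end_time === undefined) {
    return undefined;
  }
  const hours = (entry.end_time - entry.start_time) / 3_600_000;
  return `${Math.round(hours)}h`;
}
```

With the fixture below, `pickUsageEntry` skips the zero-budget `speech-hd` row, picks `MiniMax-M*`, and `windowLabel` yields `4h` from the 14,400,000 ms interval.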


@@ -157,6 +157,65 @@ describe("fetchMinimaxUsage", () => {
windows: [{ label: "2h", usedPercent: 40, resetAt: 1_700_000_100_000 }],
},
},
{
name: "prefers chat model entries from model_remains and derives window labels from timestamps",
payload: {
data: {
model_remains: [
{
model_name: "speech-hd",
current_interval_total_count: 0,
current_interval_usage_count: 0,
start_time: 1_774_180_800_000,
end_time: 1_774_195_200_000,
},
{
model_name: "MiniMax-M*",
current_interval_total_count: 600,
current_interval_usage_count: 595,
start_time: 1_774_180_800_000,
end_time: 1_774_195_200_000,
},
{
model_name: "image-01",
current_interval_total_count: 0,
current_interval_usage_count: 0,
start_time: 1_774_180_800_000,
end_time: 1_774_195_200_000,
},
],
},
},
expected: {
plan: "Coding Plan · MiniMax-M*",
windows: [{ label: "4h", usedPercent: 0.8333333333333334, resetAt: 1_774_195_200_000 }],
},
},
{
name: "falls back to the first non-zero model_remains record when no MiniMax chat entry exists",
payload: {
data: {
model_remains: [
{
model_name: "speech-hd",
current_interval_total_count: 0,
current_interval_usage_count: 0,
},
{
model_name: "video-01",
current_interval_total_count: 200,
current_interval_usage_count: 150,
start_time: 1_774_180_800_000,
end_time: 1_774_195_200_000,
},
],
},
},
expected: {
plan: "Coding Plan · video-01",
windows: [{ label: "4h", usedPercent: 25, resetAt: 1_774_195_200_000 }],
},
},
])("$name", async ({ payload, expected }) => {
await expectMinimaxUsageResult({ payload, expected });
});
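For reference on the `usedPercent` values in the fixtures above: the interval counts appear to be remaining-style (per the earlier changelog entry on inverting remaining-style `usage_percent` fields), so the used percentage is the inverted ratio. This formula is inferred from the fixtures, not taken from the source:

```typescript
// Inferred from the test fixtures: `usage_count` holds the remaining
// budget, so used percent is (total - remaining) / total * 100.
function usedPercent(total: number, remaining: number): number {
  return ((total - remaining) / total) * 100;
}
```

This reproduces both expectations: 600 total with 595 remaining gives roughly 0.83% used, and 200 total with 150 remaining gives 25%.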