fix: hide adaptive think option for GPT models

Author: Peter Steinberger
Date: 2026-04-21 06:16:10 +01:00
Parent: 0da5e0e34e
Commit: e4adb0b0e3
18 changed files with 136 additions and 40 deletions

@@ -43,6 +43,7 @@ For model selection rules, see [/concepts/models](/concepts/models).
`matchesContextOverflowError`, `classifyFailoverReason`,
`isCacheTtlEligible`, `buildMissingAuthMessage`, `suppressBuiltInModel`,
`augmentModelCatalog`, `isBinaryThinking`, `supportsXHighThinking`,
`supportsAdaptiveThinking`,
`resolveDefaultThinkingLevel`, `applyConfigDefaults`, `isModernModelRef`,
`prepareRuntimeAuth`, `resolveUsageAuth`, `fetchUsageSnapshot`, and
`onModelSelected`.
@@ -133,6 +134,7 @@ Typical split:
discovery and config merging
- `isBinaryThinking`: provider owns binary on/off thinking UX
- `supportsXHighThinking`: provider opts selected models into `xhigh`
- `supportsAdaptiveThinking`: provider opts selected models into `adaptive`
- `resolveDefaultThinkingLevel`: provider owns default `/think` policy for a
model family
- `applyConfigDefaults`: provider applies provider-specific global defaults
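The `supportsAdaptiveThinking` hook added here follows the same pattern as `supportsXHighThinking`: the provider decides per model whether a thinking level is offered, and the UI hides the option when it returns false. A minimal sketch of that shape, using hypothetical type and predicate names that are assumptions rather than the actual codebase API:

```typescript
// Hypothetical model reference shape; the real type lives in the codebase.
type ModelRef = { provider: string; id: string };

// Sketch of a provider capability hook: opts selected models into the
// "adaptive" thinking level. Per this commit, GPT models are excluded,
// so the adaptive option is hidden for them. The allow-list below is
// an illustrative assumption, not the shipped policy.
function supportsAdaptiveThinking(model: ModelRef): boolean {
  // GPT models never expose adaptive thinking.
  if (model.id.startsWith("gpt-")) return false;
  // Assumed: only certain providers opt in at all.
  return model.provider === "anthropic";
}

console.log(supportsAdaptiveThinking({ provider: "openai", id: "gpt-5" }));
console.log(supportsAdaptiveThinking({ provider: "anthropic", id: "claude-opus" }));
```

Keeping each capability behind a small predicate like this lets the model-selection UI stay provider-agnostic: it only asks "does this model support level X?" and never hardcodes model families.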