Files
openclaw/extensions/github-copilot/openclaw.plugin.json
Eduardo Piva 75405f64d0 github-copilot: live catalog discovery via /models + add gpt-5.5
The plugin's `catalog.run` hook already exchanged a GitHub OAuth token
for a short-lived Copilot API token and resolved the per-account baseUrl,
but it returned `models: []` and the bundled openclaw runtime relied
entirely on the static manifest catalog. That meant:

- Static `contextWindow` values were a conservative 128k for every
  model, far below reality (gpt-5.4/5.5 are 400k, claude-opus-4.6/4.7
  internal variants are 1M, claude-sonnet-4 is 200k, etc.).
- Newly published Copilot models (gpt-5.5, gpt-5.1*, gemini-3-pro-preview,
  the claude-opus-*-1m internal variants, etc.) didn't appear at all
  until the manifest was patched.
- Per-account entitlement was invisible — every user saw the same
  hardcoded 22-model list regardless of plan.

Wire it up:

- Add `fetchCopilotModelCatalog` in `extensions/github-copilot/models.ts`.
  Calls `${baseUrl}/models` with the resolved Copilot API token and the
  same Editor-Version / Copilot-Integration-Id headers used elsewhere in
  the plugin. Maps each entry to a `ModelDefinitionConfig`:
  - `contextWindow` ← `capabilities.limits.max_context_window_tokens`
  - `maxTokens`     ← `capabilities.limits.max_output_tokens`
  - `input`         ← `["text", "image"]` if `supports.vision`, else `["text"]`
  - `reasoning`     ← `Array.isArray(supports.reasoning_effort) && supports.reasoning_effort.length > 0`
  - `api`           ← `anthropic-messages` for Anthropic vendor or claude*
                      ids; otherwise `openai-responses`
  Filters out non-chat objects (embeddings) and internal routers
  (`accounts/...` ids). Dedupes by id. 10s default timeout.

- Update the `catalog.run` hook in `extensions/github-copilot/index.ts`
  to call the new function after token-exchange and return the live
  results. On any HTTP/parse failure it falls back to `models: []`,
  which preserves the static manifest catalog as the visible fallback —
  no behavior regression for users with `discovery.enabled: false` or
  in offline scenarios.

- Bump `modelCatalog.discovery."github-copilot"` from `"static"` to
  `"refreshable"` in `openclaw.plugin.json` so the catalog hook is
  actually invoked at runtime. Without this the discovery infrastructure
  treats the provider as static-only and never calls `catalog.run`.

- Add `gpt-5.5` to the static manifest catalog and `DEFAULT_MODEL_IDS`
  with the correct values from the API (`contextWindow: 400000`,
  `maxTokens: 128000`, `reasoning: true`, multimodal). This means users
  on `discovery.enabled: false` still get gpt-5.5 visible without
  needing to override `models.providers.github-copilot.models` in their
  config.
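The mapping rules above can be sketched roughly as follows. This is an illustrative sketch, not the plugin's actual code: the `/models` entry shape (`capabilities.limits`, `capabilities.supports`, a `vendor` string, a `"embeddings"` type discriminator) is assumed from the description above, and the real payload may differ.

```typescript
// Assumed /models entry shape -- inferred from the commit description,
// not verified against the live Copilot API.
interface CopilotModelEntry {
  id: string;
  name?: string;
  vendor?: string;
  capabilities?: {
    type?: string; // assumed discriminator, e.g. "chat" vs "embeddings"
    limits?: { max_context_window_tokens?: number; max_output_tokens?: number };
    supports?: { vision?: boolean; reasoning_effort?: unknown };
  };
}

interface ModelDefinitionConfig {
  id: string;
  name: string;
  api: "anthropic-messages" | "openai-responses";
  input: string[];
  reasoning: boolean;
  contextWindow?: number;
  maxTokens?: number;
}

function mapCopilotModels(entries: CopilotModelEntry[]): ModelDefinitionConfig[] {
  const seen = new Set<string>();
  const out: ModelDefinitionConfig[] = [];
  for (const e of entries) {
    if (!e.id || seen.has(e.id)) continue;                 // dedupe by id, first wins
    if (e.capabilities?.type === "embeddings") continue;   // drop non-chat objects
    if (e.id.startsWith("accounts/")) continue;            // drop internal routers
    seen.add(e.id);
    const supports = e.capabilities?.supports ?? {};
    const limits = e.capabilities?.limits ?? {};
    out.push({
      id: e.id,
      name: e.name ?? e.id,
      api: e.vendor === "Anthropic" || e.id.startsWith("claude")
        ? "anthropic-messages"
        : "openai-responses",
      input: supports.vision ? ["text", "image"] : ["text"],
      reasoning: Array.isArray(supports.reasoning_effort) &&
        supports.reasoning_effort.length > 0,
      contextWindow: limits.max_context_window_tokens,
      maxTokens: limits.max_output_tokens,
    });
  }
  return out;
}
```

`fetchCopilotModelCatalog` would then be a thin wrapper that GETs `${baseUrl}/models` with the Copilot headers and feeds the parsed array through a mapper like this.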

Tests added (5, all passing alongside the existing 24):

- `fetchCopilotModelCatalog` maps a representative `/models` response
  (chat models incl. an internal 1M-context Anthropic variant, a router,
  an embedding) to the right `ModelDefinitionConfig` shape with real
  context windows.
- baseUrl trailing slash is normalized.
- Duplicate ids in the API response are deduped (first wins).
- Non-2xx HTTP raises so the caller can fall back to the static catalog.
- Empty token / baseUrl reject synchronously without calling fetch.
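The fallback the last two bullets exercise amounts to a thin try/catch around the fetch. A minimal sketch, with hypothetical signatures (the real hook types are not shown here) and the catalog fetcher injected so the failure path is testable:

```typescript
type CatalogResult = { models: unknown[] };

// Hypothetical shape of the catalog.run flow: exchange the GitHub OAuth
// token, then fetch the live catalog; any failure yields models: [],
// which leaves the static manifest catalog visible.
async function runCatalog(
  exchange: () => Promise<{ token: string; baseUrl: string }>,
  fetchCatalog: (token: string, baseUrl: string) => Promise<unknown[]>,
): Promise<CatalogResult> {
  try {
    const { token, baseUrl } = await exchange();
    return { models: await fetchCatalog(token, baseUrl) };
  } catch {
    return { models: [] }; // fall back to the static manifest catalog
  }
}
```

Because the empty result is indistinguishable from "discovery returned nothing", users with `discovery.enabled: false` or no network see exactly the pre-change behavior.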

Targeted run: `pnpm test extensions/github-copilot/models.test.ts` →
29/29 pass. `pnpm exec oxfmt --check extensions/github-copilot/` clean.
`pnpm tsgo:core` clean.

Real-world proof:

Built locally and dropped the resulting tarball into a downstream
container with `gh auth login --hostname github.com` (Copilot
subscription on the linked account). Before this change,
`openclaw models list --provider github-copilot` returned the 22-entry
static catalog with every entry showing 128k context. After this change,
the same command (with `--refresh`) returns 30 entries with API-accurate
context windows including the new gpt-5.1 family, the claude-opus-*-1m
variants, and the corrected `gemini-3*-preview` ids.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 21:55:18 -04:00


{
  "id": "github-copilot",
  "activation": {
    "onStartup": false
  },
  "enabledByDefault": true,
  "providers": ["github-copilot"],
  "providerEndpoints": [
    {
      "endpointClass": "github-copilot-native",
      "hostSuffixes": [".githubcopilot.com"]
    }
  ],
  "providerRequest": {
    "providers": {
      "github-copilot": {
        "family": "github-copilot"
      }
    }
  },
  "modelCatalog": {
    "providers": {
      "github-copilot": {
        "baseUrl": "https://api.individual.githubcopilot.com",
        "api": "openai-responses",
        "models": [
          {
            "id": "claude-haiku-4.5",
            "name": "Claude Haiku 4.5",
            "api": "anthropic-messages",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "claude-opus-4.5",
            "name": "Claude Opus 4.5",
            "api": "anthropic-messages",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "claude-opus-4.6",
            "name": "Claude Opus 4.6",
            "api": "anthropic-messages",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "claude-opus-4.7",
            "name": "Claude Opus 4.7",
            "api": "anthropic-messages",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "claude-sonnet-4",
            "name": "Claude Sonnet 4",
            "api": "anthropic-messages",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "claude-sonnet-4.5",
            "name": "Claude Sonnet 4.5",
            "api": "anthropic-messages",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "claude-sonnet-4.6",
            "name": "Claude Sonnet 4.6",
            "api": "anthropic-messages",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gemini-2.5-pro",
            "name": "Gemini 2.5 Pro",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gemini-3-flash",
            "name": "Gemini 3 Flash",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gemini-3.1-pro",
            "name": "Gemini 3.1 Pro",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-4.1",
            "name": "GPT-4.1",
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5-mini",
            "name": "GPT-5 mini",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5.2",
            "name": "GPT-5.2",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5.2-codex",
            "name": "GPT-5.2-Codex",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5.3-codex",
            "name": "GPT-5.3-Codex",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5.4",
            "name": "GPT-5.4",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5.5",
            "name": "GPT-5.5",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 400000,
            "maxTokens": 128000,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5.4-mini",
            "name": "GPT-5.4 mini",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "gpt-5.4-nano",
            "name": "GPT-5.4 nano",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "grok-code-fast-1",
            "name": "Grok Code Fast 1",
            "input": ["text"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "raptor-mini",
            "name": "Raptor mini",
            "input": ["text"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "goldeneye",
            "name": "Goldeneye",
            "input": ["text"],
            "contextWindow": 128000,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          }
        ]
      }
    },
    "discovery": {
      "github-copilot": "refreshable"
    }
  },
  "contracts": {
    "memoryEmbeddingProviders": ["github-copilot"]
  },
  "providerAuthEnvVars": {
    "github-copilot": ["COPILOT_GITHUB_TOKEN", "GH_TOKEN", "GITHUB_TOKEN"]
  },
  "providerAuthChoices": [
    {
      "provider": "github-copilot",
      "method": "device",
      "choiceId": "github-copilot",
      "choiceLabel": "GitHub Copilot",
      "choiceHint": "Device login with your GitHub account",
      "groupId": "copilot",
      "groupLabel": "Copilot",
      "groupHint": "GitHub + local proxy",
      "optionKey": "githubCopilotToken",
      "cliFlag": "--github-copilot-token",
      "cliOption": "--github-copilot-token <token>",
      "cliDescription": "GitHub Copilot OAuth token"
    }
  ],
  "configSchema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {
      "discovery": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "enabled": { "type": "boolean" }
        }
      }
    }
  },
  "uiHints": {
    "discovery": {
      "label": "Model Discovery",
      "help": "Plugin-owned controls for GitHub Copilot model auto-discovery."
    },
    "discovery.enabled": {
      "label": "Enable Discovery",
      "help": "When false, OpenClaw keeps the GitHub Copilot plugin available but skips implicit startup discovery from ambient Copilot credentials."
    }
  }
}