fix(openai): reuse Codex OAuth for OpenAI images

This commit is contained in:
Peter Steinberger
2026-04-23 22:06:28 +01:00
parent f1ad5e27e0
commit ddcc39de91
8 changed files with 189 additions and 212 deletions


@@ -9,7 +9,7 @@ Docs: https://docs.openclaw.ai
- Agents/subagents: add optional forked context for native `sessions_spawn` runs so agents can let a child inherit the requester transcript when needed, while keeping clean isolated sessions as the default; includes prompt guidance, context-engine hook metadata, docs, and QA coverage.
- Codex harness: add structured debug logging for embedded harness selection decisions so `/status` stays simple while gateway logs explain auto-selection and Pi fallback reasons. (#70760) Thanks @100yenadmin.
- Providers/OpenAI: add forward-compatible `gpt-5.5` and `gpt-5.5-pro` support for OpenAI API keys, OpenAI Codex OAuth, and the Codex CLI default model.
-- Providers/OpenAI Codex: add image generation and reference-image editing through Codex OAuth, so `openai-codex/gpt-image-2` works without an `OPENAI_API_KEY`. Fixes #70703.
+- Providers/OpenAI: add image generation and reference-image editing through Codex OAuth, so `openai/gpt-image-2` works without an `OPENAI_API_KEY`. Fixes #70703.
### Fixes


@@ -184,16 +184,17 @@ Choose your preferred auth method and follow the setup steps.
The bundled `openai` plugin registers image generation through the `image_generate` tool.
It supports both OpenAI API-key image generation and Codex OAuth image
-generation.
+generation through the same `openai/gpt-image-2` model ref.
-| Capability                | OpenAI API key                     | Codex OAuth                        |
-| ------------------------- | ---------------------------------- | ---------------------------------- |
-| Model ref                 | `openai/gpt-image-2`               | `openai-codex/gpt-image-2`         |
-| Auth                      | `OPENAI_API_KEY`                   | OpenAI Codex OAuth sign-in         |
-| Max images per request    | 4                                  | 4                                  |
-| Edit mode                 | Enabled (up to 5 reference images) | Enabled (up to 5 reference images) |
-| Size overrides            | Supported, including 2K/4K sizes   | Supported, including 2K/4K sizes   |
-| Aspect ratio / resolution | Not forwarded to OpenAI Images API | Mapped to supported size when safe |
+| Capability                | OpenAI API key                     | Codex OAuth                          |
+| ------------------------- | ---------------------------------- | ------------------------------------ |
+| Model ref                 | `openai/gpt-image-2`               | `openai/gpt-image-2`                 |
+| Auth                      | `OPENAI_API_KEY`                   | OpenAI Codex OAuth sign-in           |
+| Transport                 | OpenAI Images API                  | Codex Responses backend              |
+| Max images per request    | 4                                  | 4                                    |
+| Edit mode                 | Enabled (up to 5 reference images) | Enabled (up to 5 reference images)   |
+| Size overrides            | Supported, including 2K/4K sizes   | Supported, including 2K/4K sizes     |
+| Aspect ratio / resolution | Not forwarded to OpenAI Images API | Mapped to a supported size when safe |
```json5
{
@@ -205,18 +206,6 @@ generation.
}
```
-Use Codex OAuth instead:
-```json5
-{
-agents: {
-defaults: {
-imageGenerationModel: { primary: "openai-codex/gpt-image-2" },
-},
-},
-}
-```
<Note>
See [Image Generation](/tools/image-generation) for shared tool parameters, provider selection, and failover behavior.
</Note>
@@ -225,12 +214,10 @@ See [Image Generation](/tools/image-generation) for shared tool parameters, prov
editing. `gpt-image-1` remains usable as an explicit model override, but new
OpenAI image workflows should use `openai/gpt-image-2`.
-The `openai-codex` provider also exposes `gpt-image-2` for image generation and
-reference-image editing through OpenAI Codex OAuth. Use
-`openai-codex/gpt-image-2` when the agent is signed in with Codex OAuth but does
-not have an `OPENAI_API_KEY`. OpenClaw resolves the stored Codex OAuth access
-token for `openai-codex` and sends image requests through the Codex Responses
-backend, so this path works without the public OpenAI Images API key.
+For Codex OAuth installs, keep the same `openai/gpt-image-2` ref. If no
+`OPENAI_API_KEY` is available, OpenClaw resolves the stored OAuth access token
+for the `openai-codex` auth profile and sends image requests through the Codex
+Responses backend, so this path works without the public OpenAI Images API key.
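With this fallback, the config is identical for both auth paths; a minimal sketch, assuming the same `agents.defaults` shape used elsewhere in these docs:

```json5
{
  agents: {
    defaults: {
      // Same ref whether auth is OPENAI_API_KEY or Codex OAuth.
      imageGenerationModel: { primary: "openai/gpt-image-2" },
    },
  },
}
```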
Generate:
@@ -238,18 +225,6 @@ Generate:
/tool image_generate model=openai/gpt-image-2 prompt="A polished launch poster for OpenClaw on macOS" size=3840x2160 count=1
```
-Generate with Codex OAuth:
-```
-/tool image_generate model=openai-codex/gpt-image-2 prompt="A polished launch poster for OpenClaw on macOS" size=3840x2160 count=1
-```
-Edit with Codex OAuth:
-```
-/tool image_generate model=openai-codex/gpt-image-2 prompt="Preserve the object shape, change the material to translucent glass" image=/path/to/reference.png size=1024x1536
-```
Edit:
```


@@ -30,36 +30,25 @@ The tool only appears when at least one image generation provider is available.
}
```
-Use Codex OAuth instead of an OpenAI API key:
+Codex OAuth uses the same `openai/gpt-image-2` model ref. If no `OPENAI_API_KEY`
+is available, OpenClaw resolves the existing `openai-codex` OAuth profile and
+sends the image request through the Codex Responses backend.
-```json5
-{
-agents: {
-defaults: {
-imageGenerationModel: {
-primary: "openai-codex/gpt-image-2",
-},
-},
-},
-}
-```
-3. Ask the agent: _"Generate an image of a friendly lobster mascot."_
+3. Ask the agent: _"Generate an image of a friendly robot mascot."_
The agent calls `image_generate` automatically. No tool allow-listing needed — it's enabled by default when a provider is available.
## Supported providers
-| Provider     | Default model                    | Edit support                       | API key                                               |
-| ------------ | -------------------------------- | ---------------------------------- | ----------------------------------------------------- |
-| OpenAI       | `gpt-image-2`                    | Yes (up to 4 images)               | `OPENAI_API_KEY`                                      |
-| OpenAI Codex | `gpt-image-2`                    | Yes (up to 4 images)               | OpenAI Codex OAuth                                    |
-| Google       | `gemini-3.1-flash-image-preview` | Yes                                | `GEMINI_API_KEY` or `GOOGLE_API_KEY`                  |
-| fal          | `fal-ai/flux/dev`                | Yes                                | `FAL_KEY`                                             |
-| MiniMax      | `image-01`                       | Yes (subject reference)            | `MINIMAX_API_KEY` or MiniMax OAuth (`minimax-portal`) |
-| ComfyUI      | `workflow`                       | Yes (1 image, workflow-configured) | `COMFY_API_KEY` or `COMFY_CLOUD_API_KEY` for cloud    |
-| Vydra        | `grok-imagine`                   | No                                 | `VYDRA_API_KEY`                                       |
-| xAI          | `grok-imagine-image`             | Yes (up to 5 images)               | `XAI_API_KEY`                                         |
+| Provider | Default model                    | Edit support                       | Auth                                                  |
+| -------- | -------------------------------- | ---------------------------------- | ----------------------------------------------------- |
+| OpenAI   | `gpt-image-2`                    | Yes (up to 4 images)               | `OPENAI_API_KEY` or OpenAI Codex OAuth                |
+| Google   | `gemini-3.1-flash-image-preview` | Yes                                | `GEMINI_API_KEY` or `GOOGLE_API_KEY`                  |
+| fal      | `fal-ai/flux/dev`                | Yes                                | `FAL_KEY`                                             |
+| MiniMax  | `image-01`                       | Yes (subject reference)            | `MINIMAX_API_KEY` or MiniMax OAuth (`minimax-portal`) |
+| ComfyUI  | `workflow`                       | Yes (1 image, workflow-configured) | `COMFY_API_KEY` or `COMFY_CLOUD_API_KEY` for cloud    |
+| Vydra    | `grok-imagine`                   | No                                 | `VYDRA_API_KEY`                                       |
+| xAI      | `grok-imagine-image`             | Yes (up to 5 images)               | `XAI_API_KEY`                                         |
Use `action: "list"` to inspect available providers and models at runtime:
@@ -73,7 +62,7 @@ Use `action: "list"` to inspect available providers and models at runtime:
| ------------- | -------- | ------------------------------------------------------------------------------------- |
| `prompt` | string | Image generation prompt (required for `action: "generate"`) |
| `action` | string | `"generate"` (default) or `"list"` to inspect providers |
-| `model`       | string   | Provider/model override, e.g. `openai/gpt-image-2` or `openai-codex/gpt-image-2`      |
+| `model`       | string   | Provider/model override, e.g. `openai/gpt-image-2`                                    |
| `image` | string | Single reference image path or URL for edit mode |
| `images` | string[] | Multiple reference images for edit mode (up to 5) |
| `size` | string | Size hint: `1024x1024`, `1536x1024`, `1024x1536`, `2048x2048`, `3840x2160` |
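The size hints above relate to the "mapped to a supported size when safe" behavior in the Codex OAuth capability table. A rough sketch of such a mapping follows; `mapAspectRatioToSize` and its 0.25 tolerance are hypothetical illustrations, not the plugin's actual helper — only the size list is taken from this table:

```typescript
// Hypothetical sketch: map an aspect-ratio hint (e.g. "16:9") onto the
// closest size the tool accepts, but only forward it "when safe", i.e.
// when a supported size is reasonably close to the requested ratio.
const SUPPORTED_SIZES = [
  "1024x1024",
  "1536x1024",
  "1024x1536",
  "2048x2048",
  "3840x2160",
] as const;

type SupportedSize = (typeof SUPPORTED_SIZES)[number];

function mapAspectRatioToSize(aspectRatio: string): SupportedSize | undefined {
  const match = /^(\d+):(\d+)$/.exec(aspectRatio.trim());
  if (!match) return undefined; // unparseable hints are dropped, not guessed
  const target = Number(match[1]) / Number(match[2]);
  let best: SupportedSize | undefined;
  let bestDelta = Infinity;
  for (const size of SUPPORTED_SIZES) {
    const [w, h] = size.split("x").map(Number);
    const delta = Math.abs(w / h - target);
    if (delta < bestDelta) {
      bestDelta = delta;
      best = size;
    }
  }
  // Illustrative "safe" threshold: skip the override when nothing is close.
  return bestDelta <= 0.25 ? best : undefined;
}
```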
@@ -139,11 +128,12 @@ OpenAI, Google, and xAI support up to 5 reference images via the `images` parame
### OpenAI `gpt-image-2`
-OpenAI image generation defaults to `openai/gpt-image-2` with `OPENAI_API_KEY`.
-Use `openai-codex/gpt-image-2` to generate or edit images with the same Codex
-OAuth sign-in used by `openai-codex` chat models. The older `openai/gpt-image-1`
-model can still be selected explicitly, but new OpenAI image-generation and
-image-editing requests should use `gpt-image-2`.
+OpenAI image generation defaults to `openai/gpt-image-2`. It uses
+`OPENAI_API_KEY` when available. If no API key is configured, OpenClaw reuses the
+same `openai-codex` OAuth profile used by Codex subscription chat models and
+sends the image request through the Codex Responses backend. The older
+`openai/gpt-image-1` model can still be selected explicitly, but new OpenAI
+image-generation and image-editing requests should use `gpt-image-2`.
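The fallback order described above can be sketched roughly as follows; the resolver signature is simplified and `resolveImageAuth` is an illustrative name, not the plugin's actual function:

```typescript
// Simplified sketch of the auth fallback: try the OpenAI API key first,
// then fall back to the stored openai-codex OAuth access token, which
// routes the request through the Codex Responses backend instead of the
// public OpenAI Images API.
type Auth = { apiKey?: string };
type Resolver = (provider: "openai" | "openai-codex") => Promise<Auth | null>;

async function resolveImageAuth(resolve: Resolver): Promise<{
  apiKey: string;
  transport: "openai-images" | "codex-responses";
}> {
  const openaiAuth = await resolve("openai").catch(() => null);
  if (openaiAuth?.apiKey) {
    // API key present: use the public OpenAI Images API.
    return { apiKey: openaiAuth.apiKey, transport: "openai-images" };
  }
  const codexAuth = await resolve("openai-codex").catch(() => null);
  if (codexAuth?.apiKey) {
    // No API key, but a Codex OAuth token exists: use Codex Responses.
    return { apiKey: codexAuth.apiKey, transport: "codex-responses" };
  }
  throw new Error("OpenAI API key or Codex OAuth missing");
}
```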
`gpt-image-2` supports both text-to-image generation and reference-image
editing through the same `image_generate` tool. OpenClaw forwards `prompt`,
@@ -169,18 +159,6 @@ Edit one local reference image:
/tool image_generate action=generate model=openai/gpt-image-2 prompt="Keep the subject, replace the background with a bright studio setup" image=/path/to/reference.png size=1024x1536
```
-Generate with Codex OAuth:
-```
-/tool image_generate action=generate model=openai-codex/gpt-image-2 prompt="A clean editorial poster for OpenClaw image generation" size=3840x2160 count=1
-```
-Edit one local reference image with Codex OAuth:
-```
-/tool image_generate action=generate model=openai-codex/gpt-image-2 prompt="Keep the subject, replace the background with a bright studio setup" image=/path/to/reference.png size=1024x1536
-```
Edit with multiple references:
```


@@ -1,8 +1,5 @@
import { afterEach, describe, expect, it, vi } from "vitest";
-import {
-buildOpenAICodexImageGenerationProvider,
-buildOpenAIImageGenerationProvider,
-} from "./image-generation-provider.js";
+import { buildOpenAIImageGenerationProvider } from "./image-generation-provider.js";
const {
resolveApiKeyForProviderMock,
@@ -11,7 +8,13 @@ const {
assertOkOrThrowHttpErrorMock,
resolveProviderHttpRequestConfigMock,
} = vi.hoisted(() => ({
-resolveApiKeyForProviderMock: vi.fn(async () => ({ apiKey: "openai-key" })),
+resolveApiKeyForProviderMock: vi.fn(
+async (_params?: {
+provider?: string;
+}): Promise<{ apiKey?: string; source?: string; mode?: string }> => ({
+apiKey: "openai-key",
+}),
+),
postJsonRequestMock: vi.fn(),
postMultipartRequestMock: vi.fn(),
assertOkOrThrowHttpErrorMock: vi.fn(async () => {}),
@@ -76,9 +79,19 @@ function mockCodexImageStream(params: { imageData?: string; revisedPrompt?: stri
}));
}
+function mockCodexAuthOnly() {
+resolveApiKeyForProviderMock.mockImplementation(async (params?: { provider?: string }) => {
+if (params?.provider === "openai-codex") {
+return { apiKey: "codex-key", source: "profile:openai-codex:default", mode: "oauth" };
+}
+return {};
+});
+}
describe("openai image generation provider", () => {
afterEach(() => {
-resolveApiKeyForProviderMock.mockClear();
+resolveApiKeyForProviderMock.mockReset();
+resolveApiKeyForProviderMock.mockResolvedValue({ apiKey: "openai-key" });
postJsonRequestMock.mockReset();
postMultipartRequestMock.mockReset();
assertOkOrThrowHttpErrorMock.mockClear();
@@ -281,13 +294,14 @@ describe("openai image generation provider", () => {
expect(result.images).toHaveLength(1);
});
-it("registers Codex OAuth image generation through Responses streaming", async () => {
+it("falls back to Codex OAuth image generation through Responses streaming", async () => {
+mockCodexAuthOnly();
mockCodexImageStream({ imageData: "codex-image", revisedPrompt: "revised codex prompt" });
-const provider = buildOpenAICodexImageGenerationProvider();
+const provider = buildOpenAIImageGenerationProvider();
const authStore = { version: 1, profiles: {} };
const result = await provider.generateImage({
-provider: "openai-codex",
+provider: "openai",
model: "gpt-image-2",
prompt: "Draw a Codex lighthouse",
cfg: {},
@@ -296,6 +310,12 @@ describe("openai image generation provider", () => {
size: "1024x1536",
});
+expect(resolveApiKeyForProviderMock).toHaveBeenCalledWith(
+expect.objectContaining({
+provider: "openai",
+store: authStore,
+}),
+);
expect(resolveApiKeyForProviderMock).toHaveBeenCalledWith(
expect.objectContaining({
provider: "openai-codex",
@@ -306,7 +326,7 @@ describe("openai image generation provider", () => {
expect.objectContaining({
defaultBaseUrl: "https://chatgpt.com/backend-api/codex",
defaultHeaders: expect.objectContaining({
-Authorization: "Bearer openai-key",
+Authorization: "Bearer codex-key",
Accept: "text/event-stream",
}),
provider: "openai-codex",
@@ -353,11 +373,12 @@ describe("openai image generation provider", () => {
});
it("sends Codex reference images as Responses input images", async () => {
+mockCodexAuthOnly();
mockCodexImageStream();
-const provider = buildOpenAICodexImageGenerationProvider();
+const provider = buildOpenAIImageGenerationProvider();
await provider.generateImage({
-provider: "openai-codex",
+provider: "openai",
model: "gpt-image-2",
prompt: "Use the reference image",
cfg: {},
@@ -384,11 +405,12 @@ describe("openai image generation provider", () => {
});
it("satisfies Codex count by issuing one Responses request per image", async () => {
+mockCodexAuthOnly();
mockCodexImageStream({ imageData: "codex-image" });
-const provider = buildOpenAICodexImageGenerationProvider();
+const provider = buildOpenAIImageGenerationProvider();
const result = await provider.generateImage({
-provider: "openai-codex",
+provider: "openai",
model: "gpt-image-2",
prompt: "Draw two Codex icons",
cfg: {},


@@ -221,7 +221,7 @@ function extractCodexImageGenerationResult(params: {
}
function createOpenAIImageGenerationProviderBase(params: {
-id: "openai" | "openai-codex";
+id: "openai";
label: string;
isConfigured: ImageGenerationProvider["isConfigured"];
generateImage: ImageGenerationProvider["generateImage"];
@@ -255,6 +255,104 @@ function createOpenAIImageGenerationProviderBase(params: {
};
}
+async function resolveOptionalApiKeyForProvider(
+params: Parameters<typeof resolveApiKeyForProvider>[0],
+) {
+try {
+return await resolveApiKeyForProvider(params);
+} catch {
+return null;
+}
+}
+async function generateOpenAICodexImage(params: {
+req: Parameters<ImageGenerationProvider["generateImage"]>[0];
+apiKey: string;
+}): Promise<ImageGenerationResult> {
+const { req, apiKey } = params;
+const inputImages = req.inputImages ?? [];
+const { baseUrl, allowPrivateNetwork, headers, dispatcherPolicy } =
+resolveProviderHttpRequestConfig({
+defaultBaseUrl: DEFAULT_OPENAI_CODEX_IMAGE_BASE_URL,
+defaultHeaders: {
+Authorization: `Bearer ${apiKey}`,
+Accept: "text/event-stream",
+},
+provider: "openai-codex",
+api: "openai-codex-responses",
+capability: "image",
+transport: "http",
+});
+const model = req.model || DEFAULT_OPENAI_IMAGE_MODEL;
+const count = req.count ?? 1;
+const size = req.size ?? DEFAULT_SIZE;
+headers.set("Content-Type", "application/json");
+const content: Array<Record<string, unknown>> = [
+{ type: "input_text", text: req.prompt },
+...inputImages.map((image) => ({
+type: "input_image",
+image_url: toOpenAIDataUrl(image),
+detail: "auto",
+})),
+];
+const results: ImageGenerationResult[] = [];
+for (let index = 0; index < count; index += 1) {
+const requestResult = await postJsonRequest({
+url: `${baseUrl}/responses`,
+headers,
+body: {
+model: "gpt-5.4",
+input: [
+{
+role: "user",
+content,
+},
+],
+instructions: OPENAI_CODEX_IMAGE_INSTRUCTIONS,
+tools: [
+{
+type: "image_generation",
+model,
+size,
+},
+],
+tool_choice: { type: "image_generation" },
+stream: true,
+store: false,
+},
+timeoutMs: req.timeoutMs,
+fetchFn: fetch,
+allowPrivateNetwork,
+dispatcherPolicy,
+});
+const { response, release } = requestResult;
+try {
+await assertOkOrThrowHttpError(response, "OpenAI Codex image generation failed");
+results.push(
+extractCodexImageGenerationResult({
+body: await readResponseBodyText(response),
+model,
+}),
+);
+} finally {
+await release();
+}
+}
+const images = results.flatMap((result) => result.images);
+return {
+images: images.map((image, index) =>
+Object.assign({}, image, {
+fileName: `image-${index + 1}.png`,
+}),
+),
+model,
+metadata: {
+responses: results.map((result) => result.metadata).filter(Boolean),
+},
+};
+}
export function buildOpenAIImageGenerationProvider(): ImageGenerationProvider {
return createOpenAIImageGenerationProviderBase({
id: "openai",
@@ -263,18 +361,31 @@ export function buildOpenAIImageGenerationProvider(): ImageGenerationProvider {
isProviderApiKeyConfigured({
provider: "openai",
agentDir,
+}) ||
+isProviderApiKeyConfigured({
+provider: "openai-codex",
+agentDir,
}),
async generateImage(req) {
const inputImages = req.inputImages ?? [];
const isEdit = inputImages.length > 0;
-const auth = await resolveApiKeyForProvider({
+const auth = await resolveOptionalApiKeyForProvider({
provider: "openai",
cfg: req.cfg,
agentDir: req.agentDir,
store: req.authStore,
});
-if (!auth.apiKey) {
-throw new Error("OpenAI API key missing");
+if (!auth?.apiKey) {
+const codexAuth = await resolveOptionalApiKeyForProvider({
+provider: "openai-codex",
+cfg: req.cfg,
+agentDir: req.agentDir,
+store: req.authStore,
+});
+if (codexAuth?.apiKey) {
+return generateOpenAICodexImage({ req, apiKey: codexAuth.apiKey });
+}
+throw new Error("OpenAI API key or Codex OAuth missing");
}
const rawBaseUrl = resolveConfiguredOpenAIBaseUrl(req.cfg);
const isAzure = isAzureOpenAIBaseUrl(rawBaseUrl);
@@ -382,108 +493,3 @@ export function buildOpenAIImageGenerationProvider(): ImageGenerationProvider {
},
});
}
-export function buildOpenAICodexImageGenerationProvider(): ImageGenerationProvider {
-return createOpenAIImageGenerationProviderBase({
-id: "openai-codex",
-label: "OpenAI Codex",
-isConfigured: ({ agentDir }) =>
-isProviderApiKeyConfigured({
-provider: "openai-codex",
-agentDir,
-}),
-async generateImage(req) {
-const inputImages = req.inputImages ?? [];
-const auth = await resolveApiKeyForProvider({
-provider: "openai-codex",
-cfg: req.cfg,
-agentDir: req.agentDir,
-store: req.authStore,
-});
-if (!auth.apiKey) {
-throw new Error("OpenAI Codex OAuth missing");
-}
-const { baseUrl, allowPrivateNetwork, headers, dispatcherPolicy } =
-resolveProviderHttpRequestConfig({
-defaultBaseUrl: DEFAULT_OPENAI_CODEX_IMAGE_BASE_URL,
-defaultHeaders: {
-Authorization: `Bearer ${auth.apiKey}`,
-Accept: "text/event-stream",
-},
-provider: "openai-codex",
-api: "openai-codex-responses",
-capability: "image",
-transport: "http",
-});
-const model = req.model || DEFAULT_OPENAI_IMAGE_MODEL;
-const count = req.count ?? 1;
-const size = req.size ?? DEFAULT_SIZE;
-headers.set("Content-Type", "application/json");
-const content: Array<Record<string, unknown>> = [
-{ type: "input_text", text: req.prompt },
-...inputImages.map((image) => ({
-type: "input_image",
-image_url: toOpenAIDataUrl(image),
-detail: "auto",
-})),
-];
-const results: ImageGenerationResult[] = [];
-for (let index = 0; index < count; index += 1) {
-const requestResult = await postJsonRequest({
-url: `${baseUrl}/responses`,
-headers,
-body: {
-model: "gpt-5.4",
-input: [
-{
-role: "user",
-content,
-},
-],
-instructions: OPENAI_CODEX_IMAGE_INSTRUCTIONS,
-tools: [
-{
-type: "image_generation",
-model,
-size,
-},
-],
-tool_choice: { type: "image_generation" },
-stream: true,
-store: false,
-},
-timeoutMs: req.timeoutMs,
-fetchFn: fetch,
-allowPrivateNetwork,
-dispatcherPolicy,
-});
-const { response, release } = requestResult;
-try {
-await assertOkOrThrowHttpError(response, "OpenAI Codex image generation failed");
-results.push(
-extractCodexImageGenerationResult({
-body: await readResponseBodyText(response),
-model,
-}),
-);
-} finally {
-await release();
-}
-}
-const images = results.flatMap((result) => result.images);
-return {
-images: images.map((image, index) =>
-Object.assign({}, image, {
-fileName: `image-${index + 1}.png`,
-}),
-),
-model,
-metadata: {
-responses: results.map((result) => result.metadata).filter(Boolean),
-},
-};
-},
-});
-}


@@ -2,10 +2,7 @@ import { resolvePluginConfigObject } from "openclaw/plugin-sdk/config-runtime";
import { definePluginEntry } from "openclaw/plugin-sdk/plugin-entry";
import { buildProviderToolCompatFamilyHooks } from "openclaw/plugin-sdk/provider-tools";
import { buildOpenAICodexCliBackend } from "./cli-backend.js";
-import {
-buildOpenAICodexImageGenerationProvider,
-buildOpenAIImageGenerationProvider,
-} from "./image-generation-provider.js";
+import { buildOpenAIImageGenerationProvider } from "./image-generation-provider.js";
import {
openaiCodexMediaUnderstandingProvider,
openaiMediaUnderstandingProvider,
@@ -52,7 +49,6 @@ export default definePluginEntry({
api.registerProvider(buildProviderWithPromptContribution(buildOpenAICodexProviderPlugin()));
api.registerMemoryEmbeddingProvider(openAiMemoryEmbeddingProviderAdapter);
api.registerImageGenerationProvider(buildOpenAIImageGenerationProvider());
-api.registerImageGenerationProvider(buildOpenAICodexImageGenerationProvider());
api.registerRealtimeTranscriptionProvider(buildOpenAIRealtimeTranscriptionProvider());
api.registerRealtimeVoiceProvider(buildOpenAIRealtimeVoiceProvider());
api.registerSpeechProvider(buildOpenAISpeechProvider());


@@ -54,7 +54,7 @@
"realtimeVoiceProviders": ["openai"],
"memoryEmbeddingProviders": ["openai"],
"mediaUnderstandingProviders": ["openai", "openai-codex"],
-"imageGenerationProviders": ["openai", "openai-codex"],
+"imageGenerationProviders": ["openai"],
"videoGenerationProviders": ["openai"]
},
"mediaUnderstandingProviderMetadata": {


@@ -104,7 +104,7 @@ export const pluginRegistrationContractCases = {
realtimeTranscriptionProviderIds: ["openai"],
realtimeVoiceProviderIds: ["openai"],
mediaUnderstandingProviderIds: ["openai", "openai-codex"],
-imageGenerationProviderIds: ["openai", "openai-codex"],
+imageGenerationProviderIds: ["openai"],
requireSpeechVoices: true,
requireDescribeImages: true,
requireGenerateImage: true,