29 commits
- a2d1a28 feat(openai-codex): add provider module skeleton (Apr 23, 2026)
- 808ed66 feat(openai-codex): request format conversion (chat to responses) (Apr 23, 2026)
- 1928490 feat(openai-codex): SSE response event decoder (Apr 23, 2026)
- b265191 feat(openai-codex): OAuth device-code flow on the server (Apr 23, 2026)
- ec44e6e feat(openai-codex): frontend token manager with always-fresh refresh (Apr 23, 2026)
- ddcebff feat(openai-codex): model catalog and device-code login flow (Apr 23, 2026)
- 757d811 feat(openai-codex): wire provider into onscreen agent client (Apr 23, 2026)
- dcae1e7 feat(openai-codex): wire provider into admin agent client (Apr 23, 2026)
- b7b103c docs(openai-codex): README section with setup guide and troubleshooting (Apr 23, 2026)
- b64f811 fix(openai-codex): parse device-auth payload and rename status field (Apr 23, 2026)
- 708943f fix(openai-codex): emit output_text for assistant content, input_text… (Apr 23, 2026)
- e2231ce fix(openai-codex): route token persistence through store and storage (Apr 23, 2026)
- 6fc5ac1 feat(openai-codex): live model-catalog discovery with static fallback (Apr 23, 2026)
- 6ae4cf4 feat(openai-codex): wire live catalog into overlay and admin settings (Apr 23, 2026)
- 48b0cb2 fix(openai-codex): always route model discovery through the outbound … (Apr 23, 2026)
- 0f1fbed fix(openai-codex): send space_session cookie on discovery proxy call (Apr 23, 2026)
- 23c6595 fix(openai-codex): include required client_version on models endpoint (Apr 23, 2026)
- 626bd79 fix(openai-codex): show codex model in provider summary, not api model (Apr 23, 2026)
- 3154b22 fix(openai-codex): show codex-specific save confirmation in status bar (Apr 23, 2026)
- e3958c6 docs(openai-codex): document local testing constraints for reviewers (Apr 23, 2026)
- 0da3a6e docs(openai-codex): correct status->state rename in poll endpoint docs (Apr 23, 2026)
- e88b4eb fix(openai-codex): overlay codex client validates codexModel, not model (Apr 23, 2026)
- 9b8a511 refactor(openai-codex): drop unused isCodexEndpoint helper (Apr 23, 2026)
- 81aa866 refactor(openai-codex): collapse per-store token parse/serialize helpers (Apr 23, 2026)
- b3180a9 fix(openai-codex): mirror cross-surface scope note on overlay login p… (Apr 23, 2026)
- 74824c1 fix(openai-codex): poll immediately on first iteration of device flow (Apr 23, 2026)
- 05f3b13 fix(openai-codex): explicit codex_tokens deletion on sign-out (Apr 23, 2026)
- bba1709 fix(openai-codex): route chat-completion request through space.proxy (Apr 27, 2026)
- 3974f62 fix(openai-codex): read SINGLE_USER_APP from runtime config, not runt… (May 12, 2026)
2 changes: 2 additions & 0 deletions AGENTS.md
@@ -95,6 +95,7 @@ App docs:
- `/app/L0/_all/mod/_core/onscreen_agent/prompts/AGENTS.md`
- `/app/L0/_all/mod/_core/onscreen_menu/AGENTS.md`
- `/app/L0/_all/mod/_core/open_router/AGENTS.md`
- `/app/L0/_all/mod/_core/openai_codex/AGENTS.md`
- `/app/L0/_all/mod/_core/panels/AGENTS.md`
- `/app/L0/_all/mod/_core/promptinclude/AGENTS.md`
- `/app/L0/_all/mod/_core/router/AGENTS.md`
@@ -123,6 +124,7 @@ Server docs:
- `/server/lib/customware/AGENTS.md`
- `/server/lib/file_watch/AGENTS.md`
- `/server/lib/git/AGENTS.md`
- `/server/lib/openai_codex/AGENTS.md`
- `/server/lib/share/AGENTS.md`
- `/server/lib/tmp/AGENTS.md`
- `/server/pages/AGENTS.md`
33 changes: 33 additions & 0 deletions README.md
@@ -134,6 +134,39 @@ node space supervise HOST=0.0.0.0 PORT=3000 # zero downtime auto-update

Run `node space help` to see the full command surface and built-in help for each from [`commands/params.yaml`](./commands/params.yaml).

## Sign in with ChatGPT (Codex OAuth)

The overlay and admin chat surfaces ship a `ChatGPT` provider tab that uses the official OpenAI Codex OAuth device-code flow. If you already pay for ChatGPT Plus, the agent can send its chats through that subscription via `https://chatgpt.com/backend-api/codex/responses` without requiring a separate OpenAI platform API key.

**Requirements**

- An active **ChatGPT Plus** subscription. Free accounts cannot authorize the Codex device flow. Team and Enterprise plans have not been verified yet; they may work if they expose the same `/backend-api/codex` surface to the account.
- The space-agent server process must be reachable from the browser you sign in from; the OAuth device-flow calls run through authenticated server endpoints to serialize refresh-token rotation safely.

**Setup**

1. In the overlay or admin settings dialog, open the `ChatGPT` tab and press **Sign in with ChatGPT**.
2. A verification URL plus a short code appear. Open the URL in a signed-in ChatGPT browser tab and paste the code.
3. Once you approve the device, the agent receives access and refresh tokens, stores them encrypted in your user config through `userCrypto`, and unlocks the model dropdown.
4. Pick a model (default `gpt-5.4-mini`) and close the dialog. The next message is routed through Codex.

The overlay config stores its tokens in `~/conf/onscreen-agent.yaml`; the admin chat stores them in `~/conf/admin-chat.yaml`. The two surfaces do not share tokens, so signing in on one does not sign in on the other.
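The setup steps above reduce to a short device-code polling loop on the client. A minimal sketch, assuming hypothetical endpoint paths and response field names (the real server routes live in `server/api/openai_codex_*.js`):

```javascript
// Sketch of the OAuth device-code login loop.
// Endpoint paths and field names here are illustrative assumptions,
// not the actual space-agent server API.
async function deviceCodeLogin(fetchImpl = fetch) {
  // 1. Ask the server to start a device-code authorization.
  const start = await (await fetchImpl("/api/codex/device-auth", { method: "POST" })).json();
  console.log(`Open ${start.verificationUrl} and enter code ${start.userCode}`);

  // 2. Poll immediately, then on an interval, until approval or expiry.
  const deadline = Date.now() + start.expiresIn * 1000;
  while (Date.now() < deadline) {
    const poll = await (await fetchImpl(`/api/codex/device-auth/${start.deviceCode}`)).json();
    if (poll.state === "approved") {
      return poll.tokens; // e.g. { accessToken, refreshToken, accountId }
    }
    if (poll.state === "denied") {
      throw new Error("Device authorization was denied.");
    }
    await new Promise((resolve) => setTimeout(resolve, start.interval * 1000));
  }
  throw new Error("Device code expired before approval.");
}
```

Note the loop polls before its first sleep; a loop that sleeps first makes every login feel one interval slower than it needs to be.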

**Why does this need a server-side OAuth endpoint?**

Space Agent normally prefers frontend implementations. Codex refresh tokens use single-use rotation: if two browser tabs refresh at the same moment, one call succeeds and the other fails with `invalid_grant`, discarding the only valid refresh token and forcing a full re-login. The OAuth device-code flow and token refresh therefore run through server endpoints in `server/api/openai_codex_*.js` with a single-writer mutex. This is covered under the `shared-data integrity` rule in [`/server/AGENTS.md`](./server/AGENTS.md).
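The single-writer pattern described above can be sketched as a promise-chained mutex: every refresh queues behind the previous one, so only one caller ever consumes the rotating refresh token. The `loadTokens`, `saveTokens`, and `refreshWithProvider` helpers below are illustrative assumptions, not the actual server code:

```javascript
// Sketch of a single-writer mutex around refresh-token rotation.
// Concurrent callers serialize on `refreshChain`, so a single-use
// refresh token is never consumed twice.
let refreshChain = Promise.resolve();

function refreshTokens(loadTokens, saveTokens, refreshWithProvider) {
  // Chain this refresh onto the previous one.
  const next = refreshChain.then(async () => {
    const current = await loadTokens();
    const rotated = await refreshWithProvider(current.refreshToken);
    await saveTokens(rotated); // persist the new single-use refresh token
    return rotated;
  });
  // Keep the chain usable even if this refresh fails.
  refreshChain = next.catch(() => {});
  return next;
}
```

Without the chain, two concurrent refreshes would both read the same stored token, and the loser's `invalid_grant` would leave no valid refresh token behind.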

**Troubleshooting**

- **HTTP 403 with `cf-mitigated: challenge`**: Cloudflare blocked the request because the required originator headers were missing. The client ships those headers automatically; if you see this in logs after modifying `app/L0/_all/mod/_core/openai_codex/request.js`, check that `User-Agent`, `originator`, and `ChatGPT-Account-ID` are still set on every outbound Codex request.
- **HTTP 401 "Refresh token is no longer valid. Please log in again."**: Your refresh token has already been consumed by another Codex client, often the `codex` CLI or the Codex VS Code extension signed into the same account. Sign in again in the settings dialog.
- **`response.completed.response.output` is empty**: The Codex endpoint sometimes returns an empty final output array even when the streamed deltas arrived correctly. The adapter accumulates text live from `response.output_text.delta` for exactly this reason; do not read the final reply from the completion payload.
- **HTTP 400 after local changes to the request body**: The Codex `/responses` endpoint rejects `max_output_tokens`, `temperature`, `tools`, and several other Chat-Completions fields. The shape converter in `app/L0/_all/mod/_core/openai_codex/request_shape.js` strips them; if you add a new field, check the drop-list there first.
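The field-stripping behavior in the last bullet amounts to a small drop-list filter. A sketch under stated assumptions (the drop-list below is an illustrative subset; the authoritative list lives in `request_shape.js`):

```javascript
// Illustrative sketch of stripping Chat-Completions-only fields before
// a request body is sent to the Codex /responses endpoint.
// This drop-list is an assumed subset; the real one lives in
// app/L0/_all/mod/_core/openai_codex/request_shape.js.
const CODEX_DROPPED_FIELDS = ["max_output_tokens", "temperature", "tools"];

function stripUnsupportedCodexFields(body) {
  const cleaned = { ...body };
  for (const field of CODEX_DROPPED_FIELDS) {
    delete cleaned[field]; // the endpoint 400s on unknown/unsupported fields
  }
  return cleaned;
}
```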

**Disclaimer**

This provider uses your ChatGPT Plus subscription via the official OpenAI Codex OAuth flow, the same flow the Codex CLI and VS Code extension use. OpenAI's terms of service apply; use at your own risk and avoid non-interactive volume patterns that might look automated.

## AI-driven development and documentation

Space Agent is developed by AI agents, including its documentation.
1 change: 1 addition & 0 deletions app/AGENTS.md
@@ -53,6 +53,7 @@ Current module-local docs in the app tree:
- `app/L0/_all/mod/_core/onscreen_agent/AGENTS.md`
- `app/L0/_all/mod/_core/onscreen_menu/AGENTS.md`
- `app/L0/_all/mod/_core/open_router/AGENTS.md`
- `app/L0/_all/mod/_core/openai_codex/AGENTS.md`
- `app/L1/_all/mod/metrics/posthog/AGENTS.md`
- `app/L0/_admin/mod/_core/overlay_agent/AGENTS.md`

3 changes: 2 additions & 1 deletion app/L0/_all/mod/_core/admin/views/agent/AGENTS.md
@@ -76,7 +76,8 @@ Prompt rules:

Current behavior:

- the LLM settings modal keeps one provider switch at the top with tabs named `API` and `Local`, and shows either the API settings fields or one `Local` section
- the LLM settings modal keeps one provider switch at the top with three tabs named `API`, `ChatGPT`, and `Local`; API settings show endpoint, model, and API key fields, the `ChatGPT` tab owns the OpenAI Codex OAuth device-code login plus a model dropdown sourced from `/mod/_core/openai_codex/models.js`, and the `Local` section mounts the shared Hugging Face sidebar
- the `ChatGPT` tab scope is local to the admin chat: users sign in separately in the overlay settings to enable Codex there too, and refresh tokens for admin are stored in `~/conf/admin-chat.yaml` under a `userCrypto:`-prefixed `codex_tokens` entry, independent of the overlay config
- the `Local` section only supports the Hugging Face browser runtime
- the toolbar LLM settings button summarizes the current selection with the configured model name only; it does not prepend provider labels such as `API`, `Local`, or `Hugging Face`
- the local section mounts the standalone Hugging Face config sidebar component through `<x-component>`, so the admin modal and the routed testing harness share the same component file instead of maintaining duplicated local-provider markup
152 changes: 152 additions & 0 deletions app/L0/_all/mod/_core/admin/views/agent/api.js
@@ -5,6 +5,10 @@ import * as prompt from "/mod/_core/admin/views/agent/prompt.js";
import { mergeConsecutiveChatMessages } from "/mod/_core/framework/js/chat-messages.js";
import * as proxyUrl from "/mod/_core/framework/js/proxy-url.js";
import { getHuggingFaceManager } from "/mod/_core/huggingface/manager.js";
import {
CODEX_STREAM_DONE_MARKER,
mapCodexEventToChatFrames
} from "/mod/_core/openai_codex/sse_adapter.js";

function createHeaders(apiKey) {
const headers = {
@@ -406,6 +410,87 @@ async function readStreamingResponse(response, onDelta) {
}
}

function parseCodexEventBlock(eventBlock, onDelta, meta) {
const lines = eventBlock.split(/\r?\n/u);

for (const line of lines) {
if (!line.startsWith("data:")) {
continue;
}

const value = line.slice(5).trim();

if (!value) {
continue;
}

let event;
try {
event = JSON.parse(value);
} catch {
continue;
}

// `mapCodexEventToChatFrames` throws on response.failed / error events,
// which bubbles up to the caller as a request failure. That is the
// intended behavior for terminal upstream errors mid-stream.
const frames = mapCodexEventToChatFrames(event);

for (const frame of frames) {
if (frame === CODEX_STREAM_DONE_MARKER) {
meta.sawDoneMarker = true;
return true;
}

const delta = extractStreamingDelta(frame);
noteCompletionPayload(meta, frame, delta);

if (delta) {
onDelta(delta);
}
}
}

return false;
}

async function readCodexStreamingResponse(response, onDelta) {
const meta = createCompletionResponseMeta("stream");
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
const { done, value } = await reader.read();
buffer += decoder.decode(value || new Uint8Array(), {
stream: !done
});

let boundary = buffer.indexOf("\n\n");

while (boundary !== -1) {
const eventBlock = buffer.slice(0, boundary).trim();
buffer = buffer.slice(boundary + 2);

if (eventBlock && parseCodexEventBlock(eventBlock, onDelta, meta)) {
return finalizeCompletionResponseMeta(meta);
}

boundary = buffer.indexOf("\n\n");
}

if (done) {
const remaining = buffer.trim();

if (remaining) {
parseCodexEventBlock(remaining, onDelta, meta);
}

return finalizeCompletionResponseMeta(meta);
}
}
}

export const prepareAdminAgentApiRequest = globalThis.space.extend(
import.meta,
async function prepareAdminAgentApiRequest({
@@ -496,6 +581,62 @@ async function streamAdminAgentApiCompletion({ promptContext, settings, systemPr
return readStreamingResponse(response, onDelta);
}

function parseAdminCodexTokens(settings) {
const raw = settings?.codexTokens;

if (!raw) {
return null;
}

if (typeof raw === "object") {
return raw;
}

if (typeof raw !== "string") {
return null;
}

try {
return JSON.parse(raw);
} catch {
return null;
}
}

async function streamAdminAgentCodexCompletion({ promptContext, settings, systemPrompt, messages, onDelta, signal }) {
if (!settings?.model?.trim() && !settings?.codexModel?.trim()) {
throw new Error("Choose a Codex model before sending a message.");
}

const tokens = parseAdminCodexTokens(settings);

if (!tokens?.accessToken) {
const error = new Error("Sign in with ChatGPT before sending a message.");
error.requiresCodexLogin = true;
throw error;
}

const apiRequest = await prepareAdminAgentApiRequest({
messages,
promptContext,
settings,
systemPrompt
});
const response = await fetch(apiRequest.requestUrl, {
...buildFetchRequestInit(apiRequest, signal)
});

if (!response.ok) {
await throwResponseError(response);
}

if (!response.body) {
throw new Error("Streaming response body is not available.");
}

return readCodexStreamingResponse(response, onDelta);
}

export async function streamAdminAgentCompletion({ promptContext, settings, systemPrompt, messages, onDelta, signal }) {
const provider = config.normalizeAdminChatLlmProvider(settings?.provider);
const normalizedPromptContext = normalizeAdminPromptContext(promptContext, systemPrompt);
@@ -512,6 +653,17 @@ export async function streamAdminAgentCompletion({ promptContext, settings, syst
return result.responseMeta;
}

if (provider === config.ADMIN_CHAT_LLM_PROVIDER.CODEX) {
return streamAdminAgentCodexCompletion({
messages,
onDelta,
promptContext: normalizedPromptContext,
settings,
signal,
systemPrompt: normalizedPromptContext.systemPrompt
});
}

return streamAdminAgentApiCompletion({
messages,
onDelta,
16 changes: 13 additions & 3 deletions app/L0/_all/mod/_core/admin/views/agent/config.js
@@ -1,10 +1,12 @@
import { DEFAULT_PROMPT_BUDGET_RATIOS, normalizePromptBudgetRatios } from "/mod/_core/agent_prompt/prompt-items.js";
import { CODEX_DEFAULT_MODEL_ID } from "/mod/_core/openai_codex/models.js";

export const ADMIN_CHAT_CONFIG_PATH = "~/conf/admin-chat.yaml";
export const ADMIN_CHAT_HISTORY_PATH = "~/hist/admin-chat.json";
export const DEFAULT_ADMIN_CHAT_MAX_TOKENS = 120_000;
export const ADMIN_CHAT_LLM_PROVIDER = {
API: "api",
CODEX: "openai-codex",
LOCAL: "local"
};

@@ -15,6 +17,8 @@ export const ADMIN_CHAT_LOCAL_PROVIDER = {
export const DEFAULT_ADMIN_CHAT_SETTINGS = {
apiEndpoint: "https://openrouter.ai/api/v1/chat/completions",
apiKey: "",
codexModel: CODEX_DEFAULT_MODEL_ID,
codexTokens: "",
huggingfaceDtype: "q4",
huggingfaceModel: "",
localProvider: ADMIN_CHAT_LOCAL_PROVIDER.HUGGINGFACE,
@@ -26,9 +30,15 @@ export const DEFAULT_ADMIN_CHAT_SETTINGS = {
};

export function normalizeAdminChatLlmProvider(value) {
return value === ADMIN_CHAT_LLM_PROVIDER.LOCAL
? ADMIN_CHAT_LLM_PROVIDER.LOCAL
: ADMIN_CHAT_LLM_PROVIDER.API;
if (value === ADMIN_CHAT_LLM_PROVIDER.LOCAL) {
return ADMIN_CHAT_LLM_PROVIDER.LOCAL;
}

if (value === ADMIN_CHAT_LLM_PROVIDER.CODEX) {
return ADMIN_CHAT_LLM_PROVIDER.CODEX;
}

return ADMIN_CHAT_LLM_PROVIDER.API;
}

export function normalizeAdminChatLocalProvider(value) {
63 changes: 63 additions & 0 deletions app/L0/_all/mod/_core/admin/views/agent/panel.html
@@ -188,6 +188,14 @@ <h2>Provider and model configuration</h2>
>
<span>API</span>
</button>
<button
type="button"
class="secondary-button dialog-segmented-button"
:class="{ 'is-active': $store.adminAgent.isSettingsDraftUsingCodexProvider }"
@click="$store.adminAgent.setSettingsProvider('openai-codex')"
>
<span>ChatGPT</span>
</button>
<button
type="button"
class="secondary-button dialog-segmented-button"
@@ -211,6 +219,61 @@ <h2>Provider and model configuration</h2>
<input type="password" x-model="$store.adminAgent.settingsDraft.apiKey" />
</label>
</div>
<div x-show="$store.adminAgent.isSettingsDraftUsingCodexProvider" x-cloak class="admin-agent-codex-settings">
<template x-if="!$store.adminAgent.isCodexLoginActive && !$store.adminAgent.hasCodexTokens">
<div class="admin-agent-codex-login">
<p class="field-note">
Uses your existing ChatGPT Plus subscription via the official OpenAI Codex OAuth flow.
Nothing is billed to the OpenAI platform API on top of your subscription.
</p>
<p class="field-note">
This sign-in is local to the admin chat. Sign in separately in the overlay settings to enable Codex there too.
</p>
<button type="button" class="primary-button" @click="$store.adminAgent.startCodexLogin()">
Sign in with ChatGPT
</button>
<p class="field-note" x-show="$store.adminAgent.codexLoginError" x-text="$store.adminAgent.codexLoginError"></p>
</div>
</template>
<template x-if="$store.adminAgent.isCodexLoginActive">
<div class="admin-agent-codex-login-pending">
<p>
Open
<a :href="$store.adminAgent.codexVerificationUrl" target="_blank" rel="noopener noreferrer" x-text="$store.adminAgent.codexVerificationUrl"></a>
and enter this code:
</p>
<p class="admin-agent-codex-user-code" x-text="$store.adminAgent.codexUserCode"></p>
<button type="button" class="secondary-button" @click="$store.adminAgent.cancelCodexLogin()">
Cancel
</button>
</div>
</template>
<template x-if="!$store.adminAgent.isCodexLoginActive && $store.adminAgent.hasCodexTokens">
<div class="admin-agent-codex-signed-in">
<p class="field-note">
Signed in<span x-show="$store.adminAgent.codexAccountIdSummary">
as <code x-text="$store.adminAgent.codexAccountIdSummary"></code></span>.
</p>
<label class="field">
<span>Model</span>
<select x-model="$store.adminAgent.settingsDraft.codexModel">
<template x-for="entry in $store.adminAgent.codexModelCatalog" :key="entry.id">
<option :value="entry.id" x-text="`${entry.id} — ${entry.description}`"></option>
</template>
<template x-if="!$store.adminAgent.isCodexSelectedModelInCatalog">
<option :value="$store.adminAgent.settingsDraft.codexModel" x-text="$store.adminAgent.settingsDraft.codexModel"></option>
</template>
</select>
<p class="field-note" x-show="!$store.adminAgent.isCodexSelectedModelInCatalog">
⚠ Not currently listed for your account. Pick another model if chats fail.
</p>
</label>
<button type="button" class="secondary-button" @click="$store.adminAgent.clearCodexLogin()">
Sign out
</button>
</div>
</template>
</div>
<div x-show="$store.adminAgent.isSettingsDraftUsingLocalProvider" x-cloak class="admin-agent-local-settings">
<div class="admin-agent-local-provider-panel">
<x-component path="/mod/_core/huggingface/config-sidebar.html" mode="admin"></x-component>