OpenCode Codex Auth Plugin

Access GPT-5 Codex through your ChatGPT Plus/Pro subscription


Troubleshooting Guide

Common issues and debugging techniques for the OpenCode OpenAI Codex Auth Plugin.

Authentication Issues

“401 Unauthorized” Error

Symptoms:

Error: 401 Unauthorized
Failed to access Codex API

Causes:

  1. Token expired
  2. Not authenticated yet
  3. Invalid credentials

Solutions:

1. Re-authenticate:

opencode auth login

2. Check auth file exists:

cat ~/.opencode/auth/openai.json
# Should show OAuth credentials

3. Check token expiration:

# Token has "expires" timestamp
cat ~/.opencode/auth/openai.json | jq '.expires'

# Compare to current time
date +%s000  # Current timestamp in milliseconds
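
A minimal sketch combining the two checks above into a single pass/fail answer (assumes expires is an epoch timestamp in milliseconds, matching the date +%s000 comparison):

expires=$(jq '.expires' ~/.opencode/auth/openai.json)
now=$(date +%s000)
# Expired tokens need a fresh login
if [ "$now" -gt "$expires" ]; then echo "Token expired - run: opencode auth login"; else echo "Token still valid"; fi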

Browser Doesn’t Open for OAuth

Symptoms:

The OAuth URL is printed to the console, but no browser window opens.

Solutions:

1. Manual browser open:

# The auth URL is shown in the console; copy and paste it into a browser manually

2. Check port 1455 availability:

# See if something is using the OAuth callback port
lsof -i :1455
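
If something is holding the port, a hedged way to free it (check what the PID belongs to before killing it):

lsof -ti :1455          # print only the PID(s) using the port
kill $(lsof -ti :1455)  # stop the process holding the OAuth callback port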

3. Official Codex CLI conflict: if the official Codex CLI is running, it may already be listening on port 1455. Quit it, then retry opencode auth login.

“403 Forbidden” Error

Cause: ChatGPT subscription issue

Check:

  1. Active ChatGPT Plus or Pro subscription
  2. Subscription not expired
  3. Billing is current

Solution: Visit ChatGPT and verify subscription status


Model Issues

“Model not found”

Error: Model 'openai/gpt-5-codex-low' not found

Cause 1: Config key mismatch

Check your config:

{
  "models": {
    "gpt-5-codex-low": { ... }  //  This is the key
  }
}

CLI must match exactly:

opencode run "test" --model=openai/gpt-5-codex-low  # Must match config key

Cause 2: Missing provider prefix

❌ Wrong:

model: gpt-5-codex-low

✅ Correct:

model: openai/gpt-5-codex-low

Per-Model Options Not Applied

Symptom: All models behave the same despite different reasoningEffort

Debug:

DEBUG_CODEX_PLUGIN=1 opencode run "test" --model=openai/your-model

Look for:

hasModelSpecificConfig: true  ← Should be true
resolvedConfig: { reasoningEffort: 'low', ... }  ← Should show your options

If hasModelSpecificConfig is false, the config lookup failed.

Common causes:

  1. Model name in CLI doesn’t match config key
  2. Typo in config file
  3. Wrong config file location
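
To confirm quickly which case you are in, a sketch that filters the debug output for the fields shown above (assumes they are printed to the console as in the example output):

DEBUG_CODEX_PLUGIN=1 opencode run "test" --model=openai/gpt-5-codex-low 2>&1 \
  | grep -E 'hasModelSpecificConfig|resolvedConfig'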

Multi-Turn Issues

“Item not found” Errors

Error:

AI_APICallError: Item with id 'msg_abc123' not found.
Items are not persisted when `store` is set to false.

Cause: Old plugin version (fixed in v2.1.2+)

Solution:

# Remove the cached plugin so OpenCode reinstalls the latest version on next start
(cd ~ && sed -i.bak '/"opencode-openai-codex-auth"/d' .cache/opencode/package.json && rm -rf .cache/opencode/node_modules/opencode-openai-codex-auth)

# Restart OpenCode
opencode
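
Once OpenCode has reinstalled the plugin, a hedged version check (assumes the plugin lands under ~/.cache/opencode/node_modules/, the same path the removal command targets):

jq -r '.version' ~/.cache/opencode/node_modules/opencode-openai-codex-auth/package.json
# Should print 2.1.2 or later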

Verify fix:

DEBUG_CODEX_PLUGIN=1 opencode
> write test.txt
> read test.txt
> what did you write?

Should see: Successfully removed all X message IDs

Context Not Preserved

Symptom: Model doesn’t remember previous turns

Check logs:

ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode
> first message
> second message

Verify:

# Turn 2 should have full history
cat ~/.opencode/logs/codex-plugin/request-*-after-transform.json | jq '.body.input | length'
# Should show increasing count (3, 5, 7, 9, ...)

What to check:

  1. Full message history present (not just current turn)
  2. No item_reference items (filtered out)
  3. All IDs stripped (jq '.body.input[].id' should all be null)
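
A sketch that runs all three checks over every logged request (assumes the log layout used above, with input items carrying optional id and type fields):

for f in ~/.opencode/logs/codex-plugin/request-*-after-transform.json; do
  jq '{items: (.body.input | length),
       item_references: ([.body.input[] | select(.type? == "item_reference")] | length),
       ids_stripped: ([.body.input[].id?] | all(. == null))}' "$f"
done
# items should grow each turn, item_references should be 0, ids_stripped should be true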

Request Errors

“400 Bad Request”

Check error details:

ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test"

# Read error
cat ~/.opencode/logs/codex-plugin/request-*-error-response.json
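
A hedged one-liner to pull just the error details from the most recent error log (assumes the logged response body lives under .body, as in the other log files used in this guide):

jq '.body.error // .body' "$(ls -t ~/.opencode/logs/codex-plugin/request-*-error-response.json | head -1)"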

Common causes:

  1. Invalid options for the model (e.g., reasoningEffort: "minimal" is not supported by gpt-5-codex)
  2. Malformed request body
  3. Unsupported parameter

“Rate Limit Exceeded”

Error:

Rate limit reached for gpt-5-codex

Solutions:

1. Wait for reset: Check headers in response logs:

cat ~/.opencode/logs/codex-plugin/request-*-response.json | jq '.headers["x-codex-primary-reset-after-seconds"]'

2. Switch to different model:

# If codex is rate limited, try gpt-5
opencode run "task" --model=openai/gpt-5
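
Building on solution 1, a hedged sketch that reads the reset header from the most recent response log and waits it out (falls back to 60 seconds if the header is missing):

latest=$(ls -t ~/.opencode/logs/codex-plugin/request-*-response.json | head -1)
wait=$(jq -r '.headers["x-codex-primary-reset-after-seconds"] // 60' "$latest")
echo "Sleeping ${wait}s until the rate limit resets"
sleep "$wait"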

“Context Window Exceeded”

Error:

Your input exceeds the context window

Cause: Too much conversation history

Solutions:

1. Start new conversation:

# Exit and restart OpenCode (clears history)

2. Use compact mode (if OpenCode supports it)

3. Switch to model with larger context:


GitHub API Issues

Rate Limit Exhausted

Error:

Failed to fetch instructions from GitHub: Failed to fetch latest release: 403
Using cached instructions

Cause: GitHub API rate limit (60 req/hour for unauthenticated)

Status: Fixed in v2.1.2 with 15-minute caching

Verify fix:

# Should only check GitHub once per 15 minutes
ls -lt ~/.opencode/cache/codex-instructions-meta.json

# Check lastChecked timestamp
cat ~/.opencode/cache/codex-instructions-meta.json | jq '.lastChecked'
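
To confirm the cache is doing its job, a sketch that computes its age (assumes lastChecked is an epoch timestamp in milliseconds):

now=$(date +%s000)
last=$(jq '.lastChecked' ~/.opencode/cache/codex-instructions-meta.json)
echo "Cache age: $(( (now - last) / 60000 )) minutes"  # GitHub should only be re-checked after 15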

Manual workaround (if on old version):


Debug Techniques

Enable Full Logging

# Both debug and request logging
DEBUG_CODEX_PLUGIN=1 ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test"

What you get:

    • DEBUG_CODEX_PLUGIN=1: plugin debug output in the console (config resolution, message ID handling)
    • ENABLE_PLUGIN_REQUEST_LOGGING=1: request and response JSON written to disk

Log locations:

~/.opencode/logs/codex-plugin/ (request-*-after-transform.json, request-*-response.json, request-*-error-response.json, request-*-stream-full.json)

Inspect Actual API Requests

# Run command with logging
ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5-codex-low

# Check what was sent to API
cat ~/.opencode/logs/codex-plugin/request-*-after-transform.json | jq '{
  model: .body.model,
  reasoning: .body.reasoning,
  text: .body.text,
  store: .body.store,
  include: .body.include
}'

Verify: the values match the per-model options set in your config (model, reasoning, text).

Compare with Expected

See development/TESTING.md for the expected values matrix.


Performance Issues

Slow Responses

Possible causes:

  1. reasoningEffort: "high" - Uses more computation
  2. textVerbosity: "high" - Generates longer outputs
  3. Network latency

Solutions: lower reasoningEffort and/or textVerbosity in the model's config if you don't need the extra depth or length.

High Token Usage

Monitor usage:

# Tokens shown in logs
cat ~/.opencode/logs/codex-plugin/request-*-stream-full.json | grep -o '"total_tokens":[0-9]*'
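
A hedged sketch that sums total_tokens across all stream logs for a rough session total:

grep -ho '"total_tokens":[0-9]*' ~/.opencode/logs/codex-plugin/request-*-stream-full.json \
  | cut -d: -f2 | awk '{sum += $1} END {print sum " total tokens across logged requests"}'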

Reduce tokens:

  1. Lower textVerbosity
  2. Lower reasoningEffort
  3. Shorter system prompts (disable CODEX_MODE if needed)

Getting Help

Before Opening an Issue

  1. Enable logging:
    DEBUG_CODEX_PLUGIN=1 ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "your command"
    
  2. Collect info:
    • OpenCode version: opencode --version
    • Plugin version: Check package.json or npm
    • Error logs from ~/.opencode/logs/codex-plugin/
    • Config file (redact sensitive info)
  3. Check existing issues: search the plugin's GitHub repository for similar reports before filing a new one.
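
A sketch that gathers the basics into a single file to attach to an issue (the plugin path is an assumption based on the update command above; redact anything sensitive before sharing):

{
  echo "OpenCode: $(opencode --version)"
  echo "Plugin:   $(jq -r '.version' ~/.cache/opencode/node_modules/opencode-openai-codex-auth/package.json 2>/dev/null)"
  echo "Recent logs:"
  ls -lt ~/.opencode/logs/codex-plugin/ | head
} > codex-plugin-report.txt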

Reporting Bugs

Include: the versions, logs, and redacted config gathered above, plus the exact command you ran and what you expected to happen.

Account or Subscription Issues

If you’re experiencing authentication problems:

Note: If OpenAI has flagged your account for unusual usage patterns, you may experience authentication issues. Contact OpenAI support if you believe your account has been incorrectly restricted.

If you receive errors related to terms of service violations:

This plugin cannot help with TOS violations or account restrictions. Contact OpenAI support for account-specific issues.


Next: Configuration Guide · Developer Docs · Back to Home