# Troubleshooting Guide

Common issues and debugging techniques for the OpenCode OpenAI Codex Auth Plugin.
## Authentication Issues

### "401 Unauthorized" Error

**Symptoms:**

```
Error: 401 Unauthorized
Failed to access Codex API
```

**Causes:**

- Token expired
- Not authenticated yet
- Invalid credentials

**Solutions:**

1. Re-authenticate:

   ```bash
   opencode auth login
   ```

2. Check that the auth file exists:

   ```bash
   cat ~/.opencode/auth/openai.json
   # Should show OAuth credentials
   ```

3. Check token expiration:

   ```bash
   # The token has an "expires" timestamp
   jq '.expires' ~/.opencode/auth/openai.json

   # Compare to the current time
   date +%s000  # Current timestamp in milliseconds
   ```
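The comparison can be scripted. A minimal sketch, using a hypothetical sample file in place of the real `~/.opencode/auth/openai.json` (the `expires` value below is made up):

```shell
# Hypothetical sample standing in for ~/.opencode/auth/openai.json
cat > /tmp/openai-auth-sample.json <<'EOF'
{"type": "oauth", "expires": 1700000000000}
EOF

expires=$(jq -r '.expires' /tmp/openai-auth-sample.json)
now=$(date +%s000)  # current time in milliseconds

if [ "$now" -ge "$expires" ]; then
  echo "token expired - run: opencode auth login"
else
  echo "token still valid"
fi
```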
### Browser Doesn't Open for OAuth

**Symptoms:**

- `opencode auth login` succeeds but no browser window opens
- OAuth callback times out

**Solutions:**

1. Open the browser manually:

   The auth URL is printed to the console - copy and paste it into a browser.

2. Check port 1455 availability:

   ```bash
   # See if something is using the OAuth callback port
   lsof -i :1455
   ```

3. Check for an official Codex CLI conflict:

   - Stop the Codex CLI if it is running
   - Both tools use port 1455 for the OAuth callback
### "403 Forbidden" Error

**Cause:** ChatGPT subscription issue

**Check:**

- Active ChatGPT Plus or Pro subscription
- Subscription not expired
- Billing is current

**Solution:** Visit ChatGPT and verify your subscription status.
## Model Issues

### "Model not found"

```
Error: Model 'openai/gpt-5-codex-low' not found
```

**Cause 1: Config key mismatch**

Check your config:

```json
{
  "models": {
    "gpt-5-codex-low": { ... }  // ← This is the key
  }
}
```

The CLI model name must match the config key exactly:

```bash
opencode run "test" --model=openai/gpt-5-codex-low  # Must match config key
```

**Cause 2: Missing provider prefix**

❌ Wrong:

```
model: gpt-5-codex-low
```

✅ Correct:

```
model: openai/gpt-5-codex-low
```
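If you are unsure which keys your config actually defines, you can list them with `jq`. A sketch using a hypothetical config file (the real path depends on your OpenCode setup):

```shell
# Hypothetical config standing in for your real OpenCode config file
cat > /tmp/opencode-config-sample.json <<'EOF'
{"models": {"gpt-5-codex-low": {"reasoningEffort": "low"}}}
EOF

# Each key printed here must appear after "openai/" in the --model flag
jq -r '.models | keys[]' /tmp/opencode-config-sample.json
```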
### Per-Model Options Not Applied

**Symptom:** All models behave the same despite different `reasoningEffort` settings

**Debug:**

```bash
DEBUG_CODEX_PLUGIN=1 opencode run "test" --model=openai/your-model
```

Look for:

```
hasModelSpecificConfig: true   ← Should be true
resolvedConfig: { reasoningEffort: 'low', ... }   ← Should show your options
```

If `hasModelSpecificConfig` is `false`, the config lookup failed. Common causes:

- Model name in the CLI doesn't match the config key
- Typo in the config file
- Wrong config file location
## Multi-Turn Issues

### "Item not found" Errors

**Error:**

```
AI_APICallError: Item with id 'msg_abc123' not found.
Items are not persisted when `store` is set to false.
```

**Cause:** Old plugin version (fixed in v2.1.2+)

**Solution:**

```bash
# Update the plugin by clearing the cached copy
(cd ~ && sed -i.bak '/"opencode-openai-codex-auth"/d' .cache/opencode/package.json && rm -rf .cache/opencode/node_modules/opencode-openai-codex-auth)

# Restart OpenCode
opencode
```

**Verify fix:**

```bash
DEBUG_CODEX_PLUGIN=1 opencode
> write test.txt
> read test.txt
> what did you write?
```

You should see: `Successfully removed all X message IDs`
### Context Not Preserved

**Symptom:** Model doesn't remember previous turns

**Check logs:**

```bash
ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode
> first message
> second message
```

**Verify:**

```bash
# Turn 2 should have the full history
jq '.body.input | length' ~/.opencode/logs/codex-plugin/request-*-after-transform.json
# Should show an increasing count (3, 5, 7, 9, ...)
```

**What to check:**

- Full message history present (not just the current turn)
- No `item_reference` items (they are filtered out)
- All IDs stripped (`jq '.body.input[].id'` should return all `null`)
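The length and ID checks can be exercised against a sample body. A sketch using a made-up log file with the same shape as the plugin's `after-transform` logs:

```shell
# Made-up request body mirroring the shape of a real after-transform log
cat > /tmp/request-sample.json <<'EOF'
{"body": {"input": [
  {"type": "message", "role": "user", "id": null, "content": "first"},
  {"type": "message", "role": "assistant", "id": null, "content": "hi"},
  {"type": "message", "role": "user", "id": null, "content": "second"}
]}}
EOF

# History length should grow each turn (3, 5, 7, ...)
jq '.body.input | length' /tmp/request-sample.json  # → 3

# Every id should be null once the plugin strips them
jq '[.body.input[].id] | all(. == null)' /tmp/request-sample.json  # → true
```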
## Request Errors

### "400 Bad Request"

**Check error details:**

```bash
ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test"

# Read the error
cat ~/.opencode/logs/codex-plugin/request-*-error-response.json
```

**Common causes:**

- Invalid options for the model (e.g., `minimal` for gpt-5-codex)
- Malformed request body
- Unsupported parameter
### "Rate Limit Exceeded"

**Error:**

```
Rate limit reached for gpt-5-codex
```

**Solutions:**

1. Wait for the reset. Check the headers in the response logs:

   ```bash
   jq '.headers["x-codex-primary-reset-after-seconds"]' ~/.opencode/logs/codex-plugin/request-*-response.json
   ```

2. Switch to a different model:

   ```bash
   # If codex is rate limited, try gpt-5
   opencode run "task" --model=openai/gpt-5
   ```
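Reading the reset header and waiting on it can be combined in a small script. A sketch against a made-up response log (the header value below is fabricated):

```shell
# Made-up response log; real files live under ~/.opencode/logs/codex-plugin/
cat > /tmp/response-sample.json <<'EOF'
{"headers": {"x-codex-primary-reset-after-seconds": "42"}}
EOF

reset=$(jq -r '.headers["x-codex-primary-reset-after-seconds"]' /tmp/response-sample.json)
echo "rate limited - retry in ${reset}s"
# sleep "$reset"   # uncomment to actually wait before retrying
```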
### "Context Window Exceeded"

**Error:**

```
Your input exceeds the context window
```

**Cause:** Too much conversation history

**Solutions:**

1. Start a new conversation (exit and restart OpenCode to clear the history)
2. Use compact mode (if your OpenCode version supports it)
3. Switch to a model with a larger context window (gpt-5-codex has a larger context than gpt-5-nano)
## GitHub API Issues

### Rate Limit Exhausted

**Error:**

```
Failed to fetch instructions from GitHub: Failed to fetch latest release: 403
Using cached instructions
```

**Cause:** GitHub API rate limit (60 requests/hour for unauthenticated clients)

**Status:** Fixed in v2.1.2 with 15-minute caching

**Verify fix:**

```bash
# Should only check GitHub once per 15 minutes
ls -lt ~/.opencode/cache/codex-instructions-meta.json

# Check the lastChecked timestamp
jq '.lastChecked' ~/.opencode/cache/codex-instructions-meta.json
```

**Manual workaround (if on an old version):**

- Wait 1 hour for the rate limit to reset
- Or rely on cached instructions (automatic fallback)
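The 15-minute freshness window can be checked directly from `lastChecked`. A sketch with a made-up metadata file, assuming `lastChecked` is a millisecond timestamp like the auth file's `expires` field:

```shell
# Made-up metadata file; the real one is ~/.opencode/cache/codex-instructions-meta.json
printf '{"lastChecked": %s}\n' "$(date +%s000)" > /tmp/codex-meta-sample.json

last=$(jq -r '.lastChecked' /tmp/codex-meta-sample.json)
now=$(date +%s000)
age_min=$(( (now - last) / 60000 ))  # assumes lastChecked is in milliseconds

if [ "$age_min" -lt 15 ]; then
  echo "cache fresh - GitHub will not be hit"
else
  echo "cache stale - next request re-checks GitHub"
fi
```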
## Debug Techniques

### Enable Full Logging

```bash
# Both debug and request logging
DEBUG_CODEX_PLUGIN=1 ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test"
```

**What you get:**

- Console: debug messages showing config resolution
- Files: complete request/response logs

**Log locations:**

```
~/.opencode/logs/codex-plugin/request-*-before-transform.json
~/.opencode/logs/codex-plugin/request-*-after-transform.json
~/.opencode/logs/codex-plugin/request-*-response.json
```
### Inspect Actual API Requests

```bash
# Run a command with logging
ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5-codex-low

# Check what was sent to the API
jq '{
  model: .body.model,
  reasoning: .body.reasoning,
  text: .body.text,
  store: .body.store,
  include: .body.include
}' ~/.opencode/logs/codex-plugin/request-*-after-transform.json
```

**Verify:**

- `model`: normalized correctly?
- `reasoning.effort`: matches your config?
- `text.verbosity`: matches your config?
- `store`: should be `false`
- `include`: should contain `reasoning.encrypted_content`
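The `store` and `include` checks can be run as a single `jq` expression. A sketch over a made-up after-transform log with the expected shape:

```shell
# Made-up log mirroring the shape the plugin writes after transforming a request
cat > /tmp/after-transform-sample.json <<'EOF'
{"body": {
  "model": "gpt-5-codex",
  "reasoning": {"effort": "low"},
  "text": {"verbosity": "medium"},
  "store": false,
  "include": ["reasoning.encrypted_content"]
}}
EOF

# store must be false and include must carry the encrypted reasoning content
jq '.body.store == false and (.body.include | index("reasoning.encrypted_content") != null)' \
  /tmp/after-transform-sample.json  # → true
```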
### Compare with Expected

See development/TESTING.md for the expected values matrix.
## Performance Issues

### Slow Responses

**Possible causes:**

- `reasoningEffort: "high"` - uses more computation
- `textVerbosity: "high"` - generates longer outputs
- Network latency

**Solutions:**

- Use a lower reasoning effort for faster responses
- Check your network connection
- Try a different time of day (server load varies)
### High Token Usage

**Monitor usage:**

```bash
# Token counts appear in the stream logs
grep -o '"total_tokens":[0-9]*' ~/.opencode/logs/codex-plugin/request-*-stream-full.json
```

**Reduce tokens:**

- Lower `textVerbosity`
- Lower `reasoningEffort`
- Use shorter system prompts (disable CODEX_MODE if needed)
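To see total usage for a session rather than per-request counts, the `grep` shown under *Monitor usage* can be extended to sum across all stream logs. A sketch over made-up log files (the `total_tokens` values are fabricated):

```shell
# Made-up stream logs; real ones live under ~/.opencode/logs/codex-plugin/
mkdir -p /tmp/codex-logs-sample
printf '{"usage":{"total_tokens":120}}\n' > /tmp/codex-logs-sample/request-1-stream-full.json
printf '{"usage":{"total_tokens":80}}\n'  > /tmp/codex-logs-sample/request-2-stream-full.json

# Extract each count, strip the key, and sum
grep -ho '"total_tokens":[0-9]*' /tmp/codex-logs-sample/request-*-stream-full.json \
  | cut -d: -f2 \
  | awk '{sum += $1} END {print sum}'  # → 200
```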
## Getting Help

### Before Opening an Issue

- Enable logging:

  ```bash
  DEBUG_CODEX_PLUGIN=1 ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "your command"
  ```

- Collect info:
  - OpenCode version: `opencode --version`
  - Plugin version: check `package.json` or npm
  - Error logs from `~/.opencode/logs/codex-plugin/`
  - Config file (redact sensitive info)
- Check existing issues
### Reporting Bugs

**Include:**

- ✅ Error message
- ✅ Steps to reproduce
- ✅ Config file (redacted)
- ✅ Log files
- ✅ OpenCode version
- ✅ Plugin version
### Account or Subscription Issues

If you're experiencing authentication problems:

- **Ensure an active subscription:** Verify your ChatGPT Plus/Pro subscription is active in ChatGPT Settings
- **Check the subscription type:** This plugin requires Plus or Pro (the Free tier is not supported)
- **Review usage limits:** Check whether you've exceeded your subscription's usage limits
- **Revoke and re-authorize:**
  - Revoke access: ChatGPT Settings → Authorized Apps
  - Remove local tokens: `opencode auth logout`
  - Re-authenticate: `opencode auth login`

**Note:** If OpenAI has flagged your account for unusual usage patterns, you may experience authentication issues. Contact OpenAI support if you believe your account has been incorrectly restricted.
### Compliance-Related Issues

If you receive errors related to terms of service violations:

- **Review your usage:** Ensure you're using the plugin for personal development only
- **Check rate limits:** Verify you haven't exceeded usage limits
- **Avoid automation:** Do not use the plugin for high-volume automated requests
- **Commercial use:** Switch to the OpenAI Platform API for commercial applications

This plugin cannot help with TOS violations or account restrictions. Contact OpenAI support for account-specific issues.
Next: Configuration Guide | Developer Docs | Back to Home