OpenCode Codex Auth Plugin

Access GPT-5 Codex through your ChatGPT Plus/Pro subscription

Complete Test Scenarios

Comprehensive testing matrix for all config scenarios and backwards compatibility.

Test Scenarios Matrix

Scenario 1: Default OpenCode Models (No Custom Config)

Config:

{
  "plugin": ["opencode-openai-codex-auth"]
}

Available Models: (from OpenCode’s models.dev database)

Test Cases:

| User Selects | Plugin Receives | Normalizes To | Config Lookup | API Receives | Result |
|---|---|---|---|---|---|
| openai/gpt-5 | "gpt-5" | "gpt-5" | models["gpt-5"] → undefined | "gpt-5" | ✅ Uses global options |
| openai/gpt-5-codex | "gpt-5-codex" | "gpt-5-codex" | models["gpt-5-codex"] → undefined | "gpt-5-codex" | ✅ Uses global options |
| openai/gpt-5-mini | "gpt-5-mini" | "gpt-5" | models["gpt-5-mini"] → undefined | "gpt-5" | ✅ Uses global options |
| openai/gpt-5-nano | "gpt-5-nano" | "gpt-5" | models["gpt-5-nano"] → undefined | "gpt-5" | ✅ Uses global options |
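The fallback path the table describes can be sketched as a small helper (the name `resolveOptions` is illustrative, not the plugin's actual API): default model IDs have no entry under `models`, so the lookup yields `undefined` and the global `options` pass through unchanged.

```typescript
type Options = { reasoningEffort?: string; textVerbosity?: string };
type ModelEntry = { name?: string; options?: Options };

// Illustrative resolution: a per-model entry overrides global options;
// a missing entry (the default-model case) leaves global options intact.
function resolveOptions(
  modelId: string,
  globalOptions: Options,
  models: Record<string, ModelEntry> = {}
): Options {
  return { ...globalOptions, ...(models[modelId]?.options ?? {}) };
}
```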

Expected Behavior:


Scenario 2: Custom Config with Preset Names (New Style)

Config:

{
  "plugin": ["opencode-openai-codex-auth"],
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium"
      },
      "models": {
        "gpt-5-codex-low": {
          "name": "GPT 5 Codex Low (OAuth)",
          "options": { "reasoningEffort": "low" }
        },
        "gpt-5-codex-high": {
          "name": "GPT 5 Codex High (OAuth)",
          "options": { "reasoningEffort": "high" }
        }
      }
    }
  }
}

Test Cases:

| User Selects | Plugin Receives | Config Lookup | Resolved Options | API Receives | Result |
|---|---|---|---|---|---|
| openai/gpt-5-codex-low | "gpt-5-codex-low" | Found ✅ | { reasoningEffort: "low" } | "gpt-5-codex" | ✅ Per-model |
| openai/gpt-5-codex-high | "gpt-5-codex-high" | Found ✅ | { reasoningEffort: "high" } | "gpt-5-codex" | ✅ Per-model |
| openai/gpt-5-codex | "gpt-5-codex" | Not found | { reasoningEffort: "medium" } | "gpt-5-codex" | ✅ Global |
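The override behavior in this table can be sketched directly from the Scenario 2 config (the helper `effortFor` is hypothetical, not the plugin's real internals): preset keys like `gpt-5-codex-low` carry their own options, and any model without a matching key falls back to the provider-level `options`.

```typescript
type Options = { reasoningEffort?: string };

// Mirrors the Scenario 2 config above.
const globalOptions: Options = { reasoningEffort: "medium" };
const models: Record<string, { name?: string; options?: Options }> = {
  "gpt-5-codex-low": { options: { reasoningEffort: "low" } },
  "gpt-5-codex-high": { options: { reasoningEffort: "high" } },
};

// Hypothetical helper: a preset entry wins, otherwise global options apply.
function effortFor(modelId: string): string | undefined {
  return (models[modelId]?.options ?? globalOptions).reasoningEffort;
}
```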

Expected Behavior:


Scenario 3: Old Config (Backwards Compatibility)

Config:

{
  "plugin": ["opencode-openai-codex-auth"],
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium"
      },
      "models": {
        "GPT 5 Codex Low (ChatGPT Subscription)": {
          "id": "gpt-5-codex",
          "options": { "reasoningEffort": "low" }
        }
      }
    }
  }
}

Test Cases:

| User Selects | Plugin Receives | Config Lookup | Resolved Options | API Receives | Result |
|---|---|---|---|---|---|
| openai/GPT 5 Codex Low (ChatGPT Subscription) | "GPT 5 Codex Low (ChatGPT Subscription)" | Found ✅ | { reasoningEffort: "low" } | "gpt-5-codex" | ✅ Per-model |
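The legacy path can be sketched as follows (the helper `apiModelFor` is an assumption, not the plugin's actual internals): old-style entries are keyed by display name and carry an explicit `id` naming the API model, so the lookup succeeds on the long key and the `id` decides what the API receives.

```typescript
interface LegacyEntry {
  id?: string; // old style: explicit API model ID
  options?: { reasoningEffort?: string };
}

// Hypothetical resolution for old-style configs: prefer the explicit "id",
// otherwise normalize the selected name ("codex" selects the codex model).
function apiModelFor(selected: string, models: Record<string, LegacyEntry>): string {
  const entry = models[selected];
  if (entry?.id) return entry.id;
  return selected.toLowerCase().includes("codex") ? "gpt-5-codex" : "gpt-5";
}
```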

Expected Behavior:


Scenario 4: Mixed Config (Default + Custom)

Config:

{
  "plugin": ["opencode-openai-codex-auth"],
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex-low": {
          "name": "GPT 5 Codex Low (OAuth)",
          "options": { "reasoningEffort": "low" }
        }
      }
    }
  }
}

Available Models:

Test Cases:

| User Selects | Config Lookup | Uses Options | Result |
|---|---|---|---|
| openai/gpt-5-codex-low | Found ✅ | Per-model | ✅ Custom config |
| openai/gpt-5-codex | Not found | Global | ✅ Default model |
| openai/gpt-5 | Not found | Global | ✅ Default model |

Expected Behavior:


Scenario 5: Edge Cases

5a: Model Name with Uppercase

Config:

{
  "models": {
    "GPT-5-CODEX-HIGH": {
      "options": { "reasoningEffort": "high" }
    }
  }
}

Test:

User selects: openai/GPT-5-CODEX-HIGH
Plugin receives: "GPT-5-CODEX-HIGH"
normalizeModel: "GPT-5-CODEX-HIGH" → "gpt-5-codex" ✅ (includes "codex")
Config lookup: models["GPT-5-CODEX-HIGH"] → Found ✅
API receives: "gpt-5-codex" ✅

Result: ✅ Works (case-insensitive includes())
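The case-insensitive match can be sketched as lowercasing before `includes()`:

```typescript
// Case-insensitive substring check: uppercase IDs normalize the same way
// as lowercase ones.
const matchesCodex = (model: string): boolean =>
  model.toLowerCase().includes("codex");
```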


5b: Model Name with Special Characters

Config:

{
  "models": {
    "my-gpt5-codex-variant": {
      "options": { "reasoningEffort": "high" }
    }
  }
}

Test:

User selects: openai/my-gpt5-codex-variant
Plugin receives: "my-gpt5-codex-variant"
normalizeModel: "my-gpt5-codex-variant" → "gpt-5-codex" ✅ (includes "codex")
Config lookup: models["my-gpt5-codex-variant"] → Found ✅
API receives: "gpt-5-codex" ✅

Result: ✅ Works (normalization handles it)


5c: No Config, No Model Specified

Config:

{
  "plugin": ["opencode-openai-codex-auth"]
}

Test:

User selects: (none - uses OpenCode default)
Plugin receives: undefined or default from OpenCode
normalizeModel: undefined → "gpt-5" ✅ (fallback)
Config lookup: models[undefined] → undefined
API receives: "gpt-5" ✅

Result: ✅ Works (safe fallback)


5d: Only gpt-5 in Name (No codex)

Config:

{
  "models": {
    "my-gpt-5-variant": {
      "options": { "reasoningEffort": "high" }
    }
  }
}

Test:

User selects: openai/my-gpt-5-variant
Plugin receives: "my-gpt-5-variant"
normalizeModel: "my-gpt-5-variant" → "gpt-5" ✅ (includes "gpt-5", not "codex")
Config lookup: models["my-gpt-5-variant"] → Found ✅
API receives: "gpt-5" ✅

Result: ✅ Works (correct model selected)


Scenario 6: Multi-Turn Conversation (store:false Test)

Config: Any

Test Sequence:

Turn 1: > write hello to test.txt
Turn 2: > read the file
Turn 3: > what did you write?
Turn 4: > now delete it

What Plugin Should Do:

| Turn | Input Has IDs? | Filter Result | Encrypted Content | Result |
|---|---|---|---|---|
| 1 | No | No filtering needed | Received in response | ✅ Works |
| 2 | Yes (from Turn 1) | ALL removed ✅ | Sent back in request | ✅ Works |
| 3 | Yes (from Turn 1-2) | ALL removed ✅ | Sent back in request | ✅ Works |
| 4 | Yes (from Turn 1-3) | ALL removed ✅ | Sent back in request | ✅ Works |
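With `store: false` the server keeps no message history, so IDs from earlier turns must not be echoed back. A minimal sketch of the filtering step (the input item shape is an assumption for illustration):

```typescript
interface InputItem {
  id?: string;
  role?: string;
  content?: unknown;
}

// Drop the "id" field from every input item; everything else (including
// encrypted reasoning content) passes through untouched.
function filterInput(input: InputItem[]): InputItem[] {
  return input.map(({ id: _omitted, ...rest }) => rest);
}
```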

Expected Behavior:


Backwards Compatibility Testing

Test Matrix

| Plugin Version | Config Format | Expected Result |
|---|---|---|
| Old (<2.1.2) | Long names + id | ❌ Per-model options broken, ID errors |
| Old (<2.1.2) | Short names | ❌ Per-model options broken, ID errors |
| New (2.1.2+) | Long names + id | ALL FIXED |
| New (2.1.2+) | Short names | ALL FIXED |
| New (2.1.2+) | Short names (no id) | OPTIMAL |

Backwards Compatibility Tests

Test 1: Old Plugin User Upgrades

Before (Plugin v2.1.1):

{
  "models": {
    "GPT 5 Codex Low (ChatGPT Subscription)": {
      "id": "gpt-5-codex",
      "options": { "reasoningEffort": "low" }
    }
  }
}

After (Plugin v2.1.2):

Result: Works without config changes


Test 2: New Config with Preset Names

Config:

{
  "models": {
    "gpt-5-codex-low": {
      "name": "GPT 5 Codex Low (OAuth)",
      "options": { "reasoningEffort": "low" }
    }
  }
}

Expected:

Result: Optimal experience


Test 3: Minimal Config (No Custom Models)

Config:

{
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex"
}

Expected:

Result: Works out of the box


Debug Logging Test Cases

Enable Debug Mode

DEBUG_CODEX_PLUGIN=1 opencode run "test" --model=openai/gpt-5-codex-low

Expected Debug Output

Case 1: Custom Model with Config

[openai-codex-plugin] Debug logging ENABLED
[openai-codex-plugin] Model config lookup: "gpt-5-codex-low" → normalized to "gpt-5-codex" for API {
  hasModelSpecificConfig: true,
  resolvedConfig: {
    reasoningEffort: 'low',
    textVerbosity: 'medium',
    reasoningSummary: 'auto',
    include: ['reasoning.encrypted_content']
  }
}
[openai-codex-plugin] Filtering 0 message IDs from input: []

Verify: hasModelSpecificConfig: true confirms per-model options found


Case 2: Default Model (No Custom Config)

DEBUG_CODEX_PLUGIN=1 opencode run "test" --model=openai/gpt-5-codex
[openai-codex-plugin] Debug logging ENABLED
[openai-codex-plugin] Model config lookup: "gpt-5-codex" → normalized to "gpt-5-codex" for API {
  hasModelSpecificConfig: false,
  resolvedConfig: {
    reasoningEffort: 'medium',
    textVerbosity: 'medium',
    reasoningSummary: 'auto',
    include: ['reasoning.encrypted_content']
  }
}
[openai-codex-plugin] Filtering 0 message IDs from input: []

Verify: hasModelSpecificConfig: false confirms using global options
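The gating behind this output can be sketched as a flag checked once at startup (the factory `makeDebugLog` is illustrative; the actual plugin reads `DEBUG_CODEX_PLUGIN` from the environment):

```typescript
// Illustrative logger factory: messages are emitted only when the debug
// flag (e.g. DEBUG_CODEX_PLUGIN=1 in the environment) is enabled.
function makeDebugLog(enabled: boolean) {
  return (message: string, detail?: unknown): void => {
    if (!enabled) return;
    console.log(`[openai-codex-plugin] ${message}`, detail ?? "");
  };
}
```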


Case 3: Multi-Turn with ID Filtering

[openai-codex-plugin] Filtering 3 message IDs from input: ['msg_abc123', 'rs_xyz789', 'msg_def456']
[openai-codex-plugin] Successfully removed all 3 message IDs

Verify: All IDs removed, no warnings


Case 4: Warning if IDs Leak (Should Never Happen)

[openai-codex-plugin] WARNING: 1 IDs still present after filtering: ['msg_abc123']

This would indicate a bug; it should never appear.


Integration Test Plan

Manual Testing Procedure

Step 1: Fresh Install Test

# 1. Clear cache
(cd ~ && rm -rf .cache/opencode/node_modules/opencode-openai-codex-auth)

# 2. Use minimal config
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex"
}
EOF

# 3. Test default model
DEBUG_CODEX_PLUGIN=1 opencode run "write hello world to test.txt"

Verify:


Step 2: Custom Config Test

# Update config with custom models
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "plugin": ["opencode-openai-codex-auth"],
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex-low": {
          "name": "GPT 5 Codex Low (OAuth)",
          "options": { "reasoningEffort": "low" }
        },
        "gpt-5-codex-high": {
          "name": "GPT 5 Codex High (OAuth)",
          "options": { "reasoningEffort": "high" }
        }
      }
    }
  }
}
EOF

# Test per-model options
DEBUG_CODEX_PLUGIN=1 opencode run "test low" --model=openai/gpt-5-codex-low
DEBUG_CODEX_PLUGIN=1 opencode run "test high" --model=openai/gpt-5-codex-high

Verify:


Step 3: Multi-Turn Test (Critical for store:false)

DEBUG_CODEX_PLUGIN=1 opencode --model=openai/gpt-5-codex-medium
> write "test content" to file1.txt
> read file1.txt
> what did you just write?
> create file2.txt with different content
> compare the two files

Verify:


Step 4: Model Switching Test

DEBUG_CODEX_PLUGIN=1 opencode
> /model openai/gpt-5-codex-low
> write hello to test.txt
> /model openai/gpt-5-codex-high
> write goodbye to test2.txt

Verify:


Step 5: TUI Persistence Test

# 1. Start opencode
opencode --model=openai/gpt-5-codex-high

# 2. Run a command
> write test

# 3. Exit (ctrl+c)

# 4. Restart
opencode

# 5. Check which model is selected
> /model

Verify:


Normalization Edge Cases

Test: normalizeModel() Coverage

normalizeModel("gpt-5-codex")          // → "gpt-5-codex" ✅
normalizeModel("gpt-5-codex-low")      // → "gpt-5-codex" ✅
normalizeModel("GPT-5-CODEX-HIGH")     // → "gpt-5-codex" ✅
normalizeModel("my-codex-model")       // → "gpt-5-codex" ✅
normalizeModel("gpt-5")                // → "gpt-5" ✅
normalizeModel("gpt-5-mini")           // → "gpt-5" ✅
normalizeModel("gpt-5-nano")           // → "gpt-5" ✅
normalizeModel("GPT 5 High")           // → "gpt-5" ✅
normalizeModel(undefined)              // → "gpt-5" ✅
normalizeModel("random-model")         // → "gpt-5" ✅ (fallback)

Implementation:

export function normalizeModel(model: string | undefined): string {
  if (!model) return "gpt-5";
  const m = model.toLowerCase();                  // Case-insensitive matching
  if (m.includes("codex")) return "gpt-5-codex";  // Check codex first
  if (m.includes("gpt-5")) return "gpt-5";        // Then gpt-5
  return "gpt-5";  // Safe fallback
}

Why this works:


Expected Failures (These Should Error)

Invalid Model Selection

opencode run "test" --model=openai/claude-3.5

Expected: ❌ Error before the plugin runs (OpenCode rejects the unknown model)

Missing Authentication

# Without running: opencode auth login
opencode run "test" --model=openai/gpt-5-codex

Expected: ❌ 401 Unauthorized error


Success Criteria

All Tests Must Pass

No Errors


Automated Test Suggestions

Unit Tests (Future)

describe('normalizeModel', () => {
  test('handles all default models', () => {
    expect(normalizeModel('gpt-5')).toBe('gpt-5')
    expect(normalizeModel('gpt-5-codex')).toBe('gpt-5-codex')
    expect(normalizeModel('gpt-5-mini')).toBe('gpt-5')
    expect(normalizeModel('gpt-5-nano')).toBe('gpt-5')
  })

  test('handles custom preset names', () => {
    expect(normalizeModel('gpt-5-codex-low')).toBe('gpt-5-codex')
    expect(normalizeModel('gpt-5-high')).toBe('gpt-5')
  })

  test('handles legacy names', () => {
    expect(normalizeModel('GPT 5 Codex Low (ChatGPT Subscription)')).toBe('gpt-5-codex')
  })

  test('handles edge cases', () => {
    expect(normalizeModel(undefined)).toBe('gpt-5')
    expect(normalizeModel('random')).toBe('gpt-5')
  })
})

describe('getModelConfig', () => {
  test('returns per-model options when found', () => {
    const config = getModelConfig('gpt-5-codex-low', {
      global: { reasoningEffort: 'medium' },
      models: {
        'gpt-5-codex-low': {
          options: { reasoningEffort: 'low' }
        }
      }
    })
    expect(config.reasoningEffort).toBe('low')
  })

  test('returns global options when model not in config', () => {
    const config = getModelConfig('gpt-5-codex', {
      global: { reasoningEffort: 'medium' },
      models: {}
    })
    expect(config.reasoningEffort).toBe('medium')
  })
})

describe('filterInput', () => {
  test('removes all message IDs', () => {
    const input = [
      { id: 'msg_123', role: 'user', content: [] },
      { id: 'rs_456', role: 'assistant', content: [] },
      { role: 'user', content: [] }  // No ID
    ]
    const result = filterInput(input)
    expect(result.every(item => !item.id)).toBe(true)
  })
})

See Also