# OpenCode Config Flow: Complete Guide

This document explains how OpenCode configuration flows from user files through the plugin system to the Codex API.
## Table of Contents

- Config Loading Order
- Provider Options Flow
- Model Selection & Persistence
- Plugin Configuration
- Examples
- Best Practices
- Troubleshooting
- See Also
## Config Loading Order

OpenCode loads and merges configuration from multiple sources in this order (last wins):

1. **Global config**

   ```
   ~/.opencode/config.json
   ~/.opencode/opencode.json
   ~/.opencode/opencode.jsonc
   ```

2. **Project configs** (traversed upward from the current working directory)

   ```
   <project>/.opencode/opencode.json
   <parent>/.opencode/opencode.json
   ... (up to the worktree root)
   ```

3. **Custom config** (via environment variables)

   ```bash
   OPENCODE_CONFIG=/path/to/config.json opencode
   # or
   OPENCODE_CONFIG_CONTENT='{"model":"openai/gpt-5"}' opencode
   ```

4. **Auth configs**

   ```
   # From .well-known/opencode endpoints (for OAuth providers)
   https://auth.example.com/.well-known/opencode
   ```

Source: `tmp/opencode/packages/opencode/src/config/config.ts:26-51`
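The "last wins" merge above can be sketched as a recursive deep merge over the sources in load order. This is an illustrative sketch only, not the actual implementation in `config.ts`; the variable names (`globalCfg`, `projectCfg`, `customCfg`) are hypothetical.

```typescript
type Config = Record<string, unknown>;

// Recursively merge two configs; keys from `override` win on conflict,
// while nested objects are merged rather than replaced wholesale.
function deepMerge(base: Config, override: Config): Config {
  const result: Config = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = result[key];
    const bothObjects =
      existing !== null && typeof existing === "object" && !Array.isArray(existing) &&
      value !== null && typeof value === "object" && !Array.isArray(value);
    result[key] = bothObjects ? deepMerge(existing as Config, value as Config) : value;
  }
  return result;
}

// Sources in load order: global → project → custom (later sources win).
const globalCfg = { model: "openai/gpt-5", provider: { openai: { options: { textVerbosity: "medium" } } } };
const projectCfg = { provider: { openai: { options: { reasoningEffort: "high" } } } };
const customCfg = { model: "openai/gpt-5-codex" };

const merged = [globalCfg, projectCfg, customCfg].reduce<Config>(deepMerge, {});
console.log(JSON.stringify(merged, null, 2));
```

Note that nested objects merge key-by-key: the project config's `reasoningEffort` is added alongside the global `textVerbosity`, while the custom config's `model` replaces the global one outright.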
## Provider Options Flow

Options are merged in multiple stages before reaching the plugin:

### Stage 1: Database Defaults

Models.dev provides baseline capabilities for each provider/model.

### Stage 2: Environment Variables

```bash
export OPENAI_API_KEY="sk-..."
```

### Stage 3: Custom Loaders

Plugins can inject options via the `loader()` function.

### Stage 4: User Config (Highest Priority)

```json
{
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "textVerbosity": "low"
      }
    }
  }
}
```

**Result:** User config overrides everything else.

Source: `tmp/opencode/packages/opencode/src/provider/provider.ts:236-339`
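The precedence can be illustrated with plain object spreads, where later stages override earlier ones. Stage 2 supplies credentials rather than request options, so it is omitted here; the variable names are assumptions, not OpenCode internals.

```typescript
// Stage 1: baseline capabilities from the model database.
const databaseDefaults = { reasoningEffort: "medium", textVerbosity: "medium" };
// Stage 3: options injected by a plugin loader().
const loaderOptions = { reasoningEffort: "low" };
// Stage 4: the user's opencode.json (highest priority).
const userOptions = { textVerbosity: "low" };

// Later spreads win, so user config overrides everything else.
const effective = { ...databaseDefaults, ...loaderOptions, ...userOptions };
console.log(effective); // → { reasoningEffort: "low", textVerbosity: "low" }
```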
## Model Selection & Persistence

### Display Names vs Internal IDs

Your config (`config/full-opencode.json`):
```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex-medium": {
          "name": "GPT 5 Codex Medium (OAuth)",
          "limit": {
            "context": 272000,
            "output": 128000
          },
          "options": {
            "reasoningEffort": "medium",
            "reasoningSummary": "auto",
            "textVerbosity": "medium",
            "include": ["reasoning.encrypted_content"],
            "store": false
          }
        }
      }
    }
  }
}
```
What OpenCode uses:

- **UI display:** "GPT 5 Codex Medium (OAuth)" ✅
- **Persistence:** `provider_id: "openai"` + `model_id: "gpt-5-codex-medium"` ✅
- **Plugin lookup:** `models["gpt-5-codex-medium"]` → used to build the Codex request ✅
### TUI Persistence

The TUI stores recently used models in `~/.opencode/tui`:

```toml
[[recently_used_models]]
provider_id = "openai"
model_id = "gpt-5-codex"
last_used = 2025-10-12T10:30:00Z
```

**Key point:** Custom display names are UI-only. The underlying `id` field is what gets persisted and sent to APIs.

Source: `tmp/opencode/packages/tui/internal/app/state.go:54-79`
## Plugin Configuration

### How This Plugin Receives Config

Plugin entry point (`index.ts:64-86`):
```typescript
async loader(getAuth: () => Promise<Auth>, provider: unknown) {
  const providerConfig = provider as {
    options?: Record<string, unknown>;
    models?: UserConfig["models"];
  };
  const userConfig: UserConfig = {
    global: providerConfig?.options || {},  // Global options
    models: providerConfig?.models || {},   // Per-model options
  };
  // ... use userConfig in custom fetch()
}
```
### Config Structure

```typescript
type UserConfig = {
  global: {
    // Applied to ALL models
    reasoningEffort?: "minimal" | "low" | "medium" | "high";
    textVerbosity?: "low" | "medium" | "high";
    include?: string[];
  };
  models: {
    [modelName: string]: {
      options?: {
        // Overrides global for a specific model
        reasoningEffort?: "minimal" | "low" | "medium" | "high";
        textVerbosity?: "low" | "medium" | "high";
      };
    };
  };
};
```
### Option Precedence

For a given model, options are merged in this order:

1. Global options (`provider.openai.options`)
2. Model-specific options (`provider.openai.models[modelName].options`) ← wins

Implementation: `lib/request/request-transformer.ts:getModelConfig()`
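A minimal sketch of what such a merge might look like, assuming the simplified `UserConfig` shape above (the actual implementation lives in `request-transformer.ts`):

```typescript
type Options = Record<string, unknown>;

type UserConfig = {
  global: Options;                                // provider.openai.options
  models: Record<string, { options?: Options }>;  // provider.openai.models
};

// Model-specific keys override global keys; everything else falls through.
function getModelConfig(config: UserConfig, modelName: string): Options {
  return { ...config.global, ...(config.models[modelName]?.options ?? {}) };
}

const config: UserConfig = {
  global: { reasoningEffort: "medium", textVerbosity: "medium" },
  models: { "gpt-5-codex-high": { options: { reasoningEffort: "high" } } },
};

console.log(getModelConfig(config, "gpt-5-codex-high"));
// → { reasoningEffort: "high", textVerbosity: "medium" }
console.log(getModelConfig(config, "gpt-5-nano"));
// → { reasoningEffort: "medium", textVerbosity: "medium" } (global only)
```

A model with no entry in `models` simply inherits the global options unchanged.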
## Examples

### Example 1: Global Options Only
```json
{
  "plugin": ["opencode-openai-codex-auth"],
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "textVerbosity": "medium",
        "include": ["reasoning.encrypted_content"]
      }
    }
  }
}
```

**Result:** All OpenAI models use these options.
### Example 2: Per-Model Override

```json
{
  "plugin": ["opencode-openai-codex-auth"],
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "textVerbosity": "medium"
      },
      "models": {
        "gpt-5-codex-high": {
          "name": "GPT 5 Codex High (OAuth)",
          "options": {
            "reasoningEffort": "high",
            "reasoningSummary": "detailed"
          }
        },
        "gpt-5-nano": {
          "name": "GPT 5 Nano (OAuth)",
          "options": {
            "reasoningEffort": "minimal",
            "textVerbosity": "low"
          }
        }
      }
    }
  }
}
```
**Result:**

- `gpt-5-codex-high` uses `reasoningEffort: "high"` (overridden) + `textVerbosity: "medium"` (from global)
- `gpt-5-nano` uses `reasoningEffort: "minimal"` + `textVerbosity: "low"` (both overridden)
### Example 3: Full Configuration

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex-medium",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "reasoningSummary": "auto",
        "textVerbosity": "medium",
        "include": ["reasoning.encrypted_content"]
      },
      "models": {
        "gpt-5-codex-low": {
          "name": "GPT 5 Codex Low (OAuth)",
          "options": {
            "reasoningEffort": "low"
          }
        },
        "gpt-5-codex-high": {
          "name": "GPT 5 Codex High (OAuth)",
          "options": {
            "reasoningEffort": "high",
            "reasoningSummary": "detailed"
          }
        }
      }
    }
  }
}
```
## Best Practices

### 1. Use Per-Model Options for Variants

Instead of duplicating global options, override only what's different.
❌ Bad:

```json
{
  "models": {
    "gpt-5-low": {
      "id": "gpt-5",
      "options": {
        "reasoningEffort": "low",
        "textVerbosity": "low",
        "include": ["reasoning.encrypted_content"]
      }
    },
    "gpt-5-high": {
      "id": "gpt-5",
      "options": {
        "reasoningEffort": "high",
        "textVerbosity": "high",
        "include": ["reasoning.encrypted_content"]
      }
    }
  }
}
```
✅ Good:

```json
{
  "options": {
    "include": ["reasoning.encrypted_content"]
  },
  "models": {
    "gpt-5-low": {
      "id": "gpt-5",
      "options": {
        "reasoningEffort": "low",
        "textVerbosity": "low"
      }
    },
    "gpt-5-high": {
      "id": "gpt-5",
      "options": {
        "reasoningEffort": "high",
        "textVerbosity": "high"
      }
    }
  }
}
```
### 2. Keep Display Names Meaningful

Custom model names help you remember what each variant does:

```json
{
  "models": {
    "GPT 5 Codex - Fast & Cheap": {
      "id": "gpt-5-codex",
      "options": { "reasoningEffort": "low" }
    },
    "GPT 5 Codex - Balanced": {
      "id": "gpt-5-codex",
      "options": { "reasoningEffort": "medium" }
    },
    "GPT 5 Codex - Max Quality": {
      "id": "gpt-5-codex",
      "options": { "reasoningEffort": "high" }
    }
  }
}
```
### 3. Set Defaults at the Global Level

The most common settings should be global:

```json
{
  "options": {
    "reasoningEffort": "medium",
    "reasoningSummary": "auto",
    "textVerbosity": "medium",
    "include": ["reasoning.encrypted_content"]
  }
}
```
### 4. Use Config Files, Not Environment Variables

While you can set `CODEX_MODE=0` to disable the bridge prompt, it's better to document such settings in config files.

❌ Bad: `CODEX_MODE=0 opencode`

✅ Good: create `~/.opencode/openai-codex-auth-config.json`:

```json
{
  "codexMode": false
}
```
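As a sketch of how a plugin could consume such a file, the snippet below reads the JSON and falls back to a default when the file is absent or malformed. This is illustrative only; the actual plugin's loading logic may differ.

```typescript
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

type PluginConfig = { codexMode: boolean };

// Read the plugin config file, falling back to the default
// (Codex mode enabled) when it is missing or malformed.
function loadPluginConfig(): PluginConfig {
  const path = join(homedir(), ".opencode", "openai-codex-auth-config.json");
  try {
    return { codexMode: true, ...JSON.parse(readFileSync(path, "utf8")) };
  } catch {
    return { codexMode: true };
  }
}

console.log(loadPluginConfig());
```

Keeping the setting in a file makes it visible to teammates and version control, unlike a shell environment variable.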
## Troubleshooting

### Config Not Being Applied

- Check config file syntax with `jq . < config.json`
- Verify the config file location (use absolute paths)
- Check OpenCode logs for config load errors
- Use `OPENCODE_CONFIG_CONTENT` to test minimal configs
### Model Not Persisting

- The TUI remembers the `id` field, not the display name
- Check `~/.opencode/tui` for recently used models
- Verify your config has the correct `id` field
### Options Not Taking Effect

- Model-specific options override global options
- The plugin receives the merged config from OpenCode
- Add debug logging to verify what the plugin receives
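For the last point, a minimal sketch of such debug logging (the log prefix and config values are illustrative, and in practice the dump would go inside the plugin's `loader()`):

```typescript
// Mirrors the UserConfig shape the plugin builds; values are illustrative.
const userConfig = {
  global: { reasoningEffort: "medium", textVerbosity: "medium" },
  models: { "gpt-5-codex-high": { options: { reasoningEffort: "high" } } },
};

// Log to stderr so the dump lands in OpenCode's logs without
// interfering with stdout.
console.error("[codex-auth] userConfig:", JSON.stringify(userConfig, null, 2));
```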
## See Also

- [ARCHITECTURE.md](./ARCHITECTURE.md) - Plugin architecture and design decisions
- [OpenCode config schema](https://opencode.ai/config.json) - Official schema
- [Models.dev](https://models.dev) - Model capability database