
AI Providers

erode supports three AI providers: Gemini (default), OpenAI, and Anthropic (experimental). Set the provider with the AI_PROVIDER environment variable.
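For example, switching from the default Gemini provider to OpenAI is a one-line change. This sketch assumes the provider is selected by the lowercase values `gemini`, `openai`, and `anthropic`; check your erode version's documentation for the exact accepted values.

```shell
# Select the AI provider for erode (lowercase value assumed here).
export AI_PROVIDER=openai
```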


Each provider uses two model tiers to balance cost and quality:

  • Fast model: Used for Stage 1 (component resolution) and Stage 2 (dependency scan). These are cheaper, faster models suited for extraction tasks.
  • Advanced model: Used for Stage 3 (PR analysis) and Stage 4 (model generation). These are stronger models that handle the deeper architectural reasoning.
Gemini:

Tier       Default model
Fast       gemini-2.5-flash
Advanced   gemini-2.5-flash

OpenAI:

Tier       Default model
Fast       gpt-4.1-mini
Advanced   gpt-4.1

Anthropic:

Tier       Default model
Fast       claude-haiku-4-5-20251001
Advanced   claude-sonnet-4-5-20250929

You can override the default models with environment variables:

Variable                    Description
GEMINI_FAST_MODEL           Gemini model for the fast tier (Stages 1–2)
GEMINI_ADVANCED_MODEL       Gemini model for the advanced tier (Stages 3–4)
OPENAI_FAST_MODEL           OpenAI model for the fast tier (Stages 1–2)
OPENAI_ADVANCED_MODEL       OpenAI model for the advanced tier (Stages 3–4)
ANTHROPIC_FAST_MODEL        Anthropic model for the fast tier (Stages 1–2)
ANTHROPIC_ADVANCED_MODEL    Anthropic model for the advanced tier (Stages 3–4)
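As a sketch, overriding both OpenAI tiers might look like this. The variable names come from the table above; the model IDs are only examples, not recommendations.

```shell
# Override the per-tier OpenAI models.
# Fast tier handles Stages 1-2 (extraction); advanced tier handles
# Stages 3-4 (deeper architectural reasoning).
export OPENAI_FAST_MODEL=gpt-4.1-mini
export OPENAI_ADVANCED_MODEL=gpt-4.1
```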
Each provider also has a configurable request timeout:

Variable             Default
GEMINI_TIMEOUT       60000 ms
OPENAI_TIMEOUT       60000 ms
ANTHROPIC_TIMEOUT    60000 ms

These control the maximum wait time for each API request. Increase them if you experience timeouts with large diffs.
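For instance, to double the Gemini timeout to two minutes, set the variable to a value in milliseconds, matching the unit of the 60000 ms default above:

```shell
# Allow up to 120 seconds per Gemini API request (default is 60000 ms).
export GEMINI_TIMEOUT=120000
```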

Gemini is the default provider and is generally the cheapest of the three per request. OpenAI offers strong analysis quality with broad model availability. Anthropic support is experimental, so start with Gemini or OpenAI during evaluation.