**Screenshot:** Settings → AI Providers panel showing four provider cards (Anthropic, OpenAI, OpenRouter, Local Models), each with a connection status indicator, a masked API key field, and a “Test Connection” button. Anthropic should show “Connected” in green.

Supported Providers

ADE supports four AI provider categories. You can configure any combination — agents can use different providers depending on the task.

Anthropic (Claude)

Default and recommended. Models: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5. Best for complex reasoning, long-context tasks, and code review.

OpenAI

Supported via the OpenAI API. Models: gpt-4o, gpt-4o-mini, o1, o3. Best for structured output, code completion, and tool use.

OpenRouter

Route to hundreds of models through a single API key. Useful for cost optimization, model comparison, and access to open-source and other third-party models (Llama, Mistral, Gemini).

Local Models

Via Ollama or LM Studio. Point ADE to a local OpenAI-compatible endpoint — no API key required. Best for offline use or privacy-sensitive codebases.

Provider Comparison

| Provider | Best For | Context Window | Requires API Key |
| --- | --- | --- | --- |
| Anthropic Claude | Complex reasoning, long-context, code review | Up to 200K tokens | Yes |
| OpenAI GPT | Structured output, code completion, tool use | Up to 128K tokens | Yes |
| OpenRouter | Cost optimization, model comparison | Varies by model | Yes |
| Ollama / LM Studio | Offline, privacy-sensitive work | Varies by model | No |

Where to Configure Providers

Provider configuration lives in three places:
  1. local.secret.yaml — API keys and endpoint URLs. Never committed to git.
  2. ade.yaml — Default model and budget defaults. Committed to git and shared with your team.
  3. Settings → AI Providers — GUI for adding and rotating keys, testing connections, and setting per-provider defaults. Writes to local.secret.yaml on save.
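One way to picture how the two files combine at load time — shared settings from ade.yaml overlaid with machine-local secrets — is a recursive dictionary merge. This is a sketch of the idea only, not ADE's actual loader; the function name and merge semantics are assumptions.

```python
def merge_config(shared: dict, secret: dict) -> dict:
    """Overlay secret (local.secret.yaml) values onto shared (ade.yaml) values."""
    merged = dict(shared)
    for key, value in secret.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)  # merge nested sections
        else:
            merged[key] = value  # secrets win on conflicts
    return merged

shared = {"ai": {"defaultModel": "claude-opus-4-6"}}   # from ade.yaml (committed)
secret = {"anthropic": {"apiKey": "sk-ant-..."}}       # from local.secret.yaml (never committed)
```

The key point is the split itself: team-shared defaults live in the committed file, credentials only in the local one.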

Setting Up Anthropic (Claude)

Anthropic is the default provider. Claude models are the recommended choice for most ADE workflows due to their strong code understanding and large context windows.

Step 1: Get an API key

Go to console.anthropic.com and create an API key. Keys start with sk-ant-.

Step 2: Add the key to local.secret.yaml

```yaml
# .ade/local.secret.yaml
anthropic:
  apiKey: "sk-ant-..."
```
Or use Settings → AI Providers → Anthropic → Add API Key.

Step 3: Set the default model in ade.yaml

```yaml
# .ade/ade.yaml
ai:
  defaultModel: "claude-opus-4-6"
```
Available models: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5.

Step 4: Test the connection

In Settings → AI Providers → Anthropic, click Test Connection. ADE sends a minimal request to the API and reports success or an error with the HTTP status code.

Setting Up OpenAI

Step 1: Get an API key

Go to platform.openai.com and create an API key. Keys start with sk-.

Step 2: Add the key to local.secret.yaml

```yaml
# .ade/local.secret.yaml
openai:
  apiKey: "sk-..."
```

Step 3: Optionally set OpenAI as default

```yaml
# .ade/ade.yaml
ai:
  defaultModel: "gpt-4o"
```
Available models: gpt-4o, gpt-4o-mini, o1, o3.

Setting Up OpenRouter

OpenRouter provides a single API key that routes to hundreds of upstream models. This is useful for comparing models, accessing open-source models, or optimizing cost by routing different agent types to cheaper models.
```yaml
# .ade/local.secret.yaml
openrouter:
  apiKey: "sk-or-..."
  defaultModel: "anthropic/claude-opus-4-6"   # OpenRouter model identifier
```
OpenRouter model identifiers use the format <provider>/<model-name>. Find the full list at openrouter.ai/models. When using OpenRouter, the model string you set in ade.yaml should use the OpenRouter format, prefixed with openrouter: — for example: openrouter:anthropic/claude-opus-4-6.
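
Putting that together, a project default routed through OpenRouter might look like this (illustrative):

```yaml
# .ade/ade.yaml
ai:
  defaultModel: "openrouter:anthropic/claude-opus-4-6"   # openrouter:<provider>/<model-name>
```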

Setting Up Local Models (Ollama)

Local models run entirely on your machine. No API key is needed. ADE communicates with locally-running Ollama or LM Studio through their OpenAI-compatible API endpoints.

Step 1: Install and start Ollama

Install Ollama and pull a model:
```shell
brew install ollama
ollama pull codellama:13b
ollama serve
```
Ollama listens on http://localhost:11434 by default.

Step 2: Configure the local endpoint in local.secret.yaml

```yaml
# .ade/local.secret.yaml
localModels:
  baseUrl: "http://localhost:11434/v1"
  # No apiKey needed for local Ollama
```
For LM Studio, use http://localhost:1234/v1 (or your configured port).
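
For LM Studio, the same block simply points at its port (illustrative):

```yaml
# .ade/local.secret.yaml
localModels:
  baseUrl: "http://localhost:1234/v1"   # LM Studio's default server port
```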

Step 3: Set a local model as default in ade.yaml

```yaml
# .ade/ade.yaml
ai:
  defaultModel: "local:codellama:13b"
  # The "local:" prefix tells ADE to use the localModels endpoint
```
Local models have significantly smaller context windows than cloud models (typically 4K–32K tokens versus 128K–200K). ADE’s context compaction will trigger more frequently. For complex missions or large codebases, cloud models are strongly recommended.

Model Reference

**Screenshot:** The model selection dropdown in the Mission creation dialog, showing models grouped by provider (Anthropic, OpenAI, OpenRouter, Local) with context window sizes and cost-per-token indicators beside each model name.

Anthropic (Claude)

| Model | Context Window | Best For |
| --- | --- | --- |
| claude-opus-4-6 | 200K tokens | Complex reasoning, architecture review, long-context analysis |
| claude-sonnet-4-6 | 200K tokens | General coding tasks, balanced speed and quality |
| claude-haiku-4-5 | 200K tokens | Fast, lightweight tasks; automation runs; high-volume tool calls |

OpenAI

| Model | Context Window | Best For |
| --- | --- | --- |
| gpt-4o | 128K tokens | Structured output, JSON generation, tool use |
| gpt-4o-mini | 128K tokens | Cost-efficient tasks, simple completions |
| o1 | 128K tokens | Deep reasoning, math, multi-step logic |
| o3 | 128K tokens | Advanced reasoning with extended compute |

Model Selection by Context

ADE lets you assign different models to different agent roles. This is the recommended approach for cost efficiency: use Opus for planning and architecture, Sonnet for execution, and Haiku for automation runs.
```yaml
# .ade/ade.yaml
ai:
  defaultModel: "claude-sonnet-4-6"      # Fallback for all agents
  routingRules:
    - role: "orchestrator"               # Mission planning agent
      model: "claude-opus-4-6"
    - role: "worker"                     # Mission execution agents
      model: "claude-sonnet-4-6"
    - role: "automation"                 # Automation runs (cost-sensitive)
      model: "claude-haiku-4-5"
    - role: "validator"                  # Mission validation agent
      model: "claude-sonnet-4-6"
```
For solo developers running many automations, routing automation runs to claude-haiku-4-5 and reserving claude-opus-4-6 for mission planning can reduce monthly spend by 60–80% with minimal quality impact on routine tasks.
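
Rule matching can be pictured as a first-match lookup with a fallback to defaultModel. The function name and exact semantics below are assumptions for illustration, not ADE's documented API:

```python
def resolve_model(role: str, routing_rules: list, default_model: str) -> str:
    """Return the model for an agent role: first matching rule wins, else the fallback."""
    for rule in routing_rules:
        if rule["role"] == role:
            return rule["model"]
    return default_model

# Mirrors two of the routingRules entries from the config above.
rules = [
    {"role": "orchestrator", "model": "claude-opus-4-6"},
    {"role": "automation", "model": "claude-haiku-4-5"},
]
```

A role with no matching rule (for example, a hypothetical "reviewer") simply gets the project default.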

Budget Configuration

Budget caps prevent runaway spend from long-running agents or automation loops. Caps are enforced in the main process — when a session reaches its cap, the agent receives a stop signal and the session ends gracefully.
```yaml
# .ade/ade.yaml
ai:
  defaultBudgetPerSession: 1.00         # USD per chat session
  defaultBudgetPerMission: 10.00        # USD per mission (all workers combined)
  defaultBudgetPerAutomation: 0.50      # USD per automation trigger
  monthlyBudgetCap: 100.00              # Hard monthly ceiling across all agents
```
  • defaultBudgetPerSession — Applied to chat sessions in the Agent Chat pane. When the session’s cumulative cost reaches this value, the agent receives a stop signal. You can raise the cap for the current session in the chat header without editing config.
  • defaultBudgetPerMission — Applied to the entire mission run, summing across all workers. If the combined worker spend reaches the cap during execution, the orchestrator suspends remaining tasks and surfaces an intervention request asking whether to extend the budget or abandon the mission.
  • defaultBudgetPerAutomation — Applied per automation trigger event. Each time an automation fires, it gets a fresh budget for that run. Useful for keeping PR review automations economical.
  • monthlyBudgetCap — A hard ceiling across all providers, all agents, and all sessions. Once reached, all agent activity pauses; ADE shows a banner and sends a notification. The cap resets on the first day of each calendar month. You can raise or reset it in Settings → Budget.
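
The per-session enforcement can be sketched as a running total checked against the cap. The class and method names are illustrative, not ADE's actual internals:

```python
class SessionBudget:
    """Tracks cumulative session spend against a USD cap (names are illustrative)."""

    def __init__(self, cap_usd: float):
        self.cap = cap_usd
        self.spent = 0.0
        self.stopped = False

    def record(self, cost_usd: float) -> None:
        """Add a completed request's cost; flag the session once the cap is reached."""
        self.spent += cost_usd
        if self.spent >= self.cap:
            self.stopped = True  # stands in for the stop signal sent to the agent
```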

Context Compaction

ADE monitors context usage for every active agent session. When usage reaches the configured threshold (default 70%), ADE automatically compacts the context to free space. How compaction works:
  1. ADE identifies the oldest message segments in the context window
  2. A compaction request is sent to summarize those segments into a compact narrative
  3. The summary replaces the original messages in the context
  4. Key details (function signatures, file paths, error messages, agent decisions) are preserved verbatim in the summary
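
The trigger-and-summarize loop above can be sketched as follows. This is a simplified illustration: the summarize callable stands in for the model call that produces the compact narrative, and the function names are assumptions.

```python
def should_compact(tokens_used: int, context_window: int, threshold: float = 0.70) -> bool:
    """Compaction fires once context usage crosses the configured threshold."""
    return tokens_used / context_window >= threshold

def compact(messages: list, summarize, keep_recent: int = 4) -> list:
    """Steps 1-3: fold the oldest segments into one summary message, keep recent ones."""
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    if not old:
        return messages  # nothing old enough to fold away
    summary = {"role": "system", "content": summarize(old)}
    return [summary] + recent
```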
```yaml
# .ade/ade.yaml — or override in local.yaml
ai:
  contextCompaction:
    threshold: 0.70          # Trigger at 70% usage (0.0–1.0)
    strategy: "summarize"    # "summarize" | "truncate" | "manual"
    preserveSystemPrompt: true
```
Some providers handle large contexts better than others. If you notice quality degradation at high context usage with a specific provider, lower the compaction threshold for that provider. Per-provider thresholds can be set in Settings → AI → Context Management.

API Key Security

**Screenshot:** The API key field in Settings → AI Providers showing a masked key (“sk-ant-••••••••••••1234”) with a “Rotate” button and a “Revoke” button, alongside a small info badge reading “Stored in local.secret.yaml — never committed to git.”
API keys are handled according to ADE’s trust boundary model:
  • Storage: Keys live only in local.secret.yaml on disk. ADE reads them into main process memory at startup.
  • In-memory handling: Once loaded, keys are held in the main process (Electron main) only. They are never passed to the renderer process.
  • SQLite: Keys are never written to ADE’s SQLite database. The database stores session metadata, costs, and audit records — not credentials.
  • Log redaction: Any key value that appears in log output (e.g., from a misconfigured MCP tool) is automatically redacted and replaced with ***.
  • IPC bridge: The preload bridge does not expose any IPC channel that returns raw API key values to the renderer.
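
The log-redaction rule can be pictured as a simple pattern match over each log line. The pattern below is illustrative only; real key formats vary and ADE's actual redaction rules are not specified here:

```python
import re

# Matches Anthropic-style (sk-ant-), OpenRouter-style (sk-or-), and plain sk- tokens.
# Illustrative pattern, not ADE's actual rule set.
KEY_PATTERN = re.compile(r"sk-(?:ant-|or-)?[A-Za-z0-9_-]{8,}")

def redact(line: str) -> str:
    """Replace anything that looks like an API key with ***."""
    return KEY_PATTERN.sub("***", line)
```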

Troubleshooting

**Authentication errors.** Your API key is invalid or has been revoked. Verify the key value in local.secret.yaml — ensure there are no leading or trailing spaces and that the key belongs to the correct provider. Anthropic keys start with sk-ant-; OpenAI keys start with sk-.
**Rate limit errors.** You are hitting the provider’s rate limit. ADE automatically retries with exponential backoff (up to 3 retries). If rate limits are consistently hit, consider: (1) switching to a less popular model tier, (2) upgrading your API plan, or (3) routing some agents through OpenRouter, which aggregates capacity.
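
The retry behavior can be sketched like this; the exception and function names are illustrative, not ADE's internals:

```python
import time

class RateLimitError(Exception):
    """Stand-in for a provider's rate-limit (HTTP 429) response."""

def with_backoff(call, retries: int = 3, base_delay: float = 1.0):
    """Retry `call` up to `retries` times, doubling the delay each attempt."""
    for attempt in range(retries + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == retries:
                raise  # retries exhausted; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```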
**Provider quota exhausted.** Your provider account has exhausted its monthly quota. This is separate from ADE’s budget cap — it is a limit set by the provider. Log in to your provider’s billing dashboard to review and increase your quota. ADE will resume automatically once the provider accepts requests again.
**Unknown or invalid model.** The model string in ade.yaml does not match a valid model ID for the configured provider. Check the exact model IDs in the Model Reference table above. Model IDs are case-sensitive.
**Local model connection failures.** If using Ollama, confirm that ollama serve is running and that the model is pulled (ollama list). Verify the baseUrl in local.secret.yaml matches the port Ollama is listening on (default 11434). Use Test Connection in Settings → AI Providers → Local to get the exact error.
**Context limit exceeded.** The agent’s conversation has exceeded the model’s context limit even after compaction. Options: (1) lower the compaction threshold so compaction fires earlier, (2) start a new session (the Lane Pack will provide continuity), (3) switch to a model with a larger context window.

What’s Next

MCP Servers

Connect external MCP servers to give agents tools beyond ADE’s built-ins.

Settings

Configure usage graphs, notification thresholds, and per-agent model overrides.

Missions

Learn how model routing rules apply during mission planning and execution phases.

Permissions

Understand how API keys are protected by ADE’s trust boundary architecture.