How to Configure OpenRouter Model Routing
OpenRouter acts as a unified gateway to multiple LLM providers and models, enabling intelligent routing based on task complexity. By configuring OpenRouter with OpenClaw, you can automatically send simple tasks to Claude Haiku, complex reasoning to Claude Opus, and everything else to Sonnet, cutting costs by 50-70% while maintaining quality.
Why This Is Hard to Do Yourself
These are the common pitfalls that trip people up.
Manual model selection
Without routing, you manually pick a model for each task or default to one expensive option
Task complexity detection
How do you know which model to use for each request? Pattern matching is brittle and hard to maintain
Fallback complexity
If your primary model is rate-limited or down, requests fail. Setting up fallbacks manually is tedious
Hidden cost drains
Simple formatting and summarization tasks sent to Opus cost roughly 50x more than they need to
Step-by-Step Guide
Sign up for OpenRouter
Create an account and get your API key.
# 1. Go to https://openrouter.ai/
# 2. Sign up with email or GitHub
# 3. Navigate to Keys section
# 4. Click "Create Key"
# 5. Copy your key (starts with sk-or-...)
# Add credits to your account:
# OpenRouter charges a ~10-20% markup over base model pricing
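Before wiring the key into OpenClaw, it can help to confirm it works with a one-off request straight to OpenRouter's OpenAI-compatible chat completions endpoint. This is only a sanity check and assumes OPENROUTER_API_KEY is exported in your shell:
# Direct sanity check against OpenRouter (not part of OpenClaw):
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3-haiku",
    "messages": [{"role": "user", "content": "Reply with the single word: ok"}]
  }'
# A JSON response with a "choices" array means the key and credits are good;
# an "error" object usually means a bad key or an empty balance.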
Configure OpenRouter in OpenClaw
Set up the OpenRouter provider.
# In .env:
OPENROUTER_API_KEY=sk-or-v1-...
OPENROUTER_APP_NAME=openclaw-app # Optional: appears in OpenRouter dashboard
# In config/providers.yaml:
providers:
  openrouter:
    api_key: ${OPENROUTER_API_KEY}
    base_url: https://openrouter.ai/api/v1
    default_model: anthropic/claude-3.5-sonnet
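The ${OPENROUTER_API_KEY} reference only resolves if the variable is actually present when OpenClaw reads the config. A quick check, assuming your setup sources .env into the environment (adjust to however OpenClaw actually loads it):
# Load .env into the current shell and confirm the key is non-empty:
set -a; source .env; set +a
if [ -n "$OPENROUTER_API_KEY" ]; then echo "OPENROUTER_API_KEY is set"; else echo "missing key" >&2; fi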
Define routing rules
Set up pattern-based model routing.
# In config/routing.yaml:
routing:
  default: anthropic/claude-3.5-sonnet
  rules:
    # Haiku for simple, structured tasks
    - patterns:
        - "summarize"
        - "format as (json|yaml|markdown)"
        - "translate to"
        - "list all"
      model: anthropic/claude-3-haiku
    # Opus for complex reasoning
    - patterns:
        - "analyze.*security"
        - "code review"
        - "debug.*issue"
        - "design.*architecture"
      model: anthropic/claude-3-opus
    # Sonnet for everything else (default)
Warning: Routing patterns are regex-based and evaluated in order. Put more specific patterns first to avoid them being caught by broader ones.
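To preview which rule a given prompt would hit, you can run the same regexes through grep in the order they appear above. This is only a rough approximation; whether OpenClaw matches case-insensitively, and against the whole message or part of it, depends on its matcher, so treat the -i flag here as an assumption:
prompt="Analyze the security implications of this code"
# Evaluate rules in config order: Haiku patterns, then Opus, then the default.
if echo "$prompt" | grep -qiE 'summarize|format as (json|yaml|markdown)|translate to|list all'; then
  echo "-> anthropic/claude-3-haiku"
elif echo "$prompt" | grep -qiE 'analyze.*security|code review|debug.*issue|design.*architecture'; then
  echo "-> anthropic/claude-3-opus"
else
  echo "-> anthropic/claude-3.5-sonnet (default)"
fi
Note that a prompt like "Summarize this security report" hits the Haiku rule first, which is exactly the ordering issue the warning above describes.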
Set up fallback models
Handle rate limits and outages gracefully.
# In config/routing.yaml (continued):
fallbacks:
  anthropic/claude-3-opus:
    - anthropic/claude-3.5-sonnet
    - openai/gpt-4
  anthropic/claude-3.5-sonnet:
    - anthropic/claude-3-haiku
    - openai/gpt-4o-mini
  anthropic/claude-3-haiku:
    - openai/gpt-4o-mini
retry:
  max_attempts: 3
  backoff_seconds: 2
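Independently of OpenClaw's fallback chains, OpenRouter itself supports request-level fallbacks: per its model-routing docs, you can pass a models array and it tries each model in order if the previous one errors or is rate-limited. A minimal sketch, sent straight to OpenRouter rather than through OpenClaw (check OpenRouter's current docs for the exact parameter behavior):
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "models": ["anthropic/claude-3-opus", "anthropic/claude-3.5-sonnet", "openai/gpt-4"],
    "messages": [{"role": "user", "content": "ping"}]
  }'
# The response's "model" field reports which model actually answered.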
Test routing with sample prompts
Verify that requests are routed to the expected models.
# Test Haiku routing (should use haiku):
curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Summarize this conversation"}'
# Test Opus routing (should use opus):
curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Analyze the security implications of this code"}'
# Check logs to verify which model was used:
tail -f ~/.openclaw/logs/routing.log
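If the log format includes the model slug for each decision (an assumption; check a few lines of routing.log first), a quick tally gives a feel for the split without waiting for the stats endpoint in the next step:
# Count occurrences of provider/model slugs in the routing log:
grep -ohE '(anthropic|openai)/[a-z0-9.-]+' ~/.openclaw/logs/routing.log | sort | uniq -c | sort -rn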
Monitor routing decisions and costs
Track which models handle which requests.
# View routing statistics:
curl http://localhost:3000/api/routing/stats
# Expected output:
# {
#   "total_requests": 1240,
#   "by_model": {
#     "anthropic/claude-3-haiku": 620,
#     "anthropic/claude-3.5-sonnet": 510,
#     "anthropic/claude-3-opus": 110
#   },
#   "cost_savings": "64.2%"
# }
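If you have jq installed, you can pull just the per-model breakdown and the savings figure from the stats payload shown above (field names taken from that example output):
# Extract the by_model counts and cost_savings from the stats endpoint:
curl -s http://localhost:3000/api/routing/stats | jq '{by_model, cost_savings}'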
Get Expert Routing Configuration
Routing rules that are too aggressive hurt quality. Rules that are too conservative waste money. Our experts analyze your actual usage patterns and configure optimal routing that balances cost and performance.
Get matched with a specialist who can help.
Sign Up for Expert Help →