Model Failover
OpenClaw (formerly Moltbot) supports automatic model failover: when your primary model fails, it switches to a backup provider so service continues uninterrupted.
OpenClaw monitors API responses and automatically retries with fallback models when errors occur. The switch is transparent to users.
Key capabilities:
- Automatic detection: detects rate limits, timeouts, and errors
- Seamless switching: users don't notice the failover
- Recovery: automatically returns to the primary model when it is available again
Rate Limiting
When you hit API rate limits, automatically switch to a backup provider.
Service Outage
If a provider is down, seamlessly fail over to an alternative.
Cost Optimization
Start with cheaper models and fall back to premium only when needed.
Load Balancing
Distribute requests across multiple providers for better performance.
Configure your failover chain in the config file:
```json
{
  "models": {
    "failover": {
      "enabled": true,
      "retry_count": 2,
      "retry_delay": 1000,
      "chain": [
        {
          "provider": "anthropic",
          "model": "claude-3-5-sonnet-20241022",
          "api_key": "your-anthropic-key"
        },
        {
          "provider": "openai",
          "model": "gpt-4o",
          "api_key": "your-openai-key"
        },
        {
          "provider": "ollama",
          "model": "llama3.3:70b",
          "base_url": "http://localhost:11434"
        }
      ]
    }
  }
}
```

- `retry_count`: attempts before moving to the next model
- `retry_delay`: milliseconds between retries
- `chain`: ordered list of fallback models
You can mix and match models from any supported provider.
Configure different failover chains for different types of tasks:
```json
{
  "models": {
    "routing": {
      "coding": {
        "chain": ["claude-3-5-sonnet", "gpt-4o", "codellama"]
      },
      "writing": {
        "chain": ["claude-3-opus", "gpt-4o", "claude-3-sonnet"]
      },
      "quick_tasks": {
        "chain": ["gpt-3.5-turbo", "claude-3-haiku", "ollama:mistral"]
      },
      "default": {
        "chain": ["claude-3-5-sonnet", "gpt-4o", "ollama:llama3"]
      }
    }
  }
}
```

Configuration Examples
High Availability
Ensure your AI assistant is always available, even during provider outages.
Cost-Conscious Setup
Use free/cheap models first, premium models only when needed.
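A cost-conscious chain uses the same failover config shown above, with cheaper models listed first; the specific models and key placeholders here are illustrative, not a recommendation:

```json
{
  "models": {
    "failover": {
      "enabled": true,
      "chain": [
        { "provider": "openai", "model": "gpt-3.5-turbo", "api_key": "your-openai-key" },
        { "provider": "anthropic", "model": "claude-3-haiku", "api_key": "your-anthropic-key" },
        { "provider": "anthropic", "model": "claude-3-5-sonnet-20241022", "api_key": "your-anthropic-key" }
      ]
    }
  }
}
```

Because the chain is ordered, the premium model is only reached when the cheaper entries fail or hit rate limits.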
Quality First
Start with the best models, fall back to slightly lower quality if unavailable.
Local + Cloud
Try local models first for privacy, fall back to cloud when needed.
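A local-first chain is the same idea with an Ollama entry ahead of the cloud providers (models shown are illustrative):

```json
{
  "models": {
    "failover": {
      "enabled": true,
      "chain": [
        { "provider": "ollama", "model": "llama3.3:70b", "base_url": "http://localhost:11434" },
        { "provider": "anthropic", "model": "claude-3-5-sonnet-20241022", "api_key": "your-anthropic-key" }
      ]
    }
  }
}
```

Requests stay on the local model until it errors out or is unreachable, and only then leave your machine.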
OpenClaw tracks model health and performance:
- Response times: tracks latency for each provider
- Error rates: monitors failures and adjusts routing
- Cost tracking: logs costs per model for optimization
- Alerts: notifies you when failover is triggered
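If you want failover events surfaced explicitly, monitoring options can live alongside the failover block. The keys below are an illustrative sketch, not a definitive schema; check the configuration guide for the exact option names:

```json
{
  "models": {
    "failover": {
      "monitoring": {
        "log_costs": true,
        "alert_on_failover": true
      }
    }
  }
}
```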
Full Documentation
Read the complete model failover configuration guide.