AI Model Configuration

Configure OpenClaw with Vercel AI Gateway

Deploy OpenClaw (formerly Moltbot) with Vercel AI Gateway for edge-optimized AI with automatic failover and caching.

Why Choose Vercel AI Gateway?
  • Edge-first - Run AI closer to users with Vercel's global edge network
  • Automatic failover - If OpenAI is down, seamlessly switch to Anthropic
  • Response caching - Cache identical requests to save costs
  • Unified SDK - The same code works with any provider
  • Built-in streaming - Optimized streaming for real-time responses

Key Features

Edge Deployment

Run AI inference close to your users for lower latency.

Provider Failover

Automatic fallback between OpenAI, Anthropic, and other providers.

Built-in Caching

Cache responses to reduce costs and improve speed.

Usage Analytics

Monitor costs and usage through Vercel dashboard.

Supported Providers

OpenAI (Recommended)

gpt-4o, gpt-4o-mini, o1

Anthropic

claude-3.5-sonnet, claude-3.5-haiku

Google

gemini-1.5-pro, gemini-1.5-flash

Mistral

mistral-large, mistral-small

Setup Instructions

1. Set Up Vercel AI Gateway

Enable AI Gateway in your Vercel project:

  • Go to your Vercel dashboard
  • Navigate to Project Settings > AI
  • Enable AI Gateway
  • Add your provider API keys (OpenAI, Anthropic, etc.)

2. Get Gateway URL

Copy your AI Gateway endpoint:

https://gateway.ai.vercel.sh/v1

Your gateway URL may include your project-specific identifier.
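Before wiring the URL into OpenClaw, you can sanity-check it by hand. A minimal sketch, assuming the gateway exposes an OpenAI-compatible `/chat/completions` route and that your token lives in a `VERCEL_TOKEN` environment variable (both are assumptions, not confirmed by this page):

```python
import json
import os
import urllib.request

GATEWAY_URL = "https://gateway.ai.vercel.sh/v1"  # substitute your project-specific URL


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request against the gateway."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{GATEWAY_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('VERCEL_TOKEN', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("gpt-4o", "ping")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

A 200 response with a JSON body would confirm the gateway is reachable with your token.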

3. Configure OpenClaw

Add Vercel AI Gateway to your openclaw.json:

{
  "agent": {
    "provider": "vercel-ai",
    "model": "gpt-4o",
    "baseUrl": "https://gateway.ai.vercel.sh/v1",
    "apiKey": "your-vercel-token"
  }
}
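A missing or misspelled key here is a common source of startup errors. A small illustrative check (the required-field list below is an assumption inferred from the example above, not an official OpenClaw schema):

```python
import json

# Assumed from the example config above; adjust if your schema differs.
REQUIRED = ("provider", "model", "baseUrl", "apiKey")


def missing_agent_keys(raw: str) -> list[str]:
    """Return any required agent keys absent from an openclaw.json document."""
    agent = json.loads(raw).get("agent", {})
    return [key for key in REQUIRED if key not in agent]


partial = '{"agent": {"provider": "vercel-ai", "model": "gpt-4o"}}'
print(missing_agent_keys(partial))  # ['baseUrl', 'apiKey']
```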

4. Restart OpenClaw

Apply the configuration:

openclaw restart

Advanced Configuration with Failover
{
  "agent": {
    "provider": "vercel-ai",
    "model": "gpt-4o",
    "baseUrl": "https://gateway.ai.vercel.sh/v1",
    "apiKey": "your-vercel-token",
    "maxTokens": 4096,
    "temperature": 0.7,
    "fallback": {
      "provider": "anthropic",
      "model": "claude-3.5-sonnet"
    },
    "cache": {
      "enabled": true,
      "ttl": 3600
    }
  }
}
  • fallback - Automatic provider switch on failure
  • cache.enabled - Enable response caching
  • cache.ttl - Cache duration in seconds
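The interplay of these two settings can be sketched conceptually: a cache hit within the TTL skips the provider entirely, and a primary failure falls through to the fallback. This is an illustration of the behavior, not OpenClaw's or the gateway's actual internals:

```python
import time


def make_cached_completion(primary, fallback, ttl=3600):
    """Wrap two provider callables with failover and a TTL response cache."""
    cache: dict[str, tuple[float, str]] = {}

    def complete(prompt: str) -> str:
        hit = cache.get(prompt)
        if hit and time.time() - hit[0] < ttl:
            return hit[1]  # fresh cache entry: no provider call at all
        try:
            answer = primary(prompt)   # e.g. gpt-4o via the gateway
        except Exception:
            answer = fallback(prompt)  # e.g. claude-3.5-sonnet
        cache[prompt] = (time.time(), answer)
        return answer

    return complete


def flaky_primary(prompt):
    raise RuntimeError("provider outage")


complete = make_cached_completion(flaky_primary, lambda p: f"fallback:{p}")
print(complete("hello"))  # fallback:hello
```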

Pricing

Vercel AI Gateway pricing:

  • Gateway usage - Free (included with Vercel)
  • AI provider costs - Pass-through pricing
  • Caching savings - Up to 50% reduction

You pay the same model prices as direct API access, with caching potentially reducing costs.
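A back-of-envelope way to estimate the savings: only cache misses reach the upstream provider, so effective cost scales with the miss rate. The hit rate and per-request price below are illustrative numbers, not Vercel figures:

```python
def effective_cost(requests: int, price_per_request: float, cache_hit_rate: float) -> float:
    """Provider cost after caching: only cache misses are billed upstream."""
    return requests * (1 - cache_hit_rate) * price_per_request


# 100k repeated requests at $0.002 each, with a 50% cache hit rate
print(effective_cost(100_000, 0.002, 0.5))  # 100.0 -> half the $200 direct cost
```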

For detailed Vercel AI Gateway configuration, see the OpenClaw Vercel AI Documentation.

Vercel AI Configured!

Now connect your messaging channels with edge-optimized AI.