Integrate LiteLLM Models in Openclaw
Quick Start
LiteLLM is a unified proxy that lets you access multiple LLM providers through a single OpenAI-compatible API. This guide shows you how to integrate LiteLLM with Openclaw to route requests to Claude, GPT, and other models seamlessly.
Step 1: Start the LiteLLM Proxy
First, ensure your LiteLLM proxy is running locally or remotely. By default, LiteLLM runs on http://localhost:4000.
pip install 'litellm[proxy]'
litellm --model claude-opus-4-6
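To serve several models behind one proxy, LiteLLM also accepts a YAML config file. A minimal sketch follows; the model names and environment-variable names here are illustrative, so check LiteLLM's own documentation for the exact schema your version supports:

```yaml
# config.yaml — expose two logical model names through one proxy
model_list:
  - model_name: claude-opus-4-6
    litellm_params:
      model: anthropic/claude-opus-4-6      # upstream provider/model
      api_key: os.environ/ANTHROPIC_API_KEY # read the key from the environment
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```

Start the proxy with litellm --config config.yaml, and both model names become routable through the single endpoint on port 4000.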
Step 2: Configure Openclaw
Method A: Interactive Setup (Recommended)
You can quickly configure LiteLLM using Openclaw's onboarding wizard:
Openclaw onboard --auth-choice litellm-api-key
Method B: Manual Configuration
If you prefer to define the exact models available through your proxy, edit your ~/.Openclaw/Openclaw.json configuration file. You must set the "api" field to "openai-completions" and provide the model definitions.
{
  "models": {
    "providers": {
      "litellm": {
        "baseUrl": "http://localhost:4000",
        "apiKey": "${LITELLM_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "claude-opus-4-6",
            "name": "Claude Opus 4.6",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 200000,
            "maxTokens": 64000
          },
          {
            "id": "gpt-4o",
            "name": "GPT-4o",
            "reasoning": false,
            "input": ["text", "image"],
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "litellm/claude-opus-4-6" }
    }
  }
}
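Note that "${LITELLM_API_KEY}" is a placeholder resolved from your environment, not a literal key. The sketch below illustrates that kind of substitution; the expand_env helper is hypothetical, written for illustration, and is not Openclaw's actual config loader:

```python
import json
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

# A trimmed slice of the provider config, with the placeholder unresolved.
config_text = '{"apiKey": "${LITELLM_API_KEY}", "baseUrl": "http://localhost:4000"}'

os.environ["LITELLM_API_KEY"] = "sk-example"  # illustrative key for the demo
config = {k: expand_env(v) for k, v in json.loads(config_text).items()}
print(config["apiKey"])  # → sk-example
```

Keep the real key out of the file itself; export LITELLM_API_KEY in your shell profile so the placeholder resolves at load time.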
Step 3: Set Your Primary Model
Once configured, set your preferred model via CLI:
Openclaw models set litellm/claude-opus-4-6
Testing Your Setup
Send a test message to verify the integration:
{ "message": "Hello from LiteLLM via Openclaw!" }
Troubleshooting & Best Practices
- Connection refused: Ensure LiteLLM proxy is running on the configured port.
- Authentication errors: Verify your LITELLM_API_KEY is correctly set.
- Model not found: Check that the model ID in your config matches the LiteLLM proxy configuration.
- Performance: For production, consider running LiteLLM on a dedicated server rather than localhost.
For more Openclaw integrations, see the vLLM Models in Openclaw and GLM Models in Openclaw guides.