How to Integrate Z.AI Models in Openclaw
Want to use Z.AI GLM models with Openclaw? This guide shows you how to connect your Z.AI API key and start using models like zai/glm-5 in minutes.
Quick Start
Z.AI provides access to GLM models through a simple API. Once configured, you can use Z.AI models alongside other providers; see the related guides on GLM Models and LiteLLM Models in Openclaw.
Step 1: Get Your Z.AI API Key
First, generate an API key from the Z.AI console. Openclaw reads this key from the ZAI_API_KEY environment variable.
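In a POSIX shell, you might export the key for the current session and optionally persist it (the key value shown is a placeholder; the ~/.bashrc persistence assumes bash):

```shell
# Set the key for the current session (placeholder value; use your real key)
export ZAI_API_KEY="sk-your-key-here"

# Optionally persist it for future sessions (assumes bash; adapt for your shell)
grep -q 'ZAI_API_KEY' ~/.bashrc 2>/dev/null || \
  echo 'export ZAI_API_KEY="sk-your-key-here"' >> ~/.bashrc
```

Avoid pasting the raw key into scripts you commit to version control.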
Step 2: Configure Openclaw
You have three options for setup:
Option A: Interactive Setup (Recommended)
openclaw onboard --auth-choice zai-api-key
Option B: Non-Interactive Setup (for scripting)
openclaw onboard --zai-api-key "$ZAI_API_KEY"
Or with full non-interactive mode:
openclaw onboard --non-interactive \
--auth-choice zai-coding-global \
--zai-api-key "$ZAI_API_KEY"
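In a provisioning script, you might guard the non-interactive call so it fails fast with a clear message when the key is missing. A minimal sketch, using the flags from the command above (require_zai_key is an illustrative helper, not an Openclaw command):

```shell
#!/usr/bin/env sh
# require_zai_key: fail with a clear message if ZAI_API_KEY is unset or empty
require_zai_key() {
  if [ -z "${ZAI_API_KEY:-}" ]; then
    echo "error: ZAI_API_KEY is not set" >&2
    return 1
  fi
}

if require_zai_key; then
  openclaw onboard --non-interactive \
    --auth-choice zai-coding-global \
    --zai-api-key "$ZAI_API_KEY"
else
  echo "Export ZAI_API_KEY and re-run this script." >&2
fi
```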
Option C: Manual Configuration
Edit your ~/.openclaw/openclaw.json file directly:
{
  "env": { "ZAI_API_KEY": "sk-..." },
  "agents": {
    "defaults": {
      "model": { "primary": "zai/glm-5" }
    }
  }
}
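After hand-editing, it can help to confirm the file still parses as JSON before restarting anything. A sketch using python3's bundled json.tool (the path is assumed; adjust it to match your install):

```shell
# valid_json: succeed only when the given file parses as JSON
# (uses python3's stdlib json.tool; any JSON validator works)
valid_json() {
  python3 -m json.tool "$1" >/dev/null 2>&1
}

config="$HOME/.openclaw/openclaw.json"   # adjust to your install
if [ -f "$config" ] && valid_json "$config"; then
  echo "openclaw.json parses cleanly"
else
  echo "openclaw.json is missing or not valid JSON" >&2
fi
```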
Testing Your Setup
Once configured, test your connection:
openclaw models list | grep zai
You should see Z.AI models available. Try a quick prompt:
openclaw ask --model zai/glm-5 "Hello, can you confirm you're working?"
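The two checks above can be combined into a small smoke-test script. A sketch (has_zai_models is an illustrative helper, not an Openclaw command):

```shell
# has_zai_models: succeed when the piped model listing contains a Z.AI entry
has_zai_models() {
  grep -q 'zai/'
}

if openclaw models list 2>/dev/null | has_zai_models; then
  openclaw ask --model zai/glm-5 "Hello, can you confirm you're working?"
else
  echo "No Z.AI models listed; re-check your configuration." >&2
fi
```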
Troubleshooting
- Authentication errors: Verify your ZAI_API_KEY is correct and exported in your environment.
- Model not found: Ensure the Gateway has been restarted after configuration changes with openclaw gateway restart.
- Rate limiting: Check your Z.AI console for usage limits and quotas.
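A quick diagnostic for the first two items might look like this (check_env is an illustrative helper; rate limits can only be inspected in the Z.AI console):

```shell
# check_env: report whether the API key is visible to the current shell
check_env() {
  if [ -n "${ZAI_API_KEY:-}" ]; then
    echo "ZAI_API_KEY: set"
  else
    echo "ZAI_API_KEY: missing"
  fi
}

check_env
# Pick up config changes; ignore the error if the gateway is not running
openclaw gateway restart 2>/dev/null || true
```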
Best Practices
- Store your API key in environment variables, not hardcoded in config files
- Use ~/.openclaw/.env for sensitive credentials if running as a daemon
- Set a fallback model in case Z.AI is temporarily unavailable
- Monitor usage through the Z.AI console to track costs
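If you follow the .env tip above, the file might look like this (the location and the plain KEY=value dotenv format are assumptions; check how your Openclaw version loads environment files):

```shell
# ~/.openclaw/.env -- one KEY=value per line (assumed dotenv format)
ZAI_API_KEY=sk-your-key-here
```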
That's it! You're now ready to use Z.AI GLM models with Openclaw.