Integrate GLM Models in Openclaw


Quick Start

GLM models from Z.AI bring powerful reasoning capabilities to your Openclaw setup through a streamlined integration process.

  • What you get: access to state-of-the-art Chinese LLMs with competitive benchmark performance.
  • The problem: configuration complexity often blocks developers from leveraging these models effectively.
  • The solution: a working GLM integration that auto-configures thinking modes and tool streaming.

GLM Model Integration

Step 1: Get Your API Key

Generate an API key from the Z.AI console. This key authenticates your requests to the GLM model family.
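If you prefer environment variables to the config file, a minimal sketch (the `ZAI_API_KEY` variable name matches the openclaw.json example later in this guide; the key value is a placeholder):

```shell
# Export the key for the current shell session; replace the placeholder
# with the key generated in the Z.AI console.
export ZAI_API_KEY="sk-..."

# Sanity check before running any openclaw commands.
[ -n "$ZAI_API_KEY" ] && echo "ZAI_API_KEY is set"
```

Add the export to your shell profile if you want it to persist across sessions.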

Step 2: Configure via CLI

Run the interactive onboarding wizard for the fastest setup:

openclaw onboard --auth-choice zai-api-key

For automated deployments, use non-interactive mode:

openclaw onboard --zai-api-key "$ZAI_API_KEY"
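In a pipeline it is worth guarding against a missing secret before invoking the command above. A sketch, where the final echo stands in for the real `openclaw onboard` call and the fallback value exists only so the snippet runs standalone:

```shell
# In a real pipeline ZAI_API_KEY is injected from the secret store; the
# fallback below only keeps this sketch runnable on its own.
ZAI_API_KEY="${ZAI_API_KEY:-sk-placeholder}"

# Fail fast with a readable message rather than letting onboarding
# error out halfway through.
if [ -z "$ZAI_API_KEY" ]; then
  echo "ZAI_API_KEY is not set; aborting onboarding" >&2
  exit 1
fi

# Replace the echo with the documented command:
#   openclaw onboard --zai-api-key "$ZAI_API_KEY"
echo "key present; ready to onboard"
```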

Step 3: Manual Configuration (Optional)

Edit your ~/.openclaw/openclaw.json for persistent settings:

{
  "env": { "ZAI_API_KEY": "sk-..." },
  "agents": {
    "defaults": {
      "model": { "primary": "zai/glm-5" }
    }
  }
}

Working Example

After configuration, test your setup:

openclaw ask "Explain quantum computing in simple terms"

Expected response: A clear, structured explanation generated by the GLM model.

GLM-Specific Features

Thinking Mode

Thinking mode is enabled by default for GLM-4.x models. Disable it for a single run with:

--thinking off

Or configure permanently in openclaw.json:

agents.defaults.models["zai/<model>"].params.thinking = false
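Spelled out as JSON in ~/.openclaw/openclaw.json, the dotted path above corresponds to the following (keep `<model>` as a placeholder for your model name; the nesting here is inferred from that path, so verify it against your config):

```json
{
  "agents": {
    "defaults": {
      "models": {
        "zai/<model>": {
          "params": { "thinking": false }
        }
      }
    }
  }
}
```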

Tool Streaming

Tool call streaming is enabled by default. To disable:

params.tool_stream = false
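As JSON, this would sit under the same per-model `params` block as the thinking toggle (an assumption based on the shared `params.` prefix; `<model>` is again a placeholder):

```json
{
  "agents": {
    "defaults": {
      "models": {
        "zai/<model>": {
          "params": { "tool_stream": false }
        }
      }
    }
  }
}
```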

Troubleshooting & Best Practices

  • Alias normalization: Openclaw accepts z.ai/* and z-ai/*, auto-converting to zai/*.
  • Coding endpoints: Use --auth-choice zai-coding-global or zai-coding-cn for specialized coding plans.
  • Cerebras alternative: Set primary model to cerebras/zai-glm-4.7 for Cerebras-hosted GLM.
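For the Cerebras-hosted route, the primary-model setting from Step 3 becomes (same file, ~/.openclaw/openclaw.json):

```json
{
  "agents": {
    "defaults": {
      "model": { "primary": "cerebras/zai-glm-4.7" }
    }
  }
}
```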

GLM integration expands your model options with competitive alternatives to Western LLMs, complete with automatic feature configuration.