How to Integrate Together AI Models in Openclaw


Together AI Openclaw Integration

Together AI provides access to high-performance open-source models like Kimi K2.5 through a fast, scalable inference API. This guide walks you through integrating Together AI models into your Openclaw setup.

Quick Start

Before you begin, grab your TOGETHER_API_KEY from the Together AI dashboard. You'll need this to authenticate requests.

Interactive Setup

Run the Openclaw onboarding wizard and select Together AI when prompted:

openclaw onboard --auth-choice together-api-key

Paste your API key at the prompt. The wizard writes the configuration to ~/.openclaw/openclaw.json automatically.

Non-Interactive Setup

For CI/CD or automated deployments, use the non-interactive mode:

openclaw onboard --non-interactive --mode local --auth-choice together-api-key --together-api-key "$TOGETHER_API_KEY"

Configure Your Primary Model

Edit your ~/.openclaw/openclaw.json to set Kimi K2.5 as your default model:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "together/moonshotai/Kimi-K2.5"
      }
    }
  }
}
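If you're starting from an empty config, one way to write and sanity-check this setting from the shell is sketched below (it overwrites the file, so merge the keys by hand if you already have other settings; validation uses `jq`, which must be installed):

```shell
# Write a minimal config containing only the primary model setting,
# then read it back with jq to confirm the JSON is well-formed.
mkdir -p ~/.openclaw
cat > ~/.openclaw/openclaw.json <<'EOF'
{ "agents": { "defaults": { "model": { "primary": "together/moonshotai/Kimi-K2.5" } } } }
EOF
jq -r '.agents.defaults.model.primary' ~/.openclaw/openclaw.json
# prints: together/moonshotai/Kimi-K2.5
```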

If you're exploring other AI providers, check out our guides on OpenRouter Models and LiteLLM Models for more options.

Environment Setup for Daemon Mode

If you run Openclaw Gateway as a background service via systemd or launchd, ensure the API key is accessible:

echo "TOGETHER_API_KEY=your_key_here" >> ~/.openclaw/.env
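Since this file holds a secret, it's worth creating it explicitly and restricting read access to your user. A minimal sketch (with `your_key_here` as a placeholder for the real key):

```shell
# Create the directory if it doesn't exist yet, append the key,
# and lock the env file down so only the owner can read it.
mkdir -p ~/.openclaw
echo "TOGETHER_API_KEY=your_key_here" >> ~/.openclaw/.env
chmod 600 ~/.openclaw/.env
```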

Alternatively, configure env.shellEnv in your Openclaw config to export the variable.
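For a systemd user service specifically, another option is a drop-in that points the unit at the env file via `EnvironmentFile=` (`%h` expands to your home directory in user units). The unit name below is a placeholder; match it to your actual gateway service:

```ini
# ~/.config/systemd/user/openclaw-gateway.service.d/override.conf
# (unit name is hypothetical; substitute your real service name)
[Service]
EnvironmentFile=%h/.openclaw/.env
```

After adding the drop-in, run systemctl --user daemon-reload and restart the service so the variable is picked up.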

Test Your Setup

Verify the integration works by sending a test prompt:

openclaw ask "Explain quantum computing in one sentence"

You should receive a response from Kimi K2.5 via Together AI's infrastructure.

Troubleshooting

  • 401 Unauthorized: Double-check your API key is valid and not expired. Regenerate at the Together AI dashboard if needed.
  • Model not found: Ensure the model ID uses the together/ prefix followed by the full model path, e.g. together/moonshotai/Kimi-K2.5.
  • Rate limits: Together AI enforces rate limits based on your plan. Check your usage dashboard if requests fail with 429 errors.
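To separate Openclaw issues from Together AI issues, you can hit Together AI's OpenAI-compatible API directly. This sketch assumes the standard api.together.xyz endpoint and a TOGETHER_API_KEY already exported in your shell:

```shell
# Prints the HTTP status from Together AI's /v1/models endpoint:
# 200 = key works; 401 = the key itself is invalid (regenerate it);
# 429 = you're hitting a rate limit.
check_together_key() {
  curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer $TOGETHER_API_KEY" \
    https://api.together.xyz/v1/models
}
```

Run `check_together_key` after exporting your key; if it returns 200 but Openclaw still fails, the problem is in your Openclaw config rather than the key.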

Best Practices

  • Store API keys in ~/.openclaw/.env rather than hardcoding in config files
  • Use --non-interactive mode for containerized deployments
  • Monitor token usage through the Together AI dashboard to optimize costs

LLM Task Integration

Ready to explore more AI integrations? Browse our Venice AI setup guide for privacy-focused inference options.