---
summary: "Model provider overview with example configs + CLI flows"
read_when:
- You need a provider-by-provider model setup reference
- You want example configs or CLI onboarding commands for model providers
title: "Model Providers"
---
# Model providers
This page covers **LLM/model providers** (not chat channels like WhatsApp/Telegram).
For model selection rules, see [/concepts/models](/concepts/models).
## Quick rules
- Model refs use `provider/model` (example: `opencode/claude-opus-4-6`).
- If you set `agents.defaults.models`, it becomes the allowlist (see the sketch after this list).
- CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
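
A minimal allowlist sketch; the map shape and the optional `alias` field follow the LM Studio example later on this page:

```json5
{
  agents: {
    defaults: {
      model: { primary: "opencode/claude-opus-4-6" },
      // With this map present, only the listed models are selectable.
      models: {
        "opencode/claude-opus-4-6": {},
        "openai/gpt-5.1-codex": { alias: "Codex" },
      },
    },
  },
}
```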
## API key rotation
- Generic API key rotation is supported for selected providers.
- Configure multiple keys via (example below):
  - `OPENCLAW_LIVE_<PROVIDER>_KEY` (single live override, highest priority)
  - `<PROVIDER>_API_KEYS` (comma- or semicolon-separated list)
  - `<PROVIDER>_API_KEY` (primary key)
  - `<PROVIDER>_API_KEY_*` (numbered list, e.g. `<PROVIDER>_API_KEY_1`)
- For Google providers, `GOOGLE_API_KEY` is also included as a fallback.
- Key selection preserves this priority order and deduplicates values.
- Requests are retried with the next key only on rate-limit responses (for example `429`, `rate_limit`, `quota`, `resource exhausted`).
- Non-rate-limit failures fail immediately; no key rotation is attempted.
- When all candidate keys fail, the error from the last attempt is returned.
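
For example, to give Anthropic several keys to rotate through (hypothetical values):

```bash
# Tried in priority order; duplicate values are removed.
export ANTHROPIC_API_KEY="sk-ant-primary"       # primary key
export ANTHROPIC_API_KEYS="sk-ant-a;sk-ant-b"   # comma- or semicolon-separated list
export ANTHROPIC_API_KEY_1="sk-ant-c"           # numbered entries
# Single live override, highest priority:
# export OPENCLAW_LIVE_ANTHROPIC_KEY="sk-ant-override"
```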
## Built-in providers (pi-ai catalog)
OpenClaw ships with the pi-ai catalog. These providers require **no**
`models.providers` config; just set auth and pick a model.
### OpenAI
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
- Example model: `openai/gpt-5.1-codex`
- CLI: `openclaw onboard --auth-choice openai-api-key`
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
}
```
### Anthropic
- Provider: `anthropic`
- Auth: `ANTHROPIC_API_KEY` or `claude setup-token`
- Optional rotation: `ANTHROPIC_API_KEYS`, `ANTHROPIC_API_KEY_1`, `ANTHROPIC_API_KEY_2`, plus `OPENCLAW_LIVE_ANTHROPIC_KEY` (single override)
- Example model: `anthropic/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice token` (paste setup-token) or `openclaw models auth paste-token --provider anthropic`
```json5
{
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
### OpenAI Code (Codex)
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- Example model: `openai-codex/gpt-5.3-codex`
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
```json5
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
}
```
### OpenCode Zen
- Provider: `opencode`
- Auth: `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`)
- Example model: `opencode/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice opencode-zen`
```json5
{
agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}
```
### Google Gemini (API key)
- Provider: `google`
- Auth: `GEMINI_API_KEY`
- Optional rotation: `GEMINI_API_KEYS`, `GEMINI_API_KEY_1`, `GEMINI_API_KEY_2`, `GOOGLE_API_KEY` fallback, and `OPENCLAW_LIVE_GEMINI_KEY` (single override)
- Example model: `google/gemini-3-pro-preview`
- CLI: `openclaw onboard --auth-choice gemini-api-key`
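
Example config, matching the pattern used above:

```json5
{
  agents: { defaults: { model: { primary: "google/gemini-3-pro-preview" } } },
}
```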
### Google Vertex, Antigravity, and Gemini CLI
- Providers: `google-vertex`, `google-antigravity`, `google-gemini-cli`
- Auth: Vertex uses gcloud ADC; Antigravity/Gemini CLI use their respective auth flows
- Antigravity OAuth ships as a bundled plugin (`google-antigravity-auth`, disabled by default).
  - Enable: `openclaw plugins enable google-antigravity-auth`
  - Login: `openclaw models auth login --provider google-antigravity --set-default`
- Gemini CLI OAuth ships as a bundled plugin (`google-gemini-cli-auth`, disabled by default).
  - Enable: `openclaw plugins enable google-gemini-cli-auth`
  - Login: `openclaw models auth login --provider google-gemini-cli --set-default`
- Note: you do **not** paste a client id or secret into `openclaw.json`. The CLI login flow stores tokens in auth profiles on the gateway host.
### Z.AI (GLM)
- Provider: `zai`
- Auth: `ZAI_API_KEY`
- Example model: `zai/glm-4.7`
- CLI: `openclaw onboard --auth-choice zai-api-key`
- Aliases: `z.ai/*` and `z-ai/*` normalize to `zai/*`
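
Example config:

```json5
{
  agents: { defaults: { model: { primary: "zai/glm-4.7" } } },
}
```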
### Vercel AI Gateway
- Provider: `vercel-ai-gateway`
- Auth: `AI_GATEWAY_API_KEY`
- Example model: `vercel-ai-gateway/anthropic/claude-opus-4.6`
- CLI: `openclaw onboard --auth-choice ai-gateway-api-key`
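
Example config:

```json5
{
  agents: { defaults: { model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.6" } } },
}
```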
### Other built-in providers
- OpenRouter: `openrouter` (`OPENROUTER_API_KEY`)
  - Example model: `openrouter/anthropic/claude-sonnet-4-5`
- xAI: `xai` (`XAI_API_KEY`)
- Groq: `groq` (`GROQ_API_KEY`)
- Cerebras: `cerebras` (`CEREBRAS_API_KEY`)
  - GLM models on Cerebras use ids `zai-glm-4.7` and `zai-glm-4.6`.
  - OpenAI-compatible base URL: `https://api.cerebras.ai/v1`.
- Mistral: `mistral` (`MISTRAL_API_KEY`)
- GitHub Copilot: `github-copilot` (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`)
- Hugging Face Inference: `huggingface` (`HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN`), an OpenAI-compatible router. Example model: `huggingface/deepseek-ai/DeepSeek-R1`; CLI: `openclaw onboard --auth-choice huggingface-api-key`. See [Hugging Face (Inference)](/providers/huggingface). A config sketch follows this list.
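
Since Hugging Face is a built-in provider, a primary-model config is enough:

```json5
{
  agents: { defaults: { model: { primary: "huggingface/deepseek-ai/DeepSeek-R1" } } },
}
```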
## Providers via `models.providers` (custom/base URL)
Use `models.providers` (or `models.json`) to add **custom** providers or
OpenAI/Anthropic-compatible proxies.
### Moonshot AI (Kimi)
Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider:
- Provider: `moonshot`
- Auth: `MOONSHOT_API_KEY`
- Example model: `moonshot/kimi-k2.5`
Kimi K2 model IDs:
{/* moonshot-kimi-k2-model-refs:start */}
- `moonshot/kimi-k2.5`
- `moonshot/kimi-k2-0905-preview`
- `moonshot/kimi-k2-turbo-preview`
- `moonshot/kimi-k2-thinking`
- `moonshot/kimi-k2-thinking-turbo`
{/* moonshot-kimi-k2-model-refs:end */}
```json5
{
agents: {
defaults: { model: { primary: "moonshot/kimi-k2.5" } },
},
models: {
mode: "merge",
providers: {
moonshot: {
baseUrl: "https://api.moonshot.ai/v1",
apiKey: "${MOONSHOT_API_KEY}",
api: "openai-completions",
models: [{ id: "kimi-k2.5", name: "Kimi K2.5" }],
},
},
},
}
```
### Kimi Coding
Kimi Coding uses Moonshot AI's Anthropic-compatible endpoint:
- Provider: `kimi-coding`
- Auth: `KIMI_API_KEY`
- Example model: `kimi-coding/k2p5`
```json5
{
env: { KIMI_API_KEY: "sk-..." },
agents: {
defaults: { model: { primary: "kimi-coding/k2p5" } },
},
}
```
### Qwen OAuth (free tier)
Qwen provides OAuth access to Qwen Coder + Vision via a device-code flow.
Enable the bundled plugin, then log in:
```bash
openclaw plugins enable qwen-portal-auth
openclaw models auth login --provider qwen-portal --set-default
2026-01-17 20:20:25 +00:00
```
Model refs:
- `qwen-portal/coder-model`
- `qwen-portal/vision-model`
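
To pin one of these in config (instead of relying on `--set-default`):

```json5
{
  agents: { defaults: { model: { primary: "qwen-portal/coder-model" } } },
}
```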
See [/providers/qwen](/providers/qwen) for setup details and notes.
### Synthetic
Synthetic provides Anthropic-compatible models behind the `synthetic` provider:
- Provider: `synthetic`
- Auth: `SYNTHETIC_API_KEY`
- Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.1`
- CLI: `openclaw onboard --auth-choice synthetic-api-key`
```json5
{
agents: {
defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" } },
},
models: {
mode: "merge",
providers: {
synthetic: {
baseUrl: "https://api.synthetic.new/anthropic",
apiKey: "${SYNTHETIC_API_KEY}",
api: "anthropic-messages",
models: [{ id: "hf:MiniMaxAI/MiniMax-M2.1", name: "MiniMax M2.1" }],
},
},
},
}
```
### MiniMax
MiniMax is configured via `models.providers` because it uses custom endpoints:
- MiniMax (Anthropic-compatible): `--auth-choice minimax-api`
- Auth: `MINIMAX_API_KEY`
See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.
### Ollama
Ollama is a local LLM runtime that provides an OpenAI-compatible API:
- Provider: `ollama`
- Auth: None required (local server)
- Example model: `ollama/llama3.3`
- Installation: [https://ollama.ai](https://ollama.ai)
```bash
# Install Ollama, then pull a model:
ollama pull llama3.3
```
```json5
{
agents: {
defaults: { model: { primary: "ollama/llama3.3" } },
},
}
```
Ollama is automatically detected when running locally at `http://127.0.0.1:11434/v1`. See [/providers/ollama](/providers/ollama) for model recommendations and custom configuration.
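
To check that the local server is up and see which models it serves, you can query its OpenAI-compatible listing endpoint:

```bash
curl http://127.0.0.1:11434/v1/models
```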
### vLLM
vLLM is a local (or self-hosted) OpenAI-compatible server:
- Provider: `vllm`
- Auth: Optional (depends on your server)
- Default base URL: `http://127.0.0.1:8000/v1`
To opt in to auto-discovery locally (any value works if your server doesn't enforce auth):
```bash
export VLLM_API_KEY="vllm-local"
```
Then set a model (replace with one of the IDs returned by `/v1/models`):
```json5
{
agents: {
defaults: { model: { primary: "vllm/your-model-id" } },
},
}
```
See [/providers/vllm](/providers/vllm) for details.
### Local proxies (LM Studio, vLLM, LiteLLM, etc.)
Example (OpenAI-compatible):
```json5
{
agents: {
defaults: {
model: { primary: "lmstudio/minimax-m2.1-gs32" },
models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
},
2026-01-10 21:37:38 +01:00
},
models: {
providers: {
lmstudio: {
baseUrl: "http://localhost:1234/v1",
apiKey: "LMSTUDIO_KEY",
api: "openai-completions",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 200000,
maxTokens: 8192,
},
],
},
},
},
}
```
Notes:
- For custom providers, `reasoning`, `input`, `cost`, `contextWindow`, and `maxTokens` are optional.
  When omitted, OpenClaw defaults to:
2026-01-25 00:01:33 +00:00
- `reasoning: false`
- `input: ["text"]`
- `cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }`
- `contextWindow: 200000`
- `maxTokens: 8192`
- Recommended: set explicit values that match your proxy/model limits.
## CLI examples
```bash
openclaw onboard --auth-choice opencode-zen
openclaw models set opencode/claude-opus-4-6
openclaw models list
```
See also: [/gateway/configuration](/gateway/configuration) for full configuration examples.