Replace the multi-step MiniMax onboarding wizard with 4 flat options:
- MiniMax Global — OAuth (minimax.io)
- MiniMax Global — API Key (minimax.io)
- MiniMax CN — OAuth (minimaxi.com)
- MiniMax CN — API Key (minimaxi.com)
Storage changes:
- Unify CN and Global under provider "minimax" (baseUrl distinguishes region; sketched below)
- Profiles: minimax:global / minimax:cn (both regions can coexist)
- Model ref: minimax/MiniMax-M2.5 (no more minimax-cn/ prefix)
- Remove LM Studio local mode and Lightning/Highspeed choice
Backward compatibility:
- Keep minimax-cn in provider-env-vars for existing configs
- Accept minimax-cn as legacy tokenProvider in CI pipelines
- Error with a migration hint when a removed auth choice is requested in non-interactive mode
- Warn when a dual-profile setup overwrites the shared provider baseUrl
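A minimal sketch of the unified storage shape, assuming a JSON-style config object (field names are illustrative, not the exact schema):

```ts
// Sketch only: both regions live under the single provider id "minimax",
// and baseUrl is what distinguishes them. Field names are assumptions.
const profiles = {
  "minimax:global": {
    provider: "minimax",
    baseUrl: "https://api.minimax.io/anthropic",
  },
  "minimax:cn": {
    provider: "minimax",
    baseUrl: "https://api.minimaxi.com/anthropic",
  },
};

// Model refs drop the region prefix:
const modelRef = "minimax/MiniMax-M2.5"; // previously "minimax-cn/MiniMax-M2.5"
```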
Made-with: Cursor
- Rename provider ID, constants, functions, CLI flags, and types from
"bailian" to "modelstudio" to match the official English name
"Alibaba Cloud Model Studio".
- Fix P2 bug: global endpoint variant now always overwrites baseUrl
instead of silently preserving a stale CN URL.
- Fix P1 bug: add modelstudio entry to PROVIDER_ENV_VARS so
  secret-input-mode=ref no longer throws (sketched below).
- Move Model Studio imports to top of onboard-auth.config-core.ts.
- Remove unused BAILIAN_BASE_URL export.
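A minimal sketch of the P1 fix, assuming PROVIDER_ENV_VARS maps provider ids to environment variable names (the variable name shown is an assumption):

```ts
// Sketch: without a "modelstudio" entry, secret-input-mode=ref had no
// env var to resolve against and threw. The var name is an assumption.
const PROVIDER_ENV_VARS: Record<string, string> = {
  // ...existing provider entries...
  modelstudio: "MODELSTUDIO_API_KEY",
};
```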
Made-with: Cursor
* feat: Add Kilo Gateway provider
Add support for Kilo Gateway as a model provider, similar to OpenRouter.
Kilo Gateway provides a unified API that routes requests to many models
behind a single endpoint and API key.
Changes:
- Add kilocode provider option to auth-choice and onboarding flows
- Add KILOCODE_API_KEY environment variable support
- Add kilocode/ model prefix handling in model-auth and extra-params (sketched below)
- Add provider documentation in docs/providers/kilocode.md
- Update model-providers.md with Kilo Gateway section
- Add design doc for the integration
* kilocode: add provider tests and normalize onboard auth-choice registration
* kilocode: register in resolveImplicitProviders so models appear in provider filter
* kilocode: update base URL from /api/openrouter/ to /api/gateway/
* docs: fix formatting in kilocode docs
* fix: address PR review — remove kilocode from cacheRetention, fix stale model refs and CLI name in docs, fix TS2742
* docs: fix stale refs in design doc — Moltbot to OpenClaw, MoltbotConfig to OpenClawConfig, remove extra-params section, fix doc path
* fix: use resolveAgentModelPrimaryValue for AgentModelConfig union type
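A rough sketch of the kilocode/ prefix routing; the resolver shape and names are illustrative, and only the prefix, env var, and base path come from the changes listed above:

```ts
// Illustrative only: route kilocode/-prefixed model refs to the Kilo
// Gateway provider. The real resolver in model-auth looks different.
function resolveKilocodeModel(modelRef: string) {
  const PREFIX = "kilocode/";
  if (!modelRef.startsWith(PREFIX)) return undefined;
  return {
    provider: "kilocode",
    model: modelRef.slice(PREFIX.length),
    apiKey: process.env.KILOCODE_API_KEY,
    basePath: "/api/gateway/", // updated from /api/openrouter/
  };
}
```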
---------
Co-authored-by: Mark IJbema <mark@kilocode.ai>
- Add 'minimax-api-key-cn' auth choice for Chinese users
- Reuse existing --minimax-api-key CLI option
- Use MINIMAX_CN_API_BASE_URL (https://api.minimaxi.com/anthropic)
- Similar to how moonshot supports moonshot-api-key-cn
Tested: build ✅, check ✅, test ✅
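As a sketch, the CN choice only swaps the base URL while reusing the Global key option (the helper and the Global constant name are illustrative):

```ts
// Sketch: both auth choices share --minimax-api-key; only baseUrl differs.
const MINIMAX_CN_API_BASE_URL = "https://api.minimaxi.com/anthropic";
const MINIMAX_GLOBAL_API_BASE_URL = "https://api.minimax.io/anthropic"; // assumed name

function minimaxBaseUrl(choice: "minimax-api-key" | "minimax-api-key-cn"): string {
  return choice === "minimax-api-key-cn"
    ? MINIMAX_CN_API_BASE_URL
    : MINIMAX_GLOBAL_API_BASE_URL;
}
```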
* initial commit
* remove assessment section from docs
* resolve automated review comments
* resolve lint, type, and test failures; refactor and submit for review
* fix lint errors in test files
* apply Greptile review fixes
* fix a type error
* fix a CI error
* refactor auth handling
* fix failing tests after merging main
* rename the token to follow Hugging Face (hf) naming conventions
* fix curly-brace lint errors
* fix failing Google API tests pulled in from main
* resolve merge conflicts
* fix failing tests with a defensive check for an undefined OpenRouter API key (sketched below)
* fix: preserve Hugging Face auth-choice intent and token behavior (#13472) (thanks @Josephrp)
* test: resolve auth-choice cherry-pick conflict cleanup (#13472)
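A sketch of the defensive check mentioned above (helper name is illustrative; it guards both a missing key and the literal string "undefined" leaking in from serialization):

```ts
// Sketch: normalize an OpenRouter API key before use. Treat missing,
// empty, and the literal string "undefined" as absent.
function normalizeOpenRouterApiKey(key: string | undefined): string | undefined {
  if (!key || key === "undefined") return undefined;
  return key;
}
```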
---------
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
* feat: add LiteLLM provider types, env var, credentials, and auth choice
Add litellm-api-key auth choice, LITELLM_API_KEY env var mapping,
setLitellmApiKey() credential storage, and LITELLM_DEFAULT_MODEL_REF.
* feat: add LiteLLM onboarding handler and provider config
Add applyLitellmProviderConfig which properly registers
models.providers.litellm with baseUrl, api type, and model definitions.
This fixes the critical bug from PR #6488 where the provider entry was
never created, causing model resolution to fail at runtime.
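A sketch of the registration this adds; the baseUrl default, api type, and model ref are assumptions, not the real values:

```ts
// Sketch only: register models.providers.litellm so model resolution
// can find the provider at runtime. Defaults below are placeholders.
const LITELLM_DEFAULT_MODEL_REF = "litellm/gpt-4.1"; // placeholder value

function applyLitellmProviderConfig(
  config: { models?: { providers?: Record<string, unknown> } },
  opts: { baseUrl?: string } = {},
): void {
  config.models ??= {};
  config.models.providers ??= {};
  config.models.providers.litellm = {
    baseUrl: opts.baseUrl ?? "http://localhost:4000", // assumed default proxy URL
    api: "openai-completions", // assumed api type
    models: [{ id: LITELLM_DEFAULT_MODEL_REF }],
  };
}
```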
* docs: add LiteLLM provider documentation
Add setup guide covering onboarding, manual config, virtual keys,
model routing, and usage tracking. Link from provider index.
* docs: add LiteLLM to sidebar navigation in docs.json
Add providers/litellm to both English and Chinese provider page lists
so the docs page appears in the sidebar navigation.
* test: add LiteLLM non-interactive onboarding test
Wire up litellmApiKey flag inference and auth-choice handler for the
non-interactive onboarding path, and add an integration test covering
profile, model default, and credential storage.
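A sketch of the integration test shape (vitest-style; the onboarding helpers declared here are hypothetical stand-ins for the real ones):

```ts
// Sketch: assert that non-interactive onboarding with --litellm-api-key
// produces a profile, a default model, and stored credentials.
import { describe, expect, it } from "vitest";

// Hypothetical helpers; the real names in the codebase differ.
declare function onboardNonInteractive(flags: {
  litellmApiKey: string;
}): Promise<{ profile: string; defaultModel: string }>;
declare function readCredentials(provider: string): { apiKey: string } | undefined;

describe("litellm non-interactive onboarding", () => {
  it("stores profile, model default, and credentials", async () => {
    const result = await onboardNonInteractive({ litellmApiKey: "sk-test" });
    expect(result.profile).toContain("litellm");
    expect(result.defaultModel).toMatch(/^litellm\//);
    expect(readCredentials("litellm")?.apiKey).toBe("sk-test");
  });
});
```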
* fix: register --litellm-api-key CLI flag and add preferred provider mapping
Wire up the missing Commander CLI option, action handler mapping, and
help text for --litellm-api-key. Add litellm-api-key to the preferred
provider map for consistency with other providers.
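A sketch of the Commander wiring (the option string follows the commit; the surrounding program setup is illustrative):

```ts
// Sketch: register the previously missing CLI flag. Commander parses
// --litellm-api-key into opts.litellmApiKey automatically.
import { Command } from "commander";

const program = new Command();
program.option(
  "--litellm-api-key <key>",
  "LiteLLM API key (also read from LITELLM_API_KEY)",
);
program.parse(process.argv);
const { litellmApiKey } = program.opts<{ litellmApiKey?: string }>();
```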
* fix: remove zh-CN sidebar entry for litellm (no localized page yet)
* style: format buildLitellmModelDefinition return type
* fix(onboarding): harden LiteLLM provider setup (#12823)
* refactor(onboarding): keep auth-choice provider dispatcher under size limit
---------
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Venice AI is a privacy-focused AI inference provider with support for
uncensored models and access to major proprietary models via their
anonymized proxy.
This integration adds:
- Complete model catalog with 25 models:
  - 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
  - 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from Venice API with fallback to static catalog (sketched below)
- VENICE_API_KEY environment variable support
- Interactive onboarding via 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when API key is detected
- Comprehensive documentation covering:
  - Privacy modes (private vs anonymized)
  - All 25 models with context windows and features
  - Streaming, function calling, and vision support
  - Model selection recommendations
Privacy modes:
- Private: Fully private, no logging (open-source models)
- Anonymized: Proxied through Venice (proprietary models)
Default model: venice/llama-3.3-70b (good balance of capability + privacy)
Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
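A sketch of the discovery-with-fallback behavior, assuming the standard OpenAI-compatible GET /models response shape on the endpoint above (catalog contents abbreviated):

```ts
// Sketch: try live discovery first, fall back to the bundled static
// catalog on any network or parse failure.
const VENICE_BASE_URL = "https://api.venice.ai/api/v1";
const STATIC_CATALOG = ["venice/llama-3.3-70b" /* ...24 more entries... */];

async function discoverVeniceModels(apiKey: string): Promise<string[]> {
  try {
    const res = await fetch(`${VENICE_BASE_URL}/models`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = (await res.json()) as { data: { id: string }[] };
    return body.data.map((m) => `venice/${m.id}`);
  } catch {
    return STATIC_CATALOG;
  }
}
```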
Add --auth-choice minimax-api for direct MiniMax API usage at
https://api.minimax.io/anthropic using the anthropic-messages API.
Changes:
- Add applyMinimaxApiConfig() function with provider/model config (sketched below)
- Add minimax-api to AuthChoice type and CLI options
- Add handler and non-interactive support
- Fix duplicate minimax entry in envMap
- Update live test to use anthropic-messages API
- Add 11 unit tests covering all edge cases
- Document configuration in gateway docs
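A sketch of applyMinimaxApiConfig(); the provider-entry shape is an assumption, while the endpoint and API type come from the description above:

```ts
// Sketch: point the minimax provider at the direct API endpoint using
// the anthropic-messages API. Entry shape is illustrative.
function applyMinimaxApiConfig(config: {
  models?: { providers?: Record<string, unknown> };
}): void {
  config.models ??= {};
  config.models.providers ??= {};
  config.models.providers.minimax = {
    baseUrl: "https://api.minimax.io/anthropic",
    api: "anthropic-messages",
    models: [{ id: "MiniMax-M2.5" }], // model id per the MiniMax entry above
  };
}
```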
Test results:
- 11/11 unit tests pass
- 1/1 live API test passes (verified with real API key)
Co-Authored-By: Claude <noreply@anthropic.com>