lobehub/packages/model-runtime
Hardy ad2087cf65 feat: add Coding Plan providers support (#13203)
* ✨ feat: add Aliyun Bailian Coding Plan provider

- Add new AI provider for Bailian Coding Plan (coding.dashscope.aliyuncs.com/v1)
- Support 8 coding-optimized models: Qwen3.5 Plus, Qwen3 Coder Plus/Next, Qwen3 Max, GLM-5/4.7, Kimi K2.5, MiniMax M2.5
- Reuse QwenAIStream for stream processing
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description
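
The static-list pattern above can be sketched as follows. This is a minimal illustration, not the actual LobeChat types: the interface shape and the model ids (`qwen3.5-plus`, `qwen3-coder-plus`) are assumptions; only the base URL and the no-fetch constraint come from the commit.

```typescript
// Hypothetical provider-card shape; field names are assumptions modeled on
// typical model-runtime provider definitions, not the exact LobeChat types.
interface CodingPlanProvider {
  id: string;
  baseURL: string;
  chatModels: { displayName: string; id: string }[];
  showModelFetcher: boolean; // Coding Plan endpoints cannot list models via API
}

const bailianCodingPlan: CodingPlanProvider = {
  id: 'bailianCodingPlan',
  baseURL: 'https://coding.dashscope.aliyuncs.com/v1',
  chatModels: [
    // model ids below are illustrative guesses
    { displayName: 'Qwen3.5 Plus', id: 'qwen3.5-plus' },
    { displayName: 'Qwen3 Coder Plus', id: 'qwen3-coder-plus' },
  ],
  showModelFetcher: false, // static list: the endpoint has no /models route
};
```

The same pattern repeats for the MiniMax, GLM, Kimi, and Volcengine plans below, differing only in base URL and model list.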

* ✨ feat: add MiniMax Coding Plan provider

- Add new AI provider for MiniMax Coding Plan (api.minimax.io/v1)
- Support 6 models: MiniMax-M2.7, M2.7-highspeed, M2.5, M2.5-highspeed, M2.1, M2
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description

* ✨ feat: add GLM Coding Plan provider

- Add new AI provider for GLM Coding Plan (api.z.ai/api/paas/v4)
- Support 6 models: GLM-5, GLM-5-Turbo, GLM-4.7, GLM-4.6, GLM-4.5, GLM-4.5-Air
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description

* ✨ feat: add Kimi Code Plan provider

- Add new AI provider for Kimi Code Plan (api.moonshot.ai/v1)
- Support 3 models: Kimi K2.5, Kimi K2, Kimi K2 Thinking
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description

* ✨ feat: add Volcengine Coding Plan provider

- Add new AI provider for Volcengine Coding Plan (ark.cn-beijing.volces.com/api/coding/v3)
- Support 5 models: Doubao-Seed-Code, Doubao-Seed-Code-2.0, GLM-4.7, DeepSeek-V3.2, Kimi-K2.5
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description

* ✨ feat: update coding plan providers default enabled models and configurations

* ✨ feat: add reasoningBudgetToken32k and reasoningBudgetToken80k slider variants

- Add ReasoningTokenSlider32k component (max 32*1024)
- Add ReasoningTokenSlider80k component (max 80*1024)
- Add reasoningBudgetToken32k and reasoningBudgetToken80k to ExtendParamsType
- Update ControlsForm to render appropriate slider based on extendParams
- Update ExtendParamsSelect with new options and previews
- Fix ReasoningTokenSlider max value to use 64*Kibi (65536) instead of 64000
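
The maxima above reduce to a few constants; this sketch (constant names hypothetical) shows why `64 * Kibi` differs from the old hard-coded `64000`:

```typescript
// Kibi = 2^10; using it keeps all slider maxima on power-of-two boundaries.
const Kibi = 1024;

const REASONING_BUDGET_MAX_32K = 32 * Kibi; // 32768 (ReasoningTokenSlider32k)
const REASONING_BUDGET_MAX_64K = 64 * Kibi; // 65536 (was incorrectly 64000)
const REASONING_BUDGET_MAX_80K = 80 * Kibi; // 81920 (ReasoningTokenSlider80k)
```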

* 🔧 fix: support reasoningBudgetToken32k/80k in ControlsForm and modelParamsResolver

- Add reasoningBudgetToken32k and reasoningBudgetToken80k fields to chatConfig type and schema
- Update ControlsForm to use correct name matching for 32k/80k sliders
- Add processing logic for 32k/80k params in modelParamsResolver
- Add i18n translations for extendParams hints

* 🎨 style: use linear marks for reasoning token sliders (32k/80k)

- Switch from log2 scale to linear scale for equal mark spacing
- Add minWidth/maxWidth constraints to limit slider length
- Fix 64k and 80k marks being too close together

* 🎨 fix: use equal-spaced index for reasoning token sliders (32k/80k)

- Slider uses index [0,1,2,3,...] for equal mark spacing
- Map index to token values via MARK_TOKENS array
- Add minWidth/maxWidth to limit slider length when marks increase
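
The index-mapping idea above can be sketched like this. The `MARK_TOKENS` values and helper names are assumptions for illustration; the point is that the slider moves over equally spaced indices while a lookup array supplies the actual (non-linear) token budgets:

```typescript
// Hypothetical mark values; the slider's visual positions are the array
// indices 0..8, so every mark gets equal spacing regardless of magnitude.
const MARK_TOKENS: number[] = [0, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 81920];

// Slider index -> token budget sent to the model (clamped to valid range).
const indexToTokens = (index: number): number =>
  MARK_TOKENS[Math.min(Math.max(index, 0), MARK_TOKENS.length - 1)];

// Token budget -> nearest slider index (for restoring a persisted value).
const tokensToIndex = (tokens: number): number => {
  let best = 0;
  for (let i = 1; i < MARK_TOKENS.length; i++) {
    if (Math.abs(MARK_TOKENS[i] - tokens) < Math.abs(MARK_TOKENS[best] - tokens)) best = i;
  }
  return best;
};
```

With a log2 scale, 65536 and 81920 sit almost on top of each other; indexing sidesteps that entirely.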

* ✨ feat: add reasoningBudgetToken32k for GLM-5 and GLM-4.7 in Bailian Coding Plan

* 🔧 fix: update coding plan API endpoints and model configurations

- minimaxCodingPlan: change API URL to api.minimaxi.com (China site)
- kimiCodingPlan: change API URL to api.kimi.com/coding/v1
- volcengineCodingPlan: update doubao-seed models with correct deploymentName, pricing
- volcengineCodingPlan: add minimax-m2.5 model
- bailianCodingPlan & volcengineCodingPlan: remove unsupported extendParams from minimax-m2.5

* ✨ feat: add Coding Plan tag to provider cards with i18n support

* ♻️ refactor: set showModelFetcher to false for Bailian Coding Plan

- Coding Plan does not support fetching model list via API
- Set both modelList.showModelFetcher and settings.showModelFetcher to false

* 🔧 fix: correct Coding Plan exports case in package.json

* ✨ feat: update coding plan models with releasedAt and remove pricing

* 🔧 fix: remove unsupported reasoning abilities from MiniMax Coding Plan models

* 🐛 fix(modelParamsResolver): fix reasoningBudgetToken32k/80k not being read when enableReasoning is present

- Add nested logic to check which budget field (32k/80k/generic) the model supports when enableReasoning is true
- Move reasoningBudgetToken32k/80k else-if branches before reasoningBudgetToken to ensure correct field is read
- Fix GLM-5/GLM-4.7 models sending wrong budget_tokens value to API
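
The branch-ordering bug above can be reconstructed roughly as follows. All names here are hypothetical stand-ins for the real resolver; the sketch only demonstrates why the 32k/80k checks must precede the generic `reasoningBudgetToken` branch, both inside and outside the `enableReasoning` path:

```typescript
type ExtendParam =
  | 'enableReasoning'
  | 'reasoningBudgetToken'
  | 'reasoningBudgetToken32k'
  | 'reasoningBudgetToken80k';

interface ChatConfig {
  enableReasoning?: boolean;
  reasoningBudgetToken?: number;
  reasoningBudgetToken32k?: number;
  reasoningBudgetToken80k?: number;
}

// Illustrative resolver: returns the budget_tokens value to send upstream.
const resolveBudgetTokens = (
  supported: ExtendParam[],
  config: ChatConfig,
): number | undefined => {
  if (supported.includes('enableReasoning')) {
    if (!config.enableReasoning) return undefined;
    // Nested check: pick whichever budget field this model actually declares.
    if (supported.includes('reasoningBudgetToken32k')) return config.reasoningBudgetToken32k;
    if (supported.includes('reasoningBudgetToken80k')) return config.reasoningBudgetToken80k;
    return config.reasoningBudgetToken;
  }
  // Specific variants first; a generic-first ordering would shadow them
  // and send the wrong budget_tokens for models like GLM-5/GLM-4.7.
  if (supported.includes('reasoningBudgetToken32k')) return config.reasoningBudgetToken32k;
  if (supported.includes('reasoningBudgetToken80k')) return config.reasoningBudgetToken80k;
  if (supported.includes('reasoningBudgetToken')) return config.reasoningBudgetToken;
  return undefined;
};
```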
2026-03-25 11:53:16 +08:00