* 🐛 fix(memory): respect agent-level memory toggle when injecting memories
When the user disables the memory toggle in ChatInput (which writes to
agent-level chatConfig.memory.enabled), the actual message-sending path
in chat/index.ts was only checking the user-level memoryEnabled setting,
completely ignoring the agent-level override.
This aligns the injection logic with the useMemoryEnabled hook:
agent-level config takes priority, falling back to the user-level setting.
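The fallback priority can be sketched as follows (a minimal sketch only — `AgentChatConfig`, `UserSettings`, and `isMemoryEnabled` are illustrative names, not the actual store shapes):

```typescript
// Hypothetical shapes standing in for the real agent/user config stores.
interface AgentChatConfig {
  memory?: { enabled?: boolean };
}

interface UserSettings {
  memoryEnabled: boolean;
}

// Agent-level config takes priority when explicitly set;
// otherwise fall back to the user-level setting.
const isMemoryEnabled = (
  agentChatConfig: AgentChatConfig,
  userSettings: UserSettings,
): boolean => agentChatConfig.memory?.enabled ?? userSettings.memoryEnabled;
```

The `??` operator matters here: an explicit agent-level `false` must win over a user-level `true`, while an unset agent-level value must defer to the user setting.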
Also fix pre-commit hook to use bunx instead of npx to ensure the
correct ESLint version (v10) is used in monorepo context.
Add regression tests verifying all three priority scenarios.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Update pre-commit
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* support desktop gateway
* support device mode
* ✨ feat(desktop): add device gateway status indicator in titlebar
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* ✅ test(desktop): update getDeviceInfo test to include name and description fields
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* ✏️ chore(i18n): update gateway status copy to reference Gateway instead of cloud
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* ✏️ chore(i18n): translate Gateway to 网关 in zh-CN
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* ✏️ chore(i18n): simplify description placeholder to Optional
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* ✏️ chore(desktop): use fixed title 'Connect to Gateway' in device popover
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* 🐛 fix: use partial-json fallback in ToolArgumentsRepairer to recover incomplete args
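The recovery idea can be sketched like this (a hand-rolled illustration only — the real ToolArgumentsRepairer delegates to the partial-json package rather than this naive closer):

```typescript
// Sketch: parse complete JSON fast, otherwise close unterminated
// strings/objects/arrays and retry, so streamed-but-truncated tool
// arguments still yield a usable object instead of a crash.
const repairToolArguments = (raw: string): unknown => {
  try {
    return JSON.parse(raw); // fast path: arguments arrived complete
  } catch {
    let repaired = raw;
    const stack: string[] = [];
    let inString = false;
    for (let i = 0; i < repaired.length; i++) {
      const ch = repaired[i];
      if (inString) {
        if (ch === '\\') i++; // skip the escaped character
        else if (ch === '"') inString = false;
      } else if (ch === '"') inString = true;
      else if (ch === '{') stack.push('}');
      else if (ch === '[') stack.push(']');
      else if (ch === '}' || ch === ']') stack.pop();
    }
    if (inString) repaired += '"';
    while (stack.length) repaired += stack.pop();
    try {
      return JSON.parse(repaired);
    } catch {
      return {}; // give up: empty args beat an exception mid-stream
    }
  }
};
```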
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* 🐛 fix: use display messages for token counting in group chats
The TokenTag component used dbMessageSelectors.activeDbMessages, which
generates a lookup key without groupId, causing empty results in group
chats. This made the Context Details token tag invisible for group agents.
Switch to using the messageString prop (from mainAIChatsMessageString)
which correctly includes groupId in its key generation.
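The failure mode can be illustrated with a toy key builder (hypothetical — the actual selector key format differs, but the principle is the same: every scoping dimension must be part of the cache key):

```typescript
// Messages are cached under a composite key; omitting any scoping
// dimension makes lookups for that scope miss entirely.
const messageMapKey = (sessionId: string, topicId?: string, groupId?: string) =>
  [sessionId, topicId ?? '_', groupId ?? '_'].join('|');

// A selector that builds its key without groupId looks up 's1|t1|_'
// while group-chat messages were stored under 's1|t1|g1' — an empty hit.
```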
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* ✨ feat: add Aliyun Bailian Coding Plan provider
- Add new AI provider for Bailian Coding Plan (coding.dashscope.aliyuncs.com/v1)
- Support 8 coding-optimized models: Qwen3.5 Plus, Qwen3 Coder Plus/Next, Qwen3 Max, GLM-5/4.7, Kimi K2.5, MiniMax M2.5
- Reuse QwenAIStream for stream processing
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description
* ✨ feat: add MiniMax Coding Plan provider
- Add new AI provider for MiniMax Coding Plan (api.minimax.io/v1)
- Support 6 models: MiniMax-M2.7, M2.7-highspeed, M2.5, M2.5-highspeed, M2.1, M2
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description
* ✨ feat: add GLM Coding Plan provider
- Add new AI provider for GLM Coding Plan (api.z.ai/api/paas/v4)
- Support 6 models: GLM-5, GLM-5-Turbo, GLM-4.7, GLM-4.6, GLM-4.5, GLM-4.5-Air
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description
* ✨ feat: add Kimi Code Plan provider
- Add new AI provider for Kimi Code Plan (api.moonshot.ai/v1)
- Support 3 models: Kimi K2.5, Kimi K2, Kimi K2 Thinking
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description
* ✨ feat: add Volcengine Coding Plan provider
- Add new AI provider for Volcengine Coding Plan (ark.cn-beijing.volces.com/api/coding/v3)
- Support 5 models: Doubao-Seed-Code, Doubao-Seed-Code-2.0, GLM-4.7, DeepSeek-V3.2, Kimi-K2.5
- Static model list (Coding Plan does not support API model fetching)
- Add i18n translations for provider description
* ✨ feat: update coding plan providers' default-enabled models and configurations
* ✨ feat: add reasoningBudgetToken32k and reasoningBudgetToken80k slider variants
- Add ReasoningTokenSlider32k component (max 32*1024)
- Add ReasoningTokenSlider80k component (max 80*1024)
- Add reasoningBudgetToken32k and reasoningBudgetToken80k to ExtendParamsType
- Update ControlsForm to render appropriate slider based on extendParams
- Update ExtendParamsSelect with new options and previews
- Fix ReasoningTokenSlider max value to use 64*Kibi (65536) instead of 64000
* 🔧 fix: support reasoningBudgetToken32k/80k in ControlsForm and modelParamsResolver
- Add reasoningBudgetToken32k and reasoningBudgetToken80k fields to chatConfig type and schema
- Update ControlsForm to use correct name matching for 32k/80k sliders
- Add processing logic for 32k/80k params in modelParamsResolver
- Add i18n translations for extendParams hints
* 🎨 style: use linear marks for reasoning token sliders (32k/80k)
- Switch from log2 scale to linear scale for equal mark spacing
- Add minWidth/maxWidth constraints to limit slider length
- Fix 64k and 80k marks being too close together
* 🎨 fix: use equal-spaced index for reasoning token sliders (32k/80k)
- Slider uses index [0,1,2,3,...] for equal mark spacing
- Map index to token values via MARK_TOKENS array
- Add minWidth/maxWidth to limit slider length when marks increase
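The index-to-token mapping can be sketched as below (the MARK_TOKENS values here are illustrative, not the shipped marks):

```typescript
const Kibi = 1024;
// Illustrative mark values; the slider thumb moves over equally spaced
// indices 0..N-1, so marks render evenly even though token values don't.
const MARK_TOKENS = [0, 2 * Kibi, 8 * Kibi, 24 * Kibi, 48 * Kibi, 80 * Kibi];

// Slider position (index) -> budget token value, clamped to valid range.
const indexToToken = (index: number): number =>
  MARK_TOKENS[Math.min(Math.max(Math.round(index), 0), MARK_TOKENS.length - 1)];

// Stored token value -> nearest slider index, for restoring the control.
const tokenToIndex = (token: number): number => {
  let best = 0;
  for (let i = 1; i < MARK_TOKENS.length; i++) {
    if (Math.abs(MARK_TOKENS[i] - token) < Math.abs(MARK_TOKENS[best] - token))
      best = i;
  }
  return best;
};
```

Driving the slider by index rather than token value is what keeps the 64k and 80k marks from crowding together: spacing is decided by position in the array, not by magnitude.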
* ✨ feat: add reasoningBudgetToken32k for GLM-5 and GLM-4.7 in Bailian Coding Plan
* 🔧 fix: update coding plan API endpoints and model configurations
- minimaxCodingPlan: change API URL to api.minimaxi.com (China site)
- kimiCodingPlan: change API URL to api.kimi.com/coding/v1
- volcengineCodingPlan: update doubao-seed models with correct deploymentName, pricing
- volcengineCodingPlan: add minimax-m2.5 model
- bailianCodingPlan & volcengineCodingPlan: remove unsupported extendParams from minimax-m2.5
* ✨ feat: add Coding Plan tag to provider cards with i18n support
* ♻️ refactor: set showModelFetcher to false for Bailian Coding Plan
- Coding Plan does not support fetching model list via API
- Set both modelList.showModelFetcher and settings.showModelFetcher to false
* 🔧 fix: correct Coding Plan exports case in package.json
* ✨ feat: update coding plan models with releasedAt and remove pricing
* 🔧 fix: remove unsupported reasoning abilities from MiniMax Coding Plan models
* 🐛 fix(modelParamsResolver): fix reasoningBudgetToken32k/80k not being read when enableReasoning is present
- Add nested logic to check which budget field (32k/80k/generic) the model supports when enableReasoning is true
- Move reasoningBudgetToken32k/80k else-if branches before reasoningBudgetToken to ensure correct field is read
- Fix GLM-5/GLM-4.7 models sending wrong budget_tokens value to API
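The branch-ordering fix can be sketched as follows (names are illustrative stand-ins, not the actual modelParamsResolver API):

```typescript
// Hypothetical chat config carrying the three possible budget fields.
interface ChatConfig {
  enableReasoning?: boolean;
  reasoningBudgetToken?: number;
  reasoningBudgetToken32k?: number;
  reasoningBudgetToken80k?: number;
}

type BudgetField =
  | 'reasoningBudgetToken'
  | 'reasoningBudgetToken32k'
  | 'reasoningBudgetToken80k';

// When reasoning is enabled, read the budget from whichever field the
// model declares support for — variant fields (32k/80k) checked before
// the generic one, mirroring the else-if reorder in the commit.
const resolveBudgetTokens = (
  config: ChatConfig,
  supportedFields: BudgetField[],
): number | undefined => {
  if (!config.enableReasoning) return undefined;
  const order: BudgetField[] = [
    'reasoningBudgetToken32k',
    'reasoningBudgetToken80k',
    'reasoningBudgetToken',
  ];
  for (const field of order) {
    if (supportedFields.includes(field) && config[field] !== undefined)
      return config[field];
  }
  return undefined;
};
```

Without the reorder, a config that carries both the generic field and a variant field would surface the generic value first, which is exactly how GLM-5/GLM-4.7 ended up sending the wrong budget_tokens.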
* ✨ feat(bot): implement /new and /stop slash commands
Add Chat SDK slash command handlers for bot integrations:
- /new: resets conversation state so the next message starts a fresh topic
- /stop: cancels any active agent execution on the current thread
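A minimal sketch of the dispatch shape (handler names and reply strings are hypothetical; the real handlers call into conversation-state and agent-execution services):

```typescript
type CommandHandler = (threadId: string) => string;

// Registered slash commands; bodies stand in for the real side effects.
const commands: Record<string, CommandHandler> = {
  '/new': (threadId) => {
    // resetConversation(threadId): next message starts a fresh topic
    return 'Started a new conversation.';
  },
  '/stop': (threadId) => {
    // abortAgentExecution(threadId): cancel any active run on this thread
    return 'Stopped the current run.';
  },
};

// Returns a reply for known commands; undefined lets ordinary
// messages fall through to the normal chat pipeline.
const handleSlashCommand = (
  text: string,
  threadId: string,
): string | undefined => {
  const handler = commands[text.trim().split(/\s+/)[0]];
  return handler?.(threadId);
};
```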
https://claude.ai/code/session_01MDofskrz64tRjh2T6xzGBL
* feat: support telegram text type commands
* fix: stop commands
* feat: register discord slash commands
* feat: add chat adapter patch
* feat: add interruption action
* chore: add agent thread interruption signal
* chore: optimize interruption result
* fix: /stop command message edit
* chore: create a message when interrupted
* chore: add bot test case
* chore: fix test case
* chore: fix test case and remove duplicate completion
* fix: lint error
---------
Co-authored-by: Claude <noreply@anthropic.com>
* 🐛 fix: compress uploaded images to max 1920px before sending to API
Anthropic API rejects images exceeding 2000px in multi-image requests.
Compress images during upload to stay within limits while preserving
original aspect ratio and format (no webp conversion).
Fixes LOBE-6315
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* 🐛 fix: skip canvas compression for GIF and SVG images
Canvas serialization flattens animated GIFs and rasterizes SVGs.
Restrict compression to safe raster formats: JPEG, PNG, WebP.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* 🐛 fix: always compress images to PNG to avoid MIME mismatch
canvas.toDataURL with original file type can produce content that
doesn't match the declared MIME type, causing Anthropic API errors.
Always output PNG which is universally supported and consistent.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* 🐛 fix: progressively shrink images to stay under 5MB API limit
If compressed PNG still exceeds 5MB, progressively reduce dimensions
by 20% until it fits. Also triggers compression for small-dimension
images that exceed 5MB file size.
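The sizing policy can be sketched as pure dimension math (the real code draws to a canvas and re-encodes; the helper names here are made up, while the 1920px edge and 5MB limit come from the commits above):

```typescript
const MAX_EDGE = 1920;
const SHRINK_FACTOR = 0.8; // reduce each edge by 20% per attempt

// Fit within MAX_EDGE while preserving the original aspect ratio.
const fitDimensions = (width: number, height: number): [number, number] => {
  const scale = Math.min(1, MAX_EDGE / Math.max(width, height));
  return [Math.round(width * scale), Math.round(height * scale)];
};

// Keep shrinking until the encoded size (estimated by the caller)
// fits under the API's file-size limit.
const shrinkUntilFits = (
  width: number,
  height: number,
  estimateBytes: (w: number, h: number) => number,
  maxBytes = 5 * 1024 * 1024,
): [number, number] => {
  let [w, h] = fitDimensions(width, height);
  while (estimateBytes(w, h) > maxBytes && Math.max(w, h) > 1) {
    w = Math.round(w * SHRINK_FACTOR);
    h = Math.round(h * SHRINK_FACTOR);
  }
  return [w, h];
};
```

Note the loop runs even when the image already fits within 1920px, which is how small-dimension-but-oversized files still get shrunk.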
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* ♻️ refactor: extract compressImageFile to utils and add comprehensive tests
Move compressImageFile, COMPRESSIBLE_IMAGE_TYPES, and constants to
@lobechat/utils/compressImage for reusability and testability.
Add tests for: dimension compression, file size limit, format filtering,
error handling, and progressive shrinking.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* 🐛 fix: add document parsing to knowledge base chunking pipeline
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix plugin title
* update
* 🐛 fix: add missing findByFileId mock in document service tests
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add the user creds module; skills auto-inject the creds they need
* feat: add the builtin creds tools
* fix: add prompts for creds & codesandbox
* fix: enable settings/creds in the community plan
* fix: refactor the settings/creds UI
* feat: improve the tools' injected system role
* feat: rework the settings/creds manager UI
* fix: add the creds file-upload API
* feat: call back with the uploaded files' creds URL
* 🐛 fix: correct Search1API response parsing to match actual API format
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix tests
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* 👷 build(cli): migrate bundler from tsup to tsdown
Made-with: Cursor
* 🔧 chore(cli): update package.json and tsdown.config.ts dependencies
- Moved several dependencies from "dependencies" to "devDependencies" in package.json.
- Updated the bundling configuration in tsdown.config.ts to simplify the bundling process.
Signed-off-by: Innei <tukon479@gmail.com>
* 🔧 chore(cli): reorganize package.json and tsdown.config.ts
- Moved "fast-glob" from "dependencies" to "devDependencies" in package.json for better clarity.
- Removed the "onlyBundle" option from tsdown.config.ts to streamline the configuration.
Signed-off-by: Innei <tukon479@gmail.com>
* ✨ feat(cli): add shell completion support
---------
Signed-off-by: Innei <tukon479@gmail.com>