diff --git a/README.md b/README.md
index b8faef88..63a08e2c 100644
--- a/README.md
+++ b/README.md
@@ -650,14 +650,14 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
Faster than transformers |
Fast, compatible with the vLLM ecosystem |
Fast, compatible with the LMDeploy ecosystem |
- Suitable for OpenAI-compatible servers5 |
+ Suitable for OpenAI-compatible servers6 |
| Operating System |
Linux2 / Windows / macOS |
macOS3 |
Linux2 / Windows4 |
- Linux2 / Windows |
+ Linux2 / Windows5 |
Any |
@@ -693,7 +693,8 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
2 Linux supports only distributions released in 2019 or later.
3 MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.
4 Windows vLLM support via WSL2 (Windows Subsystem for Linux).
-5 Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
+5 On Windows, LMDeploy supports only the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, run it via WSL2 instead.
+6 Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
### Install MinerU
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 757c7dc7..64e2a2ff 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -637,14 +637,14 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
| Faster than transformers |
Fast, compatible with the vLLM ecosystem |
Fast, compatible with the LMDeploy ecosystem |
- Suitable for OpenAI-compatible servers5 |
+ Suitable for OpenAI-compatible servers6 |
| Operating System |
Linux2 / Windows / macOS |
macOS3 |
Linux2 / Windows4 |
- Linux2 / Windows |
+ Linux2 / Windows5 |
Any |
@@ -680,7 +680,8 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
2 Linux supports only distributions released in 2019 or later.
3 MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.
4 Windows vLLM support via WSL2 (Windows Subsystem for Linux).
-5 Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
+5 On Windows, LMDeploy supports only the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, run it via WSL2 instead.
+6 Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
> [!TIP]
> In addition to the mainstream environments and platforms above, we have also collected support information for other platforms reported by community users; for details, see [Other accelerator adaptations](https://opendatalab.github.io/MinerU/zh/usage/).