From 43881d5f66f1ecf1afeca56c6ddbc2560b86f490 Mon Sep 17 00:00:00 2001
From: myhloli
Date: Mon, 17 Nov 2025 11:24:03 +0800
Subject: [PATCH] fix: update index.md and README files for improved clarity on lmdeploy-engine support

---
 README.md                    |  4 ++--
 README_zh-CN.md              |  4 ++--
 docs/en/quick_start/index.md | 24 ++++++++++++++----------
 docs/zh/quick_start/index.md | 24 ++++++++++++++----------
 4 files changed, 32 insertions(+), 24 deletions(-)

diff --git a/README.md b/README.md
index 63a08e2c..87625912 100644
--- a/README.md
+++ b/README.md
@@ -714,8 +714,8 @@ uv pip install -e .[core]
 ```
 
 > [!TIP]
-> `mineru[core]` includes all core features except `vLLM` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
-> If you need to use `vLLM` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
+> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
+> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
 
 ---

diff --git a/README_zh-CN.md b/README_zh-CN.md
index 64e2a2ff..cecc891c 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -704,8 +704,8 @@ uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple
 ```
 
 > [!TIP]
-> `mineru[core]`包含除`vLLM`加速外的所有核心功能,兼容Windows / Linux / macOS系统,适合绝大多数用户。
-> 如果您有使用`vLLM`加速VLM模型推理,或是在边缘设备安装轻量版client端等需求,可以参考文档[扩展模块安装指南](https://opendatalab.github.io/MinerU/zh/quick_start/extension_modules/)。
+> `mineru[core]`包含除`vLLM`/`LMDeploy`加速外的所有核心功能,兼容Windows / Linux / macOS系统,适合绝大多数用户。
+> 如果您需要使用`vLLM`/`LMDeploy`加速VLM模型推理,或是有在边缘设备安装轻量版client端等需求,可以参考文档[扩展模块安装指南](https://opendatalab.github.io/MinerU/zh/quick_start/extension_modules/)。
 
 ---

diff --git a/docs/en/quick_start/index.md b/docs/en/quick_start/index.md
index cc349ba7..0be3b2ae 100644
--- a/docs/en/quick_start/index.md
+++ b/docs/en/quick_start/index.md
@@ -31,12 +31,13 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
 Parsing Backend
 pipeline
 (Accuracy1 82+)
-vlm (Accuracy1 90+)
+vlm (Accuracy1 90+)
 transformers
 mlx-engine
 vllm-engine /
 vllm-async-engine
+lmdeploy-engine
 http-client
@@ -47,40 +48,42 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
 Good compatibility,
 but slower
 Faster than transformers
 Fast, compatible with the vLLM ecosystem
-Suitable for OpenAI-compatible servers5
+Fast, compatible with the LMDeploy ecosystem
+Suitable for OpenAI-compatible servers6
 Operating System
 Linux2 / Windows / macOS
 macOS3
 Linux2 / Windows4
+Linux2 / Windows5
 Any
 CPU inference support
 ✅
-❌
+❌
 Not required
 GPU Requirements
 Volta or later architectures, 6 GB VRAM or more, or Apple Silicon
 Apple Silicon
-Volta or later architectures, 8 GB VRAM or more
+Volta or later architectures, 8 GB VRAM or more
 Not required
 Memory Requirements
-Minimum 16 GB, 32 GB recommended
+Minimum 16 GB, 32 GB recommended
 8 GB
 Disk Space Requirements
-20 GB or more, SSD recommended
+20 GB or more, SSD recommended
 2 GB
 Python Version
-3.10-3.13
+3.10-3.13
@@ -89,7 +92,8 @@
 2 Linux supports only distributions released in 2019 or later.
 3 MLX requires macOS 13.5 or later, recommended for use with version 14.0 or higher.
 4 Windows vLLM support via WSL2(Windows Subsystem for Linux).
-5 Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
+5 Windows LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend. If performance is critical, it is recommended to run it via WSL2.
+6 Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
 
 ### Install MinerU
 
@@ -108,8 +112,8 @@ uv pip install -e .[core]
 ```
 
 > [!TIP]
-> `mineru[core]` includes all core features except `vllm` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
-> If you need to use `vllm` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
+> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
+> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
 
 ---

diff --git a/docs/zh/quick_start/index.md b/docs/zh/quick_start/index.md
index d00258b3..657184a1 100644
--- a/docs/zh/quick_start/index.md
+++ b/docs/zh/quick_start/index.md
@@ -31,12 +31,13 @@
 解析后端
 pipeline
 (精度1 82+)
-vlm (精度1 90+)
+vlm (精度1 90+)
 transformers
 mlx-engine
 vllm-engine /
 vllm-async-engine
+lmdeploy-engine
 http-client
@@ -47,40 +48,42 @@
 兼容性好,
 速度较慢
 比transformers快
 速度快, 兼容vllm生态
-适用于OpenAI兼容服务器5
+速度快, 兼容lmdeploy生态
+适用于OpenAI兼容服务器6
 操作系统
 Linux2 / Windows / macOS
 macOS3
 Linux2 / Windows4
+Linux2 / Windows5
 不限
 CPU推理支持
 ✅
-❌
+❌
 不需要
 GPU要求
 Volta及以后架构, 6G显存以上或Apple Silicon
 Apple Silicon
-Volta及以后架构, 8G显存以上
+Volta及以后架构, 8G显存以上
 不需要
 内存要求
-最低16GB以上, 推荐32GB以上
+最低16GB以上, 推荐32GB以上
 8GB
 磁盘空间要求
-20GB以上, 推荐使用SSD
+20GB以上, 推荐使用SSD
 2GB
 python版本
-3.10-3.13
+3.10-3.13
@@ -89,7 +92,8 @@
 2 Linux仅支持2019年及以后发行版
 3 MLX需macOS 13.5及以上版本支持,推荐14.0以上版本使用
 4 Windows vLLM通过WSL2(适用于 Linux 的 Windows 子系统)实现支持
-5 兼容OpenAI API的服务器,如通过`vLLM`/`SGLang`/`LMDeploy`等推理框架部署的本地模型服务器或远程模型服务
+5 Windows LMDeploy只能使用`turbomind`后端,速度比`pytorch`后端稍慢,如对速度有要求建议通过WSL2运行
+6 兼容OpenAI API的服务器,如通过`vLLM`/`SGLang`/`LMDeploy`等推理框架部署的本地模型服务器或远程模型服务
 
 > [!TIP]
 > 除以上主流环境与平台外,我们也收录了一些社区用户反馈的其他平台支持情况,详情请参考[其他加速卡适配](https://opendatalab.github.io/MinerU/zh/usage/)。
@@ -113,8 +117,8 @@ uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple
 ```
 
 > [!TIP]
-> `mineru[core]`包含除`vllm`加速外的所有核心功能,兼容Windows / Linux / macOS系统,适合绝大多数用户。
-> 如果您有使用`vllm`加速VLM模型推理,或是在边缘设备安装轻量版client端等需求,可以参考文档[扩展模块安装指南](./extension_modules.md)。
+> `mineru[core]`包含除`vLLM`/`LMDeploy`加速外的所有核心功能,兼容Windows / Linux / macOS系统,适合绝大多数用户。
+> 如果您需要使用`vLLM`/`LMDeploy`加速VLM模型推理,或是有在边缘设备安装轻量版client端等需求,可以参考文档[扩展模块安装指南](./extension_modules.md)。
 
 ---
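
For reviewers who want to try the change, a minimal command sketch of the two install paths this patch documents. The extra name `lmdeploy` and the backend identifier `vlm-lmdeploy-engine` are assumptions inferred by analogy with the existing `vllm` naming and are not stated in this patch; the Extension Modules Installation Guide linked in the TIP blocks is the authoritative reference.

```shell
# Core install, as shown in the patch (no vLLM/LMDeploy acceleration):
uv pip install -e .[core]

# Hypothetical: extra name assumed by analogy with the vLLM extra;
# check the Extension Modules Installation Guide for the actual name.
uv pip install -e .[lmdeploy]

# Hypothetical backend id, mirroring the vllm-engine naming pattern:
mineru -p demo.pdf -o output/ -b vlm-lmdeploy-engine
```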