Mirror of https://github.com/opendatalab/MinerU.git, synced 2026-03-27 11:08:32 +07:00
fix: update index.md and README files for improved clarity on lmdeploy-engine support
````diff
@@ -714,8 +714,8 @@ uv pip install -e .[core]
 ```
 
 > [!TIP]
-> `mineru[core]` includes all core features except `vLLM` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
-> If you need to use `vLLM` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
+> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
+> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
 
 ---
 
````
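The tip above assumes an interpreter the project supports (the requirements table later in this diff pins Python to 3.10-3.13). A minimal pre-flight sketch — the helper name is illustrative, not part of MinerU's API — that checks the interpreter before running `uv pip install -e .[core]`:

```python
import sys

def python_supported(version_info=sys.version_info):
    """True when the interpreter sits in MinerU's documented 3.10-3.13 window."""
    major, minor = version_info[0], version_info[1]
    return major == 3 and 10 <= minor <= 13

# Run this before `uv pip install -e .[core]` to fail fast on an
# unsupported interpreter instead of debugging a broken install later.
print(python_supported((3, 12, 0)))  # → True
```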
````diff
@@ -704,8 +704,8 @@ uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple
 ```
 
 > [!TIP]
-> `mineru[core]` includes all core features except `vLLM` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
-> If you need to use `vLLM` acceleration for VLM model inference, or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/zh/quick_start/extension_modules/).
+> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
+> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference, or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/zh/quick_start/extension_modules/).
 
 ---
 
````
````diff
@@ -31,12 +31,13 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
 <tr>
 <th rowspan="2">Parsing Backend</th>
 <th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
-<th colspan="4" style="text-align:center;">vlm (Accuracy<sup>1</sup> 90+)</th>
+<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
 </tr>
 <tr>
 <th>transformers</th>
 <th>mlx-engine</th>
 <th>vllm-engine / <br>vllm-async-engine</th>
+<th>lmdeploy-engine</th>
 <th>http-client</th>
 </tr>
 </thead>
````
````diff
@@ -47,40 +48,42 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
 <td>Good compatibility, <br>but slower</td>
 <td>Faster than transformers</td>
 <td>Fast, compatible with the vLLM ecosystem</td>
-<td>Suitable for OpenAI-compatible servers<sup>5</sup></td>
+<td>Fast, compatible with the LMDeploy ecosystem</td>
+<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
 </tr>
 <tr>
 <th>Operating System</th>
 <td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
 <td style="text-align:center;">macOS<sup>3</sup></td>
 <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup></td>
+<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup></td>
 <td>Any</td>
 </tr>
 <tr>
 <th>CPU inference support</th>
 <td colspan="2" style="text-align:center;">✅</td>
-<td colspan="2" style="text-align:center;">❌</td>
+<td colspan="3" style="text-align:center;">❌</td>
 <td>Not required</td>
 </tr>
 <tr>
 <th>GPU Requirements</th><td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
 <td>Apple Silicon</td>
-<td>Volta or later architectures, 8 GB VRAM or more</td>
+<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
 <td>Not required</td>
 </tr>
 <tr>
 <th>Memory Requirements</th>
-<td colspan="4" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
+<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
 <td>8 GB</td>
 </tr>
 <tr>
 <th>Disk Space Requirements</th>
-<td colspan="4" style="text-align:center;">20 GB or more, SSD recommended</td>
+<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
 <td>2 GB</td>
 </tr>
 <tr>
 <th>Python Version</th>
-<td colspan="5" style="text-align:center;">3.10-3.13</td>
+<td colspan="6" style="text-align:center;">3.10-3.13</td>
 </tr>
 </tbody>
 </table>
````
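The comparison table encodes a simple decision rule: transformers for maximum compatibility (CPU inference allowed), mlx-engine on Apple Silicon, vllm-engine or lmdeploy-engine on Volta-or-later GPUs with 8 GB+ VRAM, and http-client when inference happens on a remote OpenAI-compatible server. A hypothetical helper sketching that rule — the function name and its inputs are illustrative, not part of MinerU's API:

```python
def pick_vlm_backend(os_name, apple_silicon=False, gpu_vram_gb=0, remote_server=False):
    """Sketch of the backend choice implied by the comparison table.

    Thresholds come from the table: local accelerated engines need a
    Volta-or-later GPU with 8 GB+ VRAM; mlx-engine needs Apple Silicon;
    http-client offloads inference and has no local GPU requirement.
    """
    if remote_server:
        return "http-client"
    if apple_silicon and os_name == "macos":
        return "mlx-engine"  # macOS 13.5+ per footnote 3
    if os_name in ("linux", "windows") and gpu_vram_gb >= 8:
        # Both engines share the same documented floor; on Windows,
        # vLLM needs WSL2, so lmdeploy-engine is the native-Windows pick.
        return "vllm-engine" if os_name == "linux" else "lmdeploy-engine"
    return "transformers"  # good compatibility, CPU inference supported

print(pick_vlm_backend("linux", gpu_vram_gb=24))      # → vllm-engine
print(pick_vlm_backend("macos", apple_silicon=True))  # → mlx-engine
```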
````diff
@@ -89,7 +92,8 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
 <sup>2</sup> Linux supports only distributions released in 2019 or later.
 <sup>3</sup> MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.
 <sup>4</sup> Windows vLLM support is provided via WSL2 (Windows Subsystem for Linux).
-<sup>5</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
+<sup>5</sup> On Windows, LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, run it via WSL2.
+<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
 
 ### Install MinerU
 
````
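Footnote 6 describes the http-client scenario: the client talks to any server that speaks the OpenAI API, whether it is backed by vLLM, SGLang, or LMDeploy. As a minimal sketch of what "OpenAI-compatible" means at the wire level — the base URL and model name are placeholders, and this is not MinerU's own client code — a chat-completions request is assembled like this:

```python
import json

def build_chat_request(model, prompt, base_url="http://localhost:8000/v1"):
    """Assemble the endpoint URL and JSON body for an OpenAI-style
    /chat/completions call; vLLM, SGLang, and LMDeploy servers all
    accept this request shape."""
    url = f"{base_url}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)

url, body = build_chat_request("some-vlm-model", "Describe this page.")
print(url)  # → http://localhost:8000/v1/chat/completions
```

Any HTTP client can then POST `body` to `url`; only the server's base URL changes between a local vLLM deployment and a remote service.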
````diff
@@ -108,8 +112,8 @@ uv pip install -e .[core]
 ```
 
 > [!TIP]
-> `mineru[core]` includes all core features except `vllm` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
-> If you need to use `vllm` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
+> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
+> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
 
 ---
 
````
````diff
@@ -31,12 +31,13 @@
 <tr>
 <th rowspan="2">Parsing Backend</th>
 <th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
-<th colspan="4" style="text-align:center;">vlm (Accuracy<sup>1</sup> 90+)</th>
+<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
 </tr>
 <tr>
 <th>transformers</th>
 <th>mlx-engine</th>
 <th>vllm-engine / <br>vllm-async-engine</th>
+<th>lmdeploy-engine</th>
 <th>http-client</th>
 </tr>
 </thead>
````
````diff
@@ -47,40 +48,42 @@
 <td>Good compatibility, but slower</td>
 <td>Faster than transformers</td>
 <td>Fast, compatible with the vLLM ecosystem</td>
-<td>Suitable for OpenAI-compatible servers<sup>5</sup></td>
+<td>Fast, compatible with the LMDeploy ecosystem</td>
+<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
 </tr>
 <tr>
 <th>Operating System</th>
 <td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
 <td style="text-align:center;">macOS<sup>3</sup></td>
 <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup></td>
+<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup></td>
 <td>Any</td>
 </tr>
 <tr>
 <th>CPU inference support</th>
 <td colspan="2" style="text-align:center;">✅</td>
-<td colspan="2" style="text-align:center;">❌</td>
+<td colspan="3" style="text-align:center;">❌</td>
 <td>Not required</td>
 </tr>
 <tr>
 <th>GPU Requirements</th><td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
 <td>Apple Silicon</td>
-<td>Volta or later architectures, 8 GB VRAM or more</td>
+<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
 <td>Not required</td>
 </tr>
 <tr>
 <th>Memory Requirements</th>
-<td colspan="4" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
+<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
 <td>8 GB</td>
 </tr>
 <tr>
 <th>Disk Space Requirements</th>
-<td colspan="4" style="text-align:center;">20 GB or more, SSD recommended</td>
+<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
 <td>2 GB</td>
 </tr>
 <tr>
 <th>Python Version</th>
-<td colspan="5" style="text-align:center;">3.10-3.13</td>
+<td colspan="6" style="text-align:center;">3.10-3.13</td>
 </tr>
 </tbody>
 </table>
````
````diff
@@ -89,7 +92,8 @@
 <sup>2</sup> Linux supports only distributions released in 2019 or later.
 <sup>3</sup> MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.
 <sup>4</sup> Windows vLLM support is provided via WSL2 (Windows Subsystem for Linux).
-<sup>5</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
+<sup>5</sup> On Windows, LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, run it via WSL2.
+<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
 
 > [!TIP]
 > Beyond the mainstream environments and platforms above, we have also collected community-reported support status for other platforms; for details, see [Other Accelerator Card Adaptation](https://opendatalab.github.io/MinerU/zh/usage/).
````
````diff
@@ -113,8 +117,8 @@ uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple
 ```
 
 > [!TIP]
-> `mineru[core]` includes all core features except `vllm` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
-> If you need to use `vllm` acceleration for VLM model inference, or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
+> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
+> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference, or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
 
 ---
 
````