Mirror of https://github.com/opendatalab/MinerU.git, synced 2026-03-27 19:18:34 +07:00

Compare commits: 41 commits (release-2. ... release-2.)
| Author | SHA1 | Date |
|---|---|---|
|  | 3e51cb4e81 |  |
|  | bc63b17ae4 |  |
|  | 7f986fc1e3 |  |
|  | 5fb8d50b70 |  |
|  | 3ce9500894 |  |
|  | 142dc30a03 |  |
|  | 5e3db4a472 |  |
|  | 90b77a2809 |  |
|  | 948161c527 |  |
|  | 5397c74a34 |  |
|  | 97450688d6 |  |
|  | 6e7c6b082d |  |
|  | 6f281be4ff |  |
|  | 880cdd02b2 |  |
|  | 73b31d1118 |  |
|  | 74ec4894e0 |  |
|  | c1022fc3e2 |  |
|  | 6270b05d3a |  |
|  | bbd214dbc3 |  |
|  | 5fa66202a7 |  |
|  | 4dc45f6621 |  |
|  | 65b3204d5a |  |
|  | 636bd89b38 |  |
|  | 586a4fb06b |  |
|  | 951ebd8c04 |  |
|  | 30758634e3 |  |
|  | aa960b105a |  |
|  | eba787c22b |  |
|  | 0a288743ba |  |
|  | 538280f589 |  |
|  | 0af0080c85 |  |
|  | 25058ea982 |  |
|  | 41f0e3e26d |  |
|  | 0624f7eb5b |  |
|  | 4fef9e863c |  |
|  | 97d1a9b1ed |  |
|  | d17a5ff7f2 |  |
|  | 47c207a906 |  |
|  | a91c35137a |  |
|  | c2c998ae11 |  |
|  | 3fff71b76a |  |
README.md (17 lines changed)

````diff
@@ -45,17 +45,22 @@
 # Changelog
 
-- 2026/01/30 2.7.4 Release
-  - Added support for domestic computing platforms IluvatarCorex and Cambricon. Currently, the officially supported domestic computing platforms include:
-    - [Ascend](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Ascend/)
-    - [T-Head](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/THead/)
-    - [METAX](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/METAX/)
+- 2026/02/06 2.7.6 Release
+  - Added support for the domestic computing platforms Kunlunxin and Tecorigin; currently, the domestic computing platforms that have been adapted and supported by the official team and vendors include:
+    - [Ascend](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Ascend)
+    - [T-Head](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/THead)
+    - [METAX](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/METAX)
     - [Hygon](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Hygon/)
     - [Enflame](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Enflame/)
     - [MooreThreads](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/MooreThreads/)
     - [IluvatarCorex](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/IluvatarCorex/)
     - [Cambricon](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Cambricon/)
-  - MinerU continues to ensure compatibility with domestic hardware platforms, supporting mainstream chip architectures. With secure and reliable technology, we empower researchers, government, and enterprises to reach new heights in document digitization!
+    - [Kunlunxin](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Kunlunxin/)
+    - [Tecorigin](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Tecorigin/)
+  - MinerU continues to support domestic hardware platforms and mainstream chip architectures. With secure and reliable technology, it helps research, government, and enterprise users reach new heights in document digitization!
 
+- 2026/01/30 2.7.4 Release
+  - Added support for domestic computing platforms IluvatarCorex and Cambricon.
+
 - 2026/01/23 2.7.2 Release
   - Added support for domestic computing platforms Hygon, Enflame, and Moore Threads.
````
````diff
@@ -45,8 +45,8 @@
 # Changelog
 
-- 2026/01/30 2.7.4 Release
-  - Added adaptation support for the domestic computing platforms IluvatarCorex (天数智芯) and Cambricon (寒武纪); the domestic computing platforms currently adapted and supported by the official team include:
+- 2026/02/06 2.7.6 Release
+  - Added adaptation support for the domestic computing platforms Kunlunxin (昆仑芯) and Tecorigin (太初元碁); the domestic computing platforms currently adapted and supported by the official team and vendors include:
   - [昇腾 Ascend](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Ascend)
   - [平头哥 T-Head](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/THead)
   - [沐曦 METAX](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/METAX)
@@ -55,8 +55,13 @@
   - [摩尔线程 MooreThreads](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/MooreThreads/)
   - [天数智芯 IluvatarCorex](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/IluvatarCorex/)
   - [寒武纪 Cambricon](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Cambricon/)
+  - [昆仑芯 Kunlunxin](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Kunlunxin/)
+  - [太初元碁 Tecorigin](https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Tecorigin/)
   - MinerU continues to support domestic hardware platforms and mainstream chip architectures. With secure and reliable technology, it helps research, government, and enterprise users reach new heights in document digitization!
 
+- 2026/01/30 2.7.4 Release
+  - Added adaptation support for the domestic computing platforms IluvatarCorex (天数智芯) and Cambricon (寒武纪).
+
 - 2026/01/23 2.7.2 Release
   - Added adaptation support for the domestic computing platforms Hygon (海光), Enflame (燧原), and Moore Threads (摩尔线程)
   - Improved cross-page table merging, raising the merge success rate and quality
````
docker/china/kxpu.Dockerfile (new file, 33 lines)

```dockerfile
# Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + Kunlun XPU.
FROM docker.1ms.run/wjie520/vllm_kunlun:v0.10.1.1rc1

# Install Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
    python3 -m pip install "mineru[api,gradio]>=2.7.6" \
        "matplotlib>=3.10,<4" \
        "ultralytics>=8.3.48,<9" \
        "doclayout_yolo==0.0.4" \
        "ftfy>=6.3.1,<7" \
        "shapely>=2.0.7,<3" \
        "pyclipper>=1.3.0,<2" \
        "omegaconf>=2.3.0,<3" \
        -i https://mirrors.aliyun.com/pypi/simple && \
    sed -i '1,200{s/self\.act = act_layer()/self.act = nn.GELU()/;t;b};' /root/miniconda/envs/vllm_kunlun_0.10.1.1/lib/python3.10/site-packages/vllm_kunlun/models/qwen2_vl.py && \
    python3 -m pip cache purge

# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```
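The `sed` call in the Dockerfile above patches `qwen2_vl.py` in place, rewriting `self.act = act_layer()` to `self.act = nn.GELU()` on any matching line among the first 200. A minimal Python sketch of the same substitution; the helper name and the sample snippet are illustrative, not part of MinerU:

```python
def patch_act_layer(source: str, max_lines: int = 200) -> str:
    """Mirror the sed range `1,200`: substitute on matching lines only
    within the first `max_lines` lines of the file."""
    lines = source.splitlines()
    for i, line in enumerate(lines[:max_lines]):
        lines[i] = line.replace("self.act = act_layer()", "self.act = nn.GELU()", 1)
    return "\n".join(lines)

# Hypothetical excerpt resembling the patched model file.
snippet = "class VisionMLP:\n    def __init__(self, act_layer):\n        self.act = act_layer()\n"
patched = patch_act_layer(snippet)
```

The point of the original `sed` range is to scope the replacement to the vision MLP near the top of the file and leave later code untouched.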
```diff
@@ -1,6 +1,6 @@
 # Base image: configure vLLM or LMDeploy and choose one as needed; requires amd64 (x86-64) CPU + Cambricon MLU.
 # Base image containing the LMDEPLOY inference environment, requiring amd64(x86-64) CPU + Cambricon MLU.
-FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/camb:qwen_vl2.5
+FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/camb:qwen2.5_vl
 ARG BACKEND=lmdeploy
 # Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + Cambricon MLU.
 # FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/mlu:vllm0.8.3-torch2.6.0-torchmlu1.26.1-ubuntu22.04-py310
```
```diff
@@ -77,7 +77,8 @@ Here are the environment variables and their descriptions:
 - `MINERU_MODEL_SOURCE`:
   * Used to specify model source
   * supports `huggingface/modelscope/local`
-  * defaults to `huggingface`, can be switched to `modelscope` or local models through environment variables.
+  * Default is `huggingface`; you can switch via an environment variable to `modelscope` to use a domestic acceleration mirror, or switch to `local` to use a local model.
 
 - `MINERU_TOOLS_CONFIG_JSON`:
   * Used to specify configuration file path
@@ -101,8 +102,14 @@ Here are the environment variables and their descriptions:
   * Default is `true`, can be set to `false` via environment variable to disable table merging functionality.
 
 - `MINERU_PDF_RENDER_TIMEOUT`:
-  * Used to set the timeout period (in seconds) for rendering PDF to images
-  * Default is `300` seconds, can be set to other values via environment variable to adjust the image rendering timeout.
+  * Used to set the timeout (in seconds) for rendering PDFs to images.
+  * Default is `300` seconds; you can set a different value via an environment variable to adjust the rendering timeout.
+  * Only effective on Linux and macOS systems.
+
+- `MINERU_PDF_RENDER_THREADS`:
+  * Used to set the number of threads used when rendering PDFs to images.
+  * Default is `4`; you can set a different value via an environment variable to adjust the number of threads for image rendering.
+  * Only effective on Linux and macOS systems.
 
 - `MINERU_INTRA_OP_NUM_THREADS`:
   * Used to set the intra_op thread count for ONNX models, affects the computation speed of individual operators
```
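A quick illustrative sketch of how these variables can be read with plain `os.getenv` and the documented defaults; the `env_int` helper is hypothetical, not MinerU's actual parsing code:

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an integer environment variable, falling back to a default."""
    raw = os.getenv(name)
    return int(raw) if raw else default

# Defaults documented above: 300 s render timeout, 4 render threads.
pdf_render_timeout = env_int("MINERU_PDF_RENDER_TIMEOUT", 300)
pdf_render_threads = env_int("MINERU_PDF_RENDER_THREADS", 4)
# Model source: "huggingface" (default), "modelscope", or "local".
model_source = os.getenv("MINERU_MODEL_SOURCE", "huggingface")
```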
```diff
@@ -175,4 +175,5 @@ docker run -u root --name mineru_docker --privileged=true \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which NPU accelerator cards are usable works much like it does for NVIDIA GPUs; see [ASCEND_RT_VISIBLE_DEVICES](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/850alpha001/maintenref/envvar/envref_07_0028.html)
+> - Specifying which NPU accelerator cards are usable works much like it does for NVIDIA GPUs; see [ASCEND_RT_VISIBLE_DEVICES](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/850alpha001/maintenref/envvar/envref_07_0028.html)
+> - On the Ascend platform you can check accelerator usage with the `npu-smi info` command and pick an idle card ID as needed to avoid resource conflicts.
```
````diff
@@ -3,7 +3,7 @@
 ```
 os: Ubuntu 22.04.5 LTS
 cpu: Hygon Hygon C86 7490
-gcu: MLU590-M9D
+mlu: MLU590-M9D
 driver: v6.2.11
 docker: 28.3.0
 ```
````
```diff
@@ -11,7 +11,7 @@ docker: 28.3.0
 ## 2. Environment Setup
 
 >[!NOTE]
->Ascend accelerator cards support using `lmdeploy` or `vllm` for VLM model inference acceleration. Choose one to install and use according to your needs:
+>Cambricon accelerator cards support using `lmdeploy` or `vllm` for VLM model inference acceleration. Choose one to install and use according to your needs:
 
 ### 2.1 Build the Image with a Dockerfile (lmdeploy)
```
````diff
@@ -36,24 +36,11 @@ docker run --name mineru_docker \
     --privileged \
     --ipc=host \
     --network=host \
     --cap-add SYS_PTRACE \
-    --device=/dev/mem \
-    --device=/dev/dri \
-    --device=/dev/infiniband \
-    --device=/dev/cambricon_ctl \
-    --device=/dev/cambricon_dev0 \
-    --device=/dev/cambricon_dev1 \
-    --device=/dev/cambricon_dev2 \
-    --device=/dev/cambricon_dev3 \
-    --device=/dev/cambricon_dev4 \
-    --device=/dev/cambricon_dev5 \
-    --device=/dev/cambricon_dev6 \
-    --device=/dev/cambricon_dev7 \
-    --group-add video \
     --shm-size=400g \
     --ulimit memlock=-1 \
     --security-opt seccomp=unconfined \
     --security-opt apparmor=unconfined \
     -v /dev:/dev \
     -v /lib/modules:/lib/modules:ro \
     -v /usr/bin/cnmon:/usr/bin/cnmon \
     -e MINERU_MODEL_SOURCE=local \
     -e MINERU_LMDEPLOY_DEVICE=camb \
     -it mineru:mlu-lmdeploy-latest \
@@ -62,11 +49,14 @@ docker run --name mineru_docker \
 
 >[!TIP]
 > Choose the `vllm` or `lmdeploy` version of the image as appropriate; to use `vllm`, do the following:
 >
 > - Replace `mineru:mlu-lmdeploy-latest` in the command above with `mineru:mlu-vllm-latest`
 >
+> - After entering the container, switch the venv environment with:
+> ```bash
+> source /torch/venv3/pytorch_infer/bin/activate
+> ```
 >
 > - Once the switch succeeds you will see the `(pytorch_infer)` prefix on the command line, indicating you have entered the `vllm` virtual environment.
 
 After running this command you will be dropped into the container's interactive terminal, where you can run MinerU commands directly to use MinerU's features.
````
```diff
@@ -83,7 +73,7 @@ docker run --name mineru_docker \
 In different environments, MinerU's support for Cambricon accelerator cards is shown in the table below:
 
 >[!TIP]
-> - The `lmdeploy` yellow-light issue is that it cannot batch-output a folder; single-file input works fine
+> - The `lmdeploy` yellow-light issue is that batch parsing with a folder as input does not work; a single file as input behaves normally.
 > - The `vllm` yellow-light issue is unaligned accuracy, which may produce unexpected results in some scenarios.
 
 <table border="1">
```
```diff
@@ -165,5 +155,6 @@ docker run --name mineru_docker \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which Cambricon accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section,
->replacing the `CUDA_VISIBLE_DEVICES` environment variable with `MLU_VISIBLE_DEVICES`.
+> - Specifying which Cambricon accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section,
+>replacing the `CUDA_VISIBLE_DEVICES` environment variable with `MLU_VISIBLE_DEVICES`.
+> - On the Cambricon platform you can check accelerator usage with the `cnmon` command and pick an idle card ID as needed to avoid resource conflicts.
```
```diff
@@ -105,5 +105,6 @@ docker run -u root --name mineru_docker \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which GCU accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section,
->replacing the `CUDA_VISIBLE_DEVICES` environment variable with `TOPS_VISIBLE_DEVICES`.
+> - Specifying which GCU accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section,
+>replacing the `CUDA_VISIBLE_DEVICES` environment variable with `TOPS_VISIBLE_DEVICES`.
+> - On the Enflame platform you can check accelerator usage with the `efsmi` command and pick an idle card ID as needed to avoid resource conflicts.
```
````diff
@@ -2,7 +2,7 @@
 The following is the platform information used for testing in this guide, for reference:
 ```
 os: Ubuntu 22.04.3 LTS
-cpu: Hygon Hygon C86-4G(x86-64)
+cpu: Hygon C86-4G(x86-64)
 dcu: BW200
 driver: 6.3.13-V1.12.0a
 docker: 20.10.24
````
```diff
@@ -112,4 +112,5 @@ docker run -u root --name mineru_docker \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which DCU accelerator cards are usable works much like it does for AMD GPUs; see [GPU isolation techniques](https://rocm.docs.amd.com/en/docs-6.2.4/conceptual/gpu-isolation.html)
+> - Specifying which DCU accelerator cards are usable works much like it does for AMD GPUs; see [GPU isolation techniques](https://rocm.docs.amd.com/en/docs-6.2.4/conceptual/gpu-isolation.html)
+> - On the Hygon platform you can check accelerator usage with the `hy-smi` command and pick an idle card ID as needed to avoid resource conflicts.
```
````diff
@@ -3,7 +3,7 @@
 ```
 os: Ubuntu 22.04.5 LTS
 cpu: Intel x86-64
-gcu: Iluvatar BI-V150
+gpu: Iluvatar BI-V150
 driver: 4.4.0
 docker: 28.1.1
 ```
````
````diff
@@ -36,7 +36,7 @@ docker run --name mineru_docker \
     --security-opt apparmor=unconfined \
     -e VLLM_ENFORCE_CUDA_GRAPH=1 \
     -e MINERU_MODEL_SOURCE=local \
-    -e MINERU_LMDEPLOY_DEVICE=corex \
+    -e MINERU_VLLM_DEVICE=corex \
     -it mineru:corex-vllm-latest \
     /bin/bash
 ```
````
```diff
@@ -119,4 +119,5 @@ docker run --name mineru_docker \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which Iluvatar accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section
+> - Specifying which Iluvatar accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section
+> - On the Iluvatar platform you can check accelerator usage with the `ixsmi` command and pick an idle card ID as needed to avoid resource conflicts.
```
docs/zh/usage/acceleration_cards/Kunlunxin.md (new file, 124 lines)

````markdown
## 1. Test Platform
The following is the platform information used for testing in this guide, for reference:
```
os: Ubuntu 22.04.5 LTS
cpu: Intel x86-64
xpu: P800
driver: 515.58
docker: 20.10.5
```

## 2. Environment Setup

### 2.1 Build the Image with a Dockerfile (vllm)

```bash
wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/china/kxpu.Dockerfile
docker build --network=host -t mineru:kxpu-vllm-latest -f kxpu.Dockerfile .
```

## 3. Start the Docker Container

```bash
docker run -u root --name mineru_docker \
    --device=/dev/xpu0:/dev/xpu0 \
    --device=/dev/xpu1:/dev/xpu1 \
    --device=/dev/xpu2:/dev/xpu2 \
    --device=/dev/xpu3:/dev/xpu3 \
    --device=/dev/xpu4:/dev/xpu4 \
    --device=/dev/xpu5:/dev/xpu5 \
    --device=/dev/xpu6:/dev/xpu6 \
    --device=/dev/xpu7:/dev/xpu7 \
    --device=/dev/xpuctrl:/dev/xpuctrl \
    --net=host \
    --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
    --tmpfs /dev/shm:rw,nosuid,nodev,exec,size=32g \
    --cap-add=SYS_PTRACE \
    -v /home/users/vllm-kunlun:/home/vllm-kunlun \
    -v /usr/local/bin/xpu-smi:/usr/local/bin/xpu-smi \
    -w /workspace \
    -e MINERU_MODEL_SOURCE=local \
    -e MINERU_FORMULA_CH_SUPPORT=true \
    -e MINERU_VLLM_DEVICE=kxpu \
    -it mineru:kxpu-vllm-latest \
    /bin/bash
```

After running this command you will be dropped into the container's interactive terminal, where you can run MinerU commands directly to use MinerU's features.
You can also start the MinerU service directly by replacing `/bin/bash` with the service launch command; for details see [Start the service via command](https://opendatalab.github.io/MinerU/zh/usage/quick_usage/#apiwebuihttp-clientserver).

## 4. Notes

In different environments, MinerU's support for Kunlunxin accelerator cards is shown in the table below:

<table border="1">
<thead>
<tr>
<th rowspan="2" colspan="2">Usage scenario</th>
<th colspan="2">Container environment</th>
</tr>
<tr>
<th>vllm</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">Command-line tool (mineru)</td>
<td>pipeline</td>
<td>🟢</td>
</tr>
<tr>
<td><vlm/hybrid>-auto-engine</td>
<td>🟢</td>
</tr>
<tr>
<td><vlm/hybrid>-http-client</td>
<td>🟢</td>
</tr>
<tr>
<td rowspan="3">FastAPI service (mineru-api)</td>
<td>pipeline</td>
<td>🟢</td>
</tr>
<tr>
<td><vlm/hybrid>-auto-engine</td>
<td>🟢</td>
</tr>
<tr>
<td><vlm/hybrid>-http-client</td>
<td>🟢</td>
</tr>
<tr>
<td rowspan="3">Gradio UI (mineru-gradio)</td>
<td>pipeline</td>
<td>🟢</td>
</tr>
<tr>
<td><vlm/hybrid>-auto-engine</td>
<td>🟢</td>
</tr>
<tr>
<td><vlm/hybrid>-http-client</td>
<td>🟢</td>
</tr>
<tr>
<td colspan="2">openai-server service (mineru-openai-server)</td>
<td>🟢</td>
</tr>
<tr>
<td colspan="2">Data parallelism (--data-parallel-size)</td>
<td>🔴</td>
</tr>
</tbody>
</table>

Note:
🟢: Supported; runs fairly stably, accuracy essentially consistent with Nvidia GPUs
🟡: Supported but less stable; may misbehave in some scenarios, or accuracy differs somewhat
🔴: Not supported; fails to run, or accuracy differs significantly

>[!TIP]
> - Specifying which Kunlunxin accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section,
>replacing the `CUDA_VISIBLE_DEVICES` environment variable with `XPU_VISIBLE_DEVICES`.
> - On the Kunlunxin platform you can check accelerator usage with the `xpu-smi` command and pick an idle card ID as needed to avoid resource conflicts.
````
```diff
@@ -148,4 +148,5 @@ docker run --ipc host \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which MACA accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section.
+> - Specifying which MACA accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section.
+> - On the METAX platform you can check accelerator usage with the `mx-smi` command and pick an idle card ID as needed to avoid resource conflicts.
```
```diff
@@ -27,6 +27,7 @@ docker run -u root --name mineru_docker \
     --shm-size=80g \
     --privileged \
     -e MTHREADS_VISIBLE_DEVICES=all \
+    -e MINERU_VLLM_DEVICE=musa \
    -e MINERU_MODEL_SOURCE=local \
     -it mineru:musa-vllm-latest \
     /bin/bash
```
```diff
@@ -112,4 +113,5 @@ docker run -u root --name mineru_docker \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which MooreThreads accelerator cards are usable works much like it does for NVIDIA GPUs; see [GPU enumeration](https://docs.mthreads.com/cloud-native/cloud-native-doc-online/install_guide/#gpu-%E6%9E%9A%E4%B8%BE)
+> - Specifying which MooreThreads accelerator cards are usable works much like it does for NVIDIA GPUs; see [GPU enumeration](https://docs.mthreads.com/cloud-native/cloud-native-doc-online/install_guide/#gpu-%E6%9E%9A%E4%B8%BE)
+> - On the MooreThreads platform you can check accelerator usage with the `mthreads-gmi` command and pick an idle card ID as needed to avoid resource conflicts.
```
```diff
@@ -127,7 +127,7 @@ docker run --privileged=true \
 </tr>
 <tr>
 <td colspan="2">Data parallelism (--data-parallel-size/--dp)</td>
-<td>🟡</td>
+<td>🔴</td>
 <td>🔴</td>
 </tr>
 </tbody>
```
```diff
@@ -139,4 +139,5 @@ docker run --privileged=true \
 🔴: Not supported; fails to run, or accuracy differs significantly
 
 >[!TIP]
->Specifying which PPU accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section.
+> - Specifying which PPU accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section.
+> - On the T-Head platform you can check accelerator usage with the `ppu-smi` command and pick an idle card ID as needed to avoid resource conflicts.
```
````diff
@@ -1,73 +1,120 @@
 # TECO Adaptation
+## 1. Test Platform
+The following is the platform information used for testing in this guide, for reference:
+```
+os: Ubuntu 22.04.5 LTS
+cpu: AMD EPYC (amd64)
+gpu: T100
+driver: 3.0.0
+docker: 28.0.4
+```
+
-## Quick Start
-The main workflow for running inference with this tool is as follows:
-1. Base environment installation: the basic environment checks and installations to complete before inference.
-3. Build the Docker environment: how to create the Docker environment needed for model inference from a Dockerfile.
-4. Start inference: how to launch inference.
+## 2. Environment Setup
 
-### 1 Base environment installation
-Refer to the [installation and preparation chapter of the Teco user manual](http://docs.tecorigin.com/release/torch_2.4/v2.2.0/#fc980a30f1125aa88bad4246ff0cedcc) to complete the basic environment checks and installation before training.
+### 2.1 Download and Load the Image (vllm)
 
-### 2 Build docker
-#### 2.1 Run the following command to download the Docker image locally (image package: pytorch-3.0.0-torch_sdaa3.0.0.tar)
 ```bash
-wget http://wb.tecorigin.com:8082/repository/teco-customer-repo/Course/MinerU/mineru-vllm.tar
+wget <image download link> (contact Tecorigin staff to obtain the link)
+docker load -i mineru-vllm.tar
 ```
 
-#### 2.2 Verify the Docker image package: run the following command and check that the generated MD5 matches the official MD5 b2a7f60508c0d199a99b8b6b35da3954:
+## 3. Start the Docker Container
 
-md5sum pytorch-3.0.0-torch_sdaa3.0.0.tar
+```bash
+docker run -dit --name mineru_docker \
+    --privileged \
+    --cap-add SYS_PTRACE \
+    --cap-add SYS_ADMIN \
+    --network=host \
+    --shm-size=500G \
+    mineru:sdaa-vllm-latest \
+    /bin/bash
+```
 
-#### 2.3 Run the following command to import the Docker image
+>[!TIP]
+> To use the `vllm` environment, do the following:
+> - After entering the container, switch to the conda environment with:
+> ```bash
+> conda activate vllm_env_py310
+> ```
+>
+> - Once the switch succeeds you will see the `(vllm_env_py310)` prefix on the command line, indicating you have entered the `vllm` virtual environment.
 
-docker load < pytorch-3.0.0-torch_sdaa3.0.0.tar
-
-#### 2.4 Run the following command to build a Docker container named MinerU
-
-docker run -itd --name="MinerU" --net=host --device=/dev/tcaicard0 --device=/dev/tcaicard1 --device=/dev/tcaicard2 --device=/dev/tcaicard3 --cap-add SYS_PTRACE --cap-add SYS_ADMIN --shm-size 64g jfrog.tecorigin.net/tecotp-docker/release/ubuntu22.04/x86_64/pytorch:3.0.0-torch_sdaa3.0.0 /bin/bash
-
-#### 2.5 Run the following command to enter the Docker container named tecopytorch_docker.
-
-docker exec -it MinerU bash
+After running this command you will be dropped into the container's interactive terminal, where you can run MinerU commands directly to use MinerU's features.
+You can also start the MinerU service directly by replacing `/bin/bash` with the service launch command; for details see [Start the service via command](https://opendatalab.github.io/MinerU/zh/usage/quick_usage/#apiwebuihttp-clientserver).
 
-### 3 Run the following commands to install MinerU
-- Preparation before installation
-```
-cd <MinerU>
-pip install --upgrade pip
-pip install uv
-```
-- Because torch is already installed in the image and packages such as nvidia-nccl-cu12 and nvidia-cudnn-cu12 are not needed, some install dependencies must be commented out.
-- Comment out every "doclayout_yolo==0.0.4" dependency in <MinerU>/pyproject.toml, and also comment out the packages whose names start with torch.
-- Run the following command to install MinerU
-```
-uv pip install -e .[core]
-```
-- Download and install doclayout_yolo==0.0.4
-```
-pip install doclayout_yolo==0.0.4 --no-deps
-```
-- Download and install the other packages (dependencies of doclayout_yolo==0.0.4)
-```
-pip install albumentations py-cpuinfo seaborn thop numpy==1.24.4
-```
-- Because some tensors have non-contiguous internal memory layouts, the following two files must be modified
-<ultralytics install path>/ultralytics/utils/tal.py (around line 330, change view --> reshape)
-<doclayout_yolo install path>/doclayout_yolo/utils/tal.py (around line 375, change view --> reshape)
-### 4 Run inference
-- Enable the sdaa environment
-```
-export TORCH_SDAA_AUTOLOAD=cuda_migrate
-```
-- Before the first inference run, add the following environment variable to download the model weights
-```
-export HF_ENDPOINT=https://hf-mirror.com
-```
-- Run the following command to perform inference
-```
-mineru -p 'input path' -o 'output_path' --lang 'model_name'
-```
-where model_name can be chosen from 'ch', 'ch_server', 'ch_lite', 'en', 'korean', 'japan', 'chinese_cht', 'ta', 'te', 'ka', 'latin', 'arabic', 'east_slavic', 'cyrillic', 'devanagari'
-### 5 Software stack versions used for the adaptation
-Adapted with software stack version v3.0.0; contact Tecorigin staff to obtain it
+## 4. Notes
+
+In different environments, MinerU's support for Tecorigin accelerator cards is shown in the table below:
+
+<table border="1">
+<thead>
+<tr>
+<th rowspan="2" colspan="2">Usage scenario</th>
+<th colspan="2">Container environment</th>
+</tr>
+<tr>
+<th>vllm</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td rowspan="3">Command-line tool (mineru)</td>
+<td>pipeline</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td><vlm/hybrid>-auto-engine</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td><vlm/hybrid>-http-client</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td rowspan="3">FastAPI service (mineru-api)</td>
+<td>pipeline</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td><vlm/hybrid>-auto-engine</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td><vlm/hybrid>-http-client</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td rowspan="3">Gradio UI (mineru-gradio)</td>
+<td>pipeline</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td><vlm/hybrid>-auto-engine</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td><vlm/hybrid>-http-client</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td colspan="2">openai-server service (mineru-openai-server)</td>
+<td>🟢</td>
+</tr>
+<tr>
+<td colspan="2">Data parallelism (--data-parallel-size)</td>
+<td>🔴</td>
+</tr>
+</tbody>
+</table>
+
+Note:
+🟢: Supported; runs fairly stably, accuracy essentially consistent with Nvidia GPUs
+🟡: Supported but less stable; may misbehave in some scenarios, or accuracy differs somewhat
+🔴: Not supported; fails to run, or accuracy differs significantly
+
+>[!TIP]
+> - Specifying which Tecorigin accelerator cards are usable works much like it does for NVIDIA GPUs; see the [Using specified GPU devices](https://opendatalab.github.io/MinerU/zh/usage/advanced_cli_parameters/#cuda_visible_devices) section,
+>replacing the `CUDA_VISIBLE_DEVICES` environment variable with `SDAA_VISIBLE_DEVICES`.
+> - On the Tecorigin platform you can check accelerator usage with the `teco-smi -c` command and pick an idle card ID as needed to avoid resource conflicts.
````
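The TIP blocks across these accelerator pages all follow the same pattern: reuse the NVIDIA-style `CUDA_VISIBLE_DEVICES` workflow, swapping in the vendor's own variable name. A small illustrative lookup consolidating the mappings stated in the docs above (the helper itself is hypothetical, not part of MinerU):

```python
# Vendor-specific equivalents of CUDA_VISIBLE_DEVICES, as stated in the docs.
VISIBLE_DEVICES_ENV = {
    "nvidia": "CUDA_VISIBLE_DEVICES",
    "cambricon": "MLU_VISIBLE_DEVICES",
    "enflame": "TOPS_VISIBLE_DEVICES",
    "kunlunxin": "XPU_VISIBLE_DEVICES",
    "tecorigin": "SDAA_VISIBLE_DEVICES",
}

def visible_devices_assignment(vendor: str, device_ids: list) -> str:
    """Render the shell assignment that restricts MinerU to the given cards."""
    var = VISIBLE_DEVICES_ENV[vendor.lower()]
    return f"{var}={','.join(str(i) for i in device_ids)}"
```

For example, `visible_devices_assignment("cambricon", [0, 1])` yields the `MLU_VISIBLE_DEVICES=0,1` prefix you would put in front of a `mineru` command after checking idle cards with `cnmon`.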
```diff
@@ -72,7 +72,7 @@ Some parameters of the MinerU command-line tool have environment-variable equivalents,
 - `MINERU_MODEL_SOURCE`:
   * Used to specify the model source
   * Supports `huggingface/modelscope/local`
-  * Defaults to `huggingface`; can be switched via an environment variable to `modelscope` or to local models.
+  * Defaults to `huggingface`; can be switched via an environment variable to `modelscope` to use a domestic acceleration mirror, or to `local` to use local models.
 
 - `MINERU_TOOLS_CONFIG_JSON`:
   * Used to specify the configuration file path
@@ -98,6 +98,12 @@ Some parameters of the MinerU command-line tool have environment-variable equivalents,
 - `MINERU_PDF_RENDER_TIMEOUT`:
   * Used to set the timeout (in seconds) for rendering PDFs to images
   * Defaults to `300` seconds; can be set to another value via an environment variable to adjust the rendering timeout.
+  * Only effective on Linux and macOS systems.
+
+- `MINERU_PDF_RENDER_THREADS`:
+  * Used to set the number of threads used when rendering PDFs to images
+  * Defaults to `4`; can be set to another value via an environment variable to adjust the number of rendering threads.
+  * Only effective on Linux and macOS systems.
 
 - `MINERU_INTRA_OP_NUM_THREADS`:
   * Used to set the intra_op thread count for ONNX models, which affects the computation speed of individual operators
```
```diff
@@ -17,8 +17,9 @@
 * [摩尔线程 MooreThreads](acceleration_cards/MooreThreads.md) 🚀
 * [天数智芯 IluvatarCorex](acceleration_cards/IluvatarCorex.md) 🚀
 * [寒武纪 Cambricon](acceleration_cards/Cambricon.md) 🚀
+* [昆仑芯 Kunlunxin](acceleration_cards/Kunlunxin.md) 🚀
-* [太初元碁 Tecorigin](acceleration_cards/Tecorigin.md) ❤️
 * [AMD](acceleration_cards/AMD.md) [#3662](https://github.com/opendatalab/MinerU/discussions/3662) ❤️
+* [太初元碁 Tecorigin](acceleration_cards/Tecorigin.md) [#3767](https://github.com/opendatalab/MinerU/pull/3767) ❤️
 * [瀚博 VastAI](acceleration_cards/VastAI.md) [#4237](https://github.com/opendatalab/MinerU/discussions/4237)❤️
 - Plugins and ecosystem
 * [Cherry Studio](plugin/Cherry_Studio.md)
```
```diff
@@ -24,6 +24,9 @@ def enable_custom_logits_processors() -> bool:
         compute_capability = "8.0"
     elif hasattr(torch, 'mlu') and torch.mlu.is_available():
         compute_capability = "8.0"
+    elif hasattr(torch, 'sdaa') and torch.sdaa.is_available():
+        compute_capability = "8.0"
+
     else:
         logger.info("CUDA not available, disabling custom_logits_processors")
         return False
```
@@ -102,4 +105,128 @@ def set_default_batch_size() -> int:
|
||||
except Exception as e:
|
||||
logger.warning(f'Error determining VRAM: {e}, using default batch_ratio: 1')
|
||||
batch_size = 1
|
||||
return batch_size
|
||||
return batch_size
|
||||
|
||||
|
||||
def _get_device_config(device_type: str) -> dict | None:
|
||||
"""获取不同设备类型的配置参数"""
|
||||
|
||||
# 各设备类型的配置定义
|
||||
DEVICE_CONFIGS = {
|
||||
# "musa": {
|
||||
# "compilation_config_dict": {
|
||||
# "cudagraph_capture_sizes": [1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 18, 20, 24, 28, 30],
|
||||
# "simple_cuda_graph": True
|
||||
# },
|
||||
# "block_size": 32,
|
||||
# },
|
||||
"corex": {
|
||||
"compilation_config_dict": {
|
||||
"cudagraph_mode": "FULL_DECODE_ONLY",
|
||||
"level": 0
|
||||
},
|
||||
},
|
||||
"kxpu": {
|
||||
"compilation_config_dict": {
|
||||
"splitting_ops": [
|
||||
"vllm.unified_attention", "vllm.unified_attention_with_output",
|
||||
"vllm.unified_attention_with_output_kunlun", "vllm.mamba_mixer2",
|
||||
"vllm.mamba_mixer", "vllm.short_conv", "vllm.linear_attention",
|
||||
"vllm.plamo2_mamba_mixer", "vllm.gdn_attention", "vllm.sparse_attn_indexer"
|
||||
]
|
||||
},
|
||||
"block_size": 128,
|
||||
"dtype": "float16",
|
||||
"distributed_executor_backend": "mp",
|
||||
"enable_chunked_prefill": False,
|
||||
"enable_prefix_caching": False,
|
||||
},
|
||||
}
|
||||
|
||||
return DEVICE_CONFIGS.get(device_type.lower())
|
||||
|
||||
|
||||
def _check_server_arg_exists(args: list, arg_name: str) -> bool:
|
||||
"""检查命令行参数列表中是否已存在指定参数"""
|
||||
return any(arg == f"--{arg_name}" or arg.startswith(f"--{arg_name}=") for arg in args)
|
||||
|
||||
|
||||
def _add_server_arg_if_missing(args: list, arg_name: str, value: str) -> None:
|
||||
"""如果参数不存在,则添加到命令行参数列表"""
|
||||
if not _check_server_arg_exists(args, arg_name):
|
||||
args.extend([f"--{arg_name}", value])
|
||||
|
||||
|
||||
def _add_server_flag_if_missing(args: list, flag_name: str) -> None:
|
||||
"""如果 flag 不存在,则添加到命令行参数列表"""
|
||||
if not _check_server_arg_exists(args, flag_name):
|
||||
args.append(f"--{flag_name}")
|
||||
|
||||
|
||||
def _add_engine_kwarg_if_missing(kwargs: dict, key: str, value) -> None:
    """Add the parameter to the kwargs dict if it is not already present."""
    if key not in kwargs:
        kwargs[key] = value


def mod_kwargs_by_device_type(kwargs_or_args: dict | list, vllm_mode: str) -> dict | list:
    """Modify the vllm configuration parameters according to the device type.

    Args:
        kwargs_or_args: Configuration parameters; a list in server mode, a dict in engine mode.
        vllm_mode: The vllm run mode ("server", "sync_engine", "async_engine").

    Returns:
        The modified configuration parameters.
    """
    device_type = os.getenv("MINERU_VLLM_DEVICE", "")
    config = _get_device_config(device_type)

    if config is None:
        return kwargs_or_args

    if vllm_mode == "server":
        _apply_server_config(kwargs_or_args, config)
    else:
        _apply_engine_config(kwargs_or_args, config, vllm_mode)

    return kwargs_or_args


def _apply_server_config(args: list, config: dict) -> None:
    """Apply the configuration for server mode."""
    import json

    for key, value in config.items():
        if key == "compilation_config_dict":
            _add_server_arg_if_missing(
                args, "compilation-config",
                json.dumps(value, separators=(',', ':'))
            )
        else:
            # Convert the key format: block_size -> block-size
            arg_name = key.replace("_", "-")
            if arg_name in {"enable-chunked-prefill", "enable-prefix-caching"} and value is False:
                _add_server_flag_if_missing(args, f"no-{arg_name}")
                continue
            _add_server_arg_if_missing(args, arg_name, str(value))


def _apply_engine_config(kwargs: dict, config: dict, vllm_mode: str) -> None:
    """Apply the configuration for engine mode."""
    try:
        from vllm.config import CompilationConfig
    except ImportError:
        raise ImportError("Please install vllm to use the vllm-async-engine backend.")

    for key, value in config.items():
        if key == "compilation_config_dict":
            if vllm_mode == "sync_engine":
                compilation_config = value
            elif vllm_mode == "async_engine":
                compilation_config = CompilationConfig(**value)
            else:
                continue
            _add_engine_kwarg_if_missing(kwargs, "compilation_config", compilation_config)
        else:
            _add_engine_kwarg_if_missing(kwargs, key, value)
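The new `mod_kwargs_by_device_type` dispatches to per-mode helpers that only fill in values the user has not already supplied, both for a kwargs dict (engine mode) and for an argv-style list (server mode). A minimal standalone sketch of that "add only if missing" rule; the helper names and the values below are illustrative, mirroring but not reproducing the private helpers above:

```python
# Sketch of the "add only if missing" merge used by mod_kwargs_by_device_type.
# The helper names and values here are illustrative assumptions.

def add_engine_kwarg_if_missing(kwargs: dict, key: str, value) -> None:
    # Engine mode: kwargs is a dict; a user-supplied value always wins.
    if key not in kwargs:
        kwargs[key] = value

def add_server_arg_if_missing(args: list, name: str, value: str) -> None:
    # Server mode: args is a CLI argument list such as ["--block-size", "32"];
    # both "--flag value" and "--flag=value" spellings count as present.
    flag = f"--{name}"
    if not any(a == flag or a.startswith(f"{flag}=") for a in args):
        args.extend([flag, value])

# Engine mode: an existing key is left untouched, a missing one is filled in.
kwargs = {"block_size": 16}
add_engine_kwarg_if_missing(kwargs, "block_size", 32)
add_engine_kwarg_if_missing(kwargs, "enforce_eager", True)

# Server mode: the same rule applied to an argv-style list.
args = ["--block-size=16"]
add_server_arg_if_missing(args, "block-size", "32")
add_server_arg_if_missing(args, "gpu-memory-utilization", "0.5")
```

This keeps device-specific defaults from silently overriding anything the user passed explicitly, which is why the real helpers check for presence before writing.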
@@ -6,7 +6,7 @@ import json
 from loguru import logger

 from .utils import enable_custom_logits_processors, set_default_gpu_memory_utilization, set_default_batch_size, \
-    set_lmdeploy_backend
+    set_lmdeploy_backend, mod_kwargs_by_device_type
 from .model_output_to_middle_json import result_to_middle_json
 from ...data.data_reader_writer import DataWriter
 from mineru.utils.pdf_image_tools import load_images_from_pdf
@@ -101,27 +101,7 @@ class ModelSingleton:
         except ImportError:
             raise ImportError("Please install vllm to use the vllm-engine backend.")

-        # Special configuration for the musa vllm v1 engine
-        # device = get_device()
-        # if device_type.startswith("musa"):
-        #     import torch
-        #     if torch.musa.is_available():
-        #         compilation_config = {
-        #             "cudagraph_capture_sizes": [1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 18, 20, 24, 28, 30],
-        #             "simple_cuda_graph": True
-        #         }
-        #         block_size = 32
-        #         kwargs["compilation_config"] = compilation_config
-        #         kwargs["block_size"] = block_size
-
-        # Special configuration for the corex vllm v1 engine
-        device_type = os.getenv("MINERU_LMDEPLOY_DEVICE", "")
-        if device_type.lower() == "corex":
-            compilation_config = {
-                "cudagraph_mode": "FULL_DECODE_ONLY",
-                "level": 0
-            }
-            kwargs["compilation_config"] = compilation_config
+        kwargs = mod_kwargs_by_device_type(kwargs, vllm_mode="sync_engine")

         if "compilation_config" in kwargs:
             if isinstance(kwargs["compilation_config"], str):
@@ -148,28 +128,7 @@ class ModelSingleton:
         except ImportError:
             raise ImportError("Please install vllm to use the vllm-async-engine backend.")


-        # Special configuration for the musa vllm v1 engine
-        # device = get_device()
-        # if device.startswith("musa"):
-        #     import torch
-        #     if torch.musa.is_available():
-        #         compilation_config = CompilationConfig(
-        #             cudagraph_capture_sizes=[1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 18, 20, 24, 28, 30],
-        #             simple_cuda_graph=True
-        #         )
-        #         block_size = 32
-        #         kwargs["compilation_config"] = compilation_config
-        #         kwargs["block_size"] = block_size
-
-        # Special configuration for the corex vllm v1 engine
-        device_type = os.getenv("MINERU_LMDEPLOY_DEVICE", "")
-        if device_type.lower() == "corex":
-            compilation_config = CompilationConfig(
-                cudagraph_mode="FULL_DECODE_ONLY",
-                level=0
-            )
-            kwargs["compilation_config"] = compilation_config
+        kwargs = mod_kwargs_by_device_type(kwargs, vllm_mode="async_engine")

         if "compilation_config" in kwargs:
             if isinstance(kwargs["compilation_config"], dict):
@@ -89,7 +89,11 @@ class FormulaRecognizer(BaseOCRV20):
         return rec_formula

     def batch_predict(
-        self, images_mfd_res: list, images: list, batch_size: int = 64
+        self,
+        images_mfd_res: list,
+        images: list,
+        batch_size: int = 64,
+        interline_enable: bool = True,
     ) -> list:
         images_formula_list = []
         mf_image_list = []
@@ -105,6 +109,8 @@ class FormulaRecognizer(BaseOCRV20):
         for idx, (xyxy, conf, cla) in enumerate(
             zip(mfd_res.boxes.xyxy, mfd_res.boxes.conf, mfd_res.boxes.cls)
         ):
+            if not interline_enable and cla.item() == 1:
+                continue  # Skip interline regions if not enabled
             xmin, ymin, xmax, ymax = [int(p.item()) for p in xyxy]
             new_item = {
                 "category_id": 13 + int(cla.item()),
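The `interline_enable` hunk above skips detections whose class id is 1 (interline formulas) before they are cropped and recognized. A hypothetical sketch of that filter, with plain tuples standing in for the YOLO box tensors used in `batch_predict`:

```python
# Illustrative sketch of the interline filter added to batch_predict.
# Detections are (xyxy, conf, cls) tuples; cls == 1 marks an interline formula,
# and category_id is offset by 13, as in the diff above.

def filter_formula_boxes(detections, interline_enable=True):
    """detections: iterable of (xyxy, conf, cls) tuples."""
    kept = []
    for xyxy, conf, cla in detections:
        if not interline_enable and cla == 1:
            continue  # skip interline regions if not enabled
        kept.append({"category_id": 13 + int(cla), "bbox": xyxy, "score": conf})
    return kept

# One inline detection (cls 0) and one interline detection (cls 1).
dets = [((0, 0, 10, 5), 0.9, 0), ((0, 10, 10, 20), 0.8, 1)]
```

With `interline_enable=False` only the inline box survives; with the default `True` both are kept, matching the previous behavior.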
@@ -1,8 +1,8 @@
 import os
 import sys

-from mineru.backend.vlm.utils import set_default_gpu_memory_utilization, enable_custom_logits_processors
-from mineru.utils.config_reader import get_device
+from mineru.backend.vlm.utils import set_default_gpu_memory_utilization, enable_custom_logits_processors, \
+    mod_kwargs_by_device_type
 from mineru.utils.models_download_utils import auto_download_and_get_model_root_path

 from vllm.entrypoints.cli.main import main as vllm_main
@@ -14,8 +14,6 @@ def main():
     has_port_arg = False
     has_gpu_memory_utilization_arg = False
     has_logits_processors_arg = False
-    has_block_size_arg = False
-    has_compilation_config = False
     model_path = None
     model_arg_indices = []
@@ -27,10 +25,6 @@ def main():
             has_gpu_memory_utilization_arg = True
         if arg == "--logits-processors" or arg.startswith("--logits-processors="):
             has_logits_processors_arg = True
-        if arg == "--block-size" or arg.startswith("--block-size="):
-            has_block_size_arg = True
-        if arg == "--compilation-config" or arg.startswith("--compilation-config="):
-            has_compilation_config = True
         if arg == "--model":
             if i + 1 < len(args):
                 model_path = args[i + 1]
@@ -57,21 +51,7 @@ def main():
     if (not has_logits_processors_arg) and custom_logits_processors:
        args.extend(["--logits-processors", "mineru_vl_utils:MinerULogitsProcessor"])

-    # Special configuration for the musa vllm v1 engine
-    # device = get_device()
-    # if device.startswith("musa"):
-    #     import torch
-    #     if torch.musa.is_available():
-    #         if not has_block_size_arg:
-    #             args.extend(["--block-size", "32"])
-    #         if not has_compilation_config:
-    #             args.extend(["--compilation-config", '{"cudagraph_capture_sizes": [1,2,3,4,5,6,7,8,10,12,14,16,18,20,24,28,30], "simple_cuda_graph": true}'])
-
-    # Special configuration for the corex vllm v1 engine
-    device_type = os.getenv("MINERU_LMDEPLOY_DEVICE", "")
-    if device_type.lower() == "corex":
-        if not has_compilation_config:
-            args.extend(["--compilation-config", '{"cudagraph_mode": "FULL_DECODE_ONLY", "level": 0}'])
+    args = mod_kwargs_by_device_type(args, vllm_mode="server")

     # Rebuild the arguments, passing the model path as a positional argument
     sys.argv = [sys.argv[0]] + ["serve", model_path] + args
@@ -202,6 +202,10 @@ def model_init(model_name: str):
         if hasattr(torch, 'mlu') and torch.mlu.is_available():
             if torch.mlu.is_bf16_supported():
                 bf_16_support = True
+    elif device_name.startswith("sdaa"):
+        if hasattr(torch, 'sdaa') and torch.sdaa.is_available():
+            if torch.sdaa.is_bf16_supported():
+                bf_16_support = True

     if model_name == 'layoutreader':
         # Check whether the modelscope cache directory exists
@@ -98,7 +98,12 @@ def get_device():
         if torch.mlu.is_available():
             return "mlu"
     except Exception as e:
         pass
+    try:
+        if torch.sdaa.is_available():
+            return "sdaa"
+    except Exception as e:
+        pass

     return "cpu"

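`get_device()` probes each accelerator backend inside `try`/`except` so that a missing `torch` submodule (`torch.mlu`, `torch.sdaa`, ...) or a failing driver call simply falls through to the next candidate, ending at `"cpu"`. A trimmed-down sketch of that probing pattern; the probe list and fake module here are illustrative, not MinerU's actual order:

```python
from types import SimpleNamespace

def detect_device(torch_module) -> str:
    # Probe each candidate backend; a missing submodule (AttributeError) or
    # any failure inside is_available() falls through to the next candidate.
    for name in ("cuda", "mlu", "sdaa"):
        try:
            backend = getattr(torch_module, name)
            if backend.is_available():
                return name
        except Exception:
            pass
    return "cpu"

# A stand-in "torch" that only exposes a working sdaa backend.
fake_torch = SimpleNamespace(sdaa=SimpleNamespace(is_available=lambda: True))
```

Catching broadly here is deliberate: vendor plugins can raise backend-specific errors during probing, and any failure should mean "this device is unavailable", not a crash.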
@@ -432,6 +432,9 @@ def clean_memory(device='cuda'):
     elif str(device).startswith("mlu"):
         if torch.mlu.is_available():
             torch.mlu.empty_cache()
+    elif str(device).startswith("sdaa"):
+        if torch.sdaa.is_available():
+            torch.sdaa.empty_cache()
     gc.collect()

@@ -476,5 +479,8 @@ def get_vram(device) -> int:
     elif str(device).startswith("mlu"):
         if torch.mlu.is_available():
             total_memory = round(torch.mlu.get_device_properties(device).total_memory / (1024 ** 3))  # convert to GB
+    elif str(device).startswith("sdaa"):
+        if torch.sdaa.is_available():
+            total_memory = round(torch.sdaa.get_device_properties(device).total_memory / (1024 ** 3))  # convert to GB

     return total_memory
@@ -11,6 +11,11 @@ def get_load_images_timeout() -> int:
     return get_value_from_string(env_value, 300)


+def get_load_images_threads() -> int:
+    env_value = os.getenv('MINERU_PDF_RENDER_THREADS', None)
+    return get_value_from_string(env_value, 4)
+
+
 def get_value_from_string(env_value: str, default_value: int) -> int:
     if env_value is not None:
         try:
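The new `get_load_images_threads` follows the same pattern as `get_load_images_timeout`: read an environment variable and fall back to a default if it is absent or unparseable. A self-contained sketch of that pattern (the function name `int_from_env` is an illustrative stand-in for `get_value_from_string`):

```python
# Sketch of the env-var-with-default pattern behind get_load_images_threads:
# read MINERU_PDF_RENDER_THREADS, fall back to the default on absence
# or on a value that is not a valid integer.
import os

def int_from_env(var_name: str, default_value: int) -> int:
    env_value = os.getenv(var_name)
    if env_value is not None:
        try:
            return int(env_value)
        except ValueError:
            pass  # malformed value: fall through to the default
    return default_value

os.environ["MINERU_PDF_RENDER_THREADS"] = "8"
```

Swallowing the `ValueError` means a typo in the environment degrades gracefully to the default instead of aborting PDF rendering.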
@@ -1,5 +1,7 @@
 # Copyright (c) Opendatalab. All rights reserved.
 import os
+import signal
+import time
 from io import BytesIO

 import numpy as np
@@ -9,13 +11,13 @@ from PIL import Image, ImageOps

 from mineru.data.data_reader_writer import FileBasedDataWriter
 from mineru.utils.check_sys_env import is_windows_environment
-from mineru.utils.os_env_config import get_load_images_timeout
+from mineru.utils.os_env_config import get_load_images_timeout, get_load_images_threads
 from mineru.utils.pdf_reader import image_to_b64str, image_to_bytes, page_to_image
 from mineru.utils.enum_class import ImageType
 from mineru.utils.hash_utils import str_sha256
 from mineru.utils.pdf_page_id import get_end_page_id

-from concurrent.futures import ProcessPoolExecutor, TimeoutError as FuturesTimeoutError
+from concurrent.futures import ProcessPoolExecutor, wait, ALL_COMPLETED


 def pdf_page_to_image(page: pdfium.PdfPage, dpi=200, image_type=ImageType.PIL) -> dict:
@@ -57,7 +59,7 @@ def load_images_from_pdf(
     end_page_id=None,
     image_type=ImageType.PIL,
     timeout=None,
-    threads=4,
+    threads=None,
 ):
     """PDF-to-image conversion with timeout control and multi-process acceleration.

@@ -67,8 +69,8 @@ def load_images_from_pdf(
         start_page_id (int, optional): Starting page number. Defaults to 0.
         end_page_id (int | None, optional): Ending page number. Defaults to None.
         image_type (ImageType, optional): Image type. Defaults to ImageType.PIL.
-        timeout (int | None, optional): Timeout in seconds. If None, read from the environment variable MINERU_PDF_LOAD_IMAGES_TIMEOUT; defaults to 300 seconds if unset.
-        threads (int): Number of processes, default 4.
+        timeout (int | None, optional): Timeout in seconds. If None, read from the environment variable MINERU_PDF_RENDER_TIMEOUT; defaults to 300 seconds if unset.
+        threads (int): Number of processes. If None, read from the environment variable MINERU_PDF_RENDER_THREADS; defaults to 4 if unset.

     Raises:
         TimeoutError: Raised when the conversion times out.
@@ -86,6 +88,9 @@ def load_images_from_pdf(
     else:
         if timeout is None:
             timeout = get_load_images_timeout()
+        if threads is None:
+            threads = get_load_images_threads()

     end_page_id = get_end_page_id(end_page_id, len(pdf_doc))

     # Compute the total number of pages
@@ -108,11 +113,13 @@ def load_images_from_pdf(

         page_ranges.append((range_start, range_end))

-    # logger.debug(f"PDF to images using {actual_threads} processes, page ranges: {page_ranges}")
+    logger.debug(f"PDF to images using {actual_threads} processes, page ranges: {page_ranges}")

-    with ProcessPoolExecutor(max_workers=actual_threads) as executor:
+    executor = ProcessPoolExecutor(max_workers=actual_threads)
+    try:
         # Submit all tasks
         futures = []
+        future_to_range = {}
         for range_start, range_end in page_ranges:
             future = executor.submit(
                 _load_images_from_pdf_worker,
@@ -122,27 +129,68 @@ def load_images_from_pdf(
                 range_end,
                 image_type,
             )
-            futures.append((range_start, future))
+            futures.append(future)
+            future_to_range[future] = range_start

-        try:
-            # Collect the results sorted by page number
-            all_results = []
-            for range_start, future in futures:
-                images_list = future.result(timeout=timeout)
-                all_results.append((range_start, images_list))
+        # Use wait() to enforce a single global timeout
+        done, not_done = wait(futures, timeout=timeout, return_when=ALL_COMPLETED)

-            # Sort by starting page number and merge the results
-            all_results.sort(key=lambda x: x[0])
-            images_list = []
-            for _, imgs in all_results:
-                images_list.extend(imgs)
-
-            return images_list, pdf_doc
-        except FuturesTimeoutError:
+        # Check for unfinished tasks (the timeout case)
+        if not_done:
             # Timeout: force-terminate all child processes
+            _terminate_executor_processes(executor)
             pdf_doc.close()
-            executor.shutdown(wait=False, cancel_futures=True)
             raise TimeoutError(f"PDF to images conversion timeout after {timeout}s")

+        # All tasks finished; collect the results
+        all_results = []
+        for future in futures:
+            range_start = future_to_range[future]
+            # No timeout needed here because the tasks are already done
+            images_list = future.result()
+            all_results.append((range_start, images_list))
+
+        # Sort by starting page number and merge the results
+        all_results.sort(key=lambda x: x[0])
+        images_list = []
+        for _, imgs in all_results:
+            images_list.extend(imgs)
+
+        return images_list, pdf_doc
+
+    except Exception as e:
+        # On any exception, make sure the child processes are cleaned up
+        _terminate_executor_processes(executor)
+        pdf_doc.close()
+        if isinstance(e, TimeoutError):
+            raise
+        raise
+    finally:
+        executor.shutdown(wait=False, cancel_futures=True)
+
+
+def _terminate_executor_processes(executor):
+    """Force-terminate all child processes of a ProcessPoolExecutor."""
+    if hasattr(executor, '_processes'):
+        for pid, process in executor._processes.items():
+            if process.is_alive():
+                try:
+                    # Send SIGTERM first to allow a graceful exit
+                    os.kill(pid, signal.SIGTERM)
+                except (ProcessLookupError, OSError):
+                    pass
+
+        # Give the child processes a moment to respond to SIGTERM
+        time.sleep(0.1)
+
+        # Send SIGKILL to any processes that are still alive
+        for pid, process in executor._processes.items():
+            if process.is_alive():
+                try:
+                    os.kill(pid, signal.SIGKILL)
+                except (ProcessLookupError, OSError):
+                    pass
+
+
 def load_images_from_pdf_core(
     pdf_bytes: bytes,
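The key change in the hunk above is replacing per-future `result(timeout=timeout)` calls, which charge the full timeout to each future in turn, with a single `wait()` call that bounds the whole batch. That pattern can be sketched as follows; `ThreadPoolExecutor` is used here only so the sketch is self-contained and portable (the real code uses `ProcessPoolExecutor` plus explicit child-process termination), and the function names are illustrative:

```python
# Sketch of the single-global-timeout pattern: one wait() call bounds the
# whole batch of futures instead of giving each future its own timeout.
import time
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED

def run_with_global_timeout(tasks, timeout):
    """tasks: list of (callable, arg) pairs; returns results in submit order."""
    executor = ThreadPoolExecutor(max_workers=4)
    try:
        futures = [executor.submit(fn, arg) for fn, arg in tasks]
        done, not_done = wait(futures, timeout=timeout, return_when=ALL_COMPLETED)
        if not_done:
            raise TimeoutError(f"batch did not finish within {timeout}s")
        # Safe without a timeout: every future is already done.
        return [f.result() for f in futures]
    finally:
        executor.shutdown(wait=False, cancel_futures=True)

def slow_double(x):
    time.sleep(0.01)
    return x * 2
```

With per-future timeouts, N stragglers could stall for up to N x timeout in total; `wait()` guarantees the caller gets an answer (results or `TimeoutError`) within one timeout window.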
@@ -1 +1 @@
-__version__ = "2.7.3"
+__version__ = "2.7.5"