mirror of
https://github.com/opendatalab/MinerU.git
synced 2026-03-27 11:08:32 +07:00
docs(README): add Ascend NPU acceleration guide
- Add new file README_Ascend_NPU_Acceleration_zh_CN.md in docs folder
- Update README.md and README_zh-CN.md to include link to new NPU acceleration guide
- Provide instructions for building and running Docker image for Ascend NPU
- List known issues and limitations when using Ascend NPU
@@ -288,7 +288,7 @@ If your device supports CUDA and meets the GPU requirements of the mainline envi
### Using NPU

If your device has NPU acceleration hardware, you can follow the tutorial below to use NPU acceleration:

[Ascend NPU Acceleration](docs/README_Ascend_NPU_Acceleration_zh_CN.md)

## Usage
@@ -284,7 +284,7 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i h
> docker run --rm --gpus=all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
> ```
```bash
-wget https://github.com/opendatalab/MinerU/raw/master/docker/china/Dockerfile -O Dockerfile
+wget https://gitee.com/myhloli/MinerU/raw/master/docker/china/Dockerfile -O Dockerfile
docker build -t mineru:latest .
docker run --rm -it --gpus=all mineru:latest /bin/bash
magic-pdf --help
@@ -292,6 +292,7 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i h
### Using NPU

If your device has NPU acceleration hardware, you can use NPU acceleration by following the tutorial below:

[NPU Acceleration Tutorial](docs/README_Ascend_NPU_Acceleration_zh_CN.md)

## Usage
@@ -33,7 +33,7 @@ RUN python3 -m venv /opt/mineru_venv
# Activate the virtual environment and install necessary Python packages
RUN /bin/bash -c "source /opt/mineru_venv/bin/activate && \
    pip3 install --upgrade pip -i https://mirrors.aliyun.com/pypi/simple && \
-   wget https://gitee.com/myhloli/MinerU/raw/master/docker/huawei_npu/requirements.txt -O requirements.txt && \
+   wget https://gitee.com/myhloli/MinerU/raw/master/docker/ascend_npu/requirements.txt -O requirements.txt && \
    pip3 install -r requirements.txt --extra-index-url https://wheels.myhloli.com -i https://mirrors.aliyun.com/pypi/simple && \
    wget https://gitee.com/ascend/pytorch/releases/download/v6.0.rc2-pytorch2.3.1/torch_npu-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl && \
    pip install torch_npu-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl"
docs/README_Ascend_NPU_Acceleration_zh_CN.md (new file, 54 lines)
@@ -0,0 +1,54 @@
# Ascend NPU Acceleration

## Introduction

This document describes how to use MinerU on an Ascend NPU. Its contents have been verified on a `Huawei Atlas 800T A2` server:
```
CPU: Kunpeng 920 aarch64 2.6GHz
NPU: Ascend 910B 64GB
OS:  openEuler 22.03 (LTS-SP3)
```

Because setting up an environment adapted to the Ascend NPU is fairly complex, we recommend running MinerU in a Docker container.
## Building the Image

Make sure your network connection is in good condition, then run the following commands to build the image:

```bash
wget https://gitee.com/myhloli/MinerU/raw/master/docker/ascend_npu/Dockerfile -O Dockerfile
docker build -t mineru_npu:latest .
```

If no errors are reported during the build, the image was built successfully.
## Running the Container

```bash
docker run --rm -it -u root --privileged=true \
    --ipc=host \
    --network=host \
    --device=/dev/davinci0 \
    --device=/dev/davinci1 \
    --device=/dev/davinci2 \
    --device=/dev/davinci3 \
    --device=/dev/davinci4 \
    --device=/dev/davinci5 \
    --device=/dev/davinci6 \
    --device=/dev/davinci7 \
    --device=/dev/davinci_manager \
    --device=/dev/devmm_svm \
    --device=/dev/hisi_hdc \
    -v /var/log/npu/:/usr/slog \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    mineru_npu:latest \
    /bin/bash -c "echo 'source /opt/mineru_venv/bin/activate' >> ~/.bashrc && exec bash"

magic-pdf --help
```
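Once inside the container, it can be useful to confirm that PyTorch actually sees the NPU before running `magic-pdf`. A minimal sketch, assuming the `torch_npu` wheel installed by the Dockerfile above; on a machine without `torch_npu` it reports `False` instead of raising:

```python
import importlib.util


def npu_available() -> bool:
    """Report whether torch_npu is importable and an Ascend NPU is visible.

    Assumes the mineru_npu container environment described above; on a
    machine without torch_npu installed this safely returns False.
    """
    if importlib.util.find_spec("torch_npu") is None:
        return False
    import torch  # noqa: F401  (torch must be loaded before torch_npu)
    import torch_npu  # registers the 'npu' device backend with PyTorch

    return bool(torch_npu.npu.is_available())


print(npu_available())
```

If this prints `False` inside the container, re-check that the `/dev/davinci*` devices and the `/usr/local/Ascend/driver` mount were passed to `docker run`.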

## Known Issues

- paddlepaddle uses an embedded onnx model that supports only Chinese and English OCR; other languages are not supported
- The layout model crashes intermittently when layoutlmv3 is used; the doclayout_yolo model from the default configuration is recommended instead
- Table parsing has only been adapted for the rapid_table model; other models may not work
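The last two known issues correspond to model choices in MinerU's `magic-pdf.json` configuration file. A hedged sketch of the relevant fields, assuming the key names used by current MinerU releases (check your local config for the exact schema):

```json
{
  "layout-config": {
    "model": "doclayout_yolo"
  },
  "table-config": {
    "model": "rapid_table",
    "enable": true
  }
}
```

Leaving these at their defaults avoids the layoutlmv3 crashes and the unadapted table models noted above.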