Compare commits


484 Commits

Author SHA1 Message Date
Xiaomeng Zhao
db666bfdcf Merge pull request #4106 from myhloli/dev
fix: simplify GPU memory batch ratio calculation and enhance logging message
2025-12-02 03:48:09 +08:00
myhloli
4a86044b30 fix: remove unused batch_ratio variable from pipeline_analyze.py 2025-12-02 03:46:57 +08:00
myhloli
9fc13e3d88 fix: simplify GPU memory batch ratio calculation and enhance logging message 2025-12-02 03:43:32 +08:00
Xiaomeng Zhao
23c292409f Update projects/mineru_tianshu/api_server.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-02 03:39:13 +08:00
Xiaomeng Zhao
b3775b4a3d Merge pull request #4104 from myhloli/dev
fix: pin gradio version to 5.49.1 in pyproject.toml
2025-12-02 03:14:50 +08:00
myhloli
cf0ff63359 fix: pin gradio version to 5.49.1 in pyproject.toml 2025-12-02 03:13:08 +08:00
Xiaomeng Zhao
285190b9ce Merge pull request #4103 from myhloli/dev
Dev
2025-12-02 02:22:39 +08:00
myhloli
dfa02df68f fix: add copy button to Markdown components in gradio_app.py 2025-12-02 02:21:51 +08:00
myhloli
37714b2842 fix: pin gradio-pdf version to 0.0.22 in pyproject.toml 2025-12-02 02:10:08 +08:00
myhloli
6229dcf4b2 fix: remove unnecessary parameters from Markdown components in gradio_app.py 2025-12-02 01:58:37 +08:00
Xiaomeng Zhao
9f27f77713 Merge pull request #4102 from myhloli/dev
fix: enhance error messages and update descriptions in FastAPI endpoints
2025-12-02 01:52:49 +08:00
myhloli
0393f8b642 fix: enhance error messages and update descriptions in FastAPI endpoints 2025-12-02 01:51:41 +08:00
Xiaomeng Zhao
6c473caa5f Merge pull request #4101 from myhloli/dev
fix: update documentation for mineru-api and improve concurrency settings
2025-12-02 01:30:54 +08:00
myhloli
e36ef652ee fix: update documentation for mineru-api and improve concurrency settings 2025-12-02 01:29:10 +08:00
Xiaomeng Zhao
5ab9cf8f2b Merge pull request #4100 from myhloli/dev
fix: update Ascend.md with notes on bf16 precision limitations for 310p accelerator
2025-12-01 20:19:05 +08:00
myhloli
0bf3ed7970 fix: update Ascend.md with notes on bf16 precision limitations for 310p accelerator 2025-12-01 20:17:46 +08:00
Xiaomeng Zhao
4f69e75ffc Merge pull request #4099 from myhloli/dev
fix: enhance documentation for parsing options in FastAPI and client.py
2025-12-01 20:14:52 +08:00
myhloli
fe70c21dfa fix: enhance documentation for parsing options in FastAPI and client.py 2025-12-01 20:13:17 +08:00
Xiaomeng Zhao
dd43f25214 Merge pull request #4098 from myhloli/dev
Dev
2025-12-01 20:03:14 +08:00
myhloli
0e1e27a7a8 fix: update Ascend.md with additional usage instructions for Atlas 300I Duo and vllm image 2025-12-01 20:00:55 +08:00
myhloli
bcb30fe79c fix: simplify VRAM size retrieval and improve error handling in memory management 2025-12-01 18:31:07 +08:00
myhloli
f7c8ab2121 fix: update tag version for Atlas 300I Duo in Ascend.md 2025-12-01 17:19:05 +08:00
Xiaomeng Zhao
cce0c96265 Merge pull request #4096 from myhloli/dev
fix: update environment variable handling for FastAPI documentation and concurrency control
2025-12-01 15:20:45 +08:00
myhloli
9380ec0f27 fix: update environment variable for max concurrent requests in FastAPI 2025-12-01 15:19:25 +08:00
myhloli
0c8de2e626 fix: update environment variable handling for FastAPI documentation and concurrency control 2025-12-01 15:12:55 +08:00
Xiaomeng Zhao
9baa830d2f Merge pull request #4046 from Flynn-Zh/dev
feat: Increase API concurrency control to avoid service downtime
2025-12-01 10:42:53 +08:00
Xiaomeng Zhao
90efe06a0f Update FastAPI documentation endpoint comments 2025-12-01 10:37:46 +08:00
Xiaomeng Zhao
7c4ff05591 Update environment variable handling for FastAPI docs 2025-12-01 10:36:13 +08:00
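The commits around PR #4046 describe capping concurrent API requests via an environment variable to avoid service downtime. A minimal sketch of that pattern follows; the variable name `MINERU_API_MAX_CONCURRENCY`, the default of 4, and the helper names are assumptions for illustration, not the project's actual settings.

```python
import asyncio
import os

# Assumed env var name and default; the real service reads its own setting.
MAX_CONCURRENT = int(os.environ.get("MINERU_API_MAX_CONCURRENCY", "4"))

async def run_capped(jobs):
    """Run coroutine factories with at most MAX_CONCURRENT in flight.

    Excess callers queue on the semaphore instead of overloading the
    service, which is the failure mode the commits aim to avoid."""
    slots = asyncio.Semaphore(MAX_CONCURRENT)

    async def guarded(job):
        async with slots:
            return await job()

    return await asyncio.gather(*(guarded(j) for j in jobs))

async def demo():
    # Track how many fake requests run simultaneously.
    active = peak = 0

    async def fake_parse(i):
        nonlocal active, peak
        active += 1
        peak = max(peak, active)
        await asyncio.sleep(0.01)  # stand-in for real parsing work
        active -= 1
        return i

    results = await run_capped([(lambda i=i: fake_parse(i)) for i in range(10)])
    return results, peak
```

In a FastAPI app the same semaphore would typically live in middleware or a dependency wrapping each request handler.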
Xiaomeng Zhao
4cb4f40f17 Merge pull request #4070 from zyileven/add-interface-for-more-data
Add an interface for obtaining more Mineru processing data
2025-12-01 10:29:09 +08:00
zyileven
ab2c67d477 Fix MinIO API calls and improve error handling 2025-11-28 15:02:26 +08:00
Xiaomeng Zhao
e8531cec03 Merge pull request #4083 from myhloli/dev
fix: improve list formatting for clarity in METAX.md
2025-11-27 19:04:58 +08:00
myhloli
3108c265f5 fix: improve list formatting for clarity in METAX.md 2025-11-27 19:03:49 +08:00
Xiaomeng Zhao
40a0f44690 Merge pull request #4082 from myhloli/dev
fix: improve list formatting for clarity in METAX.md
2025-11-27 18:57:18 +08:00
myhloli
fad6449d34 fix: improve list formatting for clarity in METAX.md 2025-11-27 18:56:16 +08:00
Xiaomeng Zhao
cbcff5b584 Merge pull request #4081 from myhloli/dev
fix: update link formatting for clarity in METAX.md
2025-11-27 18:13:11 +08:00
myhloli
ee5b4d258e fix: update link formatting for clarity in METAX.md 2025-11-27 18:10:55 +08:00
Xiaomeng Zhao
61e56eadd5 Merge pull request #4078 from myhloli/dev
fix: enhance formatting and clarity of device support information in …
2025-11-27 14:47:25 +08:00
myhloli
449c942727 fix: enhance formatting and clarity of device support information in Ascend.md 2025-11-27 14:46:16 +08:00
Xiaomeng Zhao
323a2a592f Merge pull request #4077 from myhloli/dev
fix: improve formatting and clarity of device support information in Ascend.md
2025-11-27 14:41:10 +08:00
myhloli
640d92d464 fix: improve formatting and clarity of device support information in Ascend.md 2025-11-27 14:39:43 +08:00
Xiaomeng Zhao
b8fa84eaf8 Merge pull request #4076 from myhloli/dev
fix: add device support information and Dockerfile tag instructions in Ascend.md
2025-11-27 14:31:42 +08:00
myhloli
23014f2954 fix: update Dockerfile build instructions in Ascend.md for clarity and organization 2025-11-27 14:30:39 +08:00
myhloli
c7a6dd96a6 fix: add device support information and Dockerfile tag instructions in Ascend.md 2025-11-27 14:26:34 +08:00
Flynn-Zh
098ac5d43f fix: fix global variable not init when uvicorn reload 2025-11-26 19:01:36 +08:00
Xiaomeng Zhao
865ca20262 Merge pull request #4073 from myhloli/dev
fix: update base image version in npu.Dockerfile to v0.11.0rc2 and refine pip install command
2025-11-26 18:32:53 +08:00
myhloli
b7b970ff2a fix: update base image version in npu.Dockerfile to v0.11.0rc2 and refine pip install command 2025-11-26 18:17:59 +08:00
zyileven
d800df6ae5 Add an interface for obtaining more Mineru processing data 2025-11-26 14:28:16 +08:00
Xiaomeng Zhao
77c18e958f Merge pull request #4067 from myhloli/dev
fix: remove unnecessary dependencies from ppu.Dockerfile
2025-11-26 14:27:22 +08:00
myhloli
16997bea1b fix: remove unnecessary dependencies from ppu.Dockerfile 2025-11-26 14:25:35 +08:00
Xiaomeng Zhao
1a15dcee32 Merge pull request #4065 from opendatalab/master
master->dev
2025-11-26 12:03:45 +08:00
Xiaomeng Zhao
f78b25e3de Update feedback link for the domestic platform adaptation plan 2025-11-26 12:01:13 +08:00
myhloli
24c973d99e Update version.py with new version 2025-11-26 03:48:32 +00:00
Xiaomeng Zhao
4e5f03bba1 Merge pull request #4063 from opendatalab/release-2.6.5 2025-11-26 11:39:19 +08:00
Xiaomeng Zhao
dfd99baccd Update index.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:38:13 +08:00
Xiaomeng Zhao
c291cc1a59 Update RagFlow.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:33:26 +08:00
Xiaomeng Zhao
6f20fefadf Update utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:27:09 +08:00
Xiaomeng Zhao
700321b23d Merge pull request #4062 from myhloli/dev
Adapted for NPU, PPU, and MACA.
2025-11-26 11:09:54 +08:00
Xiaomeng Zhao
0ba3992173 Update docs/zh/usage/acceleration_cards/METAX.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:09:33 +08:00
Xiaomeng Zhao
096717e4d0 Update mineru/cli/vlm_server.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:07:30 +08:00
Xiaomeng Zhao
ab365420b9 Update mineru/backend/vlm/utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:05:08 +08:00
myhloli
27e1fd63e7 fix: clarify feedback process for issues encountered on domestic platform adaptation in README_zh-CN.md 2025-11-26 10:45:23 +08:00
myhloli
4d47634913 fix: add platform information for testing in Ascend.md, METAX.md, and THead.md 2025-11-26 00:23:22 +08:00
myhloli
7e33501cd0 fix: update engine support indicators in Ascend.md, METAX.md, and THead.md for clarity and consistency 2025-11-26 00:15:33 +08:00
myhloli
a6f4eb3727 fix: update mineru-vl-utils version and adjust transformers constraint in pyproject.toml; enhance support note for vlm-lmdeploy-engine in README_zh-CN.md 2025-11-26 00:08:22 +08:00
myhloli
0d2bebd8b1 fix: add support for vlm-lmdeploy-engine and enhance compatibility with domestic acceleration platforms in README files 2025-11-25 20:33:52 +08:00
myhloli
b7a209a4a7 fix: add usage tips for NPU and MACA acceleration cards in Ascend.md, METAX.md, and THead.md 2025-11-25 19:51:55 +08:00
myhloli
08c9fadbcb fix: update lmdeploy version range in pyproject.toml for compatibility 2025-11-25 19:18:36 +08:00
myhloli
424c37984b fix: add note formatting for VLM model inference support in Ascend.md, METAX.md, and THead.md 2025-11-25 18:41:18 +08:00
myhloli
35d5ba8b8f fix: update maca.Dockerfile to use absolute paths for Python and mineru commands 2025-11-25 18:38:34 +08:00
myhloli
b4f725258d fix: update MACA Dockerfile and METAX.md for improved clarity and support 2025-11-25 17:42:46 +08:00
myhloli
4a081c3214 fix: update stability indicators and descriptions in Ascend.md, METAX.md, and THead.md for clarity 2025-11-25 17:16:48 +08:00
myhloli
0e2e12ca84 fix: add MINERU_LMDEPLOY_DEVICE environment variable for MACA in METAX.md 2025-11-25 16:01:10 +08:00
myhloli
48ed75d935 fix: update torchvision version in maca.Dockerfile for compatibility 2025-11-25 16:00:33 +08:00
myhloli
34c46cb83d fix: disable cuDNN for MACA device in common.py to improve compatibility 2025-11-25 15:43:30 +08:00
myhloli
16f167b351 fix: update transformers version constraint in pyproject.toml for compatibility 2025-11-25 14:50:27 +08:00
myhloli
91df5c8bb7 fix: update index.md and METAX.md to enhance documentation for METAX deployment and usage 2025-11-25 03:34:35 +08:00
myhloli
86b1fca74c fix: update Dockerfiles to improve base image configurations and dependency installations 2025-11-25 02:20:25 +08:00
myhloli
444fd6f027 fix: update ppu.Dockerfile to include additional dependencies for mineru installation 2025-11-25 02:12:11 +08:00
myhloli
9aa46e9c6c fix: update Ascend.md and THead.md to correct the order of vllm and lmdeploy for VLM model inference 2025-11-25 02:04:26 +08:00
myhloli
7beee5be62 fix: update THead.md to reflect correct status indicators for vlm-engine 2025-11-25 01:22:34 +08:00
myhloli
64586d03ea fix: import MinerULogitsProcessor conditionally for vllm-engine and vllm-async-engine backends 2025-11-25 00:15:15 +08:00
Flynn-Zh
5a551fec89 feat: Increase API concurrency control to avoid service downtime 2025-11-24 10:16:46 +08:00
myhloli
9c1c9d0c89 fix: update status indicators in Ascend.md to reflect correct state 2025-11-21 01:54:01 +08:00
myhloli
2111c35b83 feat: add Cambricon support documentation and update index for acceleration cards 2025-11-21 00:52:15 +08:00
myhloli
83ad8e81a9 fix: update status indicators in Ascend.md and THead.md for various components 2025-11-21 00:38:47 +08:00
myhloli
18ee522c77 fix: update THead.md to reference ppu.Dockerfile for building images 2025-11-20 23:36:43 +08:00
myhloli
72fa59bab2 fix: update THead.md to reference ppu.Dockerfile for building images 2025-11-20 23:36:29 +08:00
myhloli
28ebc0e2e8 fix: update THead.md to reference ppu.Dockerfile for building images 2025-11-20 23:21:29 +08:00
myhloli
3bcdd0a10a fix: update Docker build tags in Ascend.md for npu images 2025-11-20 23:17:59 +08:00
myhloli
cd1c5c5e50 fix: update ppu.Dockerfile to include vLLM base image and specific package versions 2025-11-20 23:09:54 +08:00
myhloli
a83d351ccc fix: swap Dockerfile instructions for lmdeploy and vllm in Ascend.md and npu.Dockerfile 2025-11-20 18:46:54 +08:00
myhloli
7196f71153 fix: correct package name for qwen-vl-utils in pyproject.toml 2025-11-20 17:24:20 +08:00
myhloli
1c530f64f5 fix: enhance CUDA and NPU availability checks in utils.py 2025-11-20 17:10:04 +08:00
myhloli
997a131278 fix: update notes on backend type switching for NPU cards in Ascend.md 2025-11-20 16:17:55 +08:00
myhloli
eeeaca85f8 fix: update notes on backend type switching for NPU cards in Ascend.md 2025-11-20 16:13:57 +08:00
myhloli
c884d7ddb9 fix: correct vllm component names in Ascend.md 2025-11-20 16:02:42 +08:00
myhloli
7bffbe2541 fix: correct vllm component names in Ascend.md 2025-11-20 15:54:26 +08:00
myhloli
ed6fc3e44e fix: correct vllm component names in Ascend.md 2025-11-20 15:49:46 +08:00
myhloli
a01a5d798b fix: add environment variable for local model source in Docker usage instructions 2025-11-20 15:36:43 +08:00
myhloli
38f5995ae4 fix: clarify Dockerfile usage instructions for lmdeploy and vllm in Ascend.md 2025-11-19 20:17:37 +08:00
myhloli
e7c80da602 fix: update Python version support details for Windows and clarify dependency limitations 2025-11-19 20:08:50 +08:00
myhloli
33696974fe fix: update qwen_vl_utils version constraint and specify platform dependencies for mineru 2025-11-19 19:55:02 +08:00
myhloli
376d1e38d5 fix: update quick_usage.md to clarify support for vllm and lmdeploy acceleration 2025-11-19 19:43:11 +08:00
myhloli
c5385af754 fix: update advanced_cli_parameters.md to clarify parameter passing for vllm and lmdeploy 2025-11-19 19:35:54 +08:00
myhloli
422ee671d8 fix: update installation tips in extension_modules.md to clarify package terminology 2025-11-19 19:32:08 +08:00
myhloli
76b1a559f8 fix: add MINERU_LMDEPLOY_DEVICE environment variable and update Ascend.md with usage scenarios 2025-11-19 19:18:30 +08:00
myhloli
afc6dcd7b0 fix: update mineru-vl-utils version and add qwen_vl_utils dependency in pyproject.toml 2025-11-19 14:41:23 +08:00
myhloli
cf1fbd2923 fix: enhance device and backend configuration handling in lmdeploy and vlm modules 2025-11-19 14:41:01 +08:00
myhloli
a0f27bd80b fix: remove unnecessary port mappings in Docker run command for Ascend.md 2025-11-18 21:20:52 +08:00
myhloli
46f8c6d082 fix: update Ascend.md for clarity in Dockerfile editing instructions 2025-11-18 21:10:42 +08:00
myhloli
5f9fdd9b62 fix: update npu.Dockerfile to set TORCH_DEVICE_BACKEND_AUTOLOAD=0 for model download 2025-11-18 21:05:41 +08:00
myhloli
9ed6636ad2 fix: update Ascend.md to use --network=host in Docker build commands for improved network configuration 2025-11-18 20:52:07 +08:00
myhloli
f8af29e3a1 fix: simplify Docker build commands in Ascend.md for clarity 2025-11-18 20:32:50 +08:00
myhloli
669b6cd629 fix: update Ascend.md and npu.Dockerfile for improved clarity on Docker image tags and usage instructions 2025-11-18 20:21:26 +08:00
myhloli
281c965213 fix: update Ascend.md and cli_tools.md for improved clarity on environment setup and backend options 2025-11-18 20:06:58 +08:00
myhloli
80445f24bf fix: remove commented-out official vllm image lines in Dockerfile for cleaner configuration 2025-11-18 19:05:58 +08:00
myhloli
10af19f419 fix: update docker_deployment.md and extension_modules.md for clarity on GPU architecture requirements and service naming 2025-11-18 16:36:59 +08:00
myhloli
a149a8da50 fix: enhance comments in compose.yaml for clearer engine selection and GPU configuration guidance 2025-11-18 15:58:25 +08:00
myhloli
843ab52da0 fix: rename vllm-server to openai-server in compose.yaml for clarity and update command parameters 2025-11-18 15:51:14 +08:00
myhloli
506179f0c8 feat: add openai-server command for flexible inference engine selection in vlm_server 2025-11-18 15:28:19 +08:00
myhloli
43881d5f66 fix: update index.md and README files for improved clarity on lmdeploy-engine support 2025-11-17 11:24:03 +08:00
myhloli
ad9521528e fix: update base image descriptions in Dockerfiles for clarity on CPU architecture 2025-11-14 10:47:16 +08:00
myhloli
d67be0c7de fix: add lmdeploy-engine parameters to compose.yaml for improved multi-GPU support 2025-11-14 10:34:29 +08:00
myhloli
056f8af0ae fix: add libglib2.0-0 dependency in npu.Dockerfile for improved package support 2025-11-14 01:33:06 +08:00
Xiaomeng Zhao
4f8d897342 Merge pull request #3995 from myhloli/dev
fix: enhance http-client backend parameters in vlm_analyze.py for improved configuration options
2025-11-13 17:12:32 +08:00
myhloli
0a4c9e307f fix: enhance http-client backend parameters in vlm_analyze.py for improved configuration options 2025-11-13 17:11:07 +08:00
Xiaomeng Zhao
79f2d03d32 Merge pull request #3990 from myhloli/dev
Dev
2025-11-13 14:59:07 +08:00
myhloli
d2c93b770f fix: refactor backend handling in vlm_analyze.py for improved model loading and error handling 2025-11-13 14:27:53 +08:00
myhloli
bb25385097 fix: update docker_deployment.md to use 'mineru:latest' instead of 'mineru-vllm:latest' 2025-11-13 11:40:38 +08:00
myhloli
60c5f7d890 feat: add mineru-lmdeploy-server service to compose.yaml with configuration 2025-11-13 11:37:45 +08:00
myhloli
3293299f34 fix: update README to clarify Windows LMDeploy backend performance and compatibility 2025-11-13 11:22:12 +08:00
myhloli
6581af72b4 fix: update README to clarify Windows LMDeploy backend performance and compatibility 2025-11-12 19:59:21 +08:00
myhloli
4ba9c73458 feat: add Dockerfiles for camb and maca environments, update ppu base image 2025-11-12 19:48:52 +08:00
myhloli
f7509e7dc9 feat: add Dockerfiles for NPU and PPU environments with necessary dependencies 2025-11-12 19:22:17 +08:00
myhloli
19c2a6612b fix: enhance argument handling for device type and backend in lmdeploy server 2025-11-12 19:05:10 +08:00
myhloli
1b440a8e92 fix: enhance argument handling for device type and backend in lmdeploy server 2025-11-12 17:56:58 +08:00
myhloli
39e7aa52a2 fix: improve device type and backend handling in lmdeploy configuration 2025-11-12 11:32:10 +08:00
myhloli
0c8e004874 fix: remove unused variable in set_lmdeploy_backend function 2025-11-11 19:55:22 +08:00
myhloli
f9f67ddef4 fix: remove unused variable in set_lmdeploy_backend function 2025-11-11 19:55:07 +08:00
Xiaomeng Zhao
2ac829ca32 Merge pull request #3980 from myhloli/dev
Dev
2025-11-11 19:53:26 +08:00
myhloli
6bafca0555 fix: disable tokenizers parallelism in lmdeploy server configuration 2025-11-11 19:40:57 +08:00
myhloli
7516d3ddf4 fix: disable tokenizers parallelism in lmdeploy server configuration 2025-11-11 19:40:15 +08:00
myhloli
a2136c22a5 fix: add backend argument handling and logging for lmdeploy backend configuration 2025-11-11 19:37:54 +08:00
myhloli
3fcca35c73 fix: add Linux environment detection and set lmdeploy backend based on device type 2025-11-11 19:26:46 +08:00
myhloli
6c27bc7f53 fix: update README files to include lmdeploy-engine and adjust accuracy details 2025-11-11 12:00:14 +08:00
Xiaomeng Zhao
30f1db6e6d Merge pull request #3976 from opendatalab/add_lmdeploy_backend
Add lmdeploy backend
2025-11-11 11:46:43 +08:00
Xiaomeng Zhao
e80e53d4de Merge pull request #3975 from myhloli/add_lmdeploy_backend
fix: update device handling and backend configuration in analysis scripts
2025-11-11 11:46:12 +08:00
myhloli
8e5a780fc6 fix: clarify engine descriptions in client.py documentation 2025-11-11 11:45:11 +08:00
myhloli
ad35f0bbc2 fix: clarify engine descriptions in client.py documentation 2025-11-11 11:42:58 +08:00
myhloli
5c743dc169 fix: update device handling and backend configuration in analysis scripts 2025-11-11 11:40:52 +08:00
Xiaomeng Zhao
b26338d0ef Merge pull request #3974 from opendatalab/add_lmdeploy_backend
Add lmdeploy backend
2025-11-11 11:24:50 +08:00
Xiaomeng Zhao
275ae04e56 Merge pull request #3972 from myhloli/add_lmdeploy_backend
feat: add lmdeploy backend support and refactor related components
2025-11-11 11:17:29 +08:00
myhloli
672e252506 fix: set default device type to 'cuda' in lmdeploy server 2025-11-11 11:16:34 +08:00
myhloli
85558061ff feat: add lmdeploy backend support and refactor related components 2025-11-11 10:48:33 +08:00
Xiaomeng Zhao
b4c8a017ea Merge pull request #3964 from opendatalab/dev
Dev
2025-11-10 15:15:40 +08:00
Xiaomeng Zhao
5c8d05e076 Merge pull request #3963 from myhloli/dev
fix: improve PDF page import handling to skip failed pages and log warnings
2025-11-10 14:40:43 +08:00
myhloli
0cfc6c3d4e fix: improve PDF page import handling to skip failed pages and log warnings 2025-11-10 14:39:37 +08:00
Xiaomeng Zhao
cdedc13713 Merge pull request #3950 from myhloli/dev
feat: enhance RagFlow documentation with installation guide and MinerU integration details
2025-11-06 19:50:49 +08:00
myhloli
95172d7d17 feat: enhance RagFlow documentation with installation guide and MinerU integration details 2025-11-06 19:49:13 +08:00
Xiaomeng Zhao
5cc13b919a Merge pull request #3946 from jinminxi104/add_lmdeploy_backend
add lmdeploy-backend
2025-11-06 16:44:27 +08:00
jinminxi104
9ec03c0353 add lmdeploy-backend 2025-11-06 07:38:46 +00:00
myhloli
ef485db9a8 fix: update magika version constraint to allow for newer releases 2025-11-05 17:40:15 +08:00
Xiaomeng Zhao
16ff55b27f Merge pull request #3934 from opendatalab/master
master->dev
2025-11-05 00:35:01 +08:00
myhloli
fa1149cd4a Update version.py with new version 2025-11-04 12:25:58 +00:00
Xiaomeng Zhao
5a937d3059 Merge pull request #3932 from opendatalab/release-2.6.4
Release 2.6.4
2025-11-04 20:23:53 +08:00
Xiaomeng Zhao
f11e609a14 Merge pull request #3933 from myhloli/dev
Dev
2025-11-04 20:22:42 +08:00
Xiaomeng Zhao
e010b0974a Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-04 20:21:37 +08:00
Xiaomeng Zhao
fe1549960d Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-04 20:20:37 +08:00
myhloli
df23e45861 Merge remote-tracking branch 'origin/dev' into dev 2025-11-04 20:18:46 +08:00
myhloli
5ec07ee7ab feat: update environment variable for PDF rendering timeout and enhance documentation 2025-11-04 20:18:14 +08:00
Xiaomeng Zhao
f1ebf5a7f0 Merge pull request #3931 from myhloli/dev
Dev
2025-11-04 20:00:41 +08:00
Xiaomeng Zhao
dae2cc8514 Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-04 19:59:59 +08:00
myhloli
5de8f1a19f feat: add environment variables for PDF rendering timeout and ONNX thread management 2025-11-04 19:47:59 +08:00
myhloli
be2369bdd4 feat: add ONNX configuration for thread management and integrate into table structure 2025-11-04 19:09:33 +08:00
myhloli
51df4d8508 refactor: enhance PDF conversion function parameters and improve thread handling logic 2025-11-04 09:54:45 +08:00
Xiaomeng Zhao
f7225d8e17 Merge pull request #3918 from myhloli/dev
Dev
2025-11-03 22:09:59 +08:00
Xiaomeng Zhao
a9c9501af6 Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 22:09:29 +08:00
Xiaomeng Zhao
74de2725cb Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 22:08:20 +08:00
Xiaomeng Zhao
6250c453d9 Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 22:04:49 +08:00
myhloli
54417a51f8 refactor: reorder import statements for clarity and consistency 2025-11-03 21:27:00 +08:00
myhloli
2f120db20e fix: update JSON URL to point to the master branch for model configuration 2025-11-03 21:24:13 +08:00
myhloli
2079395774 refactor: adjust thread count based on CPU cores and comment out image loading time logging 2025-11-03 21:02:58 +08:00
myhloli
b4c57116c1 refactor: move PDF byte conversion logic to pdf_page_id and simplify image conversion process 2025-11-03 20:57:18 +08:00
myhloli
ace7f76869 refactor: move PDF byte conversion functions to pdf_page_tools and simplify logic 2025-11-03 20:26:34 +08:00
myhloli
5349fd7ccd refactor: enhance PDF image loading by removing multiprocessing for Windows environment and improving logging 2025-11-03 19:41:22 +08:00
myhloli
5999f6664f refactor: simplify PDF byte preparation by removing multiprocessing and enhancing direct conversion 2025-11-03 19:31:39 +08:00
myhloli
245ae28c27 refactor: optimize page range calculation and enhance logging for image conversion process 2025-11-03 19:11:05 +08:00
myhloli
4afa045545 refactor: update import statement to use check_sys_env and adjust logging level for image loading 2025-11-03 19:10:25 +08:00
myhloli
c32ff88400 refactor: rename check_mac_env to check_sys_env and add Windows environment detection 2025-11-03 19:07:19 +08:00
myhloli
4214634de8 feat: add timing logs for PDF byte preparation to improve performance monitoring 2025-11-03 18:48:31 +08:00
myhloli
bffc6aff53 fix: streamline PDF conversion process by restructuring try-except block and ensuring proper resource management 2025-11-03 15:44:33 +08:00
myhloli
05e114f8b9 feat: implement multiprocessing for PDF conversion to enhance performance 2025-11-03 15:32:40 +08:00
myhloli
66d5f3dfd2 feat: refactor PDF image conversion to use get_end_page_id utility function and add multi-threading support 2025-11-03 15:08:31 +08:00
myhloli
305e3a61e8 fix: disable tokenizers parallelism to prevent potential issues 2025-11-01 02:00:04 +08:00
myhloli
b614bef035 feat: add multiprocessing support for PDF to image conversion with timeout handling 2025-10-31 17:50:59 +08:00
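Commit b614bef035 adds multiprocessing with timeout handling for PDF-to-image conversion. A rough sketch of that shape, with a stand-in renderer since the real rasterization call is not shown here; function names and the timeout default are illustrative only.

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def render_page(page_index):
    # Stand-in for the real page-to-image call; real rasterization can
    # hang on pathological input, which motivates the timeout.
    return b"page-%d" % page_index

def render_pages(page_indices, timeout_s=5.0):
    """Render each page in a worker process; a page exceeding the
    timeout is recorded as None (skipped) instead of blocking the run."""
    results = {}
    # fork keeps this sketch import-safe on Linux; on Windows/macOS a
    # spawn context plus an `if __name__ == "__main__"` guard is needed.
    ctx = mp.get_context("fork")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as pool:
        futures = {i: pool.submit(render_page, i) for i in page_indices}
        for i, fut in futures.items():
            try:
                results[i] = fut.result(timeout=timeout_s)
            except FutureTimeout:
                # This only stops waiting; a truly stuck worker would
                # need pool shutdown/termination in real code.
                results[i] = None
    return results
```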
myhloli
cce16daf1f fix: update JSON URL to point to the dev branch in configure_model function 2025-10-31 15:37:08 +08:00
Xiaomeng Zhao
94eb35ffda Merge pull request #3905 from opendatalab/master
master->dev
2025-10-31 15:14:09 +08:00
myhloli
1ebc1ae841 Update version.py with new version 2025-10-31 07:09:16 +00:00
Xiaomeng Zhao
e90a17a3d2 Merge pull request #3902 from myhloli/dev
Dev
2025-10-31 14:59:32 +08:00
myhloli
61747bafdd fix: center-align column header for vlm accuracy in index.md table 2025-10-31 14:57:17 +08:00
Xiaomeng Zhao
374ace0a34 Merge pull request #3900 from opendatalab/release-2.6.3
Release 2.6.3
2025-10-31 14:50:54 +08:00
Xiaomeng Zhao
2c355d2d68 Merge pull request #3899 from myhloli/dev
fix: correct formatting of footnotes in README and README_zh-CN for c…
2025-10-31 14:50:31 +08:00
myhloli
512554196b fix: correct formatting of footnotes in README and README_zh-CN for clarity 2025-10-31 14:49:23 +08:00
Xiaomeng Zhao
a33715c015 Merge pull request #3887 from opendatalab/release-2.6.3
Release 2.6.3
2025-10-31 14:44:34 +08:00
Xiaomeng Zhao
3bc44c8526 Merge pull request #3898 from opendatalab/dev
Dev
2025-10-31 14:44:11 +08:00
Xiaomeng Zhao
4ccd0528f4 Merge pull request #3897 from myhloli/dev
Dev
2025-10-31 14:43:37 +08:00
myhloli
64d6a38bf5 fix: update help text formatting for PDF parsing options and bump config version check 2025-10-31 14:42:04 +08:00
Xiaomeng Zhao
9ede336a0c Merge pull request #3895 from opendatalab/dev
Dev
2025-10-31 14:19:27 +08:00
myhloli
1c0d4b8bc6 Merge remote-tracking branch 'origin/dev' into dev 2025-10-31 12:14:26 +08:00
myhloli
0b53696181 fix: update config version check to 1.3.0 in models_download.py 2025-10-31 12:13:52 +08:00
Xiaomeng Zhao
d06b105102 Merge pull request #3891 from myhloli/dev
Dev
2025-10-31 12:03:12 +08:00
myhloli
b70f49522e fix: prevent processing of empty content lists in pipeline middle JSON handling 2025-10-31 12:02:28 +08:00
myhloli
23d75bac09 refactor: simplify content list handling by consolidating layout and discarded paragraphs 2025-10-31 11:47:08 +08:00
myhloli
14ca71eed0 docs: enhance quick usage documentation with configuration examples and improve mac environment check 2025-10-31 11:42:37 +08:00
Xiaomeng Zhao
d519095436 Merge pull request #3888 from myhloli/dev
docs: update OCR language support to reflect recognition of 109 languages
2025-10-31 11:18:50 +08:00
myhloli
2238c49352 docs: update OCR language support to reflect recognition of 109 languages 2025-10-31 11:17:42 +08:00
Xiaomeng Zhao
ef71228e1a Merge pull request #3886 from myhloli/dev
Dev
2025-10-31 11:13:59 +08:00
myhloli
8bf407a5e5 docs: update quick_usage.md to format parameter name for clarity 2025-10-31 11:13:36 +08:00
myhloli
79fe3757b1 docs: add changelog entries for 2.6.3 release, highlighting new vlm-mlx-engine support and bug fixes 2025-10-31 11:11:06 +08:00
myhloli
c9dc5df28d docs: update system requirements and OCR language support in documentation 2025-10-31 09:42:35 +08:00
myhloli
57b2c819f9 docs: add release notes for version 2.6.3 and highlight new vlm-mlx-engine support 2025-10-30 21:12:55 +08:00
myhloli
04860456e8 Merge remote-tracking branch 'origin/dev' into dev 2025-10-30 20:32:02 +08:00
myhloli
14c334d2b0 feat: add macOS version check for mlx-engine backend support 2025-10-30 20:31:50 +08:00
myhloli
d57796a667 fix: update mineru-vl-utils version constraint to 0.1.15 2025-10-30 18:31:57 +08:00
myhloli
551802aebb docs: format version constraints in bug_report.yml for improved readability 2025-10-30 18:31:40 +08:00
myhloli
59b5ffaf95 docs: update default model in llm-aided-config and clarify enable_thinking parameter usage in quick_usage.md 2025-10-30 17:54:21 +08:00
myhloli
d975836b25 refactor: streamline resolution grouping and padding logic in batch_analyze.py 2025-10-30 17:08:09 +08:00
Xiaomeng Zhao
5351c76c5d Merge branch 'opendatalab:dev' into dev 2025-10-30 16:46:41 +08:00
Xiaomeng Zhao
324dd75a52 Merge pull request #3880 from baymax2099/2.6.2fix
Fix rounding error for height and width normalization
2025-10-30 16:44:34 +08:00
Xiaomeng Zhao
bb830c6cbf Merge pull request #3870 from aopstudio/add-quote
Quote pip install arguments in extension module docs
2025-10-30 16:42:41 +08:00
max
1fd357dd97 fix: when h is exactly a multiple of RESOLUTION_GROUP_STRIDE, it was incorrectly rounded up to the next multiple 2025-10-30 16:20:27 +08:00
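PR #3880 fixes a classic ceil-to-multiple bug: heights already on a stride boundary were bumped to the next multiple. A sketch of the fix; the stride value and the exact buggy form are illustrative, only the symptom is taken from the commit.

```python
RESOLUTION_GROUP_STRIDE = 32  # illustrative value, not MinerU's constant

def round_up_to_stride(h: int) -> int:
    # Correct ceil-to-multiple: exact multiples stay put.
    return ((h + RESOLUTION_GROUP_STRIDE - 1)
            // RESOLUTION_GROUP_STRIDE) * RESOLUTION_GROUP_STRIDE

def buggy_round_up(h: int) -> int:
    # Hypothetical pre-fix form: unconditionally adds one stride,
    # so h == 64 with stride 32 wrongly becomes 96.
    return (h // RESOLUTION_GROUP_STRIDE + 1) * RESOLUTION_GROUP_STRIDE
```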
myhloli
51726f7ac4 fix: correct root directory path in pytorch_paddle.py 2025-10-30 15:55:31 +08:00
myhloli
d306abf8d7 docs: enhance table structure and content for backend features and system requirements in index.md 2025-10-30 01:19:49 +08:00
myhloli
a2aae1fa48 docs: enhance table structure and content for backend features and system requirements in index.md 2025-10-30 01:01:39 +08:00
myhloli
05ce84c5e8 docs: simplify phrasing for OpenAI compatibility in README and README_zh-CN 2025-10-30 00:57:41 +08:00
Xiaomeng Zhao
b2a2cac32e Merge pull request #3873 from myhloli/dev
Dev
2025-10-30 00:55:30 +08:00
myhloli
2dbb265cf9 docs: correct phrasing in README_zh-CN for OpenAI compatibility and CPU inference support 2025-10-30 00:54:01 +08:00
myhloli
737207582a docs: correct phrasing in README_zh-CN for OpenAI compatibility and CPU inference support 2025-10-30 00:48:46 +08:00
myhloli
d654238115 docs: update backend features and CPU inference support sections in README and README_zh-CN 2025-10-30 00:43:03 +08:00
myhloli
279e84bf58 fix: improve device compatibility check for bf16 support in model initialization 2025-10-30 00:33:24 +08:00
myhloli
9dfbdb8aec docs: enhance README and README_zh-CN with improved backend feature table and community feedback section 2025-10-29 22:18:35 +08:00
myhloli
931aebc5d5 docs: enhance README and README_zh-CN with improved backend feature table and community feedback section 2025-10-29 22:14:02 +08:00
myhloli
3896079940 docs: update README_zh-CN.md with improved backend feature table and clarifications 2025-10-29 21:56:05 +08:00
myhloli
a69e39860a feat: update README_zh-CN.md with enhanced backend feature table and requirements 2025-10-29 18:53:54 +08:00
aopstudio
5cd31f97b6 Quote pip install arguments in extension module docs
Updated the pip install commands in both English and Chinese quick start guides to quote the mineru extras arguments, ensuring correct parsing by the shell.
2025-10-29 16:17:38 +08:00
myhloli
08ee48c1d7 remove svg logos 2025-10-29 11:10:40 +08:00
myhloli
05cf5a491e fix: update config version check to 1.3.1 in models_download.py 2025-10-29 10:49:58 +08:00
myhloli
8a8fc59d20 feat: add new SVG logos for mineru and modelscope 2025-10-29 10:48:15 +08:00
Xiaomeng Zhao
7f96fa94b7 Merge pull request #3860 from myhloli/dev
feat: enhance API call parameters with conditional extra_body for thinking mode
2025-10-28 21:40:28 +08:00
Xiaomeng Zhao
ad29a6a02a Update mineru/utils/llm_aided.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 21:38:49 +08:00
Xiaomeng Zhao
54ac866554 Update mineru/utils/llm_aided.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 21:38:40 +08:00
myhloli
2080677d83 chore: bump config_version to 1.3.1 in mineru.template.json 2025-10-28 21:37:14 +08:00
myhloli
11a1f04b0f feat: enhance API call parameters with conditional extra_body for thinking mode 2025-10-28 21:31:43 +08:00
myhloli
8a7b216d67 Merge remote-tracking branch 'origin/dev' into dev 2025-10-28 17:24:07 +08:00
myhloli
e5dba06035 fix: improve help text for device mode option in client.py 2025-10-28 17:23:57 +08:00
Xiaomeng Zhao
beeef7068f Merge pull request #3841 from xvlincaigou/master
Update docs: device selection is now supported for the vlm-transformers backend
2025-10-28 17:21:23 +08:00
Xiaomeng Zhao
1f5db12adb Merge pull request #3855 from myhloli/dev
feat: add Mac environment checks and support for Apple Silicon in backend selection
2025-10-28 17:08:36 +08:00
Xiaomeng Zhao
e5c8508ad7 Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:08:28 +08:00
Xiaomeng Zhao
633afeb9e2 Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:08:14 +08:00
Xiaomeng Zhao
797011879a Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:08:02 +08:00
Xiaomeng Zhao
7365f8137c Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:07:51 +08:00
myhloli
2f1369a877 feat: add Mac environment checks and support for Apple Silicon in backend selection 2025-10-28 17:03:56 +08:00
Xiaomeng Zhao
e803facba6 Merge pull request #3854 from myhloli/dev
refactor: update import paths for PytorchPaddleOCR and rename file
2025-10-28 17:03:41 +08:00
myhloli
dc7b341e02 refactor: update import paths for PytorchPaddleOCR and rename file 2025-10-28 15:57:36 +08:00
Xiaomeng Zhao
73c52b95f5 Merge pull request #3851 from myhloli/dev
fix: enhance handling of discarded blocks in content generation
2025-10-28 10:18:32 +08:00
myhloli
1037fd56bc fix: enhance handling of discarded blocks in content generation 2025-10-27 20:47:52 +08:00
xvlincaigou
25525ad899 Merge branch 'master' of github.com:xvlincaigou/MinerU 2025-10-25 22:19:45 +08:00
xvlincaigou
55a0cb95b7 [fix]docs about when param: device take effect 2025-10-25 22:10:12 +08:00
Xiaomeng Zhao
00d438d5fb Merge pull request #3837 from opendatalab/master
master->dev
2025-10-24 19:00:18 +08:00
myhloli
eb02745e06 Update version.py with new version 2025-10-24 10:45:27 +00:00
Xiaomeng Zhao
fe4985f6f0 Merge pull request #3836 from opendatalab/release-2.6.2
Release 2.6.2
2025-10-24 18:43:33 +08:00
Xiaomeng Zhao
8825235088 Merge pull request #3835 from myhloli/dev
chore: update changelog for 2.6.2 release with OCR model optimizations and backend improvements
2025-10-24 18:35:17 +08:00
myhloli
44a60785c6 chore: update changelog for 2.6.2 release with OCR model optimizations and backend improvements 2025-10-24 18:33:15 +08:00
Xiaomeng Zhao
473e235397 Merge pull request #3834 from myhloli/dev
refactor: remove deprecated model configurations from arch_config.yaml and models_config.yml
2025-10-24 18:29:59 +08:00
myhloli
16814e1e1d refactor: remove deprecated model configurations from arch_config.yaml and models_config.yml 2025-10-24 18:11:50 +08:00
myhloli
3546766e72 fix: update CTCLabelDecode output channels and clean up Latin dictionary 2025-10-24 18:04:28 +08:00
Xiaomeng Zhao
b57d9caef3 Merge pull request #3833 from opendatalab/master
master->dev
2025-10-24 17:39:27 +08:00
myhloli
0603edc202 Update version.py with new version 2025-10-24 09:28:52 +00:00
Xiaomeng Zhao
2a0cb7963a Merge pull request #3829 from opendatalab/release-2.6.1
Release 2.6.1
2025-10-24 17:27:18 +08:00
Xiaomeng Zhao
a56bd6c334 Merge pull request #3831 from opendatalab/dev
Dev
2025-10-24 17:25:03 +08:00
Xiaomeng Zhao
f5400f0c94 Merge pull request #3830 from myhloli/dev
fix: correct spelling of set_default_gpu_memory_utilization and set_default_batch_size functions
2025-10-24 17:24:31 +08:00
myhloli
6a6c650062 fix: correct spelling of set_default_gpu_memory_utilization and set_default_batch_size functions 2025-10-24 17:23:13 +08:00
Xiaomeng Zhao
ae084eb317 Merge pull request #3828 from myhloli/dev
Dev
2025-10-24 17:17:23 +08:00
myhloli
7c77db7135 fix: import enable_custom_logits_processors in server.py 2025-10-24 17:16:07 +08:00
myhloli
7b14a87b9d fix: update version number to 2.6.1 in README and README_zh-CN 2025-10-24 17:13:08 +08:00
myhloli
0d0ebfd7bc fix: improve GPU memory utilization handling and ensure OMP_NUM_THREADS is set only if not defined 2025-10-24 17:11:19 +08:00
myhloli
dc438fa620 Update version.py with new version 2025-10-24 08:12:26 +00:00
Xiaomeng Zhao
f5a5644d12 Merge pull request #3825 from opendatalab/dev
Dev
2025-10-24 16:01:37 +08:00
Xiaomeng Zhao
91cc2524d5 Merge pull request #3824 from myhloli/dev
fix: update README and Chinese README to include GitHub link for optimization contributor
2025-10-24 16:00:54 +08:00
myhloli
e504e5e012 fix: update README and Chinese README to include GitHub link for optimization contributor 2025-10-24 15:58:23 +08:00
Xiaomeng Zhao
6b2f414438 Merge pull request #3823 from opendatalab/release-2.6.0
Release 2.6.0
2025-10-24 15:54:23 +08:00
Xiaomeng Zhao
a0da3029fd Update mineru/model/utils/pytorchocr/modeling/backbones/rec_lcnetv3.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-24 15:54:12 +08:00
Xiaomeng Zhao
30fe325428 Update mineru/model/utils/tools/infer/predict_rec.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-24 15:53:55 +08:00
Xiaomeng Zhao
6131013ce9 Merge pull request #3822 from opendatalab/dev
Dev
2025-10-24 15:46:40 +08:00
Xiaomeng Zhao
f1c145054a Merge pull request #3821 from myhloli/dev
Dev
2025-10-24 15:46:09 +08:00
myhloli
078aaaf150 fix: remove unnecessary parameters from kwargs in vlm_analyze.py initialization 2025-10-24 15:39:44 +08:00
myhloli
c3a55fffab fix: add utility functions for GPU memory utilization and batch size configuration 2025-10-24 15:29:23 +08:00
Xiaomeng Zhao
4eddf28c8f Merge pull request #3820 from opendatalab/dev
Dev
2025-10-24 14:59:35 +08:00
Xiaomeng Zhao
dd92c5b723 Merge pull request #3819 from myhloli/dev
update docs
2025-10-24 14:59:03 +08:00
myhloli
b5922086cb fix: add environment variable configurations for Chinese formula parsing and table merging features 2025-10-24 14:53:00 +08:00
myhloli
df12e4fc79 fix: update README and utils for table merge feature and environment variable configuration 2025-10-24 11:37:14 +08:00
myhloli
90ed311198 fix: refactor table merging logic and add cross-page table merge utility 2025-10-24 10:52:05 +08:00
myhloli
c922c63fbc fix: correct formatting in kernel initialization in rec_lcnetv3.py 2025-10-24 10:22:10 +08:00
myhloli
28b278508f fix: add error handling for PDF conversion in common.py 2025-10-24 10:19:50 +08:00
Xiaomeng Zhao
6b54f321b4 Merge pull request #3814 from myhloli/dev
Dev
2025-10-23 18:00:51 +08:00
myhloli
e47ec7cd10 fix: refactor language lists for improved readability and maintainability in gradio_app.py and pytorch_paddle.py 2025-10-23 17:51:26 +08:00
myhloli
701f6018f2 fix: add logging for improved traceability in prediction logic of predict_formula.py 2025-10-23 17:26:16 +08:00
myhloli
5ade203e31 fix: remove commented-out code for autocasting in prediction logic of predict_formula.py 2025-10-23 17:12:00 +08:00
Xiaomeng Zhao
6e83f37754 Merge branch 'opendatalab:dev' into dev 2025-10-23 17:09:20 +08:00
Xiaomeng Zhao
972161a991 Merge pull request #3812 from Sidney233/dev
feat: add PPv5 arabic cyrillic devanagari ta te
2025-10-23 17:08:52 +08:00
Sidney233
700e11d342 feat: add PPv5 arabic cyrillic devanagari ta te 2025-10-23 16:49:01 +08:00
myhloli
fd79885b23 fix: remove commented-out code for autocasting in prediction logic of predict_formula.py 2025-10-23 16:03:34 +08:00
myhloli
a0810b5b6e fix: add debug logging for LaTeX text processing in processors.py 2025-10-23 02:30:47 +08:00
myhloli
39271b45de fix: adjust batch size calculation in prediction logic of predict_formula.py 2025-10-23 02:15:14 +08:00
Xiaomeng Zhao
db68aaf4ac Merge pull request #3806 from myhloli/dev
fix: update Gradio API access instructions in quick_usage.md
2025-10-22 22:51:37 +08:00
myhloli
a6cc8fa90d fix: update Gradio API access instructions in quick_usage.md 2025-10-22 22:50:36 +08:00
Xiaomeng Zhao
47f34f4ce8 Merge pull request #3805 from myhloli/dev
fix: handle empty input in prediction logic of predict_formula.py
2025-10-22 22:21:38 +08:00
myhloli
b7a8347f45 fix: handle empty input in prediction logic of predict_formula.py 2025-10-22 22:20:06 +08:00
Xiaomeng Zhao
c6d241f4f4 Merge pull request #3804 from myhloli/dev
fix: update model paths in models_download.py to include pp_formulanet_plus_m
2025-10-22 20:47:26 +08:00
myhloli
06b2fda1c1 fix: update model paths in models_download.py to include pp_formulanet_plus_m 2025-10-22 20:46:15 +08:00
Xiaomeng Zhao
5c1ca9271e Merge pull request #3803 from myhloli/dev
Dev
2025-10-22 20:33:42 +08:00
Xiaomeng Zhao
e7485c5d79 Update mineru/model/mfr/pp_formulanet_plus_m/predict_formula.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-22 20:32:36 +08:00
Xiaomeng Zhao
80436a89f9 Update mineru/model/utils/pytorchocr/modeling/heads/rec_ppformulanet_head.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-22 20:32:06 +08:00
Xiaomeng Zhao
b36793cef0 Update mineru/model/mfr/utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-22 20:31:50 +08:00
myhloli
43b51e78fc fix: add environment variable handling for table merging in JSON processing 2025-10-22 20:19:59 +08:00
myhloli
9688f73046 fix: update package path for PaddleOCR utilities in pyproject.toml 2025-10-22 20:08:52 +08:00
myhloli
c02edd9cba fix: correct docstring for remove_up_commands function in utils.py 2025-10-22 20:07:11 +08:00
myhloli
b4d08e994c feat: implement LaTeX formatting utilities and refactor processing logic 2025-10-22 20:02:59 +08:00
myhloli
a220b8a208 refactor: enhance title hierarchy logic and update model configuration 2025-10-22 15:57:07 +08:00
myhloli
ab480a7a86 fix: update progress bar description in formula prediction 2025-10-22 15:51:56 +08:00
myhloli
f57a6d8d9e refactor: remove commented-out device assignment in predict_formula.py 2025-10-21 18:45:21 +08:00
myhloli
915ba87f7d feat: adjust batch size calculation and enhance device management in model heads 2025-10-21 18:21:25 +08:00
myhloli
42a95e8e20 refactor: improve variable naming and streamline input processing in predict_formula.py 2025-10-21 14:57:57 +08:00
Xiaomeng Zhao
a513357607 Merge pull request #3779 from myhloli/dev
mfr add paddle
2025-10-20 19:14:46 +08:00
Xiaomeng Zhao
c8ccf4cf20 Update mineru/model/utils/pytorchocr/modeling/heads/rec_ppformulanet_head.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-20 19:14:16 +08:00
Xiaomeng Zhao
33d43a5afc Update mineru/model/utils/pytorchocr/modeling/heads/rec_ppformulanet_head.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-20 19:14:05 +08:00
Xiaomeng Zhao
3b057c7996 Merge pull request #19 from myhloli/mfr-add-paddle
Mfr add paddle
2025-10-20 18:59:48 +08:00
myhloli
34547262a2 refactor: remove unused Formula constant from model_list.py 2025-10-20 18:57:35 +08:00
myhloli
cd0ed982c0 fix: revert MFR_MODEL to unimernet_small in model initialization 2025-10-20 18:55:30 +08:00
myhloli
52dcbcbfa5 Bump mineru-vl-utils version to 0.1.14 2025-10-20 15:03:39 +08:00
myhloli
0758de6d24 Update vllm version and increase default GPU memory utilization 2025-10-20 11:45:58 +08:00
Xiaomeng Zhao
ae7892a6f9 Merge pull request #3770 from myhloli/dev
Update acceleration card links to include discussion and pull request references
2025-10-17 19:01:33 +08:00
myhloli
73567ccedc Update acceleration card links to include discussion and pull request references 2025-10-17 19:00:15 +08:00
Xiaomeng Zhao
bb552282f3 Merge pull request #3769 from myhloli/dev
Add support for domestic acceleration cards in documentation
2025-10-17 18:54:34 +08:00
myhloli
14c38101f7 Add support for domestic acceleration cards in documentation 2025-10-17 18:53:31 +08:00
Xiaomeng Zhao
cb3a30e9ad Merge pull request #3768 from myhloli/dev
Add support for domestic acceleration cards in documentation
2025-10-17 18:41:31 +08:00
myhloli
f4db41d0cb Add support for domestic acceleration cards in documentation 2025-10-17 18:40:40 +08:00
Xiaomeng Zhao
dad59f7d52 Merge pull request #3760 from magicyuan876/master
feat(tianshu): v2.0 architecture upgrade - worker active-pull mode
2025-10-17 18:31:38 +08:00
myhloli
499e877165 refactor: rename files and update import paths for consistency 2025-10-17 18:09:19 +08:00
myhloli
2d249666ba feat: integrate PP-FormulaNet_plus-M architecture and update model initialization 2025-10-17 17:00:22 +08:00
Magic_yuan
cedc62a728 Refine the markitdown dependency 2025-10-17 16:17:03 +08:00
Xiaomeng Zhao
1e40bac24f Merge pull request #3761 from Sidney233/dev
feat: add PPFormula
2025-10-17 14:40:10 +08:00
Sidney233
23701d0db4 feat: add PPFormula 2025-10-17 14:02:26 +08:00
Magic_yuan
e7d8bf097a Address code review suggestions 2025-10-17 13:04:49 +08:00
Magic_yuan
08a89aeca1 feat(tianshu): v2.0 architecture upgrade - worker active-pull mode
Key improvements:
- Workers now actively pull tasks, cutting response latency 10-20x (5-10s → 0.5s)
- Hardened database concurrency safety, using atomic operations to prevent duplicate task pickup
- Scheduler demoted to an optional monitoring component, disabled by default
- Fixed multi-GPU memory contention by fully isolating worker processes

New features:
- API automatically returns parsed content
- Automatic cleanup of result files (configurable)
- Support for uploading images to MinIO
2025-10-17 11:46:42 +08:00
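The "atomic operations to prevent duplicate task pickup" mentioned in this commit can be sketched with a guarded SQLite UPDATE; this is a minimal illustration only — the table schema and column names here are hypothetical, not taken from the repo's task_db.py:

```python
import sqlite3

def claim_next_task(conn, worker_id):
    # Pick the highest-priority pending task, then flip it pending -> processing
    # in one UPDATE. The "AND status = 'pending'" guard makes the claim atomic:
    # if another worker won the race, rowcount is 0 and we try the next candidate.
    while True:
        row = conn.execute(
            "SELECT id FROM tasks WHERE status = 'pending' "
            "ORDER BY priority DESC, id LIMIT 1"
        ).fetchone()
        if row is None:
            return None  # queue drained
        cur = conn.execute(
            "UPDATE tasks SET status = 'processing', worker_id = ? "
            "WHERE id = ? AND status = 'pending'",
            (worker_id, row[0]),
        )
        conn.commit()
        if cur.rowcount == 1:
            return row[0]  # claim succeeded

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT, "
    "priority INTEGER, worker_id TEXT)"
)
conn.executemany(
    "INSERT INTO tasks (status, priority) VALUES (?, ?)",
    [("pending", 1), ("pending", 5)],
)
print(claim_next_task(conn, "w0"))  # claims the priority-5 task (id 2) first
```

Because the guard re-checks `status` inside the UPDATE, two workers polling the same queue can never both claim the same row, which is the duplicate-pickup bug the commit describes.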
Xiaomeng Zhao
1b724f3336 Merge pull request #3756 from myhloli/dev
Set OMP_NUM_THREADS environment variable to 1 for vllm backend initialization
2025-10-16 19:06:45 +08:00
myhloli
ea4271ab37 Set OMP_NUM_THREADS environment variable to 1 for vllm backend initialization 2025-10-16 18:26:06 +08:00
Xiaomeng Zhao
d83b83a5ad Merge pull request #3755 from myhloli/dev
Dev
2025-10-16 17:46:44 +08:00
myhloli
0853b84e87 Update README files to use external image link for MinerU logo 2025-10-16 17:45:42 +08:00
myhloli
36225160a3 Update arXiv badge to reflect MinerU technical report and add badge for MinerU2.5 2025-10-16 17:41:41 +08:00
myhloli
a36118f8ba Add mineru_tianshu project to README files for version 2.0 compatibility 2025-10-16 17:38:57 +08:00
myhloli
a38384e7fb Update mineru-vl-utils dependency version to allow upgrades to 0.1.13 2025-10-16 17:36:45 +08:00
Xiaomeng Zhao
4b7c2bbcc0 Merge pull request #3754 from myhloli/dev
Refactor table merging logic to enhance colspan adjustments and improve caption handling
2025-10-16 17:35:28 +08:00
Xiaomeng Zhao
504fe6ada3 Merge pull request #3742 from magicyuan876/master
feat: MinerU Tianshu project - an out-of-the-box multi-GPU document parsing service
2025-10-16 17:33:54 +08:00
myhloli
39be54023b Refactor table merging logic to enhance colspan adjustments and improve caption handling 2025-10-16 17:31:57 +08:00
Magic_yuan
484ff5a6f9 Fix code review issues 2025-10-16 16:04:42 +08:00
myhloli
59a7a577b3 Add backend name dropdown and update version constraints in bug report template 2025-10-16 14:55:48 +08:00
Xiaomeng Zhao
0e73ef9615 Merge pull request #3750 from myhloli/dev
Update openai dependency version to allow upgrades to version 3
2025-10-16 14:43:57 +08:00
myhloli
d580d6c7f8 Update openai dependency version to allow upgrades to version 3 2025-10-16 14:43:05 +08:00
Xiaomeng Zhao
4c8bb038ce Merge pull request #3748 from myhloli/dev
Enhance table merging logic to adjust colspan attributes based on row structures
2025-10-16 14:24:14 +08:00
myhloli
a89715b9a2 Refactor table merging logic to improve caption handling and prevent merging with non-continuation captions 2025-10-16 14:11:15 +08:00
myhloli
f05ea7c2e6 Simplify model output path handling by removing conditional checks for backend type 2025-10-16 14:09:30 +08:00
Xiaomeng Zhao
b68db3ab90 Merge pull request #3740 from yongtenglei/master
docs: Fix outdated sample data for output reference
2025-10-16 10:43:22 +08:00
yongtenglei
3539cfba36 docs: Fix sample data for output reference 2025-10-16 10:33:13 +08:00
Magic_yuan
3bf50d5267 feat: MinerU Tianshu project - an out-of-the-box multi-GPU document parsing service
Overview:
Tianshu is a document parsing service built on MinerU. It combines a SQLite task queue
with LitServe GPU load balancing, supporting asynchronous processing, task persistence, and intelligent parsing of multiple document formats.

Core features:
- Asynchronous task handling: clients get an immediate response while tasks run in the background
- Smart parser routing: PDFs/images go through MinerU (GPU-accelerated), Office/text files through MarkItDown
- GPU load balancing: automatic multi-GPU scheduling built on LitServe
- Task persistence: SQLite storage, so tasks survive service restarts
- Priority queue: per-task priority settings
- RESTful API: complete task management interface
- MinIO integration: supports uploading images to object storage

Project layout:
- api_server.py: FastAPI web server exposing the RESTful API
- task_db.py: SQLite task database manager
- litserve_worker.py: LitServe worker pool with GPU load balancing
- task_scheduler.py: asynchronous task scheduler
- start_all.py: unified startup script
- client_example.py: Python client example

Tech stack:
FastAPI, LitServe, SQLite, MinerU, MarkItDown, MinIO, Loguru
2025-10-16 08:41:51 +08:00
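The asynchronous flow this commit describes — submit returns immediately, a background worker fills in the result later — can be mocked in-process with a plain queue; this is a toy sketch of the pattern, not code from api_server.py, and the field names are invented:

```python
import queue
import threading
import uuid

results = {}            # task_id -> status/result record
tasks = queue.Queue()   # stands in for the SQLite-backed task queue

def submit(doc):
    # Immediate response: enqueue the document and hand back a task id
    # without waiting for parsing to finish.
    task_id = str(uuid.uuid4())
    results[task_id] = {"status": "pending"}
    tasks.put((task_id, doc))
    return task_id

def worker():
    # Background worker: drain the queue and record results, the way the
    # LitServe worker pool would after GPU parsing.
    while True:
        task_id, doc = tasks.get()
        results[task_id] = {"status": "done", "markdown": f"# parsed {doc}"}
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
tid = submit("report.pdf")   # returns at once
tasks.join()                 # the real client would poll a status endpoint instead
print(results[tid]["status"])  # -> done
```

In the actual service the queue and `results` table live in SQLite so they survive restarts, and the client polls over HTTP rather than calling `tasks.join()`.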
myhloli
2108019698 Enhance table merging logic to adjust colspan attributes based on row structures 2025-10-15 19:05:28 +08:00
Xiaomeng Zhao
17a9921ba9 Merge pull request #3737 from myhloli/dev
Refactor block processing to handle non-contiguous indices in captions and footnotes
2025-10-15 17:06:22 +08:00
myhloli
3baee1d077 Refactor block processing to handle non-contiguous indices in captions and footnotes 2025-10-15 17:04:29 +08:00
myhloli
e1ee728e31 Sort blocks by index and clean up unprocessed blocks handling 2025-10-15 16:06:03 +08:00
Xiaomeng Zhao
1b45e6e1bc Merge pull request #3723 from myhloli/dev
Rename plugin documentation files for consistency and update index links
2025-10-14 19:00:38 +08:00
myhloli
966aadd1d3 Rename plugin documentation files for consistency and update index links 2025-10-14 18:58:24 +08:00
Xiaomeng Zhao
ecb8e3f0ac Merge pull request #3722 from myhloli/dev
Add documentation for Cherry Studio, Sider, Dify, n8n, Coze, FastGPT, ModelWhale, DingTalk, DataFlow, BISHENG, and RagFlow plugins
2025-10-14 18:55:19 +08:00
myhloli
1bef6e3526 Add documentation for Cherry Studio, Sider, Dify, n8n, Coze, FastGPT, ModelWhale, DingTalk, DataFlow, BISHENG, and RagFlow plugins 2025-10-14 18:54:15 +08:00
myhloli
4c4d1d0f95 Update supported version range in bug_report.yml to include 2.2.x and 2.5.x 2025-10-14 16:09:30 +08:00
Xiaomeng Zhao
c36aa54370 Merge pull request #3709 from myhloli/dev
Add max_concurrency parameter to improve backend processing
2025-10-13 15:57:34 +08:00
myhloli
4b480cfcf7 Add max_concurrency parameter to improve backend processing 2025-10-13 15:56:49 +08:00
Xiaomeng Zhao
7e18e1bb76 Merge pull request #3707 from myhloli/dev
Refactor async function and improve output directory handling in prediction
2025-10-13 11:59:33 +08:00
myhloli
44fdeb663f Refactor async function and improve output directory handling in prediction 2025-10-13 11:32:28 +08:00
myhloli
cf59949ba9 add tiff 2025-10-12 11:45:49 +08:00
Xiaomeng Zhao
c8c2f28afc Merge pull request #3701 from opendatalab/ocr_enhance
Ocr enhance
2025-10-11 19:33:32 +08:00
Xiaomeng Zhao
aa4bc6259b Merge pull request #3700 from myhloli/ocr_enhance
Reduce recognition batch size from 8 to 6
2025-10-11 19:29:09 +08:00
myhloli
b7e4ea0b49 Reduce recognition batch size from 8 to 6 for improved OCR performance 2025-10-11 19:28:16 +08:00
Xiaomeng Zhao
998197a47f Merge pull request #3672 from cjsdurj/optimize_ocr
Optimize pytorch_paddle OCR inference performance, ~400% overall improvement
2025-10-11 18:44:02 +08:00
Xiaomeng Zhao
3c8b6e6b6b Merge pull request #3499 from jinghuan-Chen/fix/fill_blank_rec_crop_empty_image
Avoid cropping empty images.
2025-10-11 11:14:05 +08:00
Xiaomeng Zhao
be42b46ff9 Merge pull request #3688 from myhloli/dev 2025-10-10 19:43:03 +08:00
myhloli
7c689e33b8 Refactor fix_two_layer_blocks function to improve handling of captions and footnotes in table blocks 2025-10-10 19:12:18 +08:00
cjsdurj
af66bc02c2 Optimize OCR inference performance by ~400% 2025-10-09 13:03:22 +00:00
Xiaomeng Zhao
752f75ad8e Merge pull request #3651 from opendatalab/dev
Dev
2025-09-30 06:31:24 +08:00
Xiaomeng Zhao
1cfde98585 Merge pull request #3650 from myhloli/dev
Dev
2025-09-30 06:30:12 +08:00
Xiaomeng Zhao
54676295d5 Update README_zh-CN.md 2025-09-30 06:29:05 +08:00
Xiaomeng Zhao
61c7c65d8b Update README.md 2025-09-30 06:18:00 +08:00
Xiaomeng Zhao
6f05f735d0 Update header.html 2025-09-30 06:11:43 +08:00
Xiaomeng Zhao
befb16e531 Merge pull request #3649 from opendatalab/master
master->dev
2025-09-30 06:08:54 +08:00
Bin Wang
abc433d6f2 Merge pull request #3635 from wangbinDL/master
docs: Update arXiv link for technical report
2025-09-29 09:36:45 +08:00
wangbinDL
e7c1385068 docs: Update arXiv link for technical report 2025-09-29 09:32:30 +08:00
Bin Wang
342c5aa34a Merge pull request #3619 from wangbinDL/master
docs: Update MinerU2.5 Technical Report
2025-09-26 18:35:31 +08:00
wangbinDL
f25ddfa024 docs: Update MinerU2.5 Technical Report 2025-09-26 18:27:22 +08:00
Bin Wang
e31de3a453 Merge pull request #3615 from wangbinDL/master
docs: Add MinerU2.5 technical report and BibTeX
2025-09-26 11:51:45 +08:00
wangbinDL
2f01754410 docs: Add MinerU2.5 technical report and BibTeX 2025-09-26 11:42:59 +08:00
Xiaomeng Zhao
8a9921fb22 Merge pull request #3610 from opendatalab/master
master->dev
2025-09-26 06:17:20 +08:00
myhloli
652e11a253 Update version.py with new version 2025-09-25 21:57:26 +00:00
Xiaomeng Zhao
61cc6886fe Merge pull request #3608 from opendatalab/release-2.5.4
Release 2.5.4
2025-09-26 05:53:36 +08:00
Xiaomeng Zhao
80dc57e7ce Merge pull request #3609 from myhloli/dev
Bump mineru-vl-utils dependency to version 0.1.11
2025-09-26 05:48:32 +08:00
myhloli
d84a006f6d Bump mineru-vl-utils dependency to version 0.1.11 2025-09-26 05:47:27 +08:00
Xiaomeng Zhao
2c5361bf8e Merge pull request #3607 from myhloli/dev
Update changelog for version 2.5.4 to document PDF identification fix
2025-09-26 05:43:50 +08:00
myhloli
eb01b7acf9 Update changelog for version 2.5.4 to document PDF identification fix 2025-09-26 05:42:43 +08:00
Xiaomeng Zhao
5656f1363b Merge pull request #3606 from myhloli/dev
Dev
2025-09-26 05:35:29 +08:00
myhloli
c9315b8e10 Refactor suffix guessing to handle PDF extensions for AI files 2025-09-26 05:31:46 +08:00
myhloli
907099762f Normalize PDF suffix handling for AI files to be case-insensitive 2025-09-26 05:09:19 +08:00
myhloli
2c356cccee Fix suffix identification for AI files to correctly handle PDF extensions 2025-09-26 05:02:56 +08:00
myhloli
0f62f166e6 Enhance image link replacement to handle only .jpg files while preserving other formats 2025-09-26 04:52:05 +08:00
Xiaomeng Zhao
c7a64e72dc Merge pull request #3563 from myhloli/dev
Update model output handling in test_e2e.py to write JSON format instead of text
2025-09-21 02:49:31 +08:00
myhloli
3cb3a94830 Merge remote-tracking branch 'origin/dev' into dev 2025-09-21 02:48:45 +08:00
myhloli
8301fa4c20 Update model output handling in test_e2e.py to write JSON format instead of text 2025-09-21 02:47:56 +08:00
Xiaomeng Zhao
4400f4b75f Merge pull request #3558 from opendatalab/master
master->dev
2025-09-20 15:37:45 +08:00
myhloli
92efb8f96e Update version.py with new version 2025-09-20 07:36:01 +00:00
Xiaomeng Zhao
9a88cbfb09 Merge pull request #3545 from opendatalab/release-2.5.3
Release 2.5.3
2025-09-20 15:33:58 +08:00
Xiaomeng Zhao
e96e4a0ce4 Merge pull request #3557 from opendatalab/dev
Dev
2025-09-20 15:30:40 +08:00
Xiaomeng Zhao
c7bde0ab39 Merge pull request #3556 from myhloli/dev
Refactor batch image orientation classification logic for improved cl…
2025-09-20 15:30:08 +08:00
myhloli
8754c24e42 Refactor batch image orientation classification logic for improved clarity and performance 2025-09-20 15:24:28 +08:00
Xiaomeng Zhao
4f8c00cc34 Merge pull request #3555 from opendatalab/dev
Dev
2025-09-20 15:18:19 +08:00
Xiaomeng Zhao
89681f98ad Merge pull request #3554 from myhloli/dev
Fix formatting in changelog sections of README.md and README_zh-CN.md…
2025-09-20 15:14:16 +08:00
myhloli
66d328dbc5 Fix formatting in changelog sections of README.md and README_zh-CN.md for improved readability 2025-09-20 15:13:29 +08:00
Xiaomeng Zhao
f0c1318545 Merge pull request #3553 from myhloli/dev
Fix formatting in changelog sections of README.md and README_zh-CN.md…
2025-09-20 15:11:43 +08:00
myhloli
6e97f3cf70 Fix formatting in changelog sections of README.md and README_zh-CN.md for improved readability 2025-09-20 15:10:25 +08:00
Xiaomeng Zhao
aede62167e Merge pull request #3552 from opendatalab/dev
Dev
2025-09-20 15:08:40 +08:00
Xiaomeng Zhao
5f2740f743 Merge pull request #3551 from myhloli/dev
Fix compute capability comparison in custom_logits_processors.py for …
2025-09-20 15:08:14 +08:00
myhloli
a888d2b625 Fix compute capability comparison in custom_logits_processors.py for correct version handling 2025-09-20 15:06:49 +08:00
Xiaomeng Zhao
4275876331 Merge pull request #3550 from opendatalab/dev
Dev
2025-09-20 15:01:39 +08:00
Xiaomeng Zhao
ec9f7f54ab Merge pull request #3549 from myhloli/dev
Update README.md and README_zh-CN.md to include changelog for v2.5.3 …
2025-09-20 15:00:50 +08:00
myhloli
7861e5e369 Remove redundant newline in README.md for improved formatting 2025-09-20 15:00:12 +08:00
myhloli
159f3a89a3 Update README.md and README_zh-CN.md to include changelog for v2.5.3 release with compatibility fixes and performance adjustments 2025-09-20 14:57:54 +08:00
Xiaomeng Zhao
d9452bbeb9 Merge pull request #3546 from myhloli/dev
Update docker_deployment.md for improved clarity on base image usage …
2025-09-20 14:48:50 +08:00
myhloli
d808a32c0b Update docker_deployment.md for improved clarity on base image usage and GPU support 2025-09-20 13:52:16 +08:00
Xiaomeng Zhao
12ce3bd024 Merge pull request #3544 from myhloli/dev
Dev
2025-09-20 13:26:18 +08:00
myhloli
e3d7aece50 Remove warning log for default VLLM_USE_V1 value in custom_logits_processors.py 2025-09-20 13:25:11 +08:00
Xiaomeng Zhao
7c55a0ea65 Update mineru/backend/vlm/custom_logits_processors.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-20 13:22:40 +08:00
myhloli
f1659eb7a7 Refactor logits processor handling in server.py and vlm_analyze.py for improved clarity and consistency 2025-09-20 13:21:05 +08:00
myhloli
c6bffd9382 Restrict vllm version to <0.11 for compatibility 2025-09-20 11:49:06 +08:00
myhloli
857dcb2ef5 Update docker_deployment.md to clarify GPU model support and base image options for vLLM 2025-09-20 11:45:33 +08:00
myhloli
ef69f98cd6 Update Dockerfile to include comments for GPU architecture compatibility based on Compute Capability 2025-09-20 03:15:58 +08:00
myhloli
6d5d1cf26b Refactor image rotation handling in batch_analyze.py and paddle_ori_cls.py for improved compatibility with torch versions 2025-09-20 03:07:47 +08:00
myhloli
7c481796f8 Refactor custom logits processors to include vllm version checks and improve logging 2025-09-20 01:22:06 +08:00
myhloli
7d62b7b7cc Update mineru-vl-utils dependency version to 0.1.8 2025-09-20 00:31:14 +08:00
myhloli
5a0cf9af7f Enhance custom logits processors with improved compute capability checks and environment variable handling 2025-09-20 00:21:43 +08:00
myhloli
f5e0e67545 Add custom logits processors functionality with compute capability check 2025-09-19 19:21:56 +08:00
myhloli
a4cac624df Add compute capability check for custom logits processors in server.py and vlm_analyze.py 2025-09-19 19:00:41 +08:00
Xiaomeng Zhao
e1eb318b9b Merge pull request #3535 from opendatalab/master
master->dev
2025-09-19 16:51:13 +08:00
myhloli
31834b1e68 Update version.py with new version 2025-09-19 08:48:17 +00:00
Xiaomeng Zhao
100ace2e99 Merge pull request #3534 from opendatalab/release-2.5.2
Release 2.5.2
2025-09-19 16:45:57 +08:00
Xiaomeng Zhao
6aac639686 Merge pull request #3533 from myhloli/dev
Update ModelScope link in README_zh-CN.md for MinerU2.5 release
2025-09-19 16:39:40 +08:00
myhloli
82f94a9a84 Update ModelScope link in README_zh-CN.md for MinerU2.5 release 2025-09-19 16:36:42 +08:00
Xiaomeng Zhao
d928334c61 Merge pull request #3532 from myhloli/dev
Fix formatting in vlm_middle_json_mkcontent.py to ensure proper line breaks in list items
2025-09-19 16:34:29 +08:00
myhloli
ebad82bd8c Update version in README to 2.5.2 for MinerU2.5 release 2025-09-19 16:31:30 +08:00
myhloli
b03c5fb449 Fix formatting in vlm_middle_json_mkcontent.py to ensure proper line breaks in list items 2025-09-19 16:30:43 +08:00
myhloli
c343afd20c Update version.py with new version 2025-09-19 03:45:08 +00:00
Xiaomeng Zhao
6586c7c01e Merge pull request #3529 from opendatalab/release-2.5.1
Release 2.5.1
2025-09-19 11:43:51 +08:00
Xiaomeng Zhao
304a6d9d8c Merge pull request #3527 from myhloli/dev
fix: Update mineru-vl-utils version and add logits processors support
2025-09-19 11:42:43 +08:00
myhloli
bce9bb6d1d Add support for --logits-processors argument in server.py 2025-09-19 11:42:05 +08:00
myhloli
920220e48e Update version in README for MinerU2.5 release to 2.5.1 2025-09-19 11:40:44 +08:00
myhloli
9fc3d6c742 Remove direct import of MinerULogitsProcessor and add it conditionally in vllm backend 2025-09-19 11:36:20 +08:00
myhloli
8fd544273e Update mineru-vl-utils version and add logits processors support 2025-09-19 11:20:34 +08:00
myhloli
72f1f5f935 Update mineru-vl-utils version and add logits processors support 2025-09-19 11:16:55 +08:00
Xiaomeng Zhao
5559a4701a Merge pull request #3523 from opendatalab/master
master->dev
2025-09-19 10:44:51 +08:00
myhloli
437022abfa Specify version constraints for mineru-vl-utils in pyproject.toml 2025-09-19 03:39:57 +08:00
myhloli
4653ed1502 Remove version constraints for mineru-vl-utils in pyproject.toml 2025-09-19 03:31:13 +08:00
Xiaomeng Zhao
b58c7f8d6e Merge pull request #3517 from opendatalab/dev
Dev
2025-09-19 03:27:30 +08:00
Xiaomeng Zhao
f6133b1731 Merge pull request #3516 from myhloli/dev
Update dependency name for mineru-vl-utils in pyproject.toml
2025-09-19 03:26:31 +08:00
myhloli
12d72c7c17 Update dependency name for mineru-vl-utils in pyproject.toml 2025-09-19 03:25:18 +08:00
Xiaomeng Zhao
5f3f35c009 Merge pull request #3515 from opendatalab/master
master->dev
2025-09-19 03:14:48 +08:00
myhloli
16ad71446b Update version.py with new version 2025-09-18 19:12:56 +00:00
Xiaomeng Zhao
d4b364eb9f Merge pull request #3513 from opendatalab/release-2.5.0
Release 2.5.0
2025-09-19 03:10:02 +08:00
Xiaomeng Zhao
5db08afef6 Merge pull request #3509 from opendatalab/release-2.5.0
Release 2.5.0
2025-09-19 02:51:50 +08:00
jinghuan-Chen
8bb8b715c1 Avoid cropping empty images. 2025-09-18 17:08:40 +08:00
254 changed files with 16743 additions and 25904 deletions


@@ -122,7 +122,21 @@ body:
       #multiple: false
       options:
         -
-        - "2.0.x"
+        - "`<2.2.0`"
+        - "`2.2.x`"
+        - "`>=2.5`"
     validations:
       required: true
+  - type: dropdown
+    id: backend_name
+    attributes:
+      label: Backend name | 解析后端
+      #multiple: false
+      options:
+        -
+        - "vlm"
+        - "pipeline"
+    validations:
+      required: true

README.md

@@ -1,7 +1,7 @@
<div align="center" xmlns="http://www.w3.org/1999/html">
<!-- logo -->
<p align="center">
-  <img src="docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
+  <img src="https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
</p>
<!-- icon -->
@@ -18,7 +18,8 @@
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
@@ -44,7 +45,47 @@
# Changelog
- 2025/12/02 2.6.6 Release
- `mineru-api` tool optimizations
- Added descriptive text to `mineru-api` interface parameters to improve API documentation readability.
- You can use the environment variable `MINERU_API_ENABLE_FASTAPI_DOCS` to control whether the auto-generated interface documentation page is enabled (enabled by default).
- Added concurrency configuration options for the `vlm-vllm-async-engine`, `vlm-lmdeploy-engine`, and `vlm-http-client` backends. Users can use the environment variable `MINERU_API_MAX_CONCURRENT_REQUESTS` to set the maximum number of concurrent API requests (unlimited by default).
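A concurrency cap like `MINERU_API_MAX_CONCURRENT_REQUESTS` can be pictured as a semaphore gating each request handler. The sketch below is illustrative only — it is not MinerU's implementation; only the environment variable name is taken from the entry above:

```python
import asyncio
import os

# Read the cap; in this sketch, unset or 0 means "unlimited".
_cap = int(os.environ.get("MINERU_API_MAX_CONCURRENT_REQUESTS", "0"))
SEM = asyncio.Semaphore(_cap) if _cap > 0 else None

async def handle(req_id: int) -> int:
    if SEM is None:          # no cap configured: run freely
        return req_id
    async with SEM:          # at most _cap handlers inside at once
        await asyncio.sleep(0)
        return req_id

async def main() -> list:
    # gather() preserves submission order regardless of completion order
    return await asyncio.gather(*(handle(i) for i in range(8)))

results = asyncio.run(main())
print(results)
```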
- 2025/11/26 2.6.5 Release
- Added support for the new `vlm-lmdeploy-engine` backend. Its usage is similar to `vlm-vllm-(async)engine`, but it uses `lmdeploy` as the inference engine and, unlike `vllm`, additionally supports native inference acceleration on Windows.
- 2025/11/04 2.6.4 Release
- Added timeout configuration for PDF image rendering, default is 300 seconds, can be configured via environment variable `MINERU_PDF_RENDER_TIMEOUT` to prevent long blocking of the rendering process caused by some abnormal PDF files.
- Added CPU thread count configuration options for ONNX models, default is the system CPU core count, can be configured via environment variables `MINERU_INTRA_OP_NUM_THREADS` and `MINERU_INTER_OP_NUM_THREADS` to reduce CPU resource contention conflicts in high concurrency scenarios.
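These rendering and threading knobs are plain environment variables, so they can be set before MinerU is launched or imported. A minimal sketch — the variable names come from the entry above, the values are arbitrary examples:

```python
import os

# Example values only; set these before importing/launching MinerU.
os.environ["MINERU_PDF_RENDER_TIMEOUT"] = "120"    # seconds per PDF render (default 300)
os.environ["MINERU_INTRA_OP_NUM_THREADS"] = "4"    # threads inside one ONNX op
os.environ["MINERU_INTER_OP_NUM_THREADS"] = "2"    # threads across ONNX ops

print(os.environ["MINERU_PDF_RENDER_TIMEOUT"])
```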
- 2025/10/31 2.6.3 Release
- Added support for a new backend `vlm-mlx-engine`, enabling MLX-accelerated inference for the MinerU2.5 model on Apple Silicon devices. Compared to the `vlm-transformers` backend, `vlm-mlx-engine` delivers a 100%~200% speed improvement.
- Bug fixes: #3849, #3859
- 2025/10/24 2.6.2 Release
- `pipeline` backend optimizations
- Added experimental support for Chinese formulas, which can be enabled by setting the environment variable `export MINERU_FORMULA_CH_SUPPORT=1`. This feature may cause a slight decrease in MFR speed and failures in recognizing some long formulas. It is recommended to enable it only when parsing Chinese formulas is needed. To disable this feature, set the environment variable to `0`.
- `OCR` speed significantly improved by 200%~300%, thanks to the optimization solution provided by [@cjsdurj](https://github.com/cjsdurj)
- `OCR` models optimized for improved accuracy and coverage of Latin script recognition; the Cyrillic, Arabic, Devanagari, Telugu (te), and Tamil (ta) models were updated to the `ppocr-v5` version, with accuracy improved by over 40% compared to the previous models
- `vlm` backend optimizations
- `table_caption` and `table_footnote` matching logic optimized to improve the accuracy of table caption and footnote matching and reading order rationality in scenarios with multiple consecutive tables on a page
- Optimized CPU resource usage during high concurrency when using `vllm` backend, reducing server pressure
- Adapted to `vllm` version 0.11.0
- General optimizations
- Cross-page table merging effect optimized, added support for cross-page continuation table merging, improving table merging effectiveness in multi-column merge scenarios
- Added environment variable configuration option `MINERU_TABLE_MERGE_ENABLE` for table merging feature. Table merging is enabled by default and can be disabled by setting this variable to `0`
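Both switches above (`MINERU_FORMULA_CH_SUPPORT`, `MINERU_TABLE_MERGE_ENABLE`) follow the same "`0` disables, unset falls back to the default" convention. A hypothetical helper illustrating that pattern — not MinerU's actual code, only the variable names are from the entries above:

```python
import os

def env_flag(name: str, default: bool) -> bool:
    """Interpret a '0'/'1'-style environment switch; unset -> default."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val != "0"

os.environ["MINERU_TABLE_MERGE_ENABLE"] = "0"                  # explicitly disable merging
merge_on = env_flag("MINERU_TABLE_MERGE_ENABLE", True)         # set to "0" -> False
ch_formula_on = env_flag("MINERU_FORMULA_CH_SUPPORT", False)   # unset -> default False
print(merge_on, ch_formula_on)
```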
- 2025/09/26 2.5.4 released
- 🎉🎉 The MinerU2.5 [Technical Report](https://arxiv.org/abs/2509.22186) is now available! We welcome you to read it for a comprehensive overview of its model architecture, training strategy, data engineering and evaluation results.
- Fixed an issue where some `PDF` files were mistakenly identified as `AI` files, causing parsing failures
- 2025/09/20 2.5.3 Released
- Dependency version range adjustment to enable Turing and earlier architecture GPUs to use vLLM acceleration for MinerU2.5 model inference.
- `pipeline` backend compatibility fixes for torch 2.8.0.
- Reduced default concurrency for vLLM async backend to lower server pressure and avoid connection closure issues caused by high load.
- More compatibility-related details can be found in the [announcement](https://github.com/opendatalab/MinerU/discussions/3548)
- 2025/09/19 2.5.2 Released
We are officially releasing MinerU2.5, currently the most powerful multimodal large model for document parsing.
With only 1.2B parameters, MinerU2.5's accuracy on the OmniDocBench benchmark comprehensively surpasses top-tier multimodal models like Gemini 2.5 Pro, GPT-4o, and Qwen2.5-VL-72B. It also significantly outperforms leading specialized models such as dots.ocr, MonkeyOCR, and PP-StructureV3.
@@ -560,7 +601,7 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
- Automatically recognize and convert formulas in the document to LaTeX format.
- Automatically recognize and convert tables in the document to HTML format.
- Automatically detect scanned PDFs and garbled PDFs and enable OCR functionality.
- OCR supports detection and recognition of 109 languages.
- Supports multiple output formats, such as multimodal and NLP Markdown, JSON sorted by reading order, and rich intermediate formats.
- Supports various visualization results, including layout visualization and span visualization, for efficient confirmation of output quality.
- Supports running in a pure CPU environment, and also supports GPU(CUDA)/NPU(CANN)/MPS acceleration
@@ -597,41 +638,75 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
> In non-mainline environments, due to the diversity of hardware and software configurations, as well as third-party dependency compatibility issues, we cannot guarantee 100% project availability. Therefore, for users who wish to use this project in non-recommended environments, we suggest carefully reading the documentation and FAQ first. Most issues already have corresponding solutions in the FAQ. We also encourage community feedback to help us gradually expand support.
<table>
<thead>
<tr>
<th rowspan="2">Parsing Backend</th>
<th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
</tr>
<tr>
<th>transformers</th>
<th>mlx-engine</th>
<th>vllm-engine / <br>vllm-async-engine</th>
<th>lmdeploy-engine</th>
<th>http-client</th>
</tr>
</thead>
<tbody>
<tr>
<th>Backend Features</th>
<td>Fast, no hallucinations</td>
<td>Good compatibility, <br>but slower</td>
<td>Faster than transformers</td>
<td>Fast, compatible with the vLLM ecosystem</td>
<td>Fast, compatible with the LMDeploy ecosystem</td>
<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
</tr>
<tr>
<th>Operating System</th>
<td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
<td style="text-align:center;">macOS<sup>3</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup> </td>
<td>Any</td>
</tr>
<tr>
<th>CPU inference support</th>
<td colspan="2" style="text-align:center;">✅</td>
<td colspan="3" style="text-align:center;">❌</td>
<td>Not required</td>
</tr>
<tr>
<th>GPU Requirements</th>
<td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
<td>Apple Silicon</td>
<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
<td>Not required</td>
</tr>
<tr>
<th>Memory Requirements</th>
<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
<td>8 GB</td>
</tr>
<tr>
<th>Disk Space Requirements</th>
<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
<td>2 GB</td>
</tr>
<tr>
<th>Python Version</th>
<td colspan="6" style="text-align:center;">3.10-3.13<sup>7</sup></td>
</tr>
</tbody>
</table>
<sup>1</sup> Accuracy metric is the End-to-End Evaluation Overall score of OmniDocBench (v1.5), tested on the latest `MinerU` version.
<sup>2</sup> Linux supports only distributions released in 2019 or later.
<sup>3</sup> MLX requires macOS 13.5 or later, recommended for use with version 14.0 or higher.
<sup>4</sup> Windows vLLM support is provided via WSL2 (Windows Subsystem for Linux).
<sup>5</sup> Windows LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend. If performance is critical, it is recommended to run it via WSL2.
<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
<sup>7</sup> Windows + LMDeploy only supports Python versions 3.10~3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.
### Install MinerU
@@ -650,8 +725,8 @@ uv pip install -e .[core]
```
> [!TIP]
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
---
@@ -729,10 +804,22 @@ Currently, some models in this project are trained based on YOLO. However, since
- [pdfminer.six](https://github.com/pdfminer/pdfminer.six)
- [pypdf](https://github.com/py-pdf/pypdf)
- [magika](https://github.com/google/magika)
- [vLLM](https://github.com/vllm-project/vllm)
- [LMDeploy](https://github.com/InternLM/lmdeploy)
# Citation
```bibtex
@misc{niu2025mineru25decoupledvisionlanguagemodel,
title={MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing},
author={Junbo Niu and Zheng Liu and Zhuangcheng Gu and Bin Wang and Linke Ouyang and Zhiyuan Zhao and Tao Chu and Tianyao He and Fan Wu and Qintong Zhang and Zhenjiang Jin and Guang Liang and Rui Zhang and Wenzheng Zhang and Yuan Qu and Zhifei Ren and Yuefeng Sun and Yuanhong Zheng and Dongsheng Ma and Zirui Tang and Boyu Niu and Ziyang Miao and Hejun Dong and Siyi Qian and Junyuan Zhang and Jingzhou Chen and Fangdong Wang and Xiaomeng Zhao and Liqun Wei and Wei Li and Shasha Wang and Ruiliang Xu and Yuanyuan Cao and Lu Chen and Qianqian Wu and Huaiyu Gu and Lindong Lu and Keming Wang and Dechen Lin and Guanlin Shen and Xuanhe Zhou and Linfeng Zhang and Yuhang Zang and Xiaoyi Dong and Jiaqi Wang and Bo Zhang and Lei Bai and Pei Chu and Weijia Li and Jiang Wu and Lijun Wu and Zhenxiang Li and Guangyu Wang and Zhongying Tu and Chao Xu and Kai Chen and Yu Qiao and Bowen Zhou and Dahua Lin and Wentao Zhang and Conghui He},
year={2025},
eprint={2509.22186},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.22186},
}
@misc{wang2024mineruopensourcesolutionprecise,
title={MinerU: An Open-Source Solution for Precise Document Content Extraction},
author={Bin Wang and Chao Xu and Xiaomeng Zhao and Linke Ouyang and Fan Wu and Zhiyuan Zhao and Rui Xu and Kaiwen Liu and Yuan Qu and Fukai Shang and Bo Zhang and Liqun Wei and Zhihao Sui and Wei Li and Botian Shi and Yu Qiao and Dahua Lin and Conghui He},
@@ -771,4 +858,4 @@ Currently, some models in this project are trained based on YOLO. However, since
- [OmniDocBench (A Comprehensive Benchmark for Document Parsing and Evaluation)](https://github.com/opendatalab/OmniDocBench)
- [Magic-HTML (Mixed web page extraction tool)](https://github.com/opendatalab/magic-html)
- [Magic-Doc (Fast speed ppt/pptx/doc/docx/pdf extraction tool)](https://github.com/InternLM/magic-doc)
- [Dingo: A Comprehensive AI Data Quality Evaluation Tool](https://github.com/MigoXLab/dingo)

View File

@@ -1,7 +1,7 @@
<div align="center" xmlns="http://www.w3.org/1999/html">
<!-- logo -->
<p align="center">
<img src="https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
</p>
<!-- icon -->
@@ -18,7 +18,8 @@
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
@@ -44,9 +45,55 @@
# Changelog
- 2025/12/02 2.6.6 Released
- `Ascend` adaptation optimizations
- Optimized the command-line tool initialization flow so that the `vlm-vllm-engine` backend of the Ascend adaptation is usable from the command-line tools.
- Updated the adaptation documentation for Atlas 300I Duo (310p) devices.
- `mineru-api` tool optimizations
- Added descriptive text to `mineru-api` interface parameters to improve API documentation readability.
- The environment variable `MINERU_API_ENABLE_FASTAPI_DOCS` controls whether the auto-generated API documentation page is enabled (enabled by default).
- Added concurrency configuration options for the `vlm-vllm-async-engine`, `vlm-lmdeploy-engine`, and `vlm-http-client` backends; the environment variable `MINERU_API_MAX_CONCURRENT_REQUESTS` sets the maximum number of concurrent API requests (unlimited by default).
- 2025/11/26 2.6.5 Released
- Added support for the new `vlm-lmdeploy-engine` backend. Its usage is similar to `vlm-vllm-(async)engine`, but it uses `lmdeploy` as the inference engine and, unlike `vllm`, additionally supports native inference acceleration on Windows.
- Added adaptation support for the domestic accelerator platforms `Ascend (npu)`, `T-Head (ppu)`, and `MetaX (maca)`. Users can run the `pipeline` and `vlm` models on these platforms and accelerate vlm model inference with the `vllm`/`lmdeploy` engines; see [Other Accelerator Adaptations](https://opendatalab.github.io/MinerU/zh/usage/) for details.
- Adapting to domestic platforms is not easy. We have done our best to ensure the completeness and stability of these adaptations, but some stability/compatibility and precision-alignment issues may remain; please choose a suitable environment and scenario according to the status indicators on the adaptation documentation pages.
- To make solutions easy for other users to find, please report any issue not covered by the documentation in the [designated discussion thread](https://github.com/opendatalab/MinerU/discussions/4064).
- 2025/11/04 2.6.4 Released
- Added a timeout for PDF image rendering, 300 seconds by default and configurable via the environment variable `MINERU_PDF_RENDER_TIMEOUT`, to prevent abnormal PDF files from blocking the rendering process for a long time.
- Added CPU thread-count options for ONNX models, defaulting to the system CPU core count and configurable via the environment variables `MINERU_INTRA_OP_NUM_THREADS` and `MINERU_INTER_OP_NUM_THREADS`, to reduce CPU resource contention in high-concurrency scenarios.
- 2025/10/31 2.6.3 Released
- Added support for the new `vlm-mlx-engine` backend, enabling `MLX`-accelerated inference of the `MinerU2.5` model on Apple Silicon devices; compared with the `vlm-transformers` backend, `vlm-mlx-engine` is 100%~200% faster.
- Bug fixes: #3849, #3859
- 2025/10/24 2.6.2 Released
- `pipeline` backend optimizations
- Added experimental support for Chinese formulas, enabled via the environment variable `export MINERU_FORMULA_CH_SUPPORT=1`. This feature may slightly reduce MFR speed and cause some long formulas to fail recognition, so it is recommended only when Chinese formulas need to be parsed. Set the variable to `0` to disable the feature.
- `OCR` speed significantly improved by 200%~300%, thanks to the optimization solution provided by [@cjsdurj](https://github.com/cjsdurj)
- `OCR` models optimized for improved accuracy and coverage of Latin script recognition; the Cyrillic, Arabic, Devanagari, Telugu (te), and Tamil (ta) models were updated to the `ppocr-v5` version, with accuracy improved by over 40% compared to the previous generation
- `vlm` backend optimizations
- Optimized the `table_caption` and `table_footnote` matching logic, improving caption/footnote matching accuracy and reading-order rationality on pages with multiple consecutive tables
- Reduced CPU usage under high concurrency when using the `vllm` backend, lowering server-side pressure
- Adapted to `vllm` version 0.11.0
- General optimizations
- Optimized cross-page table merging and added support for merging cross-page continuation tables, improving merge results in multi-column merge scenarios
- Added the environment variable `MINERU_TABLE_MERGE_ENABLE` for the table merging feature; merging is enabled by default and can be disabled by setting the variable to `0`
- 2025/09/26 2.5.4 Released
- 🎉🎉 The MinerU2.5 [Technical Report](https://arxiv.org/abs/2509.22186) is now available; we welcome you to read it for a comprehensive overview of the model architecture, training strategy, data engineering, and evaluation results.
- Fixed an issue where some `pdf` files were misidentified as `ai` files and failed to parse
- 2025/09/20 2.5.3 Released
- Adjusted dependency version ranges so that Turing and earlier GPU architectures can use vLLM-accelerated inference for the MinerU2.5 model.
- `pipeline` backend compatibility fixes for torch 2.8.0.
- Lowered the default concurrency of the vLLM async backend to reduce server-side pressure and avoid connection closures caused by high load.
- More compatibility details can be found in the [announcement](https://github.com/opendatalab/MinerU/discussions/3547)
- 2025/09/19 2.5.2 Released
We are officially releasing MinerU2.5, currently the most powerful multimodal large model for document parsing. With only 1.2B parameters, MinerU2.5's accuracy on the OmniDocBench benchmark comprehensively surpasses top-tier multimodal models such as Gemini 2.5 Pro, GPT-4o, and Qwen2.5-VL-72B, and significantly outperforms leading specialized document-parsing models such as dots.ocr, MonkeyOCR, and PP-StructureV3.
The model is available on [HuggingFace](https://huggingface.co/opendatalab/MinerU2.5-2509-1.2B) and [ModelScope](https://modelscope.cn/models/opendatalab/MinerU2.5-2509-1.2B); you are welcome to download and use it!
- Core highlights
- Extreme efficiency, SOTA performance: at a lightweight 1.2B scale it achieves SOTA performance surpassing models with tens or even hundreds of billions of parameters, redefining the efficiency frontier of document parsing.
- Advanced architecture, comprehensive lead: combining "two-stage inference" (decoupling layout analysis from content recognition) with a native high-resolution architecture, it reaches SOTA levels in all five areas of layout analysis, text recognition, formula recognition, table recognition, and reading order.
@@ -547,7 +594,7 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
- Automatically recognize and convert formulas in the document to LaTeX format
- Automatically recognize and convert tables in the document to HTML format
- Automatically detect scanned PDFs and garbled PDFs and enable OCR functionality
- OCR supports detection and recognition of 109 languages
- Supports multiple output formats, such as multimodal and NLP Markdown, JSON sorted by reading order, and information-rich intermediate formats
- Supports various visualization results, including layout visualization and span visualization, for efficient confirmation of output quality
- Supports running in a pure CPU environment, and also supports GPU (CUDA) / NPU (CANN) / MPS acceleration
@@ -582,42 +629,80 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
>
> In non-mainline environments, due to the diversity of hardware and software configurations and third-party dependency compatibility issues, we cannot guarantee 100% project availability. For users who wish to use this project in non-recommended environments, we suggest carefully reading the documentation and FAQ first; most issues already have corresponding solutions in the FAQ. We also encourage community feedback to help us gradually expand the range of supported environments.
<table>
<thead>
<tr>
<th rowspan="2">Parsing Backend</th>
<th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
</tr>
<tr>
<th>transformers</th>
<th>mlx-engine</th>
<th>vllm-engine / <br>vllm-async-engine</th>
<th>lmdeploy-engine</th>
<th>http-client</th>
</tr>
</thead>
<tbody>
<tr>
<th>Backend Features</th>
<td>Fast, no hallucinations</td>
<td>Good compatibility, but slower</td>
<td>Faster than transformers</td>
<td>Fast, compatible with the vLLM ecosystem</td>
<td>Fast, compatible with the LMDeploy ecosystem</td>
<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
</tr>
<tr>
<th>Operating System</th>
<td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
<td style="text-align:center;">macOS<sup>3</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup></td>
<td>Any</td>
</tr>
<tr>
<th>CPU Inference Support</th>
<td colspan="2" style="text-align:center;">✅</td>
<td colspan="3" style="text-align:center;">❌</td>
<td>Not required</td>
</tr>
<tr>
<th>GPU Requirements</th>
<td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
<td>Apple Silicon</td>
<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
<td>Not required</td>
</tr>
<tr>
<th>Memory Requirements</th>
<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
<td>8 GB</td>
</tr>
<tr>
<th>Disk Space Requirements</th>
<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
<td>2 GB</td>
</tr>
<tr>
<th>Python Version</th>
<td colspan="6" style="text-align:center;">3.10-3.13<sup>7</sup></td>
</tr>
</tbody>
</table>
<sup>1</sup> The accuracy metric is the End-to-End Evaluation Overall score of OmniDocBench (v1.5), tested on the latest `MinerU` version.
<sup>2</sup> Linux supports only distributions released in 2019 or later.
<sup>3</sup> MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.
<sup>4</sup> Windows vLLM support is provided via WSL2 (Windows Subsystem for Linux).
<sup>5</sup> Windows LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if speed matters, running via WSL2 is recommended.
<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`/`SGLang`/`LMDeploy`.
<sup>7</sup> Windows + LMDeploy only supports Python versions 3.10~3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.
> [!TIP]
> Beyond the mainstream environments and platforms above, we also collect support reports for other platforms contributed by community users; see [Other Accelerator Adaptations](https://opendatalab.github.io/MinerU/zh/usage/) for details.
> If you would like to share your environment adaptation experience with the community, you are welcome to post it via [show-and-tell](https://github.com/opendatalab/MinerU/discussions/categories/show-and-tell) or submit a PR to the [Other Accelerator Adaptations](https://github.com/opendatalab/MinerU/tree/master/docs/zh/usage/acceleration_cards) documentation.
### Install MinerU
@@ -636,8 +721,8 @@ uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple
```
> [!TIP]
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, is compatible with Windows / Linux / macOS, and suits most users.
> If you need `vLLM`/`LMDeploy` acceleration for VLM model inference, or want to install a lightweight client on edge devices, refer to the [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/zh/quick_start/extension_modules/).
---
@@ -715,10 +800,22 @@ mineru -p <input_path> -o <output_path>
- [pdfminer.six](https://github.com/pdfminer/pdfminer.six)
- [pypdf](https://github.com/py-pdf/pypdf)
- [magika](https://github.com/google/magika)
- [vLLM](https://github.com/vllm-project/vllm)
- [LMDeploy](https://github.com/InternLM/lmdeploy)
# Citation
```bibtex
@misc{niu2025mineru25decoupledvisionlanguagemodel,
title={MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing},
author={Junbo Niu and Zheng Liu and Zhuangcheng Gu and Bin Wang and Linke Ouyang and Zhiyuan Zhao and Tao Chu and Tianyao He and Fan Wu and Qintong Zhang and Zhenjiang Jin and Guang Liang and Rui Zhang and Wenzheng Zhang and Yuan Qu and Zhifei Ren and Yuefeng Sun and Yuanhong Zheng and Dongsheng Ma and Zirui Tang and Boyu Niu and Ziyang Miao and Hejun Dong and Siyi Qian and Junyuan Zhang and Jingzhou Chen and Fangdong Wang and Xiaomeng Zhao and Liqun Wei and Wei Li and Shasha Wang and Ruiliang Xu and Yuanyuan Cao and Lu Chen and Qianqian Wu and Huaiyu Gu and Lindong Lu and Keming Wang and Dechen Lin and Guanlin Shen and Xuanhe Zhou and Linfeng Zhang and Yuhang Zang and Xiaoyi Dong and Jiaqi Wang and Bo Zhang and Lei Bai and Pei Chu and Weijia Li and Jiang Wu and Lijun Wu and Zhenxiang Li and Guangyu Wang and Zhongying Tu and Chao Xu and Kai Chen and Yu Qiao and Bowen Zhou and Dahua Lin and Wentao Zhang and Conghui He},
year={2025},
eprint={2509.22186},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.22186},
}
@misc{wang2024mineruopensourcesolutionprecise,
title={MinerU: An Open-Source Solution for Precise Document Content Extraction},
author={Bin Wang and Chao Xu and Xiaomeng Zhao and Linke Ouyang and Fan Wu and Zhiyuan Zhao and Rui Xu and Kaiwen Liu and Yuan Qu and Fukai Shang and Bo Zhang and Liqun Wei and Zhihao Sui and Wei Li and Botian Shi and Yu Qiao and Dahua Lin and Conghui He},
@@ -757,4 +854,4 @@ mineru -p <input_path> -o <output_path>
- [OmniDocBench (A Comprehensive Benchmark for Document Parsing and Evaluation)](https://github.com/opendatalab/OmniDocBench)
- [Magic-HTML (Mixed web page extraction tool)](https://github.com/opendatalab/magic-html)
- [Magic-Doc (Fast speed ppt/pptx/doc/docx/pdf extraction tool)](https://github.com/InternLM/magic-doc)
- [Dingo: A Comprehensive AI Data Quality Evaluation Tool](https://github.com/MigoXLab/dingo)

View File

@@ -235,5 +235,7 @@ if __name__ == '__main__':
"""To enable VLM mode, change the backend to 'vlm-xxx'"""
# parse_doc(doc_path_list, output_dir, backend="vlm-transformers") # more general.
# parse_doc(doc_path_list, output_dir, backend="vlm-mlx-engine") # faster than transformers in macOS 13.5+.
# parse_doc(doc_path_list, output_dir, backend="vlm-vllm-engine") # faster(vllm-engine).
# parse_doc(doc_path_list, output_dir, backend="vlm-lmdeploy-engine") # faster(lmdeploy-engine).
# parse_doc(doc_path_list, output_dir, backend="vlm-http-client", server_url="http://127.0.0.1:30000") # faster(client).
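The commented-out calls above differ only in the `backend` string. A small hypothetical helper that picks one per platform — the backend names come from the snippet, while `parse_doc` belongs to the surrounding demo.py and is not redefined here:

```python
import platform

def choose_backend() -> str:
    """Pick a VLM backend string per platform, mirroring the options above."""
    system = platform.system()
    if system == "Darwin":
        return "vlm-mlx-engine"       # MLX acceleration on Apple Silicon (macOS 13.5+)
    if system == "Windows":
        return "vlm-lmdeploy-engine"  # native Windows acceleration via LMDeploy
    return "vlm-vllm-engine"          # vLLM engine on Linux

backend = choose_backend()
print(backend)
# e.g. parse_doc(doc_path_list, output_dir, backend=backend)
```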

View File

@@ -1,8 +1,9 @@
# Use DaoCloud mirrored vllm image for China region for gpu with Ampere architecture and above (Compute Capability>=8.0)
# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.1.1
# Use the official vllm image
# FROM vllm/vllm-openai:v0.10.1.1
# Use DaoCloud mirrored vllm image for China region for gpu with Turing architecture and below (Compute Capability<8.0)
# FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \

View File

@@ -0,0 +1,34 @@
# Base image providing either a vLLM or an LMDeploy inference environment; choose one as needed. Requires amd64(x86-64) CPU + metax GPU.
# Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + metax GPU.
FROM cr.metax-tech.com/public-ai-release/maca/vllm:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-amd64
# Base image containing the LMDeploy inference environment, requiring amd64(x86-64) CPU + metax GPU.
# FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/maca:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-lmdeploy0.10.2-amd64
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \
fonts-noto-core \
fonts-noto-cjk \
fontconfig \
libgl1 && \
fc-cache -fv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# mod torchvision to be compatible with torch 2.6
RUN sed -i '3s/^Version: 0.15.1+metax3\.1\.0\.4$/Version: 0.21.0+metax3.1.0.4/' /opt/conda/lib/python3.10/site-packages/torchvision-0.15.1+metax3.1.0.4.dist-info/METADATA && \
mv /opt/conda/lib/python3.10/site-packages/torchvision-0.15.1+metax3.1.0.4.dist-info /opt/conda/lib/python3.10/site-packages/torchvision-0.21.0+metax3.1.0.4.dist-info
# Install mineru latest
RUN /opt/conda/bin/python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
/opt/conda/bin/python3 -m pip install 'mineru[core]>=2.6.5' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
/opt/conda/bin/python3 -m pip cache purge
# Download models and update the configuration file
RUN /bin/bash -c "/opt/conda/bin/mineru-models-download -s modelscope -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
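Once built, the image above is used like any other MinerU image. A minimal sketch — the image tag and device flags are assumptions and must be adjusted for your MetaX driver setup:

```shell
# Build the image from the Dockerfile above (tag name is an assumption)
docker build -t mineru-maca:latest -f Dockerfile .

# Run an interactive shell; the device/shm flags below are placeholders
# that depend on how the MetaX GPU is exposed on your host
docker run --rm -it \
  --shm-size 32g \
  -p 30000:30000 \
  mineru-maca:latest \
  /bin/bash
```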

View File

@@ -0,0 +1,32 @@
# Base image providing either a vLLM or an LMDeploy inference environment; choose one as needed. Requires ARM(AArch64) CPU + Ascend NPU.
# Base image containing the vLLM inference environment, requiring ARM(AArch64) CPU + Ascend NPU.
FROM quay.m.daocloud.io/ascend/vllm-ascend:v0.11.0rc2
# Base image containing the LMDeploy inference environment, requiring ARM(AArch64) CPU + Ascend NPU.
# FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:mineru-a2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \
fonts-noto-core \
fonts-noto-cjk \
fontconfig \
libgl1 \
libglib2.0-0 && \
fc-cache -fv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install 'mineru[core]>=2.6.5' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip cache purge
# Download models and update the configuration file
RUN TORCH_DEVICE_BACKEND_AUTOLOAD=0 /bin/bash -c "mineru-models-download -s modelscope -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]

View File

@@ -0,0 +1,30 @@
# Base image providing either a vLLM or an LMDeploy inference environment; choose one as needed. Requires amd64(x86-64) CPU + t-head PPU.
# Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + t-head PPU.
FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/ppu:ppu-pytorch2.6.0-ubuntu24.04-cuda12.6-vllm0.8.5-py312
# Base image containing the LMDeploy inference environment, requiring amd64(x86-64) CPU + t-head PPU.
# FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ppu:mineru-ppu
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \
fonts-noto-core \
fonts-noto-cjk \
fontconfig \
libgl1 && \
fc-cache -fv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install 'mineru[core]>=2.6.5' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip cache purge
# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]

View File

@@ -1,19 +1,38 @@
services:
mineru-vllm-server:
image: mineru-vllm:latest
container_name: mineru-vllm-server
mineru-openai-server:
image: mineru:latest
container_name: mineru-openai-server
restart: always
profiles: ["vllm-server"]
profiles: ["openai-server"]
ports:
- 30000:30000
environment:
MINERU_MODEL_SOURCE: local
entrypoint: mineru-vllm-server
entrypoint: mineru-openai-server
command:
# ==================== Engine Selection ====================
# WARNING: Only ONE engine can be enabled at a time!
# Choose 'vllm' OR 'lmdeploy' (uncomment one line below)
--engine vllm
# --engine lmdeploy
# ==================== vLLM Engine Parameters ====================
# Uncomment if using --engine vllm
--host 0.0.0.0
--port 30000
# --data-parallel-size 2 # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode
# --gpu-memory-utilization 0.5 # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter, if VRAM issues persist, try lowering it further to `0.4` or below.
# Multi-GPU configuration (increase throughput)
# --data-parallel-size 2
# Single GPU memory optimization (reduce if VRAM insufficient)
# --gpu-memory-utilization 0.5 # Try 0.4 or lower if issues persist
# ==================== LMDeploy Engine Parameters ====================
# Uncomment if using --engine lmdeploy
# --server-name 0.0.0.0
# --server-port 30000
# Multi-GPU configuration (increase throughput)
# --dp 2
# Single GPU memory optimization (reduce if VRAM insufficient)
# --cache-max-entry-count 0.5 # Try 0.4 or lower if issues persist
ulimits:
memlock: -1
stack: 67108864
@@ -25,11 +44,11 @@ services:
reservations:
devices:
- driver: nvidia
device_ids: ["0"]
device_ids: ["0"] # Modify for multiple GPUs: ["0", "1"]
capabilities: [gpu]
mineru-api:
image: mineru-vllm:latest
image: mineru:latest
container_name: mineru-api
restart: always
profiles: ["api"]
@@ -39,11 +58,21 @@ services:
MINERU_MODEL_SOURCE: local
entrypoint: mineru-api
command:
# ==================== Server Configuration ====================
--host 0.0.0.0
--port 8000
# parameters for vllm-engine
# --data-parallel-size 2 # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode
# --gpu-memory-utilization 0.5 # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter, if VRAM issues persist, try lowering it further to `0.4` or below.
# ==================== vLLM Engine Parameters ====================
# Multi-GPU configuration
# --data-parallel-size 2
# Single GPU memory optimization
# --gpu-memory-utilization 0.5 # Try 0.4 or lower if VRAM insufficient
# ==================== LMDeploy Engine Parameters ====================
# Multi-GPU configuration
# --dp 2
# Single GPU memory optimization
# --cache-max-entry-count 0.5 # Try 0.4 or lower if VRAM insufficient
ulimits:
memlock: -1
stack: 67108864
@@ -53,11 +82,11 @@ services:
reservations:
devices:
- driver: nvidia
device_ids: [ "0" ]
capabilities: [ gpu ]
device_ids: ["0"] # Modify for multiple GPUs: ["0", "1"]
capabilities: [gpu]
mineru-gradio:
image: mineru-vllm:latest
image: mineru:latest
container_name: mineru-gradio
restart: always
profiles: ["gradio"]
@@ -67,14 +96,30 @@ services:
MINERU_MODEL_SOURCE: local
entrypoint: mineru-gradio
command:
# ==================== Gradio Server Configuration ====================
--server-name 0.0.0.0
--server-port 7860
--enable-vllm-engine true # Enable the vllm engine for Gradio
# --enable-api false # If you want to disable the API, set this to false
# --max-convert-pages 20 # If you want to limit the number of pages for conversion, set this to a specific number
# parameters for vllm-engine
# --data-parallel-size 2 # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode
# --gpu-memory-utilization 0.5 # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter, if VRAM issues persist, try lowering it further to `0.4` or below.
# ==================== Gradio Feature Settings ====================
# --enable-api false # Disable API endpoint
# --max-convert-pages 20 # Limit conversion page count
# ==================== Engine Selection ====================
# WARNING: Only ONE engine can be enabled at a time!
# Option 1: vLLM Engine (recommended for most users)
--enable-vllm-engine true
# Multi-GPU configuration
# --data-parallel-size 2
# Single GPU memory optimization
# --gpu-memory-utilization 0.5 # Try 0.4 or lower if VRAM insufficient
# Option 2: LMDeploy Engine
# --enable-lmdeploy-engine true
# Multi-GPU configuration
# --dp 2
# Single GPU memory optimization
# --cache-max-entry-count 0.5 # Try 0.4 or lower if VRAM insufficient
ulimits:
memlock: -1
stack: 67108864
@@ -84,5 +129,5 @@ services:
reservations:
devices:
- driver: nvidia
device_ids: [ "0" ]
capabilities: [ gpu ]
device_ids: ["0"] # Modify for multiple GPUs: ["0", "1"]
capabilities: [gpu]
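The profile-based layout above means each service is started explicitly by name. A sketch of typical invocations, using the profile and service names defined in the compose file:

```shell
# Start the OpenAI-compatible inference server in the background
docker compose -f compose.yaml --profile openai-server up -d

# Or start the FastAPI / Gradio services instead
docker compose -f compose.yaml --profile api up -d
docker compose -f compose.yaml --profile gradio up -d

# Tail logs for a running service
docker compose -f compose.yaml logs -f mineru-openai-server
```

Because vLLM pre-allocates GPU memory, start only one GPU-backed profile at a time on a single-GPU machine.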

View File

@@ -1,6 +1,10 @@
# Use the official vllm image
# Use the official vllm image for gpu with Ampere architecture and above (Compute Capability>=8.0)
# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
FROM vllm/vllm-openai:v0.10.1.1
# Use the official vllm image for gpu with Turing architecture and below (Compute Capability<8.0)
# FROM vllm/vllm-openai:v0.10.2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \

Binary image files added (not shown in the diff).
View File

@@ -19,7 +19,8 @@
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
<div align="center">
@@ -56,7 +57,7 @@ Compared to well-known commercial products domestically and internationally, Min
- Automatically identify and convert formulas in documents to LaTeX format
- Automatically identify and convert tables in documents to HTML format
- Automatically detect scanned PDFs and garbled PDFs, and enable OCR functionality
- OCR supports detection and recognition of 84 languages
- OCR supports detection and recognition of 109 languages
- Support multiple output formats, such as multimodal and NLP Markdown, reading-order-sorted JSON, and information-rich intermediate formats
- Support multiple visualization results, including layout visualization, span visualization, etc., for efficient confirmation of output effects and quality inspection
- Support pure CPU environment operation, and support GPU(CUDA)/NPU(CANN)/MPS acceleration

View File

@@ -6,11 +6,12 @@ MinerU provides a convenient Docker deployment method, which helps quickly set u
```bash
wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/global/Dockerfile
docker build -t mineru-vllm:latest -f Dockerfile .
docker build -t mineru:latest -f Dockerfile .
```
> [!TIP]
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper/Blackwell platforms.
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default. The v1 engine in this vLLM version has limited support for some GPU models.
> If vLLM-accelerated inference does not work on Turing and earlier architecture GPUs, you can resolve this by changing the base image to `vllm/vllm-openai:v0.10.2`.
## Docker Description
@@ -19,7 +20,7 @@ MinerU's Docker uses `vllm/vllm-openai` as the base image, so it includes the `v
> [!NOTE]
> Requirements for using `vllm` to accelerate VLM model inference:
>
> - Device must have Turing architecture or later graphics cards with 8GB+ available VRAM.
> - Device must have Volta architecture or later graphics cards with 8GB+ available VRAM.
> - The host machine's graphics driver should support CUDA 12.8 or higher; You can check the driver version using the `nvidia-smi` command.
> - Docker container must have access to the host machine's graphics devices.
@@ -30,7 +31,7 @@ docker run --gpus all \
--shm-size 32g \
-p 30000:30000 -p 7860:7860 -p 8000:8000 \
--ipc=host \
-it mineru-vllm:latest \
-it mineru:latest \
/bin/bash
```
@@ -50,17 +51,17 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
>
>- The `compose.yaml` file contains configurations for multiple services of MinerU, you can choose to start specific services as needed.
>- Different services might have additional parameter configurations, which you can view and edit in the `compose.yaml` file.
>- Due to the pre-allocation of GPU memory by the `vllm` inference acceleration framework, you may not be able to run multiple `vllm` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-vllm-server` service or using the `vlm-vllm-engine` backend.
>- Due to the pre-allocation of GPU memory by the `vllm` inference acceleration framework, you may not be able to run multiple `vllm` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-openai-server` service or using the `vlm-vllm-engine` backend.
---
### Start vllm-server service
connect to `vllm-server` via `vlm-http-client` backend
### Start OpenAI-compatible server service
connect to `openai-server` via `vlm-http-client` backend
```bash
docker compose -f compose.yaml --profile vllm-server up -d
docker compose -f compose.yaml --profile openai-server up -d
```
>[!TIP]
>In another terminal, connect to vllm server via http client (only requires CPU and network, no vllm environment needed)
>In another terminal, connect to openai server via http client (only requires CPU and network, no vllm environment needed)
> ```bash
> mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000
> ```
@@ -83,4 +84,3 @@ connect to `vllm-server` via `vlm-http-client` backend
>[!TIP]
>
>- Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI.
>- Access `http://<server_ip>:7860/?view=api` to use the Gradio API.

View File

@@ -4,26 +4,43 @@ MinerU supports installing extension modules on demand based on different needs
## Common Scenarios
### Core Functionality Installation
The `core` module is the core dependency of MinerU, containing all functional modules except `vllm`. Installing this module ensures the basic functionality of MinerU works properly.
The `core` module is the core dependency of MinerU, containing all functional modules except `vllm`/`lmdeploy`. Installing this module ensures the basic functionality of MinerU works properly.
```bash
uv pip install mineru[core]
uv pip install "mineru[core]"
```
---
### Using `vllm` to Accelerate VLM Model Inference
The `vllm` module provides acceleration support for VLM model inference, suitable for graphics cards with Turing architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
In the configuration, `all` includes both `core` and `vllm` modules, so `mineru[all]` and `mineru[core,vllm]` are equivalent.
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them to install and use based on your actual needs, but it is not recommended to install both modules simultaneously to avoid potential dependency conflicts.
The `vllm` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
```bash
uv pip install mineru[all]
uv pip install "mineru[core,vllm]"
```
> [!TIP]
> If exceptions occur during installation of the complete package including vllm, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve the issue, or directly use the [Docker](./docker_deployment.md) deployment method.
> If exceptions occur during installation of the extra package including vllm, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve the issue, or directly use the [Docker](./docker_deployment.md) deployment method.
---
### Installing Lightweight Client to Connect to vllm-server
If you need to install a lightweight client on edge devices to connect to `vllm-server`, you can install the basic mineru package, which is very lightweight and suitable for devices with only CPU and network connectivity.
### Using `lmdeploy` to Accelerate VLM Model Inference
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them to install and use based on your actual needs, but it is not recommended to install both modules simultaneously to avoid potential dependency conflicts.
The `lmdeploy` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
```bash
uv pip install "mineru[core,lmdeploy]"
```
> [!TIP]
> If exceptions occur during installation of the extra package including lmdeploy, please refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/get_started/installation.html) to try to resolve the issue.
---
### Installing Lightweight Client to Connect to OpenAI-compatible servers
If you need a lightweight client on edge devices that connects to an OpenAI-compatible server to use VLM mode, install the basic mineru package; it is very lightweight and suitable for devices with only a CPU and network connectivity.
```bash
uv pip install mineru
```
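With only the base package installed, the client delegates all inference to a remote server over HTTP. A minimal sketch — the input path and server address are placeholders for your own values:

```shell
# Parse a document on an edge device by delegating VLM inference to a
# remote OpenAI-compatible server (replace path and address with yours)
mineru -p ./input.pdf -o ./output \
  -b vlm-http-client \
  -u http://192.168.1.100:30000
```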

View File

@@ -27,41 +27,75 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
> In non-mainstream environments, due to the diversity of hardware and software configurations, as well as compatibility issues with third-party dependencies, we cannot guarantee 100% usability of the project. Therefore, for users who wish to use this project in non-recommended environments, we suggest carefully reading the documentation and FAQ first, as most issues have corresponding solutions in the FAQ. Additionally, we encourage community feedback on issues so that we can gradually expand our support range.
<table border="1">
<tr>
<td>Parsing Backend</td>
<td>pipeline</td>
<td>vlm-transformers</td>
<td>vlm-vllm</td>
</tr>
<tr>
<td>Operating System</td>
<td>Linux / Windows / macOS</td>
<td>Linux / Windows</td>
<td>Linux / Windows (via WSL2)</td>
</tr>
<tr>
<td>CPU Inference Support</td>
<td>✅</td>
<td colspan="2">❌</td>
</tr>
<tr>
<td>GPU Requirements</td>
<td>Turing architecture and later, 6GB+ VRAM or Apple Silicon</td>
<td colspan="2">Turing architecture and later, 8GB+ VRAM</td>
</tr>
<tr>
<td>Memory Requirements</td>
<td colspan="3">Minimum 16GB+, recommended 32GB+</td>
</tr>
<tr>
<td>Disk Space Requirements</td>
<td colspan="3">20GB+, SSD recommended</td>
</tr>
<tr>
<td>Python Version</td>
<td colspan="3">3.10-3.13</td>
</tr>
<thead>
<tr>
<th rowspan="2">Parsing Backend</th>
<th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
</tr>
<tr>
<th>transformers</th>
<th>mlx-engine</th>
<th>vllm-engine / <br>vllm-async-engine</th>
<th>lmdeploy-engine</th>
<th>http-client</th>
</tr>
</thead>
<tbody>
<tr>
<th>Backend Features</th>
<td>Fast, no hallucinations</td>
<td>Good compatibility, <br>but slower</td>
<td>Faster than transformers</td>
<td>Fast, compatible with the vLLM ecosystem</td>
<td>Fast, compatible with the LMDeploy ecosystem</td>
<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
</tr>
<tr>
<th>Operating System</th>
<td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
<td style="text-align:center;">macOS<sup>3</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup> </td>
<td>Any</td>
</tr>
<tr>
<th>CPU inference support</th>
<td colspan="2" style="text-align:center;">✅</td>
<td colspan="3" style="text-align:center;">❌</td>
<td>Not required</td>
</tr>
<tr>
<th>GPU Requirements</th><td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
<td>Apple Silicon</td>
<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
<td>Not required</td>
</tr>
<tr>
<th>Memory Requirements</th>
<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
<td>8 GB</td>
</tr>
<tr>
<th>Disk Space Requirements</th>
<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
<td>2 GB</td>
</tr>
<tr>
<th>Python Version</th>
<td colspan="6" style="text-align:center;">3.10-3.13<sup>7</sup></td>
</tr>
</tbody>
</table>
<sup>1</sup> Accuracy metric is the End-to-End Evaluation Overall score of OmniDocBench (v1.5), tested on the latest `MinerU` version.
<sup>2</sup> Linux supports only distributions released in 2019 or later.
<sup>3</sup> MLX requires macOS 13.5 or later, recommended for use with version 14.0 or higher.
<sup>4</sup> On Windows, vLLM is supported via WSL2 (Windows Subsystem for Linux).
<sup>5</sup> Windows LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend. If performance is critical, it is recommended to run it via WSL2.
<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
<sup>7</sup> Windows + LMDeploy only supports Python versions 3.10-3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.
### Install MinerU
@@ -80,8 +114,8 @@ uv pip install -e .[core]
```
> [!TIP]
> `mineru[core]` includes all core features except `vllm` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `vllm` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
---

View File

@@ -397,10 +397,10 @@ Text levels are distinguished through the `text_level` field:
{
"type": "image",
"img_path": "images/a8ecda1c69b27e4f79fce1589175a9d721cbdc1cf78b4cc06a015f3746f6b9d8.jpg",
"img_caption": [
"image_caption": [
"Fig. 1. Annual flow duration curves of daily flows from Pine Creek, Australia, 1989-2000."
],
"img_footnote": [],
"image_footnote": [],
"bbox": [
62,
480,
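The renamed fields above are plain JSON keys, so downstream code only needs a small adjustment; a minimal sketch (field names follow the renamed keys shown above, the sample data is invented for illustration):

```python
import json

# Minimal sketch: collect the captions attached to image blocks in a
# MinerU content_list JSON. Field names follow the renamed keys shown
# above ("image_caption"/"image_footnote"); the sample data is invented.
sample = json.loads("""
[
  {
    "type": "image",
    "img_path": "images/example.jpg",
    "image_caption": ["Fig. 1. Annual flow duration curves."],
    "image_footnote": [],
    "bbox": [62, 480, 300, 600]
  },
  {"type": "text", "text": "Some paragraph."}
]
""")

def image_captions(content_list):
    """Return all captions attached to image blocks, in document order."""
    return [
        caption
        for block in content_list
        if block.get("type") == "image"
        for caption in block.get("image_caption", [])
    ]

print(image_captions(sample))
```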

View File

@@ -1,8 +1,8 @@
# Advanced Command Line Parameters
## vllm Acceleration Parameter Optimization
## Pass-through of inference engine parameters
### Performance Optimization Parameters
### vllm Acceleration Parameter Optimization
> [!TIP]
> If you can already use vllm normally for accelerated VLM model inference but still want to further improve inference speed, you can try the following parameters:
>
@@ -10,8 +10,9 @@
### Parameter Passing Instructions
> [!TIP]
> - All officially supported vllm parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-vllm-server`, `mineru-gradio`, `mineru-api`
> - All officially supported vllm/lmdeploy parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-openai-server`, `mineru-gradio`, `mineru-api`
> - If you want to learn more about `vllm` parameter usage, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/cli/serve.html)
> - If you want to learn more about `lmdeploy` parameter usage, please refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/llm/api_server.html)
## GPU Device Selection and Configuration
@@ -21,7 +22,7 @@
> ```bash
> CUDA_VISIBLE_DEVICES=1 mineru -p <input_path> -o <output_path>
> ```
> - This specification method is effective for all command line calls, including `mineru`, `mineru-vllm-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.
> - This specification method is effective for all command line calls, including `mineru`, `mineru-openai-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.
### Common Device Configuration Examples
> [!TIP]
@@ -38,9 +39,9 @@
> [!TIP]
> Here are some possible usage scenarios:
>
> - If you have multiple graphics cards and need to use cards 0 and 1 with multi-GPU parallelism to start `vllm-server`, you can use the following command:
> - If you have multiple graphics cards and need to use cards 0 and 1 with multi-GPU parallelism to start `openai-server`, you can use the following command:
> ```bash
> CUDA_VISIBLE_DEVICES=0,1 mineru-vllm-server --port 30000 --data-parallel-size 2
> CUDA_VISIBLE_DEVICES=0,1 mineru-openai-server --engine vllm --port 30000 --data-parallel-size 2
> ```
>
> - If you have multiple graphics cards and need to start two `fastapi` services on cards 0 and 1, listening on different ports respectively, you can use the following commands:
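The concrete commands for this scenario are truncated in this diff. As an illustrative sketch only (the `--host`/`--port` flags for `mineru-api` are taken from examples elsewhere in these docs, and nothing is executed here), the per-GPU launch commands could be composed like this:

```python
import os

# Sketch: compose one `mineru-api` launch per GPU, each pinned via
# CUDA_VISIBLE_DEVICES and listening on its own port. Nothing is run;
# pass each (env, cmd) pair to subprocess.Popen(cmd, env=env) to launch.
def per_gpu_api_commands(gpu_ids, base_port=8000):
    commands = []
    for offset, gpu in enumerate(gpu_ids):
        env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)}
        cmd = ["mineru-api", "--host", "0.0.0.0", "--port", str(base_port + offset)]
        commands.append((env, cmd))
    return commands

for env, cmd in per_gpu_api_commands([0, 1]):
    print(env["CUDA_VISIBLE_DEVICES"], " ".join(cmd))
```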

View File

@@ -11,7 +11,7 @@ Options:
-p, --path PATH Input file path or directory (required)
-o, --output PATH Output directory (required)
-m, --method [auto|txt|ocr] Parsing method: auto (default), txt, ocr (pipeline backend only)
-b, --backend [pipeline|vlm-transformers|vlm-vllm-engine|vlm-http-client]
-b, --backend [pipeline|vlm-transformers|vlm-vllm-engine|vlm-lmdeploy-engine|vlm-http-client]
Parsing backend (default: pipeline)
-l, --lang [ch|ch_server|ch_lite|en|korean|japan|chinese_cht|ta|te|ka|th|el|latin|arabic|east_slavic|cyrillic|devanagari]
Specify document language (improves OCR accuracy, pipeline backend only)
@@ -20,7 +20,7 @@ Options:
-e, --end INTEGER Ending page number for parsing (0-based)
-f, --formula BOOLEAN Enable formula parsing (default: enabled)
-t, --table BOOLEAN Enable table parsing (default: enabled)
-d, --device TEXT Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline backend only)
-d, --device TEXT Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline and vlm-transformers backends only)
--vram INTEGER Maximum GPU VRAM usage per process (GB) (pipeline backend only)
--source [huggingface|modelscope|local]
Model source, default: huggingface
@@ -68,7 +68,7 @@ Here are the environment variables and their descriptions:
- `MINERU_DEVICE_MODE`:
* Used to specify inference device
* supports device types like `cpu/cuda/cuda:0/npu/mps`
* only effective for `pipeline` backend.
* only effective for `pipeline` and `vlm-transformers` backends.
- `MINERU_VIRTUAL_VRAM_SIZE`:
* Used to specify maximum GPU VRAM usage per process (GB)
@@ -87,6 +87,27 @@ Here are the environment variables and their descriptions:
* Used to enable formula parsing
* defaults to `true`, can be set to `false` through environment variables to disable formula parsing.
- `MINERU_TABLE_ENABLE`:
- `MINERU_FORMULA_CH_SUPPORT`:
* Used to enable Chinese formula parsing optimization (experimental feature)
* Default is `false`, can be set to `true` via environment variable to enable Chinese formula parsing optimization.
* Only effective for `pipeline` backend.
- `MINERU_TABLE_ENABLE`:
* Used to enable table parsing
* defaults to `true`, can be set to `false` through environment variables to disable table parsing.
* Default is `true`, can be set to `false` via environment variable to disable table parsing.
- `MINERU_TABLE_MERGE_ENABLE`:
* Used to enable table merging functionality
* Default is `true`, can be set to `false` via environment variable to disable table merging functionality.
- `MINERU_PDF_RENDER_TIMEOUT`:
* Used to set the timeout period (in seconds) for rendering PDF to images
* Default is `300` seconds, can be set to other values via environment variable to adjust the image rendering timeout.
- `MINERU_INTRA_OP_NUM_THREADS`:
* Used to set the intra_op thread count for ONNX models, affects the computation speed of individual operators
* Default is `-1` (auto-select), can be set to other values via environment variable to adjust the thread count.
- `MINERU_INTER_OP_NUM_THREADS`:
* Used to set the inter_op thread count for ONNX models, affects the parallel execution of multiple operators
* Default is `-1` (auto-select), can be set to other values via environment variable to adjust the thread count.
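The variables above can also be set per run rather than globally; a minimal sketch (values and paths are illustrative, and the CLI itself is not invoked here):

```python
import os

# Sketch: build an environment for a single `mineru` run with a few
# MINERU_* variables set (values are examples). In a real run you would
# pass this env to subprocess.run(["mineru", ...], env=run_env).
run_env = {
    **os.environ,
    "MINERU_DEVICE_MODE": "cuda:0",        # pipeline / vlm-transformers only
    "MINERU_VIRTUAL_VRAM_SIZE": "8",       # max VRAM per process, in GB
    "MINERU_TABLE_ENABLE": "false",        # disable table parsing
    "MINERU_INTRA_OP_NUM_THREADS": "-1",   # auto-select ONNX intra-op threads
}
print(run_env["MINERU_DEVICE_MODE"])
```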

View File

@@ -29,7 +29,7 @@ mineru -p <input_path> -o <output_path>
mineru -p <input_path> -o <output_path> -b vlm-transformers
```
> [!TIP]
> The vlm backend additionally supports `vllm` acceleration. Compared to the `transformers` backend, `vllm` can achieve 20-30x speedup. You can check the installation method for the complete package supporting `vllm` acceleration in the [Extension Modules Installation Guide](../quick_start/extension_modules.md).
> The vlm backend additionally supports `vllm`/`lmdeploy` acceleration. Compared to the `transformers` backend, inference speed can be significantly improved. You can check the installation method for the complete package supporting `vllm`/`lmdeploy` acceleration in the [Extension Modules Installation Guide](../quick_start/extension_modules.md).
If you need to adjust parsing options through custom parameters, you can also check the more detailed [Command Line Tools Usage Instructions](./cli_tools.md) in the documentation.
@@ -48,15 +48,21 @@ If you need to adjust parsing options through custom parameters, you can also ch
mineru-gradio --server-name 0.0.0.0 --server-port 7860
# Or using vlm-vllm-engine/pipeline backends (requires vllm environment)
mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-vllm-engine true
# Or using vlm-lmdeploy-engine/pipeline backends (requires lmdeploy environment)
mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-lmdeploy-engine true
```
>[!TIP]
>
>- Access `http://127.0.0.1:7860` in your browser to use the Gradio WebUI.
>- Access `http://127.0.0.1:7860/?view=api` to use the Gradio API.
- Using `http-client/server` method:
```bash
# Start vllm server (requires vllm environment)
mineru-vllm-server --port 30000
# Start openai compatible server (requires vllm or lmdeploy environment)
mineru-openai-server
# Or start vllm server (requires vllm environment)
mineru-openai-server --engine vllm --port 30000
# Or start lmdeploy server (requires lmdeploy environment)
mineru-openai-server --engine lmdeploy --server-port 30000
```
>[!TIP]
>In another terminal, connect to vllm server via http client (only requires CPU and network, no vllm environment needed)
@@ -65,8 +71,8 @@ If you need to adjust parsing options through custom parameters, you can also ch
> ```
> [!NOTE]
> All officially supported vllm parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-vllm-server`, `mineru-gradio`, `mineru-api`.
> We have compiled some commonly used parameters and usage methods for `vllm`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
> All officially supported `vllm/lmdeploy` parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-openai-server`, `mineru-gradio`, `mineru-api`.
> We have compiled some commonly used parameters and usage methods for `vllm/lmdeploy`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
## Extending MinerU Functionality with Configuration Files
@@ -83,8 +89,28 @@ Here are some available configuration options:
- `llm-aided-config`:
* Used to configure parameters for LLM-assisted title hierarchy
* Compatible with all LLM models supporting `openai protocol`, defaults to using Alibaba Cloud Bailian's `qwen2.5-32b-instruct` model.
* Compatible with all LLM models supporting `openai protocol`, defaults to using Alibaba Cloud Bailian's `qwen3-next-80b-a3b-instruct` model.
* You need to configure your own API key and set `enable` to `true` to enable this feature.
* If your API provider does not support the `enable_thinking` parameter, please manually remove it.
* For example, in your configuration file, the `llm-aided-config` section may look like:
```json
"llm-aided-config": {
"api_key": "your_api_key",
"base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"model": "qwen3-next-80b-a3b-instruct",
"enable_thinking": false,
"enable": false
}
```
* To remove the `enable_thinking` parameter, simply delete the line containing `"enable_thinking": false`, resulting in:
```json
"llm-aided-config": {
"api_key": "your_api_key",
"base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"model": "qwen3-next-80b-a3b-instruct",
"enable": false
}
```
- `models-dir`:
* Used to specify local model storage directory
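These options can also be toggled by a small script instead of hand-editing; a sketch, assuming the configuration lives in a `mineru.json` file with the `llm-aided-config` section shown above (the file location here is a stand-in, not the real default path):

```python
import json
import tempfile
from pathlib import Path

# Sketch: flip `enable` on in an existing llm-aided-config. A temporary
# stand-in config file is created for the demo; point config_path at
# your actual mineru.json instead.
config_path = Path(tempfile.mkdtemp()) / "mineru.json"
config_path.write_text(json.dumps({
    "llm-aided-config": {
        "api_key": "your_api_key",
        "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "model": "qwen3-next-80b-a3b-instruct",
        "enable": False,
    }
}))

config = json.loads(config_path.read_text())
config["llm-aided-config"]["enable"] = True   # switch the feature on
config_path.write_text(json.dumps(config, indent=2))
print(json.loads(config_path.read_text())["llm-aided-config"]["enable"])
```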

View File

@@ -19,7 +19,8 @@
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
<div align="center">
@@ -55,7 +56,7 @@ MinerU was born during the pre-training process of [InternLM](https://github.com/InternLM/InternLM)
- Automatically recognizes and converts formulas in documents to LaTeX format
- Automatically recognizes and converts tables in documents to HTML format
- Automatically detects scanned and garbled PDFs and enables OCR
- OCR supports detection and recognition of 84 languages
- OCR supports detection and recognition of 109 languages
- Supports multiple output formats, such as multimodal and NLP Markdown, reading-order-sorted JSON, and information-rich intermediate formats
- Supports multiple visualization results, including layout visualization and span visualization, for efficient output verification and quality inspection
- Supports running in a pure-CPU environment, with GPU (CUDA) / NPU (CANN) / MPS acceleration

View File

@@ -6,11 +6,12 @@ MinerU provides a convenient Docker deployment method, which helps quickly set up the environment and
```bash
wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/china/Dockerfile
docker build -t mineru-vllm:latest -f Dockerfile .
docker build -t mineru:latest -f Dockerfile .
```
> [!TIP]
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default, supporting the Turing/Ampere/Ada Lovelace/Hopper/Blackwell platforms
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default,
> and this version's vLLM v1 engine has limited GPU support. If you cannot use vLLM-accelerated inference on Turing or earlier GPUs, change the base image to `vllm/vllm-openai:v0.10.2` to resolve the issue.
## Docker Notes
@@ -18,7 +19,7 @@ MinerU's Docker uses `vllm/vllm-openai` as the base image, so inside Docker
> [!NOTE]
> Using `vllm` to accelerate VLM model inference requires:
>
> - A GPU with Turing architecture or later and at least 8 GB of available VRAM.
> - A GPU with Volta architecture or later and at least 8 GB of available VRAM.
> - A host GPU driver supporting CUDA 12.8 or later; check the driver version with the `nvidia-smi` command.
> - Access to the host machine's GPU devices from inside Docker.
@@ -30,7 +31,7 @@ docker run --gpus all \
--shm-size 32g \
-p 30000:30000 -p 7860:7860 -p 8000:8000 \
--ipc=host \
-it mineru-vllm:latest \
-it mineru:latest \
/bin/bash
```
@@ -49,17 +50,17 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
>
>- The `compose.yaml` file contains configurations for multiple MinerU services; you can choose to start specific services as needed.
>- Different services may have additional parameter configurations, which you can view and edit in the `compose.yaml` file.
>- Because the `vllm` inference acceleration framework pre-allocates VRAM, you may not be able to run multiple `vllm` services on the same machine at the same time. Make sure any other services that may use VRAM have been stopped before starting the `vlm-vllm-server` service or using the `vlm-vllm-engine` backend.
>- Because the `vllm` inference acceleration framework pre-allocates VRAM, you may not be able to run multiple `vllm` services on the same machine at the same time. Make sure any other services that may use VRAM have been stopped before starting the `vlm-openai-server` service or using the `vlm-vllm-engine` backend.
---
### Start the vllm-server service
and connect to `vllm-server` via the `vlm-http-client` backend
### Start the OpenAI-compatible server service
and connect to `openai-server` via the `vlm-http-client` backend
```bash
docker compose -f compose.yaml --profile vllm-server up -d
docker compose -f compose.yaml --profile openai-server up -d
```
>[!TIP]
>Connect to the vllm server via http client in another terminal (requires only CPU and network, no vllm environment needed)
>Connect to the openai server via http client in another terminal (requires only CPU and network, no vllm environment needed)
> ```bash
> mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000
> ```
@@ -81,5 +82,4 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
```
>[!TIP]
>
>- Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI.
>- Access `http://<server_ip>:7860/?view=api` to use the Gradio API.
>- Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI.

View File

@@ -4,26 +4,41 @@ MinerU supports installing extension modules on demand for different needs, to enhance functionality or
## Common Scenarios
### Core Feature Installation
The `core` module is MinerU's core dependency and includes all feature modules except `vllm`. Installing this module ensures that MinerU's basic features work properly.
The `core` module is MinerU's core dependency and includes all feature modules except `vllm`/`lmdeploy`. Installing this module ensures that MinerU's basic features work properly.
```bash
uv pip install mineru[core]
uv pip install "mineru[core]"
```
---
### Accelerating VLM model inference with `vllm`
The `vllm` module provides acceleration for VLM model inference and is suitable for GPUs with Turing architecture or later (8 GB VRAM or more). Installing this module can significantly improve model inference speed.
In the configuration, `all` includes the `core` and `vllm` modules, so `mineru[all]` and `mineru[core,vllm]` are equivalent.
> [!NOTE]
> `vllm` and `lmdeploy` provide nearly identical acceleration and usage for VLM inference; choose one based on your situation, but avoid installing both modules at once to prevent potential dependency conflicts.
The `vllm` module provides acceleration for VLM model inference and is suitable for GPUs with Volta architecture or later (8 GB VRAM or more). Installing this module can significantly improve model inference speed.
```bash
uv pip install mineru[all]
uv pip install "mineru[core,vllm]"
```
> [!TIP]
> If an exception occurs while installing the complete package that includes vllm, refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve it, or deploy the image directly via [Docker](./docker_deployment.md).
> If an exception occurs while installing the extension package that includes `vllm`, refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve it, or deploy the image directly via [Docker](./docker_deployment.md).
---
### Install the lightweight client to connect to vllm-server
If you need to install a lightweight client on edge devices to connect to `vllm-server`, you can install MinerU's base package, which is very lightweight and suitable for devices with only a CPU and a network connection.
### Accelerating VLM model inference with `lmdeploy`
> [!NOTE]
> `vllm` and `lmdeploy` provide nearly identical acceleration and usage for VLM inference; choose one based on your situation, but avoid installing both modules at once to prevent potential dependency conflicts.
The `lmdeploy` module provides acceleration for VLM model inference and is suitable for GPUs with Volta architecture or later (8 GB VRAM or more). Installing this module can significantly improve model inference speed.
```bash
uv pip install "mineru[core,lmdeploy]"
```
> [!TIP]
> If an exception occurs while installing the extension package that includes `lmdeploy`, refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/get_started/installation.html) to try to resolve it.
---
### Install the lightweight client to connect to an OpenAI-compatible server
If you need to install a lightweight client on edge devices to connect to an OpenAI-compatible server for vlm mode, you can install MinerU's base package, which is very lightweight and suitable for devices with only a CPU and a network connection.
```bash
uv pip install mineru
```

Some files were not shown because too many files have changed in this diff.