Compare commits

416 commits, `release-2.…` → `release-2.…` (tag names truncated in this export). The commit table's Author and Date columns were not captured; the SHA list ran from `94dcf754b3` (newest) to `8bb8b715c1` (oldest).
.github/ISSUE_TEMPLATE/bug_report.yml (16 lines changed, vendored)

```diff
@@ -122,7 +122,21 @@ body:
       #multiple: false
       options:
         -
-        - "2.0.x"
+        - "<2.2.0"
+        - "2.2.x"
+        - ">=2.5"
     validations:
       required: true
+
+  - type: dropdown
+    id: backend_name
+    attributes:
+      label: Backend name | 解析后端
+      #multiple: false
+      options:
+        -
+        - "vlm"
+        - "pipeline"
+    validations:
+      required: true
```
README.md (158 lines changed)

```diff
@@ -1,7 +1,7 @@
 <div align="center" xmlns="http://www.w3.org/1999/html">
 <!-- logo -->
 <p align="center">
-  <img src="docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
+  <img src="https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
 </p>

 <!-- icon -->
@@ -18,7 +18,8 @@
 [](https://huggingface.co/spaces/opendatalab/MinerU)
 [](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
 [](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
-[](https://arxiv.org/abs/2409.18839)
+[](https://arxiv.org/abs/2409.18839)
+[](https://arxiv.org/abs/2509.22186)
 [](https://deepwiki.com/opendatalab/MinerU)
```

(Badge alt text and image URLs were stripped by the export; only the link targets remain.)
@@ -43,6 +44,39 @@: new entries added under "# Changelog":

- 2025/11/26 2.6.5 Release
  - Added support for a new backend, `vlm-lmdeploy-engine`. Its usage is similar to `vlm-vllm-(async)engine`, but it uses `lmdeploy` as the inference engine and, unlike `vllm`, additionally supports native inference acceleration on Windows.

- 2025/11/04 2.6.4 Release
  - Added a timeout for PDF image rendering (default 300 seconds, configurable via the environment variable `MINERU_PDF_RENDER_TIMEOUT`) to prevent abnormal PDF files from blocking the rendering process indefinitely.
  - Added CPU thread-count options for ONNX models (default: the system CPU core count, configurable via the environment variables `MINERU_INTRA_OP_NUM_THREADS` and `MINERU_INTER_OP_NUM_THREADS`) to reduce CPU contention in high-concurrency scenarios.

- 2025/10/31 2.6.3 Release
  - Added support for a new backend, `vlm-mlx-engine`, enabling MLX-accelerated inference for the MinerU2.5 model on Apple Silicon devices. Compared to the `vlm-transformers` backend, `vlm-mlx-engine` delivers a 100%–200% speed improvement.
  - Bug fixes: #3849, #3859

- 2025/10/24 2.6.2 Release
  - `pipeline` backend optimizations
    - Added experimental support for Chinese formulas, enabled by setting the environment variable `export MINERU_FORMULA_CH_SUPPORT=1`. The feature may slightly slow down MFR and fail on some long formulas, so enable it only when Chinese formulas need to be parsed; set the variable to `0` to disable it.
    - `OCR` speed improved significantly, by 200%–300%, thanks to the optimization provided by [@cjsdurj](https://github.com/cjsdurj)
    - `OCR` models improved in accuracy and coverage for Latin-script recognition, and the Cyrillic, Arabic, Devanagari, Telugu (te), and Tamil (ta) models were updated to `ppocr-v5`, with accuracy over 40% higher than the previous generation
  - `vlm` backend optimizations
    - `table_caption` and `table_footnote` matching logic improved, raising caption/footnote matching accuracy and reading-order quality on pages with multiple consecutive tables
    - Reduced CPU usage under high concurrency with the `vllm` backend, lowering server pressure
    - Adapted to `vllm` 0.11.0
  - General optimizations
    - Improved cross-page table merging, including new support for merging cross-page continuation tables, which helps in multi-column merge scenarios
    - Added the environment variable `MINERU_TABLE_MERGE_ENABLE` for the table-merging feature; merging is enabled by default and can be disabled by setting the variable to `0`

- 2025/09/26 2.5.4 Released
  - 🎉🎉 The MinerU2.5 [Technical Report](https://arxiv.org/abs/2509.22186) is now available! We welcome you to read it for a comprehensive overview of its model architecture, training strategy, data engineering, and evaluation results.
  - Fixed an issue where some `PDF` files were mistakenly identified as `AI` files, causing parsing failures

- 2025/09/20 2.5.3 Released
  - Dependency version ranges adjusted so that Turing and earlier GPU architectures can use vLLM-accelerated inference for the MinerU2.5 model.
  - `pipeline` backend compatibility fixes for torch 2.8.0.
  - Reduced the default concurrency of the vLLM async backend to lower server pressure and avoid connection closures caused by high load.
  - More compatibility-related details can be found in the [announcement](https://github.com/opendatalab/MinerU/discussions/3548)

- 2025/09/19 2.5.2 Released
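Taken together, the environment variables introduced across these releases are set before invoking the CLI; a minimal sketch (the values here are illustrative, not recommendations):

```bash
# Illustrative values only; the documented defaults are 300 s, the CPU core count,
# Chinese formula support off, and table merging on.
export MINERU_PDF_RENDER_TIMEOUT=300    # abort PDF page rendering after N seconds
export MINERU_INTRA_OP_NUM_THREADS=4    # ONNX intra-op thread count
export MINERU_INTER_OP_NUM_THREADS=2    # ONNX inter-op thread count
export MINERU_FORMULA_CH_SUPPORT=1      # experimental Chinese formulas (pipeline backend)
export MINERU_TABLE_MERGE_ENABLE=0      # disable cross-page table merging
mineru -p <input_path> -o <output_path>
```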
```diff
@@ -560,7 +594,7 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
 - Automatically recognize and convert formulas in the document to LaTeX format.
 - Automatically recognize and convert tables in the document to HTML format.
 - Automatically detect scanned PDFs and garbled PDFs and enable OCR functionality.
-- OCR supports detection and recognition of 84 languages.
+- OCR supports detection and recognition of 109 languages.
 - Supports multiple output formats, such as multimodal and NLP Markdown, JSON sorted by reading order, and rich intermediate formats.
 - Supports various visualization results, including layout visualization and span visualization, for efficient confirmation of output quality.
 - Supports running in a pure CPU environment, and also supports GPU(CUDA)/NPU(CANN)/MPS acceleration
```
@@ -597,41 +631,75 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing…

> In non-mainline environments, due to the diversity of hardware and software configurations, as well as third-party dependency compatibility issues, we cannot guarantee 100% project availability. Therefore, for users who wish to use this project in non-recommended environments, we suggest carefully reading the documentation and FAQ first. Most issues already have corresponding solutions in the FAQ. We also encourage community feedback to help us gradually expand support.

The old three-column requirements table (backends `pipeline` / `vlm-transformers` / `vlm-vllm`; Turing-or-later GPUs with 6 GB+/8 GB+ VRAM or Apple Silicon; 16 GB+ RAM with 32 GB recommended; 20 GB+ disk; Python 3.10–3.13) is replaced by:

<table>
  <thead>
  <tr>
    <th rowspan="2">Parsing Backend</th>
    <th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
    <th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
  </tr>
  <tr>
    <th>transformers</th>
    <th>mlx-engine</th>
    <th>vllm-engine / <br>vllm-async-engine</th>
    <th>lmdeploy-engine</th>
    <th>http-client</th>
  </tr>
  </thead>
  <tbody>
  <tr>
    <th>Backend Features</th>
    <td>Fast, no hallucinations</td>
    <td>Good compatibility, <br>but slower</td>
    <td>Faster than transformers</td>
    <td>Fast, compatible with the vLLM ecosystem</td>
    <td>Fast, compatible with the LMDeploy ecosystem</td>
    <td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
  </tr>
  <tr>
    <th>Operating System</th>
    <td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
    <td style="text-align:center;">macOS<sup>3</sup></td>
    <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup></td>
    <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup></td>
    <td>Any</td>
  </tr>
  <tr>
    <th>CPU Inference Support</th>
    <td colspan="2" style="text-align:center;">✅</td>
    <td colspan="3" style="text-align:center;">❌</td>
    <td>Not required</td>
  </tr>
  <tr>
    <th>GPU Requirements</th>
    <td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
    <td>Apple Silicon</td>
    <td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
    <td>Not required</td>
  </tr>
  <tr>
    <th>Memory Requirements</th>
    <td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
    <td>8 GB</td>
  </tr>
  <tr>
    <th>Disk Space Requirements</th>
    <td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
    <td>2 GB</td>
  </tr>
  <tr>
    <th>Python Version</th>
    <td colspan="6" style="text-align:center;">3.10–3.13<sup>7</sup></td>
  </tr>
  </tbody>
</table>

<sup>1</sup> The accuracy metric is the End-to-End Evaluation Overall score of OmniDocBench (v1.5), tested on the latest `MinerU` version.
<sup>2</sup> Linux supports only distributions released in 2019 or later.
<sup>3</sup> MLX requires macOS 13.5 or later; 14.0 or higher is recommended.
<sup>4</sup> Windows vLLM support is via WSL2 (Windows Subsystem for Linux).
<sup>5</sup> On Windows, LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, run it via WSL2.
<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
<sup>7</sup> Windows + LMDeploy supports only Python 3.10–3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.
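The backend names in this table correspond to values accepted by MinerU's CLI via `-b` (the `vlm-http-client` form appears verbatim in the Docker docs later in this compare); a sketch, assuming the remaining names map the same way:

```bash
mineru -p <input_path> -o <output_path> -b pipeline           # fast, no hallucinations
mineru -p <input_path> -o <output_path> -b vlm-transformers   # broadest VLM compatibility
mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000
```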
### Install MinerU

```diff
@@ -650,8 +718,8 @@ uv pip install -e .[core]
 > [!TIP]
-> `mineru[core]` includes all core features except `vLLM` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
-> If you need to use `vLLM` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
+> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
+> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).

 ---
```
@@ -733,6 +801,16 @@ Currently, some models in this project are trained based on YOLO. However, since…

# Citation

A BibTeX entry for the MinerU2.5 report is added ahead of the existing MinerU entry:

```bibtex
@misc{niu2025mineru25decoupledvisionlanguagemodel,
      title={MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing},
      author={Junbo Niu and Zheng Liu and Zhuangcheng Gu and Bin Wang and Linke Ouyang and Zhiyuan Zhao and Tao Chu and Tianyao He and Fan Wu and Qintong Zhang and Zhenjiang Jin and Guang Liang and Rui Zhang and Wenzheng Zhang and Yuan Qu and Zhifei Ren and Yuefeng Sun and Yuanhong Zheng and Dongsheng Ma and Zirui Tang and Boyu Niu and Ziyang Miao and Hejun Dong and Siyi Qian and Junyuan Zhang and Jingzhou Chen and Fangdong Wang and Xiaomeng Zhao and Liqun Wei and Wei Li and Shasha Wang and Ruiliang Xu and Yuanyuan Cao and Lu Chen and Qianqian Wu and Huaiyu Gu and Lindong Lu and Keming Wang and Dechen Lin and Guanlin Shen and Xuanhe Zhou and Linfeng Zhang and Yuhang Zang and Xiaoyi Dong and Jiaqi Wang and Bo Zhang and Lei Bai and Pei Chu and Weijia Li and Jiang Wu and Lijun Wu and Zhenxiang Li and Guangyu Wang and Zhongying Tu and Chao Xu and Kai Chen and Yu Qiao and Bowen Zhou and Dahua Lin and Wentao Zhang and Conghui He},
      year={2025},
      eprint={2509.22186},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.22186},
}

@misc{wang2024mineruopensourcesolutionprecise,
      title={MinerU: An Open-Source Solution for Precise Document Content Extraction},
      author={Bin Wang and Chao Xu and Xiaomeng Zhao and Linke Ouyang and Fan Wu and Zhiyuan Zhao and Rui Xu and Kaiwen Liu and Yuan Qu and Fukai Shang and Bo Zhang and Liqun Wei and Zhihao Sui and Wei Li and Botian Shi and Yu Qiao and Dahua Lin and Conghui He},
```

(The remainder of the existing entry is truncated in this export.)

```diff
@@ -771,4 +849,4 @@
 - [OmniDocBench (A Comprehensive Benchmark for Document Parsing and Evaluation)](https://github.com/opendatalab/OmniDocBench)
 - [Magic-HTML (Mixed web page extraction tool)](https://github.com/opendatalab/magic-html)
 - [Magic-Doc (Fast speed ppt/pptx/doc/docx/pdf extraction tool)](https://github.com/InternLM/magic-doc)
-- [Dingo: A Comprehensive AI Data Quality Evaluation Tool](https://github.com/MigoXLab/dingo)
+- [Dingo: A Comprehensive AI Data Quality Evaluation Tool](https://github.com/MigoXLab/dingo)
```

(Whitespace-only change to the final line.)
README_zh-CN.md (168 lines changed)

Hunks `@@ -1,7 +1,7 @@` and `@@ -18,7 +18,8 @@` mirror the README.md changes above: the logo `src` switches to the jsdelivr URL, and an arXiv badge for https://arxiv.org/abs/2509.22186 is added alongside the existing https://arxiv.org/abs/2409.18839 badge.
@@ -44,6 +45,43 @@: new entries added under "# 更新记录" (Changelog), translated from Chinese:

- 2025/11/26 2.6.5 Release
  - Added support for a new backend, `vlm-lmdeploy-engine`. Its usage is similar to `vlm-vllm-(async)engine`, but it uses `lmdeploy` as the inference engine and, unlike `vllm`, additionally supports native inference acceleration on Windows.
  - Added adaptation support for the domestic accelerator platforms Ascend (`npu`), T-Head (`ppu`), and MetaX (`maca`). Users on those platforms can run the `pipeline` and `vlm` models and accelerate vlm inference with the `vllm`/`lmdeploy` engines; see [其他加速卡适配](https://opendatalab.github.io/MinerU/zh/usage/) for details.
  - Adapting to domestic platforms is nontrivial. We have tried to ensure the adaptations are complete and stable, but stability/compatibility issues and accuracy-alignment gaps may remain; please choose a suitable environment and scenario according to the status indicators on the adaptation documentation pages.
  - If you encounter any issue not covered by the documentation while using these adaptations, please report it in the designated [discussions thread](https://github.com/opendatalab/MinerU/discussions/4064) so other users can find the solution.

- 2025/11/04 2.6.4 Release
  - Added a timeout for PDF image rendering (default 300 seconds, configurable via the environment variable `MINERU_PDF_RENDER_TIMEOUT`) to prevent abnormal PDF files from blocking rendering indefinitely.
  - Added CPU thread-count options for ONNX models (default: the system CPU core count, configurable via `MINERU_INTRA_OP_NUM_THREADS` and `MINERU_INTER_OP_NUM_THREADS`) to reduce CPU contention in high-concurrency scenarios.

- 2025/10/31 2.6.3 Release
  - Added support for a new backend, `vlm-mlx-engine`, using `MLX` to accelerate `MinerU2.5` inference on Apple Silicon devices; it is 100%–200% faster than the `vlm-transformers` backend.
  - Bug fixes: #3849 #3859

- 2025/10/24 2.6.2 Release
  - `pipeline` backend optimizations
    - Added experimental support for Chinese formulas, enabled via `export MINERU_FORMULA_CH_SUPPORT=1`. The feature may slightly slow down MFR and fail on some long formulas; enable it only when Chinese formulas need to be parsed, and set the variable to `0` to disable it.
    - `OCR` speed improved significantly, by 200%–300%; thanks to [@cjsdurj](https://github.com/cjsdurj) for the optimization
    - `OCR` models improved in accuracy and coverage for Latin-script recognition, and the Cyrillic, Arabic, Devanagari, Telugu (te), and Tamil (ta) models were updated to `ppocr-v5`, with accuracy over 40% higher than the previous generation
  - `vlm` backend optimizations
    - Improved `table_caption`/`table_footnote` matching logic, raising caption/footnote matching accuracy and reading-order quality on pages with multiple consecutive tables
    - Reduced CPU usage under high concurrency with the `vllm` backend, lowering server pressure
    - Adapted to `vllm` 0.11.0
  - General optimizations
    - Improved cross-page table merging, with new support for merging cross-page continuation tables, which helps in multi-column merge scenarios
    - Added the environment variable `MINERU_TABLE_MERGE_ENABLE`; table merging is enabled by default and can be disabled by setting it to `0`

- 2025/09/26 2.5.4 Released
  - 🎉🎉 The MinerU2.5 [Technical Report](https://arxiv.org/abs/2509.22186) is now available; read it for a comprehensive view of the model architecture, training strategy, data engineering, and evaluation results.
  - Fixed an issue where some `pdf` files were mistakenly identified as `ai` files and could not be parsed

- 2025/09/20 2.5.3 Released
  - Dependency version ranges adjusted so that Turing and earlier GPU architectures can use vLLM-accelerated inference for the MinerU2.5 model.
  - `pipeline` backend compatibility fixes for torch 2.8.0.
  - Lowered the vLLM async backend's default concurrency to reduce server pressure and avoid connection closures caused by high load.
  - More compatibility-related details can be found in the [announcement](https://github.com/opendatalab/MinerU/discussions/3547)

- 2025/09/19 2.5.2 Released
  We are officially releasing MinerU2.5, currently the strongest multimodal large model for document parsing. With only 1.2B parameters, MinerU2.5 surpasses top multimodal models such as Gemini2.5-Pro, GPT-4o, and Qwen2.5-VL-72B across the board on the OmniDocBench document-parsing benchmark, and leads mainstream document-parsing-specific models (such as dots.ocr, MonkeyOCR, and PP-StructureV3) by a clear margin.
  The model is available on [HuggingFace](https://huggingface.co/opendatalab/MinerU2.5-2509-1.2B) and [ModelScope](https://modelscope.cn/models/opendatalab/MinerU2.5-2509-1.2B); you are welcome to download and use it!
@@ -547,7 +585,7 @@: the same feature-list update as README.md, in the Chinese feature list: the OCR bullet changes from 84 to 109 supported languages; the surrounding bullets (LaTeX formula conversion, HTML table conversion, scanned/garbled-PDF detection with OCR, multiple output formats, layout/span visualization, pure-CPU operation with GPU(CUDA)/NPU(CANN)/MPS acceleration) are unchanged context.
@@ -582,42 +620,80 @@: the same requirements-table replacement as README.md. The translated content is identical: the old three-column `pipeline` / `vlm-transformers` / `vlm-vllm` table gives way to the six-column backend table with the seven footnotes shown above. A new tip block is also added (translated from Chinese):

> [!TIP]
> Beyond the mainstream environments and platforms above, we also collect support reports for other platforms from community users; see [其他加速卡适配](https://opendatalab.github.io/MinerU/zh/usage/) for details.
> If you would like to share your own environment-adaptation experience with the community, submit it via [show-and-tell](https://github.com/opendatalab/MinerU/discussions/categories/show-and-tell) or open a PR against the [acceleration cards documentation](https://github.com/opendatalab/MinerU/tree/master/docs/zh/usage/acceleration_cards).
### 安装 MinerU (Install MinerU)

Hunk `@@ -636,8 +712,8 @@` (context: `uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple`) mirrors the README.md tip change, translated: `mineru[core]` covers all core features except `vLLM`/`LMDeploy` acceleration (previously just `vLLM`), is compatible with Windows / Linux / macOS, and suits most users; for `vLLM`/`LMDeploy`-accelerated VLM inference, or to install a lightweight client on edge devices, see the [扩展模块安装指南](https://opendatalab.github.io/MinerU/zh/quick_start/extension_modules/).

---
Hunks `@@ -719,6 +795,16 @@` (context: `mineru -p <input_path> -o <output_path>`) and `@@ -757,4 +843,4 @@` duplicate the README.md changes above: the same MinerU2.5 BibTeX entry is added ahead of the existing MinerU entry, and the final Dingo link line receives the same whitespace-only change.
Python demo script (file name not captured in this export):

```diff
@@ -235,5 +235,7 @@ if __name__ == '__main__':
     """To enable VLM mode, change the backend to 'vlm-xxx'"""
     # parse_doc(doc_path_list, output_dir, backend="vlm-transformers")  # more general.
-    # parse_doc(doc_path_list, output_dir, backend="vlm-vllm-engine")  # faster(engine).
     # parse_doc(doc_path_list, output_dir, backend="vlm-mlx-engine")  # faster than transformers in macOS 13.5+.
+    # parse_doc(doc_path_list, output_dir, backend="vlm-vllm-engine")  # faster(vllm-engine).
+    # parse_doc(doc_path_list, output_dir, backend="vlm-lmdeploy-engine")  # faster(lmdeploy-engine).
     # parse_doc(doc_path_list, output_dir, backend="vlm-http-client", server_url="http://127.0.0.1:30000")  # faster(client).
```
docker/china Dockerfile (file name not captured in this export; the DaoCloud mirror marks it as the China-region variant):

```diff
@@ -1,8 +1,9 @@
-# Use DaoCloud mirrored vllm image for China region
+# Use DaoCloud mirrored vllm image for China region for gpu with Ampere architecture and above (Compute Capability>=8.0)
+# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
 FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.1.1

-# Use the official vllm image
-# FROM vllm/vllm-openai:v0.10.1.1
+# Use DaoCloud mirrored vllm image for China region for gpu with Turing architecture and below (Compute Capability<8.0)
+# FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.2

 # Install libgl for opencv support & Noto fonts for Chinese characters
 RUN apt-get update && \
```
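Choosing between the two base images comes down to the GPU's Compute Capability. Besides the linked NVIDIA table, recent drivers can report it directly; a sketch (the `compute_cap` query field requires a reasonably new driver):

```bash
# >= 8.0 (Ampere and above) -> vllm-openai:v0.10.1.1; < 8.0 (Turing and below) -> vllm-openai:v0.10.2
nvidia-smi --query-gpu=name,compute_cap --format=csv
```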
docker/china/maca.Dockerfile (new file, 34 lines)

```dockerfile
# Base image providing the vLLM or LMDeploy inference environment; choose one as needed. Requires amd64 (x86-64) CPU + MetaX GPU.
FROM cr.metax-tech.com/public-ai-release/maca/vllm:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-amd64
# Base image containing the LMDeploy inference environment, requiring amd64(x86-64) CPU + metax GPU.
# FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/maca:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-lmdeploy0.10.2-amd64

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Patch torchvision metadata to be compatible with torch 2.6
RUN sed -i '3s/^Version: 0.15.1+metax3\.1\.0\.4$/Version: 0.21.0+metax3.1.0.4/' /opt/conda/lib/python3.10/site-packages/torchvision-0.15.1+metax3.1.0.4.dist-info/METADATA && \
    mv /opt/conda/lib/python3.10/site-packages/torchvision-0.15.1+metax3.1.0.4.dist-info /opt/conda/lib/python3.10/site-packages/torchvision-0.21.0+metax3.1.0.4.dist-info

# Install latest mineru
RUN /opt/conda/bin/python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
    /opt/conda/bin/python3 -m pip install 'mineru[core]>=2.6.5' \
        numpy==1.26.4 \
        opencv-python==4.11.0.86 \
        -i https://mirrors.aliyun.com/pypi/simple && \
    /opt/conda/bin/python3 -m pip cache purge

# Download models and update the configuration file
RUN /bin/bash -c "/opt/conda/bin/mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```
docker/china/npu.Dockerfile (new file, 29 lines)

```dockerfile
# Base image providing the vLLM or LMDeploy inference environment; choose one as needed. Requires ARM (AArch64) CPU + Ascend NPU.
FROM quay.io/ascend/vllm-ascend:v0.11.0rc1
# Base image containing the LMDeploy inference environment, requiring ARM(AArch64) CPU + Ascend NPU.
# FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:mineru-a2

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 \
        libglib2.0-0 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install latest mineru
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
    python3 -m pip install -U 'mineru[core]>=2.6.5' -i https://mirrors.aliyun.com/pypi/simple && \
    python3 -m pip cache purge

# Download models and update the configuration file
RUN TORCH_DEVICE_BACKEND_AUTOLOAD=0 /bin/bash -c "mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```
docker/china/ppu.Dockerfile (new file, 30 lines)

```dockerfile
# Base image providing the vLLM or LMDeploy inference environment; choose one as needed. Requires amd64 (x86-64) CPU + T-Head PPU.
FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/ppu:ppu-pytorch2.6.0-ubuntu24.04-cuda12.6-vllm0.8.5-py312
# Base image containing the LMDeploy inference environment, requiring amd64(x86-64) CPU + t-head PPU.
# FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ppu:mineru-ppu

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install latest mineru
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
    python3 -m pip install 'mineru[core]>=2.6.5' \
        numpy==1.26.4 \
        opencv-python==4.11.0.86 \
        -i https://mirrors.aliyun.com/pypi/simple && \
    python3 -m pip cache purge

# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```
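All three accelerator-specific images build the same way as the global Dockerfile shown later in this compare; a sketch (the tag name is illustrative):

```bash
# Build one of the accelerator-specific images, e.g. the Ascend NPU variant
docker build -t mineru:latest -f docker/china/npu.Dockerfile .
```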
docker/compose.yaml:

```diff
@@ -1,19 +1,38 @@
 services:
-  mineru-vllm-server:
-    image: mineru-vllm:latest
-    container_name: mineru-vllm-server
+  mineru-openai-server:
+    image: mineru:latest
+    container_name: mineru-openai-server
     restart: always
-    profiles: ["vllm-server"]
+    profiles: ["openai-server"]
     ports:
       - 30000:30000
     environment:
       MINERU_MODEL_SOURCE: local
-    entrypoint: mineru-vllm-server
+    entrypoint: mineru-openai-server
     command:
+      # ==================== Engine Selection ====================
+      # WARNING: Only ONE engine can be enabled at a time!
+      # Choose 'vllm' OR 'lmdeploy' (uncomment one line below)
+      --engine vllm
+      # --engine lmdeploy
+
+      # ==================== vLLM Engine Parameters ====================
+      # Uncomment if using --engine vllm
       --host 0.0.0.0
       --port 30000
-      # --data-parallel-size 2  # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode
-      # --gpu-memory-utilization 0.5  # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter; if VRAM issues persist, try lowering it further to 0.4 or below.
+      # Multi-GPU configuration (increase throughput)
+      # --data-parallel-size 2
+      # Single GPU memory optimization (reduce if VRAM insufficient)
+      # --gpu-memory-utilization 0.5  # Try 0.4 or lower if issues persist
+
+      # ==================== LMDeploy Engine Parameters ====================
+      # Uncomment if using --engine lmdeploy
+      # --server-name 0.0.0.0
+      # --server-port 30000
+      # Multi-GPU configuration (increase throughput)
+      # --dp 2
+      # Single GPU memory optimization (reduce if VRAM insufficient)
+      # --cache-max-entry-count 0.5  # Try 0.4 or lower if issues persist
     ulimits:
       memlock: -1
       stack: 67108864
@@ -25,11 +44,11 @@ services:
       reservations:
         devices:
           - driver: nvidia
-            device_ids: ["0"]
+            device_ids: ["0"]  # Modify for multiple GPUs: ["0", "1"]
             capabilities: [gpu]

   mineru-api:
-    image: mineru-vllm:latest
+    image: mineru:latest
     container_name: mineru-api
     restart: always
     profiles: ["api"]
@@ -39,11 +58,21 @@ services:
       MINERU_MODEL_SOURCE: local
     entrypoint: mineru-api
     command:
+      # ==================== Server Configuration ====================
       --host 0.0.0.0
       --port 8000
-      # parameters for vllm-engine
-      # --data-parallel-size 2  # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode
-      # --gpu-memory-utilization 0.5  # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter; if VRAM issues persist, try lowering it further to 0.4 or below.
+
+      # ==================== vLLM Engine Parameters ====================
+      # Multi-GPU configuration
+      # --data-parallel-size 2
+      # Single GPU memory optimization
+      # --gpu-memory-utilization 0.5  # Try 0.4 or lower if VRAM insufficient
+
+      # ==================== LMDeploy Engine Parameters ====================
+      # Multi-GPU configuration
+      # --dp 2
+      # Single GPU memory optimization
+      # --cache-max-entry-count 0.5  # Try 0.4 or lower if VRAM insufficient
     ulimits:
       memlock: -1
       stack: 67108864
@@ -53,11 +82,11 @@ services:
       reservations:
         devices:
           - driver: nvidia
-            device_ids: [ "0" ]
-            capabilities: [ gpu ]
+            device_ids: ["0"]  # Modify for multiple GPUs: ["0", "1"]
+            capabilities: [gpu]

   mineru-gradio:
-    image: mineru-vllm:latest
+    image: mineru:latest
     container_name: mineru-gradio
     restart: always
     profiles: ["gradio"]
@@ -67,14 +96,30 @@ services:
       MINERU_MODEL_SOURCE: local
     entrypoint: mineru-gradio
     command:
+      # ==================== Gradio Server Configuration ====================
       --server-name 0.0.0.0
       --server-port 7860
-      --enable-vllm-engine true  # Enable the vllm engine for Gradio
-      # --enable-api false  # If you want to disable the API, set this to false
-      # --max-convert-pages 20  # If you want to limit the number of pages for conversion, set this to a specific number
-      # parameters for vllm-engine
-      # --data-parallel-size 2  # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode
-      # --gpu-memory-utilization 0.5  # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter; if VRAM issues persist, try lowering it further to 0.4 or below.
+
+      # ==================== Gradio Feature Settings ====================
+      # --enable-api false  # Disable API endpoint
+      # --max-convert-pages 20  # Limit conversion page count
+
+      # ==================== Engine Selection ====================
+      # WARNING: Only ONE engine can be enabled at a time!
+
+      # Option 1: vLLM Engine (recommended for most users)
+      --enable-vllm-engine true
+      # Multi-GPU configuration
+      # --data-parallel-size 2
+      # Single GPU memory optimization
+      # --gpu-memory-utilization 0.5  # Try 0.4 or lower if VRAM insufficient
+
+      # Option 2: LMDeploy Engine
+      # --enable-lmdeploy-engine true
+      # Multi-GPU configuration
+      # --dp 2
+      # Single GPU memory optimization
+      # --cache-max-entry-count 0.5  # Try 0.4 or lower if VRAM insufficient
     ulimits:
       memlock: -1
       stack: 67108864
@@ -84,5 +129,5 @@ services:
       reservations:
         devices:
           - driver: nvidia
-            device_ids: [ "0" ]
-            capabilities: [ gpu ]
+            device_ids: ["0"]  # Modify for multiple GPUs: ["0", "1"]
+            capabilities: [gpu]
```
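Starting a profile follows the pattern shown for `openai-server` in the deployment doc below; sketches for the other two profiles (ports per the compose file above):

```bash
docker compose -f compose.yaml --profile api up -d     # FastAPI service on port 8000
docker compose -f compose.yaml --profile gradio up -d  # Gradio WebUI on port 7860
```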
docker/global/Dockerfile (per the deployment doc below):

```diff
@@ -1,6 +1,10 @@
-# Use the official vllm image
+# Use the official vllm image for gpu with Ampere architecture and above (Compute Capability>=8.0)
+# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
 FROM vllm/vllm-openai:v0.10.1.1

+# Use the official vllm image for gpu with Turing architecture and below (Compute Capability<8.0)
+# FROM vllm/vllm-openai:v0.10.2
+
 # Install libgl for opencv support & Noto fonts for Chinese characters
 RUN apt-get update && \
     apt-get install -y \
```
79 new binary files added under `docs/assets/images/` (PNG screenshots, 14–286 KiB): BISHENG_01; Cherry_Studio_1–8; coze_0 and Coze_1–21; DataFLow_01 and DataFlow_02; Dify_1–26; DingTalk_01; FastGPT_01 and FastGPT_02; ModelWhale_01, ModelWhale_02, and ModelWhale_1; RagFlow_01 and RagFlow_02; Sider_1; n8n_0–10.
A docs page (file name not captured in this export) receives the same two changes as README.md:

```diff
@@ -19,7 +19,8 @@
 [](https://huggingface.co/spaces/opendatalab/MinerU)
 [](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
 [](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
-[](https://arxiv.org/abs/2409.18839)
+[](https://arxiv.org/abs/2409.18839)
+[](https://arxiv.org/abs/2509.22186)
 [](https://deepwiki.com/opendatalab/MinerU)
@@ -56,7 +57,7 @@ Compared to well-known commercial products domestically and internationally, Min…
 - Automatically identify and convert formulas in documents to LaTeX format
 - Automatically identify and convert tables in documents to HTML format
 - Automatically detect scanned PDFs and garbled PDFs, and enable OCR functionality
-- OCR supports detection and recognition of 84 languages
+- OCR supports detection and recognition of 109 languages
 - Support multiple output formats, such as multimodal and NLP Markdown, reading-order-sorted JSON, and information-rich intermediate formats
 - Support multiple visualization results, including layout visualization, span visualization, etc., for efficient confirmation of output effects and quality inspection
 - Support pure CPU environment operation, and support GPU(CUDA)/NPU(CANN)/MPS acceleration
```
@@ -6,11 +6,12 @@ MinerU provides a convenient Docker deployment method, which helps quickly set u
|
||||
|
||||
```bash
|
||||
wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/global/Dockerfile
|
||||
docker build -t mineru-vllm:latest -f Dockerfile .
|
||||
docker build -t mineru:latest -f Dockerfile .
|
||||
```
|
||||
|
||||
> [!TIP]
|
||||
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper/Blackwell platforms.
|
||||
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default. This version of vLLM v1 engine has limited support for GPU models.
|
||||
> If you cannot use vLLM accelerated inference on Turing and earlier architecture GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.10.2`.
|
||||
|
||||
## Docker Description
|
||||
|
||||
@@ -19,7 +20,7 @@ MinerU's Docker uses `vllm/vllm-openai` as the base image, so it includes the `v
> [!NOTE]
> Requirements for using `vllm` to accelerate VLM model inference:
>
> - Device must have Turing architecture or later graphics cards with 8GB+ available VRAM.
> - Device must have Volta architecture or later graphics cards with 8GB+ available VRAM.
> - The host machine's graphics driver should support CUDA 12.8 or higher; you can check the driver version using the `nvidia-smi` command.
> - The Docker container must have access to the host machine's graphics devices.
@@ -30,7 +31,7 @@ docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 -p 7860:7860 -p 8000:8000 \
  --ipc=host \
  -it mineru-vllm:latest \
  -it mineru:latest \
  /bin/bash
```
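
After the container starts, it is worth confirming that the GPUs are actually visible inside it. A quick check, assuming the NVIDIA container toolkit is configured on the host (the runtime injects `nvidia-smi` into the container when `--gpus` is used):

```bash
# Should list the host GPUs from inside the container
docker run --rm --gpus all mineru:latest nvidia-smi
```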
@@ -50,17 +51,17 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
>
>- The `compose.yaml` file contains configurations for multiple services of MinerU; you can choose to start specific services as needed.
>- Different services might have additional parameter configurations, which you can view and edit in the `compose.yaml` file.
>- Due to the pre-allocation of GPU memory by the `vllm` inference acceleration framework, you may not be able to run multiple `vllm` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-vllm-server` service or using the `vlm-vllm-engine` backend.
>- Due to the pre-allocation of GPU memory by the `vllm` inference acceleration framework, you may not be able to run multiple `vllm` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-openai-server` service or using the `vlm-vllm-engine` backend.

---

### Start vllm-server service
Connect to `vllm-server` via the `vlm-http-client` backend
### Start OpenAI-compatible server service
Connect to `openai-server` via the `vlm-http-client` backend
```bash
docker compose -f compose.yaml --profile vllm-server up -d
docker compose -f compose.yaml --profile openai-server up -d
```
>[!TIP]
>In another terminal, connect to the vllm server via the http client (only requires CPU and network, no vllm environment needed)
>In another terminal, connect to the openai server via the http client (only requires CPU and network, no vllm environment needed)
> ```bash
> mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000
> ```
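>
> Before running the client, you can verify that the server is reachable. A minimal check, assuming the default port mapping from `compose.yaml`:
> ```bash
> # The OpenAI-compatible endpoint lists the served model once the service is ready
> curl http://<server_ip>:30000/v1/models
> ```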
@@ -83,4 +84,3 @@ connect to `vllm-server` via `vlm-http-client` backend
>[!TIP]
>
>- Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI.
>- Access `http://<server_ip>:7860/?view=api` to use the Gradio API.

@@ -4,26 +4,43 @@ MinerU supports installing extension modules on demand based on different needs
## Common Scenarios

### Core Functionality Installation
The `core` module is the core dependency of MinerU, containing all functional modules except `vllm`. Installing this module ensures the basic functionality of MinerU works properly.
The `core` module is the core dependency of MinerU, containing all functional modules except `vllm`/`lmdeploy`. Installing this module ensures the basic functionality of MinerU works properly.
```bash
uv pip install mineru[core]
uv pip install "mineru[core]"
```
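
If you are starting from a clean machine, the usual flow is to create and activate a virtual environment first. A minimal sketch using `uv` (the Python version is an arbitrary pick from the supported 3.10-3.13 range):

```bash
# Create an isolated environment, activate it, then install the core extra
uv venv --python 3.12
source .venv/bin/activate
uv pip install "mineru[core]"
```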

---

### Using `vllm` to Accelerate VLM Model Inference
The `vllm` module provides acceleration support for VLM model inference, suitable for graphics cards with Turing architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
In the configuration, `all` includes both the `core` and `vllm` modules, so `mineru[all]` and `mineru[core,vllm]` are equivalent.
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them based on your actual needs, but installing both modules simultaneously is not recommended, to avoid potential dependency conflicts.

The `vllm` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.

```bash
uv pip install mineru[all]
uv pip install "mineru[core,vllm]"
```
> [!TIP]
> If exceptions occur while installing the complete package including vllm, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve the issue, or use the [Docker](./docker_deployment.md) deployment method directly.
> If exceptions occur while installing the extra package including vllm, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve the issue, or use the [Docker](./docker_deployment.md) deployment method directly.

---

### Installing Lightweight Client to Connect to vllm-server
If you need to install a lightweight client on edge devices to connect to `vllm-server`, you can install the basic mineru package, which is very lightweight and suitable for devices with only CPU and network connectivity.
### Using `lmdeploy` to Accelerate VLM Model Inference
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them based on your actual needs, but installing both modules simultaneously is not recommended, to avoid potential dependency conflicts.

The `lmdeploy` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.

```bash
uv pip install "mineru[core,lmdeploy]"
```
> [!TIP]
> If exceptions occur while installing the extra package including lmdeploy, please refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/get_started/installation.html) to try to resolve the issue.

---

### Installing Lightweight Client to Connect to OpenAI-compatible Servers
If you need to install a lightweight client on edge devices to connect to an OpenAI-compatible server to use VLM mode, you can install the basic mineru package, which is very lightweight and suitable for devices with only CPU and network connectivity.
```bash
uv pip install mineru
```
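
Once the base package is installed, the client only needs the server's address; the same command shown in the Docker deployment documentation applies here:

```bash
mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000
```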
@@ -27,41 +27,75 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
> In non-mainstream environments, due to the diversity of hardware and software configurations, as well as compatibility issues with third-party dependencies, we cannot guarantee that the project will be 100% usable. Therefore, for users who wish to use this project in non-recommended environments, we suggest carefully reading the documentation and FAQ first, as most issues already have corresponding solutions in the FAQ. We also encourage community feedback on issues so that we can gradually expand the support range.

<table border="1">
<tr>
<td>Parsing Backend</td>
<td>pipeline</td>
<td>vlm-transformers</td>
<td>vlm-vllm</td>
</tr>
<tr>
<td>Operating System</td>
<td>Linux / Windows / macOS</td>
<td>Linux / Windows</td>
<td>Linux / Windows (via WSL2)</td>
</tr>
<tr>
<td>CPU Inference Support</td>
<td>✅</td>
<td colspan="2">❌</td>
</tr>
<tr>
<td>GPU Requirements</td>
<td>Turing architecture and later, 6GB+ VRAM or Apple Silicon</td>
<td colspan="2">Turing architecture and later, 8GB+ VRAM</td>
</tr>
<tr>
<td>Memory Requirements</td>
<td colspan="3">Minimum 16GB+, recommended 32GB+</td>
</tr>
<tr>
<td>Disk Space Requirements</td>
<td colspan="3">20GB+, SSD recommended</td>
</tr>
<tr>
<td>Python Version</td>
<td colspan="3">3.10-3.13</td>
</tr>
<thead>
<tr>
<th rowspan="2">Parsing Backend</th>
<th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
</tr>
<tr>
<th>transformers</th>
<th>mlx-engine</th>
<th>vllm-engine / <br>vllm-async-engine</th>
<th>lmdeploy-engine</th>
<th>http-client</th>
</tr>
</thead>
<tbody>
<tr>
<th>Backend Features</th>
<td>Fast, no hallucinations</td>
<td>Good compatibility, <br>but slower</td>
<td>Faster than transformers</td>
<td>Fast, compatible with the vLLM ecosystem</td>
<td>Fast, compatible with the LMDeploy ecosystem</td>
<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
</tr>
<tr>
<th>Operating System</th>
<td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
<td style="text-align:center;">macOS<sup>3</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup></td>
<td>Any</td>
</tr>
<tr>
<th>CPU Inference Support</th>
<td colspan="2" style="text-align:center;">✅</td>
<td colspan="3" style="text-align:center;">❌</td>
<td>Not required</td>
</tr>
<tr>
<th>GPU Requirements</th>
<td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
<td>Apple Silicon</td>
<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
<td>Not required</td>
</tr>
<tr>
<th>Memory Requirements</th>
<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
<td>8 GB</td>
</tr>
<tr>
<th>Disk Space Requirements</th>
<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
<td>2 GB</td>
</tr>
<tr>
<th>Python Version</th>
<td colspan="6" style="text-align:center;">3.10-3.13<sup>7</sup></td>
</tr>
</tbody>
</table>

<sup>1</sup> The accuracy metric is the End-to-End Evaluation Overall score of OmniDocBench (v1.5), tested on the latest `MinerU` version.
<sup>2</sup> Linux supports only distributions released in 2019 or later.
<sup>3</sup> MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.
<sup>4</sup> On Windows, vLLM is supported via WSL2 (Windows Subsystem for Linux).
<sup>5</sup> On Windows, LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, run it via WSL2.
<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
<sup>7</sup> Windows + LMDeploy supports only Python 3.10–3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.

### Install MinerU

@@ -80,8 +114,8 @@ uv pip install -e .[core]
```

> [!TIP]
> `mineru[core]` includes all core features except `vllm` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `vllm` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).

---

@@ -397,10 +397,10 @@ Text levels are distinguished through the `text_level` field:
{
    "type": "image",
    "img_path": "images/a8ecda1c69b27e4f79fce1589175a9d721cbdc1cf78b4cc06a015f3746f6b9d8.jpg",
    "img_caption": [
    "image_caption": [
        "Fig. 1. Annual flow duration curves of daily flows from Pine Creek, Australia, 1989–2000. "
    ],
    "img_footnote": [],
    "image_footnote": [],
    "bbox": [
        62,
        480,
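
Downstream code that consumes `content_list` JSON needs to track this rename from `img_caption`/`img_footnote` to `image_caption`/`image_footnote`. A quick way to pull the captions under the new schema, assuming `jq` is available (the file name is illustrative):

```bash
# Print the captions of all image blocks from a content_list file
jq -r '.[] | select(.type == "image") | .image_caption[]' content_list.json
```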
@@ -1,8 +1,8 @@
# Advanced Command Line Parameters

## vllm Acceleration Parameter Optimization
## Pass-Through of Inference Engine Parameters

### Performance Optimization Parameters
### vllm Acceleration Parameter Optimization
> [!TIP]
> If you can already use vllm normally to accelerate VLM model inference but want to further improve inference speed, you can try the following parameters:
>
@@ -10,8 +10,9 @@

### Parameter Passing Instructions
> [!TIP]
> - All officially supported vllm parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-vllm-server`, `mineru-gradio`, `mineru-api`
> - All officially supported vllm/lmdeploy parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-openai-server`, `mineru-gradio`, `mineru-api`
> - To learn more about `vllm` parameter usage, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/cli/serve.html)
> - To learn more about `lmdeploy` parameter usage, please refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/llm/api_server.html)
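> As an illustration of the pass-through, an engine flag can simply be appended to an ordinary `mineru` call. The flag below is a real vLLM option, though the value shown is only an example:
> ```bash
> # --gpu-memory-utilization is forwarded unchanged to the vLLM engine
> mineru -p <input_path> -o <output_path> -b vlm-vllm-engine --gpu-memory-utilization 0.8
> ```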

## GPU Device Selection and Configuration

@@ -21,7 +22,7 @@
> ```bash
> CUDA_VISIBLE_DEVICES=1 mineru -p <input_path> -o <output_path>
> ```
> - This specification method is effective for all command line calls, including `mineru`, `mineru-vllm-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.
> - This specification method is effective for all command line calls, including `mineru`, `mineru-openai-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.

### Common Device Configuration Examples
> [!TIP]
@@ -38,9 +39,9 @@
> [!TIP]
> Here are some possible usage scenarios:
>
> - If you have multiple graphics cards and need to specify cards 0 and 1 to start `vllm-server` with multi-GPU parallelism, you can use the following command:
> - If you have multiple graphics cards and need to specify cards 0 and 1 to start `openai-server` with multi-GPU parallelism, you can use the following command:
> ```bash
> CUDA_VISIBLE_DEVICES=0,1 mineru-vllm-server --port 30000 --data-parallel-size 2
> CUDA_VISIBLE_DEVICES=0,1 mineru-openai-server --engine vllm --port 30000 --data-parallel-size 2
> ```
>
> - If you have multiple graphics cards and need to start two `fastapi` services on cards 0 and 1, listening on different ports, you can use the following commands:
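> For example (the port numbers and the `--host`/`--port` flag spellings here are assumptions to verify against `mineru-api --help`):
> ```bash
> # Pin one FastAPI instance to each GPU, on separate ports (illustrative flags)
> CUDA_VISIBLE_DEVICES=0 mineru-api --host 0.0.0.0 --port 8000 &
> CUDA_VISIBLE_DEVICES=1 mineru-api --host 0.0.0.0 --port 8001 &
> ```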
@@ -11,7 +11,7 @@ Options:
-p, --path PATH                 Input file path or directory (required)
-o, --output PATH               Output directory (required)
-m, --method [auto|txt|ocr]     Parsing method: auto (default), txt, ocr (pipeline backend only)
-b, --backend [pipeline|vlm-transformers|vlm-vllm-engine|vlm-http-client]
-b, --backend [pipeline|vlm-transformers|vlm-vllm-engine|vlm-lmdeploy-engine|vlm-http-client]
                                Parsing backend (default: pipeline)
-l, --lang [ch|ch_server|ch_lite|en|korean|japan|chinese_cht|ta|te|ka|th|el|latin|arabic|east_slavic|cyrillic|devanagari]
                                Specify document language (improves OCR accuracy, pipeline backend only)
@@ -20,7 +20,7 @@ Options:
-e, --end INTEGER               Ending page number for parsing (0-based)
-f, --formula BOOLEAN           Enable formula parsing (default: enabled)
-t, --table BOOLEAN             Enable table parsing (default: enabled)
-d, --device TEXT               Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline backend only)
-d, --device TEXT               Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline and vlm-transformers backends only)
--vram INTEGER                  Maximum GPU VRAM usage per process (GB) (pipeline backend only)
--source [huggingface|modelscope|local]
                                Model source, default: huggingface
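
For reference, these options combine in the obvious way. An illustrative pipeline-backend run (paths are placeholders):

```bash
# OCR an English document with the pipeline backend on GPU 0, capping VRAM at 8 GB
mineru -p <input_path> -o <output_path> -b pipeline -m ocr -l en -d cuda:0 --vram 8
```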
@@ -68,7 +68,7 @@ Here are the environment variables and their descriptions:
- `MINERU_DEVICE_MODE`:
    * Used to specify the inference device
    * supports device types like `cpu/cuda/cuda:0/npu/mps`
    * only effective for the `pipeline` backend.
    * only effective for the `pipeline` and `vlm-transformers` backends.

- `MINERU_VIRTUAL_VRAM_SIZE`:
    * Used to specify maximum GPU VRAM usage per process (GB)
@@ -87,6 +87,27 @@ Here are the environment variables and their descriptions:
    * Used to enable formula parsing
    * defaults to `true`, can be set to `false` through environment variables to disable formula parsing.

- `MINERU_TABLE_ENABLE`:
- `MINERU_FORMULA_CH_SUPPORT`:
    * Used to enable Chinese formula parsing optimization (experimental feature)
    * Default is `false`, can be set to `true` via environment variable to enable Chinese formula parsing optimization.
    * Only effective for the `pipeline` backend.

- `MINERU_TABLE_ENABLE`:
    * Used to enable table parsing
    * defaults to `true`, can be set to `false` through environment variables to disable table parsing.
    * Default is `true`, can be set to `false` via environment variable to disable table parsing.

- `MINERU_TABLE_MERGE_ENABLE`:
    * Used to enable the table merging functionality
    * Default is `true`, can be set to `false` via environment variable to disable table merging.

- `MINERU_PDF_RENDER_TIMEOUT`:
    * Used to set the timeout (in seconds) for rendering PDF pages to images
    * Default is `300` seconds, can be set to other values via environment variable to adjust the rendering timeout.

- `MINERU_INTRA_OP_NUM_THREADS`:
    * Used to set the intra_op thread count for ONNX models; affects the computation speed of individual operators
    * Default is `-1` (auto-select), can be set to other values via environment variable to adjust the thread count.

- `MINERU_INTER_OP_NUM_THREADS`:
    * Used to set the inter_op thread count for ONNX models; affects the parallel execution of multiple operators
    * Default is `-1` (auto-select), can be set to other values via environment variable to adjust the thread count.
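
For example, several of these variables can be combined inline for a single run (the values are arbitrary):

```bash
# Disable table parsing and extend the PDF render timeout for one invocation
MINERU_TABLE_ENABLE=false MINERU_PDF_RENDER_TIMEOUT=600 mineru -p <input_path> -o <output_path>
```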
@@ -29,7 +29,7 @@ mineru -p <input_path> -o <output_path>
mineru -p <input_path> -o <output_path> -b vlm-transformers
```
> [!TIP]
> The vlm backend additionally supports `vllm` acceleration. Compared to the `transformers` backend, `vllm` can achieve 20-30x speedup. You can check the installation method for the complete package supporting `vllm` acceleration in the [Extension Modules Installation Guide](../quick_start/extension_modules.md).
> The vlm backend additionally supports `vllm`/`lmdeploy` acceleration. Compared to the `transformers` backend, inference speed can be significantly improved. You can check the installation method for the complete package supporting `vllm`/`lmdeploy` acceleration in the [Extension Modules Installation Guide](../quick_start/extension_modules.md).

If you need to adjust parsing options through custom parameters, you can also check the more detailed [Command Line Tools Usage Instructions](./cli_tools.md) in the documentation.
@@ -48,15 +48,21 @@ If you need to adjust parsing options through custom parameters, you can also ch
  mineru-gradio --server-name 0.0.0.0 --server-port 7860
  # Or using vlm-vllm-engine/pipeline backends (requires vllm environment)
  mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-vllm-engine true
  # Or using vlm-lmdeploy-engine/pipeline backends (requires lmdeploy environment)
  mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-lmdeploy-engine true
  ```
  >[!TIP]
  >
  >- Access `http://127.0.0.1:7860` in your browser to use the Gradio WebUI.
  >- Access `http://127.0.0.1:7860/?view=api` to use the Gradio API.

- Using `http-client/server` method:
  ```bash
  # Start vllm server (requires vllm environment)
  mineru-vllm-server --port 30000
  # Start openai compatible server (requires vllm or lmdeploy environment)
  mineru-openai-server
  # Or start vllm server (requires vllm environment)
  mineru-openai-server --engine vllm --port 30000
  # Or start lmdeploy server (requires lmdeploy environment)
  mineru-openai-server --engine lmdeploy --server-port 30000
  ```
  >[!TIP]
  >In another terminal, connect to the vllm server via the http client (only requires CPU and network, no vllm environment needed)
@@ -65,8 +71,8 @@ If you need to adjust parsing options through custom parameters, you can also ch
> ```

> [!NOTE]
> All officially supported vllm parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-vllm-server`, `mineru-gradio`, `mineru-api`.
> We have compiled some commonly used parameters and usage methods for `vllm`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
> All officially supported `vllm/lmdeploy` parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-openai-server`, `mineru-gradio`, `mineru-api`.
> We have compiled some commonly used parameters and usage methods for `vllm/lmdeploy`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).

## Extending MinerU Functionality with Configuration Files

@@ -83,8 +89,28 @@ Here are some available configuration options:

- `llm-aided-config`:
    * Used to configure parameters for LLM-assisted title hierarchy
    * Compatible with all LLM models supporting the `openai protocol`; defaults to using Alibaba Cloud Bailian's `qwen2.5-32b-instruct` model.
    * Compatible with all LLM models supporting the `openai protocol`; defaults to using Alibaba Cloud Bailian's `qwen3-next-80b-a3b-instruct` model.
    * You need to configure your own API key and set `enable` to `true` to enable this feature.
    * If your API provider does not support the `enable_thinking` parameter, please remove it manually.
    * For example, the `llm-aided-config` section of your configuration file may look like:
    ```json
    "llm-aided-config": {
        "api_key": "your_api_key",
        "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "model": "qwen3-next-80b-a3b-instruct",
        "enable_thinking": false,
        "enable": false
    }
    ```
    * To remove the `enable_thinking` parameter, simply delete the line containing `"enable_thinking": false`, resulting in:
    ```json
    "llm-aided-config": {
        "api_key": "your_api_key",
        "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "model": "qwen3-next-80b-a3b-instruct",
        "enable": false
    }
    ```

- `models-dir`:
    * Used to specify the local model storage directory
@@ -19,7 +19,8 @@
[ModelScope Studio](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[Hugging Face Space](https://huggingface.co/spaces/opendatalab/MinerU)
[Colab Demo](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[arXiv:2409.18839](https://arxiv.org/abs/2409.18839)
[arXiv:2509.22186](https://arxiv.org/abs/2509.22186)
[DeepWiki](https://deepwiki.com/opendatalab/MinerU)

<div align="center">
@@ -55,7 +56,7 @@ MinerU was born out of the pre-training process of [InternLM](https://github.com/InternLM/InternLM)
- Automatically identify and convert formulas in documents to LaTeX format
- Automatically identify and convert tables in documents to HTML format
- Automatically detect scanned PDFs and garbled PDFs, and enable OCR functionality
- OCR supports detection and recognition of 84 languages
- OCR supports detection and recognition of 109 languages
- Support multiple output formats, such as Markdown for multimodal and NLP use, reading-order-sorted JSON, and information-rich intermediate formats
- Support multiple visualization results, including layout visualization and span visualization, for efficiently confirming output quality
- Support pure CPU operation, with GPU (CUDA) / NPU (CANN) / MPS acceleration

@@ -6,11 +6,12 @@ MinerU provides a convenient Docker deployment method, which helps quickly set up the environment and

```bash
wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/china/Dockerfile
docker build -t mineru-vllm:latest -f Dockerfile .
docker build -t mineru:latest -f Dockerfile .
```

> [!TIP]
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper/Blackwell platforms.
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default.
> This version of the vLLM v1 engine has limited support for some GPU models; if you cannot use vLLM-accelerated inference on Turing or earlier architecture GPUs, you can resolve this by changing the base image to `vllm/vllm-openai:v0.10.2`.

## Docker Description

@@ -18,7 +19,7 @@ MinerU's Docker uses `vllm/vllm-openai` as the base image, so in the Docker
> [!NOTE]
> Requirements for using `vllm` to accelerate VLM model inference:
>
> - Device must have Turing architecture or later graphics cards with 8GB+ available VRAM.
> - Device must have Volta architecture or later graphics cards with 8GB+ available VRAM.
> - The host machine's graphics driver should support CUDA 12.8 or higher; you can check the driver version with the `nvidia-smi` command.
> - The Docker container must have access to the host machine's graphics devices.
@@ -30,7 +31,7 @@ docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 -p 7860:7860 -p 8000:8000 \
  --ipc=host \
  -it mineru-vllm:latest \
  -it mineru:latest \
  /bin/bash
```
@@ -49,17 +50,17 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
>
>- The `compose.yaml` file contains configurations for multiple services of MinerU; you can choose to start specific services as needed.
>- Different services might have additional parameter configurations, which you can view and edit in the `compose.yaml` file.
>- Due to the pre-allocation of GPU memory by the `vllm` inference acceleration framework, you may not be able to run multiple `vllm` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-vllm-server` service or using the `vlm-vllm-engine` backend.
>- Due to the pre-allocation of GPU memory by the `vllm` inference acceleration framework, you may not be able to run multiple `vllm` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-openai-server` service or using the `vlm-vllm-engine` backend.

---

### Start vllm-server service
Connect to `vllm-server` via the `vlm-http-client` backend
### Start OpenAI-compatible server service
Connect to `openai-server` via the `vlm-http-client` backend
```bash
docker compose -f compose.yaml --profile vllm-server up -d
docker compose -f compose.yaml --profile openai-server up -d
```
>[!TIP]
>In another terminal, connect to the vllm server via the http client (only requires CPU and network, no vllm environment needed)
>In another terminal, connect to the openai server via the http client (only requires CPU and network, no vllm environment needed)
> ```bash
> mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000
> ```
@@ -81,5 +82,4 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
```
>[!TIP]
>
>- Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI.
>- Access `http://<server_ip>:7860/?view=api` to use the Gradio API.
>- Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI.
@@ -4,26 +4,41 @@ MinerU supports installing extension modules on demand according to different needs, to enhance functionality or
## Common Scenarios

### Core Functionality Installation
The `core` module is the core dependency of MinerU, containing all functional modules except `vllm`. Installing this module ensures the basic functionality of MinerU works properly.
The `core` module is the core dependency of MinerU, containing all functional modules except `vllm`/`lmdeploy`. Installing this module ensures the basic functionality of MinerU works properly.
```bash
uv pip install mineru[core]
uv pip install "mineru[core]"
```

---

### Using `vllm` to Accelerate VLM Model Inference
The `vllm` module provides acceleration support for VLM model inference, suitable for graphics cards with Turing architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
In the configuration, `all` includes both the `core` and `vllm` modules, so `mineru[all]` and `mineru[core,vllm]` are equivalent.
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them based on your actual needs, but installing both modules simultaneously is not recommended, to avoid potential dependency conflicts.

The `vllm` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
```bash
uv pip install mineru[all]
uv pip install "mineru[core,vllm]"
```
> [!TIP]
> If exceptions occur while installing the complete package including vllm, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve the issue, or use the [Docker](./docker_deployment.md) deployment method directly.
> If exceptions occur while installing the extra package including `vllm`, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve the issue, or use the [Docker](./docker_deployment.md) deployment method directly.

---

### Installing Lightweight Client to Connect to vllm-server
If you need to install a lightweight client on edge devices to connect to `vllm-server`, you can install the basic mineru package, which is very lightweight and suitable for devices with only CPU and network connectivity.
### Using `lmdeploy` to Accelerate VLM Model Inference
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them based on your actual needs, but installing both modules simultaneously is not recommended, to avoid potential dependency conflicts.

The `lmdeploy` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
```bash
uv pip install "mineru[core,lmdeploy]"
```
> [!TIP]
> If exceptions occur while installing the extra package including `lmdeploy`, please refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/get_started/installation.html) to try to resolve the issue.

---

### Installing Lightweight Client to Connect to OpenAI-compatible Servers
If you need to install a lightweight client on edge devices to connect to an OpenAI-compatible server to use VLM mode, you can install the basic mineru package, which is very lightweight and suitable for devices with only CPU and network connectivity.
```bash
uv pip install mineru
```