Compare commits

..

732 Commits

Author SHA1 Message Date
Xiaomeng Zhao
94dcf754b3 Merge pull request #4068 from opendatalab/dev
Dev
2025-11-26 14:28:01 +08:00
Xiaomeng Zhao
77c18e958f Merge pull request #4067 from myhloli/dev
fix: remove unnecessary dependencies from ppu.Dockerfile
2025-11-26 14:27:22 +08:00
myhloli
16997bea1b fix: remove unnecessary dependencies from ppu.Dockerfile 2025-11-26 14:25:35 +08:00
Xiaomeng Zhao
1a15dcee32 Merge pull request #4065 from opendatalab/master
master->dev
2025-11-26 12:03:45 +08:00
Xiaomeng Zhao
f78b25e3de Update feedback link for the domestic platform adaptation guide 2025-11-26 12:01:13 +08:00
myhloli
24c973d99e Update version.py with new version 2025-11-26 03:48:32 +00:00
Xiaomeng Zhao
4e5f03bba1 Merge pull request #4063 from opendatalab/release-2.6.5 2025-11-26 11:39:19 +08:00
Xiaomeng Zhao
dfd99baccd Update index.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:38:13 +08:00
Xiaomeng Zhao
c291cc1a59 Update RagFlow.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:33:26 +08:00
Xiaomeng Zhao
6f20fefadf Update utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:27:09 +08:00
Xiaomeng Zhao
700321b23d Merge pull request #4062 from myhloli/dev
Adapted for NPU, PPU, and MACA.
2025-11-26 11:09:54 +08:00
Xiaomeng Zhao
0ba3992173 Update docs/zh/usage/acceleration_cards/METAX.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:09:33 +08:00
Xiaomeng Zhao
096717e4d0 Update mineru/cli/vlm_server.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:07:30 +08:00
Xiaomeng Zhao
ab365420b9 Update mineru/backend/vlm/utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 11:05:08 +08:00
myhloli
27e1fd63e7 fix: clarify feedback process for issues encountered on domestic platform adaptation in README_zh-CN.md 2025-11-26 10:45:23 +08:00
myhloli
4d47634913 fix: add platform information for testing in Ascend.md, METAX.md, and THead.md 2025-11-26 00:23:22 +08:00
myhloli
7e33501cd0 fix: update engine support indicators in Ascend.md, METAX.md, and THead.md for clarity and consistency 2025-11-26 00:15:33 +08:00
myhloli
a6f4eb3727 fix: update mineru-vl-utils version and adjust transformers constraint in pyproject.toml; enhance support note for vlm-lmdeploy-engine in README_zh-CN.md 2025-11-26 00:08:22 +08:00
myhloli
0d2bebd8b1 fix: add support for vlm-lmdeploy-engine and enhance compatibility with domestic acceleration platforms in README files 2025-11-25 20:33:52 +08:00
myhloli
b7a209a4a7 fix: add usage tips for NPU and MACA acceleration cards in Ascend.md, METAX.md, and THead.md 2025-11-25 19:51:55 +08:00
myhloli
08c9fadbcb fix: update lmdeploy version range in pyproject.toml for compatibility 2025-11-25 19:18:36 +08:00
myhloli
424c37984b fix: add note formatting for VLM model inference support in Ascend.md, METAX.md, and THead.md 2025-11-25 18:41:18 +08:00
myhloli
35d5ba8b8f fix: update maca.Dockerfile to use absolute paths for Python and mineru commands 2025-11-25 18:38:34 +08:00
myhloli
b4f725258d fix: update MACA Dockerfile and METAX.md for improved clarity and support 2025-11-25 17:42:46 +08:00
myhloli
4a081c3214 fix: update stability indicators and descriptions in Ascend.md, METAX.md, and THead.md for clarity 2025-11-25 17:16:48 +08:00
myhloli
0e2e12ca84 fix: add MINERU_LMDEPLOY_DEVICE environment variable for MACA in METAX.md 2025-11-25 16:01:10 +08:00
myhloli
48ed75d935 fix: update torchvision version in maca.Dockerfile for compatibility 2025-11-25 16:00:33 +08:00
myhloli
34c46cb83d fix: disable cuDNN for MACA device in common.py to improve compatibility 2025-11-25 15:43:30 +08:00
myhloli
16f167b351 fix: update transformers version constraint in pyproject.toml for compatibility 2025-11-25 14:50:27 +08:00
myhloli
91df5c8bb7 fix: update index.md and METAX.md to enhance documentation for METAX deployment and usage 2025-11-25 03:34:35 +08:00
myhloli
86b1fca74c fix: update Dockerfiles to improve base image configurations and dependency installations 2025-11-25 02:20:25 +08:00
myhloli
444fd6f027 fix: update ppu.Dockerfile to include additional dependencies for mineru installation 2025-11-25 02:12:11 +08:00
myhloli
9aa46e9c6c fix: update Ascend.md and THead.md to correct the order of vllm and lmdeploy for VLM model inference 2025-11-25 02:04:26 +08:00
myhloli
7beee5be62 fix: update THead.md to reflect correct status indicators for vlm-engine 2025-11-25 01:22:34 +08:00
myhloli
64586d03ea fix: import MinerULogitsProcessor conditionally for vllm-engine and vllm-async-engine backends 2025-11-25 00:15:15 +08:00
myhloli
9c1c9d0c89 fix: update status indicators in Ascend.md to reflect correct state 2025-11-21 01:54:01 +08:00
myhloli
2111c35b83 feat: add Cambricon support documentation and update index for acceleration cards 2025-11-21 00:52:15 +08:00
myhloli
83ad8e81a9 fix: update status indicators in Ascend.md and THead.md for various components 2025-11-21 00:38:47 +08:00
myhloli
18ee522c77 fix: update THead.md to reference ppu.Dockerfile for building images 2025-11-20 23:36:43 +08:00
myhloli
72fa59bab2 fix: update THead.md to reference ppu.Dockerfile for building images 2025-11-20 23:36:29 +08:00
myhloli
28ebc0e2e8 fix: update THead.md to reference ppu.Dockerfile for building images 2025-11-20 23:21:29 +08:00
myhloli
3bcdd0a10a fix: update Docker build tags in Ascend.md for npu images 2025-11-20 23:17:59 +08:00
myhloli
cd1c5c5e50 fix: update ppu.Dockerfile to include vLLM base image and specific package versions 2025-11-20 23:09:54 +08:00
myhloli
a83d351ccc fix: swap Dockerfile instructions for lmdeploy and vllm in Ascend.md and npu.Dockerfile 2025-11-20 18:46:54 +08:00
myhloli
7196f71153 fix: correct package name for qwen-vl-utils in pyproject.toml 2025-11-20 17:24:20 +08:00
myhloli
1c530f64f5 fix: enhance CUDA and NPU availability checks in utils.py 2025-11-20 17:10:04 +08:00
myhloli
997a131278 fix: update notes on backend type switching for NPU cards in Ascend.md 2025-11-20 16:17:55 +08:00
myhloli
eeeaca85f8 fix: update notes on backend type switching for NPU cards in Ascend.md 2025-11-20 16:13:57 +08:00
myhloli
c884d7ddb9 fix: correct vllm component names in Ascend.md 2025-11-20 16:02:42 +08:00
myhloli
7bffbe2541 fix: correct vllm component names in Ascend.md 2025-11-20 15:54:26 +08:00
myhloli
ed6fc3e44e fix: correct vllm component names in Ascend.md 2025-11-20 15:49:46 +08:00
myhloli
a01a5d798b fix: add environment variable for local model source in Docker usage instructions 2025-11-20 15:36:43 +08:00
myhloli
38f5995ae4 fix: clarify Dockerfile usage instructions for lmdeploy and vllm in Ascend.md 2025-11-19 20:17:37 +08:00
myhloli
e7c80da602 fix: update Python version support details for Windows and clarify dependency limitations 2025-11-19 20:08:50 +08:00
myhloli
33696974fe fix: update qwen_vl_utils version constraint and specify platform dependencies for mineru 2025-11-19 19:55:02 +08:00
myhloli
376d1e38d5 fix: update quick_usage.md to clarify support for vllm and lmdeploy acceleration 2025-11-19 19:43:11 +08:00
myhloli
c5385af754 fix: update advanced_cli_parameters.md to clarify parameter passing for vllm and lmdeploy 2025-11-19 19:35:54 +08:00
myhloli
422ee671d8 fix: update installation tips in extension_modules.md to clarify package terminology 2025-11-19 19:32:08 +08:00
myhloli
76b1a559f8 fix: add MINERU_LMDEPLOY_DEVICE environment variable and update Ascend.md with usage scenarios 2025-11-19 19:18:30 +08:00
myhloli
afc6dcd7b0 fix: update mineru-vl-utils version and add qwen_vl_utils dependency in pyproject.toml 2025-11-19 14:41:23 +08:00
myhloli
cf1fbd2923 fix: enhance device and backend configuration handling in lmdeploy and vlm modules 2025-11-19 14:41:01 +08:00
myhloli
a0f27bd80b fix: remove unnecessary port mappings in Docker run command for Ascend.md 2025-11-18 21:20:52 +08:00
myhloli
46f8c6d082 fix: update Ascend.md for clarity in Dockerfile editing instructions 2025-11-18 21:10:42 +08:00
myhloli
5f9fdd9b62 fix: update npu.Dockerfile to set TORCH_DEVICE_BACKEND_AUTOLOAD=0 for model download 2025-11-18 21:05:41 +08:00
myhloli
9ed6636ad2 fix: update Ascend.md to use --network=host in Docker build commands for improved network configuration 2025-11-18 20:52:07 +08:00
myhloli
f8af29e3a1 fix: simplify Docker build commands in Ascend.md for clarity 2025-11-18 20:32:50 +08:00
myhloli
669b6cd629 fix: update Ascend.md and npu.Dockerfile for improved clarity on Docker image tags and usage instructions 2025-11-18 20:21:26 +08:00
myhloli
281c965213 fix: update Ascend.md and cli_tools.md for improved clarity on environment setup and backend options 2025-11-18 20:06:58 +08:00
myhloli
80445f24bf fix: remove commented-out official vllm image lines in Dockerfile for cleaner configuration 2025-11-18 19:05:58 +08:00
myhloli
10af19f419 fix: update docker_deployment.md and extension_modules.md for clarity on GPU architecture requirements and service naming 2025-11-18 16:36:59 +08:00
myhloli
a149a8da50 fix: enhance comments in compose.yaml for clearer engine selection and GPU configuration guidance 2025-11-18 15:58:25 +08:00
myhloli
843ab52da0 fix: rename vllm-server to openai-server in compose.yaml for clarity and update command parameters 2025-11-18 15:51:14 +08:00
myhloli
506179f0c8 feat: add openai-server command for flexible inference engine selection in vlm_server 2025-11-18 15:28:19 +08:00
myhloli
43881d5f66 fix: update index.md and README files for improved clarity on lmdeploy-engine support 2025-11-17 11:24:03 +08:00
myhloli
ad9521528e fix: update base image descriptions in Dockerfiles for clarity on CPU architecture 2025-11-14 10:47:16 +08:00
myhloli
d67be0c7de fix: add lmdeploy-engine parameters to compose.yaml for improved multi-GPU support 2025-11-14 10:34:29 +08:00
myhloli
056f8af0ae fix: add libglib2.0-0 dependency in npu.Dockerfile for improved package support 2025-11-14 01:33:06 +08:00
Xiaomeng Zhao
4f8d897342 Merge pull request #3995 from myhloli/dev
fix: enhance http-client backend parameters in vlm_analyze.py for improved configuration options
2025-11-13 17:12:32 +08:00
myhloli
0a4c9e307f fix: enhance http-client backend parameters in vlm_analyze.py for improved configuration options 2025-11-13 17:11:07 +08:00
Xiaomeng Zhao
79f2d03d32 Merge pull request #3990 from myhloli/dev
Dev
2025-11-13 14:59:07 +08:00
myhloli
d2c93b770f fix: refactor backend handling in vlm_analyze.py for improved model loading and error handling 2025-11-13 14:27:53 +08:00
myhloli
bb25385097 fix: update docker_deployment.md to use 'mineru:latest' instead of 'mineru-vllm:latest' 2025-11-13 11:40:38 +08:00
myhloli
60c5f7d890 feat: add mineru-lmdeploy-server service to compose.yaml with configuration 2025-11-13 11:37:45 +08:00
myhloli
3293299f34 fix: update README to clarify Windows LMDeploy backend performance and compatibility 2025-11-13 11:22:12 +08:00
myhloli
6581af72b4 fix: update README to clarify Windows LMDeploy backend performance and compatibility 2025-11-12 19:59:21 +08:00
myhloli
4ba9c73458 feat: add Dockerfiles for camb and maca environments, update ppu base image 2025-11-12 19:48:52 +08:00
myhloli
f7509e7dc9 feat: add Dockerfiles for NPU and PPU environments with necessary dependencies 2025-11-12 19:22:17 +08:00
myhloli
19c2a6612b fix: enhance argument handling for device type and backend in lmdeploy server 2025-11-12 19:05:10 +08:00
myhloli
1b440a8e92 fix: enhance argument handling for device type and backend in lmdeploy server 2025-11-12 17:56:58 +08:00
myhloli
39e7aa52a2 fix: improve device type and backend handling in lmdeploy configuration 2025-11-12 11:32:10 +08:00
myhloli
0c8e004874 fix: remove unused variable in set_lmdeploy_backend function 2025-11-11 19:55:22 +08:00
myhloli
f9f67ddef4 fix: remove unused variable in set_lmdeploy_backend function 2025-11-11 19:55:07 +08:00
Xiaomeng Zhao
2ac829ca32 Merge pull request #3980 from myhloli/dev
Dev
2025-11-11 19:53:26 +08:00
myhloli
6bafca0555 fix: disable tokenizers parallelism in lmdeploy server configuration 2025-11-11 19:40:57 +08:00
myhloli
7516d3ddf4 fix: disable tokenizers parallelism in lmdeploy server configuration 2025-11-11 19:40:15 +08:00
myhloli
a2136c22a5 fix: add backend argument handling and logging for lmdeploy backend configuration 2025-11-11 19:37:54 +08:00
myhloli
3fcca35c73 fix: add Linux environment detection and set lmdeploy backend based on device type 2025-11-11 19:26:46 +08:00
myhloli
6c27bc7f53 fix: update README files to include lmdeploy-engine and adjust accuracy details 2025-11-11 12:00:14 +08:00
Xiaomeng Zhao
30f1db6e6d Merge pull request #3976 from opendatalab/add_lmdeploy_backend
Add lmdeploy backend
2025-11-11 11:46:43 +08:00
Xiaomeng Zhao
e80e53d4de Merge pull request #3975 from myhloli/add_lmdeploy_backend
fix: update device handling and backend configuration in analysis scripts
2025-11-11 11:46:12 +08:00
myhloli
8e5a780fc6 fix: clarify engine descriptions in client.py documentation 2025-11-11 11:45:11 +08:00
myhloli
ad35f0bbc2 fix: clarify engine descriptions in client.py documentation 2025-11-11 11:42:58 +08:00
myhloli
5c743dc169 fix: update device handling and backend configuration in analysis scripts 2025-11-11 11:40:52 +08:00
Xiaomeng Zhao
b26338d0ef Merge pull request #3974 from opendatalab/add_lmdeploy_backend
Add lmdeploy backend
2025-11-11 11:24:50 +08:00
Xiaomeng Zhao
275ae04e56 Merge pull request #3972 from myhloli/add_lmdeploy_backend
feat: add lmdeploy backend support and refactor related components
2025-11-11 11:17:29 +08:00
myhloli
672e252506 fix: set default device type to 'cuda' in lmdeploy server 2025-11-11 11:16:34 +08:00
myhloli
85558061ff feat: add lmdeploy backend support and refactor related components 2025-11-11 10:48:33 +08:00
Xiaomeng Zhao
b4c8a017ea Merge pull request #3964 from opendatalab/dev
Dev
2025-11-10 15:15:40 +08:00
Xiaomeng Zhao
5c8d05e076 Merge pull request #3963 from myhloli/dev
fix: improve PDF page import handling to skip failed pages and log warnings
2025-11-10 14:40:43 +08:00
myhloli
0cfc6c3d4e fix: improve PDF page import handling to skip failed pages and log warnings 2025-11-10 14:39:37 +08:00
Xiaomeng Zhao
cdedc13713 Merge pull request #3950 from myhloli/dev
feat: enhance RagFlow documentation with installation guide and MinerU integration details
2025-11-06 19:50:49 +08:00
myhloli
95172d7d17 feat: enhance RagFlow documentation with installation guide and MinerU integration details 2025-11-06 19:49:13 +08:00
Xiaomeng Zhao
5cc13b919a Merge pull request #3946 from jinminxi104/add_lmdeploy_backend
add lmdeploy-backend
2025-11-06 16:44:27 +08:00
jinminxi104
9ec03c0353 add lmdeploy-backend 2025-11-06 07:38:46 +00:00
myhloli
ef485db9a8 fix: update magika version constraint to allow for newer releases 2025-11-05 17:40:15 +08:00
Xiaomeng Zhao
16ff55b27f Merge pull request #3934 from opendatalab/master
master->dev
2025-11-05 00:35:01 +08:00
myhloli
fa1149cd4a Update version.py with new version 2025-11-04 12:25:58 +00:00
Xiaomeng Zhao
5a937d3059 Merge pull request #3932 from opendatalab/release-2.6.4
Release 2.6.4
2025-11-04 20:23:53 +08:00
Xiaomeng Zhao
f11e609a14 Merge pull request #3933 from myhloli/dev
Dev
2025-11-04 20:22:42 +08:00
Xiaomeng Zhao
e010b0974a Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-04 20:21:37 +08:00
Xiaomeng Zhao
fe1549960d Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-04 20:20:37 +08:00
myhloli
df23e45861 Merge remote-tracking branch 'origin/dev' into dev 2025-11-04 20:18:46 +08:00
myhloli
5ec07ee7ab feat: update environment variable for PDF rendering timeout and enhance documentation 2025-11-04 20:18:14 +08:00
Xiaomeng Zhao
f1ebf5a7f0 Merge pull request #3931 from myhloli/dev
Dev
2025-11-04 20:00:41 +08:00
Xiaomeng Zhao
dae2cc8514 Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-04 19:59:59 +08:00
myhloli
5de8f1a19f feat: add environment variables for PDF rendering timeout and ONNX thread management 2025-11-04 19:47:59 +08:00
myhloli
be2369bdd4 feat: add ONNX configuration for thread management and integrate into table structure 2025-11-04 19:09:33 +08:00
myhloli
51df4d8508 refactor: enhance PDF conversion function parameters and improve thread handling logic 2025-11-04 09:54:45 +08:00
Xiaomeng Zhao
f7225d8e17 Merge pull request #3918 from myhloli/dev
Dev
2025-11-03 22:09:59 +08:00
Xiaomeng Zhao
a9c9501af6 Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 22:09:29 +08:00
Xiaomeng Zhao
74de2725cb Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 22:08:20 +08:00
Xiaomeng Zhao
6250c453d9 Update mineru/utils/pdf_image_tools.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 22:04:49 +08:00
myhloli
54417a51f8 refactor: reorder import statements for clarity and consistency 2025-11-03 21:27:00 +08:00
myhloli
2f120db20e fix: update JSON URL to point to the master branch for model configuration 2025-11-03 21:24:13 +08:00
myhloli
2079395774 refactor: adjust thread count based on CPU cores and comment out image loading time logging 2025-11-03 21:02:58 +08:00
myhloli
b4c57116c1 refactor: move PDF byte conversion logic to pdf_page_id and simplify image conversion process 2025-11-03 20:57:18 +08:00
myhloli
ace7f76869 refactor: move PDF byte conversion functions to pdf_page_tools and simplify logic 2025-11-03 20:26:34 +08:00
myhloli
5349fd7ccd refactor: enhance PDF image loading by removing multiprocessing for Windows environment and improving logging 2025-11-03 19:41:22 +08:00
myhloli
5999f6664f refactor: simplify PDF byte preparation by removing multiprocessing and enhancing direct conversion 2025-11-03 19:31:39 +08:00
myhloli
245ae28c27 refactor: optimize page range calculation and enhance logging for image conversion process 2025-11-03 19:11:05 +08:00
myhloli
4afa045545 refactor: update import statement to use check_sys_env and adjust logging level for image loading 2025-11-03 19:10:25 +08:00
myhloli
c32ff88400 refactor: rename check_mac_env to check_sys_env and add Windows environment detection 2025-11-03 19:07:19 +08:00
myhloli
4214634de8 feat: add timing logs for PDF byte preparation to improve performance monitoring 2025-11-03 18:48:31 +08:00
myhloli
bffc6aff53 fix: streamline PDF conversion process by restructuring try-except block and ensuring proper resource management 2025-11-03 15:44:33 +08:00
myhloli
05e114f8b9 feat: implement multiprocessing for PDF conversion to enhance performance 2025-11-03 15:32:40 +08:00
myhloli
66d5f3dfd2 feat: refactor PDF image conversion to use get_end_page_id utility function and add multi-threading support 2025-11-03 15:08:31 +08:00
myhloli
305e3a61e8 fix: disable tokenizers parallelism to prevent potential issues 2025-11-01 02:00:04 +08:00
myhloli
b614bef035 feat: add multiprocessing support for PDF to image conversion with timeout handling 2025-10-31 17:50:59 +08:00
myhloli
cce16daf1f fix: update JSON URL to point to the dev branch in configure_model function 2025-10-31 15:37:08 +08:00
Xiaomeng Zhao
94eb35ffda Merge pull request #3905 from opendatalab/master
master->dev
2025-10-31 15:14:09 +08:00
myhloli
1ebc1ae841 Update version.py with new version 2025-10-31 07:09:16 +00:00
Xiaomeng Zhao
e90a17a3d2 Merge pull request #3902 from myhloli/dev
Dev
2025-10-31 14:59:32 +08:00
myhloli
61747bafdd fix: center-align column header for vlm accuracy in index.md table 2025-10-31 14:57:17 +08:00
Xiaomeng Zhao
374ace0a34 Merge pull request #3900 from opendatalab/release-2.6.3
Release 2.6.3
2025-10-31 14:50:54 +08:00
Xiaomeng Zhao
2c355d2d68 Merge pull request #3899 from myhloli/dev
fix: correct formatting of footnotes in README and README_zh-CN for c…
2025-10-31 14:50:31 +08:00
myhloli
512554196b fix: correct formatting of footnotes in README and README_zh-CN for clarity 2025-10-31 14:49:23 +08:00
Xiaomeng Zhao
a33715c015 Merge pull request #3887 from opendatalab/release-2.6.3
Release 2.6.3
2025-10-31 14:44:34 +08:00
Xiaomeng Zhao
3bc44c8526 Merge pull request #3898 from opendatalab/dev
Dev
2025-10-31 14:44:11 +08:00
Xiaomeng Zhao
4ccd0528f4 Merge pull request #3897 from myhloli/dev
Dev
2025-10-31 14:43:37 +08:00
myhloli
64d6a38bf5 fix: update help text formatting for PDF parsing options and bump config version check 2025-10-31 14:42:04 +08:00
Xiaomeng Zhao
9ede336a0c Merge pull request #3895 from opendatalab/dev
Dev
2025-10-31 14:19:27 +08:00
myhloli
1c0d4b8bc6 Merge remote-tracking branch 'origin/dev' into dev 2025-10-31 12:14:26 +08:00
myhloli
0b53696181 fix: update config version check to 1.3.0 in models_download.py 2025-10-31 12:13:52 +08:00
Xiaomeng Zhao
d06b105102 Merge pull request #3891 from myhloli/dev
Dev
2025-10-31 12:03:12 +08:00
myhloli
b70f49522e fix: prevent processing of empty content lists in pipeline middle JSON handling 2025-10-31 12:02:28 +08:00
myhloli
23d75bac09 refactor: simplify content list handling by consolidating layout and discarded paragraphs 2025-10-31 11:47:08 +08:00
myhloli
14ca71eed0 docs: enhance quick usage documentation with configuration examples and improve mac environment check 2025-10-31 11:42:37 +08:00
Xiaomeng Zhao
d519095436 Merge pull request #3888 from myhloli/dev
docs: update OCR language support to reflect recognition of 109 languages
2025-10-31 11:18:50 +08:00
myhloli
2238c49352 docs: update OCR language support to reflect recognition of 109 languages 2025-10-31 11:17:42 +08:00
Xiaomeng Zhao
ef71228e1a Merge pull request #3886 from myhloli/dev
Dev
2025-10-31 11:13:59 +08:00
myhloli
8bf407a5e5 docs: update quick_usage.md to format parameter name for clarity 2025-10-31 11:13:36 +08:00
myhloli
79fe3757b1 docs: add changelog entries for 2.6.3 release, highlighting new vlm-mlx-engine support and bug fixes 2025-10-31 11:11:06 +08:00
myhloli
c9dc5df28d docs: update system requirements and OCR language support in documentation 2025-10-31 09:42:35 +08:00
myhloli
57b2c819f9 docs: add release notes for version 2.6.3 and highlight new vlm-mlx-engine support 2025-10-30 21:12:55 +08:00
myhloli
04860456e8 Merge remote-tracking branch 'origin/dev' into dev 2025-10-30 20:32:02 +08:00
myhloli
14c334d2b0 feat: add macOS version check for mlx-engine backend support 2025-10-30 20:31:50 +08:00
myhloli
d57796a667 fix: update mineru-vl-utils version constraint to 0.1.15 2025-10-30 18:31:57 +08:00
myhloli
551802aebb docs: format version constraints in bug_report.yml for improved readability 2025-10-30 18:31:40 +08:00
myhloli
59b5ffaf95 docs: update default model in llm-aided-config and clarify enable_thinking parameter usage in quick_usage.md 2025-10-30 17:54:21 +08:00
myhloli
d975836b25 refactor: streamline resolution grouping and padding logic in batch_analyze.py 2025-10-30 17:08:09 +08:00
Xiaomeng Zhao
5351c76c5d Merge branch 'opendatalab:dev' into dev 2025-10-30 16:46:41 +08:00
Xiaomeng Zhao
324dd75a52 Merge pull request #3880 from baymax2099/2.6.2fix
Fix rounding error for height and width normalization
2025-10-30 16:44:34 +08:00
Xiaomeng Zhao
bb830c6cbf Merge pull request #3870 from aopstudio/add-quote
Quote pip install arguments in extension module docs
2025-10-30 16:42:41 +08:00
max
1fd357dd97 fix: when h is exactly a multiple of RESOLUTION_GROUP_STRIDE, it was incorrectly rounded up to the next multiple. 2025-10-30 16:20:27 +08:00
myhloli
51726f7ac4 fix: correct root directory path in pytorch_paddle.py 2025-10-30 15:55:31 +08:00
myhloli
d306abf8d7 docs: enhance table structure and content for backend features and system requirements in index.md 2025-10-30 01:19:49 +08:00
myhloli
a2aae1fa48 docs: enhance table structure and content for backend features and system requirements in index.md 2025-10-30 01:01:39 +08:00
myhloli
05ce84c5e8 docs: simplify phrasing for OpenAI compatibility in README and README_zh-CN 2025-10-30 00:57:41 +08:00
Xiaomeng Zhao
b2a2cac32e Merge pull request #3873 from myhloli/dev
Dev
2025-10-30 00:55:30 +08:00
myhloli
2dbb265cf9 docs: correct phrasing in README_zh-CN for OpenAI compatibility and CPU inference support 2025-10-30 00:54:01 +08:00
myhloli
737207582a docs: correct phrasing in README_zh-CN for OpenAI compatibility and CPU inference support 2025-10-30 00:48:46 +08:00
myhloli
d654238115 docs: update backend features and CPU inference support sections in README and README_zh-CN 2025-10-30 00:43:03 +08:00
myhloli
279e84bf58 fix: improve device compatibility check for bf16 support in model initialization 2025-10-30 00:33:24 +08:00
myhloli
9dfbdb8aec docs: enhance README and README_zh-CN with improved backend feature table and community feedback section 2025-10-29 22:18:35 +08:00
myhloli
931aebc5d5 docs: enhance README and README_zh-CN with improved backend feature table and community feedback section 2025-10-29 22:14:02 +08:00
myhloli
3896079940 docs: update README_zh-CN.md with improved backend feature table and clarifications 2025-10-29 21:56:05 +08:00
myhloli
a69e39860a feat: update README_zh-CN.md with enhanced backend feature table and requirements 2025-10-29 18:53:54 +08:00
aopstudio
5cd31f97b6 Quote pip install arguments in extension module docs
Updated the pip install commands in both English and Chinese quick start guides to quote the mineru extras arguments, ensuring correct parsing by the shell.
2025-10-29 16:17:38 +08:00
myhloli
08ee48c1d7 remove svg logos 2025-10-29 11:10:40 +08:00
myhloli
05cf5a491e fix: update config version check to 1.3.1 in models_download.py 2025-10-29 10:49:58 +08:00
myhloli
8a8fc59d20 feat: add new SVG logos for mineru and modelscope 2025-10-29 10:48:15 +08:00
Xiaomeng Zhao
7f96fa94b7 Merge pull request #3860 from myhloli/dev
feat: enhance API call parameters with conditional extra_body for thinking mode
2025-10-28 21:40:28 +08:00
Xiaomeng Zhao
ad29a6a02a Update mineru/utils/llm_aided.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 21:38:49 +08:00
Xiaomeng Zhao
54ac866554 Update mineru/utils/llm_aided.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 21:38:40 +08:00
myhloli
2080677d83 chore: bump config_version to 1.3.1 in mineru.template.json 2025-10-28 21:37:14 +08:00
myhloli
11a1f04b0f feat: enhance API call parameters with conditional extra_body for thinking mode 2025-10-28 21:31:43 +08:00
myhloli
8a7b216d67 Merge remote-tracking branch 'origin/dev' into dev 2025-10-28 17:24:07 +08:00
myhloli
e5dba06035 fix: improve help text for device mode option in client.py 2025-10-28 17:23:57 +08:00
Xiaomeng Zhao
beeef7068f Merge pull request #3841 from xvlincaigou/master
Update docs: specifying the device for the vlm-transformers backend is now supported
2025-10-28 17:21:23 +08:00
Xiaomeng Zhao
1f5db12adb Merge pull request #3855 from myhloli/dev
feat: add Mac environment checks and support for Apple Silicon in backend selection
2025-10-28 17:08:36 +08:00
Xiaomeng Zhao
e5c8508ad7 Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:08:28 +08:00
Xiaomeng Zhao
633afeb9e2 Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:08:14 +08:00
Xiaomeng Zhao
797011879a Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:08:02 +08:00
Xiaomeng Zhao
7365f8137c Update mineru/utils/check_mac_env.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-28 17:07:51 +08:00
myhloli
2f1369a877 feat: add Mac environment checks and support for Apple Silicon in backend selection 2025-10-28 17:03:56 +08:00
Xiaomeng Zhao
e803facba6 Merge pull request #3854 from myhloli/dev
refactor: update import paths for PytorchPaddleOCR and rename file
2025-10-28 17:03:41 +08:00
myhloli
dc7b341e02 refactor: update import paths for PytorchPaddleOCR and rename file 2025-10-28 15:57:36 +08:00
Xiaomeng Zhao
73c52b95f5 Merge pull request #3851 from myhloli/dev
fix: enhance handling of discarded blocks in content generation
2025-10-28 10:18:32 +08:00
myhloli
1037fd56bc fix: enhance handling of discarded blocks in content generation 2025-10-27 20:47:52 +08:00
xvlincaigou
25525ad899 Merge branch 'master' of github.com:xvlincaigou/MinerU 2025-10-25 22:19:45 +08:00
xvlincaigou
55a0cb95b7 [fix]docs about when param: device take effect 2025-10-25 22:10:12 +08:00
Xiaomeng Zhao
00d438d5fb Merge pull request #3837 from opendatalab/master
master->dev
2025-10-24 19:00:18 +08:00
myhloli
eb02745e06 Update version.py with new version 2025-10-24 10:45:27 +00:00
Xiaomeng Zhao
fe4985f6f0 Merge pull request #3836 from opendatalab/release-2.6.2
Release 2.6.2
2025-10-24 18:43:33 +08:00
Xiaomeng Zhao
8825235088 Merge pull request #3835 from myhloli/dev
chore: update changelog for 2.6.2 release with OCR model optimizations and backend improvements
2025-10-24 18:35:17 +08:00
myhloli
44a60785c6 chore: update changelog for 2.6.2 release with OCR model optimizations and backend improvements 2025-10-24 18:33:15 +08:00
Xiaomeng Zhao
473e235397 Merge pull request #3834 from myhloli/dev
refactor: remove deprecated model configurations from arch_config.yaml and models_config.yml
2025-10-24 18:29:59 +08:00
myhloli
16814e1e1d refactor: remove deprecated model configurations from arch_config.yaml and models_config.yml 2025-10-24 18:11:50 +08:00
myhloli
3546766e72 fix: update CTCLabelDecode output channels and clean up Latin dictionary 2025-10-24 18:04:28 +08:00
Xiaomeng Zhao
b57d9caef3 Merge pull request #3833 from opendatalab/master
master->dev
2025-10-24 17:39:27 +08:00
myhloli
0603edc202 Update version.py with new version 2025-10-24 09:28:52 +00:00
Xiaomeng Zhao
2a0cb7963a Merge pull request #3829 from opendatalab/release-2.6.1
Release 2.6.1
2025-10-24 17:27:18 +08:00
Xiaomeng Zhao
a56bd6c334 Merge pull request #3831 from opendatalab/dev
Dev
2025-10-24 17:25:03 +08:00
Xiaomeng Zhao
f5400f0c94 Merge pull request #3830 from myhloli/dev
fix: correct spelling of set_default_gpu_memory_utilization and set_default_batch_size functions
2025-10-24 17:24:31 +08:00
myhloli
6a6c650062 fix: correct spelling of set_default_gpu_memory_utilization and set_default_batch_size functions 2025-10-24 17:23:13 +08:00
Xiaomeng Zhao
ae084eb317 Merge pull request #3828 from myhloli/dev
Dev
2025-10-24 17:17:23 +08:00
myhloli
7c77db7135 fix: import enable_custom_logits_processors in server.py 2025-10-24 17:16:07 +08:00
myhloli
7b14a87b9d fix: update version number to 2.6.1 in README and README_zh-CN 2025-10-24 17:13:08 +08:00
myhloli
0d0ebfd7bc fix: improve GPU memory utilization handling and ensure OMP_NUM_THREADS is set only if not defined 2025-10-24 17:11:19 +08:00
myhloli
dc438fa620 Update version.py with new version 2025-10-24 08:12:26 +00:00
Xiaomeng Zhao
f5a5644d12 Merge pull request #3825 from opendatalab/dev
Dev
2025-10-24 16:01:37 +08:00
Xiaomeng Zhao
91cc2524d5 Merge pull request #3824 from myhloli/dev
fix: update README and Chinese README to include GitHub link for optimization contributor
2025-10-24 16:00:54 +08:00
myhloli
e504e5e012 fix: update README and Chinese README to include GitHub link for optimization contributor 2025-10-24 15:58:23 +08:00
Xiaomeng Zhao
6b2f414438 Merge pull request #3823 from opendatalab/release-2.6.0
Release 2.6.0
2025-10-24 15:54:23 +08:00
Xiaomeng Zhao
a0da3029fd Update mineru/model/utils/pytorchocr/modeling/backbones/rec_lcnetv3.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-24 15:54:12 +08:00
Xiaomeng Zhao
30fe325428 Update mineru/model/utils/tools/infer/predict_rec.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-24 15:53:55 +08:00
Xiaomeng Zhao
6131013ce9 Merge pull request #3822 from opendatalab/dev
Dev
2025-10-24 15:46:40 +08:00
Xiaomeng Zhao
f1c145054a Merge pull request #3821 from myhloli/dev
Dev
2025-10-24 15:46:09 +08:00
myhloli
078aaaf150 fix: remove unnecessary parameters from kwargs in vlm_analyze.py initialization 2025-10-24 15:39:44 +08:00
myhloli
c3a55fffab fix: add utility functions for GPU memory utilization and batch size configuration 2025-10-24 15:29:23 +08:00
Xiaomeng Zhao
4eddf28c8f Merge pull request #3820 from opendatalab/dev
Dev
2025-10-24 14:59:35 +08:00
Xiaomeng Zhao
dd92c5b723 Merge pull request #3819 from myhloli/dev
update docs
2025-10-24 14:59:03 +08:00
myhloli
b5922086cb fix: add environment variable configurations for Chinese formula parsing and table merging features 2025-10-24 14:53:00 +08:00
myhloli
df12e4fc79 fix: update README and utils for table merge feature and environment variable configuration 2025-10-24 11:37:14 +08:00
myhloli
90ed311198 fix: refactor table merging logic and add cross-page table merge utility 2025-10-24 10:52:05 +08:00
myhloli
c922c63fbc fix: correct formatting in kernel initialization in rec_lcnetv3.py 2025-10-24 10:22:10 +08:00
myhloli
28b278508f fix: add error handling for PDF conversion in common.py 2025-10-24 10:19:50 +08:00
Xiaomeng Zhao
6b54f321b4 Merge pull request #3814 from myhloli/dev
Dev
2025-10-23 18:00:51 +08:00
myhloli
e47ec7cd10 fix: refactor language lists for improved readability and maintainability in gradio_app.py and pytorch_paddle.py 2025-10-23 17:51:26 +08:00
myhloli
701f6018f2 fix: add logging for improved traceability in prediction logic of predict_formula.py 2025-10-23 17:26:16 +08:00
myhloli
5ade203e31 fix: remove commented-out code for autocasting in prediction logic of predict_formula.py 2025-10-23 17:12:00 +08:00
Xiaomeng Zhao
6e83f37754 Merge branch 'opendatalab:dev' into dev 2025-10-23 17:09:20 +08:00
Xiaomeng Zhao
972161a991 Merge pull request #3812 from Sidney233/dev
feat: add PPv5 arabic cyrillic devanagari ta te
2025-10-23 17:08:52 +08:00
Sidney233
700e11d342 feat: add PPv5 arabic cyrillic devanagari ta te 2025-10-23 16:49:01 +08:00
myhloli
fd79885b23 fix: remove commented-out code for autocasting in prediction logic of predict_formula.py 2025-10-23 16:03:34 +08:00
myhloli
a0810b5b6e fix: add debug logging for LaTeX text processing in processors.py 2025-10-23 02:30:47 +08:00
myhloli
39271b45de fix: adjust batch size calculation in prediction logic of predict_formula.py 2025-10-23 02:15:14 +08:00
Xiaomeng Zhao
db68aaf4ac Merge pull request #3806 from myhloli/dev
fix: update Gradio API access instructions in quick_usage.md
2025-10-22 22:51:37 +08:00
myhloli
a6cc8fa90d fix: update Gradio API access instructions in quick_usage.md 2025-10-22 22:50:36 +08:00
Xiaomeng Zhao
47f34f4ce8 Merge pull request #3805 from myhloli/dev
fix: handle empty input in prediction logic of predict_formula.py
2025-10-22 22:21:38 +08:00
myhloli
b7a8347f45 fix: handle empty input in prediction logic of predict_formula.py 2025-10-22 22:20:06 +08:00
Xiaomeng Zhao
c6d241f4f4 Merge pull request #3804 from myhloli/dev
fix: update model paths in models_download.py to include pp_formulanet_plus_m
2025-10-22 20:47:26 +08:00
myhloli
06b2fda1c1 fix: update model paths in models_download.py to include pp_formulanet_plus_m 2025-10-22 20:46:15 +08:00
Xiaomeng Zhao
5c1ca9271e Merge pull request #3803 from myhloli/dev
Dev
2025-10-22 20:33:42 +08:00
Xiaomeng Zhao
e7485c5d79 Update mineru/model/mfr/pp_formulanet_plus_m/predict_formula.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-22 20:32:36 +08:00
Xiaomeng Zhao
80436a89f9 Update mineru/model/utils/pytorchocr/modeling/heads/rec_ppformulanet_head.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-22 20:32:06 +08:00
Xiaomeng Zhao
b36793cef0 Update mineru/model/mfr/utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-22 20:31:50 +08:00
myhloli
43b51e78fc fix: add environment variable handling for table merging in JSON processing 2025-10-22 20:19:59 +08:00
myhloli
9688f73046 fix: update package path for PaddleOCR utilities in pyproject.toml 2025-10-22 20:08:52 +08:00
myhloli
c02edd9cba fix: correct docstring for remove_up_commands function in utils.py 2025-10-22 20:07:11 +08:00
myhloli
b4d08e994c feat: implement LaTeX formatting utilities and refactor processing logic 2025-10-22 20:02:59 +08:00
myhloli
a220b8a208 refactor: enhance title hierarchy logic and update model configuration 2025-10-22 15:57:07 +08:00
myhloli
ab480a7a86 fix: update progress bar description in formula prediction 2025-10-22 15:51:56 +08:00
myhloli
f57a6d8d9e refactor: remove commented-out device assignment in predict_formula.py 2025-10-21 18:45:21 +08:00
myhloli
915ba87f7d feat: adjust batch size calculation and enhance device management in model heads 2025-10-21 18:21:25 +08:00
myhloli
42a95e8e20 refactor: improve variable naming and streamline input processing in predict_formula.py 2025-10-21 14:57:57 +08:00
Xiaomeng Zhao
a513357607 Merge pull request #3779 from myhloli/dev
mfr add paddle
2025-10-20 19:14:46 +08:00
Xiaomeng Zhao
c8ccf4cf20 Update mineru/model/utils/pytorchocr/modeling/heads/rec_ppformulanet_head.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-20 19:14:16 +08:00
Xiaomeng Zhao
33d43a5afc Update mineru/model/utils/pytorchocr/modeling/heads/rec_ppformulanet_head.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-20 19:14:05 +08:00
Xiaomeng Zhao
3b057c7996 Merge pull request #19 from myhloli/mfr-add-paddle
Mfr add paddle
2025-10-20 18:59:48 +08:00
myhloli
34547262a2 refactor: remove unused Formula constant from model_list.py 2025-10-20 18:57:35 +08:00
myhloli
cd0ed982c0 fix: revert MFR_MODEL to unimernet_small in model initialization 2025-10-20 18:55:30 +08:00
myhloli
52dcbcbfa5 Bump mineru-vl-utils version to 0.1.14 2025-10-20 15:03:39 +08:00
myhloli
0758de6d24 Update vllm version and increase default GPU memory utilization 2025-10-20 11:45:58 +08:00
Xiaomeng Zhao
ae7892a6f9 Merge pull request #3770 from myhloli/dev
Update acceleration card links to include discussion and pull request references
2025-10-17 19:01:33 +08:00
myhloli
73567ccedc Update acceleration card links to include discussion and pull request references 2025-10-17 19:00:15 +08:00
Xiaomeng Zhao
bb552282f3 Merge pull request #3769 from myhloli/dev
Add support for domestic acceleration cards in documentation
2025-10-17 18:54:34 +08:00
myhloli
14c38101f7 Add support for domestic acceleration cards in documentation 2025-10-17 18:53:31 +08:00
Xiaomeng Zhao
cb3a30e9ad Merge pull request #3768 from myhloli/dev
Add support for domestic acceleration cards in documentation
2025-10-17 18:41:31 +08:00
myhloli
f4db41d0cb Add support for domestic acceleration cards in documentation 2025-10-17 18:40:40 +08:00
Xiaomeng Zhao
dad59f7d52 Merge pull request #3760 from magicyuan876/master
feat(tianshu): v2.0 架构升级 - Worker主动拉取模式
2025-10-17 18:31:38 +08:00
myhloli
499e877165 refactor: rename files and update import paths for consistency 2025-10-17 18:09:19 +08:00
myhloli
2d249666ba feat: integrate PP-FormulaNet_plus-M architecture and update model initialization 2025-10-17 17:00:22 +08:00
Magic_yuan
cedc62a728 Improve markitdown dependencies 2025-10-17 16:17:03 +08:00
Xiaomeng Zhao
1e40bac24f Merge pull request #3761 from Sidney233/dev
feat: add PPFormula
2025-10-17 14:40:10 +08:00
Sidney233
23701d0db4 feat: add PPFormula 2025-10-17 14:02:26 +08:00
Magic_yuan
e7d8bf097a Address code review suggestions 2025-10-17 13:04:49 +08:00
Magic_yuan
08a89aeca1 feat(tianshu): v2.0 architecture upgrade - worker active-pull mode
Key improvements:
- Workers now actively pull tasks, improving response time 10-20x (5-10s → 0.5s)
- Stronger database concurrency safety, using atomic operations to prevent duplicate task pickup
- The scheduler is now an optional monitoring component and is disabled by default
- Fixed multi-GPU memory usage by fully isolating worker processes

New features:
- The API automatically returns parsed content
- Automatic cleanup of result files (configurable)
- Support for uploading images to MinIO
2025-10-17 11:46:42 +08:00
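The duplicate-pickup protection described in the commit above — workers pulling tasks with atomic database operations — can be sketched as an optimistic compare-and-set against a SQLite queue. The `tasks` schema and column names below are illustrative assumptions, not the project's actual task_db.py:

```python
import sqlite3

def claim_next_task(conn: sqlite3.Connection):
    """Atomically claim the highest-priority pending task, or return None.

    The UPDATE's `status = 'pending'` guard acts as a compare-and-set:
    if another worker claimed the row between the SELECT and the UPDATE,
    rowcount is 0 and this worker simply retries on its next poll.
    """
    with conn:  # one transaction for the select-then-claim
        row = conn.execute(
            "SELECT id, payload FROM tasks WHERE status = 'pending' "
            "ORDER BY priority DESC, id LIMIT 1"
        ).fetchone()
        if row is None:
            return None  # queue is empty
        task_id, payload = row
        claimed = conn.execute(
            "UPDATE tasks SET status = 'processing' "
            "WHERE id = ? AND status = 'pending'",
            (task_id,),
        ).rowcount
        return (task_id, payload) if claimed == 1 else None
```

Because the claim is a single guarded UPDATE, a worker that loses the race gets `None` and polls again, which is what makes a pure pull model safe without a central scheduler.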
Xiaomeng Zhao
1b724f3336 Merge pull request #3756 from myhloli/dev
Set OMP_NUM_THREADS environment variable to 1 for vllm backend initialization
2025-10-16 19:06:45 +08:00
myhloli
ea4271ab37 Set OMP_NUM_THREADS environment variable to 1 for vllm backend initialization 2025-10-16 18:26:06 +08:00
Xiaomeng Zhao
d83b83a5ad Merge pull request #3755 from myhloli/dev
Dev
2025-10-16 17:46:44 +08:00
myhloli
0853b84e87 Update README files to use external image link for MinerU logo 2025-10-16 17:45:42 +08:00
myhloli
36225160a3 Update arXiv badge to reflect MinerU technical report and add badge for MinerU2.5 2025-10-16 17:41:41 +08:00
myhloli
a36118f8ba Add mineru_tianshu project to README files for version 2.0 compatibility 2025-10-16 17:38:57 +08:00
myhloli
a38384e7fb Update mineru-vl-utils dependency version to allow upgrades to 0.1.13 2025-10-16 17:36:45 +08:00
Xiaomeng Zhao
4b7c2bbcc0 Merge pull request #3754 from myhloli/dev
Refactor table merging logic to enhance colspan adjustments and improve caption handling
2025-10-16 17:35:28 +08:00
Xiaomeng Zhao
504fe6ada3 Merge pull request #3742 from magicyuan876/master
feat: MinerU Tianshu project - an out-of-the-box multi-GPU document parsing service
2025-10-16 17:33:54 +08:00
myhloli
39be54023b Refactor table merging logic to enhance colspan adjustments and improve caption handling 2025-10-16 17:31:57 +08:00
Magic_yuan
484ff5a6f9 Fix code review issues 2025-10-16 16:04:42 +08:00
myhloli
59a7a577b3 Add backend name dropdown and update version constraints in bug report template 2025-10-16 14:55:48 +08:00
Xiaomeng Zhao
0e73ef9615 Merge pull request #3750 from myhloli/dev
Update openai dependency version to allow upgrades to version 3
2025-10-16 14:43:57 +08:00
myhloli
d580d6c7f8 Update openai dependency version to allow upgrades to version 3 2025-10-16 14:43:05 +08:00
Xiaomeng Zhao
4c8bb038ce Merge pull request #3748 from myhloli/dev
Enhance table merging logic to adjust colspan attributes based on row structures
2025-10-16 14:24:14 +08:00
myhloli
a89715b9a2 Refactor table merging logic to improve caption handling and prevent merging with non-continuation captions 2025-10-16 14:11:15 +08:00
myhloli
f05ea7c2e6 Simplify model output path handling by removing conditional checks for backend type 2025-10-16 14:09:30 +08:00
Xiaomeng Zhao
b68db3ab90 Merge pull request #3740 from yongtenglei/master
docs: Fix outdated sample data for output reference
2025-10-16 10:43:22 +08:00
yongtenglei
3539cfba36 docs: Fix sample data for output reference 2025-10-16 10:33:13 +08:00
Magic_yuan
3bf50d5267 feat: MinerU Tianshu project - an out-of-the-box multi-GPU document parsing service
Project overview:
Tianshu is a document parsing service built on MinerU. It uses a SQLite task queue plus
LitServe GPU load balancing, and supports asynchronous processing, task persistence, and intelligent parsing of documents in multiple formats.

Core features:
- Asynchronous task processing: clients get an immediate response while tasks run in the background
- Intelligent parser routing: PDFs/images use MinerU (GPU-accelerated); Office/text files use MarkItDown
- GPU load balancing: automatic multi-GPU scheduling built on LitServe
- Task persistence: SQLite storage, so tasks survive service restarts
- Priority queue: supports per-task priority settings
- RESTful API: a complete task management interface
- MinIO integration: supports uploading images to object storage

Project architecture:
- api_server.py: FastAPI web server exposing the RESTful API
- task_db.py: SQLite task database manager
- litserve_worker.py: LitServe worker pool with GPU load balancing
- task_scheduler.py: asynchronous task scheduler
- start_all.py: unified startup script
- client_example.py: Python client example

Tech stack:
FastAPI, LitServe, SQLite, MinerU, MarkItDown, MinIO, Loguru
2025-10-16 08:41:51 +08:00
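The MinerU-vs-MarkItDown split described in the Tianshu overview amounts to routing by file suffix; a hypothetical sketch of that dispatch (the suffix sets here are assumptions for illustration, not the project's actual table):

```python
from pathlib import Path

# Assumed suffix sets: GPU-backed MinerU handles PDFs and images,
# while MarkItDown handles Office and plain-text formats.
MINERU_SUFFIXES = {".pdf", ".png", ".jpg", ".jpeg", ".tiff", ".bmp"}

def pick_parser(filename: str) -> str:
    """Return which parser a Tianshu worker would route a file to."""
    suffix = Path(filename).suffix.lower()
    return "mineru" if suffix in MINERU_SUFFIXES else "markitdown"
```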
myhloli
2108019698 Enhance table merging logic to adjust colspan attributes based on row structures 2025-10-15 19:05:28 +08:00
Xiaomeng Zhao
17a9921ba9 Merge pull request #3737 from myhloli/dev
Refactor block processing to handle non-contiguous indices in captions and footnotes
2025-10-15 17:06:22 +08:00
myhloli
3baee1d077 Refactor block processing to handle non-contiguous indices in captions and footnotes 2025-10-15 17:04:29 +08:00
myhloli
e1ee728e31 Sort blocks by index and clean up unprocessed blocks handling 2025-10-15 16:06:03 +08:00
Xiaomeng Zhao
1b45e6e1bc Merge pull request #3723 from myhloli/dev
Rename plugin documentation files for consistency and update index links
2025-10-14 19:00:38 +08:00
myhloli
966aadd1d3 Rename plugin documentation files for consistency and update index links 2025-10-14 18:58:24 +08:00
Xiaomeng Zhao
ecb8e3f0ac Merge pull request #3722 from myhloli/dev
Add documentation for Cherry Studio, Sider, Dify, n8n, Coze, FastGPT, ModelWhale, DingTalk, DataFlow, BISHENG, and RagFlow plugins
2025-10-14 18:55:19 +08:00
myhloli
1bef6e3526 Add documentation for Cherry Studio, Sider, Dify, n8n, Coze, FastGPT, ModelWhale, DingTalk, DataFlow, BISHENG, and RagFlow plugins 2025-10-14 18:54:15 +08:00
myhloli
4c4d1d0f95 Update supported version range in bug_report.yml to include 2.2.x and 2.5.x 2025-10-14 16:09:30 +08:00
Xiaomeng Zhao
c36aa54370 Merge pull request #3709 from myhloli/dev
Add max_concurrency parameter to improve backend processing
2025-10-13 15:57:34 +08:00
myhloli
4b480cfcf7 Add max_concurrency parameter to improve backend processing 2025-10-13 15:56:49 +08:00
Xiaomeng Zhao
7e18e1bb76 Merge pull request #3707 from myhloli/dev
Refactor async function and improve output directory handling in prediction
2025-10-13 11:59:33 +08:00
myhloli
44fdeb663f Refactor async function and improve output directory handling in prediction 2025-10-13 11:32:28 +08:00
myhloli
cf59949ba9 add tiff 2025-10-12 11:45:49 +08:00
Xiaomeng Zhao
c8c2f28afc Merge pull request #3701 from opendatalab/ocr_enhance
Ocr enhance
2025-10-11 19:33:32 +08:00
Xiaomeng Zhao
aa4bc6259b Merge pull request #3700 from myhloli/ocr_enhance
Reduce recognition batch size from 8 to 6
2025-10-11 19:29:09 +08:00
myhloli
b7e4ea0b49 Reduce recognition batch size from 8 to 6 for improved OCR performance 2025-10-11 19:28:16 +08:00
Xiaomeng Zhao
998197a47f Merge pull request #3672 from cjsdurj/optimize_ocr
Optimize pytorch_paddle OCR inference performance, for an overall improvement of about 400%
2025-10-11 18:44:02 +08:00
Xiaomeng Zhao
3c8b6e6b6b Merge pull request #3499 from jinghuan-Chen/fix/fill_blank_rec_crop_empty_image
Avoid cropping empty images.
2025-10-11 11:14:05 +08:00
Xiaomeng Zhao
be42b46ff9 Merge pull request #3688 from myhloli/dev 2025-10-10 19:43:03 +08:00
myhloli
7c689e33b8 Refactor fix_two_layer_blocks function to improve handling of captions and footnotes in table blocks 2025-10-10 19:12:18 +08:00
cjsdurj
af66bc02c2 Optimize OCR inference performance by about 400% 2025-10-09 13:03:22 +00:00
Xiaomeng Zhao
752f75ad8e Merge pull request #3651 from opendatalab/dev
Dev
2025-09-30 06:31:24 +08:00
Xiaomeng Zhao
1cfde98585 Merge pull request #3650 from myhloli/dev
Dev
2025-09-30 06:30:12 +08:00
Xiaomeng Zhao
54676295d5 Update README_zh-CN.md 2025-09-30 06:29:05 +08:00
Xiaomeng Zhao
61c7c65d8b Update README.md 2025-09-30 06:18:00 +08:00
Xiaomeng Zhao
6f05f735d0 Update header.html 2025-09-30 06:11:43 +08:00
Xiaomeng Zhao
befb16e531 Merge pull request #3649 from opendatalab/master
master->dev
2025-09-30 06:08:54 +08:00
Bin Wang
abc433d6f2 Merge pull request #3635 from wangbinDL/master
docs: Update arXiv link for technical report
2025-09-29 09:36:45 +08:00
wangbinDL
e7c1385068 docs: Update arXiv link for technical report 2025-09-29 09:32:30 +08:00
Bin Wang
342c5aa34a Merge pull request #3619 from wangbinDL/master
docs: Update MinerU2.5 Technical Report
2025-09-26 18:35:31 +08:00
wangbinDL
f25ddfa024 docs: Update MinerU2.5 Technical Report 2025-09-26 18:27:22 +08:00
Bin Wang
e31de3a453 Merge pull request #3615 from wangbinDL/master
docs: Add MinerU2.5 technical report and BibTeX
2025-09-26 11:51:45 +08:00
wangbinDL
2f01754410 docs: Add MinerU2.5 technical report and BibTeX 2025-09-26 11:42:59 +08:00
Xiaomeng Zhao
8a9921fb22 Merge pull request #3610 from opendatalab/master
master->dev
2025-09-26 06:17:20 +08:00
myhloli
652e11a253 Update version.py with new version 2025-09-25 21:57:26 +00:00
Xiaomeng Zhao
61cc6886fe Merge pull request #3608 from opendatalab/release-2.5.4
Release 2.5.4
2025-09-26 05:53:36 +08:00
Xiaomeng Zhao
80dc57e7ce Merge pull request #3609 from myhloli/dev
Bump mineru-vl-utils dependency to version 0.1.11
2025-09-26 05:48:32 +08:00
myhloli
d84a006f6d Bump mineru-vl-utils dependency to version 0.1.11 2025-09-26 05:47:27 +08:00
Xiaomeng Zhao
2c5361bf8e Merge pull request #3607 from myhloli/dev
Update changelog for version 2.5.4 to document PDF identification fix
2025-09-26 05:43:50 +08:00
myhloli
eb01b7acf9 Update changelog for version 2.5.4 to document PDF identification fix 2025-09-26 05:42:43 +08:00
Xiaomeng Zhao
5656f1363b Merge pull request #3606 from myhloli/dev
Dev
2025-09-26 05:35:29 +08:00
myhloli
c9315b8e10 Refactor suffix guessing to handle PDF extensions for AI files 2025-09-26 05:31:46 +08:00
myhloli
907099762f Normalize PDF suffix handling for AI files to be case-insensitive 2025-09-26 05:09:19 +08:00
myhloli
2c356cccee Fix suffix identification for AI files to correctly handle PDF extensions 2025-09-26 05:02:56 +08:00
myhloli
0f62f166e6 Enhance image link replacement to handle only .jpg files while preserving other formats 2025-09-26 04:52:05 +08:00
Xiaomeng Zhao
c7a64e72dc Merge pull request #3563 from myhloli/dev
Update model output handling in test_e2e.py to write JSON format instead of text
2025-09-21 02:49:31 +08:00
myhloli
3cb3a94830 Merge remote-tracking branch 'origin/dev' into dev 2025-09-21 02:48:45 +08:00
myhloli
8301fa4c20 Update model output handling in test_e2e.py to write JSON format instead of text 2025-09-21 02:47:56 +08:00
Xiaomeng Zhao
4400f4b75f Merge pull request #3558 from opendatalab/master
master->dev
2025-09-20 15:37:45 +08:00
myhloli
92efb8f96e Update version.py with new version 2025-09-20 07:36:01 +00:00
Xiaomeng Zhao
9a88cbfb09 Merge pull request #3545 from opendatalab/release-2.5.3
Release 2.5.3
2025-09-20 15:33:58 +08:00
Xiaomeng Zhao
e96e4a0ce4 Merge pull request #3557 from opendatalab/dev
Dev
2025-09-20 15:30:40 +08:00
Xiaomeng Zhao
c7bde0ab39 Merge pull request #3556 from myhloli/dev
Refactor batch image orientation classification logic for improved cl…
2025-09-20 15:30:08 +08:00
myhloli
8754c24e42 Refactor batch image orientation classification logic for improved clarity and performance 2025-09-20 15:24:28 +08:00
Xiaomeng Zhao
4f8c00cc34 Merge pull request #3555 from opendatalab/dev
Dev
2025-09-20 15:18:19 +08:00
Xiaomeng Zhao
89681f98ad Merge pull request #3554 from myhloli/dev
Fix formatting in changelog sections of README.md and README_zh-CN.md…
2025-09-20 15:14:16 +08:00
myhloli
66d328dbc5 Fix formatting in changelog sections of README.md and README_zh-CN.md for improved readability 2025-09-20 15:13:29 +08:00
Xiaomeng Zhao
f0c1318545 Merge pull request #3553 from myhloli/dev
Fix formatting in changelog sections of README.md and README_zh-CN.md…
2025-09-20 15:11:43 +08:00
myhloli
6e97f3cf70 Fix formatting in changelog sections of README.md and README_zh-CN.md for improved readability 2025-09-20 15:10:25 +08:00
Xiaomeng Zhao
aede62167e Merge pull request #3552 from opendatalab/dev
Dev
2025-09-20 15:08:40 +08:00
Xiaomeng Zhao
5f2740f743 Merge pull request #3551 from myhloli/dev
Fix compute capability comparison in custom_logits_processors.py for …
2025-09-20 15:08:14 +08:00
myhloli
a888d2b625 Fix compute capability comparison in custom_logits_processors.py for correct version handling 2025-09-20 15:06:49 +08:00
Xiaomeng Zhao
4275876331 Merge pull request #3550 from opendatalab/dev
Dev
2025-09-20 15:01:39 +08:00
Xiaomeng Zhao
ec9f7f54ab Merge pull request #3549 from myhloli/dev
Update README.md and README_zh-CN.md to include changelog for v2.5.3 …
2025-09-20 15:00:50 +08:00
myhloli
7861e5e369 Remove redundant newline in README.md for improved formatting 2025-09-20 15:00:12 +08:00
myhloli
159f3a89a3 Update README.md and README_zh-CN.md to include changelog for v2.5.3 release with compatibility fixes and performance adjustments 2025-09-20 14:57:54 +08:00
Xiaomeng Zhao
d9452bbeb9 Merge pull request #3546 from myhloli/dev
Update docker_deployment.md for improved clarity on base image usage …
2025-09-20 14:48:50 +08:00
myhloli
d808a32c0b Update docker_deployment.md for improved clarity on base image usage and GPU support 2025-09-20 13:52:16 +08:00
Xiaomeng Zhao
12ce3bd024 Merge pull request #3544 from myhloli/dev
Dev
2025-09-20 13:26:18 +08:00
myhloli
e3d7aece50 Remove warning log for default VLLM_USE_V1 value in custom_logits_processors.py 2025-09-20 13:25:11 +08:00
Xiaomeng Zhao
7c55a0ea65 Update mineru/backend/vlm/custom_logits_processors.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-20 13:22:40 +08:00
myhloli
f1659eb7a7 Refactor logits processor handling in server.py and vlm_analyze.py for improved clarity and consistency 2025-09-20 13:21:05 +08:00
myhloli
c6bffd9382 Restrict vllm version to <0.11 for compatibility 2025-09-20 11:49:06 +08:00
myhloli
857dcb2ef5 Update docker_deployment.md to clarify GPU model support and base image options for vLLM 2025-09-20 11:45:33 +08:00
myhloli
ef69f98cd6 Update Dockerfile to include comments for GPU architecture compatibility based on Compute Capability 2025-09-20 03:15:58 +08:00
myhloli
6d5d1cf26b Refactor image rotation handling in batch_analyze.py and paddle_ori_cls.py for improved compatibility with torch versions 2025-09-20 03:07:47 +08:00
myhloli
7c481796f8 Refactor custom logits processors to include vllm version checks and improve logging 2025-09-20 01:22:06 +08:00
myhloli
7d62b7b7cc Update mineru-vl-utils dependency version to 0.1.8 2025-09-20 00:31:14 +08:00
myhloli
5a0cf9af7f Enhance custom logits processors with improved compute capability checks and environment variable handling 2025-09-20 00:21:43 +08:00
myhloli
f5e0e67545 Add custom logits processors functionality with compute capability check 2025-09-19 19:21:56 +08:00
myhloli
a4cac624df Add compute capability check for custom logits processors in server.py and vlm_analyze.py 2025-09-19 19:00:41 +08:00
Xiaomeng Zhao
e1eb318b9b Merge pull request #3535 from opendatalab/master
master->dev
2025-09-19 16:51:13 +08:00
myhloli
31834b1e68 Update version.py with new version 2025-09-19 08:48:17 +00:00
Xiaomeng Zhao
100ace2e99 Merge pull request #3534 from opendatalab/release-2.5.2
Release 2.5.2
2025-09-19 16:45:57 +08:00
Xiaomeng Zhao
6aac639686 Merge pull request #3533 from myhloli/dev
Update ModelScope link in README_zh-CN.md for MinerU2.5 release
2025-09-19 16:39:40 +08:00
myhloli
82f94a9a84 Update ModelScope link in README_zh-CN.md for MinerU2.5 release 2025-09-19 16:36:42 +08:00
Xiaomeng Zhao
d928334c61 Merge pull request #3532 from myhloli/dev
Fix formatting in vlm_middle_json_mkcontent.py to ensure proper line breaks in list items
2025-09-19 16:34:29 +08:00
myhloli
ebad82bd8c Update version in README to 2.5.2 for MinerU2.5 release 2025-09-19 16:31:30 +08:00
myhloli
b03c5fb449 Fix formatting in vlm_middle_json_mkcontent.py to ensure proper line breaks in list items 2025-09-19 16:30:43 +08:00
myhloli
c343afd20c Update version.py with new version 2025-09-19 03:45:08 +00:00
Xiaomeng Zhao
6586c7c01e Merge pull request #3529 from opendatalab/release-2.5.1
Release 2.5.1
2025-09-19 11:43:51 +08:00
Xiaomeng Zhao
304a6d9d8c Merge pull request #3527 from myhloli/dev
fix: Update mineru-vl-utils version and add logits processors support
2025-09-19 11:42:43 +08:00
myhloli
bce9bb6d1d Add support for --logits-processors argument in server.py 2025-09-19 11:42:05 +08:00
myhloli
920220e48e Update version in README for MinerU2.5 release to 2.5.1 2025-09-19 11:40:44 +08:00
myhloli
9fc3d6c742 Remove direct import of MinerULogitsProcessor and add it conditionally in vllm backend 2025-09-19 11:36:20 +08:00
myhloli
8fd544273e Update mineru-vl-utils version and add logits processors support 2025-09-19 11:20:34 +08:00
myhloli
72f1f5f935 Update mineru-vl-utils version and add logits processors support 2025-09-19 11:16:55 +08:00
Xiaomeng Zhao
5559a4701a Merge pull request #3523 from opendatalab/master
master->dev
2025-09-19 10:44:51 +08:00
myhloli
437022abfa Specify version constraints for mineru-vl-utils in pyproject.toml 2025-09-19 03:39:57 +08:00
myhloli
4653ed1502 Remove version constraints for mineru-vl-utils in pyproject.toml 2025-09-19 03:31:13 +08:00
Xiaomeng Zhao
b58c7f8d6e Merge pull request #3517 from opendatalab/dev
Dev
2025-09-19 03:27:30 +08:00
Xiaomeng Zhao
f6133b1731 Merge pull request #3516 from myhloli/dev
Update dependency name for mineru-vl-utils in pyproject.toml
2025-09-19 03:26:31 +08:00
myhloli
12d72c7c17 Update dependency name for mineru-vl-utils in pyproject.toml 2025-09-19 03:25:18 +08:00
Xiaomeng Zhao
5f3f35c009 Merge pull request #3515 from opendatalab/master
master->dev
2025-09-19 03:14:48 +08:00
myhloli
16ad71446b Update version.py with new version 2025-09-18 19:12:56 +00:00
Xiaomeng Zhao
d4b364eb9f Merge pull request #3513 from opendatalab/release-2.5.0
Release 2.5.0
2025-09-19 03:10:02 +08:00
Xiaomeng Zhao
446188adf4 Merge pull request #3514 from myhloli/dev
update dependencies
2025-09-19 03:09:50 +08:00
myhloli
ff90c600aa update dependencies 2025-09-19 03:07:23 +08:00
Xiaomeng Zhao
3f2c7e5e7c Merge pull request #3512 from myhloli/dev
update docs
2025-09-19 03:04:12 +08:00
myhloli
2ba1c35fbd update docs 2025-09-19 03:03:06 +08:00
Xiaomeng Zhao
d3f92a0b20 Merge pull request #3511 from opendatalab/dev
Dev
2025-09-19 03:00:56 +08:00
Xiaomeng Zhao
4b6f151351 Merge pull request #3510 from myhloli/dev
update docs
2025-09-19 03:00:14 +08:00
myhloli
5fcd428cb5 update docs 2025-09-19 02:56:44 +08:00
Xiaomeng Zhao
5db08afef6 Merge pull request #3509 from opendatalab/release-2.5.0
Release 2.5.0
2025-09-19 02:51:50 +08:00
Xiaomeng Zhao
6b182f8378 Update mineru/cli/gradio_app.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-19 02:51:09 +08:00
Xiaomeng Zhao
ae9526127f Merge pull request #3508 from myhloli/dev
update docs
2025-09-19 02:27:40 +08:00
myhloli
39790095bf update docs 2025-09-19 02:26:36 +08:00
Xiaomeng Zhao
fef3081bdf Merge pull request #3507 from myhloli/dev
update docs
2025-09-19 02:24:30 +08:00
myhloli
5425da9571 update docs 2025-09-19 02:23:42 +08:00
Xiaomeng Zhao
9af1824328 Merge pull request #3506 from myhloli/dev
Dev
2025-09-19 02:17:16 +08:00
myhloli
e47b19c416 update docs 2025-09-19 02:16:17 +08:00
myhloli
5646f46606 update docs 2025-09-19 02:04:04 +08:00
Xiaomeng Zhao
9d5568a9cb Merge pull request #3505 from myhloli/dev
update docs
2025-09-19 01:58:12 +08:00
myhloli
ec3549702f update docs 2025-09-19 01:55:35 +08:00
Xiaomeng Zhao
d185d1822b Merge pull request #3504 from myhloli/dev
update docs
2025-09-19 01:49:57 +08:00
myhloli
4864a086ce update docs 2025-09-19 01:48:50 +08:00
Xiaomeng Zhao
f736e29cc0 Merge pull request #3503 from myhloli/dev
update docs
2025-09-19 01:23:09 +08:00
myhloli
34fab4f5b8 update docs 2025-09-19 01:22:25 +08:00
Xiaomeng Zhao
2496875c33 Merge pull request #3502 from myhloli/dev
Dev
2025-09-19 01:13:20 +08:00
myhloli
ec4cc37861 Merge remote-tracking branch 'origin/dev' into dev 2025-09-19 00:00:29 +08:00
myhloli
c2208d84cb feat: update output_files.md to include new block types and fields for code and list structures 2025-09-18 23:52:01 +08:00
Xiaomeng Zhao
cdc025a9ec Merge pull request #3490 from myhloli/dev
Add vlm 2.5 support
2025-09-18 23:04:43 +08:00
Xiaomeng Zhao
cdbe6ba9b6 Update mineru/utils/enum_class.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-18 23:04:29 +08:00
myhloli
75f576ad0c fix: correct capitalization of "HuggingFace" in README files 2025-09-18 22:55:56 +08:00
myhloli
52844f0794 feat: update README files to reflect the release of MinerU2.5 and its enhancements 2025-09-18 22:55:01 +08:00
myhloli
8d178b2b7e feat: enhance file type detection by using guess_suffix_by_path for document parsing 2025-09-18 22:41:58 +08:00
myhloli
1083476a02 fix: typo 2025-09-18 21:45:02 +08:00
myhloli
da29782a26 feat: add contrast calculation for span images to improve OCR accuracy 2025-09-18 19:55:40 +08:00
myhloli
75797a3b7c feat: update header title to MinerU 2.5 and add model link in header.html; add Dingo tool link in README_zh-CN.md 2025-09-18 18:46:50 +08:00
myhloli
5b73b89ceb fix: add handling for reference text blocks in draw_bbox.py 2025-09-18 17:24:15 +08:00
myhloli
c5b2926c7b fix: extend text block handling to include reference text in draw_bbox.py 2025-09-18 17:23:11 +08:00
jinghuan-Chen
8bb8b715c1 Avoid cropping empty images. 2025-09-18 17:08:40 +08:00
myhloli
3ca520a3fe feat: implement dynamic batch size calculation based on GPU memory in vlm_analyze.py 2025-09-18 14:55:34 +08:00
myhloli
ba36a94aa0 fix: streamline model argument handling in server.py 2025-09-18 01:14:16 +08:00
myhloli
11ebb47891 fix: remove redundant model_path checks for vllm backends in vlm_analyze.py 2025-09-18 00:17:09 +08:00
myhloli
dd8dd5197b fix: correct variable usage for language guessing in code block formatting 2025-09-17 23:56:16 +08:00
myhloli
7a71cfe288 feat: add support for vllm-async-engine backend in vlm_analyze.py 2025-09-17 22:58:47 +08:00
myhloli
bba31191a4 fix: update backend handling to enforce correct usage of vlm engines in sync and async modes 2025-09-17 22:43:44 +08:00
Xiaomeng Zhao
9041f04588 Merge pull request #18 from myhloli/vlm_2.5
Vlm 2.5
2025-09-17 21:53:26 +08:00
Xiaomeng Zhao
69a9d11b0b Merge pull request #3489 from e06084/dev
docs: README add dingo link
2025-09-17 21:51:06 +08:00
chupei
36e7267ce1 docs: README add dingo link 2025-09-17 20:31:23 +08:00
myhloli
14f347d613 feat: add code_content_clean function to sanitize Markdown code blocks 2025-09-17 19:20:34 +08:00
myhloli
6ea2cfeb21 fix: update MinerU version references in enum_class.py and header.html 2025-09-17 16:48:08 +08:00
myhloli
078099f19d feat: enhance language guessing for code blocks by integrating guess_lang into line structure 2025-09-17 16:03:27 +08:00
myhloli
25d4a4588a fix: specify version range for Magika dependency in pyproject.toml 2025-09-17 00:46:35 +08:00
myhloli
679dad3aac fix: streamline temporary file handling for image and PDF processing in fast_api.py 2025-09-17 00:41:37 +08:00
myhloli
e60da65cca feat: enhance file type detection using Magika for improved suffix guessing 2025-09-17 00:19:44 +08:00
myhloli
f081d36a3a feat: implement language guessing for code blocks using Magika 2025-09-16 23:40:51 +08:00
myhloli
c74e712918 fix: correct language guessing in code block formatting in vlm_middle_json_mkcontent.py 2025-09-16 22:19:44 +08:00
myhloli
f2b944ab06 fix: enhance language guessing for code blocks in VLM processing 2025-09-16 21:43:18 +08:00
myhloli
2e945adcc0 docs: update output_files.md to reflect significant changes in VLM backend output for version 2.5 2025-09-16 19:38:57 +08:00
myhloli
39eaf31fb9 docs: update output_files.md to reflect significant changes in VLM backend output for version 2.5 2025-09-16 19:02:50 +08:00
myhloli
7717534ea7 fix: remove unused import of list_iterator from draw_bbox.py 2025-09-16 01:30:09 +08:00
Xiaomeng Zhao
6166b98cd4 Merge pull request #17 from myhloli/dev
fix: adjust overlap area ratio for image and table spans in span_block_fix
2025-09-15 20:48:43 +08:00
Xiaomeng Zhao
a02ab97ea0 Merge pull request #3473 from myhloli/dev
fix: adjust overlap area ratio for image and table spans in span_block_fix
2025-09-15 20:46:36 +08:00
myhloli
beadb7a689 fix: adjust overlap area ratio for image and table spans in span_block_fix 2025-09-15 19:22:57 +08:00
myhloli
de5449fd40 refactor: consolidate output processing into a single _process_output function 2025-09-15 11:24:21 +08:00
myhloli
76f74e7c70 fix: enhance draw_bbox functionality to include list items in bounding box drawing 2025-09-15 02:32:09 +08:00
myhloli
efbf1422c6 fix: update header title to reflect MinerU version 2.5 2025-09-15 02:04:21 +08:00
myhloli
3ec6479462 fix: update backend comment to reflect renaming from sglang-engine to vlm-vllm-engine 2025-09-15 02:00:58 +08:00
myhloli
80e6f4ded4 fix: update coverage omit list to reflect renaming from sglang to vllm 2025-09-15 01:54:49 +08:00
myhloli
376b5d924a Merge remote-tracking branch 'origin/vlm_2.5' into vlm_2.5 2025-09-15 01:52:36 +08:00
myhloli
6608615012 docs: update demo.py to reflect changes in backend naming from sglang to vllm 2025-09-15 01:52:14 +08:00
myhloli
12dea70793 Merge remote-tracking branch 'origin/vlm_2.5' into vlm_2.5 2025-09-15 01:50:37 +08:00
myhloli
96a0a45c9a fix: update sys.argv to include 'serve' for vllm server startup 2025-09-15 01:50:14 +08:00
myhloli
745954ca08 docs: update references from sglang to vllm in documentation and configuration files 2025-09-15 01:45:35 +08:00
myhloli
e120a90d11 docs: update documentation for vllm integration and parameter optimization 2025-09-15 01:25:23 +08:00
myhloli
8c75e0fce2 docs: update changelog for version 2.5.0 release 2025-09-14 23:10:51 +08:00
Xiaomeng Zhao
978c94f680 Merge pull request #16 from myhloli/dev
Dev
2025-09-14 23:00:57 +08:00
myhloli
c4eae4e0ef fix: add timing log for model predictor retrieval in vlm_analyze.py 2025-09-14 22:28:52 +08:00
myhloli
411f3b7855 fix: comment out debug logging in vllm_analyze.py 2025-09-12 15:10:04 +08:00
myhloli
60e257e5f1 fix: set default values for gpu_memory_utilization and model in vllm_analyze.py 2025-09-12 15:07:51 +08:00
myhloli
20e1dfe984 fix: enhance model initialization for transformers and vllm-engine backends in vlm_analyze.py 2025-09-12 11:39:28 +08:00
myhloli
f2553dd89a fix: add default arguments for port and GPU memory utilization in server.py 2025-09-12 11:07:51 +08:00
myhloli
b35c3345c0 fix: add default arguments for port and GPU memory utilization in server.py 2025-09-12 11:07:23 +08:00
myhloli
af3ee06aa3 fix: update import path for vllm entrypoint in server.py 2025-09-12 10:23:16 +08:00
myhloli
4f6ac22ce6 fix: update import path for vllm entrypoint in server.py 2025-09-12 10:19:03 +08:00
myhloli
0f47a22bb3 refactor: update option names and server script for vLLM engine integration 2025-09-12 10:08:57 +08:00
myhloli
2ca6ee1708 refactor: rename server files and update model path handling for vllm integration 2025-09-12 10:01:23 +08:00
myhloli
55eaad224d feat: add support for vlm 2.5 2025-09-11 19:42:51 +08:00
Xiaomeng Zhao
bb94e73fc9 Merge pull request #3451 from opendatalab/master
master->dev
2025-09-10 14:46:22 +08:00
myhloli
70f62046e7 Update version.py with new version 2025-09-10 06:44:57 +00:00
Xiaomeng Zhao
fd38cdff80 Merge pull request #3450 from opendatalab/release-2.2.2
Release 2.2.2
2025-09-10 14:44:05 +08:00
Xiaomeng Zhao
d30f762ac8 Merge pull request #3449 from myhloli/dev
docs: update changelog for version 2.2.2 release
2025-09-10 14:43:09 +08:00
myhloli
f65ff12eea docs: update changelog for version 2.2.2 release 2025-09-10 14:42:28 +08:00
myhloli
8b8ac3e62e docs: update changelog for version 2.2.2 release 2025-09-10 14:33:30 +08:00
Xiaomeng Zhao
473154c2b3 Merge pull request #3448 from myhloli/dev
fix: improve HTML code handling and logging in batch_analyze and main…
2025-09-10 14:31:19 +08:00
myhloli
e2fd491760 fix: improve HTML code handling and logging in batch_analyze and main modules 2025-09-10 14:27:50 +08:00
Xiaomeng Zhao
c29e2d0ca2 Merge pull request #3438 from opendatalab/master
master->dev
2025-09-08 10:59:47 +08:00
myhloli
a5687394d5 Update version.py with new version 2025-09-08 02:54:47 +00:00
Xiaomeng Zhao
13819c0596 Merge pull request #3437 from opendatalab/release-2.2.1
Release 2.2.1
2025-09-08 10:53:01 +08:00
Xiaomeng Zhao
d775f76eec Merge pull request #3435 from myhloli/dev
feat: add new models to download list
2025-09-08 10:51:52 +08:00
myhloli
5dd73dbcca Merge remote-tracking branch 'origin/dev' into dev 2025-09-08 10:46:08 +08:00
myhloli
3eda0d10a0 feat: add new models to download list and update changelog for version 2.2.1 2025-09-08 10:45:21 +08:00
Xiaomeng Zhao
e0c3cbb34a Merge pull request #3429 from opendatalab/master
master->dev
2025-09-05 19:23:07 +08:00
myhloli
d2fcdd0fa4 Update version.py with new version 2025-09-05 11:21:16 +00:00
Xiaomeng Zhao
af887d63c0 Merge pull request #3428 from opendatalab/release-2.2.0
Release 2.2.0
2025-09-05 19:19:42 +08:00
Xiaomeng Zhao
b5a69c5258 Merge pull request #3427 from opendatalab/dev
Dev
2025-09-05 19:19:15 +08:00
Xiaomeng Zhao
ecfb4a03fb Merge pull request #3426 from myhloli/dev
feat: remove legacy `pipeline_old_linux` installation option for improved support
2025-09-05 19:18:44 +08:00
myhloli
0bbefad67b feat: remove legacy pipeline_old_linux installation option for improved support 2025-09-05 19:17:24 +08:00
Xiaomeng Zhao
a9f28b4436 Merge pull request #3425 from opendatalab/release-2.2.0
Release 2.2.0
2025-09-05 19:10:08 +08:00
Xiaomeng Zhao
05a9920ffe Merge pull request #3424 from myhloli/dev
Dev
2025-09-05 19:01:48 +08:00
myhloli
d96c3fc4d2 feat: add TableStructureRec link to README 2025-09-05 19:00:58 +08:00
myhloli
64e12cb924 feat: update changelog for version 2.2.0 with new table recognition model and OCR enhancements 2025-09-05 18:46:07 +08:00
myhloli
29e37933aa feat: update changelog for version 2.2.0 with new table recognition model and OCR enhancements 2025-09-05 18:45:49 +08:00
myhloli
287e5b6cfc feat: add bbox field to content blocks for bounding box coordinates 2025-09-05 18:17:36 +08:00
myhloli
9003f50a22 feat: enhance make_blocks_to_content_list to include page size and bbox calculations 2025-09-05 18:10:57 +08:00
myhloli
cb4d1cceb3 feat: update dtype handling based on transformers version 2025-09-05 17:31:54 +08:00
myhloli
b670ebdd63 feat: add support for Thai and Greek languages in OCR language options 2025-09-05 16:45:44 +08:00
myhloli
82323549c3 feat: add support for Thai and Greek languages in OCR language options 2025-09-05 16:44:20 +08:00
Xiaomeng Zhao
f24a30714f Merge pull request #3419 from Sidney233/dev
Feat: add ppocrv5 el, th, env5
2025-09-05 16:37:58 +08:00
Xiaomeng Zhao
c497e4b1fc Merge pull request #3421 from myhloli/dev
Dev
2025-09-05 16:33:42 +08:00
Xiaomeng Zhao
719154fe21 Merge pull request #3374 from zhanluxianshen/fix-err-logs-for-multi_gpu_v2
fix error logs for multi_gpu endpoint.
2025-09-05 16:32:29 +08:00
Xiaomeng Zhao
6ac9ebb3da Merge pull request #15 from myhloli/feature_table_merge
feat: implement cross-page table merging functionality
2025-09-05 16:24:05 +08:00
myhloli
30dce2063f feat: implement cross-page table merging functionality 2025-09-05 16:20:48 +08:00
Sidney233
41017331c6 Merge branch 'opendatalab:dev' into dev 2025-09-05 14:44:38 +08:00
Sidney233
d9618d9107 Merge remote-tracking branch 'origin/dev' into dev 2025-09-05 14:37:51 +08:00
Sidney233
3da1ed8443 feat: add ppocr el, th env5 2025-09-05 14:37:32 +08:00
Xiaomeng Zhao
32a4bed808 Merge pull request #3407 from opendatalab/master
master->dev
2025-09-02 14:52:13 +08:00
Xiaomeng Zhao
244a1d9161 Merge pull request #3406 from opendatalab/update_wechat_url
fix: update WeChat URL in documentation
2025-09-02 14:51:24 +08:00
myhloli
9b0c88a489 fix: update WeChat URL in documentation 2025-09-02 14:50:36 +08:00
Xiaomeng Zhao
45a8ca81e8 Merge pull request #3405 from myhloli/dev
Dev
2025-09-01 20:49:18 +08:00
myhloli
06a158e56b fix: enhance wired table prediction logic and improve classification criteria 2025-09-01 19:33:37 +08:00
myhloli
bae254fa72 feat: refactor table image cropping and processing for improved clarity and functionality 2025-08-30 03:13:44 +08:00
myhloli
aa39e61fef fix: add version check for PyTorch to prevent errors in batch prediction 2025-08-29 18:45:44 +08:00
Xiaomeng Zhao
733cbca6dd Merge pull request #3397 from myhloli/dev
Dev
2025-08-29 18:35:47 +08:00
myhloli
3bff7cd017 Merge remote-tracking branch 'origin/dev' into dev 2025-08-29 18:33:49 +08:00
myhloli
7a2286890b fix: enhance OCR text matching criteria for improved accuracy in predictions 2025-08-29 18:33:27 +08:00
Xiaomeng Zhao
1bf3817be7 Merge pull request #3396 from myhloli/dev
Dev
2025-08-29 18:25:57 +08:00
Xiaomeng Zhao
b8730977e5 Merge pull request #14 from myhloli/feat_rapid_table_v3
Feat rapid table v3
2025-08-29 18:25:22 +08:00
myhloli
28d0360ec3 fix: remove commented-out code for wireless table prediction to enhance code clarity 2025-08-29 18:16:22 +08:00
myhloli
d0e68a3018 feat: implement RapidTable model for enhanced table structure prediction and batch processing 2025-08-29 18:15:25 +08:00
myhloli
2c8accf9d0 fix: comment out unused code for wireless table prediction to improve readability 2025-08-29 14:29:24 +08:00
myhloli
62fb477beb fix: update predict method to handle optional OCR results and improve image processing flow 2025-08-29 14:28:51 +08:00
myhloli
65a8097704 fix: refine logic for selecting table model based on cell analysis criteria 2025-08-29 01:45:57 +08:00
myhloli
254a0a483b fix: add HTML parsing for wireless and wired table results to improve cell analysis 2025-08-28 20:18:42 +08:00
myhloli
c65fb7de8a fix: integrate clean_vram function to manage GPU memory usage during predictions 2025-08-28 19:05:49 +08:00
myhloli
33f4a21ae8 fix: adjust OCR box coordinates and confidence threshold for improved accuracy 2025-08-28 15:45:38 +08:00
Xiaomeng Zhao
7cb9d67ea4 Merge pull request #3388 from Sidney233/dev
Test: update test file
2025-08-28 14:17:31 +08:00
Sidney233
a746bac44b Merge branch 'opendatalab:dev' into dev 2025-08-28 12:09:32 +08:00
Sidney233
98a7d66d28 test: update test.pdf 2025-08-28 12:07:11 +08:00
Xiaomeng Zhao
650ff3c683 Merge branch 'opendatalab:dev' into dev 2025-08-28 01:11:49 +08:00
myhloli
a084153411 fix: update OCR progress description to include language context 2025-08-27 19:12:48 +08:00
myhloli
90047f9bd5 fix: disable progress bar in predictions for cleaner output 2025-08-27 16:58:30 +08:00
Sidney233
43e5b8da0e test: update test.pdf 2025-08-27 16:53:24 +08:00
myhloli
7d3a76f80f fix: include attention mask in model input for improved inference 2025-08-27 16:35:32 +08:00
Xiaomeng Zhao
b3a3c2ccd2 Merge pull request #3384 from myhloli/dev
fix: add onnxruntime dependency to pyproject.toml
2025-08-27 16:15:27 +08:00
myhloli
7a1603978f fix: add onnxruntime dependency to pyproject.toml 2025-08-27 16:14:20 +08:00
Xiaomeng Zhao
3e88f78c5c Merge pull request #3383 from myhloli/dev
Dev
2025-08-27 16:07:40 +08:00
Xiaomeng Zhao
cd9ae14d1e Merge branch 'dev' into dev 2025-08-27 16:07:32 +08:00
Xiaomeng Zhao
959163a5b5 Merge pull request #3382 from myhloli/feat_table_batch
Feat table batch
2025-08-27 16:04:21 +08:00
myhloli
f1fb900ea5 fix: enhance batch analysis with table orientation classification and prediction improvements 2025-08-27 16:03:34 +08:00
myhloli
be587e31fa fix: update dependencies in pyproject.toml to include openai package and adjust torch version constraint 2025-08-27 15:51:10 +08:00
myhloli
396cf8b81d fix: update batch analysis to use PIL images for layout prediction and adjust rotation condition 2025-08-26 20:01:09 +08:00
myhloli
0bb4238114 fix: update batch_analyze to use PIL images for layout prediction 2025-08-26 19:14:04 +08:00
myhloli
10bb08c875 fix: update plot_rec_box_with_logic_info to accept image directly instead of file path 2025-08-26 17:30:13 +08:00
Xiaomeng Zhao
98c8761361 Merge pull request #3371 from Sidney233/dev
Feat: add batch predict for table rec
2025-08-26 15:39:35 +08:00
Sidney233
65b2ddc07f fix: copilot suggestion 2025-08-26 15:30:19 +08:00
Sidney233
832d28e512 fix: add tdqm for wired table, remove import, remove img ori cls lang group 2025-08-26 15:00:05 +08:00
Sidney233
0641cc07f7 fix: merge dev 2025-08-26 10:42:12 +08:00
zhanluxianshen
1671e68367 fix error logs for multi_gpu endpoint.
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-08-26 10:26:10 +08:00
Sidney233
320cd60c81 fix: merge dev 2025-08-25 20:26:04 +08:00
Sidney233
c8ff2f2778 Merge branch 'cxz-dev' into dev
# Conflicts:
#	mineru/backend/pipeline/batch_analyze.py
#	tests/unittest/test_e2e.py
2025-08-25 19:57:52 +08:00
Xiaomeng Zhao
3a33acaeb0 Merge pull request #3372 from myhloli/refactor_pil_to_numpy
Refactor pil to numpy
2025-08-25 18:57:37 +08:00
myhloli
2fcffcb0af fix: refactor image handling to use numpy arrays instead of PIL images 2025-08-25 18:53:05 +08:00
Sidney233
ffb2ffcd76 feat: remove rapid_table dependency 2025-08-25 17:23:05 +08:00
Sidney233
da1431558a feat: add batch predict for table ocr 2025-08-25 17:18:07 +08:00
myhloli
51a6077876 fix: remove outdated todo comment in model_utils.py 2025-08-25 11:00:38 +08:00
Xiaomeng Zhao
532cfd20f8 Merge pull request #3365 from myhloli/dev
Dev
2025-08-22 19:20:50 +08:00
myhloli
8634e0b51c fix: refactor bounding box handling in remove_overlaps_min_blocks function 2025-08-22 18:59:21 +08:00
myhloli
cf2b74b030 fix: rename doclayout_yolo.py to doclayoutyolo.py and add visualization method for bounding box results 2025-08-21 19:33:50 +08:00
myhloli
c8a17c5f98 fix: rename doclayout_yolo.py to doclayoutyolo.py and add visualization method for bounding box results 2025-08-21 19:24:51 +08:00
Sidney233
512f40fdfb feat: add batch predict for slanet_plus 2025-08-21 18:18:56 +08:00
Sidney233
193d5d8e44 feat: add batch predict for slanet_plus 2025-08-21 18:18:52 +08:00
Sidney233
17a7758fee fix: remove unitable 2025-08-20 11:17:37 +08:00
Sidney233
9d10bb13f5 fix: remove unitable 2025-08-20 11:15:41 +08:00
Sidney233
58cccf0825 fix: remove unitable 2025-08-20 11:13:36 +08:00
Xiaomeng Zhao
3b9221de18 Merge pull request #3281 from loveRhythm1990/feat/return-zip
feat: support return parse result by zip
2025-08-19 14:45:26 +08:00
lr90
4a237eef36 feat: support return parse result by zip 2025-08-19 12:19:21 +08:00
Xiaomeng Zhao
a5b09b8479 Merge pull request #3340 from gary-Shen/dev 2025-08-18 19:29:43 +08:00
gary-Shen
2803ad4dd6 fix: update ga code 2025-08-18 19:27:01 +08:00
Xiaomeng Zhao
a73de7746a Merge pull request #3339 from gary-Shen/dev 2025-08-18 19:25:52 +08:00
gary-Shen
b0d40dd236 feat: add google analyze code 2025-08-18 19:24:00 +08:00
Xiaomeng Zhao
dde265f148 Merge pull request #3336 from Sidney233/test_fix
test: fix assertion and path
2025-08-18 18:47:26 +08:00
Sidney233
aad384f2e7 fix: remove rapid-table 2025-08-18 17:57:03 +08:00
Sidney233
60dd005dd5 test: fix assertion and path 2025-08-18 17:26:39 +08:00
Sidney233
dee840afc7 test: fix assertion and path 2025-08-18 17:23:47 +08:00
Xiaomeng Zhao
aeacfc8d50 Merge pull request #3335 from myhloli/dev
Dev
2025-08-18 17:03:00 +08:00
myhloli
b54dc524bf fix: update base image version in Dockerfile and documentation 2025-08-18 16:58:26 +08:00
myhloli
c3db578247 fix: add line sorting logic for bounding boxes in PDF processing 2025-08-18 16:48:30 +08:00
Sidney233
2b7eb741dc Merge branch 'opendatalab:dev' into dev 2025-08-18 16:29:16 +08:00
Sidney233
27e2ea44b1 fix: invalid escape sequence '\d' 2025-08-18 16:28:00 +08:00
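The invalid-escape fix above addresses a common Python pitfall: `'\d'` inside a plain string literal is not a valid escape sequence (a DeprecationWarning, and a SyntaxWarning since Python 3.12). Regex patterns should be written as raw strings; a minimal illustration (the pattern shown is generic, not the project's actual regex):

```python
import re

# '\d' in a normal string literal triggers an invalid-escape warning;
# a raw string passes the backslash through to the regex engine unchanged.
pattern = re.compile(r"\d+")
print(pattern.findall("page 12 of 34"))  # ['12', '34']
```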
Sidney233
f0126cfc23 feat: replace rapid-table with local code 2025-08-18 16:19:10 +08:00
Sidney233
efa4a5b7f1 Merge branch 'cxz-dev' into dev
# Conflicts:
#	mineru/backend/pipeline/batch_analyze.py
#	mineru/model/ori_cls/paddle_ori_cls.py
#	mineru/model/table/cls/paddle_table_cls.py
2025-08-18 15:13:14 +08:00
Xiaomeng Zhao
bb933ff9f6 Merge pull request #3325 from myhloli/dev
fix: improve bounding box update logic based on score comparison
2025-08-15 19:04:05 +08:00
myhloli
80c1b995bc fix: refine bounding box removal logic based on score comparison 2025-08-15 19:03:30 +08:00
myhloli
ca155df027 fix: improve bounding box update logic based on score comparison 2025-08-15 18:48:20 +08:00
Xiaomeng Zhao
c85d5d271a Merge pull request #3324 from myhloli/dev
fix: validate box coordinates and aspect ratio for improved image cro…
2025-08-15 17:52:53 +08:00
myhloli
91826697c9 fix: validate box coordinates and aspect ratio for improved image cropping 2025-08-15 17:51:45 +08:00
Xiaomeng Zhao
0137913fd2 Merge pull request #3321 from myhloli/dev
Dev
2025-08-15 15:22:27 +08:00
myhloli
a086cfad0d fix: adjust threshold for wired model detection based on wireless model results 2025-08-15 11:16:00 +08:00
myhloli
de41fa5859 Update version.py with new version 2025-08-14 11:59:59 +00:00
Xiaomeng Zhao
30b698ecc5 Merge pull request #3315 from opendatalab/fix-torch_2_8
Fix torch 2 8
2025-08-14 19:58:32 +08:00
Xiaomeng Zhao
5c00dcaee7 Merge pull request #3314 from myhloli/fix-torch_2_8
fix: add support for Torch 2.8.0 and MPS devices in batch analysis
2025-08-14 19:57:19 +08:00
myhloli
c7e456033d fix: add support for Torch 2.8.0 and MPS devices in batch analysis 2025-08-14 19:56:05 +08:00
myhloli
7676543ff8 fix: adjust resolution grouping stride for improved image normalization 2025-08-14 19:46:01 +08:00
myhloli
9ca1bf232b fix: reduce recognition batch size for improved performance 2025-08-14 18:58:51 +08:00
myhloli
6081d01da1 fix: remove outdated pipeline dependencies from pyproject.toml 2025-08-13 18:06:16 +08:00
myhloli
8c8ac2c667 fix: update torch dependency version for compatibility with latest features 2025-08-13 18:03:23 +08:00
myhloli
8d2871e827 fix: update text formatting in table recovery logic for improved output consistency 2025-08-13 17:46:56 +08:00
myhloli
1cd85ccfae fix: update text formatting in table recovery logic for improved output consistency 2025-08-13 17:43:52 +08:00
myhloli
1e18361273 feat: add table structure recognition and recovery modules for improved table processing 2025-08-13 17:40:19 +08:00
myhloli
866ad6ae51 fix: refactor image processing and table classification logic for improved accuracy 2025-08-13 17:37:20 +08:00
Sidney233
0a91a69596 test: fix path 2025-08-13 14:42:10 +08:00
Sidney233
84a9e5f13e test: fix path 2025-08-13 14:40:55 +08:00
myhloli
ac15ad0e61 feat: implement visualization for table results and enhance bounding box drawing logic 2025-08-11 18:31:29 +08:00
Sidney233
dfbccbc624 feat: Add batch prediction for image rotation classification and table classification 2025-08-11 17:25:37 +08:00
myhloli
e96eac481e fix: enhance table bounding box handling and improve overlap removal logic 2025-08-11 16:04:23 +08:00
myhloli
97d6ff955d fix: adjust condition for wireless model comparison to improve table detection logic 2025-08-07 19:13:01 +08:00
Xiaomeng Zhao
7e5b1a45d7 Merge pull request #3277 from myhloli/dev
Dev
2025-08-07 18:22:50 +08:00
Xiaomeng Zhao
4b71e1855b Merge branch 'opendatalab:dev' into dev 2025-08-07 18:22:26 +08:00
myhloli
29754f7c17 fix: update image loading to support BASE64 format for improved compatibility 2025-08-07 17:34:10 +08:00
Xiaomeng Zhao
5db0f7b9ce Merge pull request #3275 from myhloli/dev
Dev
2025-08-07 17:09:23 +08:00
myhloli
139d9b15bc Merge remote-tracking branch 'origin/dev' into dev 2025-08-07 17:08:11 +08:00
myhloli
d58a440ffc fix: refine confidence handling in predictions and update hash utility for image processing 2025-08-07 17:07:52 +08:00
Xiaomeng Zhao
5f9cb12fd8 Update mineru/model/table/rec/unet_table/unet_table.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-07 16:58:52 +08:00
Xiaomeng Zhao
26aa3d81e2 Update mineru/utils/pdf_reader.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-07 16:55:16 +08:00
myhloli
34360ba642 fix: adjust condition for block count to enhance detection logic 2025-08-07 16:43:43 +08:00
myhloli
a01c4bbe66 fix: adjust confidence thresholds and refine table detection logic for improved accuracy 2025-08-07 14:50:21 +08:00
myhloli
706eadbf5d fix: enhance table prediction logic by incorporating table classification score and refining model selection criteria 2025-08-07 00:45:27 +08:00
myhloli
c702302684 fix: enhance table prediction logic by incorporating table classification score and refining model selection criteria 2025-08-06 23:23:50 +08:00
Xiaomeng Zhao
eb67c36a81 Merge branch 'opendatalab:dev' into dev 2025-08-06 14:47:55 +08:00
Xiaomeng Zhao
948e4aefff Merge pull request #3269 from yeahjack/fix/rotation-indirectobject
fix: cast PDF /Rotate value to int to avoid IndirectObject type error
2025-08-06 14:47:12 +08:00
Xiaomeng Zhao
0eec0d90b5 Update mineru/utils/draw_bbox.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-06 14:43:19 +08:00
Xiaomeng Zhao
6978f09be0 Update mineru/utils/draw_bbox.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-06 14:42:34 +08:00
Yijie Xu
f3c933770a fix: handle IndirectObject in /Rotate field of PDF pages 2025-08-05 16:53:01 +00:00
myhloli
1d55925954 fix: add enable_merge_det_boxes parameter to model initialization for improved box merging control 2025-08-05 18:24:39 +08:00
myhloli
e00c090616 fix: update default batch size for inference to improve performance 2025-08-05 11:31:58 +08:00
myhloli
21ce40a90c fix: refine image rotation logic and replace logging with logger for better consistency 2025-08-05 11:24:37 +08:00
myhloli
11c4a0c6b6 fix: improve image orientation handling in batch analysis and streamline table classification process 2025-08-05 11:05:48 +08:00
myhloli
433b37589c fix: update vertical count condition for text orientation and adjust default batch size for inference 2025-08-05 11:05:19 +08:00
myhloli
e429c5a840 fix: refactor PDF processing logic to ensure proper resource management and improve error handling 2025-08-05 02:07:23 +08:00
myhloli
ff11e602fc feat: enhance image processing by introducing ImageType enum and updating related functions 2025-08-05 02:06:45 +08:00
myhloli
f5afd61eb0 fix: simplify calculate_center_rotate_angle function and remove unused variables 2025-08-05 02:05:42 +08:00
myhloli
2ccad698d1 Merge remote-tracking branch 'origin/dev' into dev 2025-08-04 21:44:58 +08:00
myhloli
99d1fddc8c fix: update default DPI and dimensions for image processing functions in pdf_reader.py 2025-08-04 21:44:36 +08:00
Xiaomeng Zhao
8b6d217efe Merge pull request #3242 from myhloli/dev
feat:  Improve the parsing accuracy of wired tables
2025-08-04 20:02:00 +08:00
Xiaomeng Zhao
324681f77c Update mineru/model/table/rec/unet_table/wired_table_rec_utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-04 20:01:30 +08:00
myhloli
133984a514 fix: specify version range for scikit-image in pyproject.toml 2025-08-04 19:48:48 +08:00
myhloli
fc5179ce5e feat: add initial project structure and CLI entry points for mineru 2025-08-04 19:40:05 +08:00
myhloli
86b20f3283 fix: improve docstring for gather_ocr_list_by_row and refactor image loading logic in wired_table_rec_utils.py 2025-08-04 18:13:08 +08:00
myhloli
0298de844f fix: replace hardcoded model name with constant in batch_analyze.py 2025-08-04 17:57:34 +08:00
myhloli
5be16aa4cb fix: remove unused imports and clean up code in multiple files 2025-08-04 17:51:46 +08:00
myhloli
2a8e6f9d45 fix: increase confidence threshold for table classification in paddle_table_cls.py 2025-08-04 16:28:32 +08:00
myhloli
fdf7e4f771 fix: replace hardcoded threshold values with constants in table_recover_utils.py 2025-08-04 16:28:09 +08:00
myhloli
c98cba1e30 fix: remove unused imports and clean up code in wired_table_rec_utils.py 2025-08-04 15:43:27 +08:00
Xiaomeng Zhao
7d0c39df3b Merge branch 'opendatalab:dev' into dev 2025-08-01 18:42:56 +08:00
Xiaomeng Zhao
d6c8199326 Merge pull request #3241 from opendatalab/master
master->dev
2025-08-01 18:42:31 +08:00
myhloli
be4f3de32b Update version.py with new version 2025-08-01 10:36:35 +00:00
Xiaomeng Zhao
176bf3d845 Merge pull request #3240 from opendatalab/release-2.1.10
Release 2.1.10
2025-08-01 18:27:30 +08:00
Xiaomeng Zhao
a50616b089 Update mineru/utils/model_utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-01 18:27:09 +08:00
Xiaomeng Zhao
da56746668 Merge pull request #3239 from myhloli/fix-block-coverage
fix: optimize block removal logic by tracking processed indices
2025-08-01 18:19:56 +08:00
myhloli
259ab11e74 fix: optimize block removal logic by tracking processed indices 2025-08-01 18:13:23 +08:00
myhloli
64c7ac0083 fix: update changelog for version 2.1.10 release and add block overlap issue fix 2025-08-01 16:34:06 +08:00
Xiaomeng Zhao
20e8d7fcd7 Merge branch 'opendatalab:dev' into dev 2025-08-01 15:17:31 +08:00
Xiaomeng Zhao
4d0b6a0513 Merge pull request #3237 from myhloli/fix-block-coverage
Fix block coverage
2025-08-01 15:16:54 +08:00
myhloli
74726662ce fix: remove unused import for low confidence span overlap removal 2025-08-01 15:09:28 +08:00
myhloli
990782acfc fix: add logic to remove low confidence overlapping blocks in layout results 2025-08-01 15:04:16 +08:00
myhloli
2ce4352a25 fix: add logic to remove low confidence overlapping blocks in layout results 2025-08-01 14:45:58 +08:00
myhloli
e76f29639f fix: improve text block detection logic by simplifying overlap checks 2025-08-01 10:28:55 +08:00
myhloli
60d32b8ac1 fix: enhance block coverage detection by merging OCR and table results 2025-08-01 01:28:57 +08:00
myhloli
6a9035bdf9 refactor: Optimizing the judgment logic of the text graph 2025-07-31 23:28:58 +08:00
myhloli
865b44a517 feat: enhance table classification logic and add OCR detection flag 2025-07-31 20:41:07 +08:00
Xiaomeng Zhao
6db7df1cde Merge branch 'opendatalab:dev' into dev 2025-07-31 18:27:00 +08:00
Xiaomeng Zhao
1dbe356fa4 Merge pull request #3229 from opendatalab/master
master->dev
2025-07-31 18:26:20 +08:00
Xiaomeng Zhao
a67ff8707e Merge pull request #3228 from myhloli/update_colab
chore: update Colab demo link in documentation
2025-07-31 18:24:27 +08:00
myhloli
7b701e4907 chore: update Colab demo link in documentation 2025-07-31 18:19:54 +08:00
myhloli
ebb5e317db feat: add scikit-image to project dependencies 2025-07-31 16:42:40 +08:00
myhloli
bc17d77fa9 Merge remote-tracking branch 'origin/dev' into dev 2025-07-31 15:50:16 +08:00
myhloli
bf5b750565 feat: add new models for table classification and orientation detection 2025-07-31 15:50:04 +08:00
Xiaomeng Zhao
1b6ed5d0a0 Merge pull request #3224 from opendatalab/dev
Dev
2025-07-31 15:44:02 +08:00
Xiaomeng Zhao
d85b5e86cd Merge pull request #3222 from SirlyDreamer/master
Set sglang base image to DaoCloud mirror for China Users.
2025-07-31 15:43:04 +08:00
SirlyDreamer
9c6778d5ad Set sglang base image to DaoCloud mirror for China Users. 2025-07-31 06:17:22 +00:00
Xiaomeng Zhao
22f5b0b4b4 Merge pull request #3218 from opendatalab/master
master->dev
2025-07-30 17:53:45 +08:00
Xiaomeng Zhao
2b3b4331f2 Merge pull request #3214 from opendatalab/release-2.1.9
Release 2.1.9
2025-07-30 17:01:27 +08:00
myhloli
0de4075586 Update version.py with new version 2025-07-30 08:59:58 +00:00
Xiaomeng Zhao
80cbeb2a3f Merge pull request #3213 from myhloli/dev
fix: add support for additional keyword arguments in modeling_unimer_mbart.py to adaptation transformers 4.54.1
2025-07-30 16:56:05 +08:00
myhloli
be489ab780 chore: update changelog for version 2.1.9 release with transformers 4.54.1 adaptation 2025-07-30 16:53:15 +08:00
myhloli
0041919d22 chore: update Dockerfile and documentation for sglang version 0.4.9.post6 2025-07-30 16:51:44 +08:00
myhloli
f7affcefb7 fix: add support for additional keyword arguments in modeling_unimer_mbart.py to adaptation transformers 4.54.1 2025-07-30 16:42:54 +08:00
Xiaomeng Zhao
a11f6de370 Merge pull request #3198 from opendatalab/master
master->dev
2025-07-28 19:30:03 +08:00
myhloli
93a3bc776b Update version.py with new version 2025-07-28 11:24:37 +00:00
Xiaomeng Zhao
7ceee7f7bf Merge pull request #3197 from opendatalab/release-2.1.8
Release 2.1.8
2025-07-28 19:18:03 +08:00
299 changed files with 24555 additions and 30429 deletions


@@ -122,7 +122,21 @@ body:
       #multiple: false
       options:
         -
-        - "2.0.x"
+        - "`<2.2.0`"
+        - "`2.2.x`"
+        - "`>=2.5`"
     validations:
       required: true
+  - type: dropdown
+    id: backend_name
+    attributes:
+      label: Backend name | 解析后端
+      #multiple: false
+      options:
+        -
+        - "vlm"
+        - "pipeline"
+    validations:
+      required: true

README.md

@@ -1,7 +1,7 @@
<div align="center" xmlns="http://www.w3.org/1999/html">
<!-- logo -->
<p align="center">
-<img src="docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
+<img src="https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
</p>
<!-- icon -->
@@ -17,8 +17,9 @@
[![OpenDataLab](https://img.shields.io/badge/webapp_on_mineru.net-blue?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTM0IiBoZWlnaHQ9IjEzNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cGF0aCBkPSJtMTIyLDljMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0idXJsKCNhKSIvPjxwYXRoIGQ9Im0xMjIsOWMwLDUtNCw5LTksOXMtOS00LTktOSw0LTksOS05LDksNCw5LDl6IiBmaWxsPSIjMDEwMTAxIi8+PHBhdGggZD0ibTkxLDE4YzAsNS00LDktOSw5cy05LTQtOS05LDQtOSw5LTksOSw0LDksOXoiIGZpbGw9InVybCgjYikiLz48cGF0aCBkPSJtOTEsMThjMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0iIzAxMDEwMSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0idXJsKCNjKSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0iIzAxMDEwMSIvPjxkZWZzPjxsaW5lYXJHcmFkaWVudCBpZD0iYSIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYiIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYyIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZT
JlMmUiLz48L2xpbmVhckdyYWRpZW50PjwvZGVmcz48L3N2Zz4=&labelColor=white)](https://mineru.net/OpenSourceTools/Extractor?source=github)
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/3b3a00a4a0a61577b6c30f989092d20d/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
@@ -37,50 +38,209 @@
<!-- join us -->
<p align="center">
👋 join us on <a href="https://discord.gg/Tdedn9GTXq" target="_blank">Discord</a> and <a href="http://mineru.space/s/V85Yl" target="_blank">WeChat</a>
👋 join us on <a href="https://discord.gg/Tdedn9GTXq" target="_blank">Discord</a> and <a href="https://mineru.net/community-portal/?aliasId=3c430f94" target="_blank">WeChat</a>
</p>
</div>
# Changelog
- 2025/07/28 version 2.1.8 Released
- `sglang` 0.4.9.post5 version adaptation
- 2025/07/27 version 2.1.7 Released
- `transformers` 4.54.0 version adaptation
- 2025/07/26 2.1.6 Released
- Fixed table parsing issues in handwritten documents when using `vlm` backend
- Fixed visualization box position drift issue when document is rotated #3175
- 2025/07/24 2.1.5 Released
- `sglang` 0.4.9 version adaptation, synchronously upgrading the dockerfile base image to sglang 0.4.9.post3
- 2025/07/23 2.1.4 Released
- Bug Fixes
- Fixed the issue of excessive memory consumption during the `MFR` step in the `pipeline` backend under certain scenarios #2771
- Fixed the inaccurate matching between `image`/`table` and `caption`/`footnote` under certain conditions #3129
- 2025/07/16 2.1.1 Released
- Bug fixes
- Fixed text block content loss issue that could occur in certain `pipeline` scenarios #3005
- Fixed issue where `sglang-client` required unnecessary packages like `torch` #2968
- Updated `dockerfile` to fix incomplete text content parsing due to missing fonts in Linux #2915
- Usability improvements
- Updated `compose.yaml` to facilitate direct startup of `sglang-server`, `mineru-api`, and `mineru-gradio` services
- Launched brand new [online documentation site](https://opendatalab.github.io/MinerU/), simplified readme, providing better documentation experience
- 2025/07/05 Version 2.1.0 Released
- This is the first major update of MinerU 2, which includes a large number of new features and improvements, covering significant performance optimizations, user experience enhancements, and bug fixes. The detailed update contents are as follows:
- **Performance Optimizations:**
- Significantly improved preprocessing speed for documents with specific resolutions (around 2000 pixels on the long side).
- Greatly enhanced post-processing speed when the `pipeline` backend handles batch processing of documents with fewer pages (<10 pages).
- Layout analysis speed of the `pipeline` backend has been increased by approximately 20%.
- **Experience Enhancements:**
- Built-in ready-to-use `fastapi service` and `gradio webui`. For detailed usage instructions, please refer to [Documentation](https://opendatalab.github.io/MinerU/usage/quick_usage/#advanced-usage-via-api-webui-sglang-clientserver).
- Adapted to `sglang` version `0.4.8`, significantly reducing the GPU memory requirements for the `vlm-sglang` backend. It can now run on graphics cards with as little as `8GB GPU memory` (Turing architecture or newer).
- Added transparent parameter passing for all commands related to `sglang`, allowing the `sglang-engine` backend to receive all `sglang` parameters consistently with the `sglang-server`.
- Supports feature extensions based on configuration files, including `custom formula delimiters`, `enabling heading classification`, and `customizing local model directories`. For detailed usage instructions, please refer to [Documentation](https://opendatalab.github.io/MinerU/usage/quick_usage/#extending-mineru-functionality-with-configuration-files).
- **New Features:**
- Updated the `pipeline` backend with the PP-OCRv5 multilingual text recognition model, supporting text recognition in 37 languages such as French, Spanish, Portuguese, Russian, and Korean, with an average accuracy improvement of over 30%. [Details](https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/algorithm/PP-OCRv5/PP-OCRv5_multi_languages.html)
- Introduced limited support for vertical text layout in the `pipeline` backend.
- 2025/11/26 2.6.5 Release
- Added support for a new backend, `vlm-lmdeploy-engine`. Its usage mirrors `vlm-vllm-(async)engine`, but it uses `lmdeploy` as the inference engine and, unlike `vllm`, additionally supports native inference acceleration on Windows.
- 2025/11/04 2.6.4 Release
- Added a timeout for PDF image rendering (default: 300 seconds), configurable via the environment variable `MINERU_PDF_RENDER_TIMEOUT`, to prevent abnormal PDF files from blocking the rendering process for long periods.
- Added CPU thread count options for ONNX models (default: the system CPU core count), configurable via the environment variables `MINERU_INTRA_OP_NUM_THREADS` and `MINERU_INTER_OP_NUM_THREADS`, to reduce CPU resource contention in high-concurrency scenarios.
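Both knobs above are plain environment variables, so they can be set in the shell before launching MinerU. A minimal sketch (the values here are illustrative, and the actual `mineru` invocation is left commented out):

```shell
# Cap PDF page rendering at 120 s instead of the 300 s default
export MINERU_PDF_RENDER_TIMEOUT=120

# Pin the ONNX Runtime thread pools to reduce CPU contention
# when many parse requests run concurrently
export MINERU_INTRA_OP_NUM_THREADS=4
export MINERU_INTER_OP_NUM_THREADS=1

# mineru ...   # then start MinerU as usual in the same shell
echo "timeout=$MINERU_PDF_RENDER_TIMEOUT intra=$MINERU_INTRA_OP_NUM_THREADS inter=$MINERU_INTER_OP_NUM_THREADS"
```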
- 2025/10/31 2.6.3 Release
- Added support for a new backend `vlm-mlx-engine`, enabling MLX-accelerated inference for the MinerU2.5 model on Apple Silicon devices. Compared to the `vlm-transformers` backend, `vlm-mlx-engine` delivers a 100%~200% speed improvement.
- Bug fixes: #3849, #3859
- 2025/10/24 2.6.2 Release
- `pipeline` backend optimizations
- Added experimental support for Chinese formulas, which can be enabled by setting the environment variable `export MINERU_FORMULA_CH_SUPPORT=1`. This feature may cause a slight decrease in MFR speed and failures in recognizing some long formulas. It is recommended to enable it only when parsing Chinese formulas is needed. To disable this feature, set the environment variable to `0`.
- `OCR` speed significantly improved by 200%~300%, thanks to the optimization solution provided by [@cjsdurj](https://github.com/cjsdurj)
- `OCR` models optimized for improved accuracy and coverage of Latin script recognition, and updated Cyrillic, Arabic, Devanagari, Telugu (te), and Tamil (ta) language systems to `ppocr-v5` version, with accuracy improved by over 40% compared to previous models
- `vlm` backend optimizations
- `table_caption` and `table_footnote` matching logic optimized to improve the accuracy of table caption and footnote matching and reading order rationality in scenarios with multiple consecutive tables on a page
- Optimized CPU resource usage during high concurrency when using `vllm` backend, reducing server pressure
- Adapted to `vllm` version 0.11.0
- General optimizations
- Cross-page table merging effect optimized, added support for cross-page continuation table merging, improving table merging effectiveness in multi-column merge scenarios
- Added environment variable configuration option `MINERU_TABLE_MERGE_ENABLE` for table merging feature. Table merging is enabled by default and can be disabled by setting this variable to `0`
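Both feature toggles introduced in this release are ordinary environment variables. A minimal sketch of enabling experimental Chinese formula support while disabling table merging, per the notes above:

```shell
# Opt in to experimental Chinese formula recognition
# (may slightly slow the MFR step)
export MINERU_FORMULA_CH_SUPPORT=1

# Table merging is on by default; set 0 to disable it
export MINERU_TABLE_MERGE_ENABLE=0

echo "formula_ch=$MINERU_FORMULA_CH_SUPPORT table_merge=$MINERU_TABLE_MERGE_ENABLE"
```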
- 2025/09/26 2.5.4 released
- 🎉🎉 The MinerU2.5 [Technical Report](https://arxiv.org/abs/2509.22186) is now available! We welcome you to read it for a comprehensive overview of its model architecture, training strategy, data engineering and evaluation results.
- Fixed an issue where some `PDF` files were mistakenly identified as `AI` files, causing parsing failures
- 2025/09/20 2.5.3 Released
- Dependency version range adjustment to enable Turing and earlier architecture GPUs to use vLLM acceleration for MinerU2.5 model inference.
- `pipeline` backend compatibility fixes for torch 2.8.0.
- Reduced default concurrency for vLLM async backend to lower server pressure and avoid connection closure issues caused by high load.
- More compatibility-related details can be found in the [announcement](https://github.com/opendatalab/MinerU/discussions/3548)
- 2025/09/19 2.5.2 Released
We are officially releasing MinerU2.5, currently the most powerful multimodal large model for document parsing.
With only 1.2B parameters, MinerU2.5's accuracy on the OmniDocBench benchmark comprehensively surpasses top-tier multimodal models like Gemini 2.5 Pro, GPT-4o, and Qwen2.5-VL-72B. It also significantly outperforms leading specialized models such as dots.ocr, MonkeyOCR, and PP-StructureV3.
The model has been released on [HuggingFace](https://huggingface.co/opendatalab/MinerU2.5-2509-1.2B) and [ModelScope](https://modelscope.cn/models/opendatalab/MinerU2.5-2509-1.2B) platforms. Welcome to download and use!
- Core Highlights:
- SOTA Performance with Extreme Efficiency: As a 1.2B model, it achieves State-of-the-Art (SOTA) results that exceed models in the 10B and 100B+ classes, redefining the performance-per-parameter standard in document AI.
- Advanced Architecture for Across-the-Board Leadership: By combining a two-stage inference pipeline (decoupling layout analysis from content recognition) with a native high-resolution architecture, it achieves SOTA performance across five key areas: layout analysis, text recognition, formula recognition, table recognition, and reading order.
- Key Capability Enhancements:
- Layout Detection: Delivers more complete results by accurately covering non-body content like headers, footers, and page numbers. It also provides more precise element localization and natural format reconstruction for lists and references.
- Table Parsing: Drastically improves parsing for challenging cases, including rotated tables, borderless/semi-structured tables, and long/complex tables.
- Formula Recognition: Significantly boosts accuracy for complex, long-form, and hybrid Chinese-English formulas, greatly enhancing the parsing capability for mathematical documents.
Additionally, with the release of vlm 2.5, we have made some adjustments to the repository:
- The vlm backend has been upgraded to version 2.5; it supports the MinerU2.5 model and is no longer compatible with the MinerU2.0-2505-0.9B model. The last version supporting the 2.0 model is mineru-2.2.2.
- VLM inference-related code has been moved to [mineru_vl_utils](https://github.com/opendatalab/mineru-vl-utils), reducing coupling with the main mineru repository and facilitating independent iteration in the future.
- The vlm accelerated inference framework has been switched from `sglang` to `vllm`, achieving full compatibility with the vllm ecosystem, allowing users to use the MinerU2.5 model and accelerated inference on any platform that supports the vllm framework.
- Due to major upgrades in the vlm model supporting more layout types, we have made some adjustments to the structure of the parsing intermediate file `middle.json` and result file `content_list.json`. Please refer to the [documentation](https://opendatalab.github.io/MinerU/reference/output_files/) for details.
Other repository optimizations:
- Removed file extension whitelist validation for input files. When input files are PDF documents or images, there are no longer requirements for file extensions, improving usability.
<details>
<summary>History Log</summary>
<details>
<summary>2025/09/10 2.2.2 Released</summary>
<ul>
<li>Fixed the issue where the new table recognition model would affect the overall parsing task when some table parsing failed</li>
</ul>
</details>
<details>
<summary>2025/09/08 2.2.1 Released</summary>
<ul>
<li>Fixed the issue where some newly added models were not downloaded when using the model download command.</li>
</ul>
</details>
<details>
<summary>2025/09/05 2.2.0 Released</summary>
<ul>
<li>
Major Updates
<ul>
<li>In this version, we focused on improving table parsing accuracy by introducing a new <a href="https://github.com/RapidAI/TableStructureRec">wired table recognition model</a> and a brand-new hybrid table structure parsing algorithm, significantly enhancing the table recognition capabilities of the <code>pipeline</code> backend.</li>
<li>We also added support for cross-page table merging, which is supported by both <code>pipeline</code> and <code>vlm</code> backends, further improving the completeness and accuracy of table parsing.</li>
</ul>
</li>
<li>
Other Updates
<ul>
<li>The <code>pipeline</code> backend now supports 270-degree rotated table parsing, bringing support for table parsing in 0/90/270-degree orientations</li>
<li><code>pipeline</code> added OCR capability support for Thai and Greek, and updated the English OCR model to the latest version. English recognition accuracy improved by 11%, Thai recognition model accuracy is 82.68%, and Greek recognition model accuracy is 89.28% (by PPOCRv5)</li>
<li>Added <code>bbox</code> field (mapped to 0-1000 range) in the output <code>content_list.json</code>, making it convenient for users to directly obtain position information for each content block</li>
<li>Removed the <code>pipeline_old_linux</code> installation option, no longer supporting legacy Linux systems such as <code>CentOS 7</code>, to provide better support for <code>uv</code>'s <code>sync</code>/<code>run</code> commands</li>
</ul>
</li>
</ul>
</details>
<details>
<summary>2025/08/01 2.1.10 Released</summary>
<ul>
<li>Fixed an issue in the <code>pipeline</code> backend where block overlap caused the parsing results to deviate from expectations #3232</li>
</ul>
</details>
<details>
<summary>2025/07/30 2.1.9 Released</summary>
<ul>
<li><code>transformers</code> 4.54.1 version adaptation</li>
</ul>
</details>
<details>
<summary>2025/07/28 2.1.8 Released</summary>
<ul>
<li><code>sglang</code> 0.4.9.post5 version adaptation</li>
</ul>
</details>
<details>
<summary>2025/07/27 2.1.7 Released</summary>
<ul>
<li><code>transformers</code> 4.54.0 version adaptation</li>
</ul>
</details>
<details>
<summary>2025/07/26 2.1.6 Released</summary>
<ul>
<li>Fixed table parsing issues in handwritten documents when using <code>vlm</code> backend</li>
<li>Fixed visualization box position drift issue when document is rotated #3175</li>
</ul>
</details>
<details>
<summary>2025/07/24 2.1.5 Released</summary>
<ul>
<li><code>sglang</code> 0.4.9 version adaptation, synchronously upgrading the dockerfile base image to sglang 0.4.9.post3</li>
</ul>
</details>
<details>
<summary>2025/07/23 2.1.4 Released</summary>
<ul>
<li><strong>Bug Fixes</strong>
<ul>
<li>Fixed the issue of excessive memory consumption during the <code>MFR</code> step in the <code>pipeline</code> backend under certain scenarios #2771</li>
<li>Fixed the inaccurate matching between <code>image</code>/<code>table</code> and <code>caption</code>/<code>footnote</code> under certain conditions #3129</li>
</ul>
</li>
</ul>
</details>
<details>
<summary>2025/07/16 2.1.1 Released</summary>
<ul>
<li><strong>Bug fixes</strong>
<ul>
<li>Fixed text block content loss issue that could occur in certain <code>pipeline</code> scenarios #3005</li>
<li>Fixed issue where <code>sglang-client</code> required unnecessary packages like <code>torch</code> #2968</li>
<li>Updated <code>dockerfile</code> to fix incomplete text content parsing due to missing fonts in Linux #2915</li>
</ul>
</li>
<li><strong>Usability improvements</strong>
<ul>
<li>Updated <code>compose.yaml</code> to facilitate direct startup of <code>sglang-server</code>, <code>mineru-api</code>, and <code>mineru-gradio</code> services</li>
<li>Launched brand new <a href="https://opendatalab.github.io/MinerU/">online documentation site</a>, simplified readme, providing better documentation experience</li>
</ul>
</li>
</ul>
</details>
<details>
<summary>2025/07/05 2.1.0 Released</summary>
<ul>
<li>This is the first major update of MinerU 2, which includes a large number of new features and improvements, covering significant performance optimizations, user experience enhancements, and bug fixes. The detailed update contents are as follows:</li>
<li><strong>Performance Optimizations:</strong>
<ul>
<li>Significantly improved preprocessing speed for documents with specific resolutions (around 2000 pixels on the long side).</li>
<li>Greatly enhanced post-processing speed when the <code>pipeline</code> backend handles batch processing of documents with fewer pages (&lt;10 pages).</li>
<li>Layout analysis speed of the <code>pipeline</code> backend has been increased by approximately 20%.</li>
</ul>
</li>
<li><strong>Experience Enhancements:</strong>
<ul>
<li>Built-in ready-to-use <code>fastapi service</code> and <code>gradio webui</code>. For detailed usage instructions, please refer to <a href="https://opendatalab.github.io/MinerU/usage/quick_usage/#advanced-usage-via-api-webui-sglang-clientserver">Documentation</a>.</li>
<li>Adapted to <code>sglang</code> version <code>0.4.8</code>, significantly reducing the GPU memory requirements for the <code>vlm-sglang</code> backend. It can now run on graphics cards with as little as <code>8GB GPU memory</code> (Turing architecture or newer).</li>
<li>Added transparent parameter passing for all commands related to <code>sglang</code>, allowing the <code>sglang-engine</code> backend to receive all <code>sglang</code> parameters consistently with the <code>sglang-server</code>.</li>
<li>Supports feature extensions based on configuration files, including <code>custom formula delimiters</code>, <code>enabling heading classification</code>, and <code>customizing local model directories</code>. For detailed usage instructions, please refer to <a href="https://opendatalab.github.io/MinerU/usage/quick_usage/#extending-mineru-functionality-with-configuration-files">Documentation</a>.</li>
</ul>
</li>
<li><strong>New Features:</strong>
<ul>
<li>Updated the <code>pipeline</code> backend with the PP-OCRv5 multilingual text recognition model, supporting text recognition in 37 languages such as French, Spanish, Portuguese, Russian, and Korean, with an average accuracy improvement of over 30%. <a href="https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/algorithm/PP-OCRv5/PP-OCRv5_multi_languages.html">Details</a></li>
<li>Introduced limited support for vertical text layout in the <code>pipeline</code> backend.</li>
</ul>
</li>
</ul>
</details>
<details>
<summary>2025/06/20 2.0.6 Released</summary>
<ul>
@@ -434,7 +594,7 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
- Automatically recognize and convert formulas in the document to LaTeX format.
- Automatically recognize and convert tables in the document to HTML format.
- Automatically detect scanned PDFs and garbled PDFs and enable OCR functionality.
- OCR supports detection and recognition of 84 languages.
- OCR supports detection and recognition of 109 languages.
- Supports multiple output formats, such as multimodal and NLP Markdown, JSON sorted by reading order, and rich intermediate formats.
- Supports various visualization results, including layout visualization and span visualization, for efficient confirmation of output quality.
- Supports running in a pure CPU environment, and also supports GPU(CUDA)/NPU(CANN)/MPS acceleration
@@ -471,41 +631,75 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
> In non-mainline environments, due to the diversity of hardware and software configurations, as well as third-party dependency compatibility issues, we cannot guarantee 100% project availability. Therefore, for users who wish to use this project in non-recommended environments, we suggest carefully reading the documentation and FAQ first. Most issues already have corresponding solutions in the FAQ. We also encourage community feedback to help us gradually expand support.
<table>
<tr>
<td>Parsing Backend</td>
<td>pipeline</td>
<td>vlm-transformers</td>
<td>vlm-sglang</td>
</tr>
<tr>
<td>Operating System</td>
<td>Linux / Windows / macOS</td>
<td>Linux / Windows</td>
<td>Linux / Windows (via WSL2)</td>
</tr>
<tr>
<td>CPU Inference Support</td>
<td>✅</td>
<td colspan="2">❌</td>
</tr>
<tr>
<td>GPU Requirements</td>
<td>Turing architecture and later, 6GB+ VRAM or Apple Silicon</td>
<td colspan="2">Turing architecture and later, 8GB+ VRAM</td>
</tr>
<tr>
<td>Memory Requirements</td>
<td colspan="3">Minimum 16GB+, recommended 32GB+</td>
</tr>
<tr>
<td>Disk Space Requirements</td>
<td colspan="3">20GB+, SSD recommended</td>
</tr>
<tr>
<td>Python Version</td>
<td colspan="3">3.10-3.13</td>
</tr>
<thead>
<tr>
<th rowspan="2">Parsing Backend</th>
<th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
</tr>
<tr>
<th>transformers</th>
<th>mlx-engine</th>
<th>vllm-engine / <br>vllm-async-engine</th>
<th>lmdeploy-engine</th>
<th>http-client</th>
</tr>
</thead>
<tbody>
<tr>
<th>Backend Features</th>
<td>Fast, no hallucinations</td>
<td>Good compatibility, <br>but slower</td>
<td>Faster than transformers</td>
<td>Fast, compatible with the vLLM ecosystem</td>
<td>Fast, compatible with the LMDeploy ecosystem</td>
<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
</tr>
<tr>
<th>Operating System</th>
<td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
<td style="text-align:center;">macOS<sup>3</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup> </td>
<td>Any</td>
</tr>
<tr>
<th>CPU inference support</th>
<td colspan="2" style="text-align:center;">✅</td>
<td colspan="3" style="text-align:center;">❌</td>
<td>Not required</td>
</tr>
<tr>
<th>GPU Requirements</th><td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
<td>Apple Silicon</td>
<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
<td>Not required</td>
</tr>
<tr>
<th>Memory Requirements</th>
<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
<td>8 GB</td>
</tr>
<tr>
<th>Disk Space Requirements</th>
<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
<td>2 GB</td>
</tr>
<tr>
<th>Python Version</th>
<td colspan="6" style="text-align:center;">3.10-3.13<sup>7</sup></td>
</tr>
</tbody>
</table>
<sup>1</sup> Accuracy metric is the End-to-End Evaluation Overall score of OmniDocBench (v1.5), tested on the latest `MinerU` version.
<sup>2</sup> Linux supports only distributions released in 2019 or later.
<sup>3</sup> MLX requires macOS 13.5 or later; macOS 14.0 or higher is recommended.
<sup>4</sup> On Windows, vLLM is supported via WSL2 (Windows Subsystem for Linux).
<sup>5</sup> On Windows, LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, run it via WSL2 instead.
<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
<sup>7</sup> Windows + LMDeploy only supports Python versions 3.10~3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.
### Install MinerU
@@ -524,8 +718,8 @@ uv pip install -e .[core]
```
> [!TIP]
> `mineru[core]` includes all core features except `sglang` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `sglang` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
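After installing `mineru[core]`, a first parse run might look like the sketch below. The `-p`/`-o`/`-b` flags are assumptions based on the MinerU CLI documentation, not guaranteed by this README; the command is only assembled and printed here, not executed:

```shell
# Hypothetical quick-start invocation (flag names are assumptions)
INPUT=demo.pdf          # any PDF or image file; extension checks were removed
OUTPUT=./output         # directory for markdown / JSON results
BACKEND=pipeline        # or a vlm-* backend, per the comparison table above

CMD="mineru -p $INPUT -o $OUTPUT -b $BACKEND"
echo "$CMD"
```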
---
@@ -553,8 +747,8 @@ You can use MinerU for PDF parsing through various methods such as command line,
- [x] Handwritten Text Recognition
- [x] Vertical Text Recognition
- [x] Latin Accent Mark Recognition
- [ ] Code block recognition in the main text
- [ ] [Chemical formula recognition](docs/chemical_knowledge_introduction/introduction.pdf)
- [x] Code block recognition in the main text
- [x] [Chemical formula recognition](docs/chemical_knowledge_introduction/introduction.pdf)(mineru.net)
- [ ] Geometric shape recognition
# Known Issues
@@ -572,7 +766,7 @@ You can use MinerU for PDF parsing through various methods such as command line,
- If you encounter any issues during usage, you can first check the [FAQ](https://opendatalab.github.io/MinerU/faq/) for solutions.
- If your issue remains unresolved, you may also use [DeepWiki](https://deepwiki.com/opendatalab/MinerU) to interact with an AI assistant, which can address most common problems.
- If you still cannot resolve the issue, you are welcome to join our community via [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](http://mineru.space/s/V85Yl) to discuss with other users and developers.
- If you still cannot resolve the issue, you are welcome to join our community via [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](https://mineru.net/community-portal/?aliasId=3c430f94) to discuss with other users and developers.
# All Thanks To Our Contributors
@@ -592,6 +786,7 @@ Currently, some models in this project are trained based on YOLO. However, since
- [DocLayout-YOLO](https://github.com/opendatalab/DocLayout-YOLO)
- [UniMERNet](https://github.com/opendatalab/UniMERNet)
- [RapidTable](https://github.com/RapidAI/RapidTable)
- [TableStructureRec](https://github.com/RapidAI/TableStructureRec)
- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
- [PaddleOCR2Pytorch](https://github.com/frotms/PaddleOCR2Pytorch)
- [layoutreader](https://github.com/ppaanngggg/layoutreader)
@@ -601,10 +796,21 @@ Currently, some models in this project are trained based on YOLO. However, since
- [pdftext](https://github.com/datalab-to/pdftext)
- [pdfminer.six](https://github.com/pdfminer/pdfminer.six)
- [pypdf](https://github.com/py-pdf/pypdf)
- [magika](https://github.com/google/magika)
# Citation
```bibtex
@misc{niu2025mineru25decoupledvisionlanguagemodel,
title={MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing},
author={Junbo Niu and Zheng Liu and Zhuangcheng Gu and Bin Wang and Linke Ouyang and Zhiyuan Zhao and Tao Chu and Tianyao He and Fan Wu and Qintong Zhang and Zhenjiang Jin and Guang Liang and Rui Zhang and Wenzheng Zhang and Yuan Qu and Zhifei Ren and Yuefeng Sun and Yuanhong Zheng and Dongsheng Ma and Zirui Tang and Boyu Niu and Ziyang Miao and Hejun Dong and Siyi Qian and Junyuan Zhang and Jingzhou Chen and Fangdong Wang and Xiaomeng Zhao and Liqun Wei and Wei Li and Shasha Wang and Ruiliang Xu and Yuanyuan Cao and Lu Chen and Qianqian Wu and Huaiyu Gu and Lindong Lu and Keming Wang and Dechen Lin and Guanlin Shen and Xuanhe Zhou and Linfeng Zhang and Yuhang Zang and Xiaoyi Dong and Jiaqi Wang and Bo Zhang and Lei Bai and Pei Chu and Weijia Li and Jiang Wu and Lijun Wu and Zhenxiang Li and Guangyu Wang and Zhongying Tu and Chao Xu and Kai Chen and Yu Qiao and Bowen Zhou and Dahua Lin and Wentao Zhang and Conghui He},
year={2025},
eprint={2509.22186},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.22186},
}
@misc{wang2024mineruopensourcesolutionprecise,
title={MinerU: An Open-Source Solution for Precise Document Content Extraction},
author={Bin Wang and Chao Xu and Xiaomeng Zhao and Linke Ouyang and Fan Wu and Zhiyuan Zhao and Rui Xu and Kaiwen Liu and Yuan Qu and Fukai Shang and Bo Zhang and Liqun Wei and Zhihao Sui and Wei Li and Botian Shi and Yu Qiao and Dahua Lin and Conghui He},
@@ -643,3 +849,4 @@ Currently, some models in this project are trained based on YOLO. However, since
- [OmniDocBench (A Comprehensive Benchmark for Document Parsing and Evaluation)](https://github.com/opendatalab/OmniDocBench)
- [Magic-HTML (Mixed web page extraction tool)](https://github.com/opendatalab/magic-html)
- [Magic-Doc (Fast speed ppt/pptx/doc/docx/pdf extraction tool)](https://github.com/InternLM/magic-doc)
- [Dingo: A Comprehensive AI Data Quality Evaluation Tool](https://github.com/MigoXLab/dingo)


@@ -1,7 +1,7 @@
<div align="center" xmlns="http://www.w3.org/1999/html">
<!-- logo -->
<p align="center">
<img src="docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
<img src="https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docs/images/MinerU-logo.png" width="300px" style="vertical-align:middle;">
</p>
<!-- icon -->
@@ -17,8 +17,9 @@
[![OpenDataLab](https://img.shields.io/badge/webapp_on_mineru.net-blue?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTM0IiBoZWlnaHQ9IjEzNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cGF0aCBkPSJtMTIyLDljMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0idXJsKCNhKSIvPjxwYXRoIGQ9Im0xMjIsOWMwLDUtNCw5LTksOXMtOS00LTktOSw0LTksOS05LDksNCw5LDl6IiBmaWxsPSIjMDEwMTAxIi8+PHBhdGggZD0ibTkxLDE4YzAsNS00LDktOSw5cy05LTQtOS05LDQtOSw5LTksOSw0LDksOXoiIGZpbGw9InVybCgjYikiLz48cGF0aCBkPSJtOTEsMThjMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0iIzAxMDEwMSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0idXJsKCNjKSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0iIzAxMDEwMSIvPjxkZWZzPjxsaW5lYXJHcmFkaWVudCBpZD0iYSIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYiIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYyIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZT
JlMmUiLz48L2xpbmVhckdyYWRpZW50PjwvZGVmcz48L3N2Zz4=&labelColor=white)](https://mineru.net/OpenSourceTools/Extractor?source=github)
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
@@ -37,50 +38,211 @@
<!-- join us -->
<p align="center">
👋 join us on <a href="https://discord.gg/Tdedn9GTXq" target="_blank">Discord</a> and <a href="https://mineru.net/community-portal/?aliasId=3c430f94" target="_blank">WeChat</a>
</p>
</div>
# Changelog
- 2025/11/26 2.6.5 released
  - Added a new backend, `vlm-lmdeploy-engine`. It is used like `vlm-vllm-(async)engine` but uses `lmdeploy` as the inference engine; compared with `vllm`, it additionally supports native inference acceleration on the Windows platform.
  - Added adaptation support for the domestic accelerator platforms `Ascend/npu`, `T-Head/ppu`, and `MetaX/maca`. Users can run the `pipeline` and `vlm` models on these platforms and accelerate vlm inference with the `vllm`/`lmdeploy` engines; see [Other accelerator adaptations](https://opendatalab.github.io/MinerU/zh/usage/) for details.
    - Adapting domestic platforms is not easy. We have done our best to ensure the completeness and stability of the adaptations, but some stability/compatibility and accuracy-alignment issues may remain; please choose a suitable environment and scenario according to the status indicators on the adaptation documentation pages.
    - If you encounter any issue not covered by the documentation while using these platform adaptations, please report it in the [designated thread](https://github.com/opendatalab/MinerU/discussions/4064) in discussions so that other users can find the solution.
- 2025/11/04 2.6.4 released
  - Added a timeout for rendering PDF pages to images (default 300 seconds, configurable via the `MINERU_PDF_RENDER_TIMEOUT` environment variable) to prevent abnormal PDF files from blocking the rendering process for a long time.
  - Added CPU thread-count options for onnx models (default: the number of system CPU cores), configurable via the `MINERU_INTRA_OP_NUM_THREADS` and `MINERU_INTER_OP_NUM_THREADS` environment variables, to reduce CPU resource contention in high-concurrency scenarios.
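These are plain environment variables with defaults; how MinerU consumes them internally is not shown here, but the read-with-default pattern can be sketched as follows (variable names are from the changelog, the surrounding code is our assumption):

```python
import os

# Defaults documented in the changelog: 300 s render timeout,
# and CPU core count for both onnxruntime thread settings.
pdf_render_timeout = int(os.environ.get("MINERU_PDF_RENDER_TIMEOUT", "300"))
intra_threads = int(os.environ.get("MINERU_INTRA_OP_NUM_THREADS", str(os.cpu_count())))
inter_threads = int(os.environ.get("MINERU_INTER_OP_NUM_THREADS", str(os.cpu_count())))
```

Setting the variables before launching (e.g. `export MINERU_PDF_RENDER_TIMEOUT=120`) overrides the defaults.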
- 2025/10/31 2.6.3 released
  - Added a new backend, `vlm-mlx-engine`, which uses `MLX` to accelerate `MinerU2.5` model inference on Apple Silicon devices; compared with the `vlm-transformers` backend, `vlm-mlx-engine` is 100%–200% faster.
  - Bug fixes: #3849 #3859
- 2025/10/24 2.6.2 released
  - `pipeline` backend optimizations
    - Added experimental support for Chinese formulas, enabled via the environment variable `export MINERU_FORMULA_CH_SUPPORT=1`. This feature may slightly slow down MFR and cause some long formulas to fail to parse, so enable it only when you need to parse Chinese formulas; set the variable to `0` to turn it off.
    - `OCR` speed improved substantially, by 200%–300%; thanks to [@cjsdurj](https://github.com/cjsdurj) for the optimization.
    - The `OCR` models improve the accuracy and coverage of Latin-script recognition, and the Cyrillic, Arabic, Devanagari, Telugu (te), and Tamil (ta) models are updated to `ppocr-v5`, with accuracy improved by over 40% compared with the previous generation.
  - `vlm` backend optimizations
    - Improved the matching logic for `table_caption` and `table_footnote`, increasing matching accuracy and reading-order quality when a page contains multiple consecutive tables.
    - Reduced CPU usage under high concurrency when using the `vllm` backend, lowering server-side pressure.
    - Adapted to `vllm` 0.11.0.
  - General optimizations
    - Improved cross-page table merging, with new support for merging continued tables across pages and better results in multi-column merge scenarios.
    - Added the `MINERU_TABLE_MERGE_ENABLE` environment variable for table merging; merging is enabled by default and can be disabled by setting the variable to `0`.
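Both switches above follow the same on/off convention ("1" enables, "0" disables). A minimal sketch of parsing such flags (the helper function is ours for illustration, not MinerU's API):

```python
import os

def env_flag(name: str, default: bool) -> bool:
    """Return True unless the variable is explicitly set to "0"."""
    return os.environ.get(name, "1" if default else "0") != "0"

# Mirrors the documented usage:
#   export MINERU_FORMULA_CH_SUPPORT=1   (off by default)
#   export MINERU_TABLE_MERGE_ENABLE=0   (on by default)
os.environ["MINERU_FORMULA_CH_SUPPORT"] = "1"
os.environ["MINERU_TABLE_MERGE_ENABLE"] = "0"
```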
- 2025/09/26 2.5.4 released
  - 🎉🎉 The MinerU2.5 [technical report](https://arxiv.org/abs/2509.22186) is now available; read it for a complete picture of the model architecture, training strategy, data engineering, and evaluation results.
  - Fixed an issue where some `pdf` files were detected as `ai` files and could not be parsed.
- 2025/09/20 2.5.3 released
  - Adjusted dependency version ranges so that GPUs with Turing and earlier architectures can use vLLM to accelerate MinerU2.5 inference.
  - Several `pipeline` backend compatibility fixes for torch 2.8.0.
  - Lowered the default concurrency of the vLLM async backend to reduce server-side pressure and avoid connection-closed errors under heavy load.
  - See the [announcement](https://github.com/opendatalab/MinerU/discussions/3547) for more compatibility details.
- 2025/09/19 2.5.2 released
  We are officially releasing MinerU2.5, currently the strongest multimodal large model for document parsing. With only 1.2B parameters, MinerU2.5 surpasses top multimodal models such as Gemini2.5-Pro, GPT-4o, and Qwen2.5-VL-72B on the OmniDocBench document-parsing benchmark, and clearly outperforms mainstream document-parsing-specific models (such as dots.ocr, MonkeyOCR, and PP-StructureV3).
  The model is available on [HuggingFace](https://huggingface.co/opendatalab/MinerU2.5-2509-1.2B) and [ModelScope](https://modelscope.cn/models/opendatalab/MinerU2.5-2509-1.2B); you are welcome to download and try it!
  - Core highlights
    - Extreme efficiency, SOTA performance: at a lightweight 1.2B scale it surpasses models with tens or even hundreds of billions of parameters, redefining the efficiency frontier of document parsing.
    - Advanced architecture, leading across the board: by combining two-stage inference (decoupling layout analysis from content recognition) with a native high-resolution architecture, it reaches SOTA in layout analysis, text recognition, formula recognition, table recognition, and reading order.
  - Key capability improvements
    - Layout detection: more complete results that accurately cover non-body content such as headers, footers, and page numbers, with more precise element localization and more natural formatting (e.g. lists, references).
    - Table parsing: greatly improved handling of rotated tables, borderless or partially ruled tables, and long, complex tables.
    - Formula recognition: markedly higher accuracy on mixed Chinese-English and complex long formulas, substantially improving the parsing of math-heavy documents.
  Alongside the vlm 2.5 release, we have made several repository changes:
  - The vlm backend is upgraded to 2.5 and supports the MinerU2.5 model; it is no longer compatible with the MinerU2.0-2505-0.9B model (the last version supporting the 2.0 model is mineru-2.2.2).
  - The vlm inference code has moved to [mineru_vl_utils](https://github.com/opendatalab/mineru-vl-utils), reducing coupling with the main mineru repository and allowing independent iteration.
  - The vlm inference-acceleration framework switched from `sglang` to `vllm`, with full compatibility with the vllm ecosystem, so users can run and accelerate the MinerU2.5 model on any platform that supports the vllm framework.
  - Because the vlm model upgrade supports more layout types, we adjusted the structure of the intermediate file `middle.json` and the result file `content_list.json`; see the [documentation](https://opendatalab.github.io/MinerU/zh/reference/output_files/) for details.
  Other repository improvements:
  - Removed the suffix whitelist check on input files: when the input is a PDF document or an image, the file extension no longer matters, improving usability.
<details>
<summary>History</summary>
<details>
<summary>2025/09/10 2.2.2 released</summary>
<ul>
<li>Fixed an issue where the new table-recognition model failing on some tables could break the whole parsing task</li>
</ul>
</details>
<details>
<summary>2025/09/08 2.2.1 released</summary>
<ul>
<li>Fixed an issue where some newly added models were not downloaded when using the model-download command</li>
</ul>
</details>
<details>
<summary>2025/09/05 2.2.0 released</summary>
<ul>
<li>
Major updates
<ul>
<li>This release focuses on table-parsing accuracy: by introducing a new <a href="https://github.com/RapidAI/TableStructureRec">wired-table recognition model</a> and a new hybrid table-structure parsing algorithm, the table recognition capability of the <code>pipeline</code> backend is significantly improved.</li>
<li>We also added support for merging tables across pages, available in both the <code>pipeline</code> and <code>vlm</code> backends, further improving the completeness and accuracy of table parsing.</li>
</ul>
</li>
<li>
Other updates
<ul>
<li>The <code>pipeline</code> backend can now parse tables rotated 270 degrees, supporting tables in the 0/90/270-degree orientations</li>
<li><code>pipeline</code> adds OCR support for Thai and Greek and updates the English OCR model to the latest version (English recognition accuracy +11%, Thai model accuracy 82.68%, Greek model accuracy 89.28%) (by PPOCRv5)</li>
<li>The output <code>content_list.json</code> now includes a <code>bbox</code> field (mapped to the 0-1000 range), making it easy to get the position of each content block directly</li>
<li>Removed the <code>pipeline_old_linux</code> install extra; older Linux distributions such as <code>Centos 7</code> are no longer supported, allowing better support for the <code>uv</code> <code>sync</code>/<code>run</code> commands</li>
</ul>
</li>
</ul>
</details>
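The <code>bbox</code> field mentioned in 2.2.0 maps pixel coordinates to a page-relative 0-1000 range. A minimal sketch of that mapping (function name and rounding choice are ours, for illustration only):

```python
def normalize_bbox(bbox, page_width, page_height):
    """Map a pixel-space [x0, y0, x1, y1] box to the 0-1000 range."""
    x0, y0, x1, y1 = bbox
    return [
        round(x0 / page_width * 1000),
        round(y0 / page_height * 1000),
        round(x1 / page_width * 1000),
        round(y1 / page_height * 1000),
    ]

# On an A4-size page (595 x 842 pt), the page's center box maps to [0, 0, 500, 500]:
print(normalize_bbox([0, 0, 297.5, 421], 595, 842))
```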
<details>
<summary>2025/08/01 2.1.10 released</summary>
<ul>
<li>Fixed parsing results that differed from expectations in the <code>pipeline</code> backend due to block overlap #3232</li>
</ul>
</details>
<details>
<summary>2025/07/30 2.1.9 released</summary>
<ul>
<li>Adapted to <code>transformers</code> 4.54.1</li>
</ul>
</details>
<details>
<summary>2025/07/28 2.1.8 released</summary>
<ul>
<li>Adapted to <code>sglang</code> 0.4.9.post5</li>
</ul>
</details>
<details>
<summary>2025/07/27 2.1.7 released</summary>
<ul>
<li>Adapted to <code>transformers</code> 4.54.0</li>
</ul>
</details>
<details>
<summary>2025/07/26 2.1.6 released</summary>
<ul>
<li>Fixed abnormal tables when the <code>vlm</code> backend parses some handwritten documents</li>
<li>Fixed drift of visualization boxes when documents are rotated #3175</li>
</ul>
</details>
<details>
<summary>2025/07/24 2.1.5 released</summary>
<ul>
<li>Adapted to <code>sglang</code> 0.4.9 and upgraded the dockerfile base image to sglang 0.4.9.post3</li>
</ul>
</details>
<details>
<summary>2025/07/23 2.1.4 released</summary>
<ul>
<li><strong>Bug fixes</strong>
<ul>
<li>Fixed excessive VRAM usage in the <code>MFR</code> step of the <code>pipeline</code> backend in some cases #2771</li>
<li>Fixed inaccurate matching between <code>image</code>/<code>table</code> and <code>caption</code>/<code>footnote</code> in some cases #3129</li>
</ul>
</li>
</ul>
</details>
<details>
<summary>2025/07/16 2.1.1 released</summary>
<ul>
<li><strong>Bug fixes</strong>
<ul>
<li>Fixed text-block content loss that could occur in <code>pipeline</code> in some cases #3005</li>
<li>Fixed <code>sglang-client</code> requiring unnecessary packages such as <code>torch</code> #2968</li>
<li>Updated the <code>dockerfile</code> to fix incomplete parsed text caused by missing fonts on Linux #2915</li>
</ul>
</li>
<li><strong>Usability updates</strong>
<ul>
<li>Updated <code>compose.yaml</code> so users can directly start the <code>sglang-server</code>, <code>mineru-api</code>, and <code>mineru-gradio</code> services</li>
<li>Launched the new <a href="https://opendatalab.github.io/MinerU/zh/">online documentation site</a>, simplifying the readme and providing a better documentation experience</li>
</ul>
</li>
</ul>
</details>
<details>
<summary>2025/07/05 2.1.0 released</summary>
<p>This is the first major update of MinerU 2, with many new features and improvements, covering performance optimizations, usability improvements, and bug fixes:</p>
<ul>
<li><strong>Performance optimizations:</strong>
<ul>
<li>Greatly improved preprocessing speed for documents at certain resolutions (long side around 2000 pixels)</li>
<li>Greatly improved post-processing speed when the <code>pipeline</code> backend batch-processes many short documents (&lt;10 pages)</li>
<li>Layout analysis in the <code>pipeline</code> backend is about 20% faster</li>
</ul>
</li>
<li><strong>Usability improvements:</strong>
<ul>
<li>Built-in, ready-to-use <code>fastapi service</code> and <code>gradio webui</code>; see the <a href="https://opendatalab.github.io/MinerU/zh/usage/quick_usage/#apiwebuisglang-clientserver">documentation</a> for details</li>
<li><code>sglang</code> is adapted to <code>0.4.8</code>, greatly reducing the VRAM requirement of the <code>vlm-sglang</code> backend, which can now run on GPUs with as little as <code>8G VRAM</code> (Turing and later architectures)</li>
<li>All commands now pass <code>sglang</code> parameters through, so the <code>sglang-engine</code> backend accepts all <code>sglang</code> parameters, consistent with <code>sglang-server</code></li>
<li>Configuration-file-based feature extensions, including <code>custom formula delimiters</code>, <code>heading-level detection</code>, and <code>custom local model directories</code>; see the <a href="https://opendatalab.github.io/MinerU/zh/usage/quick_usage/#mineru_1">documentation</a> for details</li>
</ul>
</li>
<li><strong>New features:</strong>
<ul>
<li>The <code>pipeline</code> backend updates the PP-OCRv5 multilingual text recognition models, supporting recognition in 37 languages including French, Spanish, Portuguese, Russian, and Korean, with average accuracy improved by over 30%. <a href="https://paddlepaddle.github.io/PaddleOCR/latest/version3.x/algorithm/PP-OCRv5/PP-OCRv5_multi_languages.html">Details</a></li>
<li>The <code>pipeline</code> backend adds limited support for vertical text</li>
</ul>
</li>
</ul>
</details>
<details>
<summary>2025/06/20 2.0.6 released</summary>
<ul>
@@ -423,7 +585,7 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
- Automatically recognizes formulas in documents and converts them to LaTeX
- Automatically recognizes tables in documents and converts them to HTML
- Automatically detects scanned and garbled PDFs and enables OCR
- OCR supports detection and recognition of 109 languages
- Supports multiple output formats, such as multimodal and NLP Markdown, reading-order-sorted JSON, and an information-rich intermediate format
- Supports multiple visualization outputs, including layout and span visualizations, for efficiently verifying output quality
- Runs in CPU-only environments and supports GPU (CUDA) / NPU (CANN) / MPS acceleration
@@ -458,42 +620,80 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
>
> In non-mainline environments, due to the diversity of hardware and software configurations and third-party dependency compatibility issues, we cannot guarantee the project is 100% usable. For users who wish to run the project in non-recommended environments, we suggest carefully reading the documentation and FAQ first; most issues already have solutions in the FAQ. We also encourage community feedback so that we can gradually broaden support.
<table>
<thead>
<tr>
<th rowspan="2">Parsing backend</th>
<th rowspan="2">pipeline <br> (accuracy<sup>1</sup> 82+)</th>
<th colspan="5">vlm (accuracy<sup>1</sup> 90+)</th>
</tr>
<tr>
<th>transformers</th>
<th>mlx-engine</th>
<th>vllm-engine / <br>vllm-async-engine</th>
<th>lmdeploy-engine</th>
<th>http-client</th>
</tr>
</thead>
<tbody>
<tr>
<th>Backend features</th>
<td>Fast, no hallucinations</td>
<td>Good compatibility, slower</td>
<td>Faster than transformers</td>
<td>Fast, compatible with the vllm ecosystem</td>
<td>Fast, compatible with the lmdeploy ecosystem</td>
<td>For OpenAI-compatible servers<sup>6</sup></td>
</tr>
<tr>
<th>Operating system</th>
<td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
<td style="text-align:center;">macOS<sup>3</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup> </td>
<td>Any</td>
</tr>
<tr>
<th>CPU inference support</th>
<td colspan="2" style="text-align:center;">✅</td>
<td colspan="3" style="text-align:center;">❌</td>
<td >Not required</td>
</tr>
<tr>
<th>GPU requirements</th><td colspan="2" style="text-align:center;">Volta or later architecture with 6 GB+ VRAM, or Apple Silicon</td>
<td>Apple Silicon</td>
<td colspan="2" style="text-align:center;">Volta or later architecture with 8 GB+ VRAM</td>
<td>Not required</td>
</tr>
<tr>
<th>Memory requirements</th>
<td colspan="5" style="text-align:center;">16 GB minimum, 32 GB recommended</td>
<td>8GB</td>
</tr>
<tr>
<th>磁盘空间要求</th>
<td colspan="5" style="text-align:center;">20GB以上, 推荐使用SSD</td>
<td>2GB</td>
</tr>
<tr>
<th>Python version</th>
<td colspan="6" style="text-align:center;">3.10-3.13<sup>7</sup></td>
</tr>
</tbody>
</table>
<sup>1</sup> Accuracy is the End-to-End Evaluation Overall score on OmniDocBench (v1.5), measured with the latest `MinerU` version
<sup>2</sup> Linux: only distributions released in 2019 or later are supported
<sup>3</sup> MLX requires macOS 13.5 or later; 14.0+ is recommended
<sup>4</sup> Windows vLLM is supported via WSL2 (Windows Subsystem for Linux)
<sup>5</sup> Windows LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if speed matters, run it via WSL2
<sup>6</sup> A server compatible with the OpenAI API, e.g. a local model server deployed with inference frameworks such as `vLLM`/`SGLang`/`LMDeploy`, or a remote model service
<sup>7</sup> Windows + LMDeploy: the key dependency `ray` does not support Python 3.13 on Windows, so only Python 3.10-3.12 is supported there
> [!TIP]
> Beyond the mainstream environments and platforms above, we also collect platform support reported by community users; see [Other accelerator adaptations](https://opendatalab.github.io/MinerU/zh/usage/) for details.
> If you would like to share your adaptation experience with the community, feel free to post in [show-and-tell](https://github.com/opendatalab/MinerU/discussions/categories/show-and-tell) or submit a PR to the [Other accelerator adaptations](https://github.com/opendatalab/MinerU/tree/master/docs/zh/usage/acceleration_cards) docs.
### Install MinerU
@@ -512,8 +712,8 @@ uv pip install -e .[core] -i https://mirrors.aliyun.com/pypi/simple
```
> [!TIP]
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, is compatible with Windows / Linux / macOS, and suits the vast majority of users.
> If you need `vLLM`/`LMDeploy` to accelerate VLM model inference, or want to install a lightweight client on edge devices, see the [extension modules installation guide](https://opendatalab.github.io/MinerU/zh/quick_start/extension_modules/).
---
@@ -541,8 +741,8 @@ mineru -p <input_path> -o <output_path>
- [x] Handwritten text recognition
- [x] Vertical text recognition
- [x] Latin accented character recognition
- [x] [Chemical formula recognition](docs/chemical_knowledge_introduction/introduction.pdf) (https://mineru.net)
- [x] Code block recognition in body text
- [ ] Chart content recognition
# Known Issues
@@ -560,7 +760,7 @@ mineru -p <input_path> -o <output_path>
- If you run into problems, first check whether the [FAQ](https://opendatalab.github.io/MinerU/zh/faq/) has an answer.
- If that does not solve your problem, you can also talk to the AI assistant via [DeepWiki](https://deepwiki.com/opendatalab/MinerU), which resolves most common issues.
- If you still cannot solve the problem, join the community via [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](https://mineru.net/community-portal/?aliasId=3c430f94) to talk with other users and developers.
# All Thanks To Our Contributors
@@ -580,6 +780,7 @@ mineru -p <input_path> -o <output_path>
- [DocLayout-YOLO](https://github.com/opendatalab/DocLayout-YOLO)
- [UniMERNet](https://github.com/opendatalab/UniMERNet)
- [RapidTable](https://github.com/RapidAI/RapidTable)
- [TableStructureRec](https://github.com/RapidAI/TableStructureRec)
- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
- [PaddleOCR2Pytorch](https://github.com/frotms/PaddleOCR2Pytorch)
- [layoutreader](https://github.com/ppaanngggg/layoutreader)
@@ -589,10 +790,21 @@ mineru -p <input_path> -o <output_path>
- [pdftext](https://github.com/datalab-to/pdftext)
- [pdfminer.six](https://github.com/pdfminer/pdfminer.six)
- [pypdf](https://github.com/py-pdf/pypdf)
- [magika](https://github.com/google/magika)
# Citation
```bibtex
@misc{niu2025mineru25decoupledvisionlanguagemodel,
title={MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing},
author={Junbo Niu and Zheng Liu and Zhuangcheng Gu and Bin Wang and Linke Ouyang and Zhiyuan Zhao and Tao Chu and Tianyao He and Fan Wu and Qintong Zhang and Zhenjiang Jin and Guang Liang and Rui Zhang and Wenzheng Zhang and Yuan Qu and Zhifei Ren and Yuefeng Sun and Yuanhong Zheng and Dongsheng Ma and Zirui Tang and Boyu Niu and Ziyang Miao and Hejun Dong and Siyi Qian and Junyuan Zhang and Jingzhou Chen and Fangdong Wang and Xiaomeng Zhao and Liqun Wei and Wei Li and Shasha Wang and Ruiliang Xu and Yuanyuan Cao and Lu Chen and Qianqian Wu and Huaiyu Gu and Lindong Lu and Keming Wang and Dechen Lin and Guanlin Shen and Xuanhe Zhou and Linfeng Zhang and Yuhang Zang and Xiaoyi Dong and Jiaqi Wang and Bo Zhang and Lei Bai and Pei Chu and Weijia Li and Jiang Wu and Lijun Wu and Zhenxiang Li and Guangyu Wang and Zhongying Tu and Chao Xu and Kai Chen and Yu Qiao and Bowen Zhou and Dahua Lin and Wentao Zhang and Conghui He},
year={2025},
eprint={2509.22186},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.22186},
}
@misc{wang2024mineruopensourcesolutionprecise,
title={MinerU: An Open-Source Solution for Precise Document Content Extraction},
author={Bin Wang and Chao Xu and Xiaomeng Zhao and Linke Ouyang and Fan Wu and Zhiyuan Zhao and Rui Xu and Kaiwen Liu and Yuan Qu and Fukai Shang and Bo Zhang and Liqun Wei and Zhihao Sui and Wei Li and Botian Shi and Yu Qiao and Dahua Lin and Conghui He},
@@ -630,4 +842,5 @@ mineru -p <input_path> -o <output_path>
- [PDF-Extract-Kit (A Comprehensive Toolkit for High-Quality PDF Content Extraction)](https://github.com/opendatalab/PDF-Extract-Kit)
- [OmniDocBench (A Comprehensive Benchmark for Document Parsing and Evaluation)](https://github.com/opendatalab/OmniDocBench)
- [Magic-HTML (Mixed web page extraction tool)](https://github.com/opendatalab/magic-html)
- [Magic-Doc (Fast speed ppt/pptx/doc/docx/pdf extraction tool)](https://github.com/InternLM/magic-doc)
- [Magic-Doc (Fast speed ppt/pptx/doc/docx/pdf extraction tool)](https://github.com/InternLM/magic-doc)
- [Dingo: A Comprehensive AI Data Quality Evaluation Tool](https://github.com/MigoXLab/dingo)


@@ -15,7 +15,7 @@ from mineru.backend.pipeline.pipeline_analyze import doc_analyze as pipeline_doc
from mineru.backend.pipeline.pipeline_middle_json_mkcontent import union_make as pipeline_union_make
from mineru.backend.pipeline.model_json_to_middle_json import result_to_middle_json as pipeline_result_to_middle_json
from mineru.backend.vlm.vlm_middle_json_mkcontent import union_make as vlm_union_make
from mineru.utils.models_download_utils import auto_download_and_get_model_root_path
from mineru.utils.guess_suffix_or_lang import guess_suffix_by_path
def do_parse(
@@ -27,7 +27,7 @@ def do_parse(
parse_method="auto", # The method for parsing PDF, default is 'auto'
formula_enable=True, # Enable formula parsing
table_enable=True, # Enable table parsing
server_url=None, # Server URL for vlm-http-client backend
f_draw_layout_bbox=True, # Whether to draw layout bounding boxes
f_draw_span_bbox=True, # Whether to draw span bounding boxes
f_dump_md=True, # Whether to dump markdown files
@@ -62,47 +62,12 @@ def do_parse(
pdf_info = middle_json["pdf_info"]
pdf_bytes = pdf_bytes_list[idx]
if f_draw_layout_bbox:
draw_layout_bbox(pdf_info, pdf_bytes, local_md_dir, f"{pdf_file_name}_layout.pdf")
if f_draw_span_bbox:
draw_span_bbox(pdf_info, pdf_bytes, local_md_dir, f"{pdf_file_name}_span.pdf")
if f_dump_orig_pdf:
md_writer.write(
f"{pdf_file_name}_origin.pdf",
pdf_bytes,
)
if f_dump_md:
image_dir = str(os.path.basename(local_image_dir))
md_content_str = pipeline_union_make(pdf_info, f_make_md_mode, image_dir)
md_writer.write_string(
f"{pdf_file_name}.md",
md_content_str,
)
if f_dump_content_list:
image_dir = str(os.path.basename(local_image_dir))
content_list = pipeline_union_make(pdf_info, MakeMode.CONTENT_LIST, image_dir)
md_writer.write_string(
f"{pdf_file_name}_content_list.json",
json.dumps(content_list, ensure_ascii=False, indent=4),
)
if f_dump_middle_json:
md_writer.write_string(
f"{pdf_file_name}_middle.json",
json.dumps(middle_json, ensure_ascii=False, indent=4),
)
if f_dump_model_output:
md_writer.write_string(
f"{pdf_file_name}_model.json",
json.dumps(model_json, ensure_ascii=False, indent=4),
)
logger.info(f"local output dir is {local_md_dir}")
_process_output(
pdf_info, pdf_bytes, pdf_file_name, local_md_dir, local_image_dir,
md_writer, f_draw_layout_bbox, f_draw_span_bbox, f_dump_orig_pdf,
f_dump_md, f_dump_content_list, f_dump_middle_json, f_dump_model_output,
f_make_md_mode, middle_json, model_json, is_pipeline=True
)
else:
if backend.startswith("vlm-"):
backend = backend[4:]
@@ -118,48 +83,77 @@ def do_parse(
pdf_info = middle_json["pdf_info"]
_process_output(
pdf_info, pdf_bytes, pdf_file_name, local_md_dir, local_image_dir,
md_writer, f_draw_layout_bbox, f_draw_span_bbox, f_dump_orig_pdf,
f_dump_md, f_dump_content_list, f_dump_middle_json, f_dump_model_output,
f_make_md_mode, middle_json, infer_result, is_pipeline=False
)
def _process_output(
pdf_info,
pdf_bytes,
pdf_file_name,
local_md_dir,
local_image_dir,
md_writer,
f_draw_layout_bbox,
f_draw_span_bbox,
f_dump_orig_pdf,
f_dump_md,
f_dump_content_list,
f_dump_middle_json,
f_dump_model_output,
f_make_md_mode,
middle_json,
model_output=None,
is_pipeline=True
):
"""Process the output files."""
if f_draw_layout_bbox:
draw_layout_bbox(pdf_info, pdf_bytes, local_md_dir, f"{pdf_file_name}_layout.pdf")
if f_draw_span_bbox:
draw_span_bbox(pdf_info, pdf_bytes, local_md_dir, f"{pdf_file_name}_span.pdf")
if f_dump_orig_pdf:
md_writer.write(
f"{pdf_file_name}_origin.pdf",
pdf_bytes,
)
image_dir = str(os.path.basename(local_image_dir))
if f_dump_md:
make_func = pipeline_union_make if is_pipeline else vlm_union_make
md_content_str = make_func(pdf_info, f_make_md_mode, image_dir)
md_writer.write_string(
f"{pdf_file_name}.md",
md_content_str,
)
if f_dump_content_list:
make_func = pipeline_union_make if is_pipeline else vlm_union_make
content_list = make_func(pdf_info, MakeMode.CONTENT_LIST, image_dir)
md_writer.write_string(
f"{pdf_file_name}_content_list.json",
json.dumps(content_list, ensure_ascii=False, indent=4),
)
if f_dump_middle_json:
md_writer.write_string(
f"{pdf_file_name}_middle.json",
json.dumps(middle_json, ensure_ascii=False, indent=4),
)
if f_dump_model_output:
md_writer.write_string(
f"{pdf_file_name}_model.json",
json.dumps(model_output, ensure_ascii=False, indent=4),
)
logger.info(f"local output dir is {local_md_dir}")
def parse_doc(
@@ -182,8 +176,8 @@ def parse_doc(
backend: the backend for parsing pdf:
pipeline: More general.
vlm-transformers: More general.
vlm-sglang-engine: Faster(engine).
vlm-sglang-client: Faster(client).
vlm-vllm-engine: Faster(engine).
vlm-http-client: Faster(client).
without method specified, pipeline will be used by default.
method: the method for parsing pdf:
auto: Automatically determine the method based on the file type.
@@ -191,7 +185,7 @@ def parse_doc(
ocr: Use OCR method for image-based PDFs.
Without method specified, 'auto' will be used by default.
Adapted only for the case where the backend is set to "pipeline".
server_url: When the backend is `sglang-client`, you need to specify the server_url, for example:`http://127.0.0.1:30000`
server_url: When the backend is `http-client`, you need to specify the server_url, for example:`http://127.0.0.1:30000`
start_page_id: Start page ID for parsing, default is 0
end_page_id: End page ID for parsing, default is None (parse all pages until the end of the document)
"""
@@ -225,12 +219,12 @@ if __name__ == '__main__':
__dir__ = os.path.dirname(os.path.abspath(__file__))
pdf_files_dir = os.path.join(__dir__, "pdfs")
output_dir = os.path.join(__dir__, "output")
pdf_suffixes = ["pdf"]
image_suffixes = ["png", "jpeg", "jp2", "webp", "gif", "bmp", "jpg"]
doc_path_list = []
for doc_path in Path(pdf_files_dir).glob('*'):
if guess_suffix_by_path(doc_path) in pdf_suffixes + image_suffixes:
doc_path_list.append(doc_path)
"""If you cannot download models due to network problems, set the environment variable MINERU_MODEL_SOURCE to modelscope to download models from a proxy-free mirror."""
@@ -241,5 +235,7 @@ if __name__ == '__main__':
"""To enable VLM mode, change the backend to 'vlm-xxx'"""
# parse_doc(doc_path_list, output_dir, backend="vlm-transformers") # more general.
# parse_doc(doc_path_list, output_dir, backend="vlm-mlx-engine") # faster than transformers in macOS 13.5+.
# parse_doc(doc_path_list, output_dir, backend="vlm-vllm-engine") # faster(vllm-engine).
# parse_doc(doc_path_list, output_dir, backend="vlm-lmdeploy-engine") # faster(lmdeploy-engine).
# parse_doc(doc_path_list, output_dir, backend="vlm-http-client", server_url="http://127.0.0.1:30000") # faster(client).


@@ -1,7 +1,9 @@
# Use DaoCloud mirrored vllm image for China region for gpu with Ampere architecture and above (Compute Capability>=8.0)
# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.1.1
# Use DaoCloud mirrored vllm image for China region for gpu with Turing architecture and below (Compute Capability<8.0)
# FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \


@@ -0,0 +1,34 @@
# Base image with a vLLM or LMDeploy inference environment; choose one of the two as needed. Requires amd64(x86-64) CPU + metax GPU.
# Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + metax GPU.
FROM cr.metax-tech.com/public-ai-release/maca/vllm:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-amd64
# Base image containing the LMDeploy inference environment, requiring amd64(x86-64) CPU + metax GPU.
# FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/maca:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-lmdeploy0.10.2-amd64
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \
fonts-noto-core \
fonts-noto-cjk \
fontconfig \
libgl1 && \
fc-cache -fv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# mod torchvision to be compatible with torch 2.6
RUN sed -i '3s/^Version: 0.15.1+metax3\.1\.0\.4$/Version: 0.21.0+metax3.1.0.4/' /opt/conda/lib/python3.10/site-packages/torchvision-0.15.1+metax3.1.0.4.dist-info/METADATA && \
mv /opt/conda/lib/python3.10/site-packages/torchvision-0.15.1+metax3.1.0.4.dist-info /opt/conda/lib/python3.10/site-packages/torchvision-0.21.0+metax3.1.0.4.dist-info
# Install mineru latest
RUN /opt/conda/bin/python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
/opt/conda/bin/python3 -m pip install 'mineru[core]>=2.6.5' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
/opt/conda/bin/python3 -m pip cache purge
# Download models and update the configuration file
RUN /bin/bash -c "/opt/conda/bin/mineru-models-download -s modelscope -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]


@@ -0,0 +1,29 @@
# Base image with a vLLM or LMDeploy inference environment; choose one of the two as needed. Requires ARM(AArch64) CPU + Ascend NPU.
# Base image containing the vLLM inference environment, requiring ARM(AArch64) CPU + Ascend NPU.
FROM quay.io/ascend/vllm-ascend:v0.11.0rc1
# Base image containing the LMDeploy inference environment, requiring ARM(AArch64) CPU + Ascend NPU.
# FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:mineru-a2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \
fonts-noto-core \
fonts-noto-cjk \
fontconfig \
libgl1 \
libglib2.0-0 && \
fc-cache -fv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install -U 'mineru[core]>=2.6.5' -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip cache purge
# Download models and update the configuration file
RUN TORCH_DEVICE_BACKEND_AUTOLOAD=0 /bin/bash -c "mineru-models-download -s modelscope -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
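At runtime the container needs access to the NPU devices and driver. The mounts below follow common CANN/Ascend Docker conventions and are assumptions (as is the image tag); verify the paths against your driver installation:

```shell
# Run the Ascend image with typical NPU device and driver passthrough
docker run --rm -it \
  --device /dev/davinci0 \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver:/usr/local/Ascend/driver:ro \
  -p 30000:30000 \
  mineru-ascend:latest /bin/bash
```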


@@ -0,0 +1,30 @@
# Choose ONE of the two base images below (vLLM or LMDeploy) according to your needs; both require an amd64(x86-64) CPU + T-Head PPU.
# Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + t-head PPU.
FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/ppu:ppu-pytorch2.6.0-ubuntu24.04-cuda12.6-vllm0.8.5-py312
# Base image containing the LMDeploy inference environment, requiring amd64(x86-64) CPU + t-head PPU.
# FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ppu:mineru-ppu
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \
fonts-noto-core \
fonts-noto-cjk \
fontconfig \
libgl1 && \
fc-cache -fv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install 'mineru[core]>=2.6.5' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip cache purge
# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]


@@ -1,21 +1,38 @@
services:
mineru-sglang-server:
image: mineru-sglang:latest
container_name: mineru-sglang-server
mineru-openai-server:
image: mineru:latest
container_name: mineru-openai-server
restart: always
profiles: ["sglang-server"]
profiles: ["openai-server"]
ports:
- 30000:30000
environment:
MINERU_MODEL_SOURCE: local
entrypoint: mineru-sglang-server
entrypoint: mineru-openai-server
command:
# ==================== Engine Selection ====================
# WARNING: Only ONE engine can be enabled at a time!
# Choose 'vllm' OR 'lmdeploy' (uncomment one line below)
--engine vllm
# --engine lmdeploy
# ==================== vLLM Engine Parameters ====================
# Uncomment if using --engine vllm
--host 0.0.0.0
--port 30000
# --enable-torch-compile # You can also enable torch.compile to accelerate inference speed by approximately 15%
# --dp-size 2 # If using multiple GPUs, increase throughput using sglang's multi-GPU parallel mode
# --tp-size 2 # If you have more than one GPU, you can expand available VRAM using tensor parallelism (TP) mode.
# --mem-fraction-static 0.5 # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter, if VRAM issues persist, try lowering it further to `0.4` or below.
# Multi-GPU configuration (increase throughput)
# --data-parallel-size 2
# Single GPU memory optimization (reduce if VRAM insufficient)
# --gpu-memory-utilization 0.5 # Try 0.4 or lower if issues persist
# ==================== LMDeploy Engine Parameters ====================
# Uncomment if using --engine lmdeploy
# --server-name 0.0.0.0
# --server-port 30000
# Multi-GPU configuration (increase throughput)
# --dp 2
# Single GPU memory optimization (reduce if VRAM insufficient)
# --cache-max-entry-count 0.5 # Try 0.4 or lower if issues persist
ulimits:
memlock: -1
stack: 67108864
@@ -27,11 +44,11 @@ services:
reservations:
devices:
- driver: nvidia
device_ids: ["0"]
device_ids: ["0"] # Modify for multiple GPUs: ["0", "1"]
capabilities: [gpu]
mineru-api:
image: mineru-sglang:latest
image: mineru:latest
container_name: mineru-api
restart: always
profiles: ["api"]
@@ -41,13 +58,21 @@ services:
MINERU_MODEL_SOURCE: local
entrypoint: mineru-api
command:
# ==================== Server Configuration ====================
--host 0.0.0.0
--port 8000
# parameters for sglang-engine
# --enable-torch-compile # You can also enable torch.compile to accelerate inference speed by approximately 15%
# --dp-size 2 # If using multiple GPUs, increase throughput using sglang's multi-GPU parallel mode
# --tp-size 2 # If you have more than one GPU, you can expand available VRAM using tensor parallelism (TP) mode.
# --mem-fraction-static 0.5 # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter, if VRAM issues persist, try lowering it further to `0.4` or below.
# ==================== vLLM Engine Parameters ====================
# Multi-GPU configuration
# --data-parallel-size 2
# Single GPU memory optimization
# --gpu-memory-utilization 0.5 # Try 0.4 or lower if VRAM insufficient
# ==================== LMDeploy Engine Parameters ====================
# Multi-GPU configuration
# --dp 2
# Single GPU memory optimization
# --cache-max-entry-count 0.5 # Try 0.4 or lower if VRAM insufficient
ulimits:
memlock: -1
stack: 67108864
@@ -57,11 +82,11 @@ services:
reservations:
devices:
- driver: nvidia
device_ids: [ "0" ]
capabilities: [ gpu ]
device_ids: ["0"] # Modify for multiple GPUs: ["0", "1"]
capabilities: [gpu]
mineru-gradio:
image: mineru-sglang:latest
image: mineru:latest
container_name: mineru-gradio
restart: always
profiles: ["gradio"]
@@ -71,16 +96,30 @@ services:
MINERU_MODEL_SOURCE: local
entrypoint: mineru-gradio
command:
# ==================== Gradio Server Configuration ====================
--server-name 0.0.0.0
--server-port 7860
--enable-sglang-engine true # Enable the sglang engine for Gradio
# --enable-api false # If you want to disable the API, set this to false
# --max-convert-pages 20 # If you want to limit the number of pages for conversion, set this to a specific number
# parameters for sglang-engine
# --enable-torch-compile # You can also enable torch.compile to accelerate inference speed by approximately 15%
# --dp-size 2 # If using multiple GPUs, increase throughput using sglang's multi-GPU parallel mode
# --tp-size 2 # If you have more than one GPU, you can expand available VRAM using tensor parallelism (TP) mode.
# --mem-fraction-static 0.5 # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter, if VRAM issues persist, try lowering it further to `0.4` or below.
# ==================== Gradio Feature Settings ====================
# --enable-api false # Disable API endpoint
# --max-convert-pages 20 # Limit conversion page count
# ==================== Engine Selection ====================
# WARNING: Only ONE engine can be enabled at a time!
# Option 1: vLLM Engine (recommended for most users)
--enable-vllm-engine true
# Multi-GPU configuration
# --data-parallel-size 2
# Single GPU memory optimization
# --gpu-memory-utilization 0.5 # Try 0.4 or lower if VRAM insufficient
# Option 2: LMDeploy Engine
# --enable-lmdeploy-engine true
# Multi-GPU configuration
# --dp 2
# Single GPU memory optimization
# --cache-max-entry-count 0.5 # Try 0.4 or lower if VRAM insufficient
ulimits:
memlock: -1
stack: 67108864
@@ -90,5 +129,5 @@ services:
reservations:
devices:
- driver: nvidia
device_ids: [ "0" ]
capabilities: [ gpu ]
device_ids: ["0"] # Modify for multiple GPUs: ["0", "1"]
capabilities: [gpu]


@@ -1,7 +1,9 @@
# Use the official sglang image
FROM lmsysorg/sglang:v0.4.9.post5-cu126
# For blackwell GPU, use the following line instead:
# FROM lmsysorg/sglang:v0.4.9.post5-cu128-b200
# Use the official vLLM image for GPUs with Ampere architecture or newer (Compute Capability >= 8.0)
# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
FROM vllm/vllm-openai:v0.10.1.1
# Use the official vLLM image for GPUs with Turing architecture or older (Compute Capability < 8.0)
# FROM vllm/vllm-openai:v0.10.2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \


@@ -2,7 +2,7 @@
If your question is not listed, try using [DeepWiki](https://deepwiki.com/opendatalab/MinerU)'s AI assistant for common issues.
For unresolved problems, join our [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](http://mineru.space/s/V85Yl) community for support.
For unresolved problems, join our [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](https://mineru.net/community-portal/?aliasId=3c430f94) community for support.
??? question "Encountered the error `ImportError: libGL.so.1: cannot open shared object file: No such file or directory` in Ubuntu 22.04 on WSL2"
@@ -15,18 +15,6 @@ For unresolved problems, join our [Discord](https://discord.gg/Tdedn9GTXq) or [W
Reference: [#388](https://github.com/opendatalab/MinerU/issues/388)
??? question "Error when installing MinerU on CentOS 7 or Ubuntu 18: `ERROR: Failed building wheel for simsimd`"
The new version of albumentations (1.4.21) introduces a dependency on simsimd. Since the pre-built package of simsimd for Linux requires a glibc version greater than or equal to 2.28, this causes installation issues on some Linux distributions released before 2019. You can resolve this issue by using the following command:
```
conda create -n mineru python=3.11 -y
conda activate mineru
pip install -U "mineru[pipeline_old_linux]"
```
Reference: [#1004](https://github.com/opendatalab/MinerU/issues/1004)
??? question "Missing text information in parsing results when installing and using on Linux systems."
MinerU uses `pypdfium2` instead of `pymupdf` as the PDF page rendering engine in versions >=2.0 to resolve AGPLv3 license issues. On some Linux distributions, due to missing CJK fonts, some text may be lost during the process of rendering PDFs to images.


@@ -18,8 +18,9 @@
[![OpenDataLab](https://img.shields.io/badge/webapp_on_mineru.net-blue?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTM0IiBoZWlnaHQ9IjEzNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cGF0aCBkPSJtMTIyLDljMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0idXJsKCNhKSIvPjxwYXRoIGQ9Im0xMjIsOWMwLDUtNCw5LTksOXMtOS00LTktOSw0LTksOS05LDksNCw5LDl6IiBmaWxsPSIjMDEwMTAxIi8+PHBhdGggZD0ibTkxLDE4YzAsNS00LDktOSw5cy05LTQtOS05LDQtOSw5LTksOSw0LDksOXoiIGZpbGw9InVybCgjYikiLz48cGF0aCBkPSJtOTEsMThjMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0iIzAxMDEwMSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0idXJsKCNjKSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0iIzAxMDEwMSIvPjxkZWZzPjxsaW5lYXJHcmFkaWVudCBpZD0iYSIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYiIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYyIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZT
JlMmUiLz48L2xpbmVhckdyYWRpZW50PjwvZGVmcz48L3N2Zz4=&labelColor=white)](https://mineru.net/OpenSourceTools/Extractor?source=github)
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/3b3a00a4a0a61577b6c30f989092d20d/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
<div align="center">
@@ -34,7 +35,7 @@
<!-- join us -->
<p align="center">
👋 join us on <a href="https://discord.gg/Tdedn9GTXq" target="_blank">Discord</a> and <a href="http://mineru.space/s/V85Yl" target="_blank">WeChat</a>
👋 join us on <a href="https://discord.gg/Tdedn9GTXq" target="_blank">Discord</a> and <a href="https://mineru.net/community-portal/?aliasId=3c430f94" target="_blank">WeChat</a>
</p>
</div>
@@ -56,7 +57,7 @@ Compared to well-known commercial products domestically and internationally, Min
- Automatically identify and convert formulas in documents to LaTeX format
- Automatically identify and convert tables in documents to HTML format
- Automatically detect scanned PDFs and garbled PDFs, and enable OCR functionality
- OCR supports detection and recognition of 84 languages
- OCR supports detection and recognition of 109 languages
- Support multiple output formats, such as multimodal and NLP Markdown, reading-order-sorted JSON, and information-rich intermediate formats
- Support multiple visualization results, including layout visualization, span visualization, etc., for efficient confirmation of output effects and quality inspection
- Support pure CPU environment operation, and support GPU(CUDA)/NPU(CANN)/MPS acceleration


@@ -6,25 +6,23 @@ MinerU provides a convenient Docker deployment method, which helps quickly set u
```bash
wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/global/Dockerfile
docker build -t mineru-sglang:latest -f Dockerfile .
docker build -t mineru:latest -f Dockerfile .
```
> [!TIP]
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `lmsysorg/sglang:v0.4.9.post5-cu126` as the base image by default, supporting Turing/Ampere/Ada Lovelace/Hopper platforms.
> If you are using the newer `Blackwell` platform, please modify the base image to `lmsysorg/sglang:v0.4.9.post5-cu128-b200` before executing the build operation.
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default. The vLLM v1 engine in this version has limited support for some GPU models.
> If vLLM-accelerated inference does not work on Turing or earlier GPU architectures, you can resolve this by changing the base image to `vllm/vllm-openai:v0.10.2` before building.
## Docker Description
MinerU's Docker uses `lmsysorg/sglang` as the base image, so it includes the `sglang` inference acceleration framework and necessary dependencies by default. Therefore, on compatible devices, you can directly use `sglang` to accelerate VLM model inference.
MinerU's Docker uses `vllm/vllm-openai` as the base image, so it includes the `vllm` inference acceleration framework and necessary dependencies by default. Therefore, on compatible devices, you can directly use `vllm` to accelerate VLM model inference.
> [!NOTE]
> Requirements for using `sglang` to accelerate VLM model inference:
> Requirements for using `vllm` to accelerate VLM model inference:
>
> - Device must have Turing architecture or later graphics cards with 8GB+ available VRAM.
> - The host machine's graphics driver should support CUDA 12.6 or higher; `Blackwell` platform should support CUDA 12.8 or higher. You can check the driver version using the `nvidia-smi` command.
> - Device must have Volta architecture or later graphics cards with 8GB+ available VRAM.
> - The host machine's graphics driver should support CUDA 12.8 or higher; you can check the driver version using the `nvidia-smi` command.
> - Docker container must have access to the host machine's graphics devices.
>
> If your device doesn't meet the above requirements, you can still use other features of MinerU, but cannot use `vllm` to accelerate VLM model inference, meaning you cannot use the `vlm-vllm-engine` backend or start the `vlm-openai-server` service.
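The driver requirements above can be checked on the host before building. A hedged one-liner (the `compute_cap` query field requires a reasonably recent NVIDIA driver; on older drivers, read the CUDA version from the plain `nvidia-smi` header instead):

```shell
# Reported CUDA version must be >= 12.8 and compute capability >= 7.0 (Volta)
nvidia-smi --query-gpu=name,driver_version,compute_cap --format=csv
```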
## Start Docker Container
@@ -33,12 +31,12 @@ docker run --gpus all \
--shm-size 32g \
-p 30000:30000 -p 7860:7860 -p 8000:8000 \
--ipc=host \
-it mineru-sglang:latest \
-it mineru:latest \
/bin/bash
```
After executing this command, you will enter the Docker container's interactive terminal with some ports mapped for potential services. You can directly run MinerU-related commands within the container to use MinerU's features.
You can also directly start MinerU services by replacing `/bin/bash` with service startup commands. For detailed instructions, please refer to the [Start the service via command](https://opendatalab.github.io/MinerU/usage/quick_usage/#advanced-usage-via-api-webui-sglang-clientserver).
You can also directly start MinerU services by replacing `/bin/bash` with service startup commands. For detailed instructions, please refer to the [Start the service via command](https://opendatalab.github.io/MinerU/usage/quick_usage/#advanced-usage-via-api-webui-http-clientserver).
## Start Services Directly with Docker Compose
@@ -53,19 +51,19 @@ wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/compose.yaml
>
>- The `compose.yaml` file contains configurations for multiple services of MinerU, you can choose to start specific services as needed.
>- Different services might have additional parameter configurations, which you can view and edit in the `compose.yaml` file.
>- Due to the pre-allocation of GPU memory by the `sglang` inference acceleration framework, you may not be able to run multiple `sglang` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-sglang-server` service or using the `vlm-sglang-engine` backend.
>- Due to the pre-allocation of GPU memory by the `vllm` inference acceleration framework, you may not be able to run multiple `vllm` services simultaneously on the same machine. Therefore, ensure that other services that might use GPU memory have been stopped before starting the `vlm-openai-server` service or using the `vlm-vllm-engine` backend.
---
### Start sglang-server service
connect to `sglang-server` via `vlm-sglang-client` backend
### Start OpenAI-compatible server service
connect to `openai-server` via `vlm-http-client` backend
```bash
docker compose -f compose.yaml --profile sglang-server up -d
docker compose -f compose.yaml --profile openai-server up -d
```
>[!TIP]
>In another terminal, connect to sglang server via sglang client (only requires CPU and network, no sglang environment needed)
>In another terminal, connect to openai server via http client (only requires CPU and network, no vllm environment needed)
> ```bash
> mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://<server_ip>:30000
> mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000
> ```
---
@@ -86,4 +84,3 @@ connect to `sglang-server` via `vlm-sglang-client` backend
>[!TIP]
>
>- Access `http://<server_ip>:7860` in your browser to use the Gradio WebUI.
>- Access `http://<server_ip>:7860/?view=api` to use the Gradio API.


@@ -4,34 +4,43 @@ MinerU supports installing extension modules on demand based on different needs
## Common Scenarios
### Core Functionality Installation
The `core` module is the core dependency of MinerU, containing all functional modules except `sglang`. Installing this module ensures the basic functionality of MinerU works properly.
The `core` module is the core dependency of MinerU, containing all functional modules except `vllm`/`lmdeploy`. Installing this module ensures the basic functionality of MinerU works properly.
```bash
uv pip install mineru[core]
uv pip install "mineru[core]"
```
---
### Using `sglang` to Accelerate VLM Model Inference
The `sglang` module provides acceleration support for VLM model inference, suitable for graphics cards with Turing architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
In the configuration, `all` includes both `core` and `sglang` modules, so `mineru[all]` and `mineru[core,sglang]` are equivalent.
### Using `vllm` to Accelerate VLM Model Inference
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them to install and use based on your actual needs, but it is not recommended to install both modules simultaneously to avoid potential dependency conflicts.
The `vllm` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
```bash
uv pip install mineru[all]
uv pip install "mineru[core,vllm]"
```
> [!TIP]
> If exceptions occur during installation of the complete package including sglang, please refer to the [sglang official documentation](https://docs.sglang.ai/start/install.html) to try to resolve the issue, or directly use the [Docker](./docker_deployment.md) deployment method.
> If exceptions occur during installation of the extra package including vllm, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to try to resolve the issue, or directly use the [Docker](./docker_deployment.md) deployment method.
---
### Installing Lightweight Client to Connect to sglang-server
If you need to install a lightweight client on edge devices to connect to `sglang-server`, you can install the basic mineru package, which is very lightweight and suitable for devices with only CPU and network connectivity.
### Using `lmdeploy` to Accelerate VLM Model Inference
> [!NOTE]
> `vllm` and `lmdeploy` have nearly identical VLM inference acceleration effects and usage methods. You can choose one of them to install and use based on your actual needs, but it is not recommended to install both modules simultaneously to avoid potential dependency conflicts.
The `lmdeploy` module provides acceleration support for VLM model inference, suitable for graphics cards with Volta architecture and later (8GB+ VRAM). Installing this module can significantly improve model inference speed.
```bash
uv pip install "mineru[core,lmdeploy]"
```
> [!TIP]
> If exceptions occur during installation of the extra package including lmdeploy, please refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/get_started/installation.html) to try to resolve the issue.
---
### Installing Lightweight Client to Connect to OpenAI-compatible servers
If you need to install a lightweight client on edge devices to connect to an OpenAI-compatible server for using VLM mode, you can install the basic mineru package, which is very lightweight and suitable for devices with only CPU and network connectivity.
```bash
uv pip install mineru
```
---
### Using Pipeline Backend on Outdated Linux Systems
If your system is too outdated to meet the dependency requirements of `mineru[core]`, this option can minimally meet MinerU's runtime requirements, suitable for old systems that cannot be upgraded and only need to use the pipeline backend.
```bash
uv pip install "mineru[pipeline_old_linux]"
```


> In non-mainstream environments, due to the diversity of hardware and software configurations, as well as compatibility issues with third-party dependencies, we cannot guarantee 100% usability of the project. Therefore, for users who wish to use this project in non-recommended environments, we suggest carefully reading the documentation and FAQ first, as most issues have corresponding solutions in the FAQ. Additionally, we encourage community feedback on issues so that we can gradually expand our support range.
<table border="1">
<thead>
<tr>
<th rowspan="2">Parsing Backend</th>
<th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
<th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
</tr>
<tr>
<th>transformers</th>
<th>mlx-engine</th>
<th>vllm-engine / <br>vllm-async-engine</th>
<th>lmdeploy-engine</th>
<th>http-client</th>
</tr>
</thead>
<tbody>
<tr>
<th>Backend Features</th>
<td>Fast, no hallucinations</td>
<td>Good compatibility, <br>but slower</td>
<td>Faster than transformers</td>
<td>Fast, compatible with the vLLM ecosystem</td>
<td>Fast, compatible with the LMDeploy ecosystem</td>
<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
</tr>
<tr>
<th>Operating System</th>
<td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
<td style="text-align:center;">macOS<sup>3</sup></td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup> </td>
<td>Any</td>
</tr>
<tr>
<th>CPU inference support</th>
<td colspan="2" style="text-align:center;">✅</td>
<td colspan="3" style="text-align:center;">❌</td>
<td>Not required</td>
</tr>
<tr>
<th>GPU Requirements</th><td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
<td>Apple Silicon</td>
<td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
<td>Not required</td>
</tr>
<tr>
<th>Memory Requirements</th>
<td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
<td>8 GB</td>
</tr>
<tr>
<th>Disk Space Requirements</th>
<td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
<td>2 GB</td>
</tr>
<tr>
<th>Python Version</th>
<td colspan="6" style="text-align:center;">3.10-3.13<sup>7</sup></td>
</tr>
</tbody>
</table>
<sup>1</sup> Accuracy metric is the End-to-End Evaluation Overall score of OmniDocBench (v1.5), tested on the latest `MinerU` version.
<sup>2</sup> Linux supports only distributions released in 2019 or later.
<sup>3</sup> MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.
<sup>4</sup> On Windows, vLLM is supported via WSL2 (Windows Subsystem for Linux).
<sup>5</sup> On Windows, LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend; if performance is critical, it is recommended to run it via WSL2.
<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
<sup>7</sup> Windows + LMDeploy only supports Python versions 3.10–3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.
### Install MinerU
```bash
uv pip install -e .[core]
```
> [!TIP]
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, compatible with Windows / Linux / macOS systems, suitable for most users.
> If you need to use `vLLM`/`LMDeploy` acceleration for VLM model inference or install a lightweight client on edge devices, please refer to the documentation [Extension Modules Installation Guide](./extension_modules.md).
---


## Structured Data Files
> [!IMPORTANT]
> The VLM backend output has significant changes in version 2.5 and is not backward-compatible with the pipeline backend. If you plan to build secondary development on structured outputs, please read this document carefully.
### Pipeline Backend Output Results
#### Model Inference Results (model.json)
**File naming format**: `{original_filename}_model.json`
##### Data Structure Definition
```python
from pydantic import BaseModel, Field
class PageInferenceResults(BaseModel):
    ...
inference_result: list[PageInferenceResults] = []
```
##### Coordinate System Description
`poly` coordinate format: `[x0, y0, x1, y1, x2, y2, x3, y3]`
![poly coordinate diagram](../images/poly.png)
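If you only need an axis-aligned rectangle, the quadrilateral can be collapsed by taking the min/max of its coordinates. A minimal sketch (the helper name `poly_to_bbox` is illustrative, not part of MinerU):

```python
def poly_to_bbox(poly):
    """Collapse an 8-value quadrilateral [x0, y0, ..., x3, y3]
    into an axis-aligned rectangle [xmin, ymin, xmax, ymax]."""
    xs, ys = poly[0::2], poly[1::2]
    return [min(xs), min(ys), max(xs), max(ys)]

# An axis-aligned quadrilateral maps back to its own corners:
print(poly_to_bbox([100, 200, 300, 200, 300, 250, 100, 250]))  # [100, 200, 300, 250]
```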
##### Sample Data
```json
[
]
```
#### Intermediate Processing Results (middle.json)
**File naming format**: `{original_filename}_middle.json`
##### Top-level Structure
| Field Name | Type | Description |
|------------|------|-------------|
| `_backend` | `string` | Parsing mode: `pipeline` or `vlm` |
| `_version_name` | `string` | MinerU version number |
##### Page Information Structure (pdf_info)
| Field Name | Description |
|------------|-------------|
| `preproc_blocks` | Unsegmented intermediate results after PDF preprocessing |
| `layout_bboxes` | Layout segmentation results, including layout direction and bounding boxes, sorted by reading order |
| `page_idx` | Page number, starting from 0 |
| `page_size` | Page width and height `[width, height]` |
| `_layout_tree` | Layout tree structure |
| `images` | Image block information list |
| `tables` | Table block information list |
| `interline_equations` | Interline formula block information list |
| `discarded_blocks` | Block information to be discarded |
| `para_blocks` | Content block results after segmentation |
##### Block Structure Hierarchy
```
Level 1 blocks (table | image)
└── Level 2 blocks
    └── Lines
        └── Spans
```
##### Level 1 Block Fields
| Field Name | Description |
|------------|-------------|
| `bbox` | Rectangular box coordinates of the block `[x0, y0, x1, y1]` |
| `blocks` | List of contained level 2 blocks |
##### Level 2 Block Fields
| Field Name | Description |
|------------|-------------|
| `bbox` | Rectangular box coordinates of the block |
| `lines` | List of contained line information |
##### Level 2 Block Types
| Type | Description |
|------|-------------|
| `list` | List block |
| `interline_equation` | Interline formula block |
##### Line and Span Structure
**Line fields**:
- `bbox`: Rectangular box coordinates of the line
- `type`: Span type (`image`, `table`, `text`, `inline_equation`, `interline_equation`)
- `content` | `img_path`: Text content or image path
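The block → line → span nesting can be walked with a few nested loops. A minimal sketch against the field names documented above (no MinerU APIs assumed; the function name is illustrative):

```python
def iter_spans(pdf_info):
    """Walk middle.json pages: para_blocks -> (optional nested blocks) -> lines -> spans."""
    for page in pdf_info:
        for block in page.get("para_blocks", []):
            # Level 1 table/image blocks nest level 2 blocks under "blocks";
            # plain level 2 blocks carry "lines" directly.
            for sub in block.get("blocks", [block]):
                for line in sub.get("lines", []):
                    for span in line.get("spans", []):
                        yield page.get("page_idx"), span

pdf_info = [{
    "page_idx": 0,
    "para_blocks": [{
        "type": "text",
        "bbox": [0, 0, 10, 10],
        "lines": [{"bbox": [0, 0, 10, 10],
                   "spans": [{"type": "text", "content": "hello"}]}],
    }],
}]
for page_idx, span in iter_spans(pdf_info):
    print(page_idx, span["content"])  # 0 hello
```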
##### Sample Data
```json
{
}
```
#### Content List (content_list.json)
**File naming format**: `{original_filename}_content_list.json`
##### Functionality
This is a simplified version of `middle.json` that stores all readable content blocks in reading order as a flat structure, removing complex layout information for easier subsequent processing.
##### Content Types
| Type | Description |
|------|-------------|
| `text` | Text/Title |
| `equation` | Interline formula |
##### Text Level Identification
Text levels are distinguished through the `text_level` field:
- `text_level: 2`: Level 2 heading
- And so on...
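Since entries arrive flat and in reading order, rebuilding a Markdown outline is a short loop. A hedged sketch using only the documented `type`, `text`, and `text_level` fields (the function name is ours):

```python
def content_list_to_markdown(entries):
    """Render text entries from content_list.json as Markdown,
    mapping text_level to heading depth (no level = body text)."""
    parts = []
    for entry in entries:
        if entry.get("type") != "text":
            continue  # images, tables, equations handled elsewhere
        level = entry.get("text_level")
        prefix = "#" * level + " " if level else ""
        parts.append(prefix + entry["text"].strip())
    return "\n\n".join(parts)

sample = [
    {"type": "text", "text": "Abstract ", "text_level": 2, "page_idx": 0},
    {"type": "text", "text": "Body paragraph. ", "page_idx": 0},
]
print(content_list_to_markdown(sample))
# prints:
# ## Abstract
#
# Body paragraph.
```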
##### Common Fields
- All content blocks include a `page_idx` field indicating the page number (starting from 0).
- All content blocks include a `bbox` field representing the bounding box coordinates of the content block `[x0, y0, x1, y1]`, mapped to a range of 0-1000.
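To overlay these boxes on the source page, the 0–1000 values must be scaled by the actual page size. A minimal sketch (the function name is illustrative):

```python
def denormalize_bbox(bbox, page_width, page_height):
    """Map a content_list bbox from the 0-1000 normalized space
    back to page coordinates (e.g. PDF points)."""
    x0, y0, x1, y1 = bbox
    return [x0 * page_width / 1000.0, y0 * page_height / 1000.0,
            x1 * page_width / 1000.0, y1 * page_height / 1000.0]

# For a US-Letter-sized page of 612 x 792 points:
print(denormalize_bbox([500, 250, 1000, 1000], 612, 792))  # [306.0, 198.0, 612.0, 792.0]
```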
##### Sample Data
```json
[
{
"type": "text",
"text": "The response of flow duration curves to afforestation ",
"text_level": 1,
"bbox": [
62,
480,
946,
904
],
"page_idx": 0
},
{
"type": "text",
"text": "Received 1 October 2003; revised 22 December 2004; accepted 3 January 2005 ",
"page_idx": 0
},
{
"type": "text",
"text": "Abstract ",
"text_level": 2,
"page_idx": 0
},
{
"type": "text",
"text": "The hydrologic effect of replacing pasture or other short crops with trees is reasonably well understood on a mean annual basis. The impact on flow regime, as described by the annual flow duration curve (FDC) is less certain. A method to assess the impact of plantation establishment on FDCs was developed. The starting point for the analyses was the assumption that rainfall and vegetation age are the principal drivers of evapotranspiration. A key objective was to remove the variability in the rainfall signal, leaving changes in streamflow solely attributable to the evapotranspiration of the plantation. A method was developed to (1) fit a model to the observed annual time series of FDC percentiles; i.e. 10th percentile for each year of record with annual rainfall and plantation age as parameters, (2) replace the annual rainfall variation with the long term mean to obtain climate adjusted FDCs, and (3) quantify changes in FDC percentiles as plantations age. Data from 10 catchments from Australia, South Africa and New Zealand were used. The model was able to represent flow variation for the majority of percentiles at eight of the 10 catchments, particularly for the 1050th percentiles. The adjusted FDCs revealed variable patterns in flow reductions with two types of responses (groups) being identified. Group 1 catchments show a substantial increase in the number of zero flow days, with low flows being more affected than high flows. Group 2 catchments show a more uniform reduction in flows across all percentiles. The differences may be partly explained by storage characteristics. The modelled flow reductions were in accord with published results of paired catchment experiments. An additional analysis was performed to characterise the impact of afforestation on the number of zero flow days $( N _ { \\mathrm { z e r o } } )$ for the catchments in group 1. 
This model performed particularly well, and when adjusted for climate, indicated a significant increase in $N _ { \\mathrm { z e r o } }$ . The zero flow day method could be used to determine change in the occurrence of any given flow in response to afforestation. The methods used in this study proved satisfactory in removing the rainfall variability, and have added useful insight into the hydrologic impacts of plantation establishment. This approach provides a methodology for understanding catchment response to afforestation, where paired catchment data is not available. ",
"page_idx": 0
},
{
"type": "text",
"text": "1. Introduction ",
"text_level": 2,
"page_idx": 1
},
{
"type": "image",
"img_path": "images/a8ecda1c69b27e4f79fce1589175a9d721cbdc1cf78b4cc06a015f3746f6b9d8.jpg",
"image_caption": [
"Fig. 1. Annual flow duration curves of daily flows from Pine Creek, Australia, 19892000. "
],
"image_footnote": [],
"bbox": [
62,
480,
946,
904
],
"page_idx": 1
},
{
"type": "equation",
"img_path": "images/181ea56ef185060d04bf4e274685f3e072e922e7b839f093d482c29bf89b71e8.jpg",
"text": "$$\nQ _ { \\% } = f ( P ) + g ( T )\n$$",
"text_format": "latex",
"bbox": [
62,
480,
946,
904
],
"page_idx": 2
},
{
"type": "table",
"table_caption": [
"indicates that the rainfall term was significant at the $5 \\%$ level, $T$ indicates that the time term was significant at the $5 \\%$ level, \\* represents significance at the $10 \\%$ level, and na denotes too few data points for meaningful analysis. "
],
"table_body": "<html><body><table><tr><td rowspan=\"2\">Site</td><td colspan=\"10\">Percentile</td></tr><tr><td>10</td><td>20</td><td>30</td><td>40</td><td>50</td><td>60</td><td>70</td><td>80</td><td>90</td><td>100</td></tr><tr><td>Traralgon Ck</td><td>P</td><td>P,*</td><td>P</td><td>P</td><td>P,</td><td>P,</td><td>P,</td><td>P,</td><td>P</td><td>P</td></tr><tr><td>Redhill</td><td>P,T</td><td>P,T</td><td>*</td><td>**</td><td>P.T</td><td>P,*</td><td>P*</td><td>P*</td><td>*</td><td>*</td></tr><tr><td>Pine Ck</td><td></td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>T</td><td>T</td><td>T</td><td>na</td><td>na</td></tr><tr><td>Stewarts Ck 5</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P.T</td><td>P.T</td><td>P,T</td><td>na</td><td>na</td><td>na</td></tr><tr><td>Glendhu 2</td><td>P</td><td>P,T</td><td>P,*</td><td>P,T</td><td>P.T</td><td>P,ns</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td></tr><tr><td>Cathedral Peak 2</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>*,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>T</td></tr><tr><td>Cathedral Peak 3</td><td>P.T</td><td>P.T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>T</td></tr><tr><td>Lambrechtsbos A</td><td>P,T</td><td>P</td><td>P</td><td>P,T</td><td>*,T</td><td>*,T</td><td>*,T</td><td>*,T</td><td>*,T</td><td>T</td></tr><tr><td>Lambrechtsbos B</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>P,T</td><td>T</td><td>T</td></tr><tr><td>Biesievlei</td><td>P,T</td><td>P.T</td><td>P,T</td><td>P,T</td><td>*,T</td><td>*,T</td><td>T</td><td>T</td><td>P,T</td><td>P,T</td></tr></table></body></html>",
"bbox": [
62,
480,
946,
904
],
"page_idx": 5
}
]
```
### VLM Backend Output Results
#### Model Inference Results (model.json)
**File naming format**: `{original_filename}_model.json`
##### File format description
- Two-level nested list: outer list = pages; inner list = content blocks of that page
- Each block is a dict with at least: `type`, `bbox`, `angle`, `content` (some types add extra fields like `score`, `block_tags`, `content_tags`, `format`)
- Designed for direct, raw model inspection
##### Supported content types (type field values)
```json
{
"text": "Plain text",
"title": "Title",
"equation": "Display (interline) formula",
"image": "Image",
"image_caption": "Image caption",
"image_footnote": "Image footnote",
"table": "Table",
"table_caption": "Table caption",
"table_footnote": "Table footnote",
"phonetic": "Phonetic annotation",
"code": "Code block",
"code_caption": "Code caption",
"ref_text": "Reference / citation entry",
"algorithm": "Algorithm block (treated as code subtype)",
"list": "List container",
"header": "Page header",
"footer": "Page footer",
"page_number": "Page number",
"aside_text": "Side / margin note",
"page_footnote": "Page footnote"
}
```
##### Coordinate system
- `bbox` = `[x0, y0, x1, y1]` (top-left, bottom-right)
- Origin at top-left of the page
- All coordinates are normalized to the `[0, 1]` range, i.e. fractions of page width and height
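Flattening the two-level list and converting the normalized boxes to pixels takes only a few lines. A hedged sketch, assuming per-page `(width, height)` sizes are known from your renderer (the function name is ours):

```python
def blocks_with_pixel_boxes(pages, page_sizes):
    """Flatten model.json (a list of pages, each a list of blocks) and
    convert [0, 1]-normalized bboxes to pixels via per-page (width, height)."""
    out = []
    for page_idx, blocks in enumerate(pages):
        w, h = page_sizes[page_idx]
        for block in blocks:
            x0, y0, x1, y1 = block["bbox"]
            out.append({
                "page_idx": page_idx,
                "type": block["type"],
                "bbox_px": [round(x0 * w), round(y0 * h), round(x1 * w), round(y1 * h)],
                "content": block["content"],
            })
    return out

pages = [[{"type": "title", "bbox": [0.1, 0.2, 0.5, 0.25], "angle": 0,
           "content": "The response of flow duration curves to afforestation"}]]
print(blocks_with_pixel_boxes(pages, [(1000, 1400)]))  # bbox_px -> [100, 280, 500, 350]
```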
##### Sample data
```json
[
[
{
"type": "header",
"bbox": [0.077, 0.095, 0.18, 0.181],
"angle": 0,
"score": null,
"block_tags": null,
"content": "ELSEVIER",
"format": null,
"content_tags": null
},
{
"type": "title",
"bbox": [0.157, 0.228, 0.833, 0.253],
"angle": 0,
"score": null,
"block_tags": null,
"content": "The response of flow duration curves to afforestation",
"format": null,
"content_tags": null
}
]
]
```
#### Intermediate Processing Results (middle.json)
**File naming format**: `{original_filename}_middle.json`
Structure is broadly similar to the pipeline backend, but with these differences:
- `list` becomes a second-level block; a new `sub_type` field distinguishes list categories:
* `text`: ordinary list
* `ref_text`: reference / bibliography style list
- New `code` block type with `sub_type` (a code block always has at least a `code_body` and may optionally have a `code_caption`):
* `code`
* `algorithm`
- `discarded_blocks` may contain additional types:
* `header`
* `footer`
* `page_number`
* `aside_text`
* `page_footnote`
- All blocks include an `angle` field indicating rotation (one of `0, 90, 180, 270`).
##### Examples
- Example: list block
```json
{
"bbox": [174,155,818,333],
"type": "list",
"angle": 0,
"index": 11,
"blocks": [
{
"bbox": [174,157,311,175],
"type": "text",
"angle": 0,
"lines": [
{
"bbox": [174,157,311,175],
"spans": [
{
"bbox": [174,157,311,175],
"type": "text",
"content": "H.1 Introduction"
}
]
}
],
"index": 3
},
{
"bbox": [175,182,464,229],
"type": "text",
"angle": 0,
"lines": [
{
"bbox": [175,182,464,229],
"spans": [
{
"bbox": [175,182,464,229],
"type": "text",
"content": "H.2 Example: Divide by Zero without Exception Handling"
}
]
}
],
"index": 4
}
],
"sub_type": "text"
}
```
- Example: code block with optional caption:
```json
{
"type": "code",
"bbox": [114,780,885,1231],
"blocks": [
{
"bbox": [114,780,885,1231],
"lines": [
{
"bbox": [114,780,885,1231],
"spans": [
{
"bbox": [114,780,885,1231],
"type": "text",
"content": "1 // Fig. H.1: DivideByZeroNoExceptionHandling.java \n2 // Integer division without exception handling. \n3 import java.util.Scanner; \n4 \n5 public class DivideByZeroNoExceptionHandling \n6 { \n7 // demonstrates throwing an exception when a divide-by-zero occurs \n8 public static int quotient( int numerator, int denominator ) \n9 { \n10 return numerator / denominator; // possible division by zero \n11 } // end method quotient \n12 \n13 public static void main(String[] args) \n14 { \n15 Scanner scanner = new Scanner(System.in); // scanner for input \n16 \n17 System.out.print(\"Please enter an integer numerator: \"); \n18 int numerator = scanner.nextInt(); \n19 System.out.print(\"Please enter an integer denominator: \"); \n20 int denominator = scanner.nextInt(); \n21"
}
]
}
],
"index": 17,
"angle": 0,
"type": "code_body"
},
{
"bbox": [867,160,1280,189],
"lines": [
{
"bbox": [867,160,1280,189],
"spans": [
{
"bbox": [867,160,1280,189],
"type": "text",
"content": "Algorithm 1 Modules for MCTSteg"
}
]
}
],
"index": 19,
"angle": 0,
"type": "code_caption"
}
],
"index": 17,
"sub_type": "code"
}
```
#### Content List (content_list.json)
**File naming format**: `{original_filename}_content_list.json`
Based on the pipeline format, with these VLM-specific extensions:
- New `code` type with `sub_type` (`code` | `algorithm`):
* Fields: `code_body` (string), optional `code_caption` (list of strings)
- New `list` type with `sub_type` (`text` | `ref_text`):
* Field: `list_items` (array of strings)
- All `discarded_blocks` entries are also output (e.g., headers, footers, page numbers, margin notes, page footnotes).
- Existing types (`image`, `table`, `text`, `equation`) remain unchanged.
- `bbox` still uses the 0–1000 normalized coordinate mapping.
##### Examples
Example: code (algorithm) entry
```json
{
"type": "code",
"sub_type": "algorithm",
"code_caption": ["Algorithm 1 Modules for MCTSteg"],
"code_body": "1: function GETCOORDINATE(d) \n2: $x \\gets d / l$ , $y \\gets d$ mod $l$ \n3: return $(x, y)$ \n4: end function \n5: function BESTCHILD(v) \n6: $C \\gets$ child set of $v$ \n7: $v' \\gets \\arg \\max_{c \\in C} \\mathrm{UCTScore}(c)$ \n8: $v'.n \\gets v'.n + 1$ \n9: return $v'$ \n10: end function \n11: function BACK PROPAGATE(v) \n12: Calculate $R$ using Equation 11 \n13: while $v$ is not a root node do \n14: $v.r \\gets v.r + R$ , $v \\gets v.p$ \n15: end while \n16: end function \n17: function RANDOMSEARCH(v) \n18: while $v$ is not a leaf node do \n19: Randomly select an untried action $a \\in A(v)$ \n20: Create a new node $v'$ \n21: $(x, y) \\gets \\mathrm{GETCOORDINATE}(v'.d)$ \n22: $v'.p \\gets v$ , $v'.d \\gets v.d + 1$ , $v'.\\Gamma \\gets v.\\Gamma$ \n23: $v'.\\gamma_{x,y} \\gets a$ \n24: if $a = -1$ then \n25: $v.lc \\gets v'$ \n26: else if $a = 0$ then \n27: $v.mc \\gets v'$ \n28: else \n29: $v.rc \\gets v'$ \n30: end if \n31: $v \\gets v'$ \n32: end while \n33: return $v$ \n34: end function \n35: function SEARCH(v) \n36: while $v$ is fully expanded do \n37: $v \\gets$ BESTCHILD(v) \n38: end while \n39: if $v$ is not a leaf node then \n40: $v \\gets$ RANDOMSEARCH(v) \n41: end if \n42: return $v$ \n43: end function",
"bbox": [510,87,881,740],
"page_idx": 0
}
```
Example: list (text) entry
```json
{
"type": "list",
"sub_type": "text",
"list_items": [
"H.1 Introduction",
"H.2 Example: Divide by Zero without Exception Handling",
"H.3 Example: Divide by Zero with Exception Handling",
"H.4 Summary"
],
"bbox": [174,155,818,333],
"page_idx": 0
}
```
Example: discarded blocks output
```json
[
{
"type": "header",
"text": "Journal of Hydrology 310 (2005) 253-265",
"bbox": [363,164,623,177],
"page_idx": 0
},
{
"type": "page_footnote",
"text": "* Corresponding author. Address: Forest Science Centre, Department of Sustainability and Environment, P.O. Box 137, Heidelberg, Vic. 3084, Australia. Tel.: +61 3 9450 8719; fax: +61 3 9450 8644.",
"bbox": [71,815,915,841],
"page_idx": 0
}
]
```
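Downstream code therefore needs a couple of extra branches for the new `code` and `list` types. A hedged sketch (the Markdown rendering choices, such as indented code blocks, are ours, not MinerU's):

```python
def vlm_entry_to_markdown(entry):
    """Render VLM-specific content_list entries (code, list) as Markdown;
    code bodies become indented code blocks, list items become bullets."""
    kind = entry["type"]
    if kind == "code":
        caption = " ".join(entry.get("code_caption", []))
        body = "\n".join("    " + ln for ln in entry["code_body"].splitlines())
        return (caption + "\n\n" + body) if caption else body
    if kind == "list":
        return "\n".join("- " + item for item in entry.get("list_items", []))
    return entry.get("text", "")  # text-like entries pass through

entry = {"type": "list", "sub_type": "text",
         "list_items": ["H.1 Introduction", "H.4 Summary"]}
print(vlm_entry_to_markdown(entry))
# prints:
# - H.1 Introduction
# - H.4 Summary
```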
## Summary
The above files constitute MinerU's complete output results. Users can choose appropriate files for subsequent processing based on their needs:
- **Model outputs** (use raw outputs):
    * model.json
- **Debugging and verification** (use visualization files):
    * layout.pdf
    * spans.pdf
- **Content extraction** (use simplified files):
    * *.md
    * content_list.json
- **Secondary development** (use structured files):
    * middle.json


# Advanced Command Line Parameters
## Pass-Through of Inference Engine Parameters
### vllm Acceleration Parameter Optimization
> [!TIP]
> If you can already use vllm normally for accelerated VLM model inference but still want to further improve inference speed, you can try the following parameters:
>
> - If you have multiple graphics cards, you can use vllm's multi-card parallel mode to increase throughput: `--data-parallel-size 2`
### Parameter Passing Instructions
> [!TIP]
> - All officially supported vllm/lmdeploy parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-openai-server`, `mineru-gradio`, `mineru-api`
> - If you want to learn more about `vllm` parameter usage, please refer to the [vllm official documentation](https://docs.vllm.ai/en/latest/cli/serve.html)
> - If you want to learn more about `lmdeploy` parameter usage, please refer to the [lmdeploy official documentation](https://lmdeploy.readthedocs.io/en/latest/llm/api_server.html)
## GPU Device Selection and Configuration
> ```bash
> CUDA_VISIBLE_DEVICES=1 mineru -p <input_path> -o <output_path>
> ```
> - This specification method is effective for all command line calls, including `mineru`, `mineru-openai-server`, `mineru-gradio`, and `mineru-api`, and applies to both `pipeline` and `vlm` backends.
### Common Device Configuration Examples
> [!TIP]
> [!TIP]
> Here are some possible usage scenarios:
>
> - If you have multiple graphics cards and need to specify cards 0 and 1, using multi-card parallelism to start `openai-server`, you can use the following command:
> ```bash
> CUDA_VISIBLE_DEVICES=0,1 mineru-openai-server --engine vllm --port 30000 --data-parallel-size 2
> ```
>
> - If you have multiple graphics cards and need to start two `fastapi` services on cards 0 and 1, listening on different ports respectively, you can use the following commands:


Options:
-p, --path PATH Input file path or directory (required)
-o, --output PATH Output directory (required)
-m, --method [auto|txt|ocr] Parsing method: auto (default), txt, ocr (pipeline backend only)
-b, --backend [pipeline|vlm-transformers|vlm-vllm-engine|vlm-lmdeploy-engine|vlm-http-client]
Parsing backend (default: pipeline)
-l, --lang [ch|ch_server|ch_lite|en|korean|japan|chinese_cht|ta|te|ka|th|el|latin|arabic|east_slavic|cyrillic|devanagari]
Specify document language (improves OCR accuracy, pipeline backend only)
-u, --url TEXT Service address when using http-client
-s, --start INTEGER Starting page number for parsing (0-based)
-e, --end INTEGER Ending page number for parsing (0-based)
-f, --formula BOOLEAN Enable formula parsing (default: enabled)
-t, --table BOOLEAN Enable table parsing (default: enabled)
-d, --device TEXT Inference device (e.g., cpu/cuda/cuda:0/npu/mps, pipeline and vlm-transformers backends only)
--vram INTEGER Maximum GPU VRAM usage per process (GB) (pipeline backend only)
--source [huggingface|modelscope|local]
Model source, default: huggingface
@@ -45,7 +45,7 @@ Options:
files to be input need to be placed in the
`example` folder within the directory where
the command is currently executed.
--enable-sglang-engine BOOLEAN Enable SgLang engine backend for faster
--enable-vllm-engine BOOLEAN Enable vllm engine backend for faster
processing.
--enable-api BOOLEAN Enable gradio API for serving the
application.
@@ -65,9 +65,49 @@ Options:
Some parameters of MinerU command line tools have equivalent environment variable configurations. Generally, environment variable configurations have higher priority than command line parameters and take effect across all command line tools.
Here are the environment variables and their descriptions:
- `MINERU_DEVICE_MODE`: Used to specify inference device, supports device types like `cpu/cuda/cuda:0/npu/mps`, only effective for `pipeline` backend.
- `MINERU_VIRTUAL_VRAM_SIZE`: Used to specify maximum GPU VRAM usage per process (GB), only effective for `pipeline` backend.
- `MINERU_MODEL_SOURCE`: Used to specify model source, supports `huggingface/modelscope/local`, defaults to `huggingface`, can be switched to `modelscope` or local models through environment variables.
- `MINERU_TOOLS_CONFIG_JSON`: Used to specify configuration file path, defaults to `mineru.json` in user directory, can specify other configuration file paths through environment variables.
- `MINERU_FORMULA_ENABLE`: Used to enable formula parsing, defaults to `true`, can be set to `false` through environment variables to disable formula parsing.
- `MINERU_TABLE_ENABLE`: Used to enable table parsing, defaults to `true`, can be set to `false` through environment variables to disable table parsing.
- `MINERU_DEVICE_MODE`:
* Used to specify the inference device
* Supports device types like `cpu/cuda/cuda:0/npu/mps`
* Only effective for `pipeline` and `vlm-transformers` backends.
- `MINERU_VIRTUAL_VRAM_SIZE`:
* Used to specify maximum GPU VRAM usage per process (GB)
* Only effective for `pipeline` backend.
- `MINERU_MODEL_SOURCE`:
* Used to specify the model source
* Supports `huggingface/modelscope/local`
* Defaults to `huggingface`; can be switched to `modelscope` or local models via environment variable.
- `MINERU_TOOLS_CONFIG_JSON`:
* Used to specify the configuration file path
* Defaults to `mineru.json` in the user directory; other configuration file paths can be specified via environment variable.
- `MINERU_FORMULA_ENABLE`:
* Used to enable formula parsing
* Defaults to `true`; can be set to `false` via environment variable to disable formula parsing.
- `MINERU_FORMULA_CH_SUPPORT`:
* Used to enable Chinese formula parsing optimization (experimental feature)
* Default is `false`, can be set to `true` via environment variable to enable Chinese formula parsing optimization.
* Only effective for `pipeline` backend.
- `MINERU_TABLE_ENABLE`:
* Used to enable table parsing
* Default is `true`, can be set to `false` via environment variable to disable table parsing.
- `MINERU_TABLE_MERGE_ENABLE`:
* Used to enable table merging functionality
* Default is `true`, can be set to `false` via environment variable to disable table merging functionality.
- `MINERU_PDF_RENDER_TIMEOUT`:
* Used to set the timeout period (in seconds) for rendering PDF to images
* Default is `300` seconds, can be set to other values via environment variable to adjust the image rendering timeout.
- `MINERU_INTRA_OP_NUM_THREADS`:
* Used to set the intra_op thread count for ONNX models, affects the computation speed of individual operators
* Default is `-1` (auto-select), can be set to other values via environment variable to adjust the thread count.
- `MINERU_INTER_OP_NUM_THREADS`:
* Used to set the inter_op thread count for ONNX models, affects the parallel execution of multiple operators
* Default is `-1` (auto-select), can be set to other values via environment variable to adjust the thread count.
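Taken together, the variables above can simply be exported in a shell session before invoking the CLI; a minimal sketch (paths and values are illustrative):

```bash
# Select model source and device, and disable formula parsing for this run
export MINERU_MODEL_SOURCE=modelscope
export MINERU_DEVICE_MODE=cuda:0
export MINERU_FORMULA_ENABLE=false
# Raise the PDF-to-image rendering timeout for very large pages
export MINERU_PDF_RENDER_TIMEOUT=600
mineru -p ./example.pdf -o ./output
```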


@@ -29,11 +29,11 @@ mineru -p <input_path> -o <output_path>
mineru -p <input_path> -o <output_path> -b vlm-transformers
```
> [!TIP]
> The vlm backend additionally supports `sglang` acceleration. Compared to the `transformers` backend, `sglang` can achieve 20-30x speedup. You can check the installation method for the complete package supporting `sglang` acceleration in the [Extension Modules Installation Guide](../quick_start/extension_modules.md).
> The vlm backend additionally supports `vllm`/`lmdeploy` acceleration. Compared to the `transformers` backend, inference speed can be significantly improved. You can check the installation method for the complete package supporting `vllm`/`lmdeploy` acceleration in the [Extension Modules Installation Guide](../quick_start/extension_modules.md).
If you need to adjust parsing options through custom parameters, you can also check the more detailed [Command Line Tools Usage Instructions](./cli_tools.md) in the documentation.
## Advanced Usage via API, WebUI, sglang-client/server
## Advanced Usage via API, WebUI, http-client/server
- Direct Python API calls: [Python Usage Example](https://github.com/opendatalab/MinerU/blob/master/demo/demo.py)
- FastAPI calls:
@@ -44,29 +44,35 @@ If you need to adjust parsing options through custom parameters, you can also ch
>Access `http://127.0.0.1:8000/docs` in your browser to view the API documentation.
- Start Gradio WebUI visual frontend:
```bash
# Using pipeline/vlm-transformers/vlm-sglang-client backends
# Using pipeline/vlm-transformers/vlm-http-client backends
mineru-gradio --server-name 0.0.0.0 --server-port 7860
# Or using vlm-sglang-engine/pipeline backends (requires sglang environment)
mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-sglang-engine true
# Or using vlm-vllm-engine/pipeline backends (requires vllm environment)
mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-vllm-engine true
# Or using vlm-lmdeploy-engine/pipeline backends (requires lmdeploy environment)
mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-lmdeploy-engine true
```
>[!TIP]
>
>- Access `http://127.0.0.1:7860` in your browser to use the Gradio WebUI.
>- Access `http://127.0.0.1:7860/?view=api` to use the Gradio API.
- Using `sglang-client/server` method:
- Using `http-client/server` method:
```bash
# Start sglang server (requires sglang environment)
mineru-sglang-server --port 30000
# Start openai compatible server (requires vllm or lmdeploy environment)
mineru-openai-server
# Or start vllm server (requires vllm environment)
mineru-openai-server --engine vllm --port 30000
# Or start lmdeploy server (requires lmdeploy environment)
mineru-openai-server --engine lmdeploy --server-port 30000
```
>[!TIP]
>In another terminal, connect to sglang server via sglang client (only requires CPU and network, no sglang environment needed)
>In another terminal, connect to vllm server via http client (only requires CPU and network, no vllm environment needed)
> ```bash
> mineru -p <input_path> -o <output_path> -b vlm-sglang-client -u http://127.0.0.1:30000
> mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://127.0.0.1:30000
> ```
> [!NOTE]
> All officially supported sglang parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-sglang-server`, `mineru-gradio`, `mineru-api`.
> We have compiled some commonly used parameters and usage methods for `sglang`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
> All officially supported `vllm/lmdeploy` parameters can be passed to MinerU through command line arguments, including the following commands: `mineru`, `mineru-openai-server`, `mineru-gradio`, `mineru-api`.
> We have compiled some commonly used parameters and usage methods for `vllm/lmdeploy`, which can be found in the documentation [Advanced Command Line Parameters](./advanced_cli_parameters.md).
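> As a sketch of this pass-through, the MinerU flags shown above can be combined with common engine options; `--gpu-memory-utilization` is a standard vllm flag, and the value here is illustrative:
> ```bash
> # Start the vllm-backed server, forwarding engine options to vllm
> mineru-openai-server --engine vllm --port 30000 \
>   --data-parallel-size 2 --gpu-memory-utilization 0.8
> ```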
## Extending MinerU Functionality with Configuration Files
@@ -77,7 +83,36 @@ MinerU is now ready to use out of the box, but also supports extending functiona
Here are some available configuration options:
- `latex-delimiter-config`: Used to configure LaTeX formula delimiters, defaults to `$` symbol, can be modified to other symbols or strings as needed.
- `llm-aided-config`: Used to configure parameters for LLM-assisted title hierarchy, compatible with all LLM models supporting `openai protocol`, defaults to using Alibaba Cloud Bailian's `qwen2.5-32b-instruct` model. You need to configure your own API key and set `enable` to `true` to enable this feature.
- `models-dir`: Used to specify local model storage directory, please specify model directories for `pipeline` and `vlm` backends separately. After specifying the directory, you can use local models by configuring the environment variable `export MINERU_MODEL_SOURCE=local`.
- `latex-delimiter-config`:
* Used to configure LaTeX formula delimiters
* Defaults to `$` symbol, can be modified to other symbols or strings as needed.
- `llm-aided-config`:
* Used to configure parameters for LLM-assisted title hierarchy
* Compatible with all LLM models supporting `openai protocol`, defaults to using Alibaba Cloud Bailian's `qwen3-next-80b-a3b-instruct` model.
* You need to configure your own API key and set `enable` to `true` to enable this feature.
* If your API provider does not support the `enable_thinking` parameter, please manually remove it.
* For example, in your configuration file, the `llm-aided-config` section may look like:
```json
"llm-aided-config": {
"api_key": "your_api_key",
"base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"model": "qwen3-next-80b-a3b-instruct",
"enable_thinking": false,
"enable": false
}
```
* To remove the `enable_thinking` parameter, simply delete the line containing `"enable_thinking": false`, resulting in:
```json
"llm-aided-config": {
"api_key": "your_api_key",
"base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"model": "qwen3-next-80b-a3b-instruct",
"enable": false
}
```
- `models-dir`:
* Used to specify local model storage directory
* Please specify model directories for `pipeline` and `vlm` backends separately.
* After specifying the directory, you can use local models by configuring the environment variable `export MINERU_MODEL_SOURCE=local`.
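Putting the option above to use, a `mineru.json` pointing at local models might look like the following sketch (the directory paths are placeholders; check the key layout against the template shipped with MinerU):

```json
{
  "models-dir": {
    "pipeline": "/path/to/models/pipeline",
    "vlm": "/path/to/models/vlm"
  }
}
```

Combined with `export MINERU_MODEL_SOURCE=local`, this makes MinerU load models from the specified directories.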


@@ -2,7 +2,7 @@
If your question is not listed here, you can also use [DeepWiki](https://deepwiki.com/opendatalab/MinerU) to talk with the AI assistant, which can resolve most common issues.
If you still cannot solve the problem, you can join the community via [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](http://mineru.space/s/V85Yl) to communicate with other users and developers.
If you still cannot solve the problem, you can join the community via [Discord](https://discord.gg/Tdedn9GTXq) or [WeChat](https://mineru.net/community-portal/?aliasId=3c430f94) to communicate with other users and developers.
??? question "Encountering the error `ImportError: libGL.so.1: cannot open shared object file: No such file or directory` on Ubuntu 22.04 in WSL2"
@@ -14,18 +14,6 @@
Reference: [#388](https://github.com/opendatalab/MinerU/issues/388)
??? question "Error `ERROR: Failed building wheel for simsimd` when installing MinerU on CentOS 7 or Ubuntu 18"
Newer versions of albumentations (1.4.21) introduced the simsimd dependency. Since simsimd's prebuilt Linux packages require glibc >= 2.28, some Linux distributions released before 2019 cannot install it normally. It can be installed with the following commands:
```bash
conda create -n mineru python=3.11 -y
conda activate mineru
pip install -U "mineru[pipeline_old_linux]"
```
Reference: [#1004](https://github.com/opendatalab/MinerU/issues/1004)
??? question "Some text is missing from the parsing results when installing and using MinerU on Linux."
In versions >= 2.0, MinerU uses `pypdfium2` instead of `pymupdf` as the PDF page rendering engine to resolve AGPLv3 licensing issues. On some Linux distributions, missing CJK fonts may cause some text to be lost when rendering PDF pages to images.
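One common way to supply the missing CJK fonts on Debian/Ubuntu-family systems is to install the Noto CJK font package (the package name may differ on other distributions):
```bash
# Install CJK fonts so the renderer can draw Chinese/Japanese/Korean glyphs
sudo apt-get update
sudo apt-get install -y fonts-noto-cjk
```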


@@ -18,8 +18,9 @@
[![OpenDataLab](https://img.shields.io/badge/webapp_on_mineru.net-blue?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTM0IiBoZWlnaHQ9IjEzNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cGF0aCBkPSJtMTIyLDljMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0idXJsKCNhKSIvPjxwYXRoIGQ9Im0xMjIsOWMwLDUtNCw5LTksOXMtOS00LTktOSw0LTksOS05LDksNCw5LDl6IiBmaWxsPSIjMDEwMTAxIi8+PHBhdGggZD0ibTkxLDE4YzAsNS00LDktOSw5cy05LTQtOS05LDQtOSw5LTksOSw0LDksOXoiIGZpbGw9InVybCgjYikiLz48cGF0aCBkPSJtOTEsMThjMCw1LTQsOS05LDlzLTktNC05LTksNC05LDktOSw5LDQsOSw5eiIgZmlsbD0iIzAxMDEwMSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0idXJsKCNjKSIvPjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJtMzksNjJjMCwxNiw4LDMwLDIwLDM4LDctNiwxMi0xNiwxMi0yNlY0OWMwLTQsMy03LDYtOGw0Ni0xMmM1LTEsMTEsMywxMSw4djMxYzAsMzctMzAsNjYtNjYsNjYtMzcsMC02Ni0zMC02Ni02NlY0NmMwLTQsMy03LDYtOGwyMC02YzUtMSwxMSwzLDExLDh2MjF6bS0yOSw2YzAsMTYsNiwzMCwxNyw0MCwzLDEsNSwxLDgsMSw1LDAsMTAtMSwxNS0zQzM3LDk1LDI5LDc5LDI5LDYyVjQybC0xOSw1djIweiIgZmlsbD0iIzAxMDEwMSIvPjxkZWZzPjxsaW5lYXJHcmFkaWVudCBpZD0iYSIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYiIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZTJlMmUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYyIgeDE9Ijg0IiB5MT0iNDEiIHgyPSI3NSIgeTI9IjEyMCIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIHN0b3AtY29sb3I9IiNmZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiMyZT
JlMmUiLz48L2xpbmVhckdyYWRpZW50PjwvZGVmcz48L3N2Zz4=&labelColor=white)](https://mineru.net/OpenSourceTools/Extractor?source=github)
[![ModelScope](https://img.shields.io/badge/Demo_on_ModelScope-purple?logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjIzIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KCiA8Zz4KICA8dGl0bGU+TGF5ZXIgMTwvdGl0bGU+CiAgPHBhdGggaWQ9InN2Z18xNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTAsODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTUiIGZpbGw9IiM2MjRhZmYiIGQ9Im05OS4xNCwxMTUuNDlsMjUuNjUsMGwwLDI1LjY1bC0yNS42NSwwbDAsLTI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTYiIGZpbGw9IiM2MjRhZmYiIGQ9Im0xNzYuMDksMTQxLjE0bC0yNS42NDk5OSwwbDAsMjIuMTlsNDcuODQsMGwwLC00Ny44NGwtMjIuMTksMGwwLDI1LjY1eiIvPgogIDxwYXRoIGlkPSJzdmdfMTciIGZpbGw9IiMzNmNmZDEiIGQ9Im0xMjQuNzksODkuODRsMjUuNjUsMGwwLDI1LjY0OTk5bC0yNS42NSwwbDAsLTI1LjY0OTk5eiIvPgogIDxwYXRoIGlkPSJzdmdfMTgiIGZpbGw9IiMzNmNmZDEiIGQ9Im0wLDY0LjE5bDI1LjY1LDBsMCwyNS42NWwtMjUuNjUsMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzE5IiBmaWxsPSIjNjI0YWZmIiBkPSJtMTk4LjI4LDg5Ljg0bDI1LjY0OTk5LDBsMCwyNS42NDk5OWwtMjUuNjQ5OTksMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIwIiBmaWxsPSIjMzZjZmQxIiBkPSJtMTk4LjI4LDY0LjE5bDI1LjY0OTk5LDBsMCwyNS42NWwtMjUuNjQ5OTksMGwwLC0yNS42NXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIxIiBmaWxsPSIjNjI0YWZmIiBkPSJtMTUwLjQ0LDQybDAsMjIuMTlsMjUuNjQ5OTksMGwwLDI1LjY1bDIyLjE5LDBsMCwtNDcuODRsLTQ3Ljg0LDB6Ii8+CiAgPHBhdGggaWQ9InN2Z18yMiIgZmlsbD0iIzM2Y2ZkMSIgZD0ibTczLjQ5LDg5Ljg0bDI1LjY1LDBsMCwyNS42NDk5OWwtMjUuNjUsMGwwLC0yNS42NDk5OXoiLz4KICA8cGF0aCBpZD0ic3ZnXzIzIiBmaWxsPSIjNjI0YWZmIiBkPSJtNDcuODQsNjQuMTlsMjUuNjUsMGwwLC0yMi4xOWwtNDcuODQsMGwwLDQ3Ljg0bDIyLjE5LDBsMCwtMjUuNjV6Ii8+CiAgPHBhdGggaWQ9InN2Z18yNCIgZmlsbD0iIzYyNGFmZiIgZD0ibTQ3Ljg0LDExNS40OWwtMjIuMTksMGwwLDQ3Ljg0bDQ3Ljg0LDBsMCwtMjIuMTlsLTI1LjY1LDBsMCwtMjUuNjV6Ii8+CiA8L2c+Cjwvc3ZnPg==&labelColor=white)](https://www.modelscope.cn/studios/OpenDataLab/MinerU)
[![HuggingFace](https://img.shields.io/badge/Demo_on_HuggingFace-yellow.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAF8AAABYCAMAAACkl9t/AAAAk1BMVEVHcEz/nQv/nQv/nQr/nQv/nQr/nQv/nQv/nQr/wRf/txT/pg7/yRr/rBD/zRz/ngv/oAz/zhz/nwv/txT/ngv/0B3+zBz/nQv/0h7/wxn/vRb/thXkuiT/rxH/pxD/ogzcqyf/nQvTlSz/czCxky7/SjifdjT/Mj3+Mj3wMj15aTnDNz+DSD9RTUBsP0FRO0Q6O0WyIxEIAAAAGHRSTlMADB8zSWF3krDDw8TJ1NbX5efv8ff9/fxKDJ9uAAAGKklEQVR42u2Z63qjOAyGC4RwCOfB2JAGqrSb2WnTw/1f3UaWcSGYNKTdf/P+mOkTrE+yJBulvfvLT2A5ruenaVHyIks33npl/6C4s/ZLAM45SOi/1FtZPyFur1OYofBX3w7d54Bxm+E8db+nDr12ttmESZ4zludJEG5S7TO72YPlKZFyE+YCYUJTBZsMiNS5Sd7NlDmKM2Eg2JQg8awbglfqgbhArjxkS7dgp2RH6hc9AMLdZYUtZN5DJr4molC8BfKrEkPKEnEVjLbgW1fLy77ZVOJagoIcLIl+IxaQZGjiX597HopF5CkaXVMDO9Pyix3AFV3kw4lQLCbHuMovz8FallbcQIJ5Ta0vks9RnolbCK84BtjKRS5uA43hYoZcOBGIG2Epbv6CvFVQ8m8loh66WNySsnN7htL58LNp+NXT8/PhXiBXPMjLSxtwp8W9f/1AngRierBkA+kk/IpUSOeKByzn8y3kAAAfh//0oXgV4roHm/kz4E2z//zRc3/lgwBzbM2mJxQEa5pqgX7d1L0htrhx7LKxOZlKbwcAWyEOWqYSI8YPtgDQVjpB5nvaHaSnBaQSD6hweDi8PosxD6/PT09YY3xQA7LTCTKfYX+QHpA0GCcqmEHvr/cyfKQTEuwgbs2kPxJEB0iNjfJcCTPyocx+A0griHSmADiC91oNGVwJ69RudYe65vJmoqfpul0lrqXadW0jFKH5BKwAeCq+Den7s+3zfRJzA61/Uj/9H/VzLKTx9jFPPdXeeP+L7WEvDLAKAIoF8bPTKT0+TM7W8ePj3Rz/Yn3kOAp2f1Kf0Weony7pn/cPydvhQYV+eFOfmOu7VB/ViPe34/EN3RFHY/yRuT8ddCtMPH/McBAT5s+vRde/gf2c/sPsjLK+m5IBQF5tO+h2tTlBGnP6693JdsvofjOPnnEHkh2TnV/X1fBl9S5zrwuwF8NFrAVJVwCAPTe8gaJlomqlp0pv4Pjn98tJ/t/fL++6unpR1YGC2n/KCoa0tTLoKiEeUPDl94nj+5/Tv3/eT5vBQ60X1S0oZr+IWRR8Ldhu7AlLjPISlJcO9vrFotky9SpzDequlwEir5beYAc0R7D9KS1DXva0jhYRDXoExPdc6yw5GShkZXe9QdO/uOvHofxjrV/TNS6iMJS+4TcSTgk9n5agJdBQbB//IfF/HpvPt3Tbi7b6I6K0R72p6ajryEJrENW2bbeVUGjfgoals4L443c7BEE4mJO2SpbRngxQrAKRudRzGQ8jVOL2qDVjjI8K1gc3TIJ5KiFZ1q+gdsARPB4NQS4AjwVSt72DSoXNyOWUrU5mQ9nRYyjp89Xo7oRI6Bga9QNT1mQ/ptaJq5T/7WcgAZywR/XlPGAUDdet3LE+qS0TI+g+aJU8MIqjo0Kx8Ly+maxLjJmjQ18rA0YCkxLQbUZP1WqdmyQGJLUm7VnQFqodmXSqmRrdVpqdzk5LvmvgtEcW8PMGdaS23EOWyDVbACZzUJPaqMbjDxpA3Qrgl0AikimGDbqmyT8P8NOYiqrldF8rX+YN7TopX4UoHuSCYY7cgX4gHwclQKl1zhx0THf+tCAUValzj
I7Wg9EhptrkIcfIJjA94evOn8B2eHaVzvBrnl2ig0So6hvPaz0IGcOvTHvUIlE2+prqAxLSQxZlU2stql1NqCCLdIiIN/i1DBEHUoElM9dBravbiAnKqgpi4IBkw+utSPIoBijDXJipSVV7MpOEJUAc5Qmm3BnUN+w3hteEieYKfRZSIUcXKMVf0u5wD4EwsUNVvZOtUT7A2GkffHjByWpHqvRBYrTV72a6j8zZ6W0DTE86Hn04bmyWX3Ri9WH7ZU6Q7h+ZHo0nHUAcsQvVhXRDZHChwiyi/hnPuOsSEF6Exk3o6Y9DT1eZ+6cASXk2Y9k+6EOQMDGm6WBK10wOQJCBwren86cPPWUcRAnTVjGcU1LBgs9FURiX/e6479yZcLwCBmTxiawEwrOcleuu12t3tbLv/N4RLYIBhYexm7Fcn4OJcn0+zc+s8/VfPeddZHAGN6TT8eGczHdR/Gts1/MzDkThr23zqrVfAMFT33Nx1RJsx1k5zuWILLnG/vsH+Fv5D4NTVcp1Gzo8AAAAAElFTkSuQmCC&labelColor=white)](https://huggingface.co/spaces/opendatalab/MinerU)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/3b3a00a4a0a61577b6c30f989092d20d/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/arXiv-2409.18839-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/myhloli/a3cb16570ab3cfeadf9d8f0ac91b4fca/mineru_demo.ipynb)
[![arXiv](https://img.shields.io/badge/MinerU-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.18839)
[![arXiv](https://img.shields.io/badge/MinerU2.5-Technical%20Report-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2509.22186)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/opendatalab/MinerU)
<div align="center">
@@ -33,7 +34,7 @@
<!-- join us -->
<p align="center">
👋 join us on <a href="https://discord.gg/Tdedn9GTXq" target="_blank">Discord</a> and <a href="http://mineru.space/s/V85Yl" target="_blank">WeChat</a>
👋 join us on <a href="https://discord.gg/Tdedn9GTXq" target="_blank">Discord</a> and <a href="https://mineru.net/community-portal/?aliasId=3c430f94" target="_blank">WeChat</a>
</p>
</div>
@@ -55,7 +56,7 @@ MinerU诞生于[书生-浦语](https://github.com/InternLM/InternLM)的预训练
- Automatically recognizes formulas in documents and converts them to LaTeX format
- Automatically recognizes tables in documents and converts them to HTML format
- Automatically detects scanned and garbled PDFs and enables OCR for them
- OCR supports detection and recognition of 84 languages
- OCR supports detection and recognition of 109 languages
- Supports multiple output formats, such as multimodal and NLP Markdown, reading-order-sorted JSON, and an information-rich intermediate format
- Supports multiple visualization results, including layout visualization and span visualization, for efficient verification of output quality
- Runs in pure CPU environments, with GPU (CUDA)/NPU (CANN)/MPS acceleration also supported

Some files were not shown because too many files have changed in this diff.