diff --git a/README.md b/README.md
index 97d4da71..6b35f93b 100644
--- a/README.md
+++ b/README.md
@@ -45,7 +45,7 @@ # Changelog
-- 2025/12/31 2.7.0 Release
+- 2025/12/30 2.7.0 Release
   - Simplified installation process. No need to separately install `vlm` acceleration engine dependencies. Using `uv pip install mineru[all]` during installation will install all optional backend dependencies.
   - Added new `hybrid` backend, which combines the advantages of the `pipeline` and `vlm` backends. Built on vlm, it integrates some capabilities of pipeline, adding extra extensibility on top of high accuracy:
     - Directly extracts text from text PDFs, natively supports multi-language recognition in text PDF scenarios, and greatly reduces parsing hallucinations;
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 1a78facf..2681a413 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -45,7 +45,7 @@ # Changelog
-- 2025/12/31 2.7.0 Release
+- 2025/12/30 2.7.0 Release
   - Simplified the installation process: separately installing the `vlm` acceleration engine dependencies is no longer required; running `uv pip install mineru[all]` installs the dependencies for all optional backends.
   - Added a new `hybrid` backend that combines the strengths of the `pipeline` and `vlm` backends. Built on vlm, it integrates some pipeline capabilities, adding extra extensibility on top of high accuracy:
     - Extracts text directly from text PDFs, natively supports multi-language recognition in text PDF scenarios, and greatly reduces parsing hallucinations;
@@ -130,7 +130,7 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
 Backend features
 Good compatibility
-Higher configuration requirements
+Higher hardware configuration requirements
 Suitable for OpenAI-compatible servers2
diff --git a/docs/zh/quick_start/index.md b/docs/zh/quick_start/index.md
index 79dbca37..8e22ec73 100644
--- a/docs/zh/quick_start/index.md
+++ b/docs/zh/quick_start/index.md
@@ -45,7 +45,7 @@
 Backend features
 Good compatibility
-Higher configuration requirements
+Higher hardware configuration requirements
 Suitable for OpenAI-compatible servers2