refactor: update GPU model support information in Docker deployment documentation

myhloli
2025-12-24 14:56:41 +08:00
parent 88822c7918
commit eeea4f38e3
2 changed files with 2 additions and 2 deletions


@@ -11,7 +11,7 @@ docker build -t mineru:latest -f Dockerfile .
 > [!TIP]
 > The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default. This version of the vLLM v1 engine has limited support for GPU models.
-> If you cannot use vLLM accelerated inference on Turing and earlier architecture GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.10.2`.
+> This version supports a limited range of GPU models and may only function on the Ampere, Ada Lovelace, and Hopper architectures. If you cannot use vLLM for accelerated inference on Volta, Turing, or Blackwell GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.11.0`.
 ## Docker Description
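The base-image swap the tip describes can be sketched as a one-line rewrite of the Dockerfile's `FROM` instruction before building. This is a minimal illustration, not the project's documented procedure; `Dockerfile.demo` is a hypothetical stand-in for the real Dockerfile, which is assumed to pin `vllm/vllm-openai:v0.10.1.1` as stated above.

```shell
# Stand-in for the real Dockerfile, assumed to pin the default base image.
printf 'FROM vllm/vllm-openai:v0.10.1.1\n' > Dockerfile.demo

# Rewrite the FROM line to the newer base image recommended for
# Volta/Turing/Blackwell GPUs (GNU sed in-place edit).
sed -i 's|vllm/vllm-openai:v0.10.1.1|vllm/vllm-openai:v0.11.0|' Dockerfile.demo

cat Dockerfile.demo  # prints: FROM vllm/vllm-openai:v0.11.0
```

After editing the real Dockerfile the same way, rebuild with the `docker build -t mineru:latest -f Dockerfile .` command shown in the hunk header.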


@@ -11,7 +11,7 @@ docker build -t mineru:latest -f Dockerfile .
 > [!TIP]
 > The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default.
-> This version of the vLLM v1 engine has limited support for GPU models. If you cannot use vLLM accelerated inference on Turing and earlier architecture GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.10.2`.
+> This version supports a limited range of GPU models and may only work on the Ampere, Ada Lovelace, and Hopper architectures. If you cannot use vLLM accelerated inference on Volta, Turing, or Blackwell GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.11.0`.
 ## Docker Notes