mirror of
https://github.com/opendatalab/MinerU.git
synced 2026-03-27 11:08:32 +07:00
refactor: update GPU model support information in Docker deployment documentation
@@ -11,7 +11,7 @@ docker build -t mineru:latest -f Dockerfile .
> [!TIP]
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/global/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default. This version of vLLM v1 engine has limited support for GPU models.
> If you cannot use vLLM accelerated inference on Turing and earlier architecture GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.10.2`.
> This version supports a limited range of GPU models and may only function on Ampere, Ada Lovelace, and Hopper architectures. If you cannot use vLLM for accelerated inference on Volta, Turing, or Blackwell GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.11.0`.
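The tip above amounts to swapping the `FROM` line before building. A minimal sketch (the `Dockerfile.sample` file and the `sed` substitution are illustrative assumptions, not part of the MinerU repo; edit the real Dockerfile's `FROM` line the same way):

```shell
# Create a stand-in Dockerfile with the default base image (illustrative only).
cat > Dockerfile.sample <<'EOF'
FROM vllm/vllm-openai:v0.10.1.1
EOF

# Swap in the base image with broader GPU-architecture support,
# as the tip suggests for Volta/Turing/Blackwell GPUs.
sed -i 's#vllm/vllm-openai:v0.10.1.1#vllm/vllm-openai:v0.11.0#' Dockerfile.sample

grep FROM Dockerfile.sample   # -> FROM vllm/vllm-openai:v0.11.0

# Then build as usual:
# docker build -t mineru:latest -f Dockerfile.sample .
```

The same one-line change applies to the `docker/global` and `docker/china` Dockerfiles alike, since both default to the same base image.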
## Docker Description
@@ -11,7 +11,7 @@ docker build -t mineru:latest -f Dockerfile .
> [!TIP]
> The [Dockerfile](https://github.com/opendatalab/MinerU/blob/master/docker/china/Dockerfile) uses `vllm/vllm-openai:v0.10.1.1` as the base image by default.
> This version of the vLLM v1 engine has limited support for GPU models. If you cannot use vLLM accelerated inference on Turing and earlier architecture GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.10.2`.
> This version supports a limited range of GPU models and may only function on Ampere, Ada Lovelace, and Hopper architectures. If you cannot use vLLM for accelerated inference on Volta, Turing, or Blackwell GPUs, you can resolve this issue by changing the base image to `vllm/vllm-openai:v0.11.0`.
## Docker Description