Mirror of https://github.com/docker/docs.git (synced 2026-03-27 06:18:55 +07:00)
fix: clarify requirements (#22971)

Clarify supported platforms.

Co-authored-by: Allie Sadler <102604716+aevesdocker@users.noreply.github.com>
_vale/Docker/Forbidden.yml (new file, +6)
@@ -0,0 +1,6 @@
+extends: substitution
+message: "Use '%s' instead of '%s'."
+level: error
+ignorecase: false
+swap:
+  Docker CE: Docker Engine
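The new rule is a Vale `substitution` check: each key under `swap` is a pattern to flag, and its value is the preferred replacement, reported case-sensitively via the `message` template. A minimal Python sketch of that behavior (not Vale's actual implementation, just an illustration of what the rule above does):

```python
import re

# Mirror of the rule's fields: case-sensitive swap table and message template.
swap = {"Docker CE": "Docker Engine"}
message = "Use '%s' instead of '%s'."

def lint(text):
    """Return one finding per match of a forbidden term."""
    findings = []
    for pattern, preferred in swap.items():
        for match in re.finditer(pattern, text):
            findings.append(message % (preferred, match.group(0)))
    return findings

print(lint("Install Docker CE on your host."))
# → ["Use 'Docker Engine' instead of 'Docker CE'."]
```

With `level: error`, a match like this fails the docs lint rather than merely warning.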
@@ -1,7 +1,8 @@
 (?i)[A-Z]{2,}'?s
+Adreno
 Aleksandrov
 Amazon
 Anchore
 Apple
 Artifactory
 Azure
@@ -114,6 +115,7 @@ Nginx
 npm
 Nutanix
 Nuxeo
 NVIDIA
 OAuth
 Okta
+Ollama
@@ -126,8 +128,7 @@ PKG
 Postgres
 PowerShell
 Python
-Pyright
-pyright
+Qualcomm
 rollback
 rootful
 runc
@@ -200,6 +201,7 @@ Zsh
 [Pp]rocfs
 [Pp]roxied
 [Pp]roxying
+[pP]yright
 [Rr]eal-time
 [Rr]egex(es)?
 [Rr]untimes?
@@ -40,6 +40,41 @@ with AI models locally.
 - Run and interact with AI models directly from the command line or from the Docker Desktop GUI
 - Manage local models and display logs
 
+## Requirements
+
+Docker Model Runner is supported on the following platforms:
+
+{{< tabs >}}
+{{< tab name="Windows">}}
+
+Windows(amd64):
+- NVIDIA GPUs
+- NVIDIA drivers 576.57+
+
+Windows(arm64):
+- OpenCL for Adreno
+- Qualcomm Adreno GPU (6xx series and later)
+
+> [!NOTE]
+> Some llama.cpp features might not be fully supported on the 6xx series.
+
+{{< /tab >}}
+{{< tab name="MacOS">}}
+
+- Apple Silicon
+
+{{< /tab >}}
+{{< tab name="Linux">}}
+
+Docker Engine only:
+
+- Linux CPU & Linux NVIDIA
+- NVIDIA drivers 575.57.08+
+
+{{< /tab >}}
+{{</tabs >}}
+
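The tabs above pin minimum NVIDIA driver versions (576.57+ on Windows amd64, 575.57.08+ on Linux). Version strings like these compare by numeric component, not lexically; a hypothetical helper (names and the padding behavior are my own, not from this commit) makes the comparison explicit:

```python
def meets_minimum(installed: str, minimum: str) -> bool:
    """Compare dotted driver versions component-by-component as integers."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    a, b = to_tuple(installed), to_tuple(minimum)
    # Pad with zeros so "576.57" and "576.57.0" compare as equal.
    width = max(len(a), len(b))
    pad = lambda t: t + (0,) * (width - len(t))
    return pad(a) >= pad(b)

print(meets_minimum("576.80", "576.57"))        # → True
print(meets_minimum("575.57.08", "575.57.08"))  # → True
print(meets_minimum("572.16", "576.57"))        # → False
```

String comparison would get this wrong (e.g. `"576.8" < "576.57"` lexically), which is why the components are converted to integers first.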
 ## How it works
 
 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time, but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).
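The paragraph above says models are exposed through OpenAI-compatible APIs. As an illustration only (the port, endpoint path, and model name below are assumptions about a local Model Runner setup, not facts from this commit), a chat-completions request could be sketched as:

```python
import json
import urllib.request

# Assumed local setup: Model Runner reachable on host port 12434 with a
# small model pulled; the request body follows the OpenAI chat-completions shape.
payload = {
    "model": "ai/smollm2",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}
req = urllib.request.Request(
    "http://localhost:12434/engines/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError as exc:
    # Model Runner not reachable (e.g. not installed, or host TCP access disabled).
    print(f"request failed: {exc}")
```

The first such request triggers the on-demand model load described above, so it can take noticeably longer than subsequent ones.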
@@ -162,7 +162,7 @@ Docker Init:
 Docker Model Runner:
   availability: Beta
   requires: Docker Engine or Docker Desktop (Windows) 4.41+ or Docker Desktop (MacOS) 4.40+
-  for: Docker Desktop for Mac with Apple Silicon or Windows with NVIDIA GPUs
+  for: See Requirements section below
 Docker Projects:
   availability: Beta
 Docker Scout exceptions:
@@ -84,6 +84,7 @@
 "Mac-and-Linux",
 "Mac-with-Apple-silicon",
 "Mac-with-Intel-chip",
+"MacOS",
 "Manually-create-assets",
 "NetworkManager",
 "Networking-mode",
@@ -110,7 +111,9 @@
+"Run-Ollama-in-a-container",
+"Run-Ollama-outside-of-a-container",
 "Rust",
 "Separate-containers",
 "Shell-script",
 "Single-container",
 "Specific-version",
 "Svelte",
 "Ubuntu",