Clean up information architecture (#5893)
- Move /engine/admin/ and /engine/userguide/ topics to /config/ and /develop/
- Get rid of some stub topics that are no longer needed
- Rename /engine/article-img/ to /engine/images/
- Mark ambassador linking topic as obsolete
- Flesh out multistage build topic
- Reorganize some terribly obsolete content in other files
_data/toc.yaml
@@ -221,19 +221,19 @@ guides:
title: App development overview
- path: /develop/dev-best-practices/
title: App development best practices
- sectiontitle: Work with images
- sectiontitle: Develop images
section:
- path: /engine/userguide/eng-image/dockerfile_best-practices/
- path: /develop/develop-images/dockerfile_best-practices/
title: Best practices for writing Dockerfiles
- path: /engine/userguide/eng-image/baseimages/
- path: /develop/develop-images/baseimages/
title: Create a base image
- path: /engine/userguide/eng-image/multistage-build/
- path: /develop/develop-images/multistage-build/
title: Use multi-stage builds
- path: /engine/reference/builder/
title: Dockerfile reference
nosync: true
- path: /engine/userguide/eng-image/image_management/
title: Image management
- path: /develop/develop-images/image_management/
title: Manage images
- path: /samples/
title: Docker app examples
nosync: true
@@ -308,61 +308,93 @@ guides:
title: Use the VFS storage driver
- sectiontitle: Run your app in production
section:
- sectiontitle: The basics
section:
- path: /engine/userguide/
title: Overview
- path: /engine/admin/
title: Configuring and running Docker
- path: /engine/admin/prometheus/
title: Collect Docker metrics with Prometheus
- path: /engine/admin/start-containers-automatically/
title: Start containers automatically
- path: /engine/admin/resource_constraints/
title: Limit a container's resources
- path: /engine/userguide/labels-custom-metadata/
title: Apply custom metadata
- path: /engine/admin/pruning/
title: Prune unused Docker objects
- path: /engine/admin/live-restore/
title: Keep containers alive during daemon downtime
- path: /engine/admin/systemd/
title: Control and configure Docker with systemd
- path: /engine/admin/formatting/
title: Format command and log output
- path: /registry/recipes/mirror/
title: Run a local registry mirror
nosync: true
- sectiontitle: Configure all objects
section:
- path: /config/labels-custom-metadata/
title: Apply custom metadata to objects
- path: /config/pruning/
title: Prune unused objects
- path: /config/formatting/
title: Format command and log output
- sectiontitle: Configure the daemon
section:
- path: /config/daemon/
title: Configure and run Docker
- path: /config/daemon/systemd/
title: Control Docker with systemd
- path: /config/labels-custom-metadata/
title: Apply custom metadata to daemons
nosync: true
- path: /config/containers/logging/
title: Configuring default drivers
nosync: true
- path: /config/thirdparty/prometheus/
title: Collect Docker metrics with Prometheus
- sectiontitle: Configure containers
section:
- path: /config/containers/start-containers-automatically/
title: Start containers automatically
- path: /config/containers/live-restore/
title: Keep containers alive during daemon downtime
- path: /config/containers/multi-service_container/
title: Run multiple services in a container
- path: /config/containers/runmetrics/
title: Container runtime metrics
- path: /config/containers/resource_constraints/
title: Limit a container's resources
- path: /config/labels-custom-metadata/
title: Apply custom metadata to containers
nosync: true
- path: /config/pruning/
title: Prune unused containers
nosync: true
- sectiontitle: Logging
section:
- path: /engine/admin/logging/view_container_logs/
- path: /config/containers/logging/
title: View a container's logs
- path: /engine/admin/logging/overview/
title: Configuring logging drivers
- path: /engine/admin/logging/plugins/
- path: /config/containers/logging/configure/
title: Configure logging drivers
- path: /config/containers/logging/plugins/
title: Use a logging driver plugin
- path: /engine/admin/logging/log_tags/
title: Log tags for logging driver
- path: /engine/admin/logging/logentries/
title: Logentries logging driver
- path: /engine/admin/logging/json-file/
title: JSON File logging driver
- path: /engine/admin/logging/gelf/
title: Graylog Extended Format (GELF) logging driver
- path: /engine/admin/logging/syslog/
title: Syslog logging driver
- path: /engine/admin/logging/awslogs/
title: Amazon CloudWatch logs logging driver
- path: /engine/admin/logging/etwlogs/
title: ETW logging driver
- path: /engine/admin/logging/fluentd/
title: Fluentd logging driver
- path: /engine/admin/logging/gcplogs/
title: Google Cloud logging driver
- path: /engine/admin/logging/journald/
title: Journald logging driver
- path: /engine/admin/logging/splunk/
title: Splunk logging driver
- path: /config/containers/logging/log_tags/
title: Customize log driver output
- sectiontitle: Logging driver details
section:
- path: /config/containers/logging/logentries/
title: Logentries logging driver
- path: /config/containers/logging/json-file/
title: JSON File logging driver
- path: /config/containers/logging/gelf/
title: Graylog Extended Format (GELF) logging driver
- path: /config/containers/logging/syslog/
title: Syslog logging driver
- path: /config/containers/logging/awslogs/
title: Amazon CloudWatch logs logging driver
- path: /config/containers/logging/etwlogs/
title: ETW logging driver
- path: /config/containers/logging/fluentd/
title: Fluentd logging driver
- path: /config/containers/logging/gcplogs/
title: Google Cloud logging driver
- path: /config/containers/logging/journald/
title: Journald logging driver
- path: /config/containers/logging/splunk/
title: Splunk logging driver
- path: /registry/recipes/mirror/
title: Run a local registry mirror
nosync: true
- sectiontitle: Work with external tools
section:
- path: /config/thirdparty/dsc/
title: PowerShell DSC usage
- path: /config/thirdparty/ansible/
title: Ansible
- path: /config/thirdparty/chef/
title: Chef
- path: /config/thirdparty/puppet/
title: Puppet
- path: /config/thirdparty/ambassador_pattern_linking/
title: (Obsolete) Link via an ambassador container
- sectiontitle: Security
section:
- path: /engine/security/security/
@@ -453,22 +485,6 @@ guides:
title: Swarm administration guide
- path: /engine/swarm/raft/
title: Raft consensus in swarm mode
- sectiontitle: Work with external tools
section:
- path: /engine/admin/dsc/
title: PowerShell DSC usage
- path: /engine/admin/ansible/
title: Using Ansible
- path: /engine/admin/chef/
title: Using Chef
- path: /engine/admin/puppet/
title: Using Puppet
- path: /engine/admin/multi-service_container/
title: Run multiple services in a container
- path: /engine/admin/runmetrics/
title: Runtime metrics
- path: /engine/admin/ambassador_pattern_linking/
title: Link via an ambassador container
- sectiontitle: Extend Docker
section:
- path: /engine/extend/
config/containers/live-restore.md
@@ -0,0 +1,82 @@
---
description: How to keep containers running when the daemon isn't available.
keywords: docker, upgrade, daemon, dockerd, live-restore, daemonless container
title: Keep containers alive during daemon downtime
redirect_from:
- /engine/admin/live-restore/
---

By default, when the Docker daemon terminates, it shuts down running containers.
Starting with Docker Engine 1.12, you can configure the daemon so that
containers remain running if the daemon becomes unavailable. This functionality
is called _live restore_. The live restore option helps reduce container
downtime due to daemon crashes, planned outages, or upgrades.

> **Note**: Live restore is not supported on Windows containers, but it does work
> for Linux containers running on Docker for Windows.

## Enable live restore

There are two ways to enable the live restore setting to keep containers alive
when the daemon becomes unavailable. **Only do one of the following**.

* Add the configuration to the daemon configuration file. On Linux, this
  defaults to `/etc/docker/daemon.json`. On Docker for Mac or Docker for Windows,
  select the Docker icon from the task bar, then click
  **Preferences** -> **Daemon** -> **Advanced**.

  - Use the following JSON to enable `live-restore`.

    ```json
    {
      "live-restore": true
    }
    ```

  - Restart the Docker daemon. On Linux, you can avoid a restart (and avoid any
    downtime for your containers) by reloading the Docker daemon. If you use
    `systemd`, then use the command `systemctl reload docker`. Otherwise, send a
    `SIGHUP` signal to the `dockerd` process.

* If you prefer, you can start the `dockerd` process manually with the
  `--live-restore` flag. This approach is not recommended because it does not
  set up the environment that `systemd` or another process manager would use
  when starting the Docker process. This can cause unexpected behavior.

## Live restore during upgrades

The live restore feature supports restoring containers to the daemon for
upgrades from one minor release to the next, such as when upgrading from Docker
1.12.1 to 1.12.2.

If you skip releases during an upgrade, the daemon may not restore its
connection to the containers. If the daemon can't restore the connection, it
cannot manage the running containers and you must stop them manually.

## Live restore upon restart

The live restore option only works to restore containers if the daemon options,
such as bridge IP addresses and graph driver, did not change. If any of these
daemon-level configuration options have changed, the live restore may not work
and you may need to manually stop the containers.

## Impact of live restore on running containers

If the daemon is down for a long time, running containers may fill up the FIFO
log the daemon normally reads. A full log blocks containers from logging more
data. The default buffer size is 64K. If the buffers fill, you must restart
the Docker daemon to flush them.

On Linux, you can modify the kernel's buffer size by changing
`/proc/sys/fs/pipe-max-size`. You cannot modify the buffer size on Docker for
Mac or Docker for Windows.

## Live restore and swarm mode

The live restore option only pertains to standalone containers, and not to swarm
services. Swarm services are managed by swarm managers. If swarm managers are
not available, swarm services continue to run on worker nodes but cannot be
managed until enough swarm managers are available to maintain a quorum.
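The daemon.json route above can be sketched as a shell session. This is a safe-to-run sketch: it edits a scratch copy rather than the real `/etc/docker/daemon.json`, and the `log-driver` entry is just a stand-in for whatever settings your daemon already has.

```shell
# Scratch copy standing in for /etc/docker/daemon.json
cfg="$(mktemp)"
printf '{\n  "log-driver": "json-file"\n}\n' > "$cfg"

# Merge in "live-restore": true without clobbering existing keys
python3 - "$cfg" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    conf = json.load(f)
conf["live-restore"] = True
with open(path, "w") as f:
    json.dump(conf, f, indent=2)
EOF

cat "$cfg"
# On a real host you would now run: sudo systemctl reload docker
```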
@@ -3,6 +3,7 @@ description: Describes how to use the Amazon CloudWatch Logs logging driver.
keywords: AWS, Amazon, CloudWatch, logging, driver
redirect_from:
- /engine/reference/logging/awslogs/
- /engine/admin/logging/awslogs/
title: Amazon CloudWatch Logs logging driver
---
@@ -5,6 +5,7 @@ redirect_from:
- /engine/reference/logging/overview/
- /engine/reference/logging/
- /engine/admin/reference/logging/
- /engine/admin/logging/overview/
title: Configure logging drivers
---
@@ -2,6 +2,8 @@
description: Describes how to use the etwlogs logging driver.
keywords: ETW, docker, logging, driver
title: ETW logging driver
redirect_from:
- /engine/admin/logging/etwlogs/
---

The ETW logging driver forwards container logs as ETW events.
@@ -4,6 +4,7 @@ keywords: Fluentd, docker, logging, driver
redirect_from:
- /engine/reference/logging/fluentd/
- /reference/logging/fluentd/
- /engine/admin/logging/fluentd/
title: Fluentd logging driver
---
@@ -2,6 +2,8 @@
description: Describes how to use the Google Cloud Logging driver.
keywords: gcplogs, google, docker, logging, driver
title: Google Cloud Logging driver
redirect_from:
- /engine/admin/logging/gcplogs/
---

The Google Cloud Logging driver sends container logs to
@@ -3,6 +3,7 @@ description: Describes how to use the Graylog Extended Format logging driver.
keywords: graylog, gelf, logging, driver
redirect_from:
- /engine/reference/logging/gelf/
- /engine/admin/logging/gelf/
title: Graylog Extended Format logging driver
---
@@ -4,6 +4,7 @@ keywords: docker, logging
title: View logs for a container or service
redirect_from:
- /engine/admin/logging/
- /engine/admin/logging/view_container_logs/
---

The `docker logs` command shows information logged by a running container. The
@@ -24,7 +25,7 @@ error messages. By default, `docker logs` shows the command's `STDOUT` and
In some cases, `docker logs` may not show useful information unless you take
additional steps.

- If you use a [logging driver](overview.md) which sends logs to a file, an
- If you use a [logging driver](configure.md) which sends logs to a file, an
external host, a database, or another logging back-end, `docker logs` may not
show useful information.
@@ -49,5 +50,5 @@ its errors to `/proc/self/fd/2` (which is `STDERR`). See the

## Next steps

- Learn about using custom [logging drivers](overview.md).
- Learn about writing a [Dockerfile](/engine/reference/builder.md).
- Configure [logging drivers](configure.md).
- Write a [Dockerfile](/engine/reference/builder.md).
@@ -3,6 +3,7 @@ description: Describes how to use the Journald logging driver.
keywords: Journald, docker, logging, driver
redirect_from:
- /engine/reference/logging/journald/
- /engine/admin/logging/journald/
title: Journald logging driver
---
@@ -3,6 +3,7 @@ description: Describes how to use the json-file logging driver.
keywords: json-file, docker, logging, driver
redirect_from:
- /engine/reference/logging/json-file/
- /engine/admin/logging/json-file/
title: JSON File logging driver
---
@@ -3,7 +3,8 @@ description: Describes how to format tags for.
keywords: docker, logging, driver, syslog, Fluentd, gelf, journald
redirect_from:
- /engine/reference/logging/log_tags/
title: Log tags for logging driver
- /engine/admin/logging/log_tags/
title: Customize log driver output
---

The `tag` log option specifies how to format a tag that identifies the
@@ -2,6 +2,8 @@
title: Logentries logging driver
description: Describes how to use the logentries logging driver.
keywords: logentries, docker, logging, driver
redirect_from:
- /engine/admin/logging/logentries/
---

The `logentries` logging driver sends container logs to the
@@ -2,10 +2,12 @@
description: How to use logging driver plugins
title: Use a logging driver plugin
keywords: logging, driver, plugins, monitoring
redirect_from:
- /engine/admin/logging/plugins/
---

Docker logging plugins allow you to extend and customize Docker's logging
capabilities beyond those of the [built-in logging drivers](overview.md).
capabilities beyond those of the [built-in logging drivers](configure.md).
A logging service provider can
[implement their own plugins](/engine/extend/plugins_logging.md) and make them
available on Docker Hub, Docker Store, or a private registry. This topic shows
@@ -24,7 +26,7 @@ a specific plugin using `docker inspect`.
After the plugin is installed, you can configure the Docker daemon to use it as
the default by setting the plugin's name as the value of the `logging-driver`
key in the `daemon.json`, as detailed in the
[logging overview](overview.md#configure-the-default-logging-driver). If the
[logging overview](configure.md#configure-the-default-logging-driver). If the
logging driver supports additional options, you can set those as the values of
the `log-opts` array in the same file.
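A `daemon.json` using a plugin as the default driver would follow the shape below. This is a hypothetical sketch: `my-logging-plugin` and its `endpoint` option are placeholders, not a real plugin; check the plugin's own documentation for its `log-opts` keys. The sketch writes to a scratch file and validates it, which is worth doing before copying anything to `/etc/docker/daemon.json`.

```shell
# Scratch file standing in for /etc/docker/daemon.json
plugin_cfg="$(mktemp)"
cat > "$plugin_cfg" <<'EOF'
{
  "log-driver": "my-logging-plugin",
  "log-opts": {
    "endpoint": "https://logs.example.com"
  }
}
EOF

# Validate that the file is well-formed JSON (a syntax error here
# prevents the daemon from starting)
python3 -m json.tool "$plugin_cfg"
```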
@@ -33,7 +35,7 @@ the `log-opts` array in the same file.
After the plugin is installed, you can configure a container to use the plugin
as its logging driver by specifying the `--log-driver` flag to `docker run`, as
detailed in the
[logging overview](overview.md#configure-the-logging-driver-for-a-container).
[logging overview](configure.md#configure-the-logging-driver-for-a-container).
If the logging driver supports additional options, you can specify them using
one or more `--log-opt` flags with the option name as the key and the option
value as the value.
@@ -3,6 +3,7 @@ description: Describes how to use the Splunk logging driver.
keywords: splunk, docker, logging, driver
redirect_from:
- /engine/reference/logging/splunk/
- /engine/admin/logging/splunk/
title: Splunk logging driver
---
@@ -3,6 +3,7 @@ description: Describes how to use the syslog logging driver.
keywords: syslog, docker, logging, driver
redirect_from:
- /engine/reference/logging/syslog/
- /engine/admin/logging/syslog/
title: Syslog logging driver
---
@@ -4,6 +4,7 @@ keywords: docker, supervisor, process management
redirect_from:
- /engine/articles/using_supervisord/
- /engine/admin/using_supervisord/
- /engine/admin/multi-service_container/
title: Run multiple services in a container
---
@@ -1,6 +1,7 @@
---
redirect_from:
- "/engine/articles/systemd/"
- /engine/articles/systemd/
- /engine/admin/resource_constraints/
title: "Limit a container's resources"
description: "Limiting the system resources a container can use"
keywords: "docker, daemon, configuration"
@@ -77,10 +78,10 @@ Most of these options take a positive integer, followed by a suffix of `b`, `k`,
| Option | Description |
|:-----------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `-m` or `--memory=` | The maximum amount of memory the container can use. If you set this option, the minimum allowed value is `4m` (4 megabyte). |
| `--memory-swap`* | The amount of memory this container is allowed to swap to disk. See [`--memory-swap` details](resource_constraints.md#--memory-swap-details). |
| `--memory-swappiness` | By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set `--memory-swappiness` to a value between 0 and 100, to tune this percentage. See [`--memory-swappiness` details](resource_constraints.md#--memory-swappiness-details). |
| `--memory-swap`* | The amount of memory this container is allowed to swap to disk. See [`--memory-swap` details](#--memory-swap-details). |
| `--memory-swappiness` | By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set `--memory-swappiness` to a value between 0 and 100, to tune this percentage. See [`--memory-swappiness` details](#--memory-swappiness-details). |
| `--memory-reservation` | Allows you to specify a soft limit smaller than `--memory` which is activated when Docker detects contention or low memory on the host machine. If you use `--memory-reservation`, it must be set lower than `--memory` for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn't exceed the limit. |
| `--kernel-memory` | The maximum amount of kernel memory the container can use. The minimum allowed value is `4m`. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See [`--kernel-memory` details](resource_constraints.md#--kernel-memory-details). |
| `--kernel-memory` | The maximum amount of kernel memory the container can use. The minimum allowed value is `4m`. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See [`--kernel-memory` details](#--kernel-memory-details). |
| `--oom-kill-disable` | By default, if an out-of-memory (OOM) error occurs, the kernel kills processes in a container. To change this behavior, use the `--oom-kill-disable` option. Only disable the OOM killer on containers where you have also set the `-m/--memory` option. If the `-m` flag is not set, the host can run out of memory and the kernel may need to kill the host system's processes to free memory. |

For more information about cgroups and memory in general, see the documentation
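When `--memory` and `--memory-swap` are both set, the container's swap allowance is the difference between the two values. A small arithmetic sketch of that relationship (values in megabytes; the flag combination shown in the comment is the assumed invocation):

```shell
# e.g. docker run -m 300m --memory-swap 1g ...
memory_mb=300
memory_swap_mb=1024

# swap allowance = total (memory + swap) limit - RAM limit
swap_mb=$((memory_swap_mb - memory_mb))
echo "RAM limit: ${memory_mb}m, swap allowance: ${swap_mb}m"
# prints: RAM limit: 300m, swap allowance: 724m
```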
@@ -230,7 +231,7 @@ for realtime tasks per runtime period. For instance, with the default period of
containers using the realtime scheduler can run for 950000 microseconds for every
1000000-microsecond period, leaving at least 50000 microseconds available for
non-realtime tasks. To make this configuration permanent on systems which use
`systemd`, see [Control and configure Docker with systemd](systemd.md).
`systemd`, see [Control and configure Docker with systemd](/config/daemon/systemd.md).

#### Configure individual containers
@@ -4,6 +4,7 @@ keywords: docker, metrics, CPU, memory, disk, IO, run, runtime, stats
redirect_from:
- /engine/articles/run_metrics
- /engine/articles/runmetrics
- /engine/admin/runmetrics/
title: Runtime metrics
---
@@ -52,7 +53,7 @@ To figure out where your control groups are mounted, you can run:
$ grep cgroup /proc/mounts
```

## Enumerating cgroups
### Enumerate cgroups

You can look into `/proc/cgroups` to see the different control group subsystems
known to the system, the hierarchy they belong to, and how many groups they contain.
@@ -63,7 +64,7 @@ the hierarchy mountpoint. `/` means the process has not been assigned to a
group, while `/lxc/pumpkin` indicates that the process is a member of a
container named `pumpkin`.

## Finding the cgroup for a given container
### Find the cgroup for a given container

For each container, one cgroup is created in each hierarchy. On
older systems with older versions of the LXC userland tools, the name of
@@ -79,12 +80,12 @@ look it up with `docker inspect` or `docker ps --no-trunc`.
Putting everything together to look at the memory metrics for a Docker
container, take a look at `/sys/fs/cgroup/memory/docker/<longid>/`.

## Metrics from cgroups: memory, CPU, block I/O
### Metrics from cgroups: memory, CPU, block I/O

For each subsystem (memory, CPU, and block I/O), one or
more pseudo-files exist and contain statistics.

### Memory metrics: `memory.stat`
#### Memory metrics: `memory.stat`

Memory metrics are found in the "memory" cgroup. The memory
control group adds a little overhead, because it does very fine-grained
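The `memory.stat` pseudo-file is plain whitespace-separated text, so standard tools can pull out individual counters. A sketch with fabricated sample values (on a real host the file lives under `/sys/fs/cgroup/memory/docker/<longid>/memory.stat`):

```shell
# Fabricated sample standing in for a real memory.stat
stat_file="$(mktemp)"
cat > "$stat_file" <<'EOF'
cache 11492564992
rss 1930993664
mapped_file 306728960
pgfault 1154117
EOF

# Extract just the cache and rss counters (bytes)
awk '$1 == "cache" || $1 == "rss" { printf "%s=%s\n", $1, $2 }' "$stat_file"
# prints:
# cache=11492564992
# rss=1930993664
```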
@@ -179,7 +180,7 @@ jiffies". There are `USER_HZ` *"jiffies"* per second, and on x86 systems,
[tickless kernels]( http://lwn.net/Articles/549580/) have made the number of
ticks irrelevant.

### Block I/O metrics
#### Block I/O metrics

Block I/O is accounted in the `blkio` controller.
Different metrics are scattered across different files. While you can
@@ -196,7 +197,7 @@ Metric | Description
**blkio.io_serviced** | The number of I/O operations performed, regardless of their size. It also has 4 counters per device.
**blkio.io_queued** | Indicates the number of I/O operations currently queued for this cgroup. In other words, if the cgroup isn't doing any I/O, this is zero. The opposite is not true. In other words, if there is no I/O queued, it does not mean that the cgroup is idle (I/O-wise). It could be doing purely synchronous reads on an otherwise quiescent device, which can therefore handle them immediately, without queuing. Also, while it is helpful to figure out which cgroup is putting stress on the I/O subsystem, keep in mind that it is a relative quantity. Even if a process group does not perform more I/O, its queue size can increase just because the device load increases because of other devices.

## Network metrics
### Network metrics

Network metrics are not exposed directly by control groups. There is a
good explanation for that: network interfaces exist within the context
@@ -213,7 +214,7 @@ metrics with control groups.

Instead we can gather network metrics from other sources:

### IPtables
#### IPtables

IPtables (or rather, the netfilter framework for which iptables is just
an interface) can do some serious accounting.
@@ -251,7 +252,7 @@ Then, you need to check those counters on a regular basis. If you
happen to use `collectd`, there is a [nice plugin](https://collectd.org/wiki/index.php/Table_of_Plugins)
to automate iptables counters collection.

### Interface-level counters
#### Interface-level counters

Since each container has a virtual Ethernet interface, you might want to check
directly the TX and RX counters of this interface. Each container is associated
@@ -344,7 +345,7 @@ you close that file descriptor).
The right approach would be to keep track of the first PID of each
container, and re-open the namespace pseudo-file each time.

## Collecting metrics when a container exits
## Collect metrics when a container exits

Sometimes, you do not care about real time metric collection, but when a
container exits, you want to know how much CPU, memory, etc. it has
@@ -4,6 +4,7 @@ keywords: containers, restart, policies, automation, administration
redirect_from:
- /engine/articles/host_integration/
- /engine/admin/host_integration/
- /engine/admin/start-containers-automatically/
title: Start containers automatically
---
@@ -4,6 +4,8 @@ keywords: docker, daemon, configuration, troubleshooting
redirect_from:
- /engine/articles/configuring/
- /engine/admin/configuring/
- /engine/admin/
- /engine/userguide/
title: Configure and troubleshoot the Docker daemon
---
@@ -4,7 +4,8 @@ keywords: docker, daemon, systemd, configuration
redirect_from:
- /engine/articles/systemd/
- /articles/systemd/
title: Control and configure Docker with systemd
- /engine/admin/systemd/
title: Control Docker with systemd
---

Many Linux distributions use systemd to start the Docker daemon. This document
@@ -2,27 +2,19 @@
description: CLI and log output formatting reference
keywords: format, formatting, output, templates, log
title: Format command and log output
redirect_from:
- /engine/admin/formatting/
---

Docker uses [Go templates](https://golang.org/pkg/text/template/) which allow users to manipulate the output format
of certain commands and log drivers. Each command a driver provides has a detailed
list of elements they support in their templates:

- [Docker Images formatting](../reference/commandline/images.md#formatting)
- [Docker Inspect formatting](../reference/commandline/inspect.md#examples)
- [Docker Log Tag formatting](logging/log_tags.md)
- [Docker Network Inspect formatting](../reference/commandline/network_inspect.md)
- [Docker PS formatting](../reference/commandline/ps.md#formatting)
- [Docker Stats formatting](../reference/commandline/stats.md#formatting)
- [Docker Volume Inspect formatting](../reference/commandline/volume_inspect.md)
- [Docker Version formatting](../reference/commandline/version.md#examples)

## Template functions
Docker uses [Go templates](https://golang.org/pkg/text/template/) which you can
use to manipulate the output format of certain commands and log drivers.

Docker provides a set of basic functions to manipulate template elements.
This is the complete list of the available functions with examples:
All of these examples use the `docker inspect` command, but many other CLI
commands have a `--format` flag, and many of the CLI command references
include examples of customizing the output format.
### `join`
## join

`join` concatenates a list of strings to create a single string.
It puts a separator between each element in the list.
@@ -31,7 +23,7 @@ It puts a separator between each element in the list.
$ docker inspect --format '{{join .Args " , "}}' container
{% endraw %}

### `json`
## json

`json` encodes an element as a json string.
@@ -39,7 +31,7 @@ It puts a separator between each element in the list.
$ docker inspect --format '{{json .Mounts}}' container
{% endraw %}

### `lower`
## lower

`lower` transforms a string into its lowercase representation.
@@ -47,7 +39,7 @@ It puts a separator between each element in the list.
|
||||
$ docker inspect --format "{{lower .Name}}" container
|
||||
{% endraw %}
|
||||
|
||||
### `split`
|
||||
## split
|
||||
|
||||
`split` slices a string into a list of strings separated by a separator.
|
||||
|
||||
@@ -55,7 +47,7 @@ It puts a separator between each element in the list.
|
||||
$ docker inspect --format '{{split (join .Names "/") "/"}}' container
|
||||
{% endraw %}
|
||||
|
||||
### `title`
|
||||
## title
|
||||
|
||||
`title` capitalizes the first character of a string.
|
||||
|
||||
@@ -63,7 +55,7 @@ It puts a separator between each element in the list.
|
||||
$ docker inspect --format "{{title .Name}}" container
|
||||
{% endraw %}
|
||||
|
||||
### `upper`
|
||||
## upper
|
||||
|
||||
`upper` transforms a string into its uppercase representation.
|
||||
|
||||
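Since these helpers ride on Go's `text/template` engine, their behavior is easy to reproduce outside Docker. The sketch below is illustrative only: it wires comparable `join`, `lower`, `upper`, and `split` helpers into a template via a `FuncMap`, using made-up container data. Docker's real helper set lives in the CLI source, not here.

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// render executes a --format style template string against arbitrary data,
// with a FuncMap providing helpers comparable to Docker's join/lower/upper/split.
func render(format string, data interface{}) string {
	funcs := template.FuncMap{
		"join":  strings.Join,
		"lower": strings.ToLower,
		"upper": strings.ToUpper,
		"split": strings.Split,
	}
	tmpl := template.Must(template.New("format").Funcs(funcs).Parse(format))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Invented stand-in for the fields `docker inspect` would expose.
	data := map[string]interface{}{
		"Args": []string{"redis-server", "--appendonly", "yes"},
		"Name": "/Trusting_Northcutt",
	}
	fmt.Println(render(`{{join .Args " , "}}`, data)) // redis-server , --appendonly , yes
	fmt.Println(render(`{{lower .Name}}`, data))      // /trusting_northcutt
}
```

The same FuncMap mechanism is how any Go program can expose extra functions to a template, which is why the Docker helpers feel like natural extensions of the base template syntax.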
@@ -2,6 +2,8 @@
description: Description of labels, which are used to manage metadata on Docker objects.
keywords: Usage, user guide, labels, metadata, docker, documentation, examples, annotating
title: Docker object labels
redirect_from:
- /engine/userguide/labels-custom-metadata/
---

Labels are a mechanism for applying metadata to Docker objects, including:
@@ -75,35 +77,34 @@ Labels on images, containers, local daemons, volumes, and networks are static for
the lifetime of the object. To change these labels you must recreate the object.
Labels on swarm nodes and services can be updated dynamically.

- Images and containers
  - [Adding labels to images](../reference/builder.md#label)
  - [Overriding a container's labels at runtime](../reference/commandline/run.md#set-metadata-on-container--l---label---label-file)
  - [Inspecting labels on images or containers](../reference/commandline/inspect.md)
  - [Filtering images by label](../reference/commandline/inspect.md#filtering)
  - [Filtering containers by label](../reference/commandline/ps.md#filtering)
  - [Adding labels to images](/engine/reference/builder.md#label)
  - [Overriding a container's labels at runtime](/engine/reference/commandline/run.md#set-metadata-on-container--l---label---label-file)
  - [Inspecting labels on images or containers](/engine/reference/commandline/inspect.md)
  - [Filtering images by label](/engine/reference/commandline/inspect.md#filtering)
  - [Filtering containers by label](/engine/reference/commandline/ps.md#filtering)

- Local Docker daemons
  - [Adding labels to a Docker daemon at runtime](../reference/commandline/dockerd.md)
  - [Inspecting a Docker daemon's labels](../reference/commandline/info.md)
  - [Adding labels to a Docker daemon at runtime](/engine/reference/commandline/dockerd.md)
  - [Inspecting a Docker daemon's labels](/engine/reference/commandline/info.md)

- Volumes
  - [Adding labels to volumes](../reference/commandline/volume_create.md)
  - [Inspecting a volume's labels](../reference/commandline/volume_inspect.md)
  - [Filtering volumes by label](../reference/commandline/volume_ls.md#filtering)
  - [Adding labels to volumes](/engine/reference/commandline/volume_create.md)
  - [Inspecting a volume's labels](/engine/reference/commandline/volume_inspect.md)
  - [Filtering volumes by label](/engine/reference/commandline/volume_ls.md#filtering)

- Networks
  - [Adding labels to a network](../reference/commandline/network_create.md)
  - [Inspecting a network's labels](../reference/commandline/network_inspect.md)
  - [Filtering networks by label](../reference/commandline/network_ls.md#filtering)
  - [Adding labels to a network](/engine/reference/commandline/network_create.md)
  - [Inspecting a network's labels](/engine/reference/commandline/network_inspect.md)
  - [Filtering networks by label](/engine/reference/commandline/network_ls.md#filtering)

- Swarm nodes
  - [Adding or updating a swarm node's labels](../reference/commandline/node_update.md#add-label-metadata-to-a-node)
  - [Inspecting a swarm node's labels](../reference/commandline/node_inspect.md)
  - [Filtering swarm nodes by label](../reference/commandline/node_ls.md#filtering)
  - [Adding or updating a swarm node's labels](/engine/reference/commandline/node_update.md#add-label-metadata-to-a-node)
  - [Inspecting a swarm node's labels](/engine/reference/commandline/node_inspect.md)
  - [Filtering swarm nodes by label](/engine/reference/commandline/node_ls.md#filtering)

- Swarm services
  - [Adding labels when creating a swarm service](../reference/commandline/service_create.md#set-metadata-on-a-service-l-label)
  - [Updating a swarm service's labels](../reference/commandline/service_update.md)
  - [Inspecting a swarm service's labels](../reference/commandline/service_inspect.md)
  - [Filtering swarm services by label](../reference/commandline/service_ls.md#filtering)
  - [Adding labels when creating a swarm service](/engine/reference/commandline/service_create.md#set-metadata-on-a-service-l-label)
  - [Updating a swarm service's labels](/engine/reference/commandline/service_update.md)
  - [Inspecting a swarm service's labels](/engine/reference/commandline/service_inspect.md)
  - [Filtering swarm services by label](/engine/reference/commandline/service_ls.md#filtering)
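For custom keys like those managed through the commands linked above, the docs recommend reverse-DNS namespacing (for example, `com.example.release-date`). As a hedged illustration, a quick check for that convention might look like the following; the rule encoded here is a simplification for demonstration, not Docker's actual validation logic.

```go
package main

import (
	"fmt"
	"regexp"
)

// labelKey approximates the reverse-DNS convention for custom label keys:
// lowercase dot-separated segments, hyphens allowed after the first segment.
// This is a simplified sketch, not what the Docker daemon enforces.
var labelKey = regexp.MustCompile(`^[a-z0-9]+(\.[a-z0-9-]+)+$`)

func isNamespacedKey(key string) bool {
	return labelKey.MatchString(key)
}

func main() {
	for _, k := range []string{"com.example.release-date", "Release Date"} {
		fmt.Printf("%-28s namespaced: %v\n", k, isNamespacedKey(k))
	}
}
```

A check like this is handy in CI pipelines that lint Dockerfiles or Compose files before images are pushed.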
@@ -2,6 +2,8 @@
description: Pruning unused objects
keywords: pruning, prune, images, volumes, containers, networks, disk, administration, garbage collection
title: Prune unused Docker objects
redirect_from:
- /engine/admin/pruning/
---

Docker takes a conservative approach to cleaning up unused objects (often
@@ -3,9 +3,36 @@ description: Using the Ambassador pattern to abstract (network) services
keywords: Examples, Usage, links, docker, documentation, examples, names, name, container naming
redirect_from:
- /engine/articles/ambassador_pattern_linking/
title: Link via an ambassador container
- /engine/admin/ambassador_pattern_linking/
title: (Obsolete) Link via an ambassador container
noratings: true
---

This content is out of date. Docker now includes better ways to
manage multiple services together, as well as a mix of containerized and
uncontainerized services. Consider using one or more of the following:

- [User-defined networks](/engine/userguide/networking.md#user-defined-networks)
  allow you to connect services together, including managing DNS resolution
  among them.

- [Overlay networks](/engine/userguide/networking/overlay-security-model.md)
  allow containers running on different Docker hosts to communicate in a
  seamless, encapsulated way.

- [Configs](/engine/swarm/configs.md) allow you to plug configuration details
  into swarm service containers at runtime instead of baking configuration
  details into your Docker images. This allows you to change configuration
  details, such as IP addresses to reach services external to Docker, on the fly.

- [Stacks](https://docs.docker.com/get-started/part5/) allow you to group
  multiple swarm services together, including defining networks, storage, and
  dependency relationships among the services.

Consider using one or more of the above solutions rather than the content below.

## Obsolete: Ambassador linking model

Rather than hardcoding network links between a service consumer and
provider, Docker encourages service portability, for example instead of:

@@ -30,7 +57,7 @@ different docker host from the consumer.
Using the `svendowideit/ambassador` container, the link wiring is
controlled entirely from the `docker run` parameters.

## Two host example
### Two host example

Start actual Redis server on one Docker host

@@ -54,7 +81,7 @@ ambassador.
redis 172.17.0.160:6379> ping
PONG

## How it works
### How it works

The following example shows what the `svendowideit/ambassador` container
does automatically (with a tiny amount of `sed`)
@@ -119,7 +146,7 @@ And get the `redis-cli` image so we can talk over the ambassador bridge.
redis 172.17.0.160:6379> ping
PONG

## The svendowideit/ambassador Dockerfile
### The svendowideit/ambassador Dockerfile

The `svendowideit/ambassador` image is based on the `alpine:3.2` image with
`socat` installed. When you start the container, it uses a small `sed`
@@ -2,6 +2,8 @@
description: Installation and using Docker via Ansible
keywords: ansible, installation, usage, docker, documentation
title: Use Ansible
redirect_from:
- /engine/admin/ansible/
---

Docker no longer maintains specific documentation about using Ansible from within
@@ -3,6 +3,7 @@ description: Installation and using Docker via Chef
keywords: chef, installation, usage, docker, documentation
redirect_from:
- /engine/articles/chef/
- /engine/admin/chef/
title: Use Chef
---

@@ -3,6 +3,7 @@ description: Using DSC to configure a new Docker host
keywords: powershell, dsc, installation, usage, docker, documentation
redirect_from:
- /engine/articles/dsc/
- /engine/admin/dsc/
title: Use PowerShell DSC
---
@@ -2,6 +2,8 @@
description: Collecting Docker metrics with Prometheus
keywords: prometheus, metrics
title: Collect Docker metrics with Prometheus
redirect_from:
- /engine/admin/prometheus/
---

[Prometheus](https://prometheus.io/) is an open-source systems monitoring and
@@ -3,6 +3,7 @@ description: Installing and using Puppet
keywords: puppet, installation, usage, docker, documentation
redirect_from:
- /engine/articles/puppet/
- /engine/admin/puppet/
title: Use Puppet
---

@@ -3,6 +3,7 @@ description: How to create base images
keywords: images, base image, examples
redirect_from:
- /engine/articles/baseimages/
- /engine/userguide/eng-image/baseimages/
title: Create a base image
---

@@ -87,7 +88,7 @@ If you want to test it out, you can clone [the image repo](https://github.com/do

There are lots more resources available to help you write your `Dockerfile`.

* There's a [complete guide to all the instructions](../../reference/builder.md) available for use in a `Dockerfile` in the reference section.
* There's a [complete guide to all the instructions](/engine/reference/builder.md) available for use in a `Dockerfile` in the reference section.
* To help you write a clear, readable, maintainable `Dockerfile`, we've also
  written a [`Dockerfile` best practices guide](dockerfile_best-practices.md).
* If your goal is to create a new Official Repository, be sure to read up on Docker's [Official Repositories](/docker-hub/official_repos/).
@@ -6,6 +6,7 @@ redirect_from:
- /engine/articles/dockerfile_best-practices/
- /docker-cloud/getting-started/intermediate/optimize-dockerfiles/
- /docker-cloud/tutorials/optimize-dockerfiles/
- /engine/userguide/eng-image/dockerfile_best-practices/
title: Best practices for writing Dockerfiles
---

@@ -13,7 +14,7 @@ Docker can build images automatically by reading the instructions from a
`Dockerfile`, a text file that contains all the commands, in order, needed to
build a given image. `Dockerfile`s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
[Dockerfile Reference](../../reference/builder.md) page. If
[Dockerfile Reference](/engine/reference/builder.md) page. If
you’re new to writing `Dockerfile`s, you should start there.

This document covers the best practices and methods recommended by Docker,
@@ -22,7 +23,7 @@ these practices and recommendations in action, check out the Dockerfile for
[buildpack-deps](https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile).

> **Note**: for more detailed explanations of any of the Dockerfile commands
>mentioned here, visit the [Dockerfile Reference](../../reference/builder.md) page.
>mentioned here, visit the [Dockerfile Reference](/engine/reference/builder.md) page.

## General guidelines and recommendations

@@ -57,14 +58,14 @@ Sending build context to Docker daemon 187.8MB
To exclude files which are not relevant to the build, without restructuring your
source repository, use a `.dockerignore` file. This file supports
exclusion patterns similar to `.gitignore` files. For information on creating
one, see the [.dockerignore file](../../reference/builder.md#dockerignore-file).
one, see the [.dockerignore file](/engine/reference/builder.md#dockerignore-file).
In addition to using a `.dockerignore` file, check out the information below
on [multi-stage builds](#use-multi-stage-builds).

### Use multi-stage builds

If you use Docker 17.05 or higher, you can use
[multi-stage builds](/engine/userguide/eng-image/multistage-build.md) to
[multi-stage builds](multistage-build.md) to
drastically reduce the size of your final image, without the need to
jump through hoops to reduce the number of intermediate layers or remove
intermediate files during the build.
@@ -127,7 +128,7 @@ the web application, database, and an in-memory cache in a decoupled manner.
You may have heard that there should be "one process per container". While this
mantra has good intentions, it is not necessarily true that there should be only
one operating system process per container. In addition to the fact that
containers can now be [spawned with an init process](/engine/reference/run/#/specifying-an-init-process),
containers can now be [spawned with an init process](/engine/reference/run.md#specifying-an-init-process),
some programs might spawn additional processes of their own accord. For
instance, [Celery](http://www.celeryproject.org/) can spawn multiple worker
processes, or [Apache](https://httpd.apache.org/) might create a process per
@@ -215,7 +216,7 @@ These recommendations help you to write an efficient and maintainable

### FROM

[Dockerfile reference for the FROM instruction](../../reference/builder.md#from)
[Dockerfile reference for the FROM instruction](/engine/reference/builder.md#from)

Whenever possible, use current Official Repositories as the basis for your
image. We recommend the [Alpine image](https://hub.docker.com/_/alpine/)
@@ -224,7 +225,7 @@ while still being a full distribution.

### LABEL

[Understanding object labels](../labels-custom-metadata.md)
[Understanding object labels](/config/labels-custom-metadata.md)

You can add labels to your image to help organize images by project, record
licensing information, to aid in automation, or for other reasons. For each
@@ -264,15 +265,15 @@ LABEL vendor=ACME\ Incorporated \
com.example.release-date="2015-02-12"
```

See [Understanding object labels](/engine/userguide/labels-custom-metadata.md)
See [Understanding object labels](/config/labels-custom-metadata.md)
for guidelines about acceptable label keys and values. For information about
querying labels, refer to the items related to filtering in [Managing labels on
objects](../labels-custom-metadata.md#managing-labels-on-objects). See also
objects](/config/labels-custom-metadata.md#managing-labels-on-objects). See also
[LABEL](/engine/reference/builder/#label) in the Dockerfile reference.

### RUN

[Dockerfile reference for the RUN instruction](../../reference/builder.md#run)
[Dockerfile reference for the RUN instruction](/engine/reference/builder.md#run)

As always, to make your `Dockerfile` more readable, understandable, and
maintainable, split long or complex `RUN` statements on multiple lines separated
@@ -286,7 +287,7 @@ out for.

You should avoid `RUN apt-get upgrade` or `dist-upgrade`, as many of the
“essential” packages from the parent images can't upgrade inside an
[unprivileged container](/engine/reference/run/#security-configuration).
[unprivileged container](/engine/reference/run.md#security-configuration).
If a package contained in the parent image is out-of-date, you should contact its
maintainers. If you know there’s a particular package, `foo`, that needs to be updated, use
`apt-get install -y foo` to update automatically.
@@ -402,7 +403,7 @@ RUN ["/bin/bash", "-c", "set -o pipefail && wget -O - https://some.site | wc -l

### CMD

[Dockerfile reference for the CMD instruction](../../reference/builder.md#cmd)
[Dockerfile reference for the CMD instruction](/engine/reference/builder.md#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the
@@ -416,13 +417,13 @@ and perl. For example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD [“php”, “-a”]`. Using this form means that when you execute something like
`docker run -it python`, you’ll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD [“param”, “param”]` in
conjunction with [`ENTRYPOINT`](../../reference/builder.md#entrypoint), unless
conjunction with [`ENTRYPOINT`](/engine/reference/builder.md#entrypoint), unless
you and your expected users are already quite familiar with how `ENTRYPOINT`
works.

### EXPOSE

[Dockerfile reference for the EXPOSE instruction](../../reference/builder.md#expose)
[Dockerfile reference for the EXPOSE instruction](/engine/reference/builder.md#expose)

The `EXPOSE` instruction indicates the ports on which a container listens
for connections. Consequently, you should use the common, traditional port for
@@ -437,7 +438,7 @@ the recipient container back to the source (ie, `MYSQL_PORT_3306_TCP`).

### ENV

[Dockerfile reference for the ENV instruction](../../reference/builder.md#env)
[Dockerfile reference for the ENV instruction](/engine/reference/builder.md#env)

To make new software easier to run, you can use `ENV` to update the
`PATH` environment variable for the software your container installs. For
@@ -462,8 +463,8 @@ auto-magically bump the version of the software in your container.

### ADD or COPY

[Dockerfile reference for the ADD instruction](../../reference/builder.md#add)<br/>
[Dockerfile reference for the COPY instruction](../../reference/builder.md#copy)
- [Dockerfile reference for the ADD instruction](/engine/reference/builder.md#add)
- [Dockerfile reference for the COPY instruction](/engine/reference/builder.md#copy)

Although `ADD` and `COPY` are functionally similar, generally speaking, `COPY`
is preferred. That’s because it’s more transparent than `ADD`. `COPY` only
@@ -508,7 +509,7 @@ auto-extraction capability, you should always use `COPY`.

### ENTRYPOINT

[Dockerfile reference for the ENTRYPOINT instruction](../../reference/builder.md#entrypoint)
[Dockerfile reference for the ENTRYPOINT instruction](/engine/reference/builder.md#entrypoint)

The best use for `ENTRYPOINT` is to set the image's main command, allowing that
image to be run as though it was that command (and then use `CMD` as the
@@ -558,7 +559,7 @@ exec "$@"
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This allows
> the application to receive any Unix signals sent to the container.
> See the [`ENTRYPOINT`](../../reference/builder.md#entrypoint)
> See the [`ENTRYPOINT`](/engine/reference/builder.md#entrypoint)
> help for more details.

@@ -584,7 +585,7 @@ Lastly, it could also be used to start a totally different tool, such as Bash:

### VOLUME

[Dockerfile reference for the VOLUME instruction](../../reference/builder.md#volume)
[Dockerfile reference for the VOLUME instruction](/engine/reference/builder.md#volume)

The `VOLUME` instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your docker container. You
@@ -593,7 +594,7 @@ parts of your image.

### USER

[Dockerfile reference for the USER instruction](../../reference/builder.md#user)
[Dockerfile reference for the USER instruction](/engine/reference/builder.md#user)

If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
@@ -622,7 +623,7 @@ and forth frequently.

### WORKDIR

[Dockerfile reference for the WORKDIR instruction](../../reference/builder.md#workdir)
[Dockerfile reference for the WORKDIR instruction](/engine/reference/builder.md#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating
@@ -631,7 +632,7 @@ troubleshoot, and maintain.

### ONBUILD

[Dockerfile reference for the ONBUILD instruction](../../reference/builder.md#onbuild)
[Dockerfile reference for the ONBUILD instruction](/engine/reference/builder.md#onbuild)

An `ONBUILD` command executes after the current `Dockerfile` build completes.
`ONBUILD` executes in any child image derived `FROM` the current image. Think
@@ -665,8 +666,7 @@ These Official Repositories have exemplary `Dockerfile`s:

## Additional resources:

* [Dockerfile Reference](../../reference/builder.md)
* [Dockerfile Reference](/engine/reference/builder.md)
* [More about Base Images](baseimages.md)
* [More about Automated Builds](/docker-hub/builds/)
* [Guidelines for Creating Official
  Repositories](/docker-hub/official_repos/)
* [Guidelines for Creating Official Repositories](/docker-hub/official_repos/)
develop/develop-images/image_management.md (new file, 55 lines)
@@ -0,0 +1,55 @@
---
redirect_from:
- /reference/api/hub_registry_spec/
- /userguide/image_management/
- /engine/userguide/eng-image/image_management/
description: Documentation for docker Registry and Registry API
keywords: docker, registry, api, hub
title: Manage images
---

The easiest way to make your images available for use by others inside or
outside your organization is to use a Docker registry, such as [Docker Hub](#docker-hub),
[Docker Trusted Registry](#docker-trusted-registry), or
by running your own [private registry](#docker-registry).

## Docker Hub

[Docker Hub](/docker-hub/) is a public registry managed by Docker, Inc. It
centralizes information about organizations, user accounts, and images. It
includes a web UI, authentication and authorization using organizations, CLI and
API access using commands such as `docker login`, `docker pull`, and `docker
push`, comments, stars, search, and more. Docker Hub is also integrated into
[Docker Store](/docker-store/), which is a marketplace that allows you to buy
and sell entitlements to non-free images.

## Docker Registry

The Docker Registry is a component of Docker's ecosystem. A registry is a
storage and content delivery system, holding named Docker images, available in
different tagged versions. For example, the image `distribution/registry`, with
tags `2.0` and `latest`. Users interact with a registry by using docker push and
pull commands such as `docker pull myregistry.com/stevvooe/batman:voice`.

Docker Hub is an instance of a Docker Registry.

## Docker Trusted Registry

[Docker Trusted Registry](/datacenter/dtr/2.1/guides/index.md) is part of
Docker Enterprise Edition, and is a private, secure Docker registry which
includes features such as image signing and content trust, role-based access
controls, and other Enterprise-grade features.

## Content Trust

When transferring data among networked systems, *trust* is a central concern. In
particular, when communicating over an untrusted medium such as the internet, it
is critical to ensure the integrity and publisher of all of the data a system
operates on. You use Docker to push and pull images (data) to a registry.
Content trust gives you the ability to both verify the integrity and the
publisher of all the data received from a registry over any channel.

See [Content trust](/engine/security/trust/index.md) for information about
configuring and using this feature on Docker clients.
@@ -1,7 +1,9 @@
---
description: Keeping your images small with multi-stage images
keywords: images, containers, best practices
keywords: images, containers, best practices, multi-stage, multistage
title: Use multi-stage builds
redirect_from:
- /engine/userguide/eng-image/multistage-build/
---

Multi-stage builds are a new feature requiring Docker 17.05 or higher on the
@@ -125,7 +127,7 @@ the `alpine:latest` image as its base. The `COPY --from=0` line copies just the
built artifact from the previous stage into this new stage. The Go SDK and any
intermediate artifacts are left behind, and not saved in the final image.

### Name your build stages
## Name your build stages

By default, the stages are not named, and you refer to them by their integer
number, starting with 0 for the first `FROM` instruction. However, you can
@@ -148,8 +150,35 @@ COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```

## Next steps
## Stop at a specific build stage

When you build your image, you don't necessarily need to build the entire
Dockerfile including every stage. You can specify a target build stage. The
following command assumes you are using the previous `Dockerfile` but stops at
the stage named `builder`:

```bash
$ docker build --target builder -t alexellis2/href-counter:latest .
```

A few scenarios where this might be very powerful are:

- Debugging a specific build stage
- Using a `debug` stage with all debugging symbols or tools enabled, and a
  lean `production` stage
- Using a `testing` stage in which your app gets populated with test data, but
  building for production using a different stage which uses real data

## Use an external image as a "stage"

When using multi-stage builds, you are not limited to copying from stages you
created earlier in your Dockerfile. You can use the `COPY --from` instruction to
copy from a separate image, either using the local image name, a tag available
locally or on a Docker registry, or a tag ID. The Docker client pulls the image
if necessary and copies the artifact from there. The syntax is:

```Dockerfile
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
```

- Check out the blog post
  [Builder pattern vs. Multi-stage builds in Docker](http://blog.alexellis.io/mutli-stage-docker-builds/)
  for full source code and a walk-through of these examples.
@@ -1,161 +0,0 @@
|
||||
---
|
||||
description: Resizing a Boot2Docker volume in VirtualBox with GParted
|
||||
keywords: boot2docker, volume, virtualbox
|
||||
published: false
|
||||
title: Resize a Boot2Docker volume
|
||||
---
|
||||
|
||||
# Getting "no space left on device" errors with Boot2Docker?

If you're using Boot2Docker with a large number of images, or the images you're
working with are very large, your pulls might start failing with "no space left
on device" errors when the Boot2Docker volume fills up. There are two solutions
you can try.

## Solution 1: Add the `DiskSize` property in the boot2docker profile

The `boot2docker` command reads its configuration from `$BOOT2DOCKER_PROFILE` if
set, or from `$BOOT2DOCKER_DIR/profile` or `$HOME/.boot2docker/profile` (on
Windows, this is `%USERPROFILE%/.boot2docker/profile`).

1.  To view the existing configuration, use the `boot2docker config` command:

        $ boot2docker config
        # boot2docker profile filename: /Users/mary/.boot2docker/profile
        Init = false
        Verbose = false
        Driver = "virtualbox"
        Clobber = true
        ForceUpgradeDownload = false
        SSH = "ssh"
        SSHGen = "ssh-keygen"
        SSHKey = "/Users/mary/.ssh/id_boot2docker"
        VM = "boot2docker-vm"
        Dir = "/Users/mary/.boot2docker"
        ISOURL = "https://api.github.com/repos/boot2docker/boot2docker/releases"
        ISO = "/Users/mary/.boot2docker/boot2docker.iso"
        DiskSize = 20000
        Memory = 2048
        CPUs = 8
        SSHPort = 2022
        DockerPort = 0
        HostIP = "192.168.59.3"
        DHCPIP = "192.168.59.99"
        NetMask = [255, 255, 255, 0]
        LowerIP = "192.168.59.103"
        UpperIP = "192.168.59.254"
        DHCPEnabled = true
        Serial = false
        SerialFile = "/Users/mary/.boot2docker/boot2docker-vm.sock"
        Waittime = 300
        Retries = 75

    The output shows you where `boot2docker` looks for the `profile` file and
    lists the settings that are in use.

2.  Initialize a default profile file to customize:

        $ boot2docker config > ~/.boot2docker/profile

3.  Add the following lines to `$HOME/.boot2docker/profile`:

        # Disk image size in MB
        DiskSize = 50000

4.  Run the following sequence of commands to restart Boot2Docker with the new
    settings:

        $ boot2docker poweroff
        $ boot2docker destroy
        $ boot2docker init
        $ boot2docker up
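Steps 2 and 3 can also be scripted. The following is a sketch only, assuming the
default profile location, that `DiskSize` is not already set in the file, and
that 50000 MB is an appropriate example value for your host:

```shell
# Append a larger DiskSize setting to the boot2docker profile.
# Assumptions: default profile path, no existing DiskSize entry.
profile="${BOOT2DOCKER_PROFILE:-$HOME/.boot2docker/profile}"
mkdir -p "$(dirname "$profile")"
printf '\n# Disk image size in MB\nDiskSize = 50000\n' >> "$profile"
grep 'DiskSize' "$profile"
```

As in step 4, the VM must be destroyed and re-initialized before the new size
takes effect.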
## Solution 2: Increase the size of the Boot2Docker volume

This solution increases the volume size by first cloning it, then resizing it
using a disk partitioning tool. We recommend
[GParted](https://sourceforge.net/projects/gparted/files/). The tool comes
as a bootable ISO, is a free download, and works well with VirtualBox.

1.  Stop Boot2Docker.

    Issue the command to stop the Boot2Docker VM on the command line:

        $ boot2docker stop

2.  Clone the VMDK image to a VDI image.

    Boot2Docker ships with a VMDK image, which can't be resized by VirtualBox's
    native tools. Instead, create a VDI volume and clone the VMDK volume to it.

3.  Using the command-line VirtualBox tools, clone the VMDK image to a VDI image:

        $ vboxmanage clonehd /full/path/to/boot2docker-hd.vmdk /full/path/to/<newVDIimage>.vdi --format VDI --variant Standard

4.  Resize the VDI volume.

    Choose a size appropriate for your needs. If you're spinning up a lot of
    containers, or your containers are particularly large, larger is better:

        $ vboxmanage modifyhd /full/path/to/<newVDIimage>.vdi --resize <size in MB>

5.  Download a disk partitioning tool ISO.

    To resize the volume, we use
    [GParted](https://sourceforge.net/projects/gparted/files/). Once you've
    downloaded the tool, add the ISO to the Boot2Docker VM IDE bus. You might
    need to create the bus before you can add the ISO.

    > **Note**: It's important that you choose a partitioning tool that is
    > available as an ISO so that the Boot2Docker VM can be booted with it.

    <table>
    <tr>
    <td><img src="/articles/b2d_volume_images/add_new_controller.png"><br><br></td>
    </tr>
    <tr>
    <td><img src="/articles/b2d_volume_images/add_cd.png"></td>
    </tr>
    </table>

6.  Add the new VDI image.

    In the settings for the Boot2Docker image in VirtualBox, remove the VMDK
    image from the SATA controller and add the VDI image.

    <img src="/articles/b2d_volume_images/add_volume.png">

7.  Verify the boot order.

    In the **System** settings for the Boot2Docker VM, make sure that **CD/DVD**
    is at the top of the **Boot Order** list.

    <img src="/articles/b2d_volume_images/boot_order.png">

8.  Boot to the disk partitioning ISO.

    Manually start the Boot2Docker VM in VirtualBox; the disk partitioning ISO
    should start up. Using GParted, choose the **GParted Live (default
    settings)** option. Choose the default keyboard, language, and XWindows
    settings, and the GParted tool starts up and displays the VDI volume you
    created. Right-click the VDI and choose **Resize/Move**.

    <img src="/articles/b2d_volume_images/gparted.png">

9.  Drag the slider representing the volume to the maximum available size.

10. Click **Resize/Move**, followed by **Apply**.

    <img src="/articles/b2d_volume_images/gparted2.png">

11. Quit GParted and shut down the VM.

12. Remove the GParted ISO from the IDE controller for the Boot2Docker VM in
    VirtualBox.

13. Start the Boot2Docker VM.

    Start the Boot2Docker VM manually in VirtualBox. The VM should log in
    automatically, but if it doesn't, the credentials are `docker`/`tcuser`.
    Use the `df -h` command to verify that your changes took effect.

    <img src="/articles/b2d_volume_images/verify.png">

You're done!

@@ -1,75 +0,0 @@
---
description: How to keep containers running when the daemon isn't available.
keywords: docker, upgrade, daemon, dockerd, live-restore, daemonless container
title: Keep containers alive during daemon downtime
---

By default, when the Docker daemon terminates, it shuts down running containers.
Starting with Docker Engine 1.12, you can configure the daemon so that
containers remain running if the daemon becomes unavailable. The live restore
option helps reduce container downtime due to daemon crashes, planned outages,
or upgrades.

> **Note**: Live restore is not supported on Windows containers, but it does
> work for Linux containers running on Docker for Windows.

## Enable the live restore option

There are two ways to enable the live restore setting to keep containers alive
when the daemon becomes unavailable:

* If the daemon is already running and you don't want to stop it, you can add
  the configuration to the daemon configuration file. For example, on a Linux
  system the default configuration file is `/etc/docker/daemon.json`.

  Use your favorite editor to enable the `live-restore` option in
  `daemon.json`:

  ```json
  {
    "live-restore": true
  }
  ```

  You need to send a `SIGHUP` signal to the daemon process for it to reload the
  configuration. For more information on how to configure the Docker daemon
  using `daemon.json`, see
  [daemon configuration file](../reference/commandline/dockerd.md#daemon-configuration-file).

* When you start the Docker daemon, pass the `--live-restore` flag:

  ```bash
  $ sudo dockerd --live-restore
  ```

## Live restore during upgrades

The live restore feature supports restoring containers to the daemon for
upgrades from one minor release to the next, for example from Docker Engine
1.12.1 to 1.12.2.

If you skip releases during an upgrade, the daemon may not restore its
connection to the containers. If the daemon can't restore the connection, it
ignores the running containers and you must manage them manually.

## Live restore upon restart

The live restore option only restores containers if the daemon restarts with
the same options it had before it stopped. For example, live restore may not
work if the daemon restarts with a different bridge IP or a different
graphdriver.

## Impact of live restore on running containers

A lengthy absence of the daemon can impact running containers. Container
processes write to FIFO logs for daemon consumption. If the daemon is
unavailable to consume the output, the buffer fills up and blocks further
writes to the log. A full log blocks the process until further space is
available. The default buffer size is typically 64K.

You must restart Docker to flush the buffers.

You can modify the kernel's buffer size by changing `/proc/sys/fs/pipe-max-size`.
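You can inspect the current limit before changing it; a minimal sketch, assuming
a Linux host with `/proc` mounted:

```shell
# Print the kernel's maximum pipe buffer size, in bytes (Linux only).
# Writing a larger value to this file (as root) raises the ceiling.
cat /proc/sys/fs/pipe-max-size
```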

## Live restore and swarm mode

The live restore option is not compatible with Docker Engine swarm mode. When
the Docker Engine runs in swarm mode, the orchestration feature manages tasks
and keeps containers running according to a service specification.
@@ -47,7 +47,7 @@ _Docker Engine_ is a client-server application with these major components:

* A command line interface (CLI) client (the `docker` command).

![Docker Engine Components Flow](article-img/engine-components-flow.png)
![Docker Engine Components Flow](images/engine-components-flow.png)

The CLI uses the Docker REST API to control or interact with the Docker daemon
through scripting or direct CLI commands. Many other Docker applications use the

@@ -107,7 +107,7 @@ run on the same system, or you can connect a Docker client to a remote Docker
daemon. The Docker client and daemon communicate using a REST API, over UNIX
sockets or a network interface.

![Docker Architecture Diagram](article-img/architecture.svg)
![Docker Architecture Diagram](images/architecture.svg)

### The Docker daemon
|
||||
|
Before Width: | Height: | Size: 183 KiB After Width: | Height: | Size: 183 KiB |
|
Before Width: | Height: | Size: 18 KiB After Width: | Height: | Size: 18 KiB |
@@ -1,15 +0,0 @@
---
published: false
---

Static files dir
================

Files you put in /static_files/ are copied to the web-visible /_static/

Be careful not to override pre-existing static files from the template.

Generally, layout-related files should go in the /theme directory.

If you want to add images to your particular documentation page, just put them
next to your .rst source file and reference them relatively.
|
||||
|
Before Width: | Height: | Size: 15 KiB |
|
Before Width: | Height: | Size: 4.6 KiB |
|
Before Width: | Height: | Size: 7.0 KiB |
|
Before Width: | Height: | Size: 8.5 KiB |
|
Before Width: | Height: | Size: 9.0 KiB |
@@ -1,48 +0,0 @@
---
alias:
- /reference/api/hub_registry_spec/
- /userguide/image_management/
description: Documentation for the Docker Registry and Registry API
keywords: docker, registry, api, hub
title: Image management
---

The Docker Engine provides a client which you can use to create images on the
command line or through a build process. You can run these images in a
container or publish them for others to use. Storing the images you create,
searching for images you might want, and publishing images others might use
are all elements of image management.

This section provides an overview of the major features and products Docker
provides for image management.

## Docker Hub

The [Docker Hub](/docker-hub/) is responsible for centralizing information
about user accounts, images, and public namespaces. It has several components:

- Web UI
- Metadata store (comments, stars, list of public repositories)
- Authentication service
- Tokenization

There is only one instance of the Docker Hub, run and managed by Docker Inc.
This public Hub is useful for most individuals and smaller companies.

## Docker Registry and the Docker Trusted Registry

The Docker Registry is a component of Docker's ecosystem. A registry is a
storage and content delivery system that holds named Docker images, available
in different tagged versions: for example, the image `distribution/registry`,
with tags `2.0` and `latest`. Users interact with a registry by using the
`docker push` and `docker pull` commands, such as
`docker pull myregistry.com/stevvooe/batman:voice`.

The Docker Hub has its own registry which, like the Hub itself, is run and
managed by Docker. However, there are other ways to obtain a registry. You can
purchase the [Docker Trusted Registry](/datacenter/dtr/2.1/guides/index.md)
product to run on your company's network. Alternatively, you can use the
Docker Registry component to build a private registry. For information about
using a registry, see the overview for the [Docker Registry](/registry).
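For illustration, an image reference like the one above decomposes into a
registry host, a repository, and a tag. The following pure-shell sketch (no
daemon required, and naive in that it assumes an explicit registry host and tag
are present) shows the naming scheme:

```shell
# Split a Docker image reference into registry, repository, and tag.
# The reference below is the example from the text.
ref="myregistry.com/stevvooe/batman:voice"

registry="${ref%%/*}"   # text before the first "/"
rest="${ref#*/}"        # everything after the registry host
repo="${rest%:*}"       # strip the ":tag" suffix
tag="${rest##*:}"       # text after the last ":"

echo "registry=$registry repo=$repo tag=$tag"
```

Official images pulled from the Docker Hub (such as `ubuntu`) omit the registry
host entirely, so this decomposition only applies to fully qualified references.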

## Content Trust

When transferring data among networked systems, *trust* is a central concern.
In particular, when communicating over an untrusted medium such as the
internet, it is critical to ensure the integrity and publisher of all of the
data a system operates on. You use Docker to push and pull images (data) to a
registry. Content trust gives you the ability to both verify the integrity and
the publisher of all the data received from a registry over any channel.

[Content trust](../../security/trust/index.md) is currently only available for
users of the public Docker Hub. It is currently not available for the Docker
Trusted Registry or for private registries.
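Content trust is switched on per shell session with an environment variable; a
minimal sketch (the `docker pull` line is illustrative only and not run here):

```shell
# Opt in to content trust for this shell session. With this set,
# `docker push` and `docker pull` operate only on signed images.
export DOCKER_CONTENT_TRUST=1

# Illustrative only: docker pull myregistry.com/stevvooe/batman:voice
echo "DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"
```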
@@ -1,9 +0,0 @@
---
description: The Docker user guide home page
keywords: introduction, images, dockerfile
title: Work with images
---

* [Create a base image](baseimages.md)
* [Best practices for writing Dockerfiles](dockerfile_best-practices.md)
* [Image management](image_management.md)
@@ -1,13 +0,0 @@
---
description: How to use the Docker Engine user guide
keywords: docker, overview
title: Configure and use Docker
redirect_from:
- /engine/userguide/intro/
---

After you've [installed Docker](/install/index.md) and completed the
[Getting started guides](/get-started/), you're ready to take it to the next
level. The topics in this section show how to develop and design your app,
store your app's data, and get your app running in a scalable, secure, robust
production environment.
@@ -69,7 +69,7 @@ for multiple trusted collections in an associated database, and a Notary signer,
stores private keys for and signs metadata for the Notary server. The following
diagram illustrates this architecture:

![Notary server architecture](/engine/article-img/notary-service-architecture.svg)
![Notary server architecture](/engine/images/notary-service-architecture.svg)

Root, targets, and (sometimes) snapshot metadata are generated and signed by
clients, who upload the metadata to the Notary server. The server is
||||