mirror of
https://github.com/docker/docs.git
synced 2026-03-31 00:08:55 +07:00
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Engines on Docker Docs</title>
<link>http://localhost/engine/</link>
<description>Recent content in Engines on Docker Docs</description>
<generator>Hugo -- gohugo.io</generator>
<language>en-us</language>
<atom:link href="http://localhost/engine/index.xml" rel="self" type="application/rss+xml" />

<item>
<title></title>
<link>http://localhost/engine/articles/https/README/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/articles/https/README/</guid>
<description><p>This is an initial attempt to make it easier to test the examples in the https.md doc</p>

<p>at this point, it has to be a manual thing, and I&rsquo;ve been running it in boot2docker</p>

<p>so my process is</p>

<p>$ boot2docker ssh
$$ git clone <a href="https://github.com/docker/docker">https://github.com/docker/docker</a>
$$ cd docker/docs/articles/https
$$ make cert
lots of things to see and manually answer, as openssl wants to be interactive
<strong>NOTE:</strong> make sure you enter the hostname (<code>boot2docker</code> in my case) when prompted for <code>Computer Name</code>
$$ sudo make run</p>

<p>start another terminal</p>

<p>$ boot2docker ssh
$$ cd docker/docs/articles/https
$$ make client</p>

<p>the last will connect first with <code>--tls</code> and then with <code>--tlsverify</code></p>

<p>both should succeed</p>
</description>
</item>

<item>
<title></title>
<link>http://localhost/engine/reference/api/README/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/reference/api/README/</guid>
<description><p>This directory holds the authoritative specifications of APIs defined and implemented by Docker. Currently this includes:</p>

<ul>
<li>The remote API by which a docker node can be queried over HTTP</li>
<li>The registry API by which a docker node can download and upload images for storage and sharing</li>
<li>The index search API by which a docker node can search the public index for images to download</li>
<li>The docker.io OAuth and accounts API which 3rd party services can use to access account information</li>
</ul>
</description>
</item>

<item>
<title></title>
<link>http://localhost/engine/security/apparmor/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/security/apparmor/</guid>
<description>

<h2 id="apparmor-security-profiles-for-docker">AppArmor security profiles for Docker</h2>

<p>AppArmor (Application Armor) is a security module that allows a system administrator to associate a security profile with each program. Docker expects to find an AppArmor policy loaded and enforced.</p>

<p>Container profiles are loaded automatically by Docker. A profile for the Docker Engine itself also exists and is installed with the official <em>.deb</em> packages. Advanced users and package managers may find the profile for <em>/usr/bin/docker</em> underneath <a href="https://github.com/docker/docker/tree/master/contrib/apparmor">contrib/apparmor</a> in the Docker Engine source repository.</p>

<h2 id="understand-the-policies">Understand the policies</h2>

<p>The <code>docker-default</code> profile is the default for running containers. It is moderately protective while providing wide application compatibility.</p>

<p>The system&rsquo;s standard <code>unconfined</code> profile inherits all system-wide policies, applying path-based policies intended for the host system inside containers. This was the default for privileged containers prior to Docker 1.8.</p>

<h2 id="overriding-the-profile-for-a-container">Overriding the profile for a container</h2>

<p>Users may override the AppArmor profile on a per-container basis using the <code>--security-opt</code> option.</p>

<p>For example, the following explicitly specifies the default policy:</p>

<pre><code>$ docker run --rm -it --security-opt apparmor:docker-default hello-world
</code></pre>
</description>
</item>

<item>
<title></title>
<link>http://localhost/engine/static_files/README/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/static_files/README/</guid>
<description>

<h1 id="static-files-dir">Static files dir</h1>

<p>Files you put in /static_files/ will be copied to the web-visible /_static/</p>

<p>Be careful not to override pre-existing static files from the template.</p>

<p>Generally, layout-related files should go in the /theme directory.</p>

<p>If you want to add images to a particular documentation page, just put them next to your .rst source file and reference them relatively.</p>
</description>
</item>

<item>
<title>AUFS storage driver in practice</title>
<link>http://localhost/engine/userguide/storagedriver/aufs-driver/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/userguide/storagedriver/aufs-driver/</guid>
<description>

<h1 id="docker-and-aufs-in-practice">Docker and AUFS in practice</h1>

<p>AUFS was the first storage driver in use with Docker. As a result, it has a long and close history with Docker, is very stable, has a lot of real-world deployments, and has strong community support. AUFS has several features that make it a good choice for Docker. These features enable:</p>

<ul>
<li>Fast container startup times.</li>
<li>Efficient use of storage.</li>
<li>Efficient use of memory.</li>
</ul>

<p>Despite its capabilities and long history with Docker, some Linux distributions do not support AUFS. This is usually because AUFS is not included in the mainline (upstream) Linux kernel.</p>

<p>The following sections examine some AUFS features and how they relate to Docker.</p>

<h2 id="image-layering-and-sharing-with-aufs">Image layering and sharing with AUFS</h2>

<p>AUFS is a <em>unification filesystem</em>. This means that it takes multiple directories on a single Linux host, stacks them on top of each other, and provides a single unified view. To achieve this, AUFS uses a <em>union mount</em>.</p>

<p>AUFS stacks multiple directories and exposes them as a unified view through a single mount point. All of the directories in the stack, as well as the union mount point, must exist on the same Linux host. AUFS refers to each directory that it stacks as a <em>branch</em>.</p>

<p>Within Docker, AUFS union mounts enable image layering. The AUFS storage driver implements Docker image layers using this union mount system. AUFS branches correspond to Docker image layers. The diagram below shows a Docker container based on the <code>ubuntu:latest</code> image.</p>

<p><img src="../engine/userguide/storagedriver/images/aufs_layers.jpg" alt="" /></p>

<p>This diagram shows the relationship between the Docker image layers and the AUFS branches (directories) in <code>/var/lib/docker/aufs</code>. Each image layer and the container layer correspond to an AUFS branch (directory) in the Docker host&rsquo;s local storage area. The union mount point gives the unified view of all layers.</p>

<p>AUFS also supports copy-on-write (CoW) technology. Not all storage drivers do.</p>

<h2 id="container-reads-and-writes-with-aufs">Container reads and writes with AUFS</h2>

<p>Docker leverages AUFS CoW technology to enable image sharing and minimize the use of disk space. AUFS works at the file level. This means that all AUFS CoW operations copy entire files, even if only a small part of the file is being modified. This behavior can have a noticeable impact on container performance, especially if the files being copied are large, exist below many image layers, or the CoW operation must search a deep directory tree.</p>

<p>Consider, for example, an application running in a container that needs to add a single new value to a large key-value store (file). If this is the first time the file is modified, it does not yet exist in the container&rsquo;s top writable layer. So, the CoW operation must <em>copy up</em> the file from the underlying image. The AUFS storage driver searches each image layer for the file. The search order is from top to bottom. When the file is found, it is <em>copied up</em> in its entirety to the container&rsquo;s top writable layer. From there, it can be opened and modified.</p>

<p>Larger files obviously take longer to <em>copy up</em> than smaller files, and files that exist in lower image layers take longer to locate than those in higher layers. However, a <em>copy up</em> operation occurs only once per file on any given container. Subsequent reads and writes happen against the file&rsquo;s copy already <em>copied up</em> to the container&rsquo;s top layer.</p>
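The copy-up search described above can be sketched in shell. This is a hypothetical illustration only, using plain directories under /tmp to stand in for AUFS branches; it mimics the top-to-bottom search and first-write copy, not the real kernel mechanism:

```shell
# Sketch: simulate AUFS copy-up with plain directories (not real AUFS).
# layer2 is the topmost image layer, layer0 the base; "writable" stands in
# for the container's top writable layer.
set -e
mkdir -p /tmp/aufs-demo/layer0 /tmp/aufs-demo/layer1 /tmp/aufs-demo/layer2 /tmp/aufs-demo/writable
echo "v1" > /tmp/aufs-demo/layer0/kvstore   # the file lives in the base layer

copy_up() {
  file="$1"
  # Search from the top layer down, like the AUFS driver does
  for layer in writable layer2 layer1 layer0; do
    if [ -e "/tmp/aufs-demo/$layer/$file" ]; then
      # First write: the whole file is copied up to the writable layer
      if [ "$layer" != "writable" ]; then
        cp "/tmp/aufs-demo/$layer/$file" "/tmp/aufs-demo/writable/$file"
      fi
      echo "$layer"
      return 0
    fi
  done
  return 1
}

found_in=$(copy_up kvstore)
echo "found in: $found_in"                     # found in: layer0
echo "v2" >> /tmp/aufs-demo/writable/kvstore   # later writes hit the copy
```

A second call to `copy_up kvstore` finds the file in `writable` immediately, mirroring how subsequent reads and writes bypass the lower layers.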

<h2 id="deleting-files-with-the-aufs-storage-driver">Deleting files with the AUFS storage driver</h2>

<p>The AUFS storage driver deletes a file from a container by placing a <em>whiteout file</em> in the container&rsquo;s top layer. The whiteout file effectively obscures the existence of the file in the image&rsquo;s lower, read-only layers. The simplified diagram below shows a container based on an image with three image layers.</p>

<p><img src="../engine/userguide/storagedriver/images/aufs_delete.jpg" alt="" /></p>

<p>Here, <code>file3</code> was deleted from the container, so the AUFS storage driver placed a whiteout file in the container&rsquo;s top layer. This whiteout file effectively &ldquo;deletes&rdquo; <code>file3</code> from the container by obscuring any trace of the original file in the image&rsquo;s read-only base layer. Of course, the file could have existed in any of the other layers instead, or in several of them, depending on how the layers were built.</p>
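The whiteout lookup can be sketched in shell. This is a hypothetical simulation using plain directories (AUFS names its whiteout files <code>.wh.&lt;name&gt;</code>; everything else here is illustration, not the real driver):

```shell
# Sketch: simulate whiteout semantics with plain directories (not real AUFS).
# A ".wh.<name>" file in an upper layer hides <name> in all lower layers.
set -e
mkdir -p /tmp/wh-demo/image /tmp/wh-demo/container
echo "data" > /tmp/wh-demo/image/file3   # file3 exists in the read-only image layer
touch /tmp/wh-demo/container/.wh.file3   # "deleting" file3 places a whiteout on top

lookup() {
  file="$1"
  # Search from the container's top layer down
  for layer in container image; do
    if [ -e "/tmp/wh-demo/$layer/.wh.$file" ]; then
      echo "deleted"; return 0           # whiteout obscures lower layers
    elif [ -e "/tmp/wh-demo/$layer/$file" ]; then
      echo "present in $layer"; return 0
    fi
  done
  echo "absent"
}

lookup file3   # prints "deleted" even though the image layer still holds the file
```

Note that the file's data is never removed from the read-only image layer; the whiteout merely hides it from the unified view.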

<h2 id="configure-docker-with-aufs">Configure Docker with AUFS</h2>

<p>You can only use the AUFS storage driver on Linux systems with AUFS installed. Use the following command to determine whether your system supports AUFS.</p>

<pre><code class="language-bash">$ grep aufs /proc/filesystems
nodev aufs
</code></pre>

<p>This output indicates the system supports AUFS. Once you&rsquo;ve verified your system supports AUFS, you must instruct the Docker daemon to use it. You do this from the command line with the <code>docker daemon</code> command:</p>

<pre><code class="language-bash">$ sudo docker daemon --storage-driver=aufs &amp;
</code></pre>

<p>Alternatively, you can edit the Docker config file and add the <code>--storage-driver=aufs</code> option to the <code>DOCKER_OPTS</code> line.</p>

<pre><code class="language-bash"># Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS=&quot;--storage-driver=aufs&quot;
</code></pre>

<p>Once your daemon is running, verify the storage driver with the <code>docker info</code> command.</p>

<pre><code class="language-bash">$ sudo docker info
Containers: 1
Images: 4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 6
Dirperm1 Supported: false
Execution Driver: native-0.2
...output truncated...
</code></pre>

<p>The output above shows that the Docker daemon is running the AUFS storage driver on top of an existing ext4 backing filesystem.</p>

<h2 id="local-storage-and-aufs">Local storage and AUFS</h2>

<p>As the <code>docker daemon</code> runs with the AUFS driver, the driver stores images and containers within the Docker host&rsquo;s local storage area in the <code>/var/lib/docker/aufs</code> directory.</p>

<h3 id="images">Images</h3>

<p>Image layers and their contents are stored under the <code>/var/lib/docker/aufs/diff/&lt;image-id&gt;</code> directory. The contents of an image layer in this location include all the files and directories belonging to that image layer.</p>

<p>The <code>/var/lib/docker/aufs/layers/</code> directory contains metadata about how image layers are stacked. This directory contains one file for every image or container layer on the Docker host. Inside each file are the names of the image layers that exist below it. The diagram below shows an image with 4 layers.</p>

<p><img src="" alt="" /></p>

<p>Inspecting the contents of the file relating to the top layer of the image shows the three image layers below it. They are listed in the order they are stacked.</p>

<pre><code class="language-bash">$ cat /var/lib/docker/aufs/layers/91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c
d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82
c22013c8472965aa5b62559f2b540cd440716ef149756e7b958a1b2aba421e87
d3a1f33e8a5a513092f01bb7eb1c2abf4d711e5105390a3fe1ae2248cfde1391
</code></pre>

<p>The base layer in an image has no image layers below it, so its file is empty.</p>

<h3 id="containers">Containers</h3>

<p>Running containers are mounted at locations in the <code>/var/lib/docker/aufs/mnt/&lt;container-id&gt;</code> directory. This is the AUFS union mount point that exposes the container and all underlying image layers as a single unified view. If a container is not running, its directory still exists but is empty. This is because containers are only mounted when they are running.</p>

<p>Container metadata and various config files that are placed into the running container are stored in <code>/var/lib/docker/containers/&lt;container-id&gt;</code>. Files in this directory exist for all containers on the system, including ones that are stopped. When a container is running, the container&rsquo;s log files are also in this directory.</p>

<p>A container&rsquo;s thin writable layer is stored under <code>/var/lib/docker/aufs/diff/&lt;container-id&gt;</code>. This directory is stacked by AUFS as the container&rsquo;s top writable layer and is where all changes to the container are stored. The directory exists even if the container is stopped. This means that restarting a container will not lose changes made to it. Once a container is deleted, this directory is deleted.</p>

<p>Information about which image layers are stacked below a container&rsquo;s top writable layer is stored in the file <code>/var/lib/docker/aufs/layers/&lt;container-id&gt;</code>. The command below shows that the container with ID <code>b41a6e5a508d</code> has 4 image layers below it:</p>

<pre><code class="language-bash">$ cat /var/lib/docker/aufs/layers/b41a6e5a508dfa02607199dfe51ed9345a675c977f2cafe8ef3e4b0b5773404e-init
91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c
d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82
c22013c8472965aa5b62559f2b540cd440716ef149756e7b958a1b2aba421e87
d3a1f33e8a5a513092f01bb7eb1c2abf4d711e5105390a3fe1ae2248cfde1391
</code></pre>

<p>The image layers are shown in order. In the output above, the layer starting with image ID &ldquo;d3a1&hellip;&rdquo; is the image&rsquo;s base layer. The image layer starting with &ldquo;91e5&hellip;&rdquo; is the image&rsquo;s topmost layer.</p>

<h2 id="aufs-and-docker-performance">AUFS and Docker performance</h2>

<p>To summarize some of the performance-related aspects already mentioned:</p>

<ul>
<li><p>The AUFS storage driver is a good choice for PaaS and other similar use cases where container density is important. This is because AUFS efficiently shares images between multiple running containers, enabling fast container start times and minimal use of disk space.</p></li>

<li><p>The underlying mechanics of how AUFS shares files between image layers and containers use the system&rsquo;s page cache very efficiently.</p></li>

<li><p>The AUFS storage driver can introduce significant latencies into container write performance. This is because the first time a container writes to any file, the file has to be located and copied into the container&rsquo;s top writable layer. These latencies increase and are compounded when these files exist below many image layers and the files themselves are large.</p></li>
</ul>

<p>One final point: data volumes provide the best and most predictable performance. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. For this reason, you may want to place heavy write workloads on data volumes.</p>

<h2 id="related-information">Related information</h2>

<ul>
<li><a href="../engine/userguide/storagedriver/imagesandcontainers/">Understand images, containers, and storage drivers</a></li>
<li><a href="../engine/userguide/storagedriver/selectadriver/">Select a storage driver</a></li>
<li><a href="../engine/userguide/storagedriver/btrfs-driver/">BTRFS storage driver in practice</a></li>
<li><a href="../engine/userguide/storagedriver/device-mapper-driver/">Device Mapper storage driver in practice</a></li>
</ul>
</description>
</item>

<item>
<title>About Docker</title>
<link>http://localhost/engine/misc/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/misc/</guid>
<description>

<h1 id="about-docker">About Docker</h1>

<p><strong>Develop, Ship and Run Any Application, Anywhere</strong></p>

<p><a href="https://www.docker.com"><strong>Docker</strong></a> is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.</p>

<p>Docker consists of:</p>

<ul>
<li>The Docker Engine - our lightweight and powerful open source container virtualization technology combined with a work flow for building and containerizing your applications.</li>
<li><a href="https://hub.docker.com">Docker Hub</a> - our SaaS service for sharing and managing your application stacks.</li>
</ul>

<h2 id="why-docker">Why Docker?</h2>

<p><em>Faster delivery of your applications</em></p>

<ul>
<li>We want your environment to work better. Docker containers, and the work flow that comes with them, help your developers, sysadmins, QA folks, and release engineers work together to get your code into production and make it useful. We&rsquo;ve created a standard container format that lets developers care about their applications inside containers while sysadmins and operators can work on running the container in your deployment. This separation of duties streamlines and simplifies the management and deployment of code.</li>
<li>We make it easy to build new containers, enable rapid iteration of your applications, and increase the visibility of changes. This helps everyone in your organization understand how an application works and how it is built.</li>
<li>Docker containers are lightweight and fast! Containers have sub-second launch times, reducing the cycle time of development, testing, and deployment.</li>
</ul>

<p><em>Deploy and scale more easily</em></p>

<ul>
<li>Docker containers run (almost) everywhere. You can deploy containers on desktops, physical servers, virtual machines, into data centers, and up to public and private clouds.</li>
<li>Since Docker runs on so many platforms, it&rsquo;s easy to move your applications around. You can easily move an application from a testing environment into the cloud and back whenever you need.</li>
<li>Docker&rsquo;s lightweight containers also make scaling up and down fast and easy. You can quickly launch more containers when needed and then shut them down easily when they&rsquo;re no longer needed.</li>
</ul>

<p><em>Get higher density and run more workloads</em></p>

<ul>
<li>Docker containers don&rsquo;t need a hypervisor, so you can pack more of them onto your hosts. This means you get more value out of every server and can potentially reduce what you spend on equipment and licenses.</li>
</ul>

<p><em>Faster deployment makes for easier management</em></p>

<ul>
<li>As Docker speeds up your work flow, it gets easier to make lots of small changes instead of huge, big bang updates. Smaller changes mean reduced risk and more uptime.</li>
</ul>

<h2 id="about-this-guide">About this guide</h2>

<p>The <a href="/engine/introduction/understanding-docker/">Understanding Docker section</a> will help you:</p>

<ul>
<li>See how Docker works at a high level</li>
<li>Understand the architecture of Docker</li>
<li>Discover Docker&rsquo;s features</li>
<li>See how Docker compares to virtual machines</li>
<li>See some common use cases.</li>
</ul>

<h3 id="installation-guides">Installation guides</h3>

<p>The <a href="../engine/installation/">installation section</a> will show you how to install Docker on a variety of platforms.</p>

<h3 id="docker-user-guide">Docker user guide</h3>

<p>To learn about Docker in more detail and to answer questions about usage and implementation, check out the <a href="../engine/userguide/">Docker User Guide</a>.</p>

<h2 id="release-notes">Release notes</h2>

<p>A summary of the changes in each release in the current series can now be found on the separate <a href="https://docs.docker.com/release-notes">Release Notes page</a>.</p>

<h2 id="feature-deprecation-policy">Feature Deprecation Policy</h2>

<p>As changes are made to Docker there may be times when existing features will need to be removed or replaced with newer features. Before an existing feature is removed it will be labeled as &ldquo;deprecated&rdquo; within the documentation and will remain in Docker for, usually, at least 2 releases. After that time it may be removed.</p>

<p>Users are expected to take note of the list of deprecated features with each release and plan their migration away from those features, and (if applicable) towards the replacement features, as soon as possible.</p>

<p>The complete list of deprecated features can be found on the <a href="../engine/misc/deprecated/">Deprecated Features page</a>.</p>

<h2 id="licensing">Licensing</h2>

<p>Docker is licensed under the Apache License, Version 2.0. See <a href="https://github.com/docker/docker/blob/master/LICENSE">LICENSE</a> for the full license text.</p>
</description>
</item>

<item>
<title>Amazon CloudWatch Logs logging driver</title>
<link>http://localhost/engine/reference/logging/awslogs/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/reference/logging/awslogs/</guid>
<description>

<h1 id="amazon-cloudwatch-logs-logging-driver">Amazon CloudWatch Logs logging driver</h1>

<p>The <code>awslogs</code> logging driver sends container logs to <a href="https://aws.amazon.com/cloudwatch/details/#log-monitoring">Amazon CloudWatch Logs</a>. Log entries can be retrieved through the <a href="https://console.aws.amazon.com/cloudwatch/home#logs:">AWS Management Console</a> or the <a href="http://docs.aws.amazon.com/cli/latest/reference/logs/index.html">AWS SDKs and Command Line Tools</a>.</p>

<h2 id="usage">Usage</h2>

<p>You can configure the default logging driver by passing the <code>--log-driver</code> option to the Docker daemon:</p>

<pre><code>docker daemon --log-driver=awslogs
</code></pre>

<p>You can set the logging driver for a specific container by using the <code>--log-driver</code> option to <code>docker run</code>:</p>

<pre><code>docker run --log-driver=awslogs ...
</code></pre>

<h2 id="amazon-cloudwatch-logs-options">Amazon CloudWatch Logs options</h2>

<p>You can use the <code>--log-opt NAME=VALUE</code> flag to specify Amazon CloudWatch Logs logging driver options.</p>

<h3 id="awslogs-region">awslogs-region</h3>

<p>The <code>awslogs</code> logging driver sends your Docker logs to a specific region. Use the <code>awslogs-region</code> log option or the <code>AWS_REGION</code> environment variable to set the region. By default, if your Docker daemon is running on an EC2 instance and no region is set, the driver uses the instance&rsquo;s region.</p>

<pre><code>docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 ...
</code></pre>

<h3 id="awslogs-group">awslogs-group</h3>

<p>You must specify a <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html">log group</a> for the <code>awslogs</code> logging driver. You can specify the log group with the <code>awslogs-group</code> log option:</p>

<pre><code>docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup ...
</code></pre>

<h3 id="awslogs-stream">awslogs-stream</h3>

<p>To configure which <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html">log stream</a> should be used, you can specify the <code>awslogs-stream</code> log option. If not specified, the container ID is used as the log stream.</p>

<blockquote>
<p><strong>Note:</strong> Log streams within a given log group should only be used by one container at a time. Using the same log stream for multiple containers concurrently can cause reduced logging performance.</p>
</blockquote>

<h2 id="credentials">Credentials</h2>

<p>You must provide AWS credentials to the Docker daemon to use the <code>awslogs</code> logging driver. You can provide these credentials with the <code>AWS_ACCESS_KEY_ID</code>, <code>AWS_SECRET_ACCESS_KEY</code>, and <code>AWS_SESSION_TOKEN</code> environment variables, the default AWS shared credentials file (<code>~/.aws/credentials</code> of the root user), or (if you are running the Docker daemon on an Amazon EC2 instance) the Amazon EC2 instance profile.</p>

<p>Credentials must have a policy applied that allows the <code>logs:CreateLogStream</code> and <code>logs:PutLogEvents</code> actions, as shown in the following example.</p>

<pre><code>{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Action&quot;: [
        &quot;logs:CreateLogStream&quot;,
        &quot;logs:PutLogEvents&quot;
      ],
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Resource&quot;: &quot;*&quot;
    }
  ]
}
</code></pre>
</description>
</item>

<item>
<title>Amazon EC2 Installation</title>
<link>http://localhost/engine/installation/amazon/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/installation/amazon/</guid>
<description>

<h2 id="amazon-ec2">Amazon EC2</h2>

<p>You can install Docker on any AWS EC2 Amazon Machine Image (AMI) which runs an operating system that Docker supports. Amazon&rsquo;s website includes specific instructions for <a href="http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#install_docker">installing on Amazon Linux</a>. To install on another AMI, follow the instructions for its specific operating system in this installation guide.</p>

<p>For detailed information on Amazon AWS support for Docker, refer to <a href="http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html">Amazon&rsquo;s documentation</a>.</p>
</description>
</item>

<item>
<title>Applied Docker</title>
<link>http://localhost/engine/examples/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/examples/</guid>
<description>

<h1 id="examples">Examples</h1>

<p>This section contains the following:</p>

<ul>
<li><a href="../engine/examples/mongodb/">Dockerizing MongoDB</a></li>
<li><a href="../engine/examples/postgresql_service/">Dockerizing PostgreSQL</a></li>
<li><a href="../engine/examples/couchdb_data_volumes/">Dockerizing a CouchDB service</a></li>
<li><a href="../engine/examples/nodejs_web_app/">Dockerizing a Node.js web app</a></li>
<li><a href="../engine/examples/running_redis_service/">Dockerizing a Redis service</a></li>
<li><a href="../engine/examples/apt-cacher-ng/">Dockerizing an apt-cacher-ng service</a></li>
<li><a href="../engine/userguide/dockerizing/">Dockerizing applications: A &lsquo;Hello world&rsquo;</a></li>
</ul>
</description>
</item>

<item>
<title>Apply custom metadata</title>
<link>http://localhost/engine/userguide/labels-custom-metadata/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/userguide/labels-custom-metadata/</guid>
<description>

<h1 id="apply-custom-metadata">Apply custom metadata</h1>

<p>You can apply metadata to your images, containers, or daemons via labels. Labels serve a wide range of uses, such as adding notes or licensing information to an image, or identifying a host.</p>

<p>A label is a <code>&lt;key&gt;</code> / <code>&lt;value&gt;</code> pair. Docker stores the label values as <em>strings</em>. You can specify multiple labels, but each <code>&lt;key&gt;</code> must be unique or the value will be overwritten. If you specify the same <code>key</code> several times but with different values, newer labels overwrite previous labels. Docker uses the last <code>key=value</code> you supply.</p>
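The last-value-wins behavior can be illustrated with a small shell sketch. This is a hypothetical simulation of the overwrite rule, not Docker's implementation: a lookup that scans a list of <code>key=value</code> pairs so that later pairs win.

```shell
# Sketch: simulate "the last key=value supplied wins" for label lookups.
last_label_value() {
  key="$1"; shift
  value=""
  for pair in "$@"; do
    case "$pair" in
      "$key"=*) value="${pair#*=}" ;;   # a later match overwrites an earlier one
    esac
  done
  echo "$value"
}

last_label_value architecture architecture=amd64 architecture=ARMv7
# prints "ARMv7" -- the last value supplied for the key wins
```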

<blockquote>
<p><strong>Note:</strong> Support for daemon labels was added in Docker 1.4.1. Labels on containers and images are new in Docker 1.6.0.</p>
</blockquote>

<h2 id="label-keys-namespaces">Label keys (namespaces)</h2>

<p>Docker puts no hard restrictions on the <code>key</code> used for a label. However, using simple keys can easily lead to conflicts. For example, suppose you have chosen to categorize your images by CPU architecture using &ldquo;architecture&rdquo; labels in your Dockerfiles:</p>

<pre><code>LABEL architecture=&quot;amd64&quot;

LABEL architecture=&quot;ARMv7&quot;
</code></pre>

<p>Another user may apply the same label based on a building&rsquo;s &ldquo;architecture&rdquo;:</p>

<pre><code>LABEL architecture=&quot;Art Nouveau&quot;
</code></pre>

<p>To prevent naming conflicts, Docker recommends namespacing label keys using reverse domain notation. Use the following guidelines to name your keys:</p>

<ul>
<li><p>All (third-party) tools should prefix their keys with the reverse DNS notation of a domain controlled by the author. For example, <code>com.example.some-label</code>.</p></li>

<li><p>The <code>com.docker.*</code>, <code>io.docker.*</code> and <code>org.dockerproject.*</code> namespaces are reserved for Docker&rsquo;s internal use.</p></li>

<li><p>Keys should only consist of lower-cased alphanumeric characters, dots and dashes (for example, <code>[a-z0-9-.]</code>).</p></li>

<li><p>Keys should start <em>and</em> end with an alphanumeric character.</p></li>

<li><p>Keys may not contain consecutive dots or dashes.</p></li>

<li><p>Keys <em>without</em> a namespace (dots) are reserved for CLI use. This allows end-users to add metadata to their containers and images without having to type cumbersome namespaces on the command line.</p></li>
</ul>
|
||
|
||
<p>These are simply guidelines and Docker does not <em>enforce</em> them. However, for
|
||
the benefit of the community, you <em>should</em> use namespaces for your label keys.</p>
|
||
|
||
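<p>The guidelines above can be captured in a single regular expression. The sketch below is illustrative only; the pattern is an assumption that encodes the guidelines, not an official Docker validation rule:</p>

```python
import re

# Lower-case alphanumeric segments separated by single dots or dashes,
# starting and ending with an alphanumeric character (so no consecutive
# dots or dashes). Illustrative assumption, not an official Docker rule.
KEY_PATTERN = re.compile(r'^[a-z0-9]+(?:[.-][a-z0-9]+)*$')

def is_valid_label_key(key):
    return bool(KEY_PATTERN.match(key))

print(is_valid_label_key('com.example.some-label'))  # True
print(is_valid_label_key('com..example'))            # False (consecutive dots)
print(is_valid_label_key('-architecture'))           # False (leading dash)
```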
<h2 id="store-structured-data-in-labels">Store structured data in labels</h2>

<p>Label values can contain any data type that can be represented as a
string. For example, consider this JSON document:</p>

<pre><code>{
    &quot;Description&quot;: &quot;A containerized foobar&quot;,
    &quot;Usage&quot;: &quot;docker run --rm example/foobar [args]&quot;,
    &quot;License&quot;: &quot;GPL&quot;,
    &quot;Version&quot;: &quot;0.0.1-beta&quot;,
    &quot;aBoolean&quot;: true,
    &quot;aNumber&quot; : 0.01234,
    &quot;aNestedArray&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]
}
</code></pre>

<p>You can store this structure in a label by serializing it to a string first:</p>

<pre><code>LABEL com.example.image-specs=&quot;{\&quot;Description\&quot;:\&quot;A containerized foobar\&quot;,\&quot;Usage\&quot;:\&quot;docker run --rm example\\/foobar [args]\&quot;,\&quot;License\&quot;:\&quot;GPL\&quot;,\&quot;Version\&quot;:\&quot;0.0.1-beta\&quot;,\&quot;aBoolean\&quot;:true,\&quot;aNumber\&quot;:0.01234,\&quot;aNestedArray\&quot;:[\&quot;a\&quot;,\&quot;b\&quot;,\&quot;c\&quot;]}&quot;
</code></pre>

<p>While it is <em>possible</em> to store structured data in label values, Docker treats
this data as a &lsquo;regular&rsquo; string. This means that Docker doesn&rsquo;t offer ways to
query (filter) based on nested properties. If your tool needs to filter on
nested properties, the tool itself needs to implement this functionality.</p>

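<p>A tool consuming such a label must deserialize the value itself. A minimal sketch in Python, using the serialized document from the example above as input:</p>

```python
import json

# The label value as Docker would return it, e.g. from `docker inspect`
# (the serialized JSON document from the example above).
label_value = ('{"Description":"A containerized foobar",'
               '"Usage":"docker run --rm example/foobar [args]",'
               '"License":"GPL","Version":"0.0.1-beta",'
               '"aBoolean":true,"aNumber":0.01234,'
               '"aNestedArray":["a","b","c"]}')

spec = json.loads(label_value)
print(spec['Description'])   # A containerized foobar
print(spec['aNestedArray'])  # ['a', 'b', 'c']
```

Filtering on a nested property (for example, selecting only GPL-licensed images) is then a matter of ordinary dictionary access in the tool itself.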
<h2 id="add-labels-to-images">Add labels to images</h2>

<p>To add labels to an image, use the <code>LABEL</code> instruction in your Dockerfile:</p>

<pre><code>LABEL [&lt;namespace&gt;.]&lt;key&gt;[=&lt;value&gt;] ...
</code></pre>

<p>The <code>LABEL</code> instruction adds a label to your image, optionally with a value.
Use surrounding quotes or backslashes for labels that contain
white space characters in the <code>&lt;value&gt;</code>:</p>

<pre><code>LABEL vendor=ACME\ Incorporated
LABEL com.example.is-beta
LABEL com.example.version=&quot;0.0.1-beta&quot;
LABEL com.example.release-date=&quot;2015-02-12&quot;
</code></pre>

<p>The <code>LABEL</code> instruction also supports setting multiple <code>&lt;key&gt;</code> / <code>&lt;value&gt;</code> pairs
in a single instruction:</p>

<pre><code>LABEL com.example.version=&quot;0.0.1-beta&quot; com.example.release-date=&quot;2015-02-12&quot;
</code></pre>

<p>Long lines can be split up by using a backslash (<code>\</code>) as a continuation marker:</p>

<pre><code>LABEL vendor=ACME\ Incorporated \
      com.example.is-beta \
      com.example.version=&quot;0.0.1-beta&quot; \
      com.example.release-date=&quot;2015-02-12&quot;
</code></pre>

<p>Docker recommends adding multiple labels in a single <code>LABEL</code> instruction. Using
individual instructions for each label can result in an inefficient image. This
is because each <code>LABEL</code> instruction in a Dockerfile produces a new image layer.</p>

<p>You can view the labels via the <code>docker inspect</code> command:</p>

<pre><code>$ docker inspect 4fa6e0f0c678

...
&quot;Labels&quot;: {
    &quot;vendor&quot;: &quot;ACME Incorporated&quot;,
    &quot;com.example.is-beta&quot;: &quot;&quot;,
    &quot;com.example.version&quot;: &quot;0.0.1-beta&quot;,
    &quot;com.example.release-date&quot;: &quot;2015-02-12&quot;
}
...

# Inspect labels on a container
$ docker inspect -f &quot;{{json .Config.Labels }}&quot; 4fa6e0f0c678

{&quot;vendor&quot;:&quot;ACME Incorporated&quot;,&quot;com.example.is-beta&quot;:&quot;&quot;,&quot;com.example.version&quot;:&quot;0.0.1-beta&quot;,&quot;com.example.release-date&quot;:&quot;2015-02-12&quot;}

# Inspect labels on an image
$ docker inspect -f &quot;{{json .ContainerConfig.Labels }}&quot; myimage
</code></pre>

<h2 id="query-labels">Query labels</h2>

<p>Besides storing metadata, you can filter images and containers by label. To list all
running containers that have the <code>com.example.is-beta</code> label:</p>

<pre><code># List all running containers that have a `com.example.is-beta` label
$ docker ps --filter &quot;label=com.example.is-beta&quot;
</code></pre>

<p>List all running containers with the label <code>color</code> that have a value <code>blue</code>:</p>

<pre><code>$ docker ps --filter &quot;label=color=blue&quot;
</code></pre>

<p>List all images with the label <code>vendor</code> that have the value <code>ACME</code>:</p>

<pre><code>$ docker images --filter &quot;label=vendor=ACME&quot;
</code></pre>

<h2 id="container-labels">Container labels</h2>

<p>You can add labels to a container when you create it, using the <code>--label</code> flag:</p>

<pre><code>docker run \
   -d \
   --label com.example.group=&quot;webservers&quot; \
   --label com.example.environment=&quot;production&quot; \
   busybox \
   top
</code></pre>

<p>Refer to the <a href="#query-labels">Query labels</a> section above for information
on how to query labels set on a container.</p>

<h2 id="daemon-labels">Daemon labels</h2>

<p>You can also set labels on the Docker daemon at startup, using one or more <code>--label</code> flags:</p>

<pre><code>docker daemon \
   --dns 8.8.8.8 \
   --dns 8.8.4.4 \
   -H unix:///var/run/docker.sock \
   --label com.example.environment=&quot;production&quot; \
   --label com.example.storage=&quot;ssd&quot;
</code></pre>

<p>These labels appear as part of the <code>docker info</code> output for the daemon:</p>

<pre><code>$ docker -D info
Containers: 12
Images: 672
Server Version: 1.9.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 697
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-22-generic
Operating System: Ubuntu 15.04
CPUs: 24
Total Memory: 62.86 GiB
Name: docker
ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S
Debug mode (server): true
File Descriptors: 59
Goroutines: 159
System Time: 2015-09-23T14:04:20.699842089+08:00
EventsListeners: 0
Init SHA1:
Init Path: /usr/bin/docker
Docker Root Dir: /var/lib/docker
Http Proxy: http://test:test@localhost:8080
Https Proxy: https://test:test@localhost:8080
WARNING: No swap limit support
Username: svendowideit
Registry: [https://index.docker.io/v1/]
Labels:
 com.example.environment=production
 com.example.storage=ssd
</code></pre>
</description>
</item>

<item>
<title>Automatically start containers</title>
<link>http://localhost/engine/articles/host_integration/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/articles/host_integration/</guid>
<description>

<h1 id="automatically-start-containers">Automatically start containers</h1>

<p>As of Docker 1.2,
<a href="../engine/reference/run/#restart-policies-restart">restart policies</a> are the
built-in Docker mechanism for restarting containers when they exit. If set,
restart policies will be used when the Docker daemon starts up, as typically
happens after a system boot. Restart policies ensure that linked containers
are started in the correct order.</p>

<p>If restart policies don&rsquo;t suit your needs (for example, when non-Docker processes
depend on Docker containers), you can use a process manager like
<a href="http://upstart.ubuntu.com/">upstart</a>,
<a href="http://freedesktop.org/wiki/Software/systemd/">systemd</a> or
<a href="http://supervisord.org/">supervisor</a> instead.</p>

<h2 id="using-a-process-manager">Using a process manager</h2>

<p>Docker does not set any restart policies by default, but be aware that restart
policies conflict with most process managers. Don&rsquo;t set restart policies if you
are using a process manager.</p>

<p>When you have finished setting up your image and are happy with your
running container, you can then attach a process manager to manage it.
When you run <code>docker start -a</code>, Docker automatically attaches to the
running container, or starts it if needed, and forwards all signals so that
the process manager can detect when a container stops and correctly
restart it.</p>

<p>Here are a few sample scripts for systemd and upstart to integrate with
Docker.</p>

<h2 id="examples">Examples</h2>

<p>The examples below show configuration files for two popular process managers,
upstart and systemd. In these examples, we&rsquo;ll assume that we have already
created a container to run Redis with <code>--name=redis_server</code>. These files define
a new service that will be started after the docker daemon service has started.</p>

<h3 id="upstart">upstart</h3>

<pre><code>description &quot;Redis container&quot;
author &quot;Me&quot;
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a redis_server
end script
</code></pre>

<h3 id="systemd">systemd</h3>

<pre><code>[Unit]
Description=Redis container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server

[Install]
WantedBy=multi-user.target
</code></pre>

<p>If you need to pass options to the redis container (such as <code>--env</code>),
then you&rsquo;ll need to use <code>docker run</code> rather than <code>docker start</code>. This will
create a new container every time the service is started, which will be stopped
and removed when the service is stopped.</p>

<pre><code>[Service]
...
ExecStart=/usr/bin/docker run --env foo=bar --name redis_server redis
ExecStop=/usr/bin/docker stop -t 2 redis_server ; /usr/bin/docker rm -f redis_server
...
</code></pre>
</description>
</item>

<item>
<title>Automation with content trust</title>
<link>http://localhost/engine/security/trust/trust_automation/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/security/trust/trust_automation/</guid>
<description>

<h1 id="automation-with-content-trust">Automation with content trust</h1>

<p>Your automation systems that pull or build images can also work with trust. Any automation environment must set <code>DOCKER_CONTENT_TRUST</code>, either manually or in a scripted fashion, before processing images.</p>

<h2 id="bypass-requests-for-passphrases">Bypass requests for passphrases</h2>

<p>To allow tools to wrap docker and push trusted content, there are two
environment variables that allow you to provide the passphrases without an
expect script, or typing them in:</p>

<ul>
<li><code>DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE</code></li>
<li><code>DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE</code></li>
</ul>

<p>Docker attempts to use the contents of these environment variables as the
passphrases for the keys. For example, an image publisher can export the repository <code>target</code>
and <code>snapshot</code> passphrases:</p>

<pre><code class="language-bash">$ export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE=&quot;u7pEQcGoebUHm6LHe6&quot;
$ export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=&quot;l7pEQcTKJjUHm6Lpe4&quot;
</code></pre>

<p>Then, when you push a new tag, the Docker client does not request these values but signs automatically:</p>

<pre><code class="language-bash">$ docker push docker/trusttest:latest
The push refers to a repository [docker.io/docker/trusttest] (len: 1)
a9539b34a6ab: Image already exists
b3dbab3810fc: Image already exists
latest: digest: sha256:d149ab53f871 size: 3355
Signing and pushing trust metadata
</code></pre>

<h2 id="building-with-content-trust">Building with content trust</h2>

<p>You can also build with content trust. Before running the <code>docker build</code> command, you should set the environment variable <code>DOCKER_CONTENT_TRUST</code>, either manually or in a scripted fashion. Consider the simple Dockerfile below.</p>

<pre><code class="language-Dockerfile">FROM docker/trusttest:latest
RUN echo
</code></pre>

<p>The <code>FROM</code> tag is pulling a signed image. You cannot build an image that has a
<code>FROM</code> that is not either present locally or signed. Given that content trust
data exists for the tag <code>latest</code>, the following build should succeed:</p>

<pre><code class="language-bash">$ docker build -t docker/trusttest:testing .
Using default tag: latest
latest: Pulling from docker/trusttest

b3dbab3810fc: Pull complete
a9539b34a6ab: Pull complete
Digest: sha256:d149ab53f871
</code></pre>

<p>If content trust is enabled, building from a Dockerfile that relies on a tag without trust data causes the build command to fail:</p>

<pre><code class="language-bash">$ docker build -t docker/trusttest:testing .
unable to process Dockerfile: No trust data for notrust
</code></pre>

<h2 id="related-information">Related information</h2>

<ul>
<li><a href="../engine/security/trust/content_trust/">Content trust in Docker</a></li>
<li><a href="../engine/security/trust/trust_key_mng/">Manage keys for content trust</a></li>
<li><a href="../engine/security/trust/trust_sandbox/">Play in a content trust sandbox</a></li>
</ul>
</description>
</item>

<item>
<title>BTRFS storage in practice</title>
<link>http://localhost/engine/userguide/storagedriver/btrfs-driver/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/userguide/storagedriver/btrfs-driver/</guid>
<description>

<h1 id="docker-and-btrfs-in-practice">Docker and BTRFS in practice</h1>

<p>Btrfs is a next generation copy-on-write filesystem that supports many advanced
storage technologies that make it a good fit for Docker. Btrfs is included in
the mainline Linux kernel and its on-disk format is now considered stable.
However, many of its features are still under heavy development and users should
consider it a fast-moving target.</p>

<p>Docker&rsquo;s <code>btrfs</code> storage driver leverages many Btrfs features for image and
container management. Among these features are thin provisioning, copy-on-write,
and snapshotting.</p>

<p>This article refers to Docker&rsquo;s Btrfs storage driver as <code>btrfs</code> and the overall Btrfs filesystem as Btrfs.</p>

<blockquote>
<p><strong>Note</strong>: The <a href="https://www.docker.com/compatibility-maintenance">Commercially Supported Docker Engine (CS-Engine)</a> does not currently support the <code>btrfs</code> storage driver.</p>
</blockquote>

<h2 id="the-future-of-btrfs">The future of Btrfs</h2>

<p>Btrfs has long been hailed as the future of Linux filesystems. With full support in the mainline Linux kernel, a stable on-disk format, and active development with a focus on stability, this is now becoming more of a reality.</p>

<p>As far as Docker on the Linux platform goes, many people see the <code>btrfs</code> storage driver as a potential long-term replacement for the <code>devicemapper</code> storage driver. However, at the time of writing, the <code>devicemapper</code> storage driver should be considered safer, more stable, and more <em>production ready</em>. You should only consider the <code>btrfs</code> driver for production deployments if you understand it well and have existing experience with Btrfs.</p>

<h2 id="image-layering-and-sharing-with-btrfs">Image layering and sharing with Btrfs</h2>

<p>Docker leverages Btrfs <em>subvolumes</em> and <em>snapshots</em> for managing the on-disk components of image and container layers. Btrfs subvolumes look and feel like a normal Unix filesystem. As such, they can have their own internal directory structure that hooks into the wider Unix filesystem.</p>

<p>Subvolumes are natively copy-on-write and have space allocated to them on demand
from an underlying storage pool. They can also be nested and snapshotted. The
diagram below shows 4 subvolumes. &lsquo;Subvolume 2&rsquo; and &lsquo;Subvolume 3&rsquo; are nested,
whereas &lsquo;Subvolume 4&rsquo; shows its own internal directory tree.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_subvolume.jpg" alt="" /></p>

<p>Snapshots are point-in-time read-write copies of an entire subvolume. They exist directly below the subvolume they were created from. You can create snapshots of snapshots as shown in the diagram below.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_snapshots.jpg" alt="" /></p>

<p>Btrfs allocates space to subvolumes and snapshots on demand from an underlying pool of storage. The unit of allocation is referred to as a <em>chunk</em>, and <em>chunks</em> are normally ~1GB in size.</p>

<p>Snapshots are first-class citizens in a Btrfs filesystem. This means that they look, feel, and operate just like regular subvolumes. The technology required to create them is built directly into the Btrfs filesystem thanks to its native copy-on-write design. This means that Btrfs snapshots are space efficient with little or no performance overhead. The diagram below shows a subvolume and its snapshot sharing the same data.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_pool.jpg" alt="" /></p>

<p>Docker&rsquo;s <code>btrfs</code> storage driver stores every image layer and container in its own Btrfs subvolume or snapshot. The base layer of an image is stored as a subvolume whereas child image layers and containers are stored as snapshots. This is shown in the diagram below.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_container_layer.jpg" alt="" /></p>

<p>The high level process for creating images and containers on Docker hosts running the <code>btrfs</code> driver is as follows:</p>

<ol>
<li><p>The image&rsquo;s base layer is stored in a Btrfs subvolume under
<code>/var/lib/docker/btrfs/subvolumes</code>.</p>

<p>The image ID is used as the subvolume name. For example, a base layer with image ID
&ldquo;f9a9f253f6105141e0f8e091a6bcdb19e3f27af949842db93acba9048ed2410b&rdquo; is
stored in
<code>/var/lib/docker/btrfs/subvolumes/f9a9f253f6105141e0f8e091a6bcdb19e3f27af949842db93acba9048ed2410b</code>.</p></li>

<li><p>Subsequent image layers are stored as a Btrfs snapshot of the parent layer&rsquo;s subvolume or snapshot.</p>

<p>The diagram below shows a three-layer image. The base layer is a subvolume. Layer 1 is a snapshot of the base layer&rsquo;s subvolume. Layer 2 is a snapshot of Layer 1&rsquo;s snapshot.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_constructs.jpg" alt="" /></p></li>
</ol>

<h2 id="image-and-container-on-disk-constructs">Image and container on-disk constructs</h2>

<p>Image layers and containers are visible in the Docker host&rsquo;s filesystem at
<code>/var/lib/docker/btrfs/subvolumes/&lt;image-id or container-id&gt;</code>. Directories for
containers are present even for stopped containers. This is
because the <code>btrfs</code> storage driver mounts a default, top-level subvolume at
<code>/var/lib/docker/subvolumes</code>. All other subvolumes and snapshots exist below
that as Btrfs filesystem objects and not as individual mounts.</p>

<p>The following example shows a single Docker image with four image layers.</p>

<pre><code class="language-bash">$ sudo docker images -a
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              latest              0a17decee413        2 weeks ago         188.3 MB
&lt;none&gt;              &lt;none&gt;              3c9a9d7cc6a2        2 weeks ago         188.3 MB
&lt;none&gt;              &lt;none&gt;              eeb7cb91b09d        2 weeks ago         188.3 MB
&lt;none&gt;              &lt;none&gt;              f9a9f253f610        2 weeks ago         188.1 MB
</code></pre>

<p>Each image layer exists as a Btrfs subvolume or snapshot with the same name as its image ID, as illustrated by the <code>btrfs subvolume list</code> command shown below:</p>

<pre><code class="language-bash">$ sudo btrfs subvolume list /var/lib/docker
ID 257 gen 9 top level 5 path btrfs/subvolumes/f9a9f253f6105141e0f8e091a6bcdb19e3f27af949842db93acba9048ed2410b
ID 258 gen 10 top level 5 path btrfs/subvolumes/eeb7cb91b09d5de9edb2798301aeedf50848eacc2123e98538f9d014f80f243c
ID 260 gen 11 top level 5 path btrfs/subvolumes/3c9a9d7cc6a235eb2de58ca9ef3551c67ae42a991933ba4958d207b29142902b
ID 261 gen 12 top level 5 path btrfs/subvolumes/0a17decee4139b0de68478f149cc16346f5e711c5ae3bb969895f22dd6723751
</code></pre>

<p>Under the <code>/var/lib/docker/btrfs/subvolumes</code> directory, each of these subvolumes and snapshots is visible as a normal Unix directory:</p>

<pre><code class="language-bash">$ ls -l /var/lib/docker/btrfs/subvolumes/
total 0
drwxr-xr-x 1 root root 132 Oct 16 14:44 0a17decee4139b0de68478f149cc16346f5e711c5ae3bb969895f22dd6723751
drwxr-xr-x 1 root root 132 Oct 16 14:44 3c9a9d7cc6a235eb2de58ca9ef3551c67ae42a991933ba4958d207b29142902b
drwxr-xr-x 1 root root 132 Oct 16 14:44 eeb7cb91b09d5de9edb2798301aeedf50848eacc2123e98538f9d014f80f243c
drwxr-xr-x 1 root root 132 Oct 16 14:44 f9a9f253f6105141e0f8e091a6bcdb19e3f27af949842db93acba9048ed2410b
</code></pre>

<p>Because Btrfs works at the filesystem level and not the block level, each image
and container layer can be browsed in the filesystem using normal Unix commands.
The example below shows a truncated output of an <code>ls -l</code> command against the
image&rsquo;s top layer:</p>

<pre><code class="language-bash">$ ls -l /var/lib/docker/btrfs/subvolumes/0a17decee4139b0de68478f149cc16346f5e711c5ae3bb969895f22dd6723751/
total 0
drwxr-xr-x 1 root root 1372 Oct  9 08:39 bin
drwxr-xr-x 1 root root    0 Apr 10  2014 boot
drwxr-xr-x 1 root root  882 Oct  9 08:38 dev
drwxr-xr-x 1 root root 2040 Oct 12 17:27 etc
drwxr-xr-x 1 root root    0 Apr 10  2014 home
...output truncated...
</code></pre>

<h2 id="container-reads-and-writes-with-btrfs">Container reads and writes with Btrfs</h2>

<p>A container is a space-efficient snapshot of an image. Metadata in the snapshot
points to the actual data blocks in the storage pool. This is the same as with a
subvolume. Therefore, reads performed against a snapshot are essentially the
same as reads performed against a subvolume. As a result, no performance
overhead is incurred from the Btrfs driver.</p>

<p>Writing a new file to a container invokes an allocate-on-demand operation to
allocate a new data block to the container&rsquo;s snapshot. The file is then written to
this new space. The allocate-on-demand operation is native to all writes with
Btrfs and is the same as writing new data to a subvolume. As a result, writing
new files to a container&rsquo;s snapshot operates at native Btrfs speeds.</p>

<p>Updating an existing file in a container causes a copy-on-write operation
(technically <em>redirect-on-write</em>). The driver leaves the original data in place and
allocates new space to the snapshot. The updated data is written to this new
space. Then, the driver updates the filesystem metadata in the snapshot to point
to this new data. The original data is preserved in place for subvolumes and
snapshots further up the tree. This behavior is native to copy-on-write
filesystems like Btrfs and incurs very little overhead.</p>

<p>With Btrfs, writing and updating lots of small files can result in slow performance. More on this later.</p>

<h2 id="configuring-docker-with-btrfs">Configuring Docker with Btrfs</h2>

<p>The <code>btrfs</code> storage driver only operates on a Docker host where <code>/var/lib/docker</code> is mounted as a Btrfs filesystem. The following procedure shows how to configure Btrfs on Ubuntu 14.04 LTS.</p>

<h3 id="prerequisites">Prerequisites</h3>

<p>If you have already used the Docker daemon on your Docker host and have images you want to keep, <code>push</code> them to Docker Hub or your private Docker Trusted Registry before attempting this procedure.</p>

<p>Stop the Docker daemon. Then, ensure that you have a spare block device at <code>/dev/xvdb</code>. The device identifier may be different in your environment and you should substitute your own values throughout the procedure.</p>

<p>The procedure also assumes your kernel has the appropriate Btrfs modules loaded. To verify this, use the following command:</p>

<pre><code class="language-bash">$ cat /proc/filesystems | grep btrfs
</code></pre>

<h3 id="configure-btrfs-on-ubuntu-14-04-lts">Configure Btrfs on Ubuntu 14.04 LTS</h3>

<p>Assuming your system meets the prerequisites, do the following:</p>

<ol>
<li><p>Install the &ldquo;btrfs-tools&rdquo; package.</p>

<pre><code>$ sudo apt-get install btrfs-tools
Reading package lists... Done
Building dependency tree
&lt;output truncated&gt;
</code></pre></li>

<li><p>Create the Btrfs storage pool.</p>

<p>Btrfs storage pools are created with the <code>mkfs.btrfs</code> command. Passing multiple devices to the <code>mkfs.btrfs</code> command creates a pool across all of those devices. Here you create a pool with a single device at <code>/dev/xvdb</code>.</p>

<pre><code>$ sudo mkfs.btrfs -f /dev/xvdb
WARNING! - Btrfs v3.12 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using


Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
fs created label (null) on /dev/xvdb
nodesize 16384 leafsize 16384 sectorsize 4096 size 4.00GiB
Btrfs v3.12
</code></pre>

<p>Be sure to substitute <code>/dev/xvdb</code> with the appropriate device(s) on your
system.</p>

<blockquote>
<p><strong>Warning</strong>: Take note of the warning about Btrfs being experimental. As
noted earlier, Btrfs is not currently recommended for production deployments
unless you already have extensive experience.</p>
</blockquote></li>

<li><p>If it does not already exist, create a directory for the Docker host&rsquo;s local storage area at <code>/var/lib/docker</code>.</p>

<pre><code>$ sudo mkdir /var/lib/docker
</code></pre></li>

<li><p>Configure the system to automatically mount the Btrfs filesystem each time the system boots.</p>

<p>a. Obtain the Btrfs filesystem&rsquo;s UUID.</p>

<pre><code>$ sudo blkid /dev/xvdb
/dev/xvdb: UUID=&quot;a0ed851e-158b-4120-8416-c9b072c8cf47&quot; UUID_SUB=&quot;c3927a64-4454-4eef-95c2-a7d44ac0cf27&quot; TYPE=&quot;btrfs&quot;
</code></pre>

<p>b. Create an <code>/etc/fstab</code> entry to automatically mount <code>/var/lib/docker</code> each time the system boots. You can reference the device by its name or by its UUID; add only one of the following lines.</p>

<pre><code>/dev/xvdb /var/lib/docker btrfs defaults 0 0
UUID=&quot;a0ed851e-158b-4120-8416-c9b072c8cf47&quot; /var/lib/docker btrfs defaults 0 0
</code></pre></li>

<li><p>Mount the new filesystem and verify the operation.</p>

<pre><code>$ sudo mount -a
$ mount
/dev/xvda1 on / type ext4 (rw,discard)
&lt;output truncated&gt;
/dev/xvdb on /var/lib/docker type btrfs (rw)
</code></pre>

<p>The last line in the output above shows <code>/dev/xvdb</code> mounted at <code>/var/lib/docker</code> as Btrfs.</p></li>
</ol>

<p>Now that you have a Btrfs filesystem mounted at <code>/var/lib/docker</code>, the daemon should automatically load with the <code>btrfs</code> storage driver.</p>

<ol>
<li><p>Start the Docker daemon.</p>

<pre><code>$ sudo service docker start
docker start/running, process 2315
</code></pre>

<p>The procedure for starting the Docker daemon may differ depending on the
Linux distribution you are using.</p>

<p>You can start the Docker daemon with the <code>btrfs</code> storage driver by passing
the <code>--storage-driver=btrfs</code> flag to the <code>docker daemon</code> command, or by
adding it to the <code>DOCKER_OPTS</code> line in the Docker config file.</p></li>

<li><p>Verify the storage driver with the <code>docker info</code> command.</p>

<pre><code>$ sudo docker info
Containers: 0
Images: 0
Storage Driver: btrfs
[...]
</code></pre></li>
</ol>

<p>Your Docker host is now configured to use the <code>btrfs</code> storage driver.</p>

<h2 id="btrfs-and-docker-performance">BTRFS and Docker performance</h2>

<p>There are several factors that influence Docker&rsquo;s performance under the <code>btrfs</code> storage driver.</p>

<ul>
<li><p><strong>Page caching</strong>. Btrfs does not support page cache sharing. This means that <em>n</em> containers accessing the same file require <em>n</em> copies to be cached. As a result, the <code>btrfs</code> driver may not be the best choice for PaaS and other high density container use cases.</p></li>

<li><p><strong>Small writes</strong>. Containers performing lots of small writes (including Docker hosts that start and stop many containers) can lead to poor use of Btrfs chunks. This can ultimately lead to out-of-space conditions on your Docker host and stop it working. This is currently a major drawback to using current versions of Btrfs.</p>

<p>If you use the <code>btrfs</code> storage driver, closely monitor the free space on your Btrfs filesystem using the <code>btrfs filesys show</code> command. Do not trust the output of normal Unix commands such as <code>df</code>; always use the Btrfs native commands.</p></li>

<li><p><strong>Sequential writes</strong>. Btrfs writes data to disk via a journaling technique. Because data is effectively written twice, this can reduce sequential write performance by up to half.</p></li>

<li><p><strong>Fragmentation</strong>. Fragmentation is a natural byproduct of copy-on-write filesystems like Btrfs. Many small random writes can compound this issue. It can manifest as CPU spikes on Docker hosts using SSD media and head thrashing on Docker hosts using spinning media. Both of these result in poor performance.</p>

<p>Recent versions of Btrfs allow you to specify <code>autodefrag</code> as a mount option. This mode attempts to detect random writes and defragment them. You should perform your own tests before enabling this option on your Docker hosts. Some tests have shown this option has a negative performance impact on Docker hosts performing lots of small writes (including systems that start and stop many containers).</p></li>

<li><p><strong>Solid State Devices (SSD)</strong>. Btrfs has native optimizations for SSD media. To enable these, mount with the <code>-o ssd</code> mount option. These optimizations include enhanced SSD write performance by avoiding things like <em>seek optimizations</em> that have no use on SSD media.</p>

<p>Btrfs also supports the TRIM/Discard primitives. However, mounting with the <code>-o discard</code> mount option can cause performance issues. Therefore, it is recommended you perform your own tests before using this option.</p></li>

<li><p><strong>Use Data Volumes</strong>. Data volumes provide the best and most predictable performance. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. For this reason, you may want to place heavy write workloads on data volumes.</p></li>
</ul>

<h2 id="related-information">Related Information</h2>

<ul>
<li><a href="../engine/userguide/storagedriver/imagesandcontainers/">Understand images, containers, and storage drivers</a></li>
<li><a href="../engine/userguide/storagedriver/selectadriver/">Select a storage driver</a></li>
<li><a href="../engine/userguide/storagedriver/aufs-driver/">AUFS storage driver in practice</a></li>
<li><a href="../engine/userguide/storagedriver/device-mapper-driver/">Device Mapper storage driver in practice</a></li>
</ul>
</description>
</item>

<item>
<title>Best practices for writing Dockerfiles</title>
<link>http://localhost/engine/articles/dockerfile_best-practices/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/articles/dockerfile_best-practices/</guid>
<description>

<h1 id="best-practices-for-writing-dockerfiles">Best practices for writing Dockerfiles</h1>

<h2 id="overview">Overview</h2>

<p>Docker can build images automatically by reading the instructions from a
<code>Dockerfile</code>, a text file that contains all the commands, in order, needed to
build a given image. <code>Dockerfile</code>s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
<a href="../engine/reference/builder/">Dockerfile Reference</a> page. If
you’re new to writing <code>Dockerfile</code>s, you should start there.</p>

<p>This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
<code>Dockerfile</code>s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you <em>must</em> adhere to these practices).</p>

<p>You can see many of these practices and recommendations in action in the <a href="https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile">buildpack-deps <code>Dockerfile</code></a>.</p>

<blockquote>
<p>Note: for more detailed explanations of any of the Dockerfile commands
mentioned here, visit the <a href="../engine/reference/builder/">Dockerfile Reference</a> page.</p>
</blockquote>

<h2 id="general-guidelines-and-recommendations">General guidelines and recommendations</h2>

<h3 id="containers-should-be-ephemeral">Containers should be ephemeral</h3>

<p>The container produced by the image your <code>Dockerfile</code> defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.</p>

<h3 id="use-a-dockerignore-file">Use a .dockerignore file</h3>

<p>In most cases, it&rsquo;s best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
increase the build&rsquo;s performance, you can exclude files and directories by
adding a <code>.dockerignore</code> file to that directory as well. This file supports
exclusion patterns similar to <code>.gitignore</code> files. For information on creating one,
see the <a href="../engine/reference/builder/#dockerignore-file">.dockerignore file</a>.</p>
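
<p>For example, a minimal <code>.dockerignore</code> for such a directory might look like the following (the entries are illustrative; adjust them to your own project):</p>

<pre><code># Exclude version control metadata and local scratch files
# from the build context
.git
*.md
tmp/
</code></pre>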

<h3 id="avoid-installing-unnecessary-packages">Avoid installing unnecessary packages</h3>

<p>In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.</p>

<h3 id="run-only-one-process-per-container">Run only one process per container</h3>

<p>In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If one service depends on
another service, make use of <a href="../engine/userguide/networking/default_network/dockerlinks/">container linking</a>.</p>

<h3 id="minimize-the-number-of-layers">Minimize the number of layers</h3>

<p>You need to find the balance between readability (and thus long-term
maintainability) of the <code>Dockerfile</code> and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.</p>

<h3 id="sort-multi-line-arguments">Sort multi-line arguments</h3>

<p>Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This will help you avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (<code>\</code>) helps as well.</p>

<p>Here’s an example from the <a href="https://github.com/docker-library/buildpack-deps"><code>buildpack-deps</code> image</a>:</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
  bzr \
  cvs \
  git \
  mercurial \
  subversion
</code></pre>

<h3 id="build-cache">Build cache</h3>

<p>During the process of building an image Docker will step through the
instructions in your <code>Dockerfile</code>, executing each in the order specified.
As each instruction is examined, Docker will look for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all, you can use the <code>--no-cache=true</code>
option on the <code>docker build</code> command.</p>

<p>However, if you do let Docker use its cache then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker will follow are outlined below:</p>

<ul>
<li><p>Starting with a base image that is already in the cache, the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.</p></li>

<li><p>In most cases simply comparing the instruction in the <code>Dockerfile</code> with one
of the child images is sufficient. However, certain instructions require
a little more examination and explanation.</p></li>

<li><p>For the <code>ADD</code> and <code>COPY</code> instructions, the contents of the file(s)
in the image are examined and a checksum is calculated for each file.
The last-modified and last-accessed times of the file(s) are not considered in
these checksums. During the cache lookup, the checksum is compared against the
checksum in the existing images. If anything has changed in the file(s), such
as the contents and metadata, then the cache is invalidated.</p></li>

<li><p>Aside from the <code>ADD</code> and <code>COPY</code> commands, cache checking will not look at the
files in the container to determine a cache match. For example, when processing
a <code>RUN apt-get -y update</code> command the files updated in the container
will not be examined to determine if a cache hit exists. In that case just
the command string itself will be used to find a match.</p></li>
</ul>

<p>Once the cache is invalidated, all subsequent <code>Dockerfile</code> commands will
generate new images and the cache will not be used.</p>
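
<p>The checksum rule above can be sketched in shell. This is an illustrative model only, not Docker&rsquo;s actual cache implementation: treat the cache key for a <code>COPY</code> step as a hash of the instruction string plus the file contents, so changing the file changes the key and invalidates the cache for that step.</p>

```shell
# Model a build-cache key as: sha256(instruction string + file contents).
# Sketch of the idea only, not Docker's real implementation.
printf 'hello\n' > app.txt
key1=$({ printf 'COPY app.txt /app/'; cat app.txt; } | sha256sum | cut -d' ' -f1)

# Editing the file changes the checksum, so the derived key changes too,
# which models the cache being invalidated for that step.
printf 'hello world\n' > app.txt
key2=$({ printf 'COPY app.txt /app/'; cat app.txt; } | sha256sum | cut -d' ' -f1)

if [ "$key1" != "$key2" ]; then
  echo "cache invalidated"
fi
rm -f app.txt
```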

<h2 id="the-dockerfile-instructions">The Dockerfile instructions</h2>

<p>Below you&rsquo;ll find recommendations for the best way to write the
various instructions available for use in a <code>Dockerfile</code>.</p>

<h3 id="from">FROM</h3>

<p><a href="../engine/reference/builder/#from">Dockerfile reference for the FROM instruction</a></p>

<p>Whenever possible, use current Official Repositories as the basis for your
image. We recommend the <a href="https://registry.hub.docker.com/_/debian/">Debian image</a>
since it’s very tightly controlled and kept extremely minimal (currently under
100 MB), while still being a full distribution.</p>

<h3 id="run">RUN</h3>

<p><a href="../engine/reference/builder/#run">Dockerfile reference for the RUN instruction</a></p>

<p>As always, to make your <code>Dockerfile</code> more readable, understandable, and
maintainable, split long or complex <code>RUN</code> statements on multiple lines separated
with backslashes.</p>

<h3 id="apt-get">apt-get</h3>

<p>Probably the most common use-case for <code>RUN</code> is an application of <code>apt-get</code>. The
<code>RUN apt-get</code> command, because it installs packages, has several gotchas to look
out for.</p>

<p>You should avoid <code>RUN apt-get upgrade</code> or <code>dist-upgrade</code>, as many of the
“essential” packages from the base images won&rsquo;t upgrade inside an unprivileged
container. If a package contained in the base image is out-of-date, you should
contact its maintainers.
If you know there’s a particular package, <code>foo</code>, that needs to be updated, use
<code>apt-get install -y foo</code> to update it automatically.</p>

<p>Always combine <code>RUN apt-get update</code> with <code>apt-get install</code> in the same <code>RUN</code>
statement, for example:</p>

<pre><code> RUN apt-get update &amp;&amp; apt-get install -y \
     package-bar \
     package-baz \
     package-foo
</code></pre>

<p>Using <code>apt-get update</code> alone in a <code>RUN</code> statement causes caching issues and
can make subsequent <code>apt-get install</code> instructions fail.
For example, say you have a Dockerfile:</p>

<pre><code> FROM ubuntu:14.04
 RUN apt-get update
 RUN apt-get install -y curl
</code></pre>

<p>After building the image, all layers are in the Docker cache. Suppose you later
modify <code>apt-get install</code> by adding an extra package:</p>

<pre><code> FROM ubuntu:14.04
 RUN apt-get update
 RUN apt-get install -y curl nginx
</code></pre>

<p>Docker sees the initial and modified instructions as identical and reuses the
cache from previous steps. As a result, the <code>apt-get update</code> is <em>NOT</em> executed
because the build uses the cached version. Because the <code>apt-get update</code> is not
run, your build can potentially get an outdated version of the <code>curl</code> and <code>nginx</code>
packages.</p>

<p>Using <code>RUN apt-get update &amp;&amp; apt-get install -y</code> ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as &ldquo;cache busting&rdquo;. You can also achieve
cache-busting by specifying a package version. This is known as version pinning,
for example:</p>

<pre><code> RUN apt-get update &amp;&amp; apt-get install -y \
     package-bar \
     package-baz \
     package-foo=1.3.*
</code></pre>

<p>Version pinning forces the build to retrieve a particular version regardless of
what’s in the cache. This technique can also reduce failures due to unanticipated changes
in required packages.</p>

<p>Below is a well-formed <code>RUN</code> instruction that demonstrates all the <code>apt-get</code>
recommendations.</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
    aufs-tools \
    automake \
    build-essential \
    curl \
    dpkg-sig \
    libcap-dev \
    libsqlite3-dev \
    lxc=1.0* \
    mercurial \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.* \
 &amp;&amp; apt-get clean \
 &amp;&amp; rm -rf /var/lib/apt/lists/*
</code></pre>

<p>The <code>s3cmd</code> instruction specifies the version <code>1.1.*</code>. If the image previously
used an older version, specifying the new one causes a cache bust of <code>apt-get
update</code> and ensures the installation of the new version. Listing packages on
each line can also prevent mistakes in package duplication.</p>

<p>In addition, cleaning up the apt cache and removing <code>/var/lib/apt/lists</code> helps
keep the image size down. Since the <code>RUN</code> statement starts with
<code>apt-get update</code>, the package cache will always be refreshed prior to
<code>apt-get install</code>.</p>

<h3 id="cmd">CMD</h3>

<p><a href="../engine/reference/builder/#cmd">Dockerfile reference for the CMD instruction</a></p>

<p>The <code>CMD</code> instruction should be used to run the software contained by your
image, along with any arguments. <code>CMD</code> should almost always be used in the
form of <code>CMD [“executable”, “param1”, “param2”…]</code>. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
<code>CMD [&quot;apache2&quot;,&quot;-DFOREGROUND&quot;]</code>. Indeed, this form of the instruction is
recommended for any service-based image.</p>

<p>In most other cases, <code>CMD</code> should be given an interactive shell (bash, python,
perl, etc), for example, <code>CMD [&quot;perl&quot;, &quot;-de0&quot;]</code>, <code>CMD [&quot;python&quot;]</code>, or
<code>CMD [“php”, “-a”]</code>. Using this form means that when you execute something like
<code>docker run -it python</code>, you’ll get dropped into a usable shell, ready to go.
<code>CMD</code> should rarely be used in the manner of <code>CMD [“param”, “param”]</code> in
conjunction with <a href="../engine/reference/builder/#entrypoint"><code>ENTRYPOINT</code></a>, unless
you and your expected users are already quite familiar with how <code>ENTRYPOINT</code>
works.</p>

<h3 id="expose">EXPOSE</h3>

<p><a href="../engine/reference/builder/#expose">Dockerfile reference for the EXPOSE instruction</a></p>

<p>The <code>EXPOSE</code> instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use <code>EXPOSE 80</code>, while an image containing MongoDB would use <code>EXPOSE 27017</code> and
so on.</p>

<p>For external access, your users can execute <code>docker run</code> with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (i.e., <code>MYSQL_PORT_3306_TCP</code>).</p>

<h3 id="env">ENV</h3>

<p><a href="../engine/reference/builder/#env">Dockerfile reference for the ENV instruction</a></p>

<p>In order to make new software easier to run, you can use <code>ENV</code> to update the
<code>PATH</code> environment variable for the software your container installs. For
example, <code>ENV PATH /usr/local/nginx/bin:$PATH</code> will ensure that <code>CMD [“nginx”]</code>
just works.</p>

<p>The <code>ENV</code> instruction is also useful for providing required environment
variables specific to services you wish to containerize, such as Postgres’s
<code>PGDATA</code>.</p>

<p>Lastly, <code>ENV</code> can also be used to set commonly used version numbers so that
version bumps are easier to maintain, as seen in the following example:</p>

<pre><code>ENV PG_MAJOR 9.3
ENV PG_VERSION 9.3.4
RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres &amp;&amp; …
ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH
</code></pre>

<p>Similar to having constant variables in a program (as opposed to hard-coding
values), this approach lets you change a single <code>ENV</code> instruction to
auto-magically bump the version of the software in your container.</p>

<h3 id="add-or-copy">ADD or COPY</h3>

<p><a href="../engine/reference/builder/#add">Dockerfile reference for the ADD instruction</a><br/>
<a href="../engine/reference/builder/#copy">Dockerfile reference for the COPY instruction</a></p>

<p>Although <code>ADD</code> and <code>COPY</code> are functionally similar, generally speaking, <code>COPY</code>
is preferred. That’s because it’s more transparent than <code>ADD</code>. <code>COPY</code> only
supports the basic copying of local files into the container, while <code>ADD</code> has
some features (like local-only tar extraction and remote URL support) that are
not immediately obvious. Consequently, the best use for <code>ADD</code> is local tar file
auto-extraction into the image, as in <code>ADD rootfs.tar.xz /</code>.</p>

<p>If you have multiple <code>Dockerfile</code> steps that use different files from your
context, <code>COPY</code> them individually, rather than all at once. This will ensure that
each step&rsquo;s build cache is only invalidated (forcing the step to be re-run) if the
specifically required files change.</p>

<p>For example:</p>

<pre><code>COPY requirements.txt /tmp/
RUN pip install /tmp/requirements.txt
COPY . /tmp/
</code></pre>

<p>This results in fewer cache invalidations for the <code>RUN</code> step than if you put the
<code>COPY . /tmp/</code> before it.</p>

<p>Because image size matters, using <code>ADD</code> to fetch packages from remote URLs is
strongly discouraged; you should use <code>curl</code> or <code>wget</code> instead. That way you can
delete the files you no longer need after they&rsquo;ve been extracted and you won&rsquo;t
have to add another layer in your image. For example, you should avoid doing
things like:</p>

<pre><code>ADD http://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all
</code></pre>

<p>And instead, do something like:</p>

<pre><code>RUN mkdir -p /usr/src/things \
    &amp;&amp; curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    &amp;&amp; make -C /usr/src/things all
</code></pre>

<p>For other items (files, directories) that do not require <code>ADD</code>’s tar
auto-extraction capability, you should always use <code>COPY</code>.</p>

<h3 id="entrypoint">ENTRYPOINT</h3>

<p><a href="../engine/reference/builder/#entrypoint">Dockerfile reference for the ENTRYPOINT instruction</a></p>

<p>The best use for <code>ENTRYPOINT</code> is to set the image&rsquo;s main command, allowing that
image to be run as though it was that command (and then use <code>CMD</code> as the
default flags).</p>

<p>Let&rsquo;s start with an example of an image for the command line tool <code>s3cmd</code>:</p>

<pre><code>ENTRYPOINT [&quot;s3cmd&quot;]
CMD [&quot;--help&quot;]
</code></pre>

<p>Now the image can be run like this to show the command&rsquo;s help:</p>

<pre><code>$ docker run s3cmd
</code></pre>

<p>Or using the right parameters to execute a command:</p>

<pre><code>$ docker run s3cmd ls s3://mybucket
</code></pre>

<p>This is useful because the image name can double as a reference to the binary as
shown in the command above.</p>

<p>The <code>ENTRYPOINT</code> instruction can also be used in combination with a helper
script, allowing it to function in a similar way to the command above, even
when starting the tool may require more than one step.</p>

<p>For example, the <a href="https://registry.hub.docker.com/_/postgres/">Postgres Official Image</a>
uses the following script as its <code>ENTRYPOINT</code>:</p>

<pre><code class="language-bash">#!/bin/bash
set -e

if [ &quot;$1&quot; = 'postgres' ]; then
    chown -R postgres &quot;$PGDATA&quot;

    if [ -z &quot;$(ls -A &quot;$PGDATA&quot;)&quot; ]; then
        gosu postgres initdb
    fi

    exec gosu postgres &quot;$@&quot;
fi

exec &quot;$@&quot;
</code></pre>

<blockquote>
<p><strong>Note</strong>:
This script uses <a href="http://wiki.bash-hackers.org/commands/builtin/exec">the <code>exec</code> Bash command</a>
so that the final running application becomes the container&rsquo;s PID 1. This allows
the application to receive any Unix signals sent to the container.
See the <a href="../engine/reference/builder/#entrypoint"><code>ENTRYPOINT</code></a>
help for more details.</p>
</blockquote>

<p>The helper script is copied into the container and run via <code>ENTRYPOINT</code> on
container start:</p>

<pre><code>COPY ./docker-entrypoint.sh /
ENTRYPOINT [&quot;/docker-entrypoint.sh&quot;]
</code></pre>

<p>This script allows the user to interact with Postgres in several ways.</p>

<p>It can simply start Postgres:</p>

<pre><code>$ docker run postgres
</code></pre>

<p>Or, it can be used to run Postgres and pass parameters to the server:</p>

<pre><code>$ docker run postgres postgres --help
</code></pre>

<p>Lastly, it could also be used to start a totally different tool, such as Bash:</p>

<pre><code>$ docker run --rm -it postgres bash
</code></pre>

<h3 id="volume">VOLUME</h3>

<p><a href="../engine/reference/builder/#volume">Dockerfile reference for the VOLUME instruction</a></p>

<p>The <code>VOLUME</code> instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your docker container. You
are strongly encouraged to use <code>VOLUME</code> for any mutable and/or user-serviceable
parts of your image.</p>

<h3 id="user">USER</h3>

<p><a href="../engine/reference/builder/#user">Dockerfile reference for the USER instruction</a></p>

<p>If a service can run without privileges, use <code>USER</code> to change to a non-root
user. Start by creating the user and group in the <code>Dockerfile</code> with something
like <code>RUN groupadd -r postgres &amp;&amp; useradd -r -g postgres postgres</code>.</p>

<blockquote>
<p><strong>Note:</strong> Users and groups in an image get a non-deterministic
UID/GID in that the “next” UID/GID gets assigned regardless of image
rebuilds. So, if it’s critical, you should assign an explicit UID/GID.</p>
</blockquote>
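
<p>For example, a <code>Dockerfile</code> fragment that pins an explicit UID/GID might look like this (the value <code>999</code> is an illustrative choice, not a requirement):</p>

<pre><code># Pin the UID and GID so rebuilds produce the same IDs
RUN groupadd -r -g 999 postgres &amp;&amp; useradd -r -u 999 -g postgres postgres
USER postgres
</code></pre>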

<p>You should avoid installing or using <code>sudo</code> since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to <code>sudo</code> (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
<a href="https://github.com/tianon/gosu">“gosu”</a>.</p>

<p>Lastly, to reduce layers and complexity, avoid switching <code>USER</code> back
and forth frequently.</p>

<h3 id="workdir">WORKDIR</h3>

<p><a href="../engine/reference/builder/#workdir">Dockerfile reference for the WORKDIR instruction</a></p>

<p>For clarity and reliability, you should always use absolute paths for your
<code>WORKDIR</code>. Also, you should use <code>WORKDIR</code> instead of proliferating
instructions like <code>RUN cd … &amp;&amp; do-something</code>, which are hard to read,
troubleshoot, and maintain.</p>

<h3 id="onbuild">ONBUILD</h3>

<p><a href="../engine/reference/builder/#onbuild">Dockerfile reference for the ONBUILD instruction</a></p>

<p>An <code>ONBUILD</code> command executes after the current <code>Dockerfile</code> build completes.
<code>ONBUILD</code> executes in any child image derived <code>FROM</code> the current image. Think
of the <code>ONBUILD</code> command as an instruction the parent <code>Dockerfile</code> gives
to the child <code>Dockerfile</code>.</p>

<p>A Docker build executes <code>ONBUILD</code> commands before any command in a child
<code>Dockerfile</code>.</p>

<p><code>ONBUILD</code> is useful for images that are going to be built <code>FROM</code> a given
image. For example, you would use <code>ONBUILD</code> for a language stack image that
builds arbitrary user software written in that language within the
<code>Dockerfile</code>, as you can see in <a href="https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile">Ruby’s <code>ONBUILD</code> variants</a>.</p>

<p>Images built from <code>ONBUILD</code> should get a separate tag, for example:
<code>ruby:1.9-onbuild</code> or <code>ruby:2.0-onbuild</code>.</p>

<p>Be careful when putting <code>ADD</code> or <code>COPY</code> in <code>ONBUILD</code>. The “onbuild” image will
fail catastrophically if the new build&rsquo;s context is missing the resource being
added. Adding a separate tag, as recommended above, will help mitigate this by
allowing the <code>Dockerfile</code> author to make a choice.</p>

<h2 id="examples-for-official-repositories">Examples for Official Repositories</h2>

<p>These Official Repositories have exemplary <code>Dockerfile</code>s:</p>

<ul>
<li><a href="https://registry.hub.docker.com/_/golang/">Go</a></li>
<li><a href="https://registry.hub.docker.com/_/perl/">Perl</a></li>
<li><a href="https://registry.hub.docker.com/_/hylang/">Hy</a></li>
<li><a href="https://registry.hub.docker.com/_/rails">Rails</a></li>
</ul>

<h2 id="additional-resources">Additional resources:</h2>

<ul>
<li><a href="../engine/reference/builder/">Dockerfile Reference</a></li>
<li><a href="../engine/articles/baseimages/">More about Base Images</a></li>
<li><a href="https://docs.docker.com/docker-hub/builds/">More about Automated Builds</a></li>
<li><a href="https://docs.docker.com/docker-hub/official_repos/">Guidelines for Creating Official
Repositories</a></li>
</ul>
</description>
</item>

<item>
<title>Bind container ports to the host</title>
<link>http://localhost/engine/userguide/networking/default_network/binding/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/engine/userguide/networking/default_network/binding/</guid>
<description>

<h1 id="bind-container-ports-to-the-host">Bind container ports to the host</h1>

<p>The information in this section explains binding container ports within the Docker default bridge. This is a <code>bridge</code> network named <code>bridge</code> created automatically when you install Docker.</p>

<blockquote>
<p><strong>Note</strong>: The <a href="../engine/userguide/networking/dockernetworks/">Docker networks feature</a> allows you to
create user-defined networks in addition to the default bridge network.</p>
</blockquote>

<p>By default Docker containers can make connections to the outside world, but the
outside world cannot connect to containers. Each outgoing connection will
appear to originate from one of the host machine&rsquo;s own IP addresses thanks to an
<code>iptables</code> masquerading rule on the host machine that the Docker server creates
when it starts:</p>

<pre><code>$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target      prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
...
</code></pre>

<p>The Docker server creates a masquerade rule that lets containers connect to IP
addresses in the outside world.</p>

<p>If you want containers to accept incoming connections, you will need to provide
special options when invoking <code>docker run</code>. There are two approaches.</p>

<p>First, you can supply <code>-P</code> or <code>--publish-all=true|false</code> to <code>docker run</code> which
is a blanket operation that identifies every port with an <code>EXPOSE</code> line in the
image&rsquo;s <code>Dockerfile</code> or <code>--expose &lt;port&gt;</code> commandline flag and maps it to a host
port somewhere within an <em>ephemeral port range</em>. The <code>docker port</code> command then
needs to be used to inspect the created mapping. The <em>ephemeral port range</em> is
configured by the <code>/proc/sys/net/ipv4/ip_local_port_range</code> kernel parameter,
typically ranging from 32768 to 61000.</p>

<p>Mapping can be specified explicitly using the <code>-p SPEC</code> or <code>--publish=SPEC</code> option.
It lets you specify which port on the Docker server (which can be any
port at all, not just one within the <em>ephemeral port range</em>) you want mapped
to which port in the container.</p>
<p>Either way, you should be able to peek at what Docker has accomplished in your
network stack by examining your NAT tables.</p>
<pre><code># What your NAT rules might look like when Docker
# is finished setting up a -P forward:

$ iptables -t nat -L -n
...
Chain DOCKER (2 references)
target prot opt source               destination
DNAT   tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:49153 to:172.17.0.2:80

# What your NAT rules might look like when Docker
# is finished setting up a -p 80:80 forward:

Chain DOCKER (2 references)
target prot opt source               destination
DNAT   tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80
</code></pre>
<p>You can see that Docker has exposed these container ports on <code>0.0.0.0</code>, the
wildcard IP address that will match an incoming connection on any of the host
machine&rsquo;s addresses. If you want to be more restrictive and only allow container services to
be contacted through a specific external interface on the host machine, you have
two choices. When you invoke <code>docker run</code> you can use either <code>-p
IP:host_port:container_port</code> or <code>-p IP::port</code> to specify the external interface
for one particular binding.</p>
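<p>The accepted <code>-p</code> shapes can be sketched with a small helper that needs no
Docker daemon at all; the function name is hypothetical and only illustrates how
the spec forms differ:</p>

<pre><code># Classify a publish spec by its shape. The explicit forms are
# host_port:container_port, ip:host_port:container_port, and
# ip::container_port (ephemeral host port, bound to one interface).
classify_publish_spec() {
  case "$1" in
    *.*.*.*::*)  echo "ip::container_port" ;;
    *.*.*.*:*:*) echo "ip:host_port:container_port" ;;
    *:*)         echo "host_port:container_port" ;;
    *)           echo "container_port only" ;;
  esac
}

classify_publish_spec 8080:80            # -&gt; host_port:container_port
classify_publish_spec 127.0.0.1:8080:80  # -&gt; ip:host_port:container_port
classify_publish_spec 127.0.0.1::80      # -&gt; ip::container_port
</code></pre>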
<p>Or if you always want Docker port forwards to bind to one specific IP address,
you can edit your system-wide Docker server settings and add the option
<code>--ip=IP_ADDRESS</code>. Remember to restart your Docker server after editing this
setting.</p>
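<p>If your Docker version supports a daemon configuration file, the same default
binding address can be set there instead of on the command line (the address
below is illustrative):</p>

<pre><code>{
    "ip": "192.168.0.10"
}
</code></pre>

<p>Save this as <code>/etc/docker/daemon.json</code> and restart the Docker daemon for
the change to take effect.</p>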
<blockquote>
<p><strong>Note</strong>: With hairpin NAT enabled (<code>--userland-proxy=false</code>), container port
exposure is achieved purely through iptables rules, and no attempt is ever made
to bind the exposed port. This means that nothing prevents a container&rsquo;s exposed
port from shadowing a service that was already listening on that port outside of
Docker. In such a conflict, the iptables rules created by Docker take precedence
and route traffic to the container.</p>
</blockquote>
<p>The <code>--userland-proxy</code> parameter, true by default, provides a userland
implementation for inter-container and outside-to-container communication. When
it is disabled, Docker instead uses both an additional <code>MASQUERADE</code> iptables rule and the
<code>net.ipv4.route_localnet</code> kernel parameter, which together allow the host machine to
connect to a local container&rsquo;s exposed port through the commonly used loopback
address: this alternative is preferred for performance reasons.</p>
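<p>The second of those two mechanisms can be observed directly; the sysctl below
exists on any modern Linux host, though whether it has been changed depends on
your daemon configuration and which interface you inspect:</p>

<pre><code># route_localnet: 1 means the kernel will route traffic addressed to
# 127.0.0.0/8 that arrives on this interface; 0 is the kernel default.
cat /proc/sys/net/ipv4/conf/all/route_localnet
</code></pre>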
<h2 id="related-information">Related information</h2>

<ul>
<li><a href="../engine/userguide/networking/dockernetworks/">Understand Docker container networks</a></li>
<li><a href="../engine/userguide/networking/work-with-networks/">Work with network commands</a></li>
<li><a href="../engine/userguide/networking/default_network/dockerlinks/">Legacy container links</a></li>
</ul>
</description>
</item>

</channel>
</rss>