mirror of
https://github.com/docker/docs.git
synced 2026-04-05 10:48:55 +07:00
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Engines on Docker Docs</title>
<link>http://docs-stage.docker.com/engine/</link>
<description>Recent content in Engines on Docker Docs</description>
<generator>Hugo -- gohugo.io</generator>
<language>en-us</language>
<atom:link href="http://docs-stage.docker.com/engine/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>API Reference</title>
<link>http://docs-stage.docker.com/engine/reference/api/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://docs-stage.docker.com/engine/reference/api/</guid>
<description>
<h1 id="api-reference">API Reference</h1>
<ul>
<li><a href="../engine/reference/api/docker_remote_api/">Docker Remote API</a></li>
<li><a href="../engine/reference/api/remote_api_client_libraries/">Docker Remote API client libraries</a></li>
</ul>
</description>
</item>
<item>
<title>AUFS storage driver in practice</title>
<link>http://docs-stage.docker.com/engine/userguide/storagedriver/aufs-driver/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://docs-stage.docker.com/engine/userguide/storagedriver/aufs-driver/</guid>
<description>
<h1 id="docker-and-aufs-in-practice">Docker and AUFS in practice</h1>
<p>AUFS was the first storage driver in use with Docker. As a result, it has a long and close history with Docker, is very stable, has a lot of real-world deployments, and has strong community support. AUFS has several features that make it a good choice for Docker. These features enable:</p>
<ul>
<li>Fast container startup times.</li>
<li>Efficient use of storage.</li>
<li>Efficient use of memory.</li>
</ul>
<p>Despite its capabilities and long history with Docker, some Linux distributions do not support AUFS. This is usually because AUFS is not included in the mainline (upstream) Linux kernel.</p>
<p>The following sections examine some AUFS features and how they relate to Docker.</p>
<h2 id="image-layering-and-sharing-with-aufs">Image layering and sharing with AUFS</h2>
<p>AUFS is a <em>unification filesystem</em>. This means that it takes multiple directories on a single Linux host, stacks them on top of each other, and provides a single unified view. To achieve this, AUFS uses a <em>union mount</em>.</p>
<p>AUFS stacks multiple directories and exposes them as a unified view through a single mount point. All of the directories in the stack, as well as the union mount point, must exist on the same Linux host. AUFS refers to each directory that it stacks as a <em>branch</em>.</p>
<p>Within Docker, AUFS union mounts enable image layering. The AUFS storage driver implements Docker image layers using this union mount system. AUFS branches correspond to Docker image layers. The diagram below shows a Docker container based on the <code>ubuntu:latest</code> image.</p>
<p><img src="../engine/userguide/storagedriver/images/aufs_layers.jpg" alt="" /></p>
<p>This diagram shows that each image layer, and the container layer, is represented in the Docker host&rsquo;s filesystem as a directory under <code>/var/lib/docker/</code>. The union mount point provides the unified view of all layers. As of Docker 1.10, image layer IDs do not correspond to the names of the directories that contain their data.</p>
<p>AUFS also supports copy-on-write (CoW) technology. Not all storage drivers do.</p>
<h2 id="container-reads-and-writes-with-aufs">Container reads and writes with AUFS</h2>
<p>Docker leverages AUFS CoW technology to enable image sharing and minimize the use of disk space. AUFS works at the file level. This means that all AUFS CoW operations copy entire files, even if only a small part of the file is being modified. This behavior can have a noticeable impact on container performance, especially if the files being copied are large, lie beneath many image layers, or the CoW operation must search a deep directory tree.</p>
<p>Consider, for example, an application running in a container that needs to add a single new value to a large key-value store (file). If this is the first time the file is modified, it does not yet exist in the container&rsquo;s top writable layer. So, the CoW operation must <em>copy up</em> the file from the underlying image. The AUFS storage driver searches each image layer for the file. The search order is from top to bottom. When the file is found, it is <em>copied up</em> in its entirety to the container&rsquo;s top writable layer. From there, it can be opened and modified.</p>
<p>Larger files obviously take longer to <em>copy up</em> than smaller files, and files that exist in lower image layers take longer to locate than those in higher layers. However, a <em>copy up</em> operation only occurs once per file on any given container. Subsequent reads and writes happen against the file&rsquo;s copy already <em>copied up</em> to the container&rsquo;s top layer.</p>
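As a rough sketch of this behavior (the container name and file path below are hypothetical, and actual timings depend on file size and layer depth), you can observe the one-time copy-up cost by timing a first and a second write to a large file that lives in a lower image layer:

```shell
# Hypothetical container "mycontainer" whose image contains a large
# file /data/big.db in a lower layer. The first write triggers a
# copy-up of the entire file into the top writable layer; later
# writes hit the already-copied file and are much cheaper.
time docker exec mycontainer sh -c 'echo x >> /data/big.db'   # first write: pays the copy-up cost
time docker exec mycontainer sh -c 'echo x >> /data/big.db'   # second write: no copy-up
```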
<h2 id="deleting-files-with-the-aufs-storage-driver">Deleting files with the AUFS storage driver</h2>
<p>The AUFS storage driver deletes a file from a container by placing a <em>whiteout file</em> in the container&rsquo;s top layer. The whiteout file effectively obscures the existence of the file in the read-only image layers below. The simplified diagram below shows a container based on an image with three image layers.</p>
<p><img src="../engine/userguide/storagedriver/images/aufs_delete.jpg" alt="" /></p>
<p>The <code>file3</code> was deleted from the container. So, the AUFS storage driver placed a whiteout file in the container&rsquo;s top layer. This whiteout file effectively &ldquo;deletes&rdquo; <code>file3</code> from the container by obscuring any of the original file&rsquo;s existence in the image&rsquo;s read-only layers. This works the same no matter which of the image&rsquo;s read-only layers the file exists in.</p>
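A rough way to observe this on an AUFS host (the container name and layer directory are placeholders, and the whiteout naming is an AUFS implementation detail, not a documented Docker interface):

```shell
# Delete a file inside a running container; the lower image layers
# are untouched, but the container's top writable layer gains an
# AUFS whiteout entry that hides the file.
docker exec mycontainer rm /file3
ls -a /var/lib/docker/aufs/diff/<container-layer-id>/
# AUFS whiteout entries are named after the hidden file, e.g. .wh.file3
```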
<h2 id="configure-docker-with-aufs">Configure Docker with AUFS</h2>
<p>You can only use the AUFS storage driver on Linux systems with AUFS installed. Use the following command to determine whether your system supports AUFS.</p>
<pre><code>$ grep aufs /proc/filesystems
nodev aufs
</code></pre>
<p>This output indicates that the system supports AUFS. Once you&rsquo;ve verified your system supports AUFS, you must instruct the Docker daemon to use it. You do this from the command line with the <code>docker daemon</code> command:</p>
<pre><code>$ sudo docker daemon --storage-driver=aufs &amp;
</code></pre>
<p>Alternatively, you can edit the Docker config file and add the <code>--storage-driver=aufs</code> option to the <code>DOCKER_OPTS</code> line.</p>
<pre><code># Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS=&quot;--storage-driver=aufs&quot;
</code></pre>
<p>Once your daemon is running, verify the storage driver with the <code>docker info</code> command.</p>
<pre><code>$ sudo docker info
Containers: 1
Images: 4
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 6
 Dirperm1 Supported: false
Execution Driver: native-0.2
...output truncated...
</code></pre>
<p>The output above shows that the Docker daemon is running the AUFS storage driver on top of an existing <code>ext4</code> backing filesystem.</p>
<h2 id="local-storage-and-aufs">Local storage and AUFS</h2>
<p>As the <code>docker daemon</code> runs with the AUFS driver, the driver stores images and containers within the Docker host&rsquo;s local storage area under <code>/var/lib/docker/aufs/</code>.</p>
<h3 id="images">Images</h3>
<p>Image layers and their contents are stored under <code>/var/lib/docker/aufs/diff/</code>. With Docker 1.10 and higher, image layer IDs do not correspond to directory names.</p>
<p>The <code>/var/lib/docker/aufs/layers/</code> directory contains metadata about how image layers are stacked. This directory contains one file for every image or container layer on the Docker host (though file names no longer match image layer IDs). Inside each file are the names of the directories that exist below it in the stack.</p>
<p>The command below shows the contents of a metadata file in <code>/var/lib/docker/aufs/layers/</code> that lists the three directories that are stacked below it in the union mount. Remember, these directory names do not map to image layer IDs with Docker 1.10 and higher.</p>
<pre><code>$ cat /var/lib/docker/aufs/layers/91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c
d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82
c22013c8472965aa5b62559f2b540cd440716ef149756e7b958a1b2aba421e87
d3a1f33e8a5a513092f01bb7eb1c2abf4d711e5105390a3fe1ae2248cfde1391
</code></pre>
<p>The base layer in an image has no image layers below it, so its file is empty.</p>
<h3 id="containers">Containers</h3>
<p>Running containers are mounted below <code>/var/lib/docker/aufs/mnt/&lt;container-id&gt;</code>. This is where the AUFS union mount point that exposes the container and all underlying image layers as a single unified view exists. If a container is not running, it still has a directory here, but that directory is empty. This is because AUFS only mounts a container when it is running. With Docker 1.10 and higher, container IDs no longer correspond to directory names under <code>/var/lib/docker/aufs/mnt/&lt;container-id&gt;</code>.</p>
<p>Container metadata and various config files that are placed into the running container are stored in <code>/var/lib/docker/containers/&lt;container-id&gt;</code>. Files in this directory exist for all containers on the system, including ones that are stopped. However, when a container is running, the container&rsquo;s log files are also in this directory.</p>
<p>A container&rsquo;s thin writable layer is stored in a directory under <code>/var/lib/docker/aufs/diff/</code>. With Docker 1.10 and higher, container IDs no longer correspond to directory names. However, the container&rsquo;s thin writable layer still exists here: AUFS stacks it as the top writable layer, and it is where all changes to the container are stored. The directory exists even if the container is stopped. This means that restarting a container does not lose changes made to it. Once a container is deleted, its thin writable layer in this directory is deleted.</p>
<h2 id="aufs-and-docker-performance">AUFS and Docker performance</h2>
<p>To summarize some of the performance-related aspects already mentioned:</p>
<ul>
<li><p>The AUFS storage driver is a good choice for PaaS and other similar use cases where container density is important. This is because AUFS efficiently shares images between multiple running containers, enabling fast container start times and minimal use of disk space.</p></li>
<li><p>The underlying mechanics of how AUFS shares files between image layers and containers use the system&rsquo;s page cache very efficiently.</p></li>
<li><p>The AUFS storage driver can introduce significant latencies into container write performance. This is because the first time a container writes to any file, the file has to be located and copied into the container&rsquo;s top writable layer. These latencies increase and are compounded when these files exist below many image layers and the files themselves are large.</p></li>
</ul>
<p>One final point: data volumes provide the best and most predictable performance. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. For this reason, you may want to place heavy write workloads on data volumes.</p>
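For example (the image and mount point here are illustrative), a write-heavy database directory can be placed on a data volume so that its I/O bypasses the AUFS driver entirely:

```shell
# -v /var/lib/mysql mounts that path as a data volume, so writes to
# the database files go straight to the host filesystem instead of
# passing through AUFS copy-on-write.
docker run -d --name db -v /var/lib/mysql mysql
```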
<h2 id="related-information">Related information</h2>
<ul>
<li><a href="../engine/userguide/storagedriver/imagesandcontainers/">Understand images, containers, and storage drivers</a></li>
<li><a href="../engine/userguide/storagedriver/selectadriver/">Select a storage driver</a></li>
<li><a href="../engine/userguide/storagedriver/btrfs-driver/">Btrfs storage driver in practice</a></li>
<li><a href="../engine/userguide/storagedriver/device-mapper-driver/">Device Mapper storage driver in practice</a></li>
</ul>
</description>
</item>
<item>
<title>Access authorization plugin</title>
<link>http://docs-stage.docker.com/engine/extend/plugins_authorization/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://docs-stage.docker.com/engine/extend/plugins_authorization/</guid>
<description>
<h1 id="create-an-authorization-plugin">Create an authorization plugin</h1>
<p>Docker&rsquo;s out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker&rsquo;s remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to the Docker daemon.</p>
<p>Anyone with the appropriate skills can develop an authorization plugin. These skills, at their most basic, are knowledge of Docker, understanding of REST, and sound programming knowledge. This document describes the architecture, state, and methods information available to an authorization plugin developer.</p>
<h2 id="basic-principles">Basic principles</h2>
<p>Docker&rsquo;s <a href="../engine/extend/plugin_api/">plugin infrastructure</a> enables extending Docker by loading, removing, and communicating with third-party components using a generic API. The access authorization subsystem was built using this mechanism.</p>
<p>Using this subsystem, you don&rsquo;t need to rebuild the Docker daemon to add an authorization plugin. You can add a plugin to an installed Docker daemon, but you do need to restart the Docker daemon to activate a new plugin.</p>
<p>An authorization plugin approves or denies requests to the Docker daemon based on both the current authentication context and the command context. The authentication context contains all user details and the authentication method. The command context contains all the relevant request data.</p>
<p>Authorization plugins must follow the rules described in the <a href="../engine/extend/plugin_api/">Docker Plugin API</a>. Each plugin must reside within the directories described under the <a href="../engine/extend/plugin_api/#plugin-discovery">Plugin discovery</a> section.</p>
<p><strong>Note</strong>: The abbreviations <code>AuthZ</code> and <code>AuthN</code> mean authorization and authentication, respectively.</p>
<h2 id="basic-architecture">Basic architecture</h2>
<p>You are responsible for registering your plugin as part of the Docker daemon startup. You can install multiple plugins and chain them together. This chain can be ordered. Each request to the daemon passes in order through the chain. Access is granted only when all of the plugins grant access to the resource.</p>
<p>When an HTTP request is made to the Docker daemon through the CLI or via the remote API, the authentication subsystem passes the request to the installed authorization plugin(s). The request contains the user (caller) and command context. The plugin is responsible for deciding whether to allow or deny the request.</p>
<p>The sequence diagrams below depict an allow and a deny authorization flow:</p>
<p><img src="../engine/extend/images/authz_allow.png" alt="Authorization Allow flow" /></p>
<p><img src="../engine/extend/images/authz_deny.png" alt="Authorization Deny flow" /></p>
<p>Each request sent to the plugin includes the authenticated user, the HTTP headers, and the request/response body. Only the user name and the authentication method used are passed to the plugin. Most importantly, no user credentials or tokens are passed. Finally, not all request/response bodies are sent to the authorization plugin. Only those request/response bodies whose <code>Content-Type</code> is either <code>text/*</code> or <code>application/json</code> are sent.</p>
<p>For commands that can potentially hijack the HTTP connection (<code>HTTP Upgrade</code>), such as <code>exec</code>, the authorization plugin is only called for the initial HTTP requests. Once the plugin approves the command, authorization is not applied to the rest of the flow. Specifically, the streaming data is not passed to the authorization plugins. For commands that return a chunked HTTP response, such as <code>logs</code> and <code>events</code>, only the HTTP request is sent to the authorization plugins.</p>
<p>During request/response processing, some authorization flows might need to do additional queries to the Docker daemon. To complete such flows, plugins can call the daemon API like a regular user. To enable these additional queries, the plugin must provide the means for an administrator to configure proper authentication and security policies.</p>
<h2 id="docker-client-flows">Docker client flows</h2>
<p>To enable and configure the authorization plugin, the plugin developer must support the Docker client interactions detailed in this section.</p>
<h3 id="setting-up-docker-daemon">Setting up Docker daemon</h3>
<p>Enable the authorization plugin with a dedicated command line flag in the <code>--authorization-plugin=PLUGIN_ID</code> format. The flag supplies a <code>PLUGIN_ID</code> value. This value can be the plugin&rsquo;s socket or a path to a specification file.</p>
<pre><code class="language-bash">$ docker daemon --authorization-plugin=plugin1 --authorization-plugin=plugin2,...
</code></pre>
<p>Docker&rsquo;s authorization subsystem supports multiple <code>--authorization-plugin</code> parameters.</p>
<h3 id="calling-authorized-command-allow">Calling authorized command (allow)</h3>
<pre><code class="language-bash">$ docker pull centos
...
f1b10cd84249: Pull complete
...
</code></pre>
<h3 id="calling-unauthorized-command-deny">Calling unauthorized command (deny)</h3>
<pre><code class="language-bash">$ docker pull centos
...
docker: Error response from daemon: authorization denied by plugin PLUGIN_NAME: volumes are not allowed.
</code></pre>
<h3 id="error-from-plugins">Error from plugins</h3>
<pre><code class="language-bash">$ docker pull centos
...
docker: Error response from daemon: plugin PLUGIN_NAME failed with error: AuthZPlugin.AuthZReq: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
</code></pre>
<h2 id="api-schema-and-implementation">API schema and implementation</h2>
<p>In addition to Docker&rsquo;s standard plugin registration method, each plugin should implement the following two methods:</p>
<ul>
<li><p><code>/AuthZPlugin.AuthZReq</code> This authorize request method is called before the Docker daemon processes the client request.</p></li>
<li><p><code>/AuthZPlugin.AuthZRes</code> This authorize response method is called before the response is returned from the Docker daemon to the client.</p></li>
</ul>
<h4 id="authzplugin-authzreq">/AuthZPlugin.AuthZReq</h4>
<p><strong>Request</strong>:</p>
<pre><code class="language-json">{
    &quot;User&quot;: &quot;The user identification&quot;,
    &quot;UserAuthNMethod&quot;: &quot;The authentication method used&quot;,
    &quot;RequestMethod&quot;: &quot;The HTTP method&quot;,
    &quot;RequestURI&quot;: &quot;The HTTP request URI&quot;,
    &quot;RequestBody&quot;: &quot;Byte array containing the raw HTTP request body&quot;,
    &quot;RequestHeader&quot;: &quot;Byte array containing the raw HTTP request header as a map[string][]string&quot;
}
</code></pre>
<p><strong>Response</strong>:</p>
<pre><code class="language-json">{
    &quot;Allow&quot;: &quot;Determines whether the user is allowed or not&quot;,
    &quot;Msg&quot;: &quot;The authorization message&quot;,
    &quot;Err&quot;: &quot;The error message if things go wrong&quot;
}
</code></pre>
<h4 id="authzplugin-authzres">/AuthZPlugin.AuthZRes</h4>
<p><strong>Request</strong>:</p>
<pre><code class="language-json">{
    &quot;User&quot;: &quot;The user identification&quot;,
    &quot;UserAuthNMethod&quot;: &quot;The authentication method used&quot;,
    &quot;RequestMethod&quot;: &quot;The HTTP method&quot;,
    &quot;RequestURI&quot;: &quot;The HTTP request URI&quot;,
    &quot;RequestBody&quot;: &quot;Byte array containing the raw HTTP request body&quot;,
    &quot;RequestHeader&quot;: &quot;Byte array containing the raw HTTP request header as a map[string][]string&quot;,
    &quot;ResponseBody&quot;: &quot;Byte array containing the raw HTTP response body&quot;,
    &quot;ResponseHeader&quot;: &quot;Byte array containing the raw HTTP response header as a map[string][]string&quot;,
    &quot;ResponseStatusCode&quot;: &quot;Response status code&quot;
}
</code></pre>
<p><strong>Response</strong>:</p>
<pre><code class="language-json">{
    &quot;Allow&quot;: &quot;Determines whether the user is allowed or not&quot;,
    &quot;Msg&quot;: &quot;The authorization message&quot;,
    &quot;Err&quot;: &quot;The error message if things go wrong&quot;
}
</code></pre>
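Assuming a plugin listening on a Unix socket (the socket path and payload values below are hypothetical), you can exercise the request-authorization endpoint by hand with <code>curl</code> and inspect the <code>Allow</code>/<code>Msg</code>/<code>Err</code> fields in the JSON reply:

```shell
# Send a minimal AuthZReq payload to a plugin socket; the plugin
# replies with a JSON document containing Allow, Msg, and Err.
curl -s --unix-socket /run/docker/plugins/plugin1.sock \
     -H "Content-Type: application/json" \
     -d '{"User":"alice","RequestMethod":"POST","RequestURI":"/v1.21/containers/create"}' \
     http://localhost/AuthZPlugin.AuthZReq
```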
<h3 id="request-authorization">Request authorization</h3>
<p>Each plugin must support two request authorization message formats, one from the daemon to the plugin and one from the plugin to the daemon. The tables below detail the content expected in each message.</p>
<h4 id="daemon-plugin">Daemon -&gt; Plugin</h4>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>User</td>
<td>string</td>
<td>The user identification</td>
</tr>
<tr>
<td>Authentication method</td>
<td>string</td>
<td>The authentication method used</td>
</tr>
<tr>
<td>Request method</td>
<td>enum</td>
<td>The HTTP method (GET/DELETE/POST)</td>
</tr>
<tr>
<td>Request URI</td>
<td>string</td>
<td>The HTTP request URI including API version (e.g., v.1.17/containers/json)</td>
</tr>
<tr>
<td>Request headers</td>
<td>map[string]string</td>
<td>Request headers as key value pairs (without the authorization header)</td>
</tr>
<tr>
<td>Request body</td>
<td>[]byte</td>
<td>Raw request body</td>
</tr>
</tbody>
</table>
<h4 id="plugin-daemon">Plugin -&gt; Daemon</h4>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Allow</td>
<td>bool</td>
<td>Boolean value indicating whether the request is allowed or denied</td>
</tr>
<tr>
<td>Msg</td>
<td>string</td>
<td>Authorization message (returned to the client if access is denied)</td>
</tr>
<tr>
<td>Err</td>
<td>string</td>
<td>Error message (returned to the client if the plugin encounters an error. The string value supplied may appear in logs, so it should not include confidential information)</td>
</tr>
</tbody>
</table>
<h3 id="response-authorization">Response authorization</h3>
<p>The plugin must support two response authorization message formats, one from the daemon to the plugin and one from the plugin to the daemon. The tables below detail the content expected in each message.</p>
<h4 id="daemon-plugin-1">Daemon -&gt; Plugin</h4>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>User</td>
<td>string</td>
<td>The user identification</td>
</tr>
<tr>
<td>Authentication method</td>
<td>string</td>
<td>The authentication method used</td>
</tr>
<tr>
<td>Request method</td>
<td>string</td>
<td>The HTTP method (GET/DELETE/POST)</td>
</tr>
<tr>
<td>Request URI</td>
<td>string</td>
<td>The HTTP request URI including API version (e.g., v.1.17/containers/json)</td>
</tr>
<tr>
<td>Request headers</td>
<td>map[string]string</td>
<td>Request headers as key value pairs (without the authorization header)</td>
</tr>
<tr>
<td>Request body</td>
<td>[]byte</td>
<td>Raw request body</td>
</tr>
<tr>
<td>Response status code</td>
<td>int</td>
<td>Status code from the docker daemon</td>
</tr>
<tr>
<td>Response headers</td>
<td>map[string]string</td>
<td>Response headers as key value pairs</td>
</tr>
<tr>
<td>Response body</td>
<td>[]byte</td>
<td>Raw docker daemon response body</td>
</tr>
</tbody>
</table>
<h4 id="plugin-daemon-1">Plugin -&gt; Daemon</h4>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Allow</td>
<td>bool</td>
<td>Boolean value indicating whether the response is allowed or denied</td>
</tr>
<tr>
<td>Msg</td>
<td>string</td>
<td>Authorization message (returned to the client if access is denied)</td>
</tr>
<tr>
<td>Err</td>
<td>string</td>
<td>Error message (returned to the client if the plugin encounters an error. The string value supplied may appear in logs, so it should not include confidential information)</td>
</tr>
</tbody>
</table>
</description>
</item>
<item>
<title>Administrate</title>
<link>http://docs-stage.docker.com/engine/admin/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://docs-stage.docker.com/engine/admin/</guid>
<description></description>
</item>
<item>
<title>Amazon CloudWatch Logs logging driver</title>
<link>http://docs-stage.docker.com/engine/admin/logging/awslogs/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://docs-stage.docker.com/engine/admin/logging/awslogs/</guid>
<description>
<h1 id="amazon-cloudwatch-logs-logging-driver">Amazon CloudWatch Logs logging driver</h1>
<p>The <code>awslogs</code> logging driver sends container logs to <a href="https://aws.amazon.com/cloudwatch/details/#log-monitoring">Amazon CloudWatch Logs</a>. Log entries can be retrieved through the <a href="https://console.aws.amazon.com/cloudwatch/home#logs:">AWS Management Console</a> or the <a href="http://docs.aws.amazon.com/cli/latest/reference/logs/index.html">AWS SDKs and Command Line Tools</a>.</p>
<h2 id="usage">Usage</h2>
<p>You can configure the default logging driver by passing the <code>--log-driver</code> option to the Docker daemon:</p>
<pre><code>docker daemon --log-driver=awslogs
</code></pre>
<p>You can set the logging driver for a specific container by using the <code>--log-driver</code> option to <code>docker run</code>:</p>
<pre><code>docker run --log-driver=awslogs ...
</code></pre>
<h2 id="amazon-cloudwatch-logs-options">Amazon CloudWatch Logs options</h2>
<p>You can use the <code>--log-opt NAME=VALUE</code> flag to specify Amazon CloudWatch Logs logging driver options.</p>
<h3 id="awslogs-region">awslogs-region</h3>
<p>The <code>awslogs</code> logging driver sends your Docker logs to a specific region. Use the <code>awslogs-region</code> log option or the <code>AWS_REGION</code> environment variable to set the region. By default, if your Docker daemon is running on an EC2 instance and no region is set, the driver uses the instance&rsquo;s region.</p>
<pre><code>docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 ...
</code></pre>
<h3 id="awslogs-group">awslogs-group</h3>
<p>You must specify a <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html">log group</a> for the <code>awslogs</code> logging driver. You can specify the log group with the <code>awslogs-group</code> log option:</p>
<pre><code>docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup ...
</code></pre>
<h3 id="awslogs-stream">awslogs-stream</h3>
<p>To configure which <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html">log stream</a> should be used, you can specify the <code>awslogs-stream</code> log option. If not specified, the container ID is used as the log stream.</p>
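For example, combining the options above (the group and stream names here are illustrative):

```shell
# Send this container's logs to an explicit CloudWatch Logs group
# and stream instead of the default container-ID stream name.
docker run --log-driver=awslogs \
    --log-opt awslogs-region=us-east-1 \
    --log-opt awslogs-group=myLogGroup \
    --log-opt awslogs-stream=myLogStream ...
```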
<blockquote>
<p><strong>Note:</strong>
Log streams within a given log group should only be used by one container at a time. Using the same log stream for multiple containers concurrently can cause reduced logging performance.</p>
</blockquote>
<h2 id="credentials">Credentials</h2>
<p>You must provide AWS credentials to the Docker daemon to use the <code>awslogs</code> logging driver. You can provide these credentials with the <code>AWS_ACCESS_KEY_ID</code>, <code>AWS_SECRET_ACCESS_KEY</code>, and <code>AWS_SESSION_TOKEN</code> environment variables, the default AWS shared credentials file (<code>~/.aws/credentials</code> of the root user), or (if you are running the Docker daemon on an Amazon EC2 instance) the Amazon EC2 instance profile.</p>
<p>Credentials must have a policy applied that allows the <code>logs:CreateLogStream</code>
|
||
and <code>logs:PutLogEvents</code> actions, as shown in the following example.</p>
|
||
|
||
<pre><code>{
|
||
&quot;Version&quot;: &quot;2012-10-17&quot;,
|
||
&quot;Statement&quot;: [
|
||
{
|
||
&quot;Action&quot;: [
|
||
&quot;logs:CreateLogStream&quot;,
|
||
&quot;logs:PutLogEvents&quot;
|
||
],
|
||
&quot;Effect&quot;: &quot;Allow&quot;,
|
||
&quot;Resource&quot;: &quot;*&quot;
|
||
}
|
||
]
|
||
}
|
||
</code></pre>
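<p>For example, one way to supply credentials is to export the environment variables in the shell that starts the daemon. A minimal sketch (the key values below are the placeholder examples from the AWS documentation, not real credentials):</p>

```shell
# Sketch: export credentials in the shell that will start the daemon
# (placeholder values from the AWS documentation, not real keys)
export AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
export AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
# a daemon started from this shell inherits both variables, e.g.:
#   docker daemon --log-driver=awslogs --log-opt awslogs-region=us-east-1
echo "AWS_ACCESS_KEY_ID is set to $AWS_ACCESS_KEY_ID"
```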
</description>
</item>

<item>
<title>AppArmor security profiles for Docker</title>
<link>http://docs-stage.docker.com/engine/security/apparmor/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/security/apparmor/</guid>
<description>

<h1 id="apparmor-security-profiles-for-docker">AppArmor security profiles for Docker</h1>

<p>AppArmor (Application Armor) is a Linux security module that protects an
operating system and its applications from security threats. To use it, a system
administrator associates an AppArmor security profile with each program. Docker
expects to find an AppArmor policy loaded and enforced.</p>

<p>Docker automatically loads container profiles. The Docker binary installs
a <code>docker-default</code> profile in the <code>/etc/apparmor.d/docker</code> file. This profile
is used on containers, <em>not</em> on the Docker daemon.</p>

<p>A profile for the Docker Engine daemon exists but it is not currently installed
with the <code>deb</code> packages. If you are interested in the source for the daemon
profile, it is located in
<a href="https://github.com/docker/docker/tree/master/contrib/apparmor">contrib/apparmor</a>
in the Docker Engine source repository.</p>

<h2 id="understand-the-policies">Understand the policies</h2>

<p>The <code>docker-default</code> profile is the default for running containers. It is
moderately protective while providing wide application compatibility. The
profile is the following:</p>

<pre><code>#include &lt;tunables/global&gt;

profile docker-default flags=(attach_disconnected,mediate_deleted) {

  #include &lt;abstractions/base&gt;

  network,
  capability,
  file,
  umount,

  deny @{PROC}/{*,**^[0-9*],sys/kernel/shm*} wkx,
  deny @{PROC}/sysrq-trigger rwklx,
  deny @{PROC}/mem rwklx,
  deny @{PROC}/kmem rwklx,
  deny @{PROC}/kcore rwklx,

  deny mount,

  deny /sys/[^f]*/** wklx,
  deny /sys/f[^s]*/** wklx,
  deny /sys/fs/[^c]*/** wklx,
  deny /sys/fs/c[^g]*/** wklx,
  deny /sys/fs/cg[^r]*/** wklx,
  deny /sys/firmware/efi/efivars/** rwklx,
  deny /sys/kernel/security/** rwklx,
}
</code></pre>

<p>When you run a container, it uses the <code>docker-default</code> policy unless you
override it with the <code>--security-opt</code> option. For example, the following
explicitly specifies the default policy:</p>

<pre><code class="language-bash">$ docker run --rm -it --security-opt apparmor=docker-default hello-world
</code></pre>

<h2 id="load-and-unload-profiles">Load and unload profiles</h2>

<p>To load a new profile into AppArmor for use with containers:</p>

<pre><code class="language-bash">$ apparmor_parser -r -W /path/to/your_profile
</code></pre>

<p>Then, run the custom profile with <code>--security-opt</code> like so:</p>

<pre><code class="language-bash">$ docker run --rm -it --security-opt apparmor=your_profile hello-world
</code></pre>

<p>To unload a profile from AppArmor:</p>

<pre><code class="language-bash"># stop apparmor
$ /etc/init.d/apparmor stop
# unload the profile
$ apparmor_parser -R /path/to/profile
# start apparmor
$ /etc/init.d/apparmor start
</code></pre>
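<p>Before loading profiles, it can help to confirm that the running kernel actually has AppArmor enabled. A minimal sketch, assuming the conventional <code>/sys</code> location:</p>

```shell
# Sketch: report whether the running kernel has AppArmor enabled
# (the /sys path is the conventional location on AppArmor kernels)
enabled_file=/sys/module/apparmor/parameters/enabled
if [ -f $enabled_file ]; then
  status=$(cat $enabled_file)    # prints Y when enabled
else
  status=unavailable             # kernel without AppArmor support
fi
echo "AppArmor: $status"
```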
<h3 id="resources-for-writing-profiles">Resources for writing profiles</h3>

<p>The syntax for file globbing in AppArmor differs from that of some other
globbing implementations. Before writing your own profiles, review the
following resources on AppArmor profile syntax.</p>

<ul>
<li><a href="http://wiki.apparmor.net/index.php/QuickProfileLanguage">Quick Profile Language</a></li>
<li><a href="http://wiki.apparmor.net/index.php/AppArmor_Core_Policy_Reference#AppArmor_globbing_syntax">Globbing Syntax</a></li>
</ul>

<h2 id="nginx-example-profile">Nginx example profile</h2>

<p>In this example, you create a custom AppArmor profile for Nginx. Below is the
custom profile.</p>

<pre><code>#include &lt;tunables/global&gt;

profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
  #include &lt;abstractions/base&gt;

  network inet tcp,
  network inet udp,
  network inet icmp,

  deny network raw,

  deny network packet,

  file,
  umount,

  deny /bin/** wl,
  deny /boot/** wl,
  deny /dev/** wl,
  deny /etc/** wl,
  deny /home/** wl,
  deny /lib/** wl,
  deny /lib64/** wl,
  deny /media/** wl,
  deny /mnt/** wl,
  deny /opt/** wl,
  deny /proc/** wl,
  deny /root/** wl,
  deny /sbin/** wl,
  deny /srv/** wl,
  deny /tmp/** wl,
  deny /sys/** wl,
  deny /usr/** wl,

  audit /** w,

  /var/run/nginx.pid w,

  /usr/sbin/nginx ix,

  deny /bin/dash mrwklx,
  deny /bin/sh mrwklx,
  deny /usr/bin/top mrwklx,

  capability chown,
  capability dac_override,
  capability setuid,
  capability setgid,
  capability net_bind_service,

  deny @{PROC}/{*,**^[0-9*],sys/kernel/shm*} wkx,
  deny @{PROC}/sysrq-trigger rwklx,
  deny @{PROC}/mem rwklx,
  deny @{PROC}/kmem rwklx,
  deny @{PROC}/kcore rwklx,
  deny mount,
  deny /sys/[^f]*/** wklx,
  deny /sys/f[^s]*/** wklx,
  deny /sys/fs/[^c]*/** wklx,
  deny /sys/fs/c[^g]*/** wklx,
  deny /sys/fs/cg[^r]*/** wklx,
  deny /sys/firmware/efi/efivars/** rwklx,
  deny /sys/kernel/security/** rwklx,
}
</code></pre>

<ol>
<li><p>Save the custom profile to disk in the
<code>/etc/apparmor.d/containers/docker-nginx</code> file.</p>

<p>The file path in this example is not a requirement. In production, you could
use another.</p></li>

<li><p>Load the profile.</p>

<pre><code class="language-bash">$ sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx
</code></pre></li>

<li><p>Run a container with the profile.</p>

<p>To run nginx in detached mode:</p>

<pre><code class="language-bash">$ docker run --security-opt &quot;apparmor=docker-nginx&quot; \
    -p 80:80 -d --name apparmor-nginx nginx
</code></pre></li>

<li><p>Exec into the running container.</p>

<pre><code class="language-bash">$ docker exec -it apparmor-nginx bash
</code></pre></li>

<li><p>Try some operations to test the profile.</p>

<pre><code class="language-bash">root@6da5a2a930b9:~# ping 8.8.8.8
ping: Lacking privilege for raw socket.

root@6da5a2a930b9:/# top
bash: /usr/bin/top: Permission denied

root@6da5a2a930b9:~# touch ~/thing
touch: cannot touch 'thing': Permission denied

root@6da5a2a930b9:/# sh
bash: /bin/sh: Permission denied

root@6da5a2a930b9:/# dash
bash: /bin/dash: Permission denied
</code></pre></li>
</ol>

<p>Congratulations! You just deployed a container secured with a custom AppArmor profile.</p>

<h2 id="debug-apparmor">Debug AppArmor</h2>

<p>You can use <code>dmesg</code> to debug problems and <code>aa-status</code> to check the loaded profiles.</p>

<h3 id="use-dmesg">Use dmesg</h3>

<p>Here are some helpful tips for debugging any problems you might be facing with
regard to AppArmor.</p>

<p>AppArmor sends quite verbose messaging to <code>dmesg</code>. Usually an AppArmor line
looks like the following:</p>

<pre><code>[ 5442.864673] audit: type=1400 audit(1453830992.845:37): apparmor=&quot;ALLOWED&quot; operation=&quot;open&quot; profile=&quot;/usr/bin/docker&quot; name=&quot;/home/jessie/docker/man/man1/docker-attach.1&quot; pid=10923 comm=&quot;docker&quot; requested_mask=&quot;r&quot; denied_mask=&quot;r&quot; fsuid=1000 ouid=0
</code></pre>

<p>In the above example, you can see <code>profile=/usr/bin/docker</code>. This means the
user has the <code>docker-engine</code> (Docker Engine daemon) profile loaded.</p>

<blockquote>
<p><strong>Note:</strong> On versions of Ubuntu newer than 14.04 this works as expected, but
Ubuntu Trusty (14.04) users might run into some issues when trying to <code>docker exec</code>.</p>
</blockquote>

<p>Look at another log line:</p>

<pre><code>[ 3256.689120] type=1400 audit(1405454041.341:73): apparmor=&quot;DENIED&quot; operation=&quot;ptrace&quot; profile=&quot;docker-default&quot; pid=17651 comm=&quot;docker&quot; requested_mask=&quot;receive&quot; denied_mask=&quot;receive&quot;
</code></pre>

<p>This time the profile is <code>docker-default</code>, which runs on containers by
default unless in <code>privileged</code> mode. This line shows that AppArmor has denied
<code>ptrace</code> in the container. This is exactly as expected.</p>
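<p>When scanning a large <code>dmesg</code> capture, the denied operation can be pulled out of each audit line with standard tools. A rough sketch using <code>sed</code> on sample text (field names taken from the log format above):</p>

```shell
# Sketch: extract the denied operation from a captured AppArmor audit line
line='apparmor="DENIED" operation="ptrace" profile="docker-default"'
op=$(printf %s "$line" | sed -n 's/.*operation="\([a-z_]*\)".*/\1/p')
echo "denied operation: $op"
```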
<h3 id="use-aa-status">Use aa-status</h3>

<p>If you need to check which profiles are loaded, you can use <code>aa-status</code>. The
output looks like:</p>

<pre><code class="language-bash">$ sudo aa-status
apparmor module is loaded.
14 profiles are loaded.
1 profiles are in enforce mode.
   docker-default
13 profiles are in complain mode.
   /usr/bin/docker
   /usr/bin/docker///bin/cat
   /usr/bin/docker///bin/ps
   /usr/bin/docker///sbin/apparmor_parser
   /usr/bin/docker///sbin/auplink
   /usr/bin/docker///sbin/blkid
   /usr/bin/docker///sbin/iptables
   /usr/bin/docker///sbin/mke2fs
   /usr/bin/docker///sbin/modprobe
   /usr/bin/docker///sbin/tune2fs
   /usr/bin/docker///sbin/xtables-multi
   /usr/bin/docker///sbin/zfs
   /usr/bin/docker///usr/bin/xz
38 processes have profiles defined.
37 processes are in enforce mode.
   docker-default (6044)
   ...
   docker-default (31899)
1 processes are in complain mode.
   /usr/bin/docker (29756)
0 processes are unconfined but have a profile defined.
</code></pre>

<p>The above output shows that the <code>docker-default</code> profile running on various
container PIDs is in <code>enforce</code> mode. This means AppArmor is actively blocking
and auditing in <code>dmesg</code> anything outside the bounds of the <code>docker-default</code>
profile.</p>
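<p>If you script checks around <code>aa-status</code>, counts like these can be extracted from its output. A sketch over sample text (a real script would capture <code>sudo aa-status</code> instead):</p>

```shell
# Sketch: pull the enforce-mode count out of aa-status-style output
# (sample text baked in; a real script would capture `sudo aa-status`)
output='14 profiles are loaded.
1 profiles are in enforce mode.'
enforced=$(printf '%s\n' "$output" | sed -n 's/^\([0-9][0-9]*\) profiles are in enforce mode.*/\1/p')
echo "profiles in enforce mode: $enforced"
```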
<p>The output above also shows the <code>/usr/bin/docker</code> (Docker Engine daemon) profile
is running in <code>complain</code> mode. This means AppArmor <em>only</em> logs to <code>dmesg</code>
activity outside the bounds of the profile. (Except in the case of Ubuntu
Trusty, where some interesting behaviors are enforced.)</p>

<h2 id="contribute-docker-s-apparmor-code">Contribute Docker&rsquo;s AppArmor code</h2>

<p>Advanced users and package managers can find a profile for <code>/usr/bin/docker</code>
(Docker Engine daemon) underneath
<a href="https://github.com/docker/docker/tree/master/contrib/apparmor">contrib/apparmor</a>
in the Docker Engine source repository.</p>

<p>The <code>docker-default</code> profile for containers lives in
<a href="https://github.com/docker/docker/tree/master/profiles/apparmor">profiles/apparmor</a>.</p>
</description>
</item>

<item>
<title>Apply custom metadata</title>
<link>http://docs-stage.docker.com/engine/userguide/labels-custom-metadata/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/userguide/labels-custom-metadata/</guid>
<description>

<h1 id="apply-custom-metadata">Apply custom metadata</h1>

<p>You can apply metadata to your images, containers, or daemons via
labels. Labels serve a wide range of uses, such as adding notes or licensing
information to an image, or identifying a host.</p>

<p>A label is a <code>&lt;key&gt;</code> / <code>&lt;value&gt;</code> pair. Docker stores the label values as
<em>strings</em>. You can specify multiple labels, but each <code>&lt;key&gt;</code> must be
unique or the value will be overwritten. If you specify the same <code>key</code> several
times but with different values, newer labels overwrite previous labels. Docker
uses the last <code>key=value</code> you supply.</p>

<blockquote>
<p><strong>Note:</strong> Support for daemon labels was added in Docker 1.4.1. Labels on
containers and images were added in Docker 1.6.0.</p>
</blockquote>

<h2 id="label-keys-namespaces">Label keys (namespaces)</h2>

<p>Docker puts no hard restrictions on the <code>key</code> used for a label. However, using
simple keys can easily lead to conflicts. For example, suppose you have chosen to
categorize your images by CPU architecture using &ldquo;architecture&rdquo; labels in
your Dockerfiles:</p>

<pre><code>LABEL architecture=&quot;amd64&quot;

LABEL architecture=&quot;ARMv7&quot;
</code></pre>

<p>Another user may apply the same label based on a building&rsquo;s &ldquo;architecture&rdquo;:</p>

<pre><code>LABEL architecture=&quot;Art Nouveau&quot;
</code></pre>

<p>To prevent naming conflicts, Docker recommends namespacing label keys
using reverse domain notation. Use the following guidelines to name your keys:</p>

<ul>
<li><p>All (third-party) tools should prefix their keys with the
reverse DNS notation of a domain controlled by the author. For
example, <code>com.example.some-label</code>.</p></li>

<li><p>The <code>com.docker.*</code>, <code>io.docker.*</code> and <code>org.dockerproject.*</code> namespaces are
reserved for Docker&rsquo;s internal use.</p></li>

<li><p>Keys should only consist of lower-cased alphanumeric characters,
dots and dashes (for example, <code>[a-z0-9-.]</code>).</p></li>

<li><p>Keys should start <em>and</em> end with an alphanumeric character.</p></li>

<li><p>Keys may not contain consecutive dots or dashes.</p></li>

<li><p>Keys <em>without</em> namespace (dots) are reserved for CLI use. This allows
end-users to add metadata to their containers and images without having to type
cumbersome namespaces on the command-line.</p></li>
</ul>

<p>These are simply guidelines and Docker does not <em>enforce</em> them. However, for
the benefit of the community, you <em>should</em> use namespaces for your label keys.</p>
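<p>As a rough illustration (not an official validator), the guidelines above can be approximated by a single regular expression: runs of lowercase alphanumerics separated by single dots or dashes:</p>

```shell
# Sketch: approximate the label-key guidelines with a regex check
# (illustrative only; Docker itself does not enforce these rules)
is_valid_label_key() {
  # label keys contain no whitespace, so $1 is left unquoted
  printf %s $1 | grep -Eq '^[a-z0-9]+([.-][a-z0-9]+)*$'
}

if is_valid_label_key com.example.some-label; then echo accepted; fi
if is_valid_label_key com..example; then echo accepted; else echo rejected; fi
```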
<h2 id="store-structured-data-in-labels">Store structured data in labels</h2>

<p>Label values can contain any data type as long as it can be represented as a
string. For example, consider this JSON document:</p>

<pre><code>{
  &quot;Description&quot;: &quot;A containerized foobar&quot;,
  &quot;Usage&quot;: &quot;docker run --rm example/foobar [args]&quot;,
  &quot;License&quot;: &quot;GPL&quot;,
  &quot;Version&quot;: &quot;0.0.1-beta&quot;,
  &quot;aBoolean&quot;: true,
  &quot;aNumber&quot; : 0.01234,
  &quot;aNestedArray&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]
}
</code></pre>

<p>You can store this struct in a label by serializing it to a string first:</p>

<pre><code>LABEL com.example.image-specs=&quot;{\&quot;Description\&quot;:\&quot;A containerized foobar\&quot;,\&quot;Usage\&quot;:\&quot;docker run --rm example\\/foobar [args]\&quot;,\&quot;License\&quot;:\&quot;GPL\&quot;,\&quot;Version\&quot;:\&quot;0.0.1-beta\&quot;,\&quot;aBoolean\&quot;:true,\&quot;aNumber\&quot;:0.01234,\&quot;aNestedArray\&quot;:[\&quot;a\&quot;,\&quot;b\&quot;,\&quot;c\&quot;]}&quot;
</code></pre>

<p>While it is <em>possible</em> to store structured data in label values, Docker treats
this data as a &lsquo;regular&rsquo; string. This means that Docker doesn&rsquo;t offer ways to
query (filter) based on nested properties. If your tool needs to filter on
nested properties, the tool itself needs to implement this functionality.</p>
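<p>A build script could produce an escaped value like the one above mechanically. A small sketch that handles only the double quotes; a complete escaper would also deal with backslashes and newlines:</p>

```shell
# Sketch: escape the double quotes in a one-line JSON string so it can
# be embedded in a Dockerfile LABEL value (quotes only; a complete
# escaper would also handle backslashes and newlines)
json='{"License":"GPL","Version":"0.0.1-beta"}'
escaped=$(printf %s "$json" | sed 's/"/\\"/g')
printf 'LABEL com.example.image-specs="%s"\n' "$escaped"
```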
<h2 id="add-labels-to-images">Add labels to images</h2>

<p>To add labels to an image, use the <code>LABEL</code> instruction in your Dockerfile:</p>

<pre><code>LABEL [&lt;namespace&gt;.]&lt;key&gt;=&lt;value&gt; ...
</code></pre>

<p>The <code>LABEL</code> instruction adds a label to your image. A <code>LABEL</code> consists of a <code>&lt;key&gt;</code>
and a <code>&lt;value&gt;</code>.
Use an empty string for labels that don&rsquo;t have a <code>&lt;value&gt;</code>.
Use surrounding quotes or backslashes for labels that contain
whitespace characters in the <code>&lt;value&gt;</code>:</p>

<pre><code>LABEL vendor=ACME\ Incorporated
LABEL com.example.version.is-beta=
LABEL com.example.version.is-production=&quot;&quot;
LABEL com.example.version=&quot;0.0.1-beta&quot;
LABEL com.example.release-date=&quot;2015-02-12&quot;
</code></pre>

<p>The <code>LABEL</code> instruction also supports setting multiple <code>&lt;key&gt;</code> / <code>&lt;value&gt;</code> pairs
in a single instruction:</p>

<pre><code>LABEL com.example.version=&quot;0.0.1-beta&quot; com.example.release-date=&quot;2015-02-12&quot;
</code></pre>

<p>Long lines can be split up by using a backslash (<code>\</code>) as a continuation marker:</p>

<pre><code>LABEL vendor=ACME\ Incorporated \
      com.example.is-beta= \
      com.example.is-production=&quot;&quot; \
      com.example.version=&quot;0.0.1-beta&quot; \
      com.example.release-date=&quot;2015-02-12&quot;
</code></pre>

<p>Docker recommends you add multiple labels in a single <code>LABEL</code> instruction. Using
individual instructions for each label can result in an inefficient image. This
is because each <code>LABEL</code> instruction in a Dockerfile produces a new image layer.</p>

<p>You can view the labels via the <code>docker inspect</code> command:</p>

<pre><code>$ docker inspect 4fa6e0f0c678

...
&quot;Labels&quot;: {
    &quot;vendor&quot;: &quot;ACME Incorporated&quot;,
    &quot;com.example.is-beta&quot;: &quot;&quot;,
    &quot;com.example.is-production&quot;: &quot;&quot;,
    &quot;com.example.version&quot;: &quot;0.0.1-beta&quot;,
    &quot;com.example.release-date&quot;: &quot;2015-02-12&quot;
}
...

# Inspect labels on container
$ docker inspect -f &quot;{{json .Config.Labels }}&quot; 4fa6e0f0c678

{&quot;Vendor&quot;:&quot;ACME Incorporated&quot;,&quot;com.example.is-beta&quot;:&quot;&quot;, &quot;com.example.is-production&quot;:&quot;&quot;, &quot;com.example.version&quot;:&quot;0.0.1-beta&quot;,&quot;com.example.release-date&quot;:&quot;2015-02-12&quot;}

# Inspect labels on images
$ docker inspect -f &quot;{{json .ContainerConfig.Labels }}&quot; myimage
</code></pre>

<h2 id="query-labels">Query labels</h2>

<p>Besides storing metadata, you can filter images and containers by label. To list all
running containers that have the <code>com.example.is-beta</code> label:</p>

<pre><code># List all running containers that have a `com.example.is-beta` label
$ docker ps --filter &quot;label=com.example.is-beta&quot;
</code></pre>

<p>List all running containers with the label <code>color</code> that have a value <code>blue</code>:</p>

<pre><code>$ docker ps --filter &quot;label=color=blue&quot;
</code></pre>

<p>List all images with the label <code>vendor</code> that have the value <code>ACME</code>:</p>

<pre><code>$ docker images --filter &quot;label=vendor=ACME&quot;
</code></pre>

<h2 id="container-labels">Container labels</h2>

<pre><code>docker run \
   -d \
   --label com.example.group=&quot;webservers&quot; \
   --label com.example.environment=&quot;production&quot; \
   busybox \
   top
</code></pre>

<p>Please refer to the <a href="#query-labels">Query labels</a> section above for information
on how to query labels set on a container.</p>

<h2 id="daemon-labels">Daemon labels</h2>

<pre><code>docker daemon \
  --dns 8.8.8.8 \
  --dns 8.8.4.4 \
  -H unix:///var/run/docker.sock \
  --label com.example.environment=&quot;production&quot; \
  --label com.example.storage=&quot;ssd&quot;
</code></pre>

<p>These labels appear as part of the <code>docker info</code> output for the daemon:</p>

<pre><code>$ docker -D info
Containers: 12
 Running: 5
 Paused: 2
 Stopped: 5
Images: 672
Server Version: 1.9.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 697
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-22-generic
Operating System: Ubuntu 15.04
CPUs: 24
Total Memory: 62.86 GiB
Name: docker
ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S
Debug mode (server): true
 File Descriptors: 59
 Goroutines: 159
 System Time: 2015-09-23T14:04:20.699842089+08:00
 EventsListeners: 0
 Init SHA1:
 Init Path: /usr/bin/docker
 Docker Root Dir: /var/lib/docker
Http Proxy: http://test:test@localhost:8080
Https Proxy: https://test:test@localhost:8080
WARNING: No swap limit support
Username: svendowideit
Registry: [https://index.docker.io/v1/]
Labels:
 com.example.environment=production
 com.example.storage=ssd
</code></pre>
</description>
</item>
<item>
<title>Automatically start containers</title>
<link>http://docs-stage.docker.com/engine/admin/host_integration/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/admin/host_integration/</guid>
<description>

<h1 id="automatically-start-containers">Automatically start containers</h1>

<p>As of Docker 1.2,
<a href="../engine/reference/run/#restart-policies-restart">restart policies</a> are the
built-in Docker mechanism for restarting containers when they exit. If set,
restart policies will be used when the Docker daemon starts up, as typically
happens after a system boot. Restart policies will ensure that linked containers
are started in the correct order.</p>

<p>If restart policies don&rsquo;t suit your needs (for example, when non-Docker processes
depend on Docker containers), you can use a process manager like
<a href="http://upstart.ubuntu.com/">upstart</a>,
<a href="http://freedesktop.org/wiki/Software/systemd/">systemd</a> or
<a href="http://supervisord.org/">supervisor</a> instead.</p>

<h2 id="using-a-process-manager">Using a process manager</h2>

<p>Docker does not set any restart policies by default, but be aware that they will
conflict with most process managers. So don&rsquo;t set restart policies if you are
using a process manager.</p>

<p>When you have finished setting up your image and are happy with your
running container, you can then attach a process manager to manage it.
When you run <code>docker start -a</code>, Docker will automatically attach to the
running container, or start it if needed, and forward all signals so that
the process manager can detect when a container stops and correctly
restart it.</p>

<p>Here are a few sample scripts for systemd and upstart to integrate with
Docker.</p>

<h2 id="examples">Examples</h2>

<p>The examples below show configuration files for two popular process managers,
upstart and systemd. In these examples, we&rsquo;ll assume that we have already
created a container to run Redis with <code>--name=redis_server</code>. These files define
a new service that will be started after the docker daemon service has started.</p>

<h3 id="upstart">upstart</h3>

<pre><code>description &quot;Redis container&quot;
author &quot;Me&quot;
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a redis_server
end script
</code></pre>

<h3 id="systemd">systemd</h3>

<pre><code>[Unit]
Description=Redis container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server

[Install]
WantedBy=multi-user.target
</code></pre>

<p>If you need to pass options to the redis container (such as <code>--env</code>),
then you&rsquo;ll need to use <code>docker run</code> rather than <code>docker start</code>. This will
create a new container every time the service is started, which will be stopped
and removed when the service is stopped.</p>

<pre><code>[Service]
...
ExecStart=/usr/bin/docker run --env foo=bar --name redis_server redis
ExecStop=/usr/bin/docker stop -t 2 redis_server ; /usr/bin/docker rm -f redis_server
...
</code></pre>
</description>
</item>
<item>
<title>Automation with content trust</title>
<link>http://docs-stage.docker.com/engine/security/trust/trust_automation/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/security/trust/trust_automation/</guid>
<description>

<h1 id="automation-with-content-trust">Automation with content trust</h1>

<p>Your automation systems that pull or build images can also work with trust. Any automation environment must set <code>DOCKER_CONTENT_TRUST</code> either manually or in a scripted fashion before processing images.</p>

<h2 id="bypass-requests-for-passphrases">Bypass requests for passphrases</h2>

<p>To allow tools to wrap docker and push trusted content, there are two
environment variables that allow you to provide the passphrases without an
expect script, or typing them in:</p>

<ul>
<li><code>DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE</code></li>
<li><code>DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE</code></li>
</ul>

<p>Docker attempts to use the contents of these environment variables as the passphrases
for the keys. For example, an image publisher can export the repository <code>target</code>
and <code>snapshot</code> passphrases:</p>

<pre><code class="language-bash">$ export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE=&quot;u7pEQcGoebUHm6LHe6&quot;
$ export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=&quot;l7pEQcTKJjUHm6Lpe4&quot;
</code></pre>

<p>Then, when pushing a new tag, the Docker client does not prompt for these values but signs automatically:</p>

<pre><code class="language-bash">$ docker push docker/trusttest:latest
The push refers to a repository [docker.io/docker/trusttest] (len: 1)
a9539b34a6ab: Image already exists
b3dbab3810fc: Image already exists
latest: digest: sha256:d149ab53f871 size: 3355
Signing and pushing trust metadata
</code></pre>

<h2 id="building-with-content-trust">Building with content trust</h2>

<p>You can also build with content trust. Before running the <code>docker build</code> command, you should set the environment variable <code>DOCKER_CONTENT_TRUST</code> either manually or in a scripted fashion. Consider the simple Dockerfile below.</p>

<pre><code class="language-Dockerfile">FROM docker/trusttest:latest
RUN echo
</code></pre>

<p>The <code>FROM</code> tag is pulling a signed image. You cannot build an image whose
<code>FROM</code> image is neither present locally nor signed. Given that content trust
data exists for the tag <code>latest</code>, the following build should succeed:</p>

<pre><code class="language-bash">$ docker build -t docker/trusttest:testing .
Using default tag: latest
latest: Pulling from docker/trusttest

b3dbab3810fc: Pull complete
a9539b34a6ab: Pull complete
Digest: sha256:d149ab53f871
</code></pre>

<p>If content trust is enabled, building from a Dockerfile that relies on a tag without trust data causes the build command to fail:</p>

<pre><code class="language-bash">$ docker build -t docker/trusttest:testing .
unable to process Dockerfile: No trust data for notrust
</code></pre>

<h2 id="related-information">Related information</h2>

<ul>
<li><a href="../engine/security/trust/content_trust/">Content trust in Docker</a></li>
<li><a href="../engine/security/trust/trust_key_mng/">Manage keys for content trust</a></li>
<li><a href="../engine/security/trust/trust_delegation/">Delegations for content trust</a></li>
<li><a href="../engine/security/trust/trust_sandbox/">Play in a content trust sandbox</a></li>
</ul>
</description>
</item>
|
||
|
||
<item>
|
||
<title>Best practices for writing Dockerfiles</title>
<link>http://docs-stage.docker.com/engine/userguide/eng-image/dockerfile_best-practices/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/userguide/eng-image/dockerfile_best-practices/</guid>
<description>

<h1 id="best-practices-for-writing-dockerfiles">Best practices for writing Dockerfiles</h1>

<p>Docker can build images automatically by reading the instructions from a
<code>Dockerfile</code>, a text file that contains all the commands, in order, needed to
build a given image. <code>Dockerfile</code>s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
<a href="../engine/reference/builder/">Dockerfile Reference</a> page. If
you’re new to writing <code>Dockerfile</code>s, you should start there.</p>

<p>This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
<code>Dockerfile</code>s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you <em>must</em> adhere to these practices).</p>

<p>You can see many of these practices and recommendations in action in the <a href="https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile">buildpack-deps <code>Dockerfile</code></a>.</p>

<blockquote>
<p>Note: for more detailed explanations of any of the Dockerfile commands
mentioned here, visit the <a href="../engine/reference/builder/">Dockerfile Reference</a> page.</p>
</blockquote>

<h2 id="general-guidelines-and-recommendations">General guidelines and recommendations</h2>

<h3 id="containers-should-be-ephemeral">Containers should be ephemeral</h3>

<p>The container produced by the image your <code>Dockerfile</code> defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.</p>

<h3 id="use-a-dockerignore-file">Use a .dockerignore file</h3>

<p>In most cases, it&rsquo;s best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
increase the build&rsquo;s performance, you can exclude files and directories by
adding a <code>.dockerignore</code> file to that directory as well. This file supports
exclusion patterns similar to <code>.gitignore</code> files. For information on creating one,
see the <a href="../engine/reference/builder/#dockerignore-file">.dockerignore file</a> documentation.</p>
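
<p>As an illustration only (the patterns shown here are hypothetical, not part of the original guidance), a simple <code>.dockerignore</code> file might look like:</p>

<pre><code>.git
*.log
tmp/
</code></pre>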

<h3 id="avoid-installing-unnecessary-packages">Avoid installing unnecessary packages</h3>

<p>In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.</p>

<h3 id="run-only-one-process-per-container">Run only one process per container</h3>

<p>In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If one service depends on
another service, make use of <a href="../engine/userguide/networking/default_network/dockerlinks/">container linking</a>.</p>

<h3 id="minimize-the-number-of-layers">Minimize the number of layers</h3>

<p>You need to find the balance between readability (and thus long-term
maintainability) of the <code>Dockerfile</code> and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.</p>

<h3 id="sort-multi-line-arguments">Sort multi-line arguments</h3>

<p>Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This helps you avoid duplicating packages and makes the
list much easier to update. It also makes PRs a lot easier to read and
review. Adding a space before a backslash (<code>\</code>) helps as well.</p>

<p>Here’s an example from the <a href="https://github.com/docker-library/buildpack-deps"><code>buildpack-deps</code> image</a>:</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
    bzr \
    cvs \
    git \
    mercurial \
    subversion
</code></pre>

<h3 id="build-cache">Build cache</h3>

<p>During the process of building an image, Docker steps through the
instructions in your <code>Dockerfile</code>, executing each in the order specified.
As each instruction is examined, Docker looks for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all, you can use the <code>--no-cache=true</code>
option on the <code>docker build</code> command.</p>
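
<p>For example, to force a full rebuild that ignores the cache (the image name here is illustrative):</p>

<pre><code>$ docker build --no-cache=true -t myimage .
</code></pre>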

<p>However, if you do let Docker use its cache, then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker follows are outlined below:</p>

<ul>
<li><p>Starting with a base image that is already in the cache, the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.</p></li>

<li><p>In most cases, simply comparing the instruction in the <code>Dockerfile</code> with one
of the child images is sufficient. However, certain instructions require
a little more examination and explanation.</p></li>

<li><p>For the <code>ADD</code> and <code>COPY</code> instructions, the contents of the file(s)
in the image are examined and a checksum is calculated for each file.
The last-modified and last-accessed times of the file(s) are not considered in
these checksums. During the cache lookup, the checksum is compared against the
checksum in the existing images. If anything has changed in the file(s), such
as the contents or metadata, then the cache is invalidated.</p></li>

<li><p>Aside from the <code>ADD</code> and <code>COPY</code> commands, cache checking does not look at the
files in the container to determine a cache match. For example, when processing
a <code>RUN apt-get -y update</code> command, the files updated in the container
are not examined to determine if a cache hit exists. In that case, just
the command string itself is used to find a match.</p></li>
</ul>

<p>Once the cache is invalidated, all subsequent <code>Dockerfile</code> commands
generate new images and the cache is not used.</p>

<h2 id="the-dockerfile-instructions">The Dockerfile instructions</h2>

<p>Below you&rsquo;ll find recommendations for the best way to write the
various instructions available for use in a <code>Dockerfile</code>.</p>

<h3 id="from">FROM</h3>

<p><a href="../engine/reference/builder/#from">Dockerfile reference for the FROM instruction</a></p>

<p>Whenever possible, use current Official Repositories as the basis for your
image. We recommend the <a href="https://hub.docker.com/_/debian/">Debian image</a>
since it’s very tightly controlled and kept minimal (currently under 150 MB),
while still being a full distribution.</p>

<h3 id="run">RUN</h3>

<p><a href="../engine/reference/builder/#run">Dockerfile reference for the RUN instruction</a></p>

<p>As always, to make your <code>Dockerfile</code> more readable, understandable, and
maintainable, split long or complex <code>RUN</code> statements across multiple lines
separated with backslashes.</p>

<h3 id="apt-get">apt-get</h3>

<p>Probably the most common use-case for <code>RUN</code> is an application of <code>apt-get</code>. The
<code>RUN apt-get</code> command, because it installs packages, has several gotchas to look
out for.</p>

<p>You should avoid <code>RUN apt-get upgrade</code> or <code>dist-upgrade</code>, as many of the
“essential” packages from the base images won&rsquo;t upgrade inside an unprivileged
container. If a package contained in the base image is out-of-date, you should
contact its maintainers.
If you know there’s a particular package, <code>foo</code>, that needs to be updated, use
<code>apt-get install -y foo</code> to update it automatically.</p>

<p>Always combine <code>RUN apt-get update</code> with <code>apt-get install</code> in the same <code>RUN</code>
statement, for example:</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
    package-bar \
    package-baz \
    package-foo
</code></pre>

<p>Using <code>apt-get update</code> alone in a <code>RUN</code> statement causes caching issues, and
subsequent <code>apt-get install</code> instructions fail.
For example, say you have a Dockerfile:</p>

<pre><code>FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl
</code></pre>

<p>After building the image, all layers are in the Docker cache. Suppose you later
modify <code>apt-get install</code> by adding an extra package:</p>

<pre><code>FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl nginx
</code></pre>

<p>Docker sees the initial and modified instructions as identical and reuses the
cache from previous steps. As a result the <code>apt-get update</code> is <em>NOT</em> executed
because the build uses the cached version. Because the <code>apt-get update</code> is not
run, your build can potentially get an outdated version of the <code>curl</code> and <code>nginx</code>
packages.</p>

<p>Using <code>RUN apt-get update &amp;&amp; apt-get install -y</code> ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as &ldquo;cache busting&rdquo;. You can also achieve
cache-busting by specifying a package version. This is known as version pinning,
for example:</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
    package-bar \
    package-baz \
    package-foo=1.3.*
</code></pre>

<p>Version pinning forces the build to retrieve a particular version regardless of
what’s in the cache. This technique can also reduce failures due to unanticipated changes
in required packages.</p>

<p>Below is a well-formed <code>RUN</code> instruction that demonstrates all the <code>apt-get</code>
recommendations.</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
    aufs-tools \
    automake \
    build-essential \
    curl \
    dpkg-sig \
    libcap-dev \
    libsqlite3-dev \
    mercurial \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.* \
 &amp;&amp; rm -rf /var/lib/apt/lists/*
</code></pre>

<p>The <code>s3cmd</code> instruction pins the version <code>1.1.*</code>. If the image previously
used an older version, specifying the new one causes a cache bust of <code>apt-get
update</code> and ensures the installation of the new version. Listing each package on
its own line can also prevent mistakes in package duplication.</p>

<p>In addition, cleaning up the apt cache and removing <code>/var/lib/apt/lists</code> helps
keep the image size down. Since the <code>RUN</code> statement starts with
<code>apt-get update</code>, the package cache will always be refreshed prior to
<code>apt-get install</code>.</p>

<blockquote>
<p><strong>Note</strong>: The official Debian and Ubuntu images <a href="https://github.com/docker/docker/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105">automatically run <code>apt-get clean</code></a>,
so explicit invocation is not required.</p>
</blockquote>

<h3 id="cmd">CMD</h3>

<p><a href="../engine/reference/builder/#cmd">Dockerfile reference for the CMD instruction</a></p>

<p>The <code>CMD</code> instruction should be used to run the software contained by your
image, along with any arguments. <code>CMD</code> should almost always be used in the
form of <code>CMD [&quot;executable&quot;, &quot;param1&quot;, &quot;param2&quot;…]</code>. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
<code>CMD [&quot;apache2&quot;,&quot;-DFOREGROUND&quot;]</code>. Indeed, this form of the instruction is
recommended for any service-based image.</p>

<p>In most other cases, <code>CMD</code> should be given an interactive shell (bash, python,
perl, etc.), for example, <code>CMD [&quot;perl&quot;, &quot;-de0&quot;]</code>, <code>CMD [&quot;python&quot;]</code>, or
<code>CMD [&quot;php&quot;, &quot;-a&quot;]</code>. Using this form means that when you execute something like
<code>docker run -it python</code>, you’ll get dropped into a usable shell, ready to go.
<code>CMD</code> should rarely be used in the manner of <code>CMD [&quot;param&quot;, &quot;param&quot;]</code> in
conjunction with <a href="../engine/reference/builder/#entrypoint"><code>ENTRYPOINT</code></a>, unless
you and your expected users are already quite familiar with how <code>ENTRYPOINT</code>
works.</p>

<h3 id="expose">EXPOSE</h3>

<p><a href="../engine/reference/builder/#expose">Dockerfile reference for the EXPOSE instruction</a></p>

<p>The <code>EXPOSE</code> instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use <code>EXPOSE 80</code>, while an image containing MongoDB would use <code>EXPOSE 27017</code>, and
so on.</p>

<p>For external access, your users can execute <code>docker run</code> with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (i.e., <code>MYSQL_PORT_3306_TCP</code>).</p>
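
<p>For instance, a user of an image that declares <code>EXPOSE 80</code> could map that port to port 8080 on the host like this (the image name is illustrative):</p>

<pre><code>$ docker run -d -p 8080:80 mywebserver
</code></pre>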

<h3 id="env">ENV</h3>

<p><a href="../engine/reference/builder/#env">Dockerfile reference for the ENV instruction</a></p>

<p>In order to make new software easier to run, you can use <code>ENV</code> to update the
<code>PATH</code> environment variable for the software your container installs. For
example, <code>ENV PATH /usr/local/nginx/bin:$PATH</code> will ensure that <code>CMD [&quot;nginx&quot;]</code>
just works.</p>

<p>The <code>ENV</code> instruction is also useful for providing required environment
variables specific to services you wish to containerize, such as Postgres’s
<code>PGDATA</code>.</p>

<p>Lastly, <code>ENV</code> can also be used to set commonly used version numbers so that
version bumps are easier to maintain, as seen in the following example:</p>

<pre><code>ENV PG_MAJOR 9.3
ENV PG_VERSION 9.3.4
RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres &amp;&amp; …
ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH
</code></pre>

<p>Similar to having constant variables in a program (as opposed to hard-coding
values), this approach lets you change a single <code>ENV</code> instruction to
auto-magically bump the version of the software in your container.</p>

<h3 id="add-or-copy">ADD or COPY</h3>

<p><a href="../engine/reference/builder/#add">Dockerfile reference for the ADD instruction</a><br/>
<a href="../engine/reference/builder/#copy">Dockerfile reference for the COPY instruction</a></p>

<p>Although <code>ADD</code> and <code>COPY</code> are functionally similar, generally speaking, <code>COPY</code>
is preferred. That’s because it’s more transparent than <code>ADD</code>. <code>COPY</code> only
supports the basic copying of local files into the container, while <code>ADD</code> has
some features (like local-only tar extraction and remote URL support) that are
not immediately obvious. Consequently, the best use for <code>ADD</code> is local tar file
auto-extraction into the image, as in <code>ADD rootfs.tar.xz /</code>.</p>

<p>If you have multiple <code>Dockerfile</code> steps that use different files from your
context, <code>COPY</code> them individually, rather than all at once. This ensures that
each step&rsquo;s build cache is only invalidated (forcing the step to be re-run) if the
specifically required files change.</p>

<p>For example:</p>

<pre><code>COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
COPY . /tmp/
</code></pre>

<p>This results in fewer cache invalidations for the <code>RUN</code> step than if you put the
<code>COPY . /tmp/</code> before it.</p>

<p>Because image size matters, using <code>ADD</code> to fetch packages from remote URLs is
strongly discouraged; you should use <code>curl</code> or <code>wget</code> instead. That way you can
delete the files you no longer need after they&rsquo;ve been extracted and you won&rsquo;t
have to add another layer in your image. For example, you should avoid doing
things like:</p>

<pre><code>ADD http://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all
</code></pre>

<p>And instead, do something like:</p>

<pre><code>RUN mkdir -p /usr/src/things \
    &amp;&amp; curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    &amp;&amp; make -C /usr/src/things all
</code></pre>

<p>For other items (files, directories) that do not require <code>ADD</code>’s tar
auto-extraction capability, you should always use <code>COPY</code>.</p>

<h3 id="entrypoint">ENTRYPOINT</h3>

<p><a href="../engine/reference/builder/#entrypoint">Dockerfile reference for the ENTRYPOINT instruction</a></p>

<p>The best use for <code>ENTRYPOINT</code> is to set the image&rsquo;s main command, allowing that
image to be run as though it were that command (and then use <code>CMD</code> as the
default flags).</p>

<p>Let&rsquo;s start with an example of an image for the command line tool <code>s3cmd</code>:</p>

<pre><code>ENTRYPOINT [&quot;s3cmd&quot;]
CMD [&quot;--help&quot;]
</code></pre>

<p>Now the image can be run like this to show the command&rsquo;s help:</p>

<pre><code>$ docker run s3cmd
</code></pre>

<p>Or the right parameters can be passed to execute a command:</p>

<pre><code>$ docker run s3cmd ls s3://mybucket
</code></pre>

<p>This is useful because the image name can double as a reference to the binary as
shown in the commands above.</p>

<p>The <code>ENTRYPOINT</code> instruction can also be used in combination with a helper
script, allowing it to function in a similar way to the command above, even
when starting the tool may require more than one step.</p>

<p>For example, the <a href="https://hub.docker.com/_/postgres/">Postgres Official Image</a>
uses the following script as its <code>ENTRYPOINT</code>:</p>

<pre><code class="language-bash">#!/bin/bash
set -e

if [ &quot;$1&quot; = 'postgres' ]; then
    chown -R postgres &quot;$PGDATA&quot;

    if [ -z &quot;$(ls -A &quot;$PGDATA&quot;)&quot; ]; then
        gosu postgres initdb
    fi

    exec gosu postgres &quot;$@&quot;
fi

exec &quot;$@&quot;
</code></pre>

<blockquote>
<p><strong>Note</strong>:
This script uses <a href="http://wiki.bash-hackers.org/commands/builtin/exec">the <code>exec</code> Bash command</a>
so that the final running application becomes the container&rsquo;s PID 1. This allows
the application to receive any Unix signals sent to the container.
See the <a href="../engine/reference/builder/#entrypoint"><code>ENTRYPOINT</code></a>
help for more details.</p>
</blockquote>

<p>The helper script is copied into the container and run via <code>ENTRYPOINT</code> on
container start:</p>

<pre><code>COPY ./docker-entrypoint.sh /
ENTRYPOINT [&quot;/docker-entrypoint.sh&quot;]
</code></pre>

<p>This script allows the user to interact with Postgres in several ways.</p>

<p>It can simply start Postgres:</p>

<pre><code>$ docker run postgres
</code></pre>

<p>Or, it can be used to run Postgres and pass parameters to the server:</p>

<pre><code>$ docker run postgres postgres --help
</code></pre>

<p>Lastly, it could also be used to start a totally different tool, such as Bash:</p>

<pre><code>$ docker run --rm -it postgres bash
</code></pre>

<h3 id="volume">VOLUME</h3>

<p><a href="../engine/reference/builder/#volume">Dockerfile reference for the VOLUME instruction</a></p>

<p>The <code>VOLUME</code> instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your docker container. You
are strongly encouraged to use <code>VOLUME</code> for any mutable and/or user-serviceable
parts of your image.</p>

<h3 id="user">USER</h3>

<p><a href="../engine/reference/builder/#user">Dockerfile reference for the USER instruction</a></p>

<p>If a service can run without privileges, use <code>USER</code> to change to a non-root
user. Start by creating the user and group in the <code>Dockerfile</code> with something
like <code>RUN groupadd -r postgres &amp;&amp; useradd -r -g postgres postgres</code>.</p>

<blockquote>
<p><strong>Note:</strong> Users and groups in an image get a non-deterministic
UID/GID in that the “next” UID/GID gets assigned regardless of image
rebuilds. So, if it’s critical, you should assign an explicit UID/GID.</p>
</blockquote>
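
<p>If deterministic IDs matter, a sketch of creating a user with an explicit UID/GID might look like this (the value <code>999</code> is only illustrative):</p>

<pre><code>RUN groupadd -r postgres --gid=999 \
 &amp;&amp; useradd -r -g postgres --uid=999 postgres
</code></pre>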

<p>You should avoid installing or using <code>sudo</code> since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to <code>sudo</code> (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
<a href="https://github.com/tianon/gosu">“gosu”</a>.</p>

<p>Lastly, to reduce layers and complexity, avoid switching <code>USER</code> back
and forth frequently.</p>

<h3 id="workdir">WORKDIR</h3>

<p><a href="../engine/reference/builder/#workdir">Dockerfile reference for the WORKDIR instruction</a></p>

<p>For clarity and reliability, you should always use absolute paths for your
<code>WORKDIR</code>. Also, you should use <code>WORKDIR</code> instead of proliferating
instructions like <code>RUN cd … &amp;&amp; do-something</code>, which are hard to read,
troubleshoot, and maintain.</p>
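
<p>For example, instead of <code>RUN cd /usr/src/app &amp;&amp; make</code>, a <code>Dockerfile</code> can set the working directory once (the path here is only an illustration):</p>

<pre><code>WORKDIR /usr/src/app
RUN make
</code></pre>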

<h3 id="onbuild">ONBUILD</h3>

<p><a href="../engine/reference/builder/#onbuild">Dockerfile reference for the ONBUILD instruction</a></p>

<p>An <code>ONBUILD</code> command executes after the current <code>Dockerfile</code> build completes.
<code>ONBUILD</code> executes in any child image derived <code>FROM</code> the current image. Think
of the <code>ONBUILD</code> command as an instruction the parent <code>Dockerfile</code> gives
to the child <code>Dockerfile</code>.</p>

<p>A Docker build executes <code>ONBUILD</code> commands before any command in a child
<code>Dockerfile</code>.</p>

<p><code>ONBUILD</code> is useful for images that are going to be built <code>FROM</code> a given
image. For example, you would use <code>ONBUILD</code> for a language stack image that
builds arbitrary user software written in that language within the
<code>Dockerfile</code>, as you can see in <a href="https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile">Ruby’s <code>ONBUILD</code> variants</a>.</p>
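
<p>As a sketch of the pattern (the paths and commands are illustrative, not taken from the Ruby image), an “onbuild” variant of a language stack image might contain:</p>

<pre><code>ONBUILD COPY . /usr/src/app
ONBUILD RUN bundle install
</code></pre>

<p>A child image built <code>FROM</code> this image then has its source copied and its dependencies installed automatically at build time.</p>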

<p>Images built from <code>ONBUILD</code> should get a separate tag, for example:
<code>ruby:1.9-onbuild</code> or <code>ruby:2.0-onbuild</code>.</p>

<p>Be careful when putting <code>ADD</code> or <code>COPY</code> in <code>ONBUILD</code>. The “onbuild” image will
fail catastrophically if the new build&rsquo;s context is missing the resource being
added. Adding a separate tag, as recommended above, helps mitigate this by
allowing the <code>Dockerfile</code> author to make a choice.</p>

<h2 id="examples-for-official-repositories">Examples for Official Repositories</h2>

<p>These Official Repositories have exemplary <code>Dockerfile</code>s:</p>

<ul>
<li><a href="https://hub.docker.com/_/golang/">Go</a></li>
<li><a href="https://hub.docker.com/_/perl/">Perl</a></li>
<li><a href="https://hub.docker.com/_/hylang/">Hy</a></li>
<li><a href="https://hub.docker.com/_/rails">Rails</a></li>
</ul>

<h2 id="additional-resources">Additional resources</h2>

<ul>
<li><a href="../engine/reference/builder/">Dockerfile Reference</a></li>
<li><a href="../engine/userguide/eng-image/baseimages/">More about Base Images</a></li>
<li><a href="https://docs.docker.com/docker-hub/builds/">More about Automated Builds</a></li>
<li><a href="https://docs.docker.com/docker-hub/official_repos/">Guidelines for Creating Official
Repositories</a></li>
</ul>
</description>
</item>

<item>
<title>Bind container ports to the host</title>
<link>http://docs-stage.docker.com/engine/userguide/networking/default_network/binding/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/userguide/networking/default_network/binding/</guid>
<description>

<h1 id="bind-container-ports-to-the-host">Bind container ports to the host</h1>

<p>The information in this section explains binding container ports within the Docker default bridge. This is a <code>bridge</code> network named <code>bridge</code> created automatically when you install Docker.</p>

<blockquote>
<p><strong>Note</strong>: The <a href="../engine/userguide/networking/dockernetworks/">Docker networks feature</a> allows you to
create user-defined networks in addition to the default bridge network.</p>
</blockquote>

<p>By default Docker containers can make connections to the outside world, but the
outside world cannot connect to containers. Each outgoing connection will
appear to originate from one of the host machine&rsquo;s own IP addresses thanks to an
<code>iptables</code> masquerading rule on the host machine that the Docker server creates
when it starts:</p>

<pre><code>$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target     prot opt source          destination
MASQUERADE all  --  172.17.0.0/16   0.0.0.0/0
...
</code></pre>

<p>The Docker server creates a masquerade rule that lets containers connect to IP
addresses in the outside world.</p>

<p>If you want containers to accept incoming connections, you will need to provide
special options when invoking <code>docker run</code>. There are two approaches.</p>

<p>First, you can supply <code>-P</code> or <code>--publish-all=true|false</code> to <code>docker run</code>, which
is a blanket operation that identifies every port with an <code>EXPOSE</code> line in the
image&rsquo;s <code>Dockerfile</code> or <code>--expose &lt;port&gt;</code> commandline flag and maps it to a host
port somewhere within an <em>ephemeral port range</em>. The <code>docker port</code> command then
needs to be used to inspect the created mapping. The <em>ephemeral port range</em> is
configured by the <code>/proc/sys/net/ipv4/ip_local_port_range</code> kernel parameter,
typically ranging from 32768 to 61000.</p>

<p>Mapping can be specified explicitly using the <code>-p SPEC</code> or <code>--publish=SPEC</code> option.
This lets you specify which port on the Docker server (which can be any
port at all, not just one within the <em>ephemeral port range</em>) you want mapped
to which port in the container.</p>
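
<p>For example (the image name is illustrative), the following maps host port 8080 to container port 80, first on all interfaces and then only on the loopback address:</p>

<pre><code>$ docker run -d -p 8080:80 mywebserver
$ docker run -d -p 127.0.0.1:8080:80 mywebserver
</code></pre>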

<p>Either way, you should be able to peek at what Docker has accomplished in your
network stack by examining your NAT tables.</p>

<pre><code># What your NAT rules might look like when Docker
# is finished setting up a -P forward:

$ iptables -t nat -L -n
...
Chain DOCKER (2 references)
target prot opt source      destination
DNAT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:49153 to:172.17.0.2:80

# What your NAT rules might look like when Docker
# is finished setting up a -p 80:80 forward:

Chain DOCKER (2 references)
target prot opt source      destination
DNAT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:80 to:172.17.0.2:80
</code></pre>

<p>You can see that Docker has exposed these container ports on <code>0.0.0.0</code>, the
wildcard IP address that will match any possible incoming port on the host
machine. If you want to be more restrictive and only allow container services to
be contacted through a specific external interface on the host machine, you have
two choices. When you invoke <code>docker run</code> you can use either <code>-p
IP:host_port:container_port</code> or <code>-p IP::port</code> to specify the external interface
for one particular binding.</p>

<p>Or if you always want Docker port forwards to bind to one specific IP address,
you can edit your system-wide Docker server settings and add the option
<code>--ip=IP_ADDRESS</code>. Remember to restart your Docker server after editing this
setting.</p>

<blockquote>
<p><strong>Note</strong>: With hairpin NAT enabled (<code>--userland-proxy=false</code>), container port
exposure is achieved purely through iptables rules, and no attempt to bind the
exposed port is ever made. This means that nothing prevents shadowing a
previously listening service outside of Docker by exposing the same port
for a container. In such a conflicting situation, Docker-created iptables rules
will take precedence and route to the container.</p>
</blockquote>

<p>The <code>--userland-proxy</code> parameter, true by default, provides a userland
implementation for inter-container and outside-to-container communication. When
disabled, Docker uses both an additional <code>MASQUERADE</code> iptables rule and the
<code>net.ipv4.route_localnet</code> kernel parameter, which allow the host machine to
connect to a local container&rsquo;s exposed port through the commonly used loopback
address: this alternative is preferred for performance reasons.</p>

<h2 id="related-information">Related information</h2>

<ul>
<li><a href="../engine/userguide/networking/dockernetworks/">Understand Docker container networks</a></li>
<li><a href="../engine/userguide/networking/work-with-networks/">Work with network commands</a></li>
<li><a href="../engine/userguide/networking/default_network/dockerlinks/">Legacy container links</a></li>
</ul>
</description>
</item>
|
||
|
||
<item>
<title>Breaking changes</title>
<link>http://docs-stage.docker.com/engine/breaking_changes/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/breaking_changes/</guid>
<description>

<h1 id="breaking-changes-and-incompatibilities">Breaking changes and incompatibilities</h1>

<p>Every Engine release strives to be backward compatible with its predecessors.
In all cases, the policy is that feature removal is communicated two releases
in advance and documented as part of the <a href="../engine/deprecated/">deprecated features</a>
page.</p>

<p>Unfortunately, Docker is a fast-moving project, and newly introduced features
may sometimes introduce breaking changes or incompatibilities. This page
documents these by Engine version.</p>

<h1 id="engine-1-10">Engine 1.10</h1>

<p>There were two breaking changes in the 1.10 release.</p>

<h2 id="registry">Registry</h2>

<p>Registry 2.3 includes improvements to the image manifest that have caused a
breaking change. Images pushed by Engine 1.10 to a Registry 2.3 cannot be
pulled by digest by older Engine versions. A <code>docker pull</code> that encounters this
situation returns the following error:</p>

<pre><code> Error response from daemon: unsupported schema version 2 for tag TAGNAME
</code></pre>

<p>Docker Content Trust relies heavily on pull by digest. As a result, images
pushed from the Engine 1.10 CLI to a 2.3 Registry cannot be pulled by older
Engine CLIs (&lt; 1.10) with Docker Content Trust enabled.</p>

<p>If you are using an older Registry version (&lt; 2.3), this problem does not occur
with any version of the Engine CLI; push and pull, with and without content trust,
work as you would expect.</p>

<h2 id="docker-content-trust">Docker Content Trust</h2>

<p>Engines older than 1.10 cannot pull images from repositories that
have enabled key delegation. Key delegation is a feature that requires a
manual action to enable.</p>
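<p>For context, content trust is toggled per shell session with an environment variable. The following is a minimal sketch (the <code>docker pull</code> shown in a comment is illustrative, not run here):</p>

```shell
# Minimal sketch: enable Docker Content Trust for the current shell.
# With this variable set, the Docker CLI resolves tags through signed
# metadata and pulls by digest under the hood, which is why older
# (< 1.10) CLIs fail against repositories that use key delegation.
export DOCKER_CONTENT_TRUST=1

# With trust enabled, a pull on Engine 1.10+ proceeds normally, e.g.:
#   docker pull ubuntu:14.04

echo "Content trust enabled: $DOCKER_CONTENT_TRUST"
```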
</description>
</item>

<item>
<title>Btrfs storage in practice</title>
<link>http://docs-stage.docker.com/engine/userguide/storagedriver/btrfs-driver/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/userguide/storagedriver/btrfs-driver/</guid>
<description>

<h1 id="docker-and-btrfs-in-practice">Docker and Btrfs in practice</h1>

<p>Btrfs is a next-generation copy-on-write filesystem that supports many advanced
storage technologies that make it a good fit for Docker. Btrfs is included in
the mainline Linux kernel and its on-disk format is now considered stable.
However, many of its features are still under heavy development and users
should consider it a fast-moving target.</p>

<p>Docker&rsquo;s <code>btrfs</code> storage driver leverages many Btrfs features for image and
container management. Among these features are thin provisioning,
copy-on-write, and snapshotting.</p>

<p>This article refers to Docker&rsquo;s Btrfs storage driver as <code>btrfs</code> and the overall
Btrfs filesystem as Btrfs.</p>

<blockquote>
<p><strong>Note</strong>: The <a href="https://www.docker.com/compatibility-maintenance">Commercially Supported Docker Engine (CS-Engine)</a> does not currently support the <code>btrfs</code> storage driver.</p>
</blockquote>

<h2 id="the-future-of-btrfs">The future of Btrfs</h2>

<p>Btrfs has long been hailed as the future of Linux filesystems. With full
support in the mainline Linux kernel, a stable on-disk format, and active
development with a focus on stability, this is now becoming more of a reality.</p>

<p>As far as Docker on the Linux platform goes, many people see the <code>btrfs</code>
storage driver as a potential long-term replacement for the <code>devicemapper</code>
storage driver. However, at the time of writing, the <code>devicemapper</code> storage
driver should be considered safer, more stable, and more <em>production ready</em>.
You should only consider the <code>btrfs</code> driver for production deployments if you
understand it well and have existing experience with Btrfs.</p>

<h2 id="image-layering-and-sharing-with-btrfs">Image layering and sharing with Btrfs</h2>

<p>Docker leverages Btrfs <em>subvolumes</em> and <em>snapshots</em> for managing the on-disk
components of image and container layers. Btrfs subvolumes look and feel like
a normal Unix filesystem. As such, they can have their own internal directory
structure that hooks into the wider Unix filesystem.</p>

<p>Subvolumes are natively copy-on-write and have space allocated to them
on-demand from an underlying storage pool. They can also be nested and snapped.
The diagram below shows 4 subvolumes. &lsquo;Subvolume 2&rsquo; and &lsquo;Subvolume 3&rsquo; are
nested, whereas &lsquo;Subvolume 4&rsquo; shows its own internal directory tree.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_subvolume.jpg" alt="" /></p>

<p>Snapshots are a point-in-time read-write copy of an entire subvolume. They
exist directly below the subvolume they were created from. You can create
snapshots of snapshots as shown in the diagram below.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_snapshots.jpg" alt="" /></p>

<p>Btrfs allocates space to subvolumes and snapshots on demand from an underlying
pool of storage. The unit of allocation is referred to as a <em>chunk</em>, and
<em>chunks</em> are normally ~1GB in size.</p>

<p>Snapshots are first-class citizens in a Btrfs filesystem. This means that they
look, feel, and operate just like regular subvolumes. The technology required
to create them is built directly into the Btrfs filesystem thanks to its
native copy-on-write design. This means that Btrfs snapshots are space
efficient with little or no performance overhead. The diagram below shows a
subvolume and its snapshot sharing the same data.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_pool.jpg" alt="" /></p>

<p>Docker&rsquo;s <code>btrfs</code> storage driver stores every image layer and container in its
own Btrfs subvolume or snapshot. The base layer of an image is stored as a
subvolume whereas child image layers and containers are stored as snapshots.
This is shown in the diagram below.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_container_layer.jpg" alt="" /></p>

<p>The high level process for creating images and containers on Docker hosts
running the <code>btrfs</code> driver is as follows:</p>

<ol>
<li><p>The image&rsquo;s base layer is stored in a Btrfs <em>subvolume</em> under
<code>/var/lib/docker/btrfs/subvolumes</code>.</p></li>

<li><p>Subsequent image layers are stored as a Btrfs <em>snapshot</em> of the parent
layer&rsquo;s subvolume or snapshot.</p>

<p>The diagram below shows a three-layer image. The base layer is a subvolume.
Layer 1 is a snapshot of the base layer&rsquo;s subvolume. Layer 2 is a snapshot of
Layer 1&rsquo;s snapshot.</p>

<p><img src="../engine/userguide/storagedriver/images/btfs_constructs.jpg" alt="" /></p></li>
</ol>

<p>As of Docker 1.10, image layer IDs no longer correspond to directory names
under <code>/var/lib/docker/</code>.</p>

<h2 id="image-and-container-on-disk-constructs">Image and container on-disk constructs</h2>

<p>Image layers and containers are visible in the Docker host&rsquo;s filesystem at
<code>/var/lib/docker/btrfs/subvolumes/</code>. However, as previously stated, directory
names no longer correspond to image layer IDs. That said, directories for
containers are present even for containers with a stopped status. This is
because the <code>btrfs</code> storage driver mounts a default, top-level subvolume at
<code>/var/lib/docker/subvolumes</code>. All other subvolumes and snapshots exist below
that as Btrfs filesystem objects and not as individual mounts.</p>

<p>Because Btrfs works at the filesystem level and not the block level, each image
and container layer can be browsed in the filesystem using normal Unix
commands. The example below shows a truncated output of an <code>ls -l</code> command against an
image layer:</p>

<pre><code>$ ls -l /var/lib/docker/btrfs/subvolumes/0a17decee4139b0de68478f149cc16346f5e711c5ae3bb969895f22dd6723751/
total 0
drwxr-xr-x 1 root root 1372 Oct  9 08:39 bin
drwxr-xr-x 1 root root    0 Apr 10  2014 boot
drwxr-xr-x 1 root root  882 Oct  9 08:38 dev
drwxr-xr-x 1 root root 2040 Oct 12 17:27 etc
drwxr-xr-x 1 root root    0 Apr 10  2014 home
...output truncated...
</code></pre>

<h2 id="container-reads-and-writes-with-btrfs">Container reads and writes with Btrfs</h2>

<p>A container is a space-efficient snapshot of an image. Metadata in the snapshot
points to the actual data blocks in the storage pool. This is the same as with
a subvolume. Therefore, reads performed against a snapshot are essentially the
same as reads performed against a subvolume. As a result, no performance
overhead is incurred from the Btrfs driver.</p>

<p>Writing a new file to a container invokes an allocate-on-demand operation to
allocate a new data block to the container&rsquo;s snapshot. The file is then written to
this new space. The allocate-on-demand operation is native to all writes with
Btrfs and is the same as writing new data to a subvolume. As a result, writing
new files to a container&rsquo;s snapshot operates at native Btrfs speeds.</p>

<p>Updating an existing file in a container causes a copy-on-write operation
(technically <em>redirect-on-write</em>). The driver leaves the original data and
allocates new space to the snapshot. The updated data is written to this new
space. Then, the driver updates the filesystem metadata in the snapshot to
point to this new data. The original data is preserved in-place for subvolumes
and snapshots further up the tree. This behavior is native to copy-on-write
filesystems like Btrfs and incurs very little overhead.</p>

<p>With Btrfs, writing and updating lots of small files can result in slow
performance. More on this later.</p>

<h2 id="configuring-docker-with-btrfs">Configuring Docker with Btrfs</h2>

<p>The <code>btrfs</code> storage driver only operates on a Docker host where
<code>/var/lib/docker</code> is mounted as a Btrfs filesystem. The following procedure
shows how to configure Btrfs on Ubuntu 14.04 LTS.</p>

<h3 id="prerequisites">Prerequisites</h3>

<p>If you have already used the Docker daemon on your Docker host and have images
you want to keep, <code>push</code> them to Docker Hub or your private Docker Trusted
Registry before attempting this procedure.</p>

<p>Stop the Docker daemon. Then, ensure that you have a spare block device at
<code>/dev/xvdb</code>. The device identifier may be different in your environment and you
should substitute your own values throughout the procedure.</p>

<p>The procedure also assumes your kernel has the appropriate Btrfs modules
loaded. To verify this, use the following command:</p>

<pre><code>$ cat /proc/filesystems | grep btrfs
</code></pre>

<h3 id="configure-btrfs-on-ubuntu-14-04-lts">Configure Btrfs on Ubuntu 14.04 LTS</h3>

<p>Assuming your system meets the prerequisites, do the following:</p>

<ol>
<li><p>Install the &ldquo;btrfs-tools&rdquo; package.</p>

<pre><code>$ sudo apt-get install btrfs-tools
Reading package lists... Done
Building dependency tree
&lt;output truncated&gt;
</code></pre></li>

<li><p>Create the Btrfs storage pool.</p>

<p>Btrfs storage pools are created with the <code>mkfs.btrfs</code> command. Passing
multiple devices to the <code>mkfs.btrfs</code> command creates a pool across all of those
devices. Here you create a pool with a single device at <code>/dev/xvdb</code>.</p>

<pre><code>$ sudo mkfs.btrfs -f /dev/xvdb
WARNING! - Btrfs v3.12 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
fs created label (null) on /dev/xvdb
nodesize 16384 leafsize 16384 sectorsize 4096 size 4.00GiB
Btrfs v3.12
</code></pre>

<p>Be sure to substitute <code>/dev/xvdb</code> with the appropriate device(s) on your
system.</p>

<blockquote>
<p><strong>Warning</strong>: Take note of the warning about Btrfs being experimental. As
noted earlier, Btrfs is not currently recommended for production deployments
unless you already have extensive experience.</p>
</blockquote></li>

<li><p>If it does not already exist, create a directory for the Docker host&rsquo;s local
storage area at <code>/var/lib/docker</code>.</p>

<pre><code>$ sudo mkdir /var/lib/docker
</code></pre></li>

<li><p>Configure the system to automatically mount the Btrfs filesystem each time the system boots.</p>

<p>a. Obtain the Btrfs filesystem&rsquo;s UUID.</p>

<pre><code>$ sudo blkid /dev/xvdb
/dev/xvdb: UUID=&quot;a0ed851e-158b-4120-8416-c9b072c8cf47&quot; UUID_SUB=&quot;c3927a64-4454-4eef-95c2-a7d44ac0cf27&quot; TYPE=&quot;btrfs&quot;
</code></pre>

<p>b. Create an <code>/etc/fstab</code> entry to automatically mount <code>/var/lib/docker</code>
each time the system boots. Either of the following lines will work; just
remember to substitute the UUID value with the value obtained from the previous
command.</p>

<pre><code>/dev/xvdb /var/lib/docker btrfs defaults 0 0
UUID=&quot;a0ed851e-158b-4120-8416-c9b072c8cf47&quot; /var/lib/docker btrfs defaults 0 0
</code></pre></li>

<li><p>Mount the new filesystem and verify the operation.</p>

<pre><code>$ sudo mount -a
$ mount
/dev/xvda1 on / type ext4 (rw,discard)
&lt;output truncated&gt;
/dev/xvdb on /var/lib/docker type btrfs (rw)
</code></pre>

<p>The last line in the output above shows <code>/dev/xvdb</code> mounted at
<code>/var/lib/docker</code> as Btrfs.</p></li>
</ol>

<p>Now that you have a Btrfs filesystem mounted at <code>/var/lib/docker</code>, the daemon
should automatically load with the <code>btrfs</code> storage driver.</p>

<ol>
<li><p>Start the Docker daemon.</p>

<pre><code>$ sudo service docker start
docker start/running, process 2315
</code></pre>

<p>The procedure for starting the Docker daemon may differ depending on the
Linux distribution you are using.</p>

<p>You can force the Docker daemon to start with the <code>btrfs</code> storage
driver by either passing the <code>--storage-driver=btrfs</code> flag to the <code>docker
daemon</code> at startup, or adding it to the <code>DOCKER_OPTS</code> line in the Docker config
file.</p></li>

<li><p>Verify the storage driver with the <code>docker info</code> command.</p>

<pre><code>$ sudo docker info
Containers: 0
Images: 0
Storage Driver: btrfs
[...]
</code></pre></li>
</ol>

<p>Your Docker host is now configured to use the <code>btrfs</code> storage driver.</p>

<h2 id="btrfs-and-docker-performance">Btrfs and Docker performance</h2>

<p>There are several factors that influence Docker&rsquo;s performance under the <code>btrfs</code>
storage driver.</p>

<ul>
<li><p><strong>Page caching</strong>. Btrfs does not support page cache sharing. This means that
<em>n</em> containers accessing the same file require <em>n</em> copies to be cached. As a
result, the <code>btrfs</code> driver may not be the best choice for PaaS and other high
density container use cases.</p></li>

<li><p><strong>Small writes</strong>. Containers performing lots of small writes (including
Docker hosts that start and stop many containers) can lead to poor use of Btrfs
chunks. This can ultimately lead to out-of-space conditions on your Docker
host and stop it working. This is currently a major drawback to using current
versions of Btrfs.</p>

<p>If you use the <code>btrfs</code> storage driver, closely monitor the free space on
your Btrfs filesystem using the <code>btrfs filesys show</code> command. Do not trust the
output of normal Unix commands such as <code>df</code>; always use the Btrfs native
commands.</p></li>

<li><p><strong>Sequential writes</strong>. Btrfs writes data to disk via a journaling technique.
This can impact sequential writes, reducing performance by up to half.</p></li>

<li><p><strong>Fragmentation</strong>. Fragmentation is a natural byproduct of copy-on-write
filesystems like Btrfs. Many small random writes can compound this issue. It
can manifest as CPU spikes on Docker hosts using SSD media and head thrashing
on Docker hosts using spinning media. Both of these result in poor performance.</p>

<p>Recent versions of Btrfs allow you to specify <code>autodefrag</code> as a mount
option. This mode attempts to detect random writes and defragment them. You
should perform your own tests before enabling this option on your Docker hosts.
Some tests have shown this option has a negative performance impact on Docker
hosts performing lots of small writes (including systems that start and stop
many containers).</p></li>

<li><p><strong>Solid State Devices (SSD)</strong>. Btrfs has native optimizations for SSD media.
To enable these, mount with the <code>-o ssd</code> mount option. These optimizations
include enhanced SSD write performance by avoiding things like <em>seek
optimizations</em> that have no use on SSD media.</p>

<p>Btrfs also supports the TRIM/Discard primitives. However, mounting with the
<code>-o discard</code> mount option can cause performance issues. Therefore, it is
recommended you perform your own tests before using this option.</p></li>

<li><p><strong>Use Data Volumes</strong>. Data volumes provide the best and most predictable
performance. This is because they bypass the storage driver and do not incur
any of the potential overheads introduced by thin provisioning and
copy-on-write. For this reason, you should place heavy write workloads on data
volumes.</p></li>
</ul>
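<p>The free-space guidance above can be sketched as a small monitoring helper. This is an illustration only: the <code>usage_warn</code> function and its 90% threshold are hypothetical, and the native <code>btrfs filesystem</code> commands shown in the comments require root and the btrfs-tools package.</p>

```shell
# Sketch: monitoring Btrfs allocation for /var/lib/docker.
# On a real host you would feed the helper values reported by the
# native tools, not df (both commands below need root and btrfs-tools):
#   sudo btrfs filesystem show /var/lib/docker
#   sudo btrfs filesystem df /var/lib/docker
#
# The helper itself is plain shell; the 90% threshold is illustrative.
usage_warn() {
  used_bytes=$1
  total_bytes=$2
  pct=$(( used_bytes * 100 / total_bytes ))
  if [ "$pct" -ge 90 ]; then
    echo "WARN: btrfs pool ${pct}% allocated"
  else
    echo "OK: btrfs pool ${pct}% allocated"
  fi
}

usage_warn 1073741824 4294967296   # 1 GiB used of a 4 GiB pool -> OK at 25%
```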

<h2 id="related-information">Related Information</h2>

<ul>
<li><a href="../engine/userguide/storagedriver/imagesandcontainers/">Understand images, containers, and storage drivers</a></li>
<li><a href="../engine/userguide/storagedriver/selectadriver/">Select a storage driver</a></li>
<li><a href="../engine/userguide/storagedriver/aufs-driver/">AUFS storage driver in practice</a></li>
<li><a href="../engine/userguide/storagedriver/device-mapper-driver/">Device Mapper storage driver in practice</a></li>
</ul>
</description>
</item>

<item>
<title>Build your own bridge</title>
<link>http://docs-stage.docker.com/engine/userguide/networking/default_network/build-bridges/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/userguide/networking/default_network/build-bridges/</guid>
<description>

<h1 id="build-your-own-bridge">Build your own bridge</h1>

<p>This section explains how to build your own bridge to replace the Docker default
bridge. This is a <code>bridge</code> network named <code>bridge</code> created automatically when you
install Docker.</p>

<blockquote>
<p><strong>Note</strong>: The <a href="../engine/userguide/networking/dockernetworks/">Docker networks feature</a> allows you to
create user-defined networks in addition to the default bridge network.</p>
</blockquote>

<p>You can set up your own bridge before starting Docker and use <code>-b BRIDGE</code> or
<code>--bridge=BRIDGE</code> to tell Docker to use your bridge instead. If you already
have Docker up and running with its default <code>docker0</code> still configured,
you will want to begin by stopping the service and removing the interface:</p>

<pre><code># Stopping Docker and removing docker0

$ sudo service docker stop
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
$ sudo iptables -t nat -F POSTROUTING
</code></pre>

<p>Then, before starting the Docker service, create your own bridge and give it
whatever configuration you want. Here we will create a simple enough bridge
that we really could just have used the options in the previous section to
customize <code>docker0</code>, but it will be enough to illustrate the technique.</p>

<pre><code># Create our own bridge

$ sudo brctl addbr bridge0
$ sudo ip addr add 192.168.5.1/24 dev bridge0
$ sudo ip link set dev bridge0 up

# Confirming that our bridge is up and running

$ ip addr show bridge0
4: bridge0: &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state UP group default
    link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 scope global bridge0
       valid_lft forever preferred_lft forever

# Tell Docker about it and restart (on Ubuntu)

$ echo 'DOCKER_OPTS=&quot;-b=bridge0&quot;' &gt;&gt; /etc/default/docker
$ sudo service docker start

# Confirming new outgoing NAT masquerade is set up

$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  192.168.5.0/24      0.0.0.0/0
</code></pre>

<p>The result should be that the Docker server starts successfully and is now
prepared to bind containers to the new bridge. After pausing to verify the
bridge&rsquo;s configuration, try creating a container -- you will see that its IP
address is in your new IP address range, which Docker will have auto-detected.</p>

<p>You can use the <code>brctl show</code> command to see Docker add and remove interfaces
from the bridge as you start and stop containers, and can run <code>ip addr</code> and <code>ip
route</code> inside a container to see that it has been given an address in the
bridge&rsquo;s IP address range and has been told to use the Docker host&rsquo;s IP address
on the bridge as its default gateway to the rest of the Internet.</p>
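<p>The address check described above can be sketched as follows. This is illustrative only: the <code>in_bridge_range</code> helper is hypothetical, and the <code>docker inspect</code> command in the comment shows how you might obtain a container&rsquo;s address on a live host.</p>

```shell
# Sketch: confirming a new container received an address in the custom
# bridge's 192.168.5.0/24 range. On a live host you could obtain the
# address with, e.g.:
#   docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container>
#
# The check itself is plain shell; a prefix match is sufficient for a /24.
in_bridge_range() {
  case "$1" in
    192.168.5.*) echo "$1: in bridge0 range" ;;
    *)           echo "$1: outside bridge0 range" ;;
  esac
}

in_bridge_range 192.168.5.23
in_bridge_range 172.17.0.2
```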
</description>
</item>

<item>
<title>Build your own images</title>
<link>http://docs-stage.docker.com/engine/userguide/containers/dockerimages/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://docs-stage.docker.com/engine/userguide/containers/dockerimages/</guid>
<description>

<h1 id="build-your-own-images">Build your own images</h1>

<p>Docker images are the basis of containers. Each time you&rsquo;ve used <code>docker run</code>
you told it which image you wanted. In the previous sections of the guide you
used Docker images that already exist, for example the <code>ubuntu</code> image and the
<code>training/webapp</code> image.</p>

<p>You also discovered that Docker stores downloaded images on the Docker host. If
an image isn&rsquo;t already present on the host then it&rsquo;ll be downloaded from a
registry: by default the <a href="https://hub.docker.com">Docker Hub Registry</a>.</p>

<p>In this section you&rsquo;re going to explore Docker images a bit more
including:</p>

<ul>
<li>Managing and working with images locally on your Docker host.</li>
<li>Creating basic images.</li>
<li>Uploading images to <a href="https://hub.docker.com">Docker Hub Registry</a>.</li>
</ul>

<h2 id="listing-images-on-the-host">Listing images on the host</h2>

<p>Let&rsquo;s start with listing the images you have locally on your host. You can
do this using the <code>docker images</code> command like so:</p>

<pre><code>$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              14.04               1d073211c498        3 days ago          187.9 MB
busybox             latest              2c5ac3f849df        5 days ago          1.113 MB
training/webapp     latest              54bb4e8718e8        5 months ago        348.7 MB
</code></pre>

<p>You can see the images you&rsquo;ve previously used in the user guide.
Each has been downloaded from <a href="https://hub.docker.com">Docker Hub</a> when you
launched a container using that image. When you list images, you get three crucial pieces of information in the listing.</p>

<ul>
<li>What repository they came from, for example <code>ubuntu</code>.</li>
<li>The tags for each image, for example <code>14.04</code>.</li>
<li>The image ID of each image.</li>
</ul>

<blockquote>
<p><strong>Tip:</strong>
You can use <a href="https://github.com/justone/dockviz">a third-party dockviz tool</a>
or the <a href="https://imagelayers.io/">Image layers site</a> to display
visualizations of image data.</p>
</blockquote>

<p>A repository potentially holds multiple variants of an image. In the case of
our <code>ubuntu</code> image you can see multiple variants covering Ubuntu 10.04, 12.04,
12.10, 13.04, 13.10 and 14.04. Each variant is identified by a tag and you can
refer to a tagged image like so:</p>

<pre><code>ubuntu:14.04
</code></pre>

<p>So when you run a container you refer to a tagged image like so:</p>

<pre><code>$ docker run -t -i ubuntu:14.04 /bin/bash
</code></pre>

<p>If instead you wanted to run an Ubuntu 12.04 image you&rsquo;d use:</p>

<pre><code>$ docker run -t -i ubuntu:12.04 /bin/bash
</code></pre>

<p>If you don&rsquo;t specify a variant, for example you just use <code>ubuntu</code>, then Docker
will default to using the <code>ubuntu:latest</code> image.</p>

<blockquote>
<p><strong>Tip:</strong>
You should always specify an image tag, for example <code>ubuntu:14.04</code>.
That way, you always know exactly what variant of an image you are using.
This is useful for troubleshooting and debugging.</p>
</blockquote>
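<p>The tag-defaulting rule described above can be illustrated with a tiny sketch. The <code>normalize_ref</code> helper is hypothetical and deliberately simple (it ignores registry hosts with ports, which also contain a colon); it only mirrors how an untagged reference is treated as <code>:latest</code>.</p>

```shell
# Sketch of the defaulting rule: an image reference without a tag is
# treated as :latest by Docker. Illustrative helper, not Docker tooling.
normalize_ref() {
  case "$1" in
    *:*) echo "$1" ;;          # tag already present
    *)   echo "$1:latest" ;;   # Docker's implicit default
  esac
}

normalize_ref ubuntu         # -> ubuntu:latest
normalize_ref ubuntu:14.04   # -> ubuntu:14.04
```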

<h2 id="getting-a-new-image">Getting a new image</h2>

<p>So how do you get new images? Docker will automatically download any image
you use that isn&rsquo;t already present on the Docker host. But this can potentially
add some time to the launch of a container. If you want to pre-load an image you
can download it using the <code>docker pull</code> command. Suppose you&rsquo;d like to
download the <code>centos</code> image.</p>

<pre><code>$ docker pull centos
Pulling repository centos
b7de3133ff98: Pulling dependent layers
5cc9e91966f7: Pulling fs layer
511136ea3c5a: Download complete
ef52fb1fe610: Download complete
. . .

Status: Downloaded newer image for centos
</code></pre>

<p>You can see that each layer of the image has been pulled down and now you
can run a container from this image and you won&rsquo;t have to wait to
download the image.</p>

<pre><code>$ docker run -t -i centos /bin/bash
bash-4.1#
</code></pre>

<h2 id="finding-images">Finding images</h2>

<p>One of the features of Docker is that a lot of people have created Docker
images for a variety of purposes. Many of these have been uploaded to
<a href="https://hub.docker.com">Docker Hub</a>. You can search these images on the
<a href="https://hub.docker.com">Docker Hub</a> website.</p>

<p><img src="../engine/userguide/containers/search.png" alt="indexsearch" /></p>

<p>You can also search for images on the command line using the <code>docker search</code>
command. Suppose your team wants an image with Ruby and Sinatra installed on
which to do your web application development. You can search for a suitable image
by using the <code>docker search</code> command to find all the images that contain the
term <code>sinatra</code>.</p>

<pre><code>$ docker search sinatra
NAME                                   DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
training/sinatra                       Sinatra training image                          0                    [OK]
marceldegraaf/sinatra                  Sinatra test app                                0
mattwarren/docker-sinatra-demo                                                         0                    [OK]
luisbebop/docker-sinatra-hello-world                                                   0                    [OK]
bmorearty/handson-sinatra              handson-ruby + Sinatra for Hands on with D...   0
subwiz/sinatra                                                                         0
bmorearty/sinatra                                                                      0
. . .
</code></pre>

<p>You can see the command returns a lot of images that use the term <code>sinatra</code>.
You&rsquo;ve received a list of image names, descriptions, Stars (which measure the
social popularity of images - if a user likes an image then they can &ldquo;star&rdquo; it),
and the Official and Automated build statuses. <a href="https://docs.docker.com/docker-hub/official_repos">Official
Repositories</a> are a carefully
curated set of Docker repositories supported by Docker, Inc. Automated
repositories are <a href="../engine/userguide/containers/dockerrepos/#automated-builds">Automated Builds</a> that allow
you to validate the source and content of an image.</p>

<p>You&rsquo;ve reviewed the images available to use and you decided to use the
<code>training/sinatra</code> image. So far you&rsquo;ve seen two types of image repositories.
Images like <code>ubuntu</code> are called base or root images. These base images
are provided by Docker, Inc. and are built, validated and supported. They can be
identified by their single-word names.</p>

<p>You&rsquo;ve also seen user images, for example the <code>training/sinatra</code> image you&rsquo;ve
chosen. A user image belongs to a member of the Docker community and is built
and maintained by them. You can identify user images as they are always
prefixed with the user name, here <code>training</code>, of the user that created them.</p>

<h2 id="pulling-our-image">Pulling our image</h2>

<p>You&rsquo;ve identified a suitable image, <code>training/sinatra</code>, and now you can download it using the <code>docker pull</code> command.</p>

<pre><code>$ docker pull training/sinatra
</code></pre>

<p>The team can now use this image by running their own containers.</p>

<pre><code>$ docker run -t -i training/sinatra /bin/bash
root@a8cb6ce02d85:/#
</code></pre>

<h2 id="creating-our-own-images">Creating our own images</h2>

<p>The team has found the <code>training/sinatra</code> image pretty useful but it&rsquo;s not quite
what they need and you need to make some changes to it. There are two ways you
can update and create images.</p>

<ol>
<li>You can update a container created from an image and commit the results to an image.</li>
<li>You can use a <code>Dockerfile</code> to specify instructions to create an image.</li>
</ol>

<h3 id="updating-and-committing-an-image">Updating and committing an image</h3>
|
||
|
||
<p>To update an image you first need to create a container from the image
|
||
you&rsquo;d like to update.</p>
|
||
|
||
<pre><code>$ docker run -t -i training/sinatra /bin/bash
|
||
root@0b2616b0e5a8:/#
|
||
</code></pre>
|
||
|
||
<blockquote>
|
||
<p><strong>Note:</strong>
|
||
Take note of the container ID that has been created, <code>0b2616b0e5a8</code>, as you&rsquo;ll
|
||
need it in a moment.</p>
|
||
</blockquote>
|
||
|
||
<p>Inside our running container let&rsquo;s add the <code>json</code> gem.</p>
|
||
|
||
<pre><code>root@0b2616b0e5a8:/# gem install json
|
||
</code></pre>
|
||
|
||
<p>Once this has completed let&rsquo;s exit our container using the <code>exit</code>
|
||
command.</p>
|
||
|
||
<p>Now you have a container with the change you want to make. You can then
|
||
commit a copy of this container to an image using the <code>docker commit</code>
|
||
command.</p>
|
||
|
||
<pre><code>$ docker commit -m &quot;Added json gem&quot; -a &quot;Kate Smith&quot; \
|
||
0b2616b0e5a8 ouruser/sinatra:v2
|
||
4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c
|
||
</code></pre>
|
||
|
||
<p>Here you&rsquo;ve used the <code>docker commit</code> command. You&rsquo;ve specified two flags: <code>-m</code>
|
||
and <code>-a</code>. The <code>-m</code> flag allows us to specify a commit message, much like you
|
||
would with a commit on a version control system. The <code>-a</code> flag allows us to
|
||
specify an author for our update.</p>
|
||
|
||
<p>You&rsquo;ve also specified the container you want to create this new image from,
|
||
<code>0b2616b0e5a8</code> (the ID you recorded earlier) and you&rsquo;ve specified a target for
|
||
the image:</p>
|
||
|
||
<pre><code>ouruser/sinatra:v2
|
||
</code></pre>
|
||
|
||
<p>Break this target down. It consists of a new user, <code>ouruser</code>, that you&rsquo;re
|
||
writing this image to. You&rsquo;ve also specified the name of the image, here you&rsquo;re
|
||
keeping the original image name <code>sinatra</code>. Finally you&rsquo;re specifying a tag for
|
||
the image: <code>v2</code>.</p>
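
<p>The <code>user/name:tag</code> target format can be illustrated with a short parser. The snippet below is a simplified sketch only (the function name is ours, not a Docker API, and it ignores registry hostnames that real Docker references may carry):</p>

```python
def parse_image_ref(ref):
    """Split a simplified 'user/name:tag' reference into its parts.

    A loose illustration only -- real Docker references may also include
    a registry host, and the tag defaults to 'latest' when omitted.
    """
    name_part, _, tag = ref.partition(":")
    user, _, name = name_part.partition("/")
    return {"user": user, "name": name, "tag": tag or "latest"}

print(parse_image_ref("ouruser/sinatra:v2"))
# {'user': 'ouruser', 'name': 'sinatra', 'tag': 'v2'}
```
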

<p>You can then look at your new <code>ouruser/sinatra</code> image using the <code>docker images</code>
command.</p>

<pre><code>$ docker images
REPOSITORY          TAG       IMAGE ID       CREATED        SIZE
training/sinatra    latest    5bc342fa0b91   10 hours ago   446.7 MB
ouruser/sinatra     v2        3c59e02ddd1a   10 hours ago   446.7 MB
ouruser/sinatra     latest    5db5f8471261   10 hours ago   446.7 MB
</code></pre>

<p>To create a container from your new image, run:</p>

<pre><code>$ docker run -t -i ouruser/sinatra:v2 /bin/bash
root@78e82f680994:/#
</code></pre>

<h3 id="building-an-image-from-a-dockerfile">Building an image from a <code>Dockerfile</code></h3>

<p>Using the <code>docker commit</code> command is a pretty simple way of extending an image,
but it&rsquo;s a bit cumbersome and it&rsquo;s not easy to share a development process for
images amongst a team. Instead you can use a new command, <code>docker build</code>, to
build new images from scratch.</p>

<p>To do this you create a <code>Dockerfile</code> that contains a set of instructions that
tell Docker how to build your image.</p>

<p>First, create a directory and a <code>Dockerfile</code>.</p>

<pre><code>$ mkdir sinatra
$ cd sinatra
$ touch Dockerfile
</code></pre>

<p>If you are using Docker Machine on Windows, you can access your host
directory by changing to <code>/c/Users/your_user_name</code>.</p>

<p>Each instruction creates a new layer of the image. Try a simple example now,
building your own Sinatra image for your fictitious development team.</p>

<pre><code># This is a comment
FROM ubuntu:14.04
MAINTAINER Kate Smith &lt;ksmith@example.com&gt;
RUN apt-get update &amp;&amp; apt-get install -y ruby ruby-dev
RUN gem install sinatra
</code></pre>

<p>Examine what your <code>Dockerfile</code> does. Each line pairs a capitalized instruction
with a statement.</p>

<pre><code>INSTRUCTION statement
</code></pre>

<blockquote>
<p><strong>Note:</strong> You use <code>#</code> to indicate a comment.</p>
</blockquote>

<p>The first instruction, <code>FROM</code>, tells Docker what the source of the image is; in
this case you&rsquo;re basing your new image on an Ubuntu 14.04 image. The next instruction, <code>MAINTAINER</code>, specifies who maintains the new image.</p>

<p>Lastly, you&rsquo;ve specified two <code>RUN</code> instructions. A <code>RUN</code> instruction executes
a command inside the image, for example installing a package. Here you&rsquo;re
updating the APT cache, installing Ruby and RubyGems, and then installing the
Sinatra gem.</p>

<p>Now take your <code>Dockerfile</code> and use the <code>docker build</code> command to build an image.</p>

<pre><code>$ docker build -t ouruser/sinatra:v2 .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 1 : FROM ubuntu:14.04
 ---&gt; e54ca5efa2e9
Step 2 : MAINTAINER Kate Smith &lt;ksmith@example.com&gt;
 ---&gt; Using cache
 ---&gt; 851baf55332b
Step 3 : RUN apt-get update &amp;&amp; apt-get install -y ruby ruby-dev
 ---&gt; Running in 3a2558904e9b
Selecting previously unselected package libasan0:amd64.
(Reading database ... 11518 files and directories currently installed.)
Preparing to unpack .../libasan0_4.8.2-19ubuntu1_amd64.deb ...
Unpacking libasan0:amd64 (4.8.2-19ubuntu1) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../libatomic1_4.8.2-19ubuntu1_amd64.deb ...
Unpacking libatomic1:amd64 (4.8.2-19ubuntu1) ...
Selecting previously unselected package libgmp10:amd64.
Preparing to unpack .../libgmp10_2%3a5.1.3+dfsg-1ubuntu1_amd64.deb ...
Unpacking libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ...
Selecting previously unselected package libisl10:amd64.
Preparing to unpack .../libisl10_0.12.2-1_amd64.deb ...
Unpacking libisl10:amd64 (0.12.2-1) ...
Selecting previously unselected package libcloog-isl4:amd64.
Preparing to unpack .../libcloog-isl4_0.18.2-1_amd64.deb ...
Unpacking libcloog-isl4:amd64 (0.18.2-1) ...
Selecting previously unselected package libgomp1:amd64.
Preparing to unpack .../libgomp1_4.8.2-19ubuntu1_amd64.deb ...
Unpacking libgomp1:amd64 (4.8.2-19ubuntu1) ...
Selecting previously unselected package libitm1:amd64.
Preparing to unpack .../libitm1_4.8.2-19ubuntu1_amd64.deb ...
Unpacking libitm1:amd64 (4.8.2-19ubuntu1) ...
Selecting previously unselected package libmpfr4:amd64.
Preparing to unpack .../libmpfr4_3.1.2-1_amd64.deb ...
Unpacking libmpfr4:amd64 (3.1.2-1) ...
Selecting previously unselected package libquadmath0:amd64.
Preparing to unpack .../libquadmath0_4.8.2-19ubuntu1_amd64.deb ...
Unpacking libquadmath0:amd64 (4.8.2-19ubuntu1) ...
Selecting previously unselected package libtsan0:amd64.
Preparing to unpack .../libtsan0_4.8.2-19ubuntu1_amd64.deb ...
Unpacking libtsan0:amd64 (4.8.2-19ubuntu1) ...
Selecting previously unselected package libyaml-0-2:amd64.
Preparing to unpack .../libyaml-0-2_0.1.4-3ubuntu3_amd64.deb ...
Unpacking libyaml-0-2:amd64 (0.1.4-3ubuntu3) ...
Selecting previously unselected package libmpc3:amd64.
Preparing to unpack .../libmpc3_1.0.1-1ubuntu1_amd64.deb ...
Unpacking libmpc3:amd64 (1.0.1-1ubuntu1) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.1f-1ubuntu2.4_amd64.deb ...
Unpacking openssl (1.0.1f-1ubuntu2.4) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20130906ubuntu2_all.deb ...
Unpacking ca-certificates (20130906ubuntu2) ...
Selecting previously unselected package manpages.
Preparing to unpack .../manpages_3.54-1ubuntu1_all.deb ...
Unpacking manpages (3.54-1ubuntu1) ...
Selecting previously unselected package binutils.
Preparing to unpack .../binutils_2.24-5ubuntu3_amd64.deb ...
Unpacking binutils (2.24-5ubuntu3) ...
Selecting previously unselected package cpp-4.8.
Preparing to unpack .../cpp-4.8_4.8.2-19ubuntu1_amd64.deb ...
Unpacking cpp-4.8 (4.8.2-19ubuntu1) ...
Selecting previously unselected package cpp.
Preparing to unpack .../cpp_4%3a4.8.2-1ubuntu6_amd64.deb ...
Unpacking cpp (4:4.8.2-1ubuntu6) ...
Selecting previously unselected package libgcc-4.8-dev:amd64.
Preparing to unpack .../libgcc-4.8-dev_4.8.2-19ubuntu1_amd64.deb ...
Unpacking libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ...
Selecting previously unselected package gcc-4.8.
Preparing to unpack .../gcc-4.8_4.8.2-19ubuntu1_amd64.deb ...
Unpacking gcc-4.8 (4.8.2-19ubuntu1) ...
Selecting previously unselected package gcc.
Preparing to unpack .../gcc_4%3a4.8.2-1ubuntu6_amd64.deb ...
Unpacking gcc (4:4.8.2-1ubuntu6) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../libc-dev-bin_2.19-0ubuntu6_amd64.deb ...
Unpacking libc-dev-bin (2.19-0ubuntu6) ...
Selecting previously unselected package linux-libc-dev:amd64.
Preparing to unpack .../linux-libc-dev_3.13.0-30.55_amd64.deb ...
Unpacking linux-libc-dev:amd64 (3.13.0-30.55) ...
Selecting previously unselected package libc6-dev:amd64.
Preparing to unpack .../libc6-dev_2.19-0ubuntu6_amd64.deb ...
Unpacking libc6-dev:amd64 (2.19-0ubuntu6) ...
Selecting previously unselected package ruby.
Preparing to unpack .../ruby_1%3a1.9.3.4_all.deb ...
Unpacking ruby (1:1.9.3.4) ...
Selecting previously unselected package ruby1.9.1.
Preparing to unpack .../ruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ...
Unpacking ruby1.9.1 (1.9.3.484-2ubuntu1) ...
Selecting previously unselected package libruby1.9.1.
Preparing to unpack .../libruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ...
Unpacking libruby1.9.1 (1.9.3.484-2ubuntu1) ...
Selecting previously unselected package manpages-dev.
Preparing to unpack .../manpages-dev_3.54-1ubuntu1_all.deb ...
Unpacking manpages-dev (3.54-1ubuntu1) ...
Selecting previously unselected package ruby1.9.1-dev.
Preparing to unpack .../ruby1.9.1-dev_1.9.3.484-2ubuntu1_amd64.deb ...
Unpacking ruby1.9.1-dev (1.9.3.484-2ubuntu1) ...
Selecting previously unselected package ruby-dev.
Preparing to unpack .../ruby-dev_1%3a1.9.3.4_all.deb ...
Unpacking ruby-dev (1:1.9.3.4) ...
Setting up libasan0:amd64 (4.8.2-19ubuntu1) ...
Setting up libatomic1:amd64 (4.8.2-19ubuntu1) ...
Setting up libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ...
Setting up libisl10:amd64 (0.12.2-1) ...
Setting up libcloog-isl4:amd64 (0.18.2-1) ...
Setting up libgomp1:amd64 (4.8.2-19ubuntu1) ...
Setting up libitm1:amd64 (4.8.2-19ubuntu1) ...
Setting up libmpfr4:amd64 (3.1.2-1) ...
Setting up libquadmath0:amd64 (4.8.2-19ubuntu1) ...
Setting up libtsan0:amd64 (4.8.2-19ubuntu1) ...
Setting up libyaml-0-2:amd64 (0.1.4-3ubuntu3) ...
Setting up libmpc3:amd64 (1.0.1-1ubuntu1) ...
Setting up openssl (1.0.1f-1ubuntu2.4) ...
Setting up ca-certificates (20130906ubuntu2) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
Setting up manpages (3.54-1ubuntu1) ...
Setting up binutils (2.24-5ubuntu3) ...
Setting up cpp-4.8 (4.8.2-19ubuntu1) ...
Setting up cpp (4:4.8.2-1ubuntu6) ...
Setting up libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ...
Setting up gcc-4.8 (4.8.2-19ubuntu1) ...
Setting up gcc (4:4.8.2-1ubuntu6) ...
Setting up libc-dev-bin (2.19-0ubuntu6) ...
Setting up linux-libc-dev:amd64 (3.13.0-30.55) ...
Setting up libc6-dev:amd64 (2.19-0ubuntu6) ...
Setting up manpages-dev (3.54-1ubuntu1) ...
Setting up libruby1.9.1 (1.9.3.484-2ubuntu1) ...
Setting up ruby1.9.1-dev (1.9.3.484-2ubuntu1) ...
Setting up ruby-dev (1:1.9.3.4) ...
Setting up ruby (1:1.9.3.4) ...
Setting up ruby1.9.1 (1.9.3.484-2ubuntu1) ...
Processing triggers for libc-bin (2.19-0ubuntu6) ...
Processing triggers for ca-certificates (20130906ubuntu2) ...
Updating certificates in /etc/ssl/certs... 164 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
 ---&gt; c55c31703134
Removing intermediate container 3a2558904e9b
Step 4 : RUN gem install sinatra
 ---&gt; Running in 6b81cb6313e5
unable to convert &quot;\xC3&quot; to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping
unable to convert &quot;\xC3&quot; to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping
Successfully installed rack-1.5.2
Successfully installed tilt-1.4.1
Successfully installed rack-protection-1.5.3
Successfully installed sinatra-1.4.5
4 gems installed
Installing ri documentation for rack-1.5.2...
Installing ri documentation for tilt-1.4.1...
Installing ri documentation for rack-protection-1.5.3...
Installing ri documentation for sinatra-1.4.5...
Installing RDoc documentation for rack-1.5.2...
Installing RDoc documentation for tilt-1.4.1...
Installing RDoc documentation for rack-protection-1.5.3...
Installing RDoc documentation for sinatra-1.4.5...
 ---&gt; 97feabe5d2ed
Removing intermediate container 6b81cb6313e5
Successfully built 97feabe5d2ed
</code></pre>

<p>You&rsquo;ve run the <code>docker build</code> command and used the <code>-t</code> flag to identify
your new image as belonging to the user <code>ouruser</code> and the repository <code>sinatra</code>,
and given it the tag <code>v2</code>.</p>

<p>You&rsquo;ve also specified the location of your <code>Dockerfile</code> using <code>.</code> to
indicate a <code>Dockerfile</code> in the current directory.</p>

<blockquote>
<p><strong>Note:</strong>
You can also specify a path to a <code>Dockerfile</code>.</p>
</blockquote>

<p>Now you can see the build process at work. The first thing Docker does is
upload the build context: basically the contents of the directory you&rsquo;re
building in. This happens because the Docker daemon does the actual
build of the image and it needs the local context to do it.</p>
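
<p>Uploading the build context amounts to packaging the build directory and sending it to the daemon. A rough sketch of that packaging step (a simplification: the real client also honors <code>.dockerignore</code> and streams the archive rather than buffering it):</p>

```python
import io
import os
import tarfile
import tempfile

def pack_build_context(path):
    """Tar up a directory the way a client might ship a build context (sketch only)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(path, arcname=".")  # archive everything relative to the context root
    return buf.getvalue()

# Example: package a tiny context containing only a Dockerfile.
with tempfile.TemporaryDirectory() as ctx:
    with open(os.path.join(ctx, "Dockerfile"), "w") as f:
        f.write("FROM ubuntu:14.04\n")
    blob = pack_build_context(ctx)
    with tarfile.open(fileobj=io.BytesIO(blob)) as tar:
        names = tar.getnames()
    print(names)
```
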

<p>Next you can see each instruction in the <code>Dockerfile</code> being executed
step-by-step. Each step creates a new container, runs
the instruction inside that container, and then commits the change,
just like the <code>docker commit</code> work flow you saw earlier. When all the
instructions have executed, you&rsquo;re left with the <code>97feabe5d2ed</code> image
(also helpfully tagged as <code>ouruser/sinatra:v2</code>), and all intermediate
containers are removed to clean things up.</p>
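
<p>The step-by-step flow above can be sketched as a loop: each instruction runs on top of the previous layer and the result is committed as the next layer. This is only a conceptual model of the build loop, not Docker&rsquo;s actual implementation:</p>

```python
def build(instructions):
    """Conceptual sketch of the docker build commit loop (not real Docker code)."""
    layers = []
    parent = "scratch"
    for step, instruction in enumerate(instructions, start=1):
        # Each step notionally runs the instruction in a temporary container
        # based on `parent`, then commits the result as a new layer.
        layer_id = "layer-%d" % step
        layers.append({"id": layer_id, "parent": parent, "instruction": instruction})
        parent = layer_id  # the intermediate container is then removed
    return layers

image = build(["FROM ubuntu:14.04", "RUN gem install sinatra"])
print(image[-1]["id"])  # the last committed layer identifies the final image
```
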

<blockquote>
<p><strong>Note:</strong>
An image can&rsquo;t have more than 127 layers regardless of the storage driver.
This limitation is set globally to encourage optimization of the overall
size of images.</p>
</blockquote>

<p>You can then create a container from your new image.</p>

<pre><code>$ docker run -t -i ouruser/sinatra:v2 /bin/bash
root@8196968dac35:/#
</code></pre>

<blockquote>
<p><strong>Note:</strong>
This is just a brief introduction to creating images. We&rsquo;ve
skipped a whole bunch of other instructions that you can use. We&rsquo;ll see more of
those instructions in later sections of the Guide, or you can refer to the
<a href="../engine/reference/builder/"><code>Dockerfile</code></a> reference for a
detailed description and examples of every instruction.
To help you write a clear, readable, maintainable <code>Dockerfile</code>, we&rsquo;ve also
written a <a href="../engine/userguide/eng-image/dockerfile_best-practices/"><code>Dockerfile</code> Best Practices guide</a>.</p>
</blockquote>

<h2 id="setting-tags-on-an-image">Setting tags on an image</h2>

<p>You can also add a tag to an existing image after you commit or build it, using
the <code>docker tag</code> command. Now, add a new tag to your
<code>ouruser/sinatra</code> image.</p>

<pre><code>$ docker tag 5db5f8471261 ouruser/sinatra:devel
</code></pre>

<p>The <code>docker tag</code> command takes the ID of the image, here <code>5db5f8471261</code>, followed by your
user name, the repository name, and the new tag.</p>

<p>Now, see your new tag using the <code>docker images</code> command.</p>

<pre><code>$ docker images ouruser/sinatra
REPOSITORY         TAG       IMAGE ID       CREATED        SIZE
ouruser/sinatra    latest    5db5f8471261   11 hours ago   446.7 MB
ouruser/sinatra    devel     5db5f8471261   11 hours ago   446.7 MB
ouruser/sinatra    v2        5db5f8471261   11 hours ago   446.7 MB
</code></pre>

<h2 id="image-digests">Image Digests</h2>

<p>Images that use the v2 or later format have a content-addressable identifier
called a <code>digest</code>. As long as the input used to generate the image is
unchanged, the digest value is predictable. To list image digest values, use
the <code>--digests</code> flag:</p>

<pre><code>$ docker images --digests | head
REPOSITORY         TAG       DIGEST                                                                    IMAGE ID       CREATED        SIZE
ouruser/sinatra    latest    sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf   5db5f8471261   11 hours ago   446.7 MB
</code></pre>

<p>When pushing or pulling to a 2.0 registry, the <code>push</code> or <code>pull</code> command
output includes the image digest. You can <code>pull</code> using a digest value.</p>

<pre><code>$ docker pull ouruser/sinatra@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
</code></pre>
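
<p>Content-addressable means the identifier is a SHA-256 hash derived from the image content itself, so identical input always yields an identical digest. A minimal illustration of that property, hashing an arbitrary byte string rather than a real Docker manifest:</p>

```python
import hashlib

def digest(content: bytes) -> str:
    # Content addressing: the identifier is derived from the bytes themselves.
    return "sha256:" + hashlib.sha256(content).hexdigest()

manifest = b'{"name": "ouruser/sinatra", "tag": "v2"}'
assert digest(manifest) == digest(manifest)          # same input, same digest
assert digest(manifest) != digest(manifest + b" ")   # any change, new digest
print(digest(manifest))
```
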

<p>You can also reference an image by digest in <code>create</code>, <code>run</code>, and <code>rmi</code> commands, as well as in the
<code>FROM</code> image reference in a Dockerfile.</p>

<h2 id="push-an-image-to-docker-hub">Push an image to Docker Hub</h2>

<p>Once you&rsquo;ve built or created a new image you can push it to <a href="https://hub.docker.com">Docker
Hub</a> using the <code>docker push</code> command. This
allows you to share it with others, either publicly or in <a href="https://hub.docker.com/account/billing-plans/">a
private repository</a>.</p>

<pre><code>$ docker push ouruser/sinatra
The push refers to a repository [ouruser/sinatra] (len: 1)
Sending image list
Pushing repository ouruser/sinatra (3 tags)
. . .
</code></pre>

<h2 id="remove-an-image-from-the-host">Remove an image from the host</h2>

<p>You can also remove images on your Docker host, in a way <a href="../engine/userguide/containers/usingdocker/">similar to
containers</a>, using the <code>docker rmi</code> command.</p>

<p>Delete the <code>training/sinatra</code> image, as you don&rsquo;t need it anymore.</p>

<pre><code>$ docker rmi training/sinatra
Untagged: training/sinatra:latest
Deleted: 5bc342fa0b91cabf65246837015197eecfa24b2213ed6a51a8974ae250fedd8d
Deleted: ed0fffdcdae5eb2c3a55549857a8be7fc8bc4241fb19ad714364cbfd7a56b22f
Deleted: 5c58979d73ae448df5af1d8142436d81116187a7633082650549c52c3a2418f0
</code></pre>

<blockquote>
<p><strong>Note:</strong> To remove an image from the host, make sure
that no containers are actively based on it.</p>
</blockquote>

<h1 id="next-steps">Next steps</h1>

<p>Until now you&rsquo;ve seen how to build individual applications inside Docker
containers. Now learn how to build whole application stacks with Docker
by networking together multiple Docker containers.</p>

<p>Go to <a href="../engine/userguide/containers/networkingcontainers/">Network containers</a>.</p>
</description>
</item>

</channel>
</rss>