@@ -85,7 +85,7 @@ Four different options affect container domain name services.
used inside of the container, by writing search lines into the
container's /etc/resolv.conf. When a container process attempts
to access host and the search domain example.com
- is set, for instance, the DNS logic will not only look up host
+ is set, for instance, the DNS logic not only looks up host
but also host.example.com.
@@ -114,14 +114,14 @@ Four different options affect container domain name services.
Regarding DNS settings, in the absence of the `--dns=IP_ADDRESS...`, `--dns-search=DOMAIN...`, or `--dns-opt=OPTION...` options, Docker makes each container's `/etc/resolv.conf` look like the `/etc/resolv.conf` of the host machine (where the `docker` daemon runs). When creating the container's `/etc/resolv.conf`, the daemon filters out all localhost IP address `nameserver` entries from the host's original file.
-Filtering is necessary because all localhost addresses on the host are unreachable from the container's network. After this filtering, if there are no more `nameserver` entries left in the container's `/etc/resolv.conf` file, the daemon adds public Google DNS nameservers (8.8.8.8 and 8.8.4.4) to the container's DNS configuration. If IPv6 is enabled on the daemon, the public IPv6 Google DNS nameservers will also be added (2001:4860:4860::8888 and 2001:4860:4860::8844).
+Filtering is necessary because all localhost addresses on the host are unreachable from the container's network. After this filtering, if there are no more `nameserver` entries left in the container's `/etc/resolv.conf` file, the daemon adds public Google DNS nameservers (8.8.8.8 and 8.8.4.4) to the container's DNS configuration. If IPv6 is enabled on the daemon, the public IPv6 Google DNS nameservers are also added (2001:4860:4860::8888 and 2001:4860:4860::8844).
> **Note**: If you need access to a host's localhost resolver, you must modify your DNS service on the host to listen on a non-localhost address that is reachable from within the container.
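The filtering step described above can be imitated with ordinary shell tools. This is only an illustration of the behavior, using a throwaway file under `/tmp`; the real daemon also filters IPv6 loopback entries such as `::1`:

```bash
# Build a sample host resolv.conf containing a loopback resolver.
cat > /tmp/host-resolv.conf <<'EOF'
nameserver 127.0.0.53
nameserver 8.8.8.8
search example.com
EOF

# Drop nameserver entries pointing at loopback, as the daemon does
# when creating the container's /etc/resolv.conf.
grep -v '^nameserver 127\.' /tmp/host-resolv.conf
```

The surviving `nameserver 8.8.8.8` and `search example.com` lines are what the container would see.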
-You might wonder what happens when the host machine's `/etc/resolv.conf` file changes. The `docker` daemon has a file change notifier active which will watch for changes to the host DNS configuration.
+You might wonder what happens when the host machine's `/etc/resolv.conf` file changes. The `docker` daemon has an active file change notifier that watches for changes to the host DNS configuration.
-> **Note**: The file change notifier relies on the Linux kernel's inotify feature. Because this feature is currently incompatible with the overlay filesystem driver, a Docker daemon using "overlay" will not be able to take advantage of the `/etc/resolv.conf` auto-update feature.
+> **Note**: The file change notifier relies on the Linux kernel's inotify feature. Because this feature is currently incompatible with the overlay filesystem driver, a Docker daemon using "overlay" cannot take advantage of the `/etc/resolv.conf` auto-update feature.
-When the host file changes, all stopped containers which have a matching `resolv.conf` to the host will be updated immediately to this newest host configuration. Containers which are running when the host configuration changes will need to stop and start to pick up the host changes due to lack of a facility to ensure atomic writes of the `resolv.conf` file while the container is running. If the container's `resolv.conf` has been edited since it was started with the default configuration, no replacement will be attempted as it would overwrite the changes performed by the container. If the options (`--dns`, `--dns-search`, or `--dns-opt`) have been used to modify the default host configuration, then the replacement with an updated host's `/etc/resolv.conf` will not happen as well.
+When the host file changes, all stopped containers with a `resolv.conf` that matches the host's are updated immediately to the newest host configuration. Containers which are running when the host configuration changes must be stopped and started to pick up the changes, because there is no facility to ensure atomic writes of the `resolv.conf` file while the container is running. If the container's `resolv.conf` has been edited since it was started with the default configuration, no replacement is attempted, as it would overwrite the changes performed by the container. If the options (`--dns`, `--dns-search`, or `--dns-opt`) have been used to modify the default host configuration, then the replacement with an updated host's `/etc/resolv.conf` does not happen either.
-> **Note**: For containers which were created prior to the implementation of the `/etc/resolv.conf` update feature in Docker 1.5.0: those containers will **not** receive updates when the host `resolv.conf` file changes. Only containers created with Docker 1.5.0 and above will utilize this auto-update feature.
+> **Note**: For containers which were created prior to the implementation of the `/etc/resolv.conf` update feature in Docker 1.5.0: those containers do **not** receive updates when the host `resolv.conf` file changes. Only containers created with Docker 1.5.0 and above utilize this auto-update feature.
diff --git a/engine/userguide/networking/default_network/container-communication.md b/engine/userguide/networking/default_network/container-communication.md
index 5880ffef7b..1a7911d095 100644
--- a/engine/userguide/networking/default_network/container-communication.md
+++ b/engine/userguide/networking/default_network/container-communication.md
@@ -17,9 +17,9 @@ factor is whether the host machine is forwarding its IP packets. The second is
whether the host's `iptables` allow this particular connection.
IP packet forwarding is governed by the `ip_forward` system parameter. Packets
-can only pass between containers if this parameter is `1`. Usually you will
-simply leave the Docker server at its default setting `--ip-forward=true` and
-Docker will go set `ip_forward` to `1` for you when the server starts up. If you
+can only pass between containers if this parameter is `1`. Usually, the default
+setting of `--ip-forward=true` is correct, and causes
+Docker to set `ip_forward` to `1` for you when the server starts up. If you
set `--ip-forward=false` and your system's kernel has it enabled, the
`--ip-forward=false` option has no effect. To check the setting on your kernel
or to turn it on manually:
@@ -39,15 +39,15 @@ or to turn it on manually:
> **Note**: this setting does not affect containers that use the host
> network stack (`--network=host`).
-Many using Docker will want `ip_forward` to be on, to at least make
+Many Docker users want `ip_forward` to be on, to at least make
communication _possible_ between containers and the wider world. It may also be
needed for inter-container communication if you use a multiple-bridge setup.
-Docker will never make changes to your system `iptables` rules if you set
-`--iptables=false` when the daemon starts. Otherwise the Docker server will
-append forwarding rules to the `DOCKER` filter chain.
+Docker never makes changes to your system `iptables` rules if you set
+`--iptables=false` when the daemon starts. Otherwise the Docker server
+appends forwarding rules to the `DOCKER` filter chain.
-Docker will flush any pre-existing rules from the `DOCKER` and `DOCKER-ISOLATION`
+Docker flushes any pre-existing rules from the `DOCKER` and `DOCKER-ISOLATION`
filter chains, if they exist. For this reason, any rules needed to further
restrict access to containers need to be added after Docker has started.
@@ -67,13 +67,13 @@ where *ext_if* is the name of the interface providing external connectivity to t
Whether two containers can communicate is governed, at the operating system level, by two factors.
-- Does the network topology even connect the containers' network interfaces? By default Docker will attach all containers to a single `docker0` bridge, providing a path for packets to travel between them. See the later sections of this document for other possible topologies.
+- Does the network topology even connect the containers' network interfaces? By default Docker attaches all containers to a single `docker0` bridge, providing a path for packets to travel between them. See the later sections of this document for other possible topologies.
-- Do your `iptables` allow this particular connection? Docker will never make changes to your system `iptables` rules if you set `--iptables=false` when the daemon starts. Otherwise the Docker server will add a default rule to the `FORWARD` chain with a blanket `ACCEPT` policy if you retain the default `--icc=true`, or else will set the policy to `DROP` if `--icc=false`.
+- Do your `iptables` allow this particular connection? Docker never makes changes to your system `iptables` rules if you set `--iptables=false` when the daemon starts. Otherwise the Docker server adds a default rule to the `FORWARD` chain with a blanket `ACCEPT` policy if you retain the default `--icc=true`, or else sets the policy to `DROP` if `--icc=false`.
It is a strategic question whether to leave `--icc=true` or change it to
-`--icc=false` so that `iptables` will protect other containers -- and the main
-host -- from having arbitrary ports probed or accessed by a container that gets
+`--icc=false` so that `iptables` can protect other containers, and the Docker
+host, from having arbitrary ports probed or accessed by a container that gets
compromised.
If you choose the most secure setting of `--icc=false`, then how can containers
@@ -82,14 +82,14 @@ The answer is the `--link=CONTAINER_NAME_or_ID:ALIAS` option, which was
mentioned in the previous section because of its effect upon name services. If
the Docker daemon is running with both `--icc=false` and `--iptables=true`
then, when it sees `docker run` invoked with the `--link=` option, the Docker
-server will insert a pair of `iptables` `ACCEPT` rules so that the new
+server inserts a pair of `iptables` `ACCEPT` rules so that the new
container can connect to the ports exposed by the other container -- the ports
that it mentioned in the `EXPOSE` lines of its `Dockerfile`.
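The inserted rules look roughly like the following pair. This is an illustration only, with assumed container addresses and an assumed PostgreSQL port; Docker manages the real rules itself, so there is no need to add them by hand:

```bash
# Assume db is 172.17.0.2 (its Dockerfile EXPOSEs 5432) and web is 172.17.0.3.
# Allow web to reach the exposed port on db ...
iptables -I DOCKER -s 172.17.0.3 -d 172.17.0.2 -p tcp --dport 5432 -j ACCEPT
# ... and allow db's replies back to web.
iptables -I DOCKER -s 172.17.0.2 -d 172.17.0.3 -p tcp --sport 5432 -j ACCEPT
```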
> **Note**: The value `CONTAINER_NAME` in `--link=` must either be an
auto-assigned Docker name like `stupefied_pare` or the name you assigned
with `--name=` when you ran `docker run`. It cannot be a hostname, which Docker
-will not recognize in the context of the `--link=` option.
+does not recognize in the context of the `--link=` option.
You can run the `iptables` command on your Docker host to see whether the `FORWARD` chain has a default policy of `ACCEPT` or `DROP`:
@@ -160,6 +160,6 @@ host2: eth0/192.168.8.1, docker0/172.18.0.0/16
```
If the container running on `host1` needs the ability to communicate directly
with a container on `host2`, you need a route from `host1` to `host2`. After
-the route exists, `host2` needs to be able to accept packets destined for its
+the route exists, `host2` must accept packets destined for its
running container, and forward them along. Setting the policy to `ACCEPT`
accomplishes this.
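With the example addressing above, such a setup might look like the following sketch. It assumes the two hosts can already reach each other over their `eth0` network, and both commands require root:

```bash
# On host1: send traffic for host2's container subnet via host2's eth0 address.
ip route add 172.18.0.0/16 via 192.168.8.1

# On host2: let forwarded packets through to the running containers.
iptables -P FORWARD ACCEPT
```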
diff --git a/engine/userguide/networking/default_network/custom-docker0.md b/engine/userguide/networking/default_network/custom-docker0.md
index 2bda893929..d704a93f17 100644
--- a/engine/userguide/networking/default_network/custom-docker0.md
+++ b/engine/userguide/networking/default_network/custom-docker0.md
@@ -22,7 +22,7 @@ Docker configures `docker0` with an IP address, netmask, and IP allocation range
Containers which are connected to the default bridge are allocated IP addresses
within this range. Certain default settings apply to the default bridge unless
you specify otherwise. For instance, the default maximum transmission unit (MTU),
-or the largest packet length that the container will allow, defaults to 1500
+or the largest packet length that the container allows, defaults to 1500
bytes.
You can configure the default bridge network's settings using flags to the
@@ -56,7 +56,7 @@ each:
`172.16.1.0/28`. This range must be an IPv4 range for fixed IPs, and must
be a subset of the bridge IP range (`docker0` or set
using `--bridge` or the `bip` key in the `daemon.json` file). For example,
- with `--fixed-cidr=192.168.1.0/25`, IPs for your containers will be chosen from
+ with `--fixed-cidr=192.168.1.0/25`, IPs for your containers are chosen from
the first half of addresses included in the 192.168.1.0/24 subnet.
- `--mtu=BYTES`: override the maximum packet length on `docker0`.
@@ -83,8 +83,9 @@ docker0 8000.3a1d7362b4ee no veth65f9
vethdda6
```
-If the `brctl` command is not installed on your Docker host, then on Ubuntu you
-should be able to run `sudo apt-get install bridge-utils` to install it.
+If the `brctl` command is not installed on your Docker host, run
+`sudo apt-get install bridge-utils` (on Ubuntu hosts) to install it. For other
+operating systems, consult the OS documentation.
Finally, the `docker0` Ethernet bridge settings are used every time you create a
new container. Docker selects a free IP address from the range available on the
@@ -115,8 +116,8 @@ default via 172.17.42.1 dev eth0
root@f38c87f2a42d:/# exit
```
-Remember that the Docker host will not be willing to forward container packets
-out on to the Internet unless its `ip_forward` system setting is `1` -- see the
+The Docker host does not forward container packets
+out to the outside world unless its `ip_forward` system setting is `1` -- see the
section on
[Communicating to the outside world](container-communication.md#communicating-to-the-outside-world)
for details.
diff --git a/engine/userguide/networking/default_network/dockerlinks.md b/engine/userguide/networking/default_network/dockerlinks.md
index 854a8a4baa..5b98b614c7 100644
--- a/engine/userguide/networking/default_network/dockerlinks.md
+++ b/engine/userguide/networking/default_network/dockerlinks.md
@@ -67,7 +67,7 @@ This would bind port 5000 in the container to a randomly available port
between 8000 and 9000 on the host.
There are also a few other ways you can configure the `-p` flag. By
-default the `-p` flag will bind the specified port to all interfaces on
+default the `-p` flag binds the specified port to all interfaces on
the host machine. But you can also specify a binding to a specific
interface, for example only to the `localhost`.
@@ -88,7 +88,7 @@ You can also bind UDP ports by adding a trailing `/udp`. For example:
You also learned about the useful `docker port` shortcut which showed us the
current port bindings. This is also useful for showing you specific port
configurations. For example, if you've bound the container port to the
-`localhost` on the host machine, then the `docker port` output will reflect that.
+`localhost` on the host machine, then the `docker port` output reflects that.
$ docker port nostalgic_morse 5000
@@ -101,7 +101,7 @@ configurations. For example, if you've bound the container port to the
> **Note**:
> This section covers the legacy link feature in the default `bridge` network.
-> Please refer to [linking containers in user-defined networks](/engine/userguide/networking/work-with-networks.md#linking-containers-in-user-defined-networks)
+> Refer to [linking containers in user-defined networks](/engine/userguide/networking/work-with-networks.md#linking-containers-in-user-defined-networks)
> for more information on links in user-defined networks.
Network port mappings are not the only way Docker containers can connect to one
@@ -143,11 +143,11 @@ You can also use `docker inspect` to return the container's name.
> **Note**:
-> Container names have to be unique. That means you can only call
+> Container names must be unique. That means you can only call
> one container `web`. If you want to re-use a container name you must delete
> the old container (with `docker rm`) before you can create a new
> container with the same name. As an alternative you can use the `--rm`
-> flag with the `docker run` command. This will delete the container
+> flag with the `docker run` command. This deletes the container
> immediately after it is stopped.
## Communication across links
@@ -172,18 +172,18 @@ Now, create a new `web` container and link it with your `db` container.
$ docker run -d -P --name web --link db:db training/webapp python app.py
-This will link the new `web` container with the `db` container you created
+This links the new `web` container with the `db` container you created
earlier. The `--link` flag takes the form:
--link :alias
Where `name` is the name of the container we're linking to and `alias` is an
-alias for the link name. You'll see how that alias gets used shortly.
+alias for the link name. The following examples show how the alias is used.
The `--link` flag also takes the form:
--link
-In which case the alias will match the name. You could have written the previous
+In this case the alias matches the name. You could write the previous
example as:
$ docker run -d -P --name web --link db training/webapp python app.py
@@ -203,7 +203,7 @@ So what does linking the containers actually do? You've learned that a link allo
source container to provide information about itself to a recipient container. In
our example, the recipient, `web`, can access information about the source `db`. To do
this, Docker creates a secure tunnel between the containers that doesn't need to
-expose any ports externally on the container; you'll note when we started the
+expose any ports externally on the container; when we started the
`db` container we did not use either the `-P` or `-p` flags. That's a big benefit of
linking: we don't need to expose the source container, here the PostgreSQL database, to
the network.
@@ -218,7 +218,7 @@ recipient container in two ways:
Docker creates several environment variables when you link containers. Docker
automatically creates environment variables in the target container based on
-the `--link` parameters. It will also expose all environment variables
+the `--link` parameters. It also exposes all Docker-originated environment
variables from the source container. These include variables from:
* the `ENV` commands in the source container's Dockerfile
@@ -298,8 +298,8 @@ with
`DB_`, which is populated from the `alias` you specified above. If the `alias`
were `db1`, the variables would be prefixed with `DB1_`. You can use these
environment variables to configure your applications to connect to the database
-on the `db` container. The connection will be secure and private; only the
-linked `web` container will be able to talk to the `db` container.
+on the `db` container. The connection is secure and private; only the
+linked `web` container can communicate with the `db` container.
### Important notes on Docker environment variables
@@ -309,7 +309,7 @@ if the source container is restarted. We recommend using the host entries in
`/etc/hosts` to resolve the IP address of linked containers.
These environment variables are only set for the first process in the
-container. Some daemons, such as `sshd`, will scrub them when spawning shells
+container. Some daemons, such as `sshd`, scrub them when spawning shells
for connection.
### Updating the `/etc/hosts` file
@@ -329,10 +329,10 @@ container:
You can see two relevant host entries. The first is an entry for the `web`
container that uses the Container ID as a host name. The second entry uses the
link alias to reference the IP address of the `db` container. In addition to
-the alias you provide, the linked container's name--if unique from the alias
-provided to the `--link` parameter--and the linked container's hostname will
-also be added in `/etc/hosts` for the linked container's IP address. You can ping
-that host now via any of these entries:
+the alias you provide, the linked container's name, if unique from the alias
+provided to the `--link` parameter, and the linked container's hostname are
+also added to `/etc/hosts` for the linked container's IP address. You can ping
+that host via any of these entries:
root@aed84ee21bde:/opt/webapp# apt-get install -yqq inetutils-ping
@@ -344,7 +344,7 @@ that host now via any of these entries:
56 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.256 ms
> **Note**:
-> In the example, you'll note you had to install `ping` because it was not included
+> In the example, you had to install `ping` because it was not included
> in the container initially.
Here, you used the `ping` command to ping the `db` container using its host entry,
@@ -356,8 +356,8 @@ to make use of your `db` container.
> example, you could have multiple (differently named) web containers attached to your
>`db` container.
-If you restart the source container, the linked containers `/etc/hosts` files
-will be automatically updated with the source container's new IP address,
+If you restart the source container, the `/etc/hosts` files on the linked containers
+are automatically updated with the source container's new IP address,
allowing linked communication to continue.
$ docker restart db
diff --git a/engine/userguide/networking/default_network/ipv6.md b/engine/userguide/networking/default_network/ipv6.md
index c0c95ea773..b0a2018906 100644
--- a/engine/userguide/networking/default_network/ipv6.md
+++ b/engine/userguide/networking/default_network/ipv6.md
@@ -19,11 +19,11 @@ reside on layer 3 of the [OSI model](http://en.wikipedia.org/wiki/OSI_model).
By default, the Docker daemon configures the container network for IPv4 only.
You can enable IPv4/IPv6 dualstack support by running the Docker daemon with the
-`--ipv6` flag. Docker will set up the bridge `docker0` with the IPv6 [link-local
+`--ipv6` flag. Docker sets up the bridge `docker0` with the IPv6 [link-local
address](http://en.wikipedia.org/wiki/Link-local_address) `fe80::1`.
-By default, containers that are created will only get a link-local IPv6 address.
-To assign globally routable IPv6 addresses to your containers you have to
+By default, containers that are created only get a link-local IPv6 address.
+To assign globally routable IPv6 addresses to your containers you need to
specify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the
`--fixed-cidr-v6` parameter when starting Docker daemon:
@@ -59,20 +59,20 @@ $ sysctl net.ipv6.conf.default.forwarding=1
$ sysctl net.ipv6.conf.all.forwarding=1
```
-All traffic to the subnet `2001:db8:1::/64` will now be routed via the `docker0`
+All traffic to the subnet `2001:db8:1::/64` is routed via the `docker0`
interface.
> **Note**: IPv6 forwarding may interfere with your existing IPv6
> configuration: If you are using Router Advertisements to get IPv6 settings for
> your host's interfaces, set `accept_ra` to `2` using the following command.
-> Otherwise IPv6 enabled forwarding will result in rejecting Router Advertisements.
+> Otherwise, enabling IPv6 forwarding results in Router Advertisements being rejected.
>
> $ sysctl net.ipv6.conf.eth0.accept_ra=2

-Every new container will get an IPv6 address from the defined subnet, and a
-default route will be added on `eth0` in the container via the address specified
+Each new container gets an IPv6 address from the defined subnet, and a
+default route is added on `eth0` in the container via the address specified
by the daemon option `--default-gateway-v6` (or `default-gateway-v6` in
`daemon.json`) if present. The default gateway defaults to `fe80::1`.
@@ -95,12 +95,12 @@ default via fe80::1 dev eth0 metric 1024
In this example, the container is assigned a link-local address with the subnet
`/64` (`fe80::42:acff:fe11:3/64`) and a globally routable IPv6 address
-(`2001:db8:1:0:0:242:ac11:3/64`). The container will create connections to
+(`2001:db8:1:0:0:242:ac11:3/64`). The container creates connections to
addresses outside of the `2001:db8:1::/64` network via the link-local gateway at
`fe80::1` on `eth0`.
-Often servers or virtual machines get a `/64` IPv6 subnet assigned (e.g.
-`2001:db8:23:42::/64`). In this case you can split it up further and provide
+If your server or virtual machine has a `/64` IPv6 subnet assigned to it, such
+as `2001:db8:23:42::/64`, you can split it up further and provide
Docker a `/80` subnet while using a separate `/80` subnet for other applications
on the host:
@@ -110,7 +110,7 @@ In this setup the subnet `2001:db8:23:42::/64` with a range from
`2001:db8:23:42:0:0:0:0` to `2001:db8:23:42:ffff:ffff:ffff:ffff` is attached to
`eth0`, with the host listening at `2001:db8:23:42::1`. The subnet
`2001:db8:23:42:1::/80` with an address range from `2001:db8:23:42:1:0:0:0` to
-`2001:db8:23:42:1:ffff:ffff:ffff` is attached to `docker0` and will be used by
+`2001:db8:23:42:1:ffff:ffff:ffff` is attached to `docker0` and is used by
containers.
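The arithmetic behind that range: a `/80` prefix leaves `128 - 80 = 48` host bits, i.e. exactly the three trailing 16-bit groups that vary from `0:0:0` to `ffff:ffff:ffff` above (a quick shell check):

```bash
prefix=80
host_bits=$(( 128 - prefix ))
# Each colon-separated IPv6 group carries 16 bits.
groups=$(( host_bits / 16 ))
echo "$host_bits host bits = $groups trailing 16-bit groups"
```

This prints `48 host bits = 3 trailing 16-bit groups`.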
### Using NDP proxying
@@ -137,8 +137,8 @@ $ ip -6 addr show
To split up the configurable address range into two subnets
`2001:db8::c000/125` and `2001:db8::c008/125`, use the following `daemon.json`
-settings. The first subnet will be used by non-Docker processes on the host, and
-the second will be used by Docker.
+settings. The first subnet is used by non-Docker processes on the host, and
+the second is used by Docker.
```json
{
@@ -174,7 +174,7 @@ $ ip -6 neigh add proxy 2001:db8::c009 dev eth0
From now on, the kernel answers neighbor solicitation addresses for this address
on the device `eth0`. All traffic to this IPv6 address is routed through the
-Docker host, which will forward it to the container's network according to its
+Docker host, which forwards it to the container's network according to its
routing table via the `docker0` device:
```bash
@@ -184,7 +184,7 @@ $ ip -6 route show
2001:db8::/64 dev eth0 proto kernel metric 256
```
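Because the kernel proxies individual addresses, every container address in the subnet needs its own entry. For the `2001:db8::c008/125` Docker subnet from the earlier `daemon.json` example, a small loop can generate the commands (a sketch that prints them rather than running them, since applying them requires root):

```bash
# The Docker subnet 2001:db8::c008/125 covers the eight addresses
# 2001:db8::c008 through 2001:db8::c00f; print one proxy rule for each.
for suffix in 8 9 a b c d e f; do
  echo "ip -6 neigh add proxy 2001:db8::c00$suffix dev eth0"
done
```

Piping the output to `sh` (or dropping the `echo`) would apply the rules.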
-You have to execute the `ip -6 neigh add proxy ...` command for every IPv6
+Execute the `ip -6 neigh add proxy ...` command for every IPv6
address in your Docker subnet. Unfortunately there is no functionality for
adding a whole subnet by executing one command. An alternative approach would be
to use an NDP proxy daemon such as
@@ -208,11 +208,11 @@ three routes configured:
- Route all traffic to `2001:db8:2::/64` via Host2 with IP `2001:db8::2`
Host1 also acts as a router on OSI layer 3. When one of the network clients
-tries to contact a target that is specified in Host1's routing table Host1 will
-forward the traffic accordingly. It acts as a router for all networks it knows:
+tries to contact a target that is specified in Host1's routing table, Host1
+forwards the traffic accordingly. It acts as a router for all networks it knows:
`2001:db8::/64`, `2001:db8:1::/64`, and `2001:db8:2::/64`.
-On Host2 we have nearly the same configuration. Host2's containers will get IPv6
+On Host2 we have nearly the same configuration. Host2's containers get IPv6
addresses from `2001:db8:2::/64`. Host2 has three routes configured:
- Route all traffic to `2001:db8:0::/64` via `eth0`
@@ -223,13 +223,13 @@ The difference to Host1 is that the network `2001:db8:2::/64` is directly
attached to Host2 via its `docker0` interface whereas Host2 reaches
`2001:db8:1::/64` via Host1's IPv6 address `2001:db8::1`.
-This way every container is able to contact every other container. The
+This way every container can contact every other container. The
containers `Container1-*` share the same subnet and contact each other directly.
-The traffic between `Container1-*` and `Container2-*` will be routed via Host1
+The traffic between `Container1-*` and `Container2-*` is routed via Host1
and Host2 because those containers do not share the same subnet.
-In a switched environment every host has to know all routes to every subnet.
-You always have to update the hosts' routing tables once you add or remove a
+In a switched environment every host needs to know all routes to every subnet.
+You always need to update the hosts' routing tables whenever you add a host to,
or remove one from, the cluster.
Every configuration in the diagram that is shown below the dashed line is
@@ -240,19 +240,19 @@ adapted to the individual environment.
### Routed network environment
In a routed network environment you replace the layer 2 switch with a layer 3
-router. Now the hosts just have to know their default gateway (the router) and
+router. Now the hosts just need to know their default gateway (the router) and
the route to their own containers (managed by Docker). The router holds all
routing information about the Docker subnets. When you add or remove a host to
-this environment you just have to update the routing table in the router - not
-on every host.
+this environment, just update the routing table in the router, rather than on
+every host.

In this scenario containers of the same host can communicate directly with each
-other. The traffic between containers on different hosts will be routed via
-their hosts and the router. For example packet from `Container1-1` to
-`Container2-1` will be routed through `Host1`, `Router`, and `Host2` until it
-arrives at `Container2-1`.
+other. The traffic between containers on different hosts is routed via
+their hosts and the router. For example, packets from `Container1-1` to
+`Container2-1` are routed through `Host1`, `Router`, and `Host2` until they
+arrive at `Container2-1`.
To keep the IPv6 addresses short in this example, a `/48` network is assigned to
every host. Each host uses a `/64` subnet of this for its own services and one
diff --git a/engine/userguide/networking/get-started-macvlan.md b/engine/userguide/networking/get-started-macvlan.md
index d5c20b8ec7..96d6edddb9 100644
--- a/engine/userguide/networking/get-started-macvlan.md
+++ b/engine/userguide/networking/get-started-macvlan.md
@@ -14,7 +14,7 @@ Macvlan offers a number of unique features and plenty of room for further innova
- The examples on this page are all single host and set up using Docker 1.12.0+
-- All of the examples can be performed on a single host running Docker. Any examples using a sub-interface like `eth0.10` can be replaced with `eth0` or any other valid parent interface on the Docker host. Sub-interfaces with a `.` are created on the fly. `-o parent` interfaces can also be left out of the `docker network create` all together and the driver will create a `dummy` interface that will enable local host connectivity to perform the examples.
+- All of the examples can be performed on a single host running Docker. Any examples using a sub-interface like `eth0.10` can be replaced with `eth0` or any other valid parent interface on the Docker host. Sub-interfaces with a `.` are created on the fly. `-o parent` interfaces can also be left out of the `docker network create` altogether, and the driver creates a `dummy` interface that enables local host connectivity to perform the examples.
- Kernel requirements:
@@ -43,7 +43,7 @@ In the following example, `eth0` on the docker host has an IP on the `172.16.86.
> **Note**: For Macvlan bridge mode, the subnet values need to match those of the Docker host's NIC. For example, use the same subnet and gateway as the Docker host Ethernet interface specified by the `-o parent=` option.
-- The parent interface used in this example is `eth0` and it is on the subnet `172.16.86.0/24`. The containers in the `docker network` will also need to be on this same subnet as the parent `-o parent=`. The gateway is an external router on the network, not any ip masquerading or any other local proxy.
+- The parent interface used in this example is `eth0` and it is on the subnet `172.16.86.0/24`. The containers in the `docker network` also need to be on this same subnet as the parent `-o parent=`. The gateway is an external router on the network, not any IP masquerading or any other local proxy.
- The driver is specified with `-d driver_name` option. In this case `-d macvlan`
@@ -91,9 +91,9 @@ ip route
# In this case the containers cannot ping the -o parent=172.16.86.250
```
-You can explicitly specify the `bridge` mode option `-o macvlan_mode=bridge`. It is the default so will be in `bridge` mode either way.
+You can explicitly specify the `bridge` mode option with `-o macvlan_mode=bridge`. It is the default, so the network is in `bridge` mode either way.
-While the `eth0` interface does not need to have an IP address in Macvlan Bridge it is not uncommon to have an IP address on the interface. Addresses can be excluded from getting an address from the default built in IPAM by using the `--aux-address=x.x.x.x` flag. This will blacklist the specified address from being handed out to containers. The same network example above blocking the `-o parent=eth0` address from being handed out to a container.
+While the `eth0` interface does not need to have an IP address in Macvlan bridge mode, it is not uncommon for it to have one. Addresses can be excluded from the default built-in IPAM by using the `--aux-address=x.x.x.x` flag, which blacklists the specified address from being handed out to containers. The following is the same network example as above, but it blocks the `-o parent=eth0` address from being handed out to a container.
```
docker network create -d macvlan \
@@ -103,7 +103,7 @@ docker network create -d macvlan \
-o parent=eth0 pub_net
```
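A complete form of this command might look like the following sketch (the subnet and gateway values are illustrative, and `exclude_host` is an arbitrary label for the excluded address):

```bash
docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  --aux-address="exclude_host=172.16.86.250" \
  -o parent=eth0 pub_net
```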
-Another option for subpool IP address selection in a network provided by the default Docker IPAM driver is to use `--ip-range=`. This specifies the driver to allocate container addresses from this pool rather then the broader range from the `--subnet=` argument from a network create as seen in the following example that will allocate addresses beginning at `192.168.32.128` and increment upwards from there.
+Another option for subpool IP address selection in a network provided by the default Docker IPAM driver is `--ip-range=`. This instructs the driver to allocate container addresses from this pool rather than from the broader range given by the `--subnet=` argument, as in the following example, which allocates addresses beginning at `192.168.32.128` and increments upwards from there.
```
docker network create -d macvlan \
@@ -125,19 +125,19 @@ docker network rm
> Communication with the Docker host over macvlan
>
> - When using macvlan, you cannot ping or communicate with the default namespace IP address.
-> For example, if you create a container and try to ping the Docker host's `eth0`, it will
+> For example, if you create a container and try to ping the Docker host's `eth0`, it does
> **not** work. That traffic is explicitly filtered by the kernel modules themselves to
> offer additional provider isolation and security.
>
> - A macvlan subinterface can be added to the Docker host, to allow traffic between the Docker
> host and containers. The IP address needs to be set on this subinterface and removed from
> the parent address.
-
+
```
-ip link add mac0 link $PARENTDEV type macvlan mode bridge
+ip link add mac0 link $PARENTDEV type macvlan mode bridge
```
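As a sketch of the full sequence (the parent interface `eno1` and the address `192.168.1.10/24` are placeholder values for your environment), moving the host's address onto the macvlan subinterface might look like:

```bash
# Create a macvlan subinterface on the parent (placeholder names/addresses).
ip link add mac0 link eno1 type macvlan mode bridge
# Move the host's IP address from the parent to the subinterface.
ip addr del 192.168.1.10/24 dev eno1
ip addr add 192.168.1.10/24 dev mac0
ip link set mac0 up
```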
-On Debian or Ubuntu, adding the following to `/etc/network/interfaces` will make this persistent.
+On Debian or Ubuntu, adding the following to `/etc/network/interfaces` makes this persistent.
Consult your operating system documentation for more details.
```none
@@ -150,7 +150,7 @@ iface mac0 inet dhcp
post-down ip link del mac0 link eno1 type macvlan mode bridge
```
-For more on Docker networking commands, see
+For more on Docker networking commands, see
[Working with Docker network commands](/engine/userguide/networking/work-with-networks/).
## Macvlan 802.1q Trunk Bridge Mode example usage
@@ -161,11 +161,11 @@ It is very common to have a compute host requirement of running multiple virtual

-Trunking 802.1q to a Linux host is notoriously painful for many in operations. It requires configuration file changes in order to be persistent through a reboot. If a bridge is involved, a physical NIC needs to be moved into the bridge and the bridge then gets the IP address. This has lead to many a stranded servers since the risk of cutting off access during that convoluted process is high.
+Trunking 802.1q to a Linux host is notoriously painful for many in operations. It requires configuration file changes to be persistent through a reboot. If a bridge is involved, a physical NIC needs to be moved into the bridge and the bridge then gets the IP address. This has led to many a stranded server, since the risk of cutting off access during that convoluted process is high.
Like all of the Docker network drivers, the overarching goal is to alleviate the operational pains of managing network resources. To that end, when a network receives a sub-interface as the parent that does not exist, the drivers create the VLAN tagged interfaces while creating the network.
-In the case of a host reboot, instead of needing to modify often complex network configuration files the driver will recreate all network links when the Docker daemon restarts. The driver tracks if it created the VLAN tagged sub-interface originally with the network create and will **only** recreate the sub-interface after a restart or delete `docker network rm` the link if it created it in the first place with `docker network create`.
+In the case of a host reboot, instead of needing to modify often complex network configuration files, the driver recreates all network links when the Docker daemon restarts. The driver tracks whether it originally created the VLAN tagged sub-interface with `docker network create`, and **only** recreates the sub-interface after a restart, or deletes the link on `docker network rm`, if it created it in the first place.
If the user doesn't want Docker to modify the `-o parent` sub-interface, the user simply needs to pass a link that already exists as the parent interface. Parent interfaces such as `eth0` are not deleted, only sub-interfaces that are not master links.
diff --git a/engine/userguide/networking/index.md b/engine/userguide/networking/index.md
index 72293546dc..e1916c707e 100644
--- a/engine/userguide/networking/index.md
+++ b/engine/userguide/networking/index.md
@@ -57,8 +57,8 @@ docker0 Link encap:Ethernet HWaddr 02:42:47:bc:3a:eb
> Running on Docker for Mac or Docker for Windows?
>
> If you are using Docker for Mac (or running Linux containers on Docker for Windows), the
-`docker network ls` command will work as described above, but the
-`ip addr show` and `ifconfig` commands may be present, but will give you information about
+`docker network ls` command works as described above, but the
+`ip addr show` and `ifconfig` commands may be present, but they give you information about
the IP addresses for your local host, not Docker container networks.
This is because Docker uses network interfaces running inside a thin VM,
instead of on the host machine itself.
@@ -163,7 +163,7 @@ $ docker run -itd --name=container2 busybox
Inspect the `bridge` network again after starting two containers. Both of the
`busybox` containers are connected to the network. Make note of their IP
-addresses, which will be different on your host machine than in the example
+addresses, which are different on your host machine than in the example
below.
```none
@@ -213,7 +213,7 @@ $ docker network inspect bridge
Containers connected to the default `bridge` network can communicate with each
other by IP address. **Docker does not support automatic service discovery on the
-default bridge network. If you want containers to be able to resolve IP addresses
+default bridge network. If you want containers to resolve IP addresses
by container name, you should use _user-defined networks_ instead**. You can link
two containers together using the legacy `docker run --link` option, but this
is not recommended in most cases.
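As a quick sketch (the network and container names here are arbitrary), service discovery on a user-defined network looks like:

```bash
docker network create my_net
docker run -itd --name web --network my_net busybox
# Containers on the same user-defined network resolve each other by name.
docker run --rm --network my_net busybox ping -c 1 web
```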
@@ -292,7 +292,7 @@ You can also manually start the `dockerd` with the flags `--bridge=none
--iptables=false`. However, this may not start the daemon with the same
environment as the system init scripts, so other behaviors may be changed.
-Disabling the default bridge network is an advanced option that most users will
+Disabling the default bridge network is an advanced option that most users do
not need.
## User-defined networks
@@ -489,7 +489,7 @@ think you may need to use overlay networks in this way, see
If your needs are not addressed by any of the above network mechanisms, you can
write your own network driver plugin, using Docker's plugin infrastructure.
-The plugin will run as a separate process on the host which runs the Docker
+The plugin runs as a separate process on the host which runs the Docker
daemon. Using network plugins is an advanced topic.
Network plugins follow the same restrictions and installation rules as other
@@ -504,9 +504,9 @@ $ docker network create --driver weave mynet
```
You can inspect the network, connect and disconnect containers from it, and
-remove it. A specific plugin may have specific requirements in order to be
-used. Check that plugin's documentation for specific information. For more
-information on writing plugins, see
+remove it. A specific plugin may have its own requirements. Check that
+plugin's documentation for details. For more information on writing
+plugins, see
[Extending Docker](../../extend/legacy_plugins.md) and
[Writing a network driver plugin](../../extend/plugins_network.md).
@@ -515,9 +515,9 @@ information on writing plugins, see
Docker daemon runs an embedded DNS server which provides DNS resolution among
containers connected to the same user-defined network, so that these containers
can resolve container names to IP addresses. If the embedded DNS server is
-unable to resolve the request, it will be forwarded to any external DNS servers
+unable to resolve the request, it is forwarded to any external DNS servers
configured for the container. To facilitate this when the container is created,
-only the embedded DNS server reachable at `127.0.0.11` will be listed in the
+only the embedded DNS server reachable at `127.0.0.11` is listed in the
container's `resolv.conf` file. For more information on embedded DNS server on
user-defined networks, see
[embedded DNS server in user-defined networks](configure-dns.md)
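To see this in practice, a quick check (the network and image names are arbitrary examples) might look like:

```bash
docker network create demo_net
docker run --rm --network demo_net busybox cat /etc/resolv.conf
docker network rm demo_net
```

On a user-defined network, the output typically contains a `nameserver 127.0.0.11` entry.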
@@ -538,7 +538,7 @@ network and user-defined bridge networks.
available high-order port (higher than `30000`) on the host machine, unless
you specify the port to map to on the host machine at runtime. You cannot
specify the port to map to on the host machine when you build the image (in the
- Dockerfile), because there is no way to guarantee that the port will be available
+ Dockerfile), because there is no way to guarantee that the port is available
on the host machine where you run the image.
This example publishes port 80 in the container to a random high
@@ -556,7 +556,7 @@ network and user-defined bridge networks.
```
The next example specifies that port 80 should be mapped to port 8080 on the
- host machine. It will fail if port 8080 is not available.
+ host machine. It fails if port 8080 is not available.
```bash
$ docker run -it -d -p 8080:80 nginx
@@ -606,7 +606,7 @@ configure it in different ways:
Save the file.
-2. When you create or start new containers, the environment variables will be
+2. When you create or start new containers, the environment variables are
set automatically within the container.
### Set the environment variables manually
@@ -662,7 +662,7 @@ way to make `iptables` rules persistent.
Docker dynamically manages `iptables` rules for the daemon, as well as your
containers, services, and networks. In Docker 17.06 and higher, you can add
-rules to a new table called `DOCKER-USER`, and these rules will be loaded before
+rules to a new chain called `DOCKER-USER`, and these rules are loaded before
any rules Docker creates automatically. This can be useful if you need to
pre-populate `iptables` rules that need to be in place before Docker runs.
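As a hedged sketch (the interface name and the subnet are placeholders for your environment), a `DOCKER-USER` rule that restricts which sources can reach your containers might look like:

```bash
# Drop traffic arriving on the external interface unless it comes from a
# trusted subnet (placeholder interface and subnet).
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```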
diff --git a/engine/userguide/networking/overlay-security-model.md b/engine/userguide/networking/overlay-security-model.md
index 6fbb4a95c4..48aa80ed50 100644
--- a/engine/userguide/networking/overlay-security-model.md
+++ b/engine/userguide/networking/overlay-security-model.md
@@ -30,7 +30,7 @@ automatically rotate the keys every 12 hours.
>
> Overlay network encryption is not supported on Windows. If a Windows node
> attempts to connect to an encrypted overlay network, no error is detected but
-> the node will not be able to communicate.
+> the node cannot communicate.
{: .warning }
## Swarm mode overlay networks and unmanaged containers
diff --git a/engine/userguide/networking/overlay-standalone-swarm.md b/engine/userguide/networking/overlay-standalone-swarm.md
index 23174ba6ef..f59de80e7e 100644
--- a/engine/userguide/networking/overlay-standalone-swarm.md
+++ b/engine/userguide/networking/overlay-standalone-swarm.md
@@ -39,11 +39,11 @@ To use Docker with an external key-value store, you need the following:
Docker Machine and Docker Swarm are not mandatory to experience Docker
multi-host networking with a key-value store. However, this example uses them to
-illustrate how they are integrated. You'll use Machine to create both the
+illustrate how they are integrated. You use Machine to create both the
key-value store server and the host cluster using a standalone swarm.
>**Note**: These examples are not relevant to Docker running in swarm mode and
-> will not work in such a configuration.
+> do not work in such a configuration.
### Prerequisites
@@ -73,7 +73,7 @@ key-value stores. This example uses Consul.
When you provision a new machine, the process adds Docker to the
host. This means rather than installing Consul manually, you can create an
instance using the [consul image from Docker
- Hub](https://hub.docker.com/_/consul/). You'll do this in the next step.
+ Hub](https://hub.docker.com/_/consul/). You do this in the next step.
3. Set your local environment to the `mh-keystore` machine.
@@ -110,14 +110,14 @@ Keep your terminal open and move on to
### Create a swarm cluster
In this step, you use `docker-machine` to provision the hosts for your network.
-You won't actually create the network yet. You'll create several
-Docker machines in VirtualBox. One of the machines will act as the swarm manager
-and you'll create that first. As you create each host, you'll pass the Docker
+You don't actually create the network yet. You create several
+Docker machines in VirtualBox. One of the machines acts as the swarm manager
+and you create that first. As you create each host, you pass the Docker
daemon on that machine options that are needed by the `overlay` network driver.
> **Note**: This creates a standalone swarm cluster, rather than using Docker
> in swarm mode. These examples are not relevant to Docker running in swarm mode
-> and will not work in such a configuration.
+> and do not work in such a configuration.
1. Create a swarm manager.
@@ -325,7 +325,7 @@ it automatically is part of the network.
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
- For online documentation and support please refer to
+
For online documentation and support, refer to
nginx.org.
Commercial support is available at
nginx.com.
diff --git a/engine/userguide/networking/work-with-networks.md b/engine/userguide/networking/work-with-networks.md
index dc52f251bf..3d5ae669c8 100644
--- a/engine/userguide/networking/work-with-networks.md
+++ b/engine/userguide/networking/work-with-networks.md
@@ -125,7 +125,7 @@ The following arguments can be passed to `docker network create` for any network
The following example uses `-o` to bind to a specific IP address available on the host when binding
ports, then uses `docker network inspect` to inspect the network, and finally
-attaches a new container to the new network. Note that you should replace the IP address `172.23.0.1` shown in the
+attaches a new container to the new network. Replace the IP address `172.23.0.1` shown in the
example with an IP address available on a network interface in your host.
```bash
@@ -262,9 +262,9 @@ needed.
when connecting it to a network, by using the `--ip` or `--ip6` flag. When
you specify an IP address in this way while using a user-defined network,
the configuration is preserved as part of the container's configuration and
- will be applied when the container is reloaded. Assigned IP addresses are not
+ is applied when the container is reloaded. Assigned IP addresses are not
preserved when using non-user-defined networks, because there is no guarantee
- that a container's subnet will not change when the Docker daemon restarts unless
+ that a container's subnet does not change when the Docker daemon restarts unless
you use user-defined networks.
5. Inspect the network resources used by `container3`. The
@@ -455,7 +455,7 @@ After you complete the steps in
`container2` can resolve `container3`'s name automatically because both containers
are connected to the `isolated_nw` network. However, containers connected to the
default `bridge` network cannot resolve each other's container name. If you need
-containers to be able to communicate with each other over the `bridge` network,
+containers to communicate with each other over the `bridge` network,
you need to use the legacy [link](default_network/dockerlinks.md) feature.
This is the only use case where using `--link` is recommended. You should
strongly consider using user-defined networks instead.
@@ -490,7 +490,7 @@ The following example briefly describes how to use `--link`.
```
This is a little tricky, because `container5` does not exist yet. When
- `container5` is created, `container4` will be able to resolve the name `c5` to
+ `container5` is created, `container4` can resolve the name `c5` to
`container5`'s IP address.
>**Note**: Any link between containers created with *legacy link* is static in
@@ -499,7 +499,7 @@ The following example briefly describes how to use `--link`.
networks supports dynamic links between containers, and tolerates restarts and
IP address changes in the linked container.
- Since you have not yet created container `container5` trying to ping it will result
+ Since you have not yet created `container5`, trying to ping it results
in an error. Attach to `container4` and try to ping either `container5` or `c5`:
```bash
@@ -587,12 +587,12 @@ The following example briefly describes how to use `--link`.
When you link containers, whether using the legacy `link` method or using
user-defined networks, any aliases you specify only have meaning to the
-container where they are specified, and won't work on other containers on the
+container where they are specified, and don't work on other containers on the
default `bridge` network.
In addition, if a container belongs to multiple networks, a given linked alias
is scoped within a given network. Thus, a container can be linked to different
-aliases in different networks, and the aliases will not work for containers which
+aliases in different networks, and the aliases do not work for containers which
are not on the same network.
The following example illustrates these points.
@@ -809,8 +809,8 @@ The following example illustrates how to set up and use network aliases.
```
When multiple containers share the same alias, one of those containers
- will resolve to the alias. If that container is unavailable, another
- container with the alias will be resolved. This provides a sort of high
+ resolves to the alias. If that container is unavailable, another
+ container with the alias is resolved. This provides a sort of high
availability within the cluster.
> **Note**: When the IP address is resolved, the container chosen to resolve
@@ -840,7 +840,7 @@ The following example illustrates how to set up and use network aliases.
```
In the terminal attached to `container4`, observe the `ping` output.
- It will pause when `container6` goes down, because the `ping` command
+ It pauses when `container6` goes down, because the `ping` command
looks up the IP when it is first invoked, and that IP is no longer reachable.
However, the `ping` command has a very long timeout by default, so no error
occurs.
@@ -868,7 +868,7 @@ The following example illustrates how to set up and use network aliases.
In the terminal attached to `container4`, run the `ping` command again. It
might now resolve to `container6` again. If you start and stop the `ping`
- several times, you will see responses from each of the containers.
+ several times, you can see responses from each of the containers.
```bash
$ docker attach container4
diff --git a/engine/userguide/storagedriver/aufs-driver.md b/engine/userguide/storagedriver/aufs-driver.md
index cf30c2b70c..b0e8e24a32 100644
--- a/engine/userguide/storagedriver/aufs-driver.md
+++ b/engine/userguide/storagedriver/aufs-driver.md
@@ -24,7 +24,7 @@ potential performance advantages over the `aufs` storage driver.
- If you use Ubuntu, you need to
[install extra packages](/engine/installation/linux/ubuntu.md#recommended-extra-packages-for-trusty-1404){: target="_blank" class="_"}
to add the AUFS module to the kernel. If you do not install these packages,
- you will need to use `devicemapper` on Ubuntu 14.04 (which is not recommended),
+ you need to use `devicemapper` on Ubuntu 14.04 (which is not recommended),
or `overlay2` on Ubuntu 16.04 and higher, which is also supported.
- AUFS cannot use the following backing filesystems: `aufs`, `btrfs`, or
`ecryptfs`. This means that the filesystem which contains
@@ -59,7 +59,7 @@ storage driver is configured, Docker uses it by default.
```
3. If you are using a different storage driver, either AUFS is not included in
- the kernel (in which case a different default driver will be used) or that
+ the kernel (in which case a different default driver is used) or that
Docker has been explicitly configured to use a different driver. Check
`/etc/docker/daemon.json` or the output of `ps auxw | grep dockerd` to see
if Docker has been started with the `--storage-driver` flag.
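   A quick way to check (a sketch; requires a running daemon) is:

   ```bash
   docker info --format '{{.Driver}}'
   grep aufs /proc/filesystems
   ```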
@@ -118,7 +118,7 @@ subdirectories of `/var/lib/docker/aufs/`.
file contains the IDs of all the layers below it in the stack (its parents).
- `mnt/`: Mount points, one per image or container layer, which are used to
assemble and mount the unified filesystem for a container. For images, which
- are read-only, these directories will always be empty.
+ are read-only, these directories are always empty.
#### The container layer
@@ -206,7 +206,7 @@ To summarize some of the performance related aspects already mentioned:
- The AUFS storage driver can introduce significant latencies into container
write performance. This is because the first time a container writes to any
- file, the file has to be located and copied into the containers top writable
+ file, the file needs to be located and copied into the container's top writable
layer. These latencies increase and are compounded when these files exist below
many image layers and the files themselves are large.
diff --git a/engine/userguide/storagedriver/btrfs-driver.md b/engine/userguide/storagedriver/btrfs-driver.md
index aa0fdfb666..7ed3eab99e 100644
--- a/engine/userguide/storagedriver/btrfs-driver.md
+++ b/engine/userguide/storagedriver/btrfs-driver.md
@@ -30,7 +30,7 @@ Btrfs Filesystem as Btrfs.
[Product compatibility matrix](https://success.docker.com/Policies/Compatibility_Matrix)
for all supported configurations for commercially-supported Docker.
-- Changing the storage driver will make any containers you have already
+- Changing the storage driver makes any containers you have already
created inaccessible on the local system. Use `docker save` to save containers,
and push existing images to Docker Hub or a private repository, so that you
do not need to re-create them later.
@@ -153,7 +153,7 @@ $ sudo btrfs filesystem balance /var/lib/docker
```
> **Note**: While you can do these operations with Docker running, performance
-> will suffer. It might be best to plan an outage window to balance the Btrfs
+> suffers. It might be best to plan an outage window to balance the Btrfs
> filesystem.
## How the `btrfs` storage driver works
@@ -255,7 +255,7 @@ storage driver.
> **Note**: Many of these factors are mitigated by using Docker volumes for
> write-heavy workloads, rather than relying on storing data in the container's
-> writable layer. However, in the case of Btrfs, Docker volumes will still suffer
+> writable layer. However, in the case of Btrfs, Docker volumes still suffer
> from these draw-backs unless `/var/lib/docker/volumes/` is **not** backed by
> Btrfs.
diff --git a/engine/userguide/storagedriver/device-mapper-driver.md b/engine/userguide/storagedriver/device-mapper-driver.md
index 4fb53bf9e2..7d49fd9d22 100644
--- a/engine/userguide/storagedriver/device-mapper-driver.md
+++ b/engine/userguide/storagedriver/device-mapper-driver.md
@@ -12,8 +12,8 @@ storage driver as `devicemapper`, and the kernel framework as `Device Mapper`.
For the systems where it is supported, `devicemapper` support is included in
the Linux kernel. However, specific configuration is required to use it with
-Docker. For instance, on a stock installation of RHEL or CentOS, Docker will
-default to `overlay`, which is not a supported configuration.
+Docker. For instance, on a stock installation of RHEL or CentOS, Docker
+defaults to `overlay`, which is not a supported configuration.
The `devicemapper` driver uses block devices dedicated to Docker and operates at
the block level, rather than the file level. These devices can be extended by
@@ -33,7 +33,7 @@ a filesystem at the level of the operating system.
- `devicemapper` is also supported on Docker CE running on CentOS, Fedora,
Ubuntu, or Debian.
-- Changing the storage driver will make any containers you have already
+- Changing the storage driver makes any containers you have already
created inaccessible on the local system. Use `docker save` to save containers,
and push existing images to Docker Hub or a private repository, so that you
do not need to re-create them later.
@@ -73,7 +73,7 @@ For production systems, see
- [Stable](/engine/reference/commandline/dockerd.md#storage-driver-options)
- [Edge](/edge/engine/reference/commandline/dockerd.md#storage-driver-options)
- Docker will not start if the `daemon.json` file contains badly-formed JSON.
+ Docker does not start if the `daemon.json` file contains badly-formed JSON.
3. Start Docker.
@@ -137,7 +137,7 @@ After you have satisfied the [prerequisites](#prerequisites), follow the steps
below to configure Docker to use the `devicemapper` storage driver in
`direct-lvm` mode.
-> **Warning**: Changing the storage driver will make any containers you have already
+> **Warning**: Changing the storage driver makes any containers you have already
created inaccessible on the local system. Use `docker save` to save containers,
and push existing images to Docker Hub or a private repository, so that you
don't need to recreate them later.
@@ -187,23 +187,23 @@ Restart Docker for the changes to take effect. Docker invokes the commands to
configure the block device for you.
> **Warning**: Changing these values after Docker has prepared the block device
-> for you is not supported and will cause an error.
+> for you is not supported and causes an error.
You still need to [perform periodic maintenance tasks](#manage-devicemapper).
#### Configure direct-lvm mode manually
-The procedure below will create a logical volume configured as a thin pool to
+The procedure below creates a logical volume configured as a thin pool to
use as backing for the storage pool. It assumes that you have a spare block
device at `/dev/xvdf` with enough free space to complete the task. The device
identifier and volume sizes may be different in your environment and you
should substitute your own values throughout the procedure. The procedure also
assumes that the Docker daemon is in the `stopped` state.
-1. Identify the block device you want to use. The device will be located under
+1. Identify the block device you want to use. The device is located under
`/dev/` (such as `/dev/xvdf`) and needs enough free space to store the
- images and container layers for the workloads that host will be running.
- Ideally, this will be a solid state drive.
+ images and container layers for the workloads that the host runs.
+ A solid state drive is ideal.
2. Stop Docker.
@@ -286,7 +286,7 @@ assumes that the Docker daemon is in the `stopped` state.
`thin_pool_autoextend_percent` is the amount of space to add to the device
when automatically extending (0 = disabled).
- The example below will add 20% more capacity when the disk usage reaches
+ The example below adds 20% more capacity when the disk usage reaches
80%.
```none
@@ -307,7 +307,7 @@ assumes that the Docker daemon is in the `stopped` state.
```
11. Enable monitoring for logical volumes on your host. Without this step,
- automatic extension will not occur even in the presence of the LVM profile.
+ automatic extension does not occur even in the presence of the LVM profile.
```bash
$ sudo lvs -o+seg_monitor
@@ -390,8 +390,8 @@ assumes that the Docker daemon is in the `stopped` state.