Merge pull request #18998 from dvdksn/network-overlay-swarmdedup

network: deduplicate swarm info for overlay driver
This commit is contained in:
David Karlsson
2024-02-21 17:32:08 +01:00
committed by GitHub
4 changed files with 130 additions and 451 deletions


@@ -165,6 +165,12 @@ $ docker service create --name dns-cache \
## Bypass the routing mesh
By default, swarm services which publish ports do so using the routing mesh.
When you connect to a published port on any swarm node (whether it is running a
given service or not), you are redirected to a worker which is running that
service, transparently. Effectively, Docker acts as a load balancer for your
swarm services.
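For instance, a service published through the routing mesh can be reached via any node's address, whether or not that node runs a task for the service. A sketch (the `nginx` image and the node address are placeholders):

```console
$ docker service create --name web --publish published=8080,target=80 nginx

$ # From any machine that can reach any swarm node:
$ curl http://<any-node-ip>:8080
```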
You can bypass the routing mesh, so that when you access the bound port on a
given node, you are always accessing the instance of the service running on
that node. This is referred to as `host` mode. There are a few things to keep
@@ -248,10 +254,15 @@ To use an external load balancer without the routing mesh, set `--endpoint-mode`
to `dnsrr` instead of the default value of `vip`. In this case, there is not a
single virtual IP. Instead, Docker sets up DNS entries for the service such that
a DNS query for the service name returns a list of IP addresses, and the client
connects directly to one of these. You are responsible for providing the list of
IP addresses and ports to your load balancer. See
[Configure service discovery](networking.md#configure-service-discovery).
connects directly to one of these.
You can't use `--endpoint-mode dnsrr` together with `--publish mode=ingress`.
You must run your own load balancer in front of the service. A DNS query for
the service name on the Docker host returns a list of IP addresses for the
nodes running the service. Configure your load balancer to consume this list
and balance the traffic across the nodes.
See [Configure service discovery](networking.md#configure-service-discovery).
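As a sketch, assuming a user-defined overlay network named `my-overlay` and an `nginx` image (both names are illustrative):

```console
$ docker service create \
  --name my-dnsrr-service \
  --network my-overlay \
  --endpoint-mode dnsrr \
  nginx
```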
## Learn more
* [Deploy services to a swarm](services.md)


@@ -5,10 +5,10 @@ title: Manage swarm service networks
toc_max: 3
---
This topic discusses how to manage the application data for your swarm services.
This page describes networking for swarm services.
## Swarm and types of traffic
A Docker swarm generates two different kinds of traffic:
- Control and management plane traffic: This includes swarm management
@@ -66,7 +66,19 @@ When setting up networking in a Swarm, special care should be taken. Consult
the [tutorial](swarm-tutorial/index.md#open-protocols-and-ports-between-the-hosts)
for an overview.
## Create an overlay network
## Overlay networking
When you initialize a swarm or join a Docker host to an existing swarm, two
new networks are created on that Docker host:
- An overlay network called `ingress`, which handles the control and data traffic
related to swarm services. When you create a swarm service and do not
connect it to a user-defined overlay network, it connects to the `ingress`
network by default.
- A bridge network called `docker_gwbridge`, which connects the individual
Docker daemon to the other daemons participating in the swarm.
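You can verify that both networks exist after initializing or joining a swarm; `ingress` and `docker_gwbridge` appear in the output of:

```console
$ docker network ls
```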
### Create an overlay network
To create an overlay network, specify the `overlay` driver when using the
`docker network create` command:
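For example, to create an overlay network named `my-overlay` (the name is arbitrary):

```console
$ docker network create -d overlay my-overlay
```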
@@ -432,6 +444,36 @@ $ docker swarm join \
192.168.99.100:2377
```
## Publish ports on an overlay network
Swarm services connected to the same overlay network effectively expose all
ports to each other. For a port to be accessible outside of the service, that
port must be _published_ using the `-p` or `--publish` flag on `docker service
create` or `docker service update`. Both the legacy colon-separated syntax and
the newer comma-separated value syntax are supported. The longer syntax is
preferred because it is somewhat self-documenting.
<table>
<thead>
<tr>
<th>Flag value</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><tt>-p 8080:80</tt> or<br /><tt>-p published=8080,target=80</tt></td>
<td>Map TCP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/udp</tt> or<br /><tt>-p published=8080,target=80,protocol=udp</tt></td>
<td>Map UDP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/tcp -p 8080:80/udp</tt> or <br /><tt>-p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp</tt></td>
<td>Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh.</td>
</tr>
</table>
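For example, to publish UDP port 53 using the long syntax (the `dns-cache` image name is illustrative):

```console
$ docker service create --name dns-cache \
  --publish published=53,target=53,protocol=udp \
  dns-cache
```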
## Learn more
* [Deploy services to a swarm](services.md)


@@ -10,279 +10,109 @@ aliases:
The `overlay` network driver creates a distributed network among multiple
Docker daemon hosts. This network sits on top of (overlays) the host-specific
networks, allowing containers connected to it (including swarm service
containers) to communicate securely when encryption is enabled. Docker
transparently handles routing of each packet to and from the correct Docker
daemon host and the correct destination container.
When you initialize a swarm or join a Docker host to an existing swarm, two
new networks are created on that Docker host:
- an overlay network called `ingress`, which handles the control and data traffic
related to swarm services. When you create a swarm service and do not
connect it to a user-defined overlay network, it connects to the `ingress`
network by default.
- a bridge network called `docker_gwbridge`, which connects the individual
Docker daemon to the other daemons participating in the swarm.
networks, allowing containers connected to it to communicate securely when
encryption is enabled. Docker transparently handles routing of each packet to
and from the correct Docker daemon host and the correct destination container.
You can create user-defined `overlay` networks using `docker network create`,
in the same way that you can create user-defined `bridge` networks. Services
or containers can be connected to more than one network at a time. Services or
containers can only communicate across networks they are each connected to.
containers can only communicate across networks they're each connected to.
Although you can connect both swarm services and standalone containers to an
overlay network, the default behaviors and configuration concerns are different.
For that reason, the rest of this topic is divided into operations that apply to
all overlay networks, those that apply to swarm service networks, and those that
apply to overlay networks used by standalone containers.
Overlay networks are often used to create a connection between Swarm services,
but you can also use them to connect standalone containers running on different
hosts. When using standalone containers, you still need to use Swarm mode to
establish a connection between the hosts.
## Operations for all overlay networks
This page describes overlay networks in general, and when used with standalone
containers. For information about overlay for Swarm services, see
[Manage Swarm service networks](../../engine/swarm/networking.md).
### Create an overlay network
## Create an overlay network
Take note of the following prerequisites:
Before you start, you must ensure that participating nodes can communicate over the network.
The following table lists ports that need to be open on each host participating in an overlay network:
- Firewall rules for Docker daemons using overlay networks.
| Ports | Description |
| :--------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `2377/tcp`             | The default Swarm control plane port, configurable with [`docker swarm join --listen-addr`](../../reference/cli/docker/swarm/join.md#--listen-addr-value) |
| `4789/udp`             | The default overlay traffic port, configurable with [`docker swarm init --data-path-port`](../../reference/cli/docker/swarm/init.md#data-path-port) |
| `7946/tcp`, `7946/udp` | Used for communication among nodes, not configurable |
You need the following ports open to traffic to and from each Docker host participating on an overlay network:
- TCP port 2377 for cluster management communications
- TCP and UDP port 7946 for communication among nodes
- UDP port 4789 for overlay network traffic
- Before you can create an overlay network, you need to either initialize your
Docker daemon as a swarm manager using `docker swarm init` or join it to an
existing swarm using `docker swarm join`. Either of these creates the default
`ingress` overlay network which swarm services use by default. You need
to do this even if you never plan to use swarm services. Afterward, you can
create additional user-defined overlay networks.
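If a host firewall is active, the ports listed above must be allowed. A sketch assuming the `ufw` firewall (adapt for `firewalld` or raw `iptables`):

```console
$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946
$ sudo ufw allow 4789/udp
```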
To create an overlay network for use with swarm services, use a command like
the following:
```console
$ docker network create -d overlay my-overlay
```
To create an overlay network which can be used by swarm services or
standalone containers to communicate with other standalone containers running on
other Docker daemons, add the `--attachable` flag:
To create an overlay network that containers on other Docker hosts can connect to,
run the following command:
```console
$ docker network create -d overlay --attachable my-attachable-overlay
```
The `--attachable` option enables both standalone containers
and Swarm services to connect to the overlay network.
Without `--attachable`, only Swarm services can connect to the network.
You can specify the IP address range, subnet, gateway, and other options. See
`docker network create --help` for details.
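For example, a sketch with a custom subnet and gateway (the address range and network name are illustrative):

```console
$ docker network create -d overlay \
  --subnet 10.0.9.0/24 \
  --gateway 10.0.9.99 \
  my-custom-overlay
```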
### Encrypt traffic on an overlay network
## Encrypt traffic on an overlay network
All swarm service management traffic is encrypted by default, using the
[AES algorithm](https://en.wikipedia.org/wiki/Galois/Counter_Mode) in
GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data
every 12 hours.
Use the `--opt encrypted` flag to encrypt the application data
transmitted over the overlay network:
To encrypt application data as well, add `--opt encrypted` when creating the
overlay network. This enables IPSEC encryption at the level of the vxlan. This
encryption imposes a non-negligible performance penalty, so you should test this
option before using it in production.
```console
$ docker network create \
--opt encrypted \
--driver overlay \
--attachable \
my-attachable-multi-host-network
```
When you enable overlay encryption, Docker creates IPSEC tunnels between all the
nodes where tasks are scheduled for services attached to the overlay network.
These tunnels also use the AES algorithm in GCM mode and manager nodes
automatically rotate the keys every 12 hours.
This enables IPsec encryption at the level of the Virtual Extensible LAN (VXLAN).
This encryption imposes a non-negligible performance penalty,
so you should test this option before using it in production.
> **Warning**
>
> Do not attach Windows nodes to encrypted overlay networks. Overlay network encryption is not supported on Windows. If a Windows node
> attempts to connect to an encrypted overlay network, no error is detected but
> the node cannot communicate.
> Don't attach Windows containers to encrypted overlay networks.
>
> Overlay network encryption isn't supported on Windows.
> Swarm doesn't report an error when a Windows host
> attempts to connect to an encrypted overlay network,
> but networking for the Windows containers is affected as follows:
>
> - Windows containers can't communicate with Linux containers on the network
> - Data traffic between Windows containers on the network isn't encrypted
{ .warning }
#### Swarm mode overlay networks and standalone containers
## Attach a container to an overlay network
You can use the overlay network feature with both `--opt encrypted --attachable`
and attach unmanaged containers to that network:
Adding containers to an overlay network gives them the ability to communicate
with other containers without having to set up routing on the individual Docker
daemon hosts. A prerequisite for doing this is that the hosts have joined the same Swarm.
To join an overlay network named `multi-host-network` with a `busybox` container:
```console
$ docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network
$ docker run --network multi-host-network busybox sh
```
### Customize the default ingress network
> **Note**
>
> This only works if the overlay network is attachable
> (created with the `--attachable` flag).
Most users never need to configure the `ingress` network, but Docker allows you
to do so. This can be useful if the automatically-chosen subnet conflicts with
one that already exists on your network, or you need to customize other low-level
network settings such as the MTU.
## Container discovery
Customizing the `ingress` network involves removing and recreating it. This is
usually done before you create any services in the swarm. If you have existing
services which publish ports, those services need to be removed before you can
remove the `ingress` network.
Publishing ports of a container on an overlay network opens the ports to other
containers on the same network. Containers are discoverable by doing a DNS lookup
using the container name.
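For example, assuming two containers attached to the same attachable overlay network (container names are illustrative), the second container is reachable from the first by name:

```console
$ docker run -dit --name c1 --network my-attachable-overlay alpine ash
$ docker run -dit --name c2 --network my-attachable-overlay alpine ash
$ docker exec c1 ping -c 2 c2
```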
During the time that no `ingress` network exists, existing services which do not
publish ports continue to function but are not load-balanced. Services which
publish ports are affected.
1. Inspect the `ingress` network using `docker network inspect ingress`, and
remove any services whose containers are connected to it. These are services
that publish ports, such as a WordPress service which publishes port 80.
Unless all such services are stopped, the next step fails.
2. Remove the existing `ingress` network:
```console
$ docker network rm ingress
WARNING! Before removing the routing-mesh network, make sure all the nodes
in your swarm run the same docker engine version. Otherwise, removal may not
be effective and functionality of newly created ingress networks will be
impaired.
Are you sure you want to continue? [y/N]
```
3. Create a new overlay network using the `--ingress` flag, along with the
custom options you want to set. This example sets the MTU to 1200, sets
the subnet to `10.11.0.0/16`, and sets the gateway to `10.11.0.2`.
```console
$ docker network create \
--driver overlay \
--ingress \
--subnet=10.11.0.0/16 \
--gateway=10.11.0.2 \
--opt com.docker.network.driver.mtu=1200 \
my-ingress
```
> **Note**
>
> You can name your `ingress` network something other than
> `ingress`, but you can only have one. An attempt to create a second one
> fails.
4. Restart the services that you stopped in the first step.
### Customize the docker_gwbridge interface
The `docker_gwbridge` is a virtual bridge that connects the overlay networks
(including the `ingress` network) to an individual Docker daemon's physical
network. Docker creates it automatically when you initialize a swarm or join a
Docker host to a swarm, but it is not a Docker device. It exists in the kernel
of the Docker host. If you need to customize its settings, you must do so before
joining the Docker host to the swarm, or after temporarily removing the host
from the swarm.
1. Stop Docker.
2. Delete the existing `docker_gwbridge` interface.
```console
$ sudo ip link set docker_gwbridge down
$ sudo ip link del dev docker_gwbridge
```
3. Start Docker. Do not join or initialize the swarm.
4. Create or re-create the `docker_gwbridge` bridge manually with your custom
settings, using the `docker network create` command.
This example uses the subnet `10.11.0.0/16`. For a full list of customizable
options, see [Bridge driver options](../../reference/cli/docker/network/create.md#bridge-driver-options).
```console
$ docker network create \
--subnet 10.11.0.0/16 \
--opt com.docker.network.bridge.name=docker_gwbridge \
--opt com.docker.network.bridge.enable_icc=false \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
docker_gwbridge
```
5. Initialize or join the swarm. Since the bridge already exists, Docker doesn't create it with automatic settings.
## Operations for swarm services
### Publish ports on an overlay network
Swarm services connected to the same overlay network effectively expose all
ports to each other. For a port to be accessible outside of the service, that
port must be published using the `-p` or `--publish` flag on `docker service
create` or `docker service update`. Both the legacy colon-separated syntax and
the newer comma-separated value syntax are supported. The longer syntax is
preferred because it is somewhat self-documenting.
<table>
<thead>
<tr>
<th>Flag value</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><tt>-p 8080:80</tt> or<br /><tt>-p published=8080,target=80</tt></td>
<td>Map TCP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/udp</tt> or<br /><tt>-p published=8080,target=80,protocol=udp</tt></td>
<td>Map UDP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/tcp -p 8080:80/udp</tt> or <br /><tt>-p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp</tt></td>
<td>Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh.</td>
</tr>
</table>
### Bypass the routing mesh for a swarm service
By default, swarm services which publish ports do so using the routing mesh.
When you connect to a published port on any swarm node (whether it is running a
given service or not), you are redirected to a worker which is running that
service, transparently. Effectively, Docker acts as a load balancer for your
swarm services. Services using the routing mesh are running in _virtual IP (VIP)
mode_. Even a service running on each node (by means of the `--mode global`
flag) uses the routing mesh. When using the routing mesh, there is no guarantee
as to which node services a given client request.
To bypass the routing mesh, you can start a service using _DNS Round Robin
(DNSRR) mode_, by setting the `--endpoint-mode` flag to `dnsrr`. You must run
your own load balancer in front of the service. A DNS query for the service name
on the Docker host returns a list of IP addresses for the nodes running the
service. Configure your load balancer to consume this list and balance the
traffic across the nodes.
### Separate control and data traffic
By default, control traffic relating to swarm management and traffic to and from
your applications runs over the same network, though the swarm control traffic
is encrypted. You can configure Docker to use separate network interfaces for
handling the two different types of traffic. When you initialize or join the
swarm, specify `--advertise-addr` and `--data-path-addr` separately. You must do
this for each node joining the swarm.
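For example, assuming the host has a management interface at 10.0.0.1 and a separate data interface at 192.168.1.1 (both addresses are illustrative):

```console
$ docker swarm init \
  --advertise-addr 10.0.0.1 \
  --data-path-addr 192.168.1.1
```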
## Operations for standalone containers on overlay networks
### Attach a standalone container to an overlay network
The `ingress` network is created without the `--attachable` flag, which means
that only swarm services can use it, and not standalone containers. You can
connect standalone containers to user-defined overlay networks which are created
with the `--attachable` flag. This gives standalone containers running on
different Docker daemons the ability to communicate without the need to set up
routing on the individual Docker daemon hosts.
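A minimal sketch, assuming two hosts already joined to the same swarm (network and container names are illustrative):

```console
$ # On a manager node:
$ docker network create -d overlay --attachable my-attachable-overlay

$ # On each participating host:
$ docker run -dit --name box1 --network my-attachable-overlay alpine ash
```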
### Publish ports
| Flag value | Description |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| `-p 8080:80` | Map TCP port 80 in the container to port `8080` on the overlay network. |
| `-p 8080:80/udp` | Map UDP port 80 in the container to port `8080` on the overlay network. |
| `-p 8080:80/sctp` | Map SCTP port 80 in the container to port `8080` on the overlay network. |
| Flag value | Description |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-p 8080:80` | Map TCP port 80 in the container to port `8080` on the overlay network. |
| `-p 8080:80/udp` | Map UDP port 80 in the container to port `8080` on the overlay network. |
| `-p 8080:80/sctp` | Map SCTP port 80 in the container to port `8080` on the overlay network. |
| `-p 8080:80/tcp -p 8080:80/udp` | Map TCP port 80 in the container to TCP port `8080` on the overlay network, and map UDP port 80 in the container to UDP port `8080` on the overlay network. |
### Container discovery
For most situations, you should connect to the service name, which is load-balanced and handled by all containers ("tasks") backing the service. To get a list of all tasks backing the service, do a DNS lookup for `tasks.<service-name>.`
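For example, from inside a container attached to the same network as a service named `my-service` (the service name is illustrative):

```console
$ docker exec <container> nslookup tasks.my-service
```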
## Connection limit for overlay networks
Due to limitations set by the Linux kernel, overlay networks become unstable and
@@ -297,4 +127,4 @@ For more information about this limitation, see
- Go through the [overlay networking tutorial](../network-tutorial-overlay.md)
- Learn about [networking from the container's point of view](../index.md)
- Learn about [standalone bridge networks](bridge.md)
- Learn about [Macvlan networks](macvlan.md)


@@ -12,8 +12,8 @@ For networking with standalone containers, see
[Networking with standalone containers](network-tutorial-standalone.md). If you need to
learn more about Docker networking in general, see the [overview](index.md).
This topic includes four different tutorials. You can run each of them on
Linux, Windows, or a Mac, but for the last two, you need a second Docker
This page includes the following tutorials. You can run each of them on
Linux, Windows, or a Mac, but for the last one, you need a second Docker
host running elsewhere.
- [Use the default overlay network](#use-the-default-overlay-network) demonstrates
@@ -29,10 +29,6 @@ host running elsewhere.
shows how to communicate between standalone containers on different Docker
daemons using an overlay network.
- [Communicate between a container and a swarm service](#communicate-between-a-container-and-a-swarm-service)
sets up communication between a standalone container and a swarm service,
using an attachable overlay network.
## Prerequisites
These require you to have at least a single-node swarm, which means that
@@ -438,206 +434,6 @@ example also uses Linux hosts, but the same commands work on Windows.
$ docker network rm test-net
```
## Communicate between a container and a swarm service
In this example, you start two different `alpine` containers on the same Docker
host and do some tests to understand how they communicate with each other. You
need to have Docker installed and running.
1. Open a terminal window. List current networks before you do anything else.
Here's what you should see if you've never added a network or initialized a
swarm on this Docker daemon. You may see different networks, but you should
at least see these (the network IDs will be different):
```console
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
17e324f45964 bridge bridge local
6ed54d316334 host host local
7092879f2cc8 none null local
```
The default `bridge` network is listed, along with `host` and `none`. The
latter two are not fully-fledged networks, but are used to start a container
connected directly to the Docker daemon host's networking stack, or to start
a container with no network devices. This tutorial will connect two
containers to the `bridge` network.
2. Start two `alpine` containers running `ash`, which is Alpine's default shell
rather than `bash`. The `-dit` flags mean to start the container detached
(in the background), interactive (with the ability to type into it), and
with a TTY (so you can see the input and output). Since you are starting it
detached, you won't be connected to the container right away. Instead, the
container's ID will be printed. Because you have not specified any
`--network` flags, the containers connect to the default `bridge` network.
```console
$ docker run -dit --name alpine1 alpine ash
$ docker run -dit --name alpine2 alpine ash
```
Check that both containers are actually started:
```console
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
602dbf1edc81 alpine "ash" 4 seconds ago Up 3 seconds alpine2
da33b7aa74b0 alpine "ash" 17 seconds ago Up 16 seconds alpine1
```
3. Inspect the `bridge` network to see what containers are connected to it.
```console
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10",
"Created": "2017-06-22T20:27:43.826654485Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"602dbf1edc81813304b6cf0a647e65333dc6fe6ee6ed572dc0f686a3307c6a2c": {
"Name": "alpine2",
"EndpointID": "03b6aafb7ca4d7e531e292901b43719c0e34cc7eef565b38a6bf84acf50f38cd",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"da33b7aa74b0bf3bda3ebd502d404320ca112a268aafe05b4851d1e3312ed168": {
"Name": "alpine1",
"EndpointID": "46c044a645d6afc42ddd7857d19e9dcfb89ad790afb5c239a35ac0af5e8a5bc5",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
```
Near the top, information about the `bridge` network is listed, including
the IP address of the gateway between the Docker host and the `bridge`
network (`172.17.0.1`). Under the `Containers` key, each connected container
is listed, along with information about its IP address (`172.17.0.2` for
`alpine1` and `172.17.0.3` for `alpine2`).
4. The containers are running in the background. Use the `docker attach`
command to connect to `alpine1`.
```console
$ docker attach alpine1
/ #
```
The prompt changes to `#` to indicate that you are the `root` user within
the container. Use the `ip addr show` command to show the network interfaces
for `alpine1` as they look from within the container:
```console
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
```
The first interface is the loopback device. Ignore it for now. Notice that
the second interface has the IP address `172.17.0.2`, which is the same
address shown for `alpine1` in the previous step.
5. From within `alpine1`, make sure you can connect to the internet by
pinging `google.com`. The `-c 2` flag limits the command to two `ping`
attempts.
```console
# ping -c 2 google.com
PING google.com (172.217.3.174): 56 data bytes
64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.841 ms
64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.897 ms
--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.841/9.869/9.897 ms
```
6. Now try to ping the second container. First, ping it by its IP address,
`172.17.0.3`:
```console
# ping -c 2 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.094 ms
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.090/0.094 ms
```
This succeeds. Next, try pinging the `alpine2` container by container
name. This will fail.
```console
# ping -c 2 alpine2
ping: bad address 'alpine2'
```
7. Detach from `alpine1` without stopping it by using the detach sequence,
`CTRL` + `p` `CTRL` + `q` (hold down `CTRL` and type `p` followed by `q`).
If you wish, attach to `alpine2` and repeat steps 4, 5, and 6 there,
substituting `alpine1` for `alpine2`.
8. Stop and remove both containers.
```console
$ docker container stop alpine1 alpine2
$ docker container rm alpine1 alpine2
```
Remember, the default `bridge` network is not recommended for production. To
learn about user-defined bridge networks, continue to the
[next tutorial](network-tutorial-standalone.md#use-user-defined-bridge-networks).
## Other networking tutorials
- [Host networking tutorial](network-tutorial-host.md)